
OPERATING SYSTEM

Centre for Distance and Online Education


Online MCA Program
Operating System

Semester: 1

Author

Ms. Nidhi Patel, Assistant Professor, Online Degree-CDOE,


Parul University

Credits
Centre for Distance and Online Education,
Parul University,

Post Limda, Waghodia,

Vadodara, Gujarat, India

391760.

Website: https://paruluniversity.ac.in/

Disclaimer

This content is protected by CDOE, Parul University. It is sold under the stipulation that it cannot be
lent, resold, hired out, or otherwise circulated without prior written consent from the publisher. The
content should remain in the same binding or cover as initially published, and this requirement also
extends to any subsequent purchaser. Furthermore, in compliance with the copyright protections
outlined above, no part of this publication may be reproduced, stored in a retrieval system, or
transmitted through any means (electronic, mechanical, photocopying, recording, or otherwise)
without the prior written permission of both the copyright owner and the publisher of this content.

Note to Students
These course notes are intended for the exclusive use of students enrolled in
Online MCA. They are not to be shared or distributed without explicit permission
from the University. Any unauthorized sharing or distribution of these materials
may result in academic and legal consequences.



Table of Contents

SUB LESSON 1.1 INTRODUCTION OF OPERATING SYSTEM
SUB LESSON 1.2 BASICS OF OPERATING SYSTEM
SUB LESSON 1.3 GENERATIONS OF OPERATING SYSTEMS
SUB LESSON 1.4 TYPES OF OPERATING SYSTEMS-1
SUB LESSON 1.5 TYPES OF OPERATING SYSTEMS-2
SUB LESSON 1.6 SERVICES OF OPERATING SYSTEM
    USER INTERFACE
    PROGRAM EXECUTION
    I/O OPERATIONS
    FILE SYSTEM MANIPULATION
    COMMUNICATIONS BETWEEN PROCESSES
    ERROR DETECTION
    RESOURCE ALLOCATION
    ACCOUNTING
    PROTECTION AND SECURITY
SUB LESSON 1.7 SYSTEM CALLS
    KERNEL MODE
    USER MODE
    SYSTEM CALLS
SUB LESSON 1.8 STRUCTURE OF AN OS-LAYERED
SUB LESSON 2.1 PROCESS AND PROCESS STATES
SUB LESSON 2.2 PROCESS CONTROL BLOCK
SUB LESSON 2.3 CONTEXT SWITCHING
SUB LESSON 3.1 CONCEPT OF THREADS AND MULTITHREADS
SUB LESSON 4.1 TYPES OF SCHEDULER
SUB LESSON 4.2 PRE-EMPTIVE AND NON-PRE-EMPTIVE SCHEDULING ALGORITHMS
SUB LESSON 4.3 FIRST COME FIRST SERVE SCHEDULING ALGORITHM
SUB LESSON 4.4 SHORTEST JOB FIRST AND SHORTEST REMAINING TIME FIRST SCHEDULING ALGORITHM
SUB LESSON 4.5 PRIORITY SCHEDULING ALGORITHM
    NON-PREEMPTIVE PRIORITY SCHEDULING
    PREEMPTIVE PRIORITY SCHEDULING
    EXAMPLE OF PRIORITY SCHEDULING
SUB LESSON 4.6 ROUND ROBIN SCHEDULING ALGORITHM
    EXAMPLE OF ROUND ROBIN SCHEDULING ALGORITHM
SUB LESSON 4.7 MULTILEVEL QUEUE AND MULTILEVEL FEEDBACK QUEUE SCHEDULING
SUB LESSON 5.1 CRITICAL SECTION AND RACE CONDITION
SUB LESSON 5.2 DISABLING INTERRUPTS AND SHARED LOCK VARIABLE
SUB LESSON 5.3 TSL INSTRUCTION AND STRICT ALTERNATION
SUB LESSON 5.4 PETERSON’S SOLUTION
SUB LESSON 5.5 SEMAPHORES AND MONITORS
SUB LESSON 5.6 THE PRODUCER/CONSUMER PROBLEM
SUB LESSON 5.7 CLASSICAL IPC PROBLEMS (READERS & WRITERS PROBLEM)
SUB LESSON 5.8 CLASSICAL IPC PROBLEMS (DINING PHILOSOPHERS PROBLEM)
SUB LESSON 6.1 DEADLOCKS
    RESOURCE ALLOCATION GRAPH
SUB LESSON 6.2 DEADLOCK PREVENTION
SUB LESSON 6.3 DEADLOCK AVOIDANCE: BANKER’S ALGORITHM
SUB LESSON 6.4 DEADLOCK DETECTION AND RECOVERY
SUB LESSON 7.1 INTRODUCTION OF MEMORY MANAGEMENT
SUB LESSON 7.2 CONTIGUOUS MEMORY MANAGEMENT SCHEMES
SUB LESSON 7.3 PARTITION ALLOCATION
    HOW DOES FIRST FIT WORK?
    HOW DOES NEXT FIT WORK?
    HOW DOES BEST FIT WORK?
    HOW DOES WORST FIT WORK?
SUB LESSON 7.4 NON-CONTIGUOUS MEMORY MANAGEMENT SCHEMES
SUB LESSON 7.5 VIRTUAL MEMORY
    WHAT IS VIRTUAL MEMORY?
SUB LESSON 7.6 PAGE REPLACEMENT ALGORITHMS
SUB LESSON 8.1 I/O DEVICES AND ORGANIZATION OF THE I/O FUNCTION
SUB LESSON 8.2 OS DESIGN ISSUES AND I/O BUFFERING
SUB LESSON 8.3 DISK ORGANISATION
SUB LESSON 8.4 DISK SCHEDULING
SUB LESSON 8.5 RAID AND DISK CACHE
SUB LESSON 9.1 INTRODUCTION AND FILE ORGANIZATION
SUB LESSON 9.2 FILE DIRECTORIES AND FILE SHARING
SUB LESSON 9.3 RECORD BLOCKING AND DISK SPACE MANAGEMENT
SUB LESSON 10.1 INTRODUCTION OF LINUX AND SHELL
SUB LESSON 10.2 LINUX OPERATING SYSTEM AND ARCHITECTURE
SUB LESSON 10.3 COMPONENTS OF LINUX SYSTEM, BASIC FUNCTIONS OF KERNEL
SUB LESSON 10.4 BASIC COMMANDS
SUB LESSON 10.5 CONTROL STRUCTURE


SUB LESSON 1.1
INTRODUCTION OF OPERATING SYSTEM

An Operating System (OS) is a program that manages computer hardware. It also provides a
basis for application programs and acts as an intermediary between computer users and
computer hardware. Here are some examples of operating systems that are widely used these
days. First is Android, the well-known operating system used on mobile devices. Then we have
macOS, the operating system from Apple: laptops and MacBooks run macOS, while iPhones run
the iOS operating system. We also have Linux and Ubuntu, two open-source operating systems
that are widely used on laptops, desktops, and other devices. Finally, we have Windows.



Here is a diagram that represents the basic structure of a computer system. At the lowest
level we have the computer hardware: memory (primary memory and secondary memory),
I/O devices (input-output devices: a mouse, keyboard, or microphone is an input device, while
a monitor, printer, or speaker is an output device), the CPU (central processing unit), and so on.
On top of the operating system we have the system and application programs. There are two
kinds of software: system software and application software. System software gives commands
directly to the computer hardware, and an operating system is itself a kind of system software.
Application programs are software used to perform a specific task and can be used directly by
the user.

We have some examples here. First is the word processor: word processors such as Microsoft
Word are software used to create document files. Then we have spreadsheets, which are used
to lay out tabular data and perform calculations on it. Next are compilers, the software we use
to translate computer code written in languages like C, C++, and Java. Then come text editors,
which are used to write and modify text, such as Notepad and WordPad. Last, we have web
browsers, the software that enables users to browse the web, such as Google Chrome, Mozilla
Firefox, and Internet Explorer.



Let's understand this with an example. Suppose User 1 wants to use Microsoft Word, so he needs to
tell the computer to load Microsoft Word into main memory from secondary memory. Memory is a
kind of hardware, and a user cannot interact with hardware without an operating system. To write
something in Microsoft Word, the user needs a keyboard, and to see the result he needs a monitor;
both are I/O devices, again hardware resources. To access any hardware, users need to take the help
of the operating system. Without one, for each small or new task you would have to tell the computer
hardware what to do in code, manually, and giving commands to computer hardware directly is very
difficult. All the communication between users and computer hardware is taken care of by the
operating system; the use of a computer becomes easy because of the presence of an operating
system. Basically, an operating system (OS) is a program that manages the computer hardware; it
also provides a basis for application programs and acts as an intermediary between the computer
user and the computer hardware.

TYPES OF OPERATING SYSTEM


● Batch OS
● Time-sharing OS
● Distributed OS
● Network OS
● Real-Time OS
● Multiprogramming / Multiprocessing / Multitasking OS

FUNCTIONS OF OPERATING SYSTEM


● It is the interface between the user and the hardware
● Allocation of Resources
● Management of Memory, Security, etc.

GOALS OF OPERATING SYSTEM


● Convenience
● Efficiency
● Both



KEY TAKEAWAYS

➢ Operating System (OS) is a crucial program managing computer hardware and


serving as a link between users and the hardware. Examples include Android for
mobiles, Mac OS for Apple devices, Linux, Ubuntu, and Windows.
➢ The basic computer structure involves hardware like memory and I/O devices,
with the OS facilitating user interactions.
➢ System and application programs fall into two software categories: system
(directing hardware) and application (task-specific).
➢ Users rely on the OS to interact with hardware, making tasks like loading
software and using input/output devices possible.
➢ Types of OS include Batch, Time-sharing, Distributed, Network, Real-Time, and
Multiprogramming.
➢ OS functions include user-hardware interface, resource allocation, and memory
management. The goals are convenience, efficiency, or a balance of both.
➢ Operating systems make computing accessible and efficient by managing
complex hardware interactions.





SUB LESSON 1.2
BASICS OF OPERATING SYSTEM

COMPUTER SYSTEM OPERATION

In Computer System Operation we will look at the structure of the computer system. To
understand how an operating system works, we first need to learn some basics of computer
system operation. A modern general-purpose computer system consists of one or more CPUs
and a number of device controllers connected through a common bus that provides access to
shared memory.

The CPU is the main part of the computer, the brain of the computer. It is the processing unit:
all computation, processing, and calculation is done by the CPU. The CPU is not the box that
you see beside the monitor; that box contains the CPU, motherboard, buses, memory, and so
on. The CPU itself is a small chip embedded in the motherboard, and modern computers may
have one or more CPUs. In the diagram you can see a number of devices connected to the
CPU, such as disks, a mouse, a keyboard, printers, and a monitor. All these devices are
connected through controllers, and the controllers are responsible for how the devices work.
For example, the disks are connected to the disk controller, while the mouse, keyboard, and
printer are connected to the USB controller. The CPU, along with all these controllers, is
connected to shared memory via a common bus.



SOME IMPORTANT TERMS

1. Bootstrap Program: The initial program that runs when a computer is powered up or
rebooted. It is stored in ROM (Read-Only Memory, a kind of main memory). It must know how
to load the OS and start executing it: the operating system is an interface between you and
your hardware, and it is one type of software; when you power up your computer, the
operating system is loaded first into RAM (Random Access Memory). The bootstrap program
must locate the OS kernel (the kernel is the main part of the operating system) and load it into
memory.
2. Interrupt: The occurrence of an event is usually signaled by an interrupt from hardware or
software. The CPU is always working; while it is doing that work, hardware or software may
interrupt it and tell it to pause and execute another task that is more important. This activity is
known as an interrupt. Hardware may trigger an interrupt at any time by sending a signal to
the CPU, usually by way of the system bus.
3. System call (monitor call): Software may trigger an interrupt by executing a special
operation called a system call. Now let's see how the CPU behaves when it receives an
interrupt: when the CPU is interrupted, it stops what it is doing and immediately transfers
execution to the new task; the new task executes, and on completion the CPU resumes the
previous execution it was doing before the interrupt.

STORAGE STRUCTURE



In Storage Structure we will look at the different storage devices in a computer system, their
functions, and their features. In the diagram, all the storage devices are arranged in a
hierarchy: first registers, then cache, main memory, electronic disk, magnetic disk, optical disk,
and magnetic tapes. Registers are the smallest storage devices; they store data in bits (0s and
1s) and can be accessed very quickly, so they are the fastest memory devices. Next is cache
memory: the cache is bigger than the registers, but its speed is a little slower compared to
register memory. Then we have main memory, which contains RAM, your Random Access
Memory, a very important type of memory. Then come the electronic disk, magnetic disk,
optical disk, and magnetic tapes, which are the secondary memory devices. Now about the
hierarchy: as you go up, the size of the memory device decreases but its speed increases and
it becomes more expensive; as you go down, the size increases but the speed and the price
decrease. Memory is mainly classified into two types: volatile memory, which loses its contents
when power is removed, and nonvolatile memory, which retains its contents even when
power is gone.





SUB LESSON 1.3
GENERATIONS OF OPERATING SYSTEMS

One of the most crucial components of a computer system is the operating system:
communication between the user and the system must be established by the operating
system. Operating systems were not always as effective, fast, or organized as they are today;
the operating system has advanced this far after many years of effort. The different operating
system generations can be used to highlight this evolution: operating systems have changed
over time, so their historical development can be traced through the different generations.
As is common knowledge, an operating system is a piece of software that controls how a
system's hardware communicates with its user and various external components. Its
fundamental goal is to give users an interface that makes it simple to complete tasks without
worrying about internal computer processes or layers. Operating systems are divided into
generations based on the significant changes that have occurred in each generation since
their beginning. Humans cannot readily learn or understand computer language, and
computers likewise cannot interpret human language; the operating system manages all of
these tasks and creates a setting in which a user may effortlessly complete their work on a
system. From the beginning to the present, the operating system has undergone four
significant generations of revisions.

THE FIRST GENERATION ( 1945 - 1955 ): VACUUM TUBES AND PLUGBOARDS:

The first generation of operating systems covers the period between 1945 and 1955, during
the Second World War. Digital computers had not yet been created. People used calculating
engines, devices built from mechanical relays, to perform calculations. Because these
mechanical relays operated so slowly, vacuum tubes eventually replaced them in electrical
systems, but these were still slow machines. One group of people handled all the duties
involved in designing, building, and maintaining these machines. There was no operating
system in use at the time, and the idea of programming languages was completely unknown,
so all programming and calculation was done in machine language. As a result, punch cards
were introduced in 1950: the programs were put on punch cards, which were fed into the
system so that it could read them. These punch cards improved the performance of the
computer system. The operating systems of this period made job switching possible and
served as the foundation for the batch processing system, in which jobs are collected and
carried out sequentially. When one job was active, that job alone had complete control over
the machine; other jobs were put on hold while the running job completed, and only then did
the next job begin to run.



THE SECOND GENERATION ( 1955 - 1965 ): TRANSISTORS AND BATCH SYSTEMS

The second generation of operating systems addressed the development of shared systems
with multiprogramming and the beginnings of multiprocessing. A multiprogramming system is
one in which the processor switches between tasks while a large number of user programs are
simultaneously stored in main storage. When a machine is multiprogrammed, its power can be
further improved by using multiple processors in a single system. Real-time systems, where
computers are used to control the operation of businesses such as coal plants and oil
refineries, also arose in this generation to provide quick, real-time responses. Transistors were
developed in this period, which led to the development of mainframe computers. These
mainframes were stored and operated in large air-conditioned facilities, with staff assigned to
do so. The batch system improved job execution time. The jobs were collected in a tray and
kept in an area known as the input room, and the input was read onto magnetic tape. A batch
operating system was then loaded, and these magnetic tapes were placed on a tape drive. The
batch operating system's task was to read the first job from the tape and run it, recording its
output on a second tape, and so on for each job. After the entire batch had been executed,
the input and output tapes were removed and the output tape was printed.

THE THIRD GENERATION ( 1965 - 1980 ): INTEGRATED CIRCUITS AND MULTIPROGRAMMING


Between 1965 and 1980, two different types of computer system, the scientific computer and
the business computer, were developed, and the company IBM combined both of these into
the System/360. In this generation's computer systems, integrated circuits were used.
Compared to second-generation systems, the performance of systems was multiplied by the
use of integrated circuits. Integrated circuits also significantly reduced system costs: producing
them requires an expensive setup, but once that setup exists they can be manufactured
cheaply in large quantities. The third generation of operating systems also brought in
multiprogramming, which means there is no need for other jobs to wait in line while one job
is already in progress; the processor schedules additional tasks as well, ensuring that no time
is wasted.

THE FOURTH GENERATION (1980 - PRESENT): PERSONAL COMPUTERS

The period from 1980 to the present is referred to as the fourth generation of operating
systems. As integrated circuits became widely available, personal computers became common:
large-scale integrated circuits, consisting of thousands of transistors packed onto a small
silicon plate only a few centimeters in size, were combined to create personal computers. The
use of silicon decreased the cost of microcomputers relative to minicomputers. Because of
their significantly lower price, microcomputers came to be used widely by a great many
people, which ultimately expanded computer networks around the world. The expansion of
networks produced two different types of operating systems: network operating systems and
distributed operating systems. In these systems a data communication interface that doubles
as a server is employed. The network operating system lets people access and log in to
machines that are located remotely, and users can also copy their files between several other
PCs.

QUESTION BANK



MULTIPLE CHOICE QUESTIONS

Quiz 1.3

1. The fourth generation was based on integrated circuits.

a) True

b) False

2. Which generation is based on the VLSI microprocessor?

a) 1st

b) 2nd

c) 3rd

d) 4th

3. Batch processing was mainly used in which generation?

a) 1st

b) 2nd

c) 3rd

d) 4th

4. In which generation were time-sharing, real-time, network, and distributed operating
systems used?

a) 1st

b) 2nd

c) 3rd

d) 4th

5. Which term in computer terminology refers to a change in the technology with which a
computer is/was being used?



a) development

b) generation

c) advancement

d) growth

FILL IN THE BLANKS

1. The ______ generation of computers started with vacuum tubes as the basic components.

2. The period of ________ generation was 1952-1964.

3. In the ______ generation of operating systems, designers developed the concept of
multiprogramming, in which several jobs are in main memory at once.

4. The ____ executes most frequently and makes the fine-grained decision of which process to
execute next.

5. In the late __________, Herman Hollerith invented data storage on punched cards that
could then be read by the machine.

SHORT ANSWER QUESTIONS


1. What are Batch Systems?

2. Define: First generation

3. Define: Second generation

4. Define: Third generation

5. Define Fourth generation

LONG ANSWER QUESTIONS


1. Explain the History/Generation of the Operating System.

CASE STUDY/PROGRAMS/ NUMERICALS





SUB LESSON 1.4

TYPES OF OPERATING SYSTEMS-1


An operating system is a cleverly designed collection of programs that controls the hardware of a
computer. It is a form of system software that ensures the computer system runs efficiently.

1. Simple Batch System

2. Multiprogramming System

3. Multitasking system

4. Multiprocessor System

5. Distributed Operating System

6. Real-time Operating System

SIMPLE BATCH SYSTEM



In a batch operating system, the user does not communicate with the computer directly.
Similar tasks are grouped and distributed into batches by a separate method for simple
processing and quick response. The batch operating system is suitable for long and demanding
work. Each user prepares their job offline and submits it to an operator, in order to avoid
slowing down the device.

Advantages

● The processors of a batch system know how long a job will take once it is in the queue.

● Batch systems can be shared by numerous users.

● A batch system has relatively little idle time.

● In batch systems, it is easy to manage large jobs repeatedly.

Disadvantages

● It is very difficult to estimate or determine how long any job will take to complete.

● Batch operating systems are difficult to debug, which is one of their main drawbacks.

● A backlog develops if the system malfunctions.

● Installing and maintaining an effective batch operating system may be expensive.

MULTIPROGRAMMING SYSTEM

A multiprogramming OS may run numerous programs on a machine with a single processor.
In a multiprogramming OS, if one program needs to wait for an input/output transfer, the
other programs are ready to use the CPU. As a result, many tasks may share CPU time, though
they are not expected to finish at the same time. When software is executed, it is referred to
as a "task", "process", or "job". Concurrent program execution improves throughput and uses
fewer system resources than serial or batch processing. The basic goal of multiprogramming is
to manage all system resources. The primary parts of a multiprogramming system are the file
system, the transient area, the command processor, and the I/O control system; the
multiprogramming OS sub-segments the transient area in order to store numerous
applications. The resource management procedures are connected to the fundamental
operations of the operating system. In a multiprogramming system, multiple users can carry
out their jobs simultaneously, and the jobs can be kept in main memory. While one program is
conducting I/O operations and the CPU is idle, the CPU can give time to other applications.
Multiple programs share CPU time: as one program waits for an I/O transfer, another is always
ready to use the processor. Multiple jobs can be in progress at once even though not all of
them finish together; part of one process runs, then a segment of another, and so on. The
general goal of a multiprogramming system, therefore, is to keep the CPU busy as long as
there are tasks in the job pool. In this way, several programs can run on a single-processor
computer and the CPU is never idle.

Advantages of Multiprogramming:

● The CPU never becomes idle

● Efficient resource utilization

● Response time is shorter
● Short jobs complete faster than long jobs
● Increased throughput

Disadvantages of Multiprogramming:

● Long jobs have to wait a long time

● Tracking all processes is sometimes difficult
● CPU scheduling is required
● Efficient memory management is required
● User interaction is not possible during program execution

MULTITASKING SYSTEM



"Multitasking" is a term used with modern computers. The ability to run multiple programs at
once is a logical extension of the multiprogramming system. Multitasking in an operating
system enables a user to handle multiple computer tasks at once. When several tasks share
common processing resources such as a CPU, they are also referred to as processes. Because
the operating system keeps track of your progress in each of these jobs, you can switch
between them without losing any information. Early operating systems could run multiple
programs at once but did not fully support multitasking, so a single piece of software could
consume the computer's entire CPU while carrying out a particular task. Basic operating
system features, such as file copying, would then block the user from other operations, like
opening and closing windows. Modern operating systems provide complete multitasking:
multiple programs run simultaneously without interfering with one another, and many
operating system activities can also run concurrently.

Advantages of Multitasking:



● Control multiple users

This operating system is better adapted to supporting several users concurrently, and numerous
applications can run without slowing down the system.

● Virtual Memory

Operating systems that support multiple tasks have the best virtual memory systems. Thanks to
virtual memory, a program does not need to wait a long time to finish its work; if such a delay
develops, the affected applications are moved to virtual memory.

● Good Reliability

Multitasking operating systems provide more flexibility, which makes users happy; each user can
run one or more programs simultaneously.

● Secured Memory

Memory management is clearly specified in operating systems that support many tasks, so the
operating system denies unwanted applications permission to consume memory.

● Time Shareable

Each process is given a set amount of time, so that no process has to wait for the CPU.

● Background Processing

An operating system that supports many tasks offers a better environment for background
operations. Most users are not aware of these background programs, yet they are essential to
the smooth operation of other programs such as firewalls and antivirus software.

● Optimize the computer resources

An operating system that supports many tasks can manage the I/O devices, RAM, hard drive,
CPU, and other hardware components.

● Use Several Programs

Users have the option of running multiple programs at once, including a web browser, games,
Microsoft Excel, PowerPoint, and other utilities.

Disadvantages of Multitasking:

● Processor Boundation

Due to a processor's modest speed, the system may run programs slowly, and processing
numerous programs at once can lengthen their response times. More computing power is
necessary to address this issue.

● Memory Boundation

Multiple programs running simultaneously can make the computer perform slowly, since main
memory becomes overburdened when many applications are loaded. Response time lengthens
because the CPU cannot provide separate time to each program. Low-capacity RAM is the main
reason for this problem, so increasing the RAM capacity can offer a remedy.

● CPU Heat Up

In a multitasking environment, several processors are kept busier at once to perform the tasks,
hence the CPU produces more heat.

QUESTION BANK

MULTIPLE CHOICE QUESTIONS

Quiz 1.4



1. Which of the following operating systems does not interact with the computer directly? In
this operating system, each user prepares his job on an offline device and submits it to the
computer.

a) Batch Operating system

b) Multitasking Operating System

c) Time-sharing Operating System

d) Distributed Operating System

2. Which of the following is not a feature of the batch Operating system?

a) Large turnaround time

b) It is very easy to debug the program in this operating system

c) Other pending jobs are affected due to a lack of protection schemes

3. Multiprogramming systems

a) Are easier to develop than single programming systems

b) Execute each job faster

c) Execute more jobs in the same time

d) Are used only on large mainframe computers

4. Programs are executed on the basis of a priority number in a

a) Batch processing system

b) Multiprogramming

c) Time sharing

d) None of these

5. An operating system that can do multitasking means that _____

a) The OS can divide up work between several CPUs.

b) Several programs can be operated concurrently

c) Multiple people can use the computer concurrently



d) All of the above

FILL IN THE BLANKS

1. CPU scheduling is the basis of ___________.

2. The desktop operating system is also called a ________.

3. Multiprogramming of computer system increases ____?

4. The environment in which programs of the system are executed is called _____?

5. OS stands for______________

SHORT ANSWER QUESTIONS


1. What are the different operating systems?

2. What is the advantage of a multiprogramming system?

3. What is the advantage of a multitasking system?

4. What is the disadvantage of a batch operating system?

5. What are Batch Systems?

LONG ANSWER QUESTIONS


1. Explain the Batch operating system with advantages and disadvantages.

2. Explain the Multiprogramming operating system with advantages and disadvantages.

3. Explain the Multitasking operating system with advantages and disadvantages.

CASE STUDY/PROGRAMS/ NUMERICALS





SUB LESSON 1.5

TYPES OF OPERATING SYSTEMS-2


An operating system is a cleverly designed collection of programs that controls the hardware of a
computer. It is a form of system software that ensures the computer system runs efficiently.

1. Simple Batch System

2. Multiprogramming System

3. Multitasking system

4. Multiprocessor System

5. Distributed Operating System

6. Real-time Operating System

MULTIPROCESSOR SYSTEM



Multiprocessor operating systems are used to increase the performance of multiple CPUs
within a single computer system. Multiple CPUs are connected so that a job can be divided
among them for quicker execution. Once a job completes, the results from every CPU are
gathered and combined to produce the final output. Jobs may share the main memory as well
as additional system resources, and the multiple CPUs can also be employed to run multiple
tasks simultaneously.

Advantages

Increased reliability: A multiprocessing system allows processing jobs to be distributed among
a number of processors. This improves dependability, since a job can be passed to another
processor for completion if one processor fails.

Increased throughput: More work can be done in less time as the number of processors rises.

Economy of scale: Multiprocessor systems are less expensive than equivalent single-processor
systems because they share peripherals, secondary storage, and power supplies.

Disadvantages



As the multiprocessing operating system manages multiple CPUs simultaneously, it is highly
sophisticated and complex.

DISTRIBUTED OPERATING SYSTEM

A distributed operating system (DOS) is an important kind of operating system. To support
numerous real-time applications and users, distributed systems employ numerous central
processors, and data processing jobs are divided among the processors accordingly. It links
various computers through a single communication channel. Each of these systems also has its
own processor and memory, and these CPUs may communicate through telephone lines or
high-speed buses. Each individual system that communicates through the single channel is
considered one entity; such systems are also known as "loosely coupled systems".

The various computers, nodes, and sites that make up this operating system are connected by
LAN/WAN links. It supports a wide range of real-time products and different users, and it
allows whole systems to be divided across a few central processors. Distributed operating
systems allow users to share compute resources and I/O files while abstracting away from the
physical machines.

REAL-TIME OPERATING SYSTEM

A real-time operating system (RTOS) is a special-purpose operating system used in computers
that must meet strict time requirements for any task to be completed. It is primarily used in
systems where the output of a computation is used to modify a process while it is running.
Whenever an external event happens, a sensor monitoring the event transmits information
about it to the computer, and the operating system interprets the signal from the sensor as an
interrupt. When an interrupt occurs, the operating system starts a particular process or group
of processes to handle it, and that process continues without interruption unless a higher-
priority interrupt happens while it is running. The interrupts must therefore be prioritized in a
specific order: the interrupt with the highest priority must be allowed to start its process,
while lower-priority interrupts are held in a buffer to be handled later. In a system like this,
interrupt control is crucial.



Types of Real-time operating system

Hard Real-Time operating system:

In a hard RTOS, all crucial actions must be finished by the deadline, within the allotted time
frame. Missing the deadline would lead to serious failures, such as equipment damage or even
the loss of human life.

For Example,

Consider the airbags provided by automakers in the steering wheel area of the driver's seat.
When the driver applies the brakes at a critical moment, the airbags must expand instantly to
prevent the driver's head from striking the wheel; even a millisecond's delay could result in an
accident. Similarly, think of online stock trading software: the system must make sure that an
instruction to sell a certain share is executed within a specified window of time, otherwise a
sudden decline in the market could cause the trader a significant loss.

Soft Real-Time operating system:

A soft RTOS accepts a few delays from the operating system. Under this type of RTOS, a
certain job may have a deadline that should be met, but a brief delay is acceptable; deadlines
are therefore handled leniently by this type of RTOS.

For Example,

This kind of system is used, for instance, in livestock price quotation systems and online
transaction systems.





SUB LESSON 1.6

SERVICES OF OPERATING SYSTEM


The various operating system services are


1. User Interface
2. Program Execution
3. I/O Operations
4. File System Manipulation
5. Communications between Processes
6. Error Detection
7. Resource Allocation
8. Accounting
9. Protection and Security

USER INTERFACE
A user interface (UI) works as an interface that makes it easier for an application and its user to
communicate. For efficient communication, each application, including the operating system, is
given its own user interface (UI). An application's user interface has two primary purposes: to
receive input from users and to deliver output to them. However, the types of inputs and
outputs that the UI accepts and produces may differ from one application to another.

Any operating system's user interface can be categorized into one of the following types:

1. Graphical user interface (GUI)


2. Command line user interface (CLI)



Graphical user interface (GUI)

A graphical user interface enables users to interact with the operating system using point-and-
click actions. A GUI has a number of icons that are visual representations of various items, such
as a file, a directory, or a device. Users can manipulate the graphical icons offered in the user
interface (UI) using a mouse, trackball, touch screen, light pen, or other suitable pointing
device; the icons can also be controlled from other input devices such as the keyboard.
Because each object is represented by a corresponding icon, GUIs are considered particularly
user-friendly interfaces.

Command line user interface (CLI)

A command line interface lets users communicate with the operating system by giving it
specific commands. To carry out an operation in this interface, the user must type a command
at the command line. When the user presses the Enter key, the command is received by the
command line interpreter, the computer program that is in charge of taking user commands
and processing them. The interpreter processes the command, displays the results of the
command, and then displays the command prompt again. The CLI's drawback is that the user
must memorize many commands in order to interface with the operating system.



PROGRAM EXECUTION

The purpose of computer systems is to allow the user to execute programs, so operating
systems provide an environment where the user can conveniently run them. The user does
not have to worry about memory allocation or multitasking or anything of the kind; these
things are taken care of by the operating system. Running a program involves allocating and
deallocating memory, and CPU scheduling in the multiprocess case. These functions cannot be
given to user-level programs, so user-level programs cannot help the user run programs
independently, without help from the operating system.
Several steps must be completed for a program to be executed: the data and the instructions
must be loaded into main memory, files and input-output devices must be initialized, and
other resources must be set up. These kinds of tasks are carried out by the operating system,
and the user no longer needs to worry about memory allocation, multitasking, or anything
else.



I/O OPERATIONS

Each program requires input and produces output, which involves the use of I/O. The
operating system hides from the user the details of the underlying I/O hardware; all the user
sees is that the I/O has been performed, without any of the details. By providing I/O in this
way, the operating system makes it convenient for users to run programs. For efficiency and
protection, users cannot control I/O directly, so this service cannot be provided by user-level
programs. The operating system establishes communication between the user and the device
drivers and controls the input-output functions. Device drivers are programs, accompanying
the hardware, that the operating system controls; they are necessary for correct device
synchronization. Additionally, the OS gives programs access to input-output devices when
necessary.



FILE SYSTEM MANIPULATION

The output of a program may need to be written into new files, or its input taken from
existing files. The operating system provides this service, so the user does not have to worry
about secondary storage management: the user gives a command for reading or writing to a
file and sees the task accomplished. Operating systems thus make it easier for user programs
to accomplish their task. This service involves secondary storage management. The speed of
I/O that depends on secondary storage is critical to the speed of many programs, so it is best
relegated to the operating system rather than giving individual users control of it. It would
not be difficult for user-level programs to provide these services, but for the above-mentioned
reasons it is best if they are left with the operating system.
The operating system controls how files are manipulated and managed: deleting files,
updating or saving files, and searching files are all controlled by the operating system. It also
controls the permission of certain programs or users to access certain files; not every file can
be used by every program or every user, and this access restriction is taken care of by the
operating system.

COMMUNICATIONS BETWEEN PROCESSES

There are instances where processes need to communicate with each other to exchange
information; the processes may be running on the same computer or on different computers.
By providing this service, the operating system relieves the user of the worry of passing
messages between processes. In cases where messages need to be passed to processes on
other computers through a network, this can be done by user programs: a user program may
be customized to the specifics of the hardware through which the message transits and
provide the service interface to the operating system.
The operating system controls how processes communicate with one another, and transferring
data among processes is a part of that communication. When the processes are connected via
a computer network rather than being on the same machine, the operating system itself is
responsible for managing the communication. In this part we are talking about processes, so
what is a process? A process is nothing but an executing program. Some processes need to
communicate with other processes to share data, and they need to synchronize during
execution; this is handled by the operating system. The communicating processes might be on
the same computer or on different computers connected in a network.
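
To make this concrete, here is a minimal sketch, assuming a UNIX-like system and the standard
pipe(), fork(), read(), and write() calls: the operating system creates the communication
channel, and a parent and child process exchange a message through it.

/* Inter-process communication through a pipe (minimal sketch). */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                                /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                       /* child process: sends a message */
        close(fd[0]);
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg) + 1);  /* +1 sends the terminating '\0' */
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                            /* parent process: receives the message */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("Parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                              /* collect the finished child */
    return 0;
}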

ERROR DETECTION

An error in one part of the system may cause the complete system to malfunction. To avoid
such a situation, the operating system constantly monitors the system to detect errors, which
relieves the user of the worry of an error propagating to various parts of the system and
causing malfunctions. This service cannot be handled by user programs, because it involves
monitoring, and in some cases altering, areas of memory, or deallocating the memory of a
faulty process, or perhaps relinquishing the CPU from a process that goes into an infinite loop.
These tasks are too critical to be handed over to user programs: a user program, if given these
privileges, could interfere with the correct (normal) operation of the operating system.
The operating system also manages errors that happen in input-output devices, the CPU, and
so on. It repairs the issues and makes sure they don't happen frequently, it keeps processes
from reaching a deadlock, and it watches for any errors or defects that might occur while
performing any task. A well-secured OS also serves as a protective measure for preventing,
and possibly handling, any type of attack on the computer system from an external source.

RESOURCE ALLOCATION

Resource allocation means allocating different resources, such as memory, input-output
devices, and CPU time, to different processes or different users. The operating system is in
charge of controlling how resources are shared. Using CPU scheduling algorithms, it controls
how much CPU time is given to each process; it supports the system's memory management;
and it likewise manages the input-output devices. The OS also ensures the proper utilization of
all resources by deciding which resource goes to which user.



ACCOUNTING

To improve overall performance, an operating system gathers usage data for a variety of
resources and keeps track of performance metrics and response times. These data are useful
for later upgrades and for fine-tuning the system to improve overall performance.



PROTECTION AND SECURITY

Protection in an operating system is a mechanism that restricts the access of processes,
programs, or users to computer system resources. The operating system ensures that all
access to system resources is monitored and managed, and it guarantees that peripherals and
external resources are secured against unauthorized access. It offers authentication through
usernames and passwords.





SUB LESSON 1.7

SYSTEM CALLS

System calls provide an interface between a running program and the operating system.
System calls are generally available as assembly language instructions, and several higher-level
languages such as C also allow making system calls directly.
In the UNIX operating system, the system call interface layer contains the entry points into
kernel code. All system resources are managed by the kernel, and any request from a user or
application that involves access to any system resource must be handled by kernel code. The
user process must not be given open access to kernel code for security reasons, so many
openings into kernel code, called system calls, are provided so that user processes can invoke
the execution of kernel code. System calls allow processes and users to manipulate system
resources.
There are three general methods used to pass information (parameters) between a running
program and the operating system:
1. One method is to store the parameters in registers.
2. Another is to store the parameters in a table in memory and pass the address of the table.
3. The third method is to push the parameters onto the stack and let the operating system
pop them off the stack.
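
As a small illustration of the register method, here is a minimal sketch, assuming Linux with
glibc (the syscall() wrapper and the SYS_getpid constant are Linux-specific), in which a C
program invokes a system call directly; the wrapper places the call number and any
parameters in registers before trapping into the kernel.

#define _GNU_SOURCE            /* exposes syscall() in glibc */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* SYS_getpid takes no parameters; the kernel returns the process ID. */
    long pid = syscall(SYS_getpid);
    printf("My process ID is %ld\n", pid);
    return 0;
}

The same call is normally made through the C library wrapper getpid(), which hides the trap
behind an ordinary function call.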

One needs to first know the connection between a CPU's kernel mode and user mode in order
to understand system calls. These two modes are supported by all current operating systems.



KERNEL MODE
● When the CPU is operating in kernel mode, the code being run has access to every
memory location and every piece of hardware.
● Because of this, kernel mode is an extremely privileged and strong mode.
● If a program crashes in kernel mode, the entire system can be halted.

USER MODE
● Programs can't directly access memory and hardware resources when the CPU is in user
mode.
● If a program breaks in user mode, only that specific program is stopped.
● This means that even if a user-mode program crashes, the system will still be in a secure
condition.
● So, the majority of OS programs operate in user mode.



SYSTEM CALLS
System calls serve as the interface between an operating system and a process. System calls
can typically be found as assembly language instructions, and they are also covered in the
manuals used by programmers working at the assembly level. A system call is typically
generated when a process in user mode needs access to a resource; the resource is then
requested from the kernel through the system call.

This diagram shows how a process runs normally in user mode until a system call interrupts it.
The system call is then carried out, according to priority, in kernel mode. After the system call
has been completed, control switches back to user mode, allowing the user process to
continue running.
Generally speaking, system calls are necessary in the following circumstances:
● when the file system needs files to be created or deleted; a system call is also necessary
for reading from and writing to files;
● for the creation and management of new processes;
● for network connectivity, including sending and receiving packets;
● to gain access to hardware devices such as printers and scanners.

System Call Types


There are seven primary categories of system calls. The following are detailed explanations of
each:

System Calls for Process Management

These types of system calls are used to control processes. Some examples are end, abort,
load, execute, create process, and terminate process.
Example: The exit() system call ends a process and returns a value to its parent.
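
As a brief sketch, assuming a UNIX-like system, the process-control calls just named can be
combined as follows: fork() creates a child process, exit() ends it with a value, and wait() lets
the parent collect that value.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a new process */
    if (pid == 0) {                  /* child branch */
        printf("Child running\n");
        exit(42);                    /* end the child, returning a value */
    }
    int status;
    wait(&status);                   /* parent waits for the child to end */
    if (WIFEXITED(status))
        printf("Child exited with %d\n", WEXITSTATUS(status));
    return 0;
}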

System Calls for Signaling


A signal is a limited form of inter-process communication used in UNIX, UNIX-like, and other
POSIX-compliant operating systems. Essentially it is an asynchronous notification sent to a
process in order to notify it of an event that occurred. The number of signals available is
system-dependent. When a signal is sent to a process, the operating system interrupts the
process's normal flow of execution; execution can be interrupted during any non-atomic
instruction. If the process has previously registered a signal handler, that routine is executed;
otherwise, the default signal handler is executed.
Programs can respond to signals in three different ways:
1. Ignore the signal: the program will never be informed of the signal, no matter how many
times it occurs.
2. Leave the signal in its default state, which for many signals means that the process will be
ended when it receives the signal.
3. Catch the signal: when the signal occurs, the system will transfer control to a previously
defined subroutine where the program can respond to the signal as appropriate.
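
The sketch below, a minimal example assuming a UNIX-like system and the standard signal()
interface, shows all three responses at once: SIGINT is caught by a handler, SIGQUIT is
ignored, and every other signal keeps its default state.

#include <signal.h>
#include <unistd.h>

static void on_interrupt(int signum)
{
    (void)signum;
    /* Called asynchronously when SIGINT (Ctrl+C) arrives;
       write() is safe to call inside a signal handler. */
    write(STDOUT_FILENO, "Caught SIGINT\n", 14);
}

int main(void)
{
    signal(SIGINT, on_interrupt);    /* catch the signal */
    signal(SIGQUIT, SIG_IGN);        /* ignore the signal */
    /* SIGTERM is left at SIG_DFL, its default state: it ends the process. */
    for (;;)
        pause();                     /* sleep until a signal arrives */
}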
System Calls for File Management



The file-structure-related system calls available in operating systems such as UNIX let you
create, open, and close files; read and write files; access files randomly; alias and remove files;
get information about files; check the accessibility of files; change the protections, owner, and
group of files; and control devices. These operations use either a character string that gives
the absolute or relative path name of a file, or a small integer called a file descriptor that
identifies the I/O channel. When doing I/O, a process specifies the file descriptor for an I/O
channel, a buffer to be filled or emptied, and the maximum size of data to be transferred. An
I/O channel may allow input, output, or both. Furthermore, each channel has a read/write
pointer: each I/O operation starts where the last one finished and advances the pointer by the
number of bytes transferred, and a process can access a channel's data randomly by changing
the read/write pointer. These types of system calls are used to manage files.
Example: create a file, delete a file, open, close, read, write, etc.

System Calls for Directory Management


You may need the same sets of operations as in file management for directories as well, if you
have a directory structure for organizing files in the file system. In addition, for either files or
directories, you need to be able to determine the values of various attributes, and perhaps
reset them if necessary. File attributes include the file name, file type, protection codes,
accounting information, and so on. At least two system calls, get file attribute and set file
attribute, are required for this function; some operating systems provide many more.

System Calls for Protection


Improper use of the system can easily cause a system crash, so some level of control is
required. The design of the microprocessor architecture on essentially all modern systems
(except embedded systems) offers several levels of control. The low-privilege level at which
normal applications execute limits the address space of the program so that it cannot access
or modify other running applications or the operating system itself (called "protected mode"
on x86); it also prevents the application from using any system devices (i.e., the frame buffer,
network devices, or any I/O-mapped device). But any normal application obviously needs
these abilities, and thus the operating system is introduced: it executes at the highest
privilege level and allows applications to request a service, a system call, which is
implemented by hooking interrupts. If the request is allowed, the system enters a higher-
privileged state, executes a specific set of instructions over which the interrupting program
has no direct control, and then returns control to the former flow of execution. This concept
also serves as a way to implement security. With the development of separate operating
modes with varying levels of privilege, a mechanism was needed for transferring control safely
from less privileged modes to more privileged modes. Less privileged code cannot simply
transfer control to more privileged code at any arbitrary point and with any arbitrary
processor state; allowing that could break security. For instance, the less privileged code could
cause the higher privileged code to execute in the wrong order, or provide it with a bad stack.

System Calls for Time Management


Many operating systems provide a time profile of a program. It indicates the amount of time
that the program executes at a particular location or set of locations. A time profile requires
either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt,
the value of the program counter is recorded. With sufficiently frequent timer interrupts, a
statistical picture of the time spent on various parts of the program can be obtained.
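
The following C sketch shows the mechanism in miniature, assuming POSIX interval timers; it merely counts SIGPROF ticks, whereas a real profiler would record the program counter at each tick:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

/* One "sample" per timer interrupt; a profiler would record the PC here. */
static void on_tick(int signo) { ticks++; }

int main(void)
{
    struct sigaction sa = {0};
    /* it_interval first, then it_value: fire every 10 ms of CPU time. */
    struct itimerval it = { {0, 10000}, {0, 10000} };

    sa.sa_handler = on_tick;
    sigaction(SIGPROF, &sa, NULL);
    setitimer(ITIMER_PROF, &it, NULL);

    for (volatile long i = 0; i < 100000000L; i++)
        ;                              /* CPU-bound work being profiled */

    printf("profiling ticks: %d\n", (int)ticks);
    return 0;
}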

System Calls for Device Management


A program, as it is running, may need additional resources to proceed. Additional resources
may be more memory, tape drives, access to files, and so on. If the resources are available,
they can be granted, and control can be returned to the user program; otherwise, the program
will have to wait until sufficient resources are available. These types of system calls are used to
manage devices.
Example: Request device, release device, read, write, get device attributes, etc.
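
As a small UNIX-flavored example, ioctl() is the general-purpose "get/set device attributes" call; here it asks the terminal driver for the window size (TIOCGWINSZ is a standard terminal request):

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct winsize ws;

    /* "Get device attributes": query the terminal device driver. */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)
        printf("terminal: %d rows x %d cols\n",
               (int)ws.ws_row, (int)ws.ws_col);
    return 0;
}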



Types of System Calls     Windows                  Linux

Process Control           CreateProcess()          fork()
                          ExitProcess()            exit()
                          WaitForSingleObject()    wait()

File Management           CreateFile()             open()
                          ReadFile()               read()
                          WriteFile()              write()
                          CloseHandle()            close()

Device Management         SetConsoleMode()         ioctl()
                          ReadConsole()            read()
                          WriteConsole()           write()

Information Maintenance   GetCurrentProcessID()    getpid()
                          SetTimer()               alarm()
                          Sleep()                  sleep()

Communication             CreatePipe()             pipe()
                          CreateFileMapping()      shmget()
                          MapViewOfFile()          mmap()

open()
A file in a file system can be accessed using the open() system call. In addition to allocating resources for the file, this system call gives the process a handle (a file descriptor) it can use to refer to the file. Depending on the file system and its organization, a file may be opened by only one process at a time or by numerous processes concurrently.

read()
Data from a file that is kept in the file system can be accessed using the read() system call. The file to be read is identified by its file descriptor, and it must first be opened using open() before it can be read. The read() system call typically takes three arguments: the file descriptor, the buffer that holds the read data, and the number of bytes to be read from the file.

write()
The write() system call writes data from a user buffer into a device such as a file. This system call is one of the ways to output data from a program. In general, the write system call takes three arguments: the file descriptor, a pointer to the buffer where the data is stored, and the number of bytes to write from the buffer.

close()
The close() system call is used to terminate access to a file. Using this system call means that the file is no longer required by the program, so its buffers are flushed and both the file's resources and its in-memory metadata are released.
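
Putting the four calls together, here is a minimal POSIX sketch that copies a file to standard output (the file name "input.txt" is just an illustration):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    int fd = open("input.txt", O_RDONLY);   /* open: obtain a descriptor */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* read: descriptor, buffer to fill, maximum bytes to transfer */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);        /* write: descriptor, buffer, count */

    close(fd);                               /* close: flush and release */
    return 0;
}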



INTRODUCTION OF OPERATING SYSTEM



SUB LESSON 1.8

STRUCTURE OF AN OS-LAYERED

The operating system structure is a container for a collection of structures for interacting with the operating system's file system, directory paths, processes, and I/O subsystem. The types and functions provided by the operating system substructures are meant to present a model for handling these resources that is largely independent of the operating system. The different kinds of structure are described below.
Users and computer hardware are connected by the operating system (OS). It offers users an environment in which they can run programs quickly and effectively. Technically speaking, the OS is the software that manages the hardware: it controls how memory, processors, devices, and information are allocated among various resources and services. To carry out its many functions and services, a layered operating system is organized as a stack of layers, each with a distinct task to complete that has been carefully established.
The operating system is segmented into several layers (say 0 to n), and each layer is in charge of carrying out its specific duty. The following conventions apply to these layers:
● In a layered operating system, the uppermost layer must be a UI (User Interface) layer.
● The hardware layer must be the innermost layer.
● Any layer has access to the layers below it, but not to the layers above it. For instance, all layers from n-2 down to 0 are accessible to layer n-1, but the nth layer is not.



The layered operating system arose as a successor to the first monolithic systems: the operating system is divided into many levels, each with its own set of features. Here are some recommendations for layer implementation. Layer 0 is responsible for allocating processes and switching between them when timer interrupts or other disruptions happen; it also covers the fundamentals of CPU multiprogramming. Consequently, if the user layer wishes to interact with the hardware layer, the request passes through all layers from n-1 down to 1. It is essential to design and implement each layer so that it only needs the services of the layers below it.

Six Layers
1. Hardware: This layer interacts with the system hardware and controls how all peripheral devices, including a printer, mouse, keyboard, and scanner, operate. The hardware layer manages these categories of hardware devices. In the layered operating system architecture, the hardware layer is the lowest and most powerful layer; it is directly connected to the core of the system.
2. CPU Scheduling: This layer handles the scheduling of CPU processes. Scheduling queues are used to manage the processes: as soon as processes are introduced to the system, they are added to the job queue, and the processes that are ready to run in main memory are listed in the ready queue.



3. Memory Management: This layer handles memory, including the transfer of processes from the hard drive to primary memory for execution.
4. Process Management: This layer manages processes by allocating the processor to a particular process at a precise time; this is also known as process scheduling. Process scheduling techniques include FCFS (first come, first served), SJF (shortest job first), round-robin scheduling, and so on.
5. I/O (Input/Output) Buffer: I/O devices play a crucial role in computer systems; they enable end users to interact with the system. This layer maintains the I/O device buffers and makes sure they are working properly. Say you are typing on a keyboard: the keyboard is connected to a keyboard buffer, which temporarily stores the data. In a similar vein, every input/output device has a buffer, because input/output devices have slow processing or storage speeds compared to the processor. The computer uses these buffers to synchronize the processor and the input/output devices.
6. User Programs: This is the top layer in a layered operating system. The fact that a computer can run so many user programs and applications, like word processors, games, and browsers, is due to this layer. It is frequently referred to as the application layer because it is focused on application programs.

Layering offers particular benefits in operating systems. Each layer can be designed separately and can interact with other layers as necessary. In addition, a layered system is simple to build, manage, and update, because a change to the specification of one layer leaves the other layers unaffected. Each of the operating system's layers can communicate only with the layers directly above and below it. The lowest layer manages the hardware, and the top layer serves user applications.

MONOLITHIC SYSTEMS



This approach is well known as “The Big Mess”. The operating system is written as a collection
of procedures, each of which can call any of the other ones whenever it needs to. When this
technique is used, each procedure in the system has a well-defined interface in terms of
parameters and results, and each one is free to call any other one if the latter provides some
useful computation that the former needs. For constructing the actual object program of the
operating system when this approach is used, one compiles all the individual procedures, or
files containing the procedures, and then binds them all together into a single object file with
the linker. In terms of information hiding, there is essentially none: every procedure is visible to every other one, as opposed to a structure containing modules or packages, in which much of the information is local to the module and only officially designated entry points can be called from outside it. However, even in monolithic systems it is possible to have at least a little structure. Services such as system calls provided by the operating system are requested by putting the parameters in well-defined places, such as in registers or on the stack, and then executing a special trap instruction known as a kernel call or supervisor call.

DIFFERENCE BETWEEN MONOLITHIC KERNEL AND MICROKERNEL



The central component of an operating system, the kernel, controls how the system's resources are used. A kernel functions as a sort of link between the computer's software and hardware. The two main kinds of kernel are the microkernel and the monolithic kernel.

The microkernel is a kind of kernel that enables operating system customization. It supports low-level address space management and Inter-Process Communication (IPC) while operating in privileged mode. On top of the microkernel are OS services such as the file system, virtual memory management, and the CPU scheduler. To ensure its security, each service has its own address space, and each application likewise has its own address space. As a result, the kernel, OS services, and programs are all protected from one another.

The monolithic kernel is the other type. In a system with a monolithic kernel, each application still has its own address space. Like the microkernel, the monolithic kernel manages system resources between software and hardware, except that user services and kernel services are implemented in the same address space. As the kernel's size grows, the operating system's size grows with it. Through system calls, this kernel offers CPU scheduling, memory management, file management, and other system operations. Operating system execution is sped up by the fact that both kinds of services are implemented within the same address space.



Terms           Monolithic Kernel                        Microkernel

Definition      A kernel type in which the entire        A kernel type that provides low-level
                operating system works in the            address space management, thread
                kernel space.                            management, and interprocess
                                                         communication to implement an
                                                         operating system.

Address space   User services and kernel services       User services and kernel services
                are kept in the same address space.     are kept in separate address spaces.

Size            Larger than a microkernel.              Smaller in size.

Execution       Fast execution.                         Slower execution.

OS services     The kernel itself contains the OS       The OS services and the kernel are
                services.                               separated.

Extendible      Quite complicated to extend.            Easily extendible.

Security        If a service crashes, the whole         If a service crashes, it does not
                system crashes.                         affect the working of the microkernel.

Customization   Difficult to add new functionalities,   Easier to add new functionalities,
                so it is not customizable.              so it is more customizable.

Code            Less coding is required to write a      More coding is required to write a
                monolithic kernel.                      microkernel.



PROCESS MANAGEMENT



SUB LESSON 2.1

PROCESS AND PROCESS STATES

WHAT IS A PROCESS?

An operating system manages each hardware resource attached to the computer by representing it as an abstraction. An abstraction hides the unwanted details from users and programmers, allowing them to view the resources in a form that is convenient to them. A process is an abstract model of a sequential program in execution, and the operating system can schedule a process as a unit of work. The term "process" was first used by the designers of MULTICS in the 1960s; since then, it has been used somewhat interchangeably with "task" or "job". The process has been given many definitions, including:
1. A program in execution.
2. An asynchronous activity.
3. The "animated spirit" of a procedure in execution.
4. The entity to which processors are assigned.
5. The "dispatchable" unit.
Though there is no universally agreed-upon definition, "a program in execution" is the one most frequently used, and it is the concept you will use in the present study of operating systems. Now that the definition is agreed upon, the question is: what is the relationship between a process and a program? Informally, they are the same beast with different names - while the beast is sleeping (not executing) it is called a program, and while it is executing it becomes a process. To be very precise, though, a process is not the same as a program.

Two steps happen after writing a program in any language:

1. Compiling

2. Running/Executing

That program only qualifies as a process after the second stage. You can turn any program (application) into a process by double-clicking it on your computer or tapping it on your smartphone. Every application is a program up to the point at which a command, a double click, or another action runs it. Consequently, a process is an active program.

Now, the operating system loads a process into memory (RAM) once you start it. The process's internal organization is as follows:

1. Text – program code
2. Data – contains global variables
3. Stack – contains temporary data:
● function parameters
● return addresses
● local variables
4. Heap – memory allocated dynamically during run time

Global variables are contained in the data section, while the text section holds the program code. Because neither the code nor the program's global variables will change in number, these two parts are fixed in size. The heap is used for dynamic memory allocation, which we employ when we cannot calculate in advance the amount of memory needed; as a result, the heap portion can expand in size as required. The stack portion is used for function calls, and its size likewise varies, because it can be challenging to predict the number of function calls. Consider a program that uses a recursive function to calculate factorials: if the number is five, the function is called five times; if the factorial of twenty is wanted, the function is called twenty times. The system is therefore unsure about the exact size of the stack in advance, so the stack size is kept flexible.
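
A small C illustration of why the stack must grow dynamically (assuming a 64-bit unsigned long, so that 20! still fits): each recursive call pushes a new frame holding its own copy of n and a return address.

#include <stdio.h>

/* Each call pushes a new stack frame with its own n and return address. */
unsigned long fact(unsigned int n)
{
    if (n <= 1)
        return 1;                    /* deepest frame: recursion stops */
    return n * fact(n - 1);
}

int main(void)
{
    printf("5!  = %lu\n", fact(5));   /* stack grows 5 frames deep  */
    printf("20! = %lu\n", fact(20));  /* stack grows 20 frames deep */
    return 0;
}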

PROCESSES CREATION

The creation of a process requires the following steps. The order in which they are carried out is
not necessarily the same in all cases.

1. Name: The name of the program which is to run as the new process must be known.

2. Process ID and Process Control Block: The system creates a new process control block, or
locates an unused block in an array. This block is used to follow the execution of the program
through its course, keeping track of its resources and priority. Each process control block is
labeled by its PID or process identifier.

3. Locate the program to be executed on disk and allocate memory for the code segment in
RAM.

4. Load the program into the code segment and initialize the registers of the PCB with the start
address of the program and appropriate starting values for resources.

5. Priority: A priority must be computed for the process, using a default for the type of process
and any value which the user specified as a `nice’ value.

6. Schedule the process for execution.

Process Hierarchy: Children and Parent Processes

In a democratic system anyone can choose to start a new process, but it is never users who create processes - it is other processes! That is because anyone using the system must already be running a shell or command interpreter in order to be able to talk to the system, and the command interpreter is itself a process. When a user creates a process using the command interpreter, the new process becomes a child of the command interpreter. Similarly, the command interpreter process becomes the parent of the child. Processes, therefore, form a hierarchy.

The processes are linked by a tree structure. If a parent is signaled or killed, usually all its
children receive the same signal or are destroyed with the parent. This doesn’t have to be the
case – it is possible to detach children from their parents – but in many cases, it is useful for
processes to be linked in this way.

When a child is created it may do one of two things.

1. Duplicate the parent process.

2. Load a completely new program.

Similarly, the parent may do one of two things.

1. Continue executing alongside its children.

2. Wait for some or all of its children to finish before proceeding.

The specific attributes of the child process that differ from the parent process are

1. The child process has its own unique process ID.

2. The parent process ID of the child process is the process ID of its parent process.

3. The child process gets its own copies of the parent process's open file descriptors. Subsequently changing attributes of the file descriptors in the parent process won't affect the file descriptors in the child, and vice versa. However, the file position associated with each descriptor is shared by both processes.

4. The elapsed processor times for the child process are set to zero.



5. The child doesn't inherit file locks set by the parent process.

6. The child doesn’t inherit alarms set by the parent process.

7. The set of pending signals for the child process is cleared. (The child process inherits its mask
of blocked signals and signal actions from the parent process.)
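
A minimal POSIX sketch of process creation with fork(), showing the child's own PID, its link to the parent, and the parent waiting for the child to finish:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* duplicate the calling process */

    if (pid == 0) {
        /* Child: unique PID; its parent PID is the creator's PID. */
        printf("child:  pid=%d ppid=%d\n", (int)getpid(), (int)getppid());
    } else if (pid > 0) {
        wait(NULL);                   /* parent waits for the child    */
        printf("parent: pid=%d child=%d\n", (int)getpid(), (int)pid);
    } else {
        perror("fork");               /* creation failed               */
        return 1;
    }
    return 0;
}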

PROCESS STATES

1. New



A new process is a program that the OS is going to pick up and load into the main memory.

2. Ready

The ready state, where a process waits for the CPU to be assigned, is the first state a process enters after being created. The operating system pulls new processes from secondary memory and places them in main memory. The term "ready-state processes" refers to processes that are in main memory and prepared for execution. Numerous processes may be in the ready state at the same time.

3. Running

The OS will select one of the processes from the ready state based on the scheduling
mechanism. As a result, if our system only has one CPU, there will only ever be one process
operating at any given time. We can run n processes concurrently in the system if there are n
processors.

4. Block or wait

Depending on the scheduling mechanism or the inherent behavior of the process, a process can move from the running state to the blocked or wait state. While the process waits for a specific resource to be allocated or for user input, the OS moves it to the blocked or wait state and gives the CPU to other processes.

5. Completion or termination

A process enters the termination state once it has completed its execution. The operating
system will end the process and erase the entire context of the process (Process Control
Block).

6. Suspend ready

A process enters the suspend-ready state when it is transferred from primary memory to secondary memory because primary memory was insufficient: it was ready to run, but now resides in secondary memory. When a higher-priority process has to run but main memory is already full, the operating system must free up memory by moving a lower-priority process to secondary memory. Until main memory becomes available again, these suspend-ready processes remain in secondary memory.

7. Suspend wait

Rather than letting a blocked process keep holding space in main memory, it is preferable to move it to secondary memory: since it is already waiting for a resource to become available, it might as well wait in secondary memory and free up space for a higher-priority process.

KEY TAKEAWAY

➢ A process in the context of an operating system refers to the dynamic execution of a


program. Acting as an abstraction, it encapsulates the sequential flow of a program in
execution, allowing the operating system to schedule and manage it as a unit of work.
➢ The term "process" is often used interchangeably with "task" or "job." Originating in the
design of MULTICS in the 1960s, a process can be defined in various ways, but the most
commonly used definition is that of a "program in execution."
➢ The key distinction between a program and a process lies in the execution phase – a
program becomes a process only after it has been compiled and is actively running.
➢ Processes are vital components in a computer system, managed by the operating
system through the creation of process control blocks, memory allocation, and
scheduling.
➢ They form hierarchical structures, with parent and child processes linked in a tree,
enabling the efficient management and execution of various tasks in a computer
environment.
➢ Understanding the concept of processes is fundamental to comprehending how
operating systems coordinate and optimize the execution of programs within a
computing system.



PROCESS MANAGEMENT

SUB LESSON 2.2



PROCESS CONTROL BLOCK



The operating system groups all information that it needs about a particular process into a
data structure called a process descriptor or a Process Control Block (PCB). Whenever a
process is created (initialized, installed), the operating system creates a corresponding
process control block to serve as its run-time description during the lifetime of the process.
When the process terminates, its PCB is released to the pool of free cells from which new
PCBs are drawn. The dormant state is distinguished from other states because a dormant
process has no PCB. A process becomes known to the O.S. and thus eligible to compete for
system resources only when it has an active PCB associate with it. Information stored in a PCB
typically includes some or all of the following:
1. Process name (ID)
2. Priority
3. State (ready, running, suspended)
4. Hardware state.
5. Scheduling information and usage statistics
6. Memory management information (registers, tables)
7. I/O Status (allocated devices, pending operations)
8. File management information
9. Accounting information.
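
A simplified, hypothetical C sketch of such a structure (the field names are illustrative only; a real kernel's PCB, such as Linux's task_struct, holds far more):

enum proc_state { READY, RUNNING, SUSPENDED };

struct pcb {
    int              pid;                /* 1. process name (ID)            */
    int              priority;           /* 2. priority                     */
    enum proc_state  state;              /* 3. ready / running / suspended  */
    unsigned long    registers[16];      /* 4. saved hardware state         */
    unsigned long    cpu_time_used;      /* 5. scheduling info, statistics  */
    void            *page_table;         /* 6. memory management info       */
    int              open_files[16];     /* 7./8. I/O and file management   */
    long             cpu_ticks_billed;   /* 9. accounting information       */
    struct pcb      *next;               /* link to the next PCB in a queue */
};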
Once constructed for a newly created process, the PCB is filled with the programmer-defined attributes found in the process template or specified as the parameters of the CREATE-PROCESS operating system call. Whenever a process is suspended, the contents of the
processor registers are usually saved on the stack, and the pointer to the related stack frame
is stored in the PCB. In this way, the hardware state can be restored when the process is
scheduled to run again. A process control block or PCB is a data structure (a table) that holds
information about a process. Every process or program that runs needs a PCB. When a user



requests to run a particular program, the operating system constructs a process control block
for that program.
Typical information stored in a process control block includes:
1. The location of the process in memory
2. The priority of the process
3. A unique process identification number (called the PID)
4. The current process state (ready, running, blocked)
5. Associated data for the process.
The PCB is the store that allows the operating system to locate key information about a process; thus, the PCB is the data structure that defines a process to the operating system.
In an operating system, many processes are executed concurrently, and each process is accompanied by the information and instructions needed for its execution - for example, a list of devices it uses (such as a printer) and the code it executes. When the operating system creates a process, it creates a data structure to store the information of that process; this is known as the Process Control Block (PCB).
Process Control block (PCB) is a data structure that stores information about a process.

PCBs are stored in specially reserved memory for the operating system known as kernel space.



Kernel space and user space are two logically independent divisions of Random Access Memory (RAM). Kernel space is the heart of the operating system: it typically cannot be accessed by the user, but it has complete access to all machine hardware and memory. Each process's PCB includes various attributes, including the process ID, priority, registers, program counter, process state, list of open files, and so on.

The task of allocating the CPU to processes falls to the operating system, because a process does not require the CPU at all times - an input/output process, for example, uses the CPU only when it is triggered. The process control block serves as a badge for every process: without consulting each process's PCB, the operating system cannot identify which process is which. For instance, the CPU may currently be handling several background processes - MS Word, a PDF reader, printing - and without a record of each process the OS could not manage or identify them. The PCB is therefore used as a data structure to record information about each process. Whenever a user initiates a process (such as a print command), the operating system creates a process control block (PCB) for that process, and it uses the PCB to execute and manage the process.

Structure of Process Control Block



Each process's process ID, state, priority, accounting data, program counter, CPU registers, and
other features are all contained in the process control block.

1. Process ID:
When a user creates a new process, the operating system gives it a unique ID, the process ID. This ID distinguishes the process from the other processes running in the system. The operating system has a limit on how many processes it can handle at once - say a maximum of N. In one ID-allocation method, IDs are assigned in ascending order and wrap around: if a new process is created after process N-1, and the old process with ID 0 has already terminated, the new process is assigned ID 0. In another technique, process IDs are not assigned in ascending order. Suppose each PCB uses X bytes of memory and N processes can be active at once; the operating system then sets aside N*X bytes for all the PCBs, and these PCBs are numbered from 0 to N-1.



Note that we are numbering PCB slots here rather than processes. Once a user-triggered process is activated, a free PCB slot is assigned to it, and the process ID of that process is the number of its PCB slot. The operating system therefore keeps a list of open PCB slots; if this list is empty, no new process can be started.
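
A hypothetical C sketch of this free-slot scheme, reusing the struct pcb sketched earlier (all names and sizes are illustrative):

#define N 8                               /* at most N processes            */

static struct pcb table[N];               /* the N*X bytes reserved up front */
static int free_slots[N];                 /* the list of open PCB slots      */
static int free_top;

void init_slots(void)
{
    for (int i = N - 1; i >= 0; i--)
        free_slots[free_top++] = i;       /* initially every slot is free    */
}

/* Returns a PID (the slot number), or -1: no free slot, no new process. */
int alloc_pid(void)
{
    return free_top > 0 ? free_slots[--free_top] : -1;
}

void release_pid(int pid)
{
    free_slots[free_top++] = pid;         /* slot returns to the free list   */
}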

2. Process State:
A process goes through various states from start to finish. A process may typically be in one of
the following 5 states while it is running:

New: The processes that are prepared to be loaded into the main memory by the operating
system are present in this state.

Ready: This state contains processes that are stored in the system's main memory and are prepared to run. The operating system loads processes from secondary memory (the hard disk) into main memory (RAM). Because they are available in main memory and are awaiting assignment to the CPU, these processes are said to be in the ready state.

Running: The processes that are now being run by our system's CPU are listed in this state. Our
system can support a maximum of x processes operating at once if there are x total CPUs in use.



Block or wait: Depending on the scheduling mechanism or the internal behavior of the process, a process may move from its running state to a blocked or wait state (for instance, when the process explicitly wants to wait).

Termination: A process enters its termination state after completing its execution. The
operating system will also remove all of that process's data (Process Control Block).

3. Process Priority:
Each process's priority is indicated by a numeric value called the "process priority"; the lower the value, the higher the process's priority. This priority is determined when the PCB is created and may vary depending on a number of factors, including the length of the process, the number of resources it uses, and others. The user can also assign the process an external priority.

4. Process Accounting Information:


This feature provides data on the resources used by a process over the course of its existence.
For instance, CPU use, connection speed, etc.

5. Program Counter:
The program counter is a pointer to the next instruction to be executed; this PCB field contains the address of the next instruction in the process.

6. CPU registers:
Registers are small, fast storage locations that the CPU can access quickly. When the process is not running, the contents of these registers are saved as part of the process's state in memory so that they can be restored later.

7. Context Switching:



Context switching means switching the CPU from one task or process to another. It is the technique of preserving a process's state so that the process can be restored and resume execution at a later time. This is a crucial component of a multitasking operating system, since it enables numerous processes to share a single CPU.

8. PCB pointer:
The address of the following PCB, which is in the ready state, is contained in this field. This
enables the operating system to maintain a simple control flow between parent processes and
child processes in a hierarchical manner.

9. List of open files:


As its name implies, this field includes details of all the files that the process is currently using. It is crucial because it enables the operating system to close all of the open files when the process terminates.

10. Process I/O information:


This field contains a list of all the input/output devices that the process requires during its execution.

KEY TAKEAWAY

➢ The Process Control Block (PCB) serves as a vital data structure within an operating
system, encapsulating essential information about a particular process.
➢ Created during the initialization of a process, the PCB acts as a dynamic runtime
descriptor, enabling the operating system to manage and coordinate the process
throughout its lifecycle.
➢ Information stored in the PCB encompasses diverse aspects, including the process's
identifier, priority, current state (such as ready, running, or suspended), hardware
status, scheduling details, memory management specifics, I/O status, file management
data, and accounting information.
➢ Upon termination of a process, its corresponding PCB is released, returning to the pool
of free cells available for new PCB creation.



➢ The PCB plays a central role in process management, facilitating CPU allocation, context
switching, and multitasking.
➢ It essentially acts as a repository of a process's vital characteristics, allowing the
operating system to efficiently execute, monitor, and switch between multiple
concurrent processes within the system.
➢ The structured organization of a PCB, containing process-specific details like priority,
program counter, and I/O information, ensures seamless coordination and execution of
diverse tasks within a computing environment.



PROCESS MANAGEMENT

SUB LESSON 2.3



CONTEXT SWITCHING


Typically there are several tasks to perform in a computer system. If one task requires an I/O operation, you want to initiate the I/O operation and go on to the next task, returning to the first one later. This act of switching from one process to another is called a "context switch". When you return to a process, it should resume where it left off; for all practical purposes, the process should never know there was a switch, and it should look as if it were the only process in the system. To implement this, on a context switch you have to:
1. Save the context of the current process
2. Select the next process to run
3. Restore the context of the new process

Context switching keeps track of the process's current state, which includes the data saved in registers, the value of the program counter, and the stack pointer. Context switching is required because, if we simply transferred CPU control to the new process without first storing the state of the old process, we would not be able to pick up where the old process left off, since we would not know the last instruction the old process performed. By preserving the process's state, context switching solves this problem.



The operating system stores all process-relevant data in a particular data structure called the Process Control Block (PCB); the process's state, which shows whether it is ready, running, or waiting, is also stored in the PCB. Suppose there are two processes, P1 and P2, and at this moment the CPU is running P1. If process P1 needs to conduct any I/O (input-output) activity, or if an interrupt occurs, the state of the process must change, and context switching must occur first.
Following are the steps that take place when switching from process P1 to process P2:
1. The data present in the register and program counter will be stored in the PCB of
process P1, let's say PCB1, and change the state in PCB1.
2. Process P1 will be moved to the appropriate queue, which can be either the ready
queue, I/O queue, or waiting queue.
3. The next process will be picked from the ready queue, let's say P2.



4. The state of the process P2 will be changed to a running state, and if P2 was the process
that was previously executed by the CPU, it would resume the execution from where it
was stopped.
5. If we need to execute process P1, we need to carry out the same tasks mentioned in
step 1 to step 4.
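
Context switching can even be observed in user space. This C sketch (assuming a system that provides the POSIX ucontext API, such as Linux) performs exactly the save-and-restore described above: each swapcontext() call stores the current registers and stack pointer in one context structure and restores another, so the switched-out flow later resumes where it left off.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_task;
static char task_stack[16384];

static void task(void)
{
    printf("task: running\n");
    swapcontext(&ctx_task, &ctx_main);   /* save task, resume main     */
    printf("task: resumed where it left off\n");
}

int main(void)
{
    getcontext(&ctx_task);               /* initialize a fresh context */
    ctx_task.uc_stack.ss_sp   = task_stack;
    ctx_task.uc_stack.ss_size = sizeof task_stack;
    ctx_task.uc_link          = &ctx_main;
    makecontext(&ctx_task, task, 0);

    swapcontext(&ctx_main, &ctx_task);   /* save main, run task        */
    printf("main: back in main\n");
    swapcontext(&ctx_main, &ctx_task);   /* switch again: task resumes */
    printf("main: done\n");
    return 0;
}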

The three main context-switching triggers are as follows:

1. Multitasking: In a multitasking environment, context switching is activated when one process is using the CPU and another process needs the CPU. Context switching transfers CPU control to the new process while preserving the state of the previous process.
2. Handling Interrupts: When an interrupt occurs, the CPU must deal with it. Context
Switching is therefore initiated prior to handling the interrupt, saving the process's state
before handling the interrupt.
3. User and Kernel Mode Switching: The kernel mode is the mode in which the process can
perform the system-level actions that are not possible in user mode. User mode is the
default mode in which the user application can execute with restricted access.
Therefore, whenever a process switches from user mode to kernel mode, context
switching is triggered, which stores the status of the active process.

Let's assume that the system has a process running on the CPU and that the ready queue is
empty. A new process has just been created and added to the ready queue. The CPU is allotted
by the system to the process with the greatest priority number. Therefore, the priority number
of the newly generated process will be compared to the priority number of the older process
when it enters the ready queue. In this case, the greater priority number demands preference
over the lower priority number. If the new process's priority number is lower than the older process's, the older process continues executing. Context switching will, however, be activated if the new process has a higher priority number than the older process. For the context switch, the Process Control Block stores the register data, program counter, and process state. The newly created process's state is changed to running, while the older process is placed in the waiting queue. After the newly created process has finished running, the older process waiting in the waiting queue is fetched, and its execution picks up where it left off.

Context switching's benefits


The fundamental benefit of context switching is that it gives the user the impression that the system has several CPUs, since multiple processes are being executed even though it has only one CPU. The context switch is so quick that the user is not even aware of processes moving back and forth.

Context Switching's Drawback


Although there is very little context switching time, the CPU is still idle and not working
productively during that period. Additionally, the TLB (Translation Lookaside Buffer) and cache
frequently flush as a result of context switching.

KEY TAKEAWAY

➢ Context switching is a crucial mechanism in a multitasking operating system, allowing


seamless transitions between processes while maintaining the illusion of concurrent
execution.
➢ It involves saving the context of the current process, selecting the next process to run,
and restoring the context of the chosen process.
➢ The Process Control Block (PCB) is instrumental in this process, storing relevant data
about a process, including its state and execution details.
➢ Context switching is triggered during tasks such as I/O operations, interrupts, or
switches between user and kernel modes.
➢ It ensures that the CPU efficiently handles multiple processes, maintaining the
appearance of simultaneous execution.
➢ While context switching offers the advantage of apparent multitasking, it introduces a
drawback of brief CPU idle time and potential cache flushing.
➢ Overall, it is a fundamental mechanism for optimizing system resources and enhancing
the user experience in a multitasking environment.



PROCESS MANAGEMENT

SUB LESSON 3.1



CONCEPT OF THREADS AND MULTITHREADS

WHAT ARE THREADS?
Take the human body as an example. A human body has various components with various
functionalities that operate in parallel ( E.g.: Eyes, ears, hands, etc.). Similar to this, a single
computer process may have several concurrently executing features, each of which can be
thought of as a separate thread. A thread is the orderly progression of tasks inside a process.
The types of threads in an OS can be the same or distinct. The performance of the programs is
improved through the use of threads. A program counter, stack, and set of registers are all
unique to each thread. However, a single process's threads may share the same code, data, or
file. Due to the fact that they share resources, threads are often referred to as lightweight
processes. Threads are a way for a program to fork (or split) itself into two or more
simultaneously (or pseudo-simultaneously) running tasks. A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within a process, and in many respects they are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (running, blocked, ready, or terminated). Each thread has its own stack: since threads generally call different procedures, each thread has a different execution history, and this is why a thread needs its own stack.

E.g.: While playing a movie on a device the audio and video are controlled by different threads
in the background.
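
A minimal POSIX threads sketch in C (compile with -pthread): two threads run concurrently inside one process, sharing its global data while each keeps its own stack. The unsynchronized counter is deliberately naive; real code would protect it with a mutex.

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;     /* shared: lives in the data section   */

static void *worker(void *arg)
{
    const char *name = arg;        /* local: lives on this thread's stack */
    for (int i = 0; i < 3; i++) {
        shared_counter++;          /* both threads update the same data   */
        printf("%s: counter=%d\n", name, shared_counter);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, "audio");
    pthread_create(&t2, NULL, worker, "video");

    pthread_join(t1, NULL);        /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}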

THREAD STRUCTURE



A thread, sometimes called a lightweight process (LWP), is a basic unit of resource utilization, and consists of a program counter, a register set, and a stack. It shares with peer threads its code section, data section, and operating-system resources such as open files and signals, collectively known as a task.
A traditional or heavyweight process is equal to a task with one thread. A task does nothing if
no threads are in it, and a thread must be in exactly one task. The extensive sharing makes
CPU switching among peer threads and the creation of threads inexpensive, compared with
context switches among heavyweight processes. Although a thread context switch still
requires a register set switch, no memory-management-related work need be done. Like any
parallel processing environment, multithreading a process may introduce concurrency control
problems that require the use of critical sections or locks.
Also, some systems implement user-level threads in user-level libraries, rather than via system
calls, so thread switching does not need to call the operating system, and to cause an
interrupt to the kernel. Switching between user-level threads can be done independently of
the operating system and, therefore, very quickly. Thus, blocking a thread and switching to
another thread is a reasonable solution to the problem of how a server can handle many
requests efficiently. User-level threads do have disadvantages, however. For instance, if the
kernel is single-threaded, then any user-level thread executing a system call will cause the
entire task to wait until the system call returns.
You can grasp the functionality of threads by comparing multiple-thread control with
multiple-process control. With multiple processes, each process operates independently of
the others; each process has its own program counter, stack register, and address space. This
type of organization is useful when the jobs performed by the processes are unrelated.
Multiple processes can perform the same task as well. For instance, multiple processes can provide data to remote machines in a network file system implementation. However, it is more efficient to have one process containing multiple threads serve the same purpose. In the multiple-process implementation, each process executes the same code but has its own memory and file resources. One multithreaded process uses fewer resources than multiple redundant processes, including memory, open files, and CPU scheduling; for example, as Solaris evolved, network daemons were rewritten as kernel threads to greatly increase the performance of those network server functions.
Threads operate, in many respects, in the same manner as processes. Threads can be in one
of several states: ready, blocked, running, or terminated.
A thread within a process executes sequentially, and each thread has its own stack and
program counter. Threads can create child threads, and can block waiting for system calls to
complete; if one thread is blocked, another can run. However, unlike processes, threads are
not independent of one another. Because all threads can access every address in the task, a
thread can read or write over any other thread’s stacks. This structure does not provide
protection between threads. Such protection, however, should not be necessary. Whereas
processes may originate from different users, and may be hostile to one another, only a
single user can own an individual task with multiple threads. The threads, in this case,
probably would be designed to assist one another, and therefore would not require mutual
protection.

Let us return to our example of the blocked file-server process in the single-process model. In
this scenario, no other server process can execute until the first process is unblocked. By



contrast, in the case of a task that contains multiple threads, while one server thread is
blocked and waiting, a second thread in the same task could run. In this application, the
cooperation of multiple threads that are part of the same job confers the advantages of
higher throughput and improved performance. Other applications, such as the producer-
consumer problem, require sharing a common buffer and so also benefit from this feature of
thread utilization. The producer and consumer could be threads in a task. Little overhead is
needed to switch between them, and, on a multiprocessor system, they could execute in
parallel on two processors for maximum efficiency.
Components of Thread
A thread has the following three components:
● Program Counter
● Register Set
● Stack space
Threads in an operating system have a number of advantages and enhance system performance. The operating system needs threads for a number of reasons, including:
● The operational cost across threads is low, because they share the same data and program code.
● Creating and terminating a thread is quicker than starting or ending a process.
● Context switching occurs more quickly in threads than in processes.
The goal of multithreading is to split a single process into numerous threads, as opposed to starting a brand-new process. Multithreading is used to create parallelism and boost application performance, as it is quicker in many of the ways discussed above. Additional benefits of multithreading include:
● Resource sharing: the same resources, including code, data, and files, are shared by all threads of a single process.
● Responsiveness: a program can continue running even when a section of it is blocked or performing a time-consuming function, enhancing user response.
● Economy: utilizing threads is more cost-effective because they share a process's resources, whereas creating processes is expensive.



PROCESSES VS. THREADS
As we mentioned earlier that in many respects threads operate in the same way as that
processes. Let us point out some of the similarities and differences.
Similarities
1. Like processes threads share CPU and only one thread is active (running) at a time.
2. Like processes, threads within a process execute sequentially.
3. Like processes, a thread can create children.
4. And like processes, if one thread is blocked, another thread can run.

Differences
1. Unlike processes, threads are not independent of one another.
2. Unlike processes, all threads can access every address in the task.
3. Processes might or might not assist one another because processes may originate from
different users, but threads are designed to assist one another.

Process                                       Thread

Processes use more resources and hence        Threads share resources and hence are
are termed heavyweight processes.             termed lightweight processes.

Creation and termination times of             Creation and termination times of
processes are slower.                         threads are faster.

Processes have their own code and             Threads share code and data/files
data/files.                                   within a process.

Communication between processes is            Communication between threads is
slower.                                       faster.

Context switching in processes is slower.     Context switching in threads is faster.

Processes are independent of each other.      Threads are interdependent (i.e., they
                                              can read, write, or change another
                                              thread's data).

E.g., opening two different browsers.         E.g., opening two tabs in the same
                                              browser.

TYPES OF THREAD



1. User Level Thread:

The user implements and manages user-level threads; the kernel is unaware of this.

● The OS is unaware of user-level threads because they are implemented using user-level
libraries.
● In comparison to kernel-level thread, the user-level thread can be created and managed
more quickly.
● It is quicker to switch contexts in user-level threads.
● The entire process becomes stalled if only one user-level thread performs a blocking task. Examples: Java threads, POSIX threads, etc.

2. Kernel-level threads:

The OS implements and controls kernel-level threads.

● System calls are used to implement kernel-level threads, and the OS is aware of them.
● Compared to user-level threads, kernel-level threads take longer to create and manage.
● In a kernel-level thread, context switching takes longer.
● The execution of a blocking operation by one kernel-level thread has no impact on any other threads. Examples: Windows, Solaris.

MULTITHREADING MODELS
Multithreading lets an application divide its task into separate threads: the same process or task is completed by a number of threads, or in other words, more than one thread is used to complete the task. In single-threading systems only one task can be worked on at a time, and multithreading was invented to solve this problem; it allows numerous tasks to be in progress simultaneously.

There exist three established multithreading models classifying these relationships:

1. Many to one multithreading model



2. One to one multithreading model
3. Many to Many multithreading model

Many to one multithreading model

Many user-level threads are mapped to one kernel thread in the many-to-one approach. A
context-switching environment that is effective and simple enough to be built on a kernel
without thread support is made possible by this kind of interaction.

The drawback of this architecture is that it is unable to benefit from the hardware acceleration provided by multithreaded processes or multiprocessor systems, because only one kernel-level thread is scheduled at any given moment. All thread management is carried out in user space, and if one thread performs a blocking call, the entire process is blocked.

One-to-one multithreading model



A single user-level thread is mapped to a single kernel-level thread in the one-to-one
architecture. This kind of connection makes it possible to execute several threads
simultaneously. This benefit does have a downside, though. Every time a new user thread is
generated, a corresponding kernel thread must also be created, adding overhead and
potentially hurting the parent process's performance. The Linux and Windows operating
systems make an effort to address this issue by restricting the expansion of the thread count.

Many to Many multithreading model



Both user-level and kernel-level threads are present in this type of paradigm. The quantity of
kernel threads generated varies depending on the program. The developer may not construct
the same number of threads at both levels. A middle ground between the other two
approaches is the many-to-many model. In this architecture, the kernel has the option to
schedule another thread for execution if any thread issues a blocking system call. Additionally,
complexity is reduced compared to earlier models because of the inclusion of numerous
threads. Despite allowing the creation of additional kernel threads, this architecture is unable to achieve true concurrency, because the kernel can schedule only one process at a time.

THREADING ISSUES
The threading issues are



1. The fork and exec system calls: In a multithreaded program environment, the semantics of the fork and exec system calls change. UNIX systems have two versions of the fork system call: one duplicates all threads, and the other duplicates only the thread that invoked fork. Which version to use depends entirely upon the application. Duplicating all threads is unnecessary if exec is called immediately after fork.
2. Thread cancellation is the termination of a thread before it has completed its task. Example: in a multithreaded environment, several threads may concurrently search through a database; if one thread returns the result, the remaining threads might be canceled.
3. Thread cancellation is of two types:
(a) Asynchronous cancellation: One thread immediately terminates the target thread.
(b) Deferred cancellation: The target thread periodically checks whether it should terminate,
allowing it an opportunity to terminate itself in an orderly fashion. With deferred cancellation,
one thread indicates that a target thread is to be canceled, but cancellation occurs only after
the target thread has checked a flag to determine if it should be canceled or not.
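
A POSIX threads sketch of deferred cancellation (compile with -pthread): pthread_cancel() only marks the target thread, and the target terminates at its next cancellation point, made explicit here with pthread_testcancel():

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *searcher(void *arg)
{
    /* Deferred cancellation is the default cancel type. */
    for (;;) {
        /* ... search one chunk of the database ... */
        pthread_testcancel();   /* explicit cancellation point: exit  */
                                /* here, in an orderly fashion, if a  */
                                /* cancellation request is pending    */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, searcher, NULL);
    sleep(1);                   /* pretend another thread found it    */
    pthread_cancel(t);          /* mark t for (deferred) cancellation */
    pthread_join(t, NULL);      /* reap the canceled thread           */
    printf("search thread canceled\n");
    return 0;
}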

BENEFITS OF THREADS
Following are some reasons why threads are used in designing operating systems:
1. A process with multiple threads makes a great server, for example a printer server.
2. Because threads can share common data, they do not need to use inter-process communication.
3. Because of their very nature, threads can take advantage of multiprocessors.
4. Threads are cheap to create, since they only need a stack and storage for registers.
5. Threads do not need new address space, global data, program code, or operating system resources.

KEY TAKEAWAY



➢ Threads, in the context of computer processes, function as individual sequences of tasks
within a larger process, analogous to the various functionalities of components in the
human body.
➢ They allow for concurrent execution of multiple features within a single process,
enhancing program performance.
➢ Each thread possesses its program counter, register set, and stack, while sharing the
same code, data, and files with other threads in the same process.
➢ Threads are often referred to as lightweight processes due to their resource-sharing
nature. They provide a means for programs to split into concurrently running tasks,
mimicking parallelism, and offering benefits in terms of program responsiveness and
resource efficiency.
➢ There are two main types of threads—user-level and kernel-level—each with its
advantages and disadvantages.
➢ Threading models, such as many-to-one, one-to-one, and many-to-many, provide
different approaches to managing user and kernel-level threads.
➢ Despite challenges like thread cancellation and system call modifications, the benefits of
threads, including low operational costs, quick creation and termination, and efficient
use of shared resources, make them essential for optimizing system performance in
operating systems.



PROCESS SCHEDULING

SUB LESSON 4.1



TYPES OF SCHEDULER

WHAT IS CPU SCHEDULING?


CPU scheduling is the basis of multiprogramming. By switching the CPU among several
processes, the operating system can make the computer more productive.

The objective of multiprogramming is to have some process running at all times to maximize
CPU utilization. The objective of a time-sharing system is to switch the CPU among processes
so frequently that users can interact with each program while it is running. For a uni-processor
system, there will never be more than one running process. If there are more processes, the
rest will have to wait until the CPU is free and can be rescheduled. As processes enter the
system, they are put into a job queue. This queue consists of all processes in the system. The
processes that are residing in the main memory and are ready and waiting to execute are kept
on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue
header will contain pointers to the first and last PCBs in the list. Each PCB has a pointer field
that points to the next process in the ready queue. There are also other queues in the system.
When a process is allocated the CPU, it executes for a while and eventually quits, is
interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O
request. In the case of an I/O request, such a request may be to a dedicated tape drive, or to a
shared device, such as a disk. Since there are many processes in the system, the disk may be
busy with the I/O request of some other process. The process therefore may have to wait for
the disk. The list of processes waiting for a particular I/O device is called a device queue. Each
device has its own device queue.
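
The ready-queue organization described above (a linked list of PCBs with head and tail
pointers) can be sketched in C. This is an illustrative fragment written for these notes, not
code from any real kernel; the pcb structure is reduced to just the fields needed for queueing.

#include <stdio.h>
#include <stdlib.h>

/* A much-simplified PCB; real PCBs also hold registers, state, etc. */
struct pcb {
    int pid;
    struct pcb *next;          /* pointer field linking PCBs in a queue */
};

/* Ready-queue header: pointers to the first and last PCBs in the list. */
struct queue {
    struct pcb *head, *tail;
};

static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p;   /* append at the tail */
    else         q->head = p;         /* queue was empty    */
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;          /* remove from the head */
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct queue ready = { NULL, NULL };

    for (int pid = 1; pid <= 3; pid++) {          /* three processes arrive */
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);
    }

    struct pcb *p;
    while ((p = dequeue(&ready)) != NULL) {       /* FIFO order: 1, 2, 3 */
        printf("dispatching PID %d\n", p->pid);
        free(p);
    }
    return 0;
}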

Goals for Scheduling



A scheduling strategy should be judged against the following criteria:

1. Utilization/Efficiency: keep the CPU busy 100% of the time with useful work

2. Throughput: maximize the number of jobs processed per hour.

3. Turnaround time: from the time of submission to the time of completion, minimize the time
batch users must wait for output

4. Waiting time: Sum of times spent in a ready queue - Minimize this

5. Response Time: time from submission till the first response is produced, minimizing response
time for interactive users

6. Fairness: make sure each process gets a fair share of the CPU

Process Scheduling Queues:

Process scheduling queues maintain a distinct queue for each process state, holding the PCBs
of the processes in that state; processes in the same stage of execution share the same
queue. Whenever the status of a process changes, its PCB must therefore be unlinked from its
current queue and moved to the queue of the new state. Operating system queues come in
three varieties:

1. Job queue - It keeps track of all the processes in the system.

2. Ready queue - It holds the processes that reside in main memory and are ready and
waiting to execute.

3. Device queues - A device queue holds the processes that are blocked because an I/O
device is unavailable; each device has its own queue.



In the above-given diagram:
1. Every new process is first put in the ready queue, where it waits until it is selected
for execution (dispatched).
2. One of the processes is allocated the CPU and executes.
3. The running process may issue an I/O request,
4. in which case it is placed in an I/O (device) queue.
5. The process may create a new subprocess
6. and then wait for the subprocess's termination.
7. The process may be removed forcibly from the CPU as a result of an interrupt; once
the interrupt is handled, the process is put back in the ready queue.

Non-preemptive vs. Preemptive Scheduling



Non-preemptive
Non-preemptive algorithms are designed so that once a process enters the running state (is
allocated the processor), it is not removed from the processor until it has completed its service
time (or it explicitly yields the processor).
context_switch() is called only when the process terminates or blocks.

Preemptive
Preemptive algorithms are driven by the notion of prioritized computation. The process with
the highest priority should always be the one currently using the processor. If a process is
currently using the processor and a new process with a higher priority enters the ready list,
the process on the processor should be removed and returned to the ready list until it is once
again the highest-priority process in the system.
context_switch() may be called even while a process is running; this is usually done via a timer
interrupt.

There are mainly three types of Process Schedulers:


1. Long Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler

Long Term Scheduler


The long-term scheduler (job scheduler) brings new processes into the "Ready" state. It
regulates the degree of multiprogramming, that is, the number of processes that are active
and available at any given moment. The long-term scheduler must carefully choose a mix of
CPU-bound and I/O-bound processes: CPU-bound jobs spend most of their time on the CPU,
whereas I/O-bound jobs spend a large portion of their time on input and output operations.
By keeping a balance between the two, the job scheduler improves efficiency. Long-term
schedulers typically operate in batch processing systems.

Short Term Scheduler



The short-term scheduler (CPU scheduler) selects one process from the ready state and
schedules it into the running state. Recall that the short-term scheduler merely chooses the
process to schedule; it does not load the process itself. All of the CPU scheduling algorithms
are applied at this point. The CPU scheduler must also ensure that processes with long burst
times do not cause starvation. The dispatcher is in charge of actually running the task that
the short-term scheduler has chosen on the CPU (moving it from the ready to the running
state); only the dispatcher performs the context switch.
What a dispatcher does is:
Switching context.
Switching to user mode.
Jumping to the proper location in the newly loaded program.
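
Context switching itself can be demonstrated in user space with the POSIX ucontext API. The
toy "dispatcher" below is a sketch of the mechanism only (saving one execution context and
loading another), assuming a POSIX system; a real dispatcher runs in the kernel and also
switches address spaces and privilege modes.

#define _XOPEN_SOURCE 600          /* for the ucontext API */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024]; /* private stack for the task */

static void task(void)
{
    for (int i = 0; i < 3; i++) {
        printf("task: running in slice %d\n", i);
        swapcontext(&task_ctx, &main_ctx);  /* "preempted": back to dispatcher */
    }
}

int main(void)
{
    /* Prepare a context that runs task() on its own stack. */
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;  /* resume here when task returns */
    makecontext(&task_ctx, task, 0);

    for (int i = 0; i < 4; i++) {
        printf("dispatcher: giving the CPU to the task\n");
        swapcontext(&main_ctx, &task_ctx);  /* save our state, load the task's */
    }
    puts("dispatcher: task finished");
    return 0;
}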

Medium Term Scheduler


It is in charge of suspending and resuming processes. It primarily performs swapping (moving
processes from main memory to disk and vice versa). Swapping may be required to improve
the process mix, or because a change in memory requirements has overcommitted the
available memory, necessitating the release of memory. The medium-term scheduler helps
preserve the ideal balance between CPU-bound and I/O-bound processes, and it reduces the
degree of multiprogramming.

KEY TAKEAWAY
➢ CPU scheduling is a fundamental aspect of multiprogramming, allowing operating
systems to enhance computer productivity by switching the CPU among multiple
processes.
➢ The primary goal is to ensure a running process at all times, maximizing CPU utilization.
➢ Various scheduling strategies aim to meet criteria such as efficiency, throughput,
turnaround time, waiting time, response time, and fairness.
➢ Processes are organized into queues, including the job queue, ready queue, and device
queues, reflecting their different states and execution stages.
➢ Process schedulers, categorized into long-term, short-term, and medium-term
schedulers, play crucial roles in managing the degree of multiprogramming, selecting
processes for execution, and controlling process transitions between main memory and disk.
➢ Scheduling algorithms, whether preemptive or non-preemptive, contribute to effective
task allocation, making the system more responsive, efficient, and capable of handling
diverse workloads.



PROCESS SCHEDULING



SUB LESSON 4.2

PRE-EMPTIVE AND NON PRE-EMPTIVE SCHEDULING


ALGORITHM

PREEMPTIVE SCHEDULING:
Preemptive algorithms are driven by the notion of prioritized computation. The process with
the highest priority should always be the one currently using the processor. If a process is
currently using the processor and a new process with a higher priority enters the ready list,
the process on the processor should be removed and returned to the ready list until it is once
again the highest-priority process in the system.

context_switch() may be called even while a process is running; this is usually done via a timer
interrupt.

When a process transitions from the running state to the ready state or from the waiting state
to the ready state, preemptive scheduling is used. The resources, mostly CPU cycles, are given
to the process for a set period of time before being removed; if the process still has CPU burst
time left, it is then put back in the ready queue. Until it has its subsequent opportunity to run,
the process remains in the ready queue.

Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (preemptive version), and
other preemptive scheduling algorithms are examples.

Here are some advantages and disadvantages of preemptive scheduling:

Advantages of Preemptive Scheduling

● Preemptive scheduling is a more reliable mechanism, since it prevents a single process
from monopolising the CPU.
● After each interruption, the choice of which task should run is reviewed.
● Every scheduling event can interrupt the ongoing task.
● The OS ensures that each running process receives a fair amount of CPU.
● CPU consumption is uniform: each running task uses the CPU in the same way.
● This scheduling technique also improves the average response time.
● Preemptive scheduling is advantageous in a multi-programming environment.

Disadvantages of Preemptive Scheduling

● Scheduling itself consumes computational resources.
● The scheduler needs extra time to pause the active task, change the context, and
dispatch the incoming task.
● If high-priority processes keep arriving, a low-priority process must wait for a long
period of time.

EXAMPLE OF PRE-EMPTIVE SCHEDULING

Consider the following three processes under Round Robin scheduling (time quantum = 2):

Process Queue    Burst time
P1               4
P2               3
P3               5



Step 1) The execution begins with process P1, which has a burst time of 4. Here, the time
quantum is 2, so every process executes for 2 units at a time. P2 and P3 are still in the
waiting queue.

Step 2) At time =2, P1 is added to the end of the Queue and P2 starts executing



Step 3) At time=4, P2 is preempted and added to the end of the queue. P3 starts executing.

Step 4) At time=6, P3 is preempted and added to the end of the queue. P1 starts executing.

Step 5) At time=8 , P1 has a burst time of 4. It has completed execution. P2 starts execution



Step 6) P2 has a burst time of 3. It has already executed for 2 units, so at time=9, P2
completes execution. Then, P3 starts execution till it completes.

Step 7) Let's calculate the average waiting time for the above example.

Wait time (initial wait + wait after preemption)
P1 = 0 + 4 = 4 ms
P2 = 2 + 4 = 6 ms
P3 = 4 + 3 = 7 ms
Average Waiting Time = (4 + 6 + 7)/3 ≈ 5.67 ms

NON-PREEMPTIVE SCHEDULING



Non-preemptive algorithms are designed so that once a process enters the running state (is
allocated the processor), it is not removed from the processor until it has completed its service
time (or it explicitly yields the processor).

context_switch() is called only when the process terminates or blocks.

Non-preemptive scheduling is employed when a process terminates or shifts from the running
state to the waiting state. Once the resources (CPU cycles) are assigned to a process under
this scheduling, the process retains the CPU until it terminates or enters a waiting state. A
process running on the CPU is not interrupted mid-execution; the CPU is allocated to another
process only after the current one has finished its CPU burst.

Shortest Job First (SJF, in its basic form), Priority (non-preemptive version), and other such
algorithms are based on non-preemptive scheduling.

Advantages of Non-preemptive Scheduling

● It imposes little scheduling workload.
● It provides high throughput.
● It is conceptually a very simple method.
● It has a reduced demand for computing resources for scheduling.

Disadvantages of Non-Preemptive Scheduling

● It can result in starvation, particularly for real-time tasks.
● Bugs can result in a frozen machine.
● It creates challenges for real-time and priority scheduling.
● Process response times can be poor.

EXAMPLE OF NON-PREEMPTIVE SCHEDULING



In non-preemptive SJF scheduling, once the CPU cycle is allocated to a process, the process
holds it till it reaches a waiting state or terminates.
Consider the following five processes each having its own unique burst time and arrival time.

Process Queue    Burst time    Arrival time
P1               6             2
P2               2             5
P3               8             1
P4               3             0
P5               4             4

Step 0) At time=0, P4 arrives and starts execution.

Step 1) At time= 1, Process P3 arrives. But, P4 still needs 2 execution units to complete. It will
continue execution.



Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will continue
execution.

Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue
execution.



Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will continue
execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is
compared. Process P2 is executed because its burst time is the lowest.

Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.



Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.

Step 10) At time = 23, process P3 will finish its execution.



Step 11) Let's calculate the average waiting time for the above example.

Wait time (start time - arrival time)

P4 = 0 - 0 = 0

P1 = 3 - 2 = 1

P2 = 9 - 5 = 4

P5 = 11 - 4 = 7

P3 = 15 - 1 = 14

Average Waiting Time = (0+1+4+7+14)/5 = 26/5 = 5.2

Key Differences Between Preemptive and Non-Preemptive Scheduling:

1. Preemptive scheduling allots the CPU to the processes for a set period of time, whereas
non-preemptive scheduling allots the CPU to the process until it finishes or enters the
waiting state.
2. When a higher priority task arrives, the running process in preemptive scheduling is
halted in the middle of it; in contrast, the running process in non-preemptive scheduling
waits until it is finished running.
3. Preemptive scheduling involves the overhead of maintaining the ready queue as well as
transferring a process from the ready state to the running state and vice versa. In
contrast, there is no cost associated with transitioning a process from its running state
to its ready state when using non-preemptive scheduling.
4. In non-preemptive scheduling, if CPU is allocated to the process having a smaller burst
time then the processes with larger burst times may have to starve. In preemptive
scheduling, if a high-priority process frequently arrives in the ready queue then the
process with low priority has to wait for a long, and it may have to starve.
5. No matter which process is currently running, preemptive scheduling achieves flexibility
by allowing the critical processes to access the CPU when they enter the ready queue.
Non-preemptive scheduling is referred to as rigid since the CPU-intensive operation is
not interrupted even if a crucial process joins the ready queue.
6. Preemptive scheduling has costs associated with it, such as guaranteeing the integrity
of shared data, unlike non-preemptive scheduling, which has no such requirement.

Parameter-by-parameter comparison of the two approaches:

● Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a
limited time. In non-preemptive scheduling, once resources are allocated to a process,
the process holds them till it completes its burst time or switches to the waiting state.
● Interrupt: A preemptively scheduled process can be interrupted in between. A
non-preemptively scheduled process cannot be interrupted until it terminates itself or
its time is up.
● Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the
ready queue, a low-priority process may starve. In non-preemptive scheduling, if a
process with a long burst time is running on the CPU, later processes with shorter
bursts may starve.
● Overhead: Preemptive scheduling has the overhead of scheduling the processes;
non-preemptive scheduling does not have such overheads.
● Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
● Cost: Preemptive scheduling has costs associated with it; non-preemptive scheduling
has no cost associated.
● CPU utilization: High in preemptive scheduling; low in non-preemptive scheduling.
● Waiting time: Less in preemptive scheduling; high in non-preemptive scheduling.
● Response time: Less in preemptive scheduling; high in non-preemptive scheduling.
● Examples: Preemptive - Round Robin and Shortest Remaining Time First;
Non-preemptive - First Come First Serve and Shortest Job First.

KEY TAKEAWAY



➢ Preemptive and non-preemptive scheduling algorithms play critical roles in managing
the allocation of CPU resources to processes, each with distinct characteristics and
implications.
➢ Preemptive scheduling allows processes to be interrupted in the middle of their
execution, ensuring that the CPU is allocated to the highest-priority task.
➢ This flexibility, while advantageous for critical processes, introduces overhead and can
potentially lead to starvation for low-priority processes.
➢ In contrast, non-preemptive scheduling allocates the CPU to a process until it completes
its burst time or enters a waiting state, eliminating the overhead associated with
frequent interruptions.
➢ However, this rigidity may lead to higher waiting times and response times, and it can
result in processes with smaller burst times being starved.
➢ The choice between preemptive and non-preemptive scheduling depends on factors
such as CPU utilization, waiting time, and response time requirements.
➢ Examples of preemptive scheduling algorithms include Round Robin and Shortest
Remaining Time First, while non-preemptive scheduling algorithms include First Come
First Serve and Shortest Job First.



PROCESS SCHEDULING



SUB LESSON 4.3

FIRST COME FIRST SERVE SCHEDULING ALGORITHM

FIRST COME FIRST SERVE SCHEDULING ALGORITHM


This is a non-preemptive scheduling algorithm. The FCFS strategy assigns priority to processes
in the order in which they request the processor. The process that requests the CPU first is
allocated the CPU first. When a process comes in, its PCB is added to the tail of the ready
queue. When the running process terminates, the process (PCB) at the head of the ready
queue is dequeued and run.

The simplest CPU scheduling method arranges processes in accordance with their arrival
times. According to the first come, first served scheduling algorithm, the CPU is assigned to
the process that demands it first. It is implemented with a FIFO queue: a process's PCB is
linked to the tail of the queue when it enters the ready queue, and the process at the front of
the queue receives the CPU when it becomes available. The running process is then removed
from the queue. FCFS is a non-preemptive scheduling algorithm.

FCFS characteristics:

● FCFS itself is non-preemptive; its preemptive counterpart is essentially Round Robin.
● Tasks are always completed on a first-come, first-served basis.
● FCFS is simple to use and implement.
● Its performance is relatively poor, and the average wait time is quite long.

Example of FCFS scheduling

Purchasing a movie ticket at the box office is a real example of the FCFS approach. In this
scheduling system, a person is handled in a queue-style fashion. The person who enters the line
first purchases the ticket, followed by the next person. This goes on until the very last person
in line buys their ticket. The FCFS CPU scheduling algorithm operates similarly.

Here is an example of five processes arriving at different times. Each process has a different
burst time.

Process    Burst time    Arrival time
P1         6             2
P2         2             5
P3         8             1
P4         3             0
P5         4             4

Using the FCFS scheduling algorithm, these processes are handled as follows.

Step 0) The process begins with P4 which has arrival time 0



Step 1) At time=1, P3 arrives. P4 is still executing. Hence, P3 is kept in a queue.

Step 2) At time= 2, P1 arrives which is kept in the queue.

Step 3) At time=3, P4 process completes its execution.



Step 4) At time=3, P3, which is first in the queue, starts execution.

Step 5) At time =5, P2 arrives, and it is kept in a queue.

Step 6) At time 11, P3 completes its execution.



Step 7) At time=11, P1 starts execution. It has a burst time of 6. It completes execution at time
interval 17

Step 8) At time=17, P5 starts execution. It has a burst time of 4. It completes execution at


time=21

Step 9) At time=21, P2 starts execution. It has a burst time of 2. It completes execution at time
interval 23



Step 10) Let's calculate the average waiting time for the above example.

Waiting time = start time - arrival time
P4 = 0 - 0 = 0 ms
P3 = 3 - 1 = 2 ms
P1 = 11 - 2 = 9 ms
P5 = 17 - 4 = 13 ms
P2 = 21 - 5 = 16 ms
Average Waiting Time = (0+2+9+13+16)/5 = 40/5 = 8 ms
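
The same arithmetic can be automated. The short C program below is a sketch written for
this example (it is not part of any operating system): it replays FCFS over the five processes
above and prints each waiting time and the average.

#include <stdio.h>

struct proc { const char *name; int burst, arrival; };

int main(void)
{
    /* The five processes of the example, sorted by arrival time. */
    struct proc p[] = {
        { "P4", 3, 0 }, { "P3", 8, 1 }, { "P1", 6, 2 },
        { "P5", 4, 4 }, { "P2", 2, 5 },
    };
    int n = 5, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        if (clock < p[i].arrival)          /* CPU idles until arrival        */
            clock = p[i].arrival;
        int wait = clock - p[i].arrival;   /* waiting = start - arrival      */
        printf("%s: starts at %2d, waits %2d ms\n", p[i].name, clock, wait);
        total_wait += wait;
        clock += p[i].burst;               /* non-preemptive: run to the end */
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}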

FCFS advantages
The benefits of FCFS include:
● It is simple to program.
● Processes are served strictly first come, first served.
● It is the simplest scheduling algorithm.

FCFS disadvantages
● Very long waits (effectively starvation) can occur because of its non-preemptive nature.
● The average wait time is comparatively high.
● Despite being easy to use, FCFS is not very efficient.
● FCFS is not the best scheduling method for a time-sharing system.
● Because FCFS uses a non-preemptive mechanism, once the CPU is allocated to a
process it is not released until the process has finished running.
● Resource usage in FCFS is poor because the resources cannot be utilised concurrently,
which results in the convoy effect.

What is Convoy Effect


The convoy effect in FCFS is a situation in which one process occupies the CPU for an
extended period of time, and other processes can obtain the CPU only after that process
completes its execution. The result is poor resource usage, which also hurts the operating
system's performance.

KEY TAKEAWAY



➢ The First Come First Serve (FCFS) scheduling algorithm, a non-preemptive method,
prioritizes processes based on their arrival times, allocating the CPU to the first process
that requests it.
➢ This simple and easy-to-implement algorithm utilizes a FIFO queue, linking a process's
Process Control Block (PCB) to the tail of the ready queue upon arrival.
➢ The process at the front of the queue is granted the CPU when available, and after
completion, it is dequeued.
➢ While FCFS supports both preemptive and non-preemptive scheduling, its performance
is criticized for extended wait times and inefficiency, particularly in time-sharing
systems.
➢ An example of FCFS in everyday life is akin to purchasing movie tickets at a box office,
where individuals are served in the order they join the line.
➢ One notable drawback of FCFS is the Convoy Effect, a situation where resource usage is
inefficient, adversely affecting system performance when one process monopolizes the
CPU for an extended duration, leading to delays for other processes in the queue.



PROCESS SCHEDULING

SUB LESSON 4.4



SHORTEST JOB FIRST AND SHORTEST REMAINING TIME FIRST
SCHEDULING ALGORITHM

The "Shortest Job First" (SJF) method selects the process with the shortest execution time for
the next execution. Both preemptive and non-preemptive versions are possible. SJF greatly
decreases the average time that other processes spend waiting to be executed.

SJF approaches often fall into one of two categories:

1. Shortest Job First Scheduling Algorithm (Non-Preemptive)

2. Shortest Remaining Time First Scheduling Algorithm (Preemptive SJF)

Characteristics of SJF Scheduling

● It assumes the time required to do each job is known.

● This algorithmic approach is useful for batch processing, where it is not crucial for jobs
to start immediately.
● By ensuring that shorter jobs are completed first, and therefore tend to have a quick
turnaround time, it increases process throughput and job output.

Completion Time: The time at which the process completes its execution.

Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

Waiting Time(W.T): Time Difference between turn around time and burst time.

Waiting Time = Turn Around Time – Burst Time



SHORTEST JOB FIRST SCHEDULING ALGORITHM (NON-PREEMPTIVE)
In non-preemptive scheduling, once the CPU cycle is allocated to a process, the process holds
it till it reaches a waiting state or terminates.

The SJF algorithm runs the processes that need the least CPU time first. Mathematically, and
judging by experience, this is the ideal scheduling algorithm with respect to waiting time. Its
practical performance comes down to overhead and response time: a system is fast when the
scheduler does not consume much CPU time itself and when interactive processes get a fast
response. The overhead caused by this algorithm, however, is huge: the scheduler must
implement some way to measure and predict the CPU usage of processes, or the user must
tell the scheduler in advance how long each job will take ("job" is a term that dates from very
early computer design, when batch job operating systems were used). In practice, therefore,
this algorithm cannot be implemented without hurting performance considerably.

Process Queue    Burst time    Arrival time
P1               6             2
P2               2             5
P3               8             1
P4               3             0
P5               4             4

Step 0) At time=0, P4 arrives and starts execution.



Step 1) At time= 1, Process P3 arrives. But, P4 still needs 2 execution units to complete. It will
continue execution.

Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will continue
execution.

Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.



Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue
execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will continue
execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is
compared. Process P2 is executed because its burst time is the lowest.



Step 7) At time=10, P2 is executing and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.



Step 10) At time = 23, process P3 will finish its execution.

Step 11) Let’s calculate the average waiting time for the above example.
Turn Around Time = Completion Time – Arrival Time

P4= 3-0=3 ms
P1= 9-2=7 ms
P2= 11-5=6 ms
P5= 15-4=11 ms
P3= 23-1=22 ms

Waiting Time = Turn Around Time – Burst Time

P4= 3-3=0 ms
P1= 7-6=1 ms
P2= 6-2=4 ms
P5= 11-4=7 ms
P3= 22-8=14 ms



Average Waiting Time = (0+1+4+7+14)/5 = 26/5 = 5.2 ms
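
As a cross-check, non-preemptive SJF can be simulated in a few lines of C. The sketch below
(illustrative only, written for these notes) repeatedly picks, among the processes that have
arrived, the one with the smallest burst time, runs it to completion, and reproduces the
5.2 ms average computed above.

#include <stdio.h>

struct proc { const char *name; int burst, arrival, done; };

int main(void)
{
    struct proc p[] = {
        { "P1", 6, 2, 0 }, { "P2", 2, 5, 0 }, { "P3", 8, 1, 0 },
        { "P4", 3, 0, 0 }, { "P5", 4, 4, 0 },
    };
    int n = 5, clock = 0, completed = 0, total_wait = 0;

    while (completed < n) {
        int best = -1;
        /* Among arrived, unfinished processes pick the shortest burst. */
        for (int i = 0; i < n; i++)
            if (!p[i].done && p[i].arrival <= clock &&
                (best < 0 || p[i].burst < p[best].burst))
                best = i;
        if (best < 0) { clock++; continue; }   /* CPU idle: advance time */

        int wait = clock - p[best].arrival;    /* waiting = start - arrival */
        total_wait += wait;
        printf("%s: starts at %2d, waits %2d ms\n", p[best].name, clock, wait);
        clock += p[best].burst;                /* run to completion */
        p[best].done = 1;
        completed++;
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}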
Advantages of SJF:

● SJF has a shorter average waiting time than the First come first serve (FCFS) algorithm.
● Typically, SJF is used for long-term scheduling.
● It is appropriate for jobs that run in batches and whose run periods are known.
● In terms of turnaround time on average, SJF is probably ideal.
Disadvantages of SJF:

● SJF may result in starvation or extremely long turnaround times for long jobs.

● The job completion time must be known in advance, which can be challenging to
predict.
● It is difficult to predict the length of an upcoming CPU burst.
● If burst-time estimates are poor, starvation still results and the average turnaround
time is not actually shortened.

SHORTEST REMAINING TIME FIRST SCHEDULING ALGORITHM (PREEMPTIVE SJF)

Shortest remaining time is a method of CPU scheduling that is a preemptive version of the
shortest job next scheduling. In this scheduling algorithm, the process with the smallest amount
of time remaining until completion is selected to execute. Since the currently executing process
is the one with the shortest amount of time remaining by definition, and since that time should
only reduce as execution progresses, processes will always run until they are complete or a new
process is added that requires a smaller amount of time. The shortest remaining time is
advantageous because short processes are handled very quickly. The system also requires very
little overhead since it only makes a decision when a process completes or a new process is
added, and when a new process is added the algorithm only needs to compare the currently
executing process with the new process, ignoring all other processes currently waiting to
execute. However, it has the potential for process starvation for processes which will require a
long time to complete if short processes are continually added, though this threat can be
minimal when process times follow a heavy-tailed distribution. Like shortest job first
scheduling, shortest remaining time scheduling is rarely used outside of specialized
environments because it requires accurate estimations of the runtime of all processes that are
waiting to execute.

In preemptive SJF scheduling, jobs are inserted into the ready queue as they arrive. The
process with the least burst time starts running. If a process with a shorter burst time enters
the system, the currently running process is preempted, and the shorter job is given the CPU.

Process Queue    Burst time    Arrival time
P1               6             2
P2               2             5
P3               8             1
P4               3             0
P5               4             4

Step 0) At time=0, P4 arrives and starts execution.


Step 1) At time= 1, Process P3 arrives. But, P4 has a shorter burst time. It will continue
execution

Step 2) At time = 2, process P1 arrives with burst time = 6. The burst time is more than that of
P4. Hence, P4 will continue execution.



Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is lower.

Step 4) At time = 4, process P5 will arrive. The burst time of P3, P5, and P1 is compared. Process
P5 is executed because its burst time is lowest. Process P1 is preempted.



Process Queue    Burst time                 Arrival time
P1               5 out of 6 is remaining    2
P2               2                          5
P3               8                          1
P4               3                          0
P5               4                          4

Step 5) At time = 5, process P2 will arrive. The burst time of P1, P2, P3, and P5 is compared.
Process P2 is executed because its burst time is least. Process P5 is preempted.

Process Queue    Burst time                 Arrival time
P1               5 out of 6 is remaining    2
P2               2                          5
P3               8                          1
P4               3                          0
P5               3 out of 4 is remaining    4

Step 6) At time =6, P2 is executing.



Step 7) At time=7, P2 finishes its execution. The remaining burst times of P1, P3, and P5 are
compared. Process P5 is executed because its remaining time is the lowest.


Step 8) At time =10, P5 will finish its execution. The burst time of P1 and P3 is compared.
Process P1 is executed because its burst time is less.



Step 9) At time =15, P1 finishes its execution. P3 is the only process left. It will start execution.

Step 10) At time =23, P3 finishes its execution.

Step 11) Let’s calculate the average waiting time for above example.
Turn Around Time = Completion Time – Arrival Time

P4= 3-0=3 ms
P1= 15-2=13 ms
P2=7-5 = 2 ms
P5= 10-4 = 6 ms
P3= 23-1 = 22 ms



Waiting Time = Turn Around Time – Burst Time

P4 = 3 - 3 = 0 ms
P1 = 13 - 6 = 7 ms
P2 = 2 - 2 = 0 ms
P5 = 6 - 4 = 2 ms
P3 = 22 - 8 = 14 ms
Average Waiting Time = (0+7+0+2+14)/5 = 23/5 = 4.6 ms
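
Preemptive SJF is naturally expressed as a tick-by-tick simulation: at every time unit the
scheduler re-selects the arrived process with the least remaining time. The C sketch below,
written for this example (not part of any real scheduler), reproduces the 4.6 ms average.

#include <stdio.h>

struct proc { const char *name; int burst, arrival, remaining; };

int main(void)
{
    struct proc p[] = {
        { "P1", 6, 2, 6 }, { "P2", 2, 5, 2 }, { "P3", 8, 1, 8 },
        { "P4", 3, 0, 3 }, { "P5", 4, 4, 4 },
    };
    int n = 5, completed = 0, total_wait = 0;

    for (int t = 0; completed < n; t++) {
        int best = -1;
        /* Re-select every tick: arrived process with least remaining time. */
        for (int i = 0; i < n; i++)
            if (p[i].remaining > 0 && p[i].arrival <= t &&
                (best < 0 || p[i].remaining < p[best].remaining))
                best = i;
        if (best < 0) continue;              /* CPU idle this tick */
        if (--p[best].remaining == 0) {      /* run one tick; done? */
            completed++;
            int turnaround = (t + 1) - p[best].arrival;
            int wait = turnaround - p[best].burst;
            total_wait += wait;
            printf("%s: completes at %2d, waited %2d ms\n",
                   p[best].name, t + 1, wait);
        }
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}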

KEY TAKEAWAY

➢ The Shortest Job First (SJF) scheduling algorithm, available in preemptive and non-
preemptive versions, prioritizes processes based on their execution times, aiming to
execute the shortest job first for optimal efficiency.
➢ In the non-preemptive variant, the CPU cycle is allocated to a process until it reaches a
waiting state or completes, while the preemptive version constantly selects the process
with the smallest remaining time.
➢ The non-preemptive SJF, though theoretically ideal, faces challenges in implementation
due to the need for accurate runtime predictions.
➢ An illustrative example showcases the non-preemptive SJF in action, considering
different processes with distinct burst times. The advantages of SJF include reduced
average waiting times, making it suitable for long-term scheduling and jobs with known
run periods.
➢ However, SJF is not without its drawbacks, such as the potential for process starvation
and the requirement for accurate runtime predictions.
➢ The preemptive SJF variant, also known as Shortest Remaining Time First, dynamically
selects the process with the smallest remaining time, efficiently handling short
processes but facing potential starvation for longer processes.
➢ The accompanying example demonstrates preemptive SJF scheduling, emphasizing its
advantages and potential pitfalls.



PROCESS SCHEDULING



SUB LESSON 4.5

PRIORITY SCHEDULING ALGORITHM

PRIORITY SCHEDULING ALGORITHM


A priority is associated with each process and the CPU is allocated to the process with the
highest priority. Priority can be defined either internally or externally. Internally defined
priorities use some measurable quantities to compute the priority of a process.

Example: Time limits, memory requirements, no. of open files, the ratio of average I/O burst
time to average CPU burst time, etc. External priorities are set by criteria that are external to
the OS, such as the importance of the process, the type and amount of funds being paid for
computer use, the department sponsoring work and other often political factors. Priority
scheduling can be preemptive or non-preemptive. A preemptive priority scheduling algorithm
will preempt the CPU if the priority of the newly arrived process is higher than the priority of
the currently running process. A non-preemptive priority scheduling algorithm will simply put
the new process at the head of the ready queue. A major problem with priority scheduling
algorithms is indefinite blocking or starvation. This can be solved by a technique called aging
wherein the OS gradually increases the priority of a long-waiting process.

Determining the order in which processes and activities will access the processor and how
much processing time each will receive is a key responsibility of the operating system. Process
scheduling refers to this operating system feature.

The operating system's tasks under process scheduling include:

● Monitoring the state of the processes
● Allocating the processor to processes
● Reallocating the processor among processes

The operating system stores each of these processes in a process table and assigns each one a
unique ID (PID) in order to keep track of them all. Additionally, the operating system utilises a
Process Control Block (PCB) to monitor each process's current state. When the process's
status changes, the information is updated in the control block.

But did you ever wonder what the Operating System's criteria are for scheduling these
processes? What process needs to be given CPU resources first? The Operating System makes
these selections in a variety of ways, including by executing the process that requires the least
amount of time first or by executing the processes in the order in which they requested access
to the processor, among other options. The process's priority is one of these crucial factors.

In priority scheduling, processes are executed according to their priority: higher-priority tasks
or processes are executed first. Naturally, you might be curious about how processes are
prioritized.

Priority of processes depends on some factors such as:

● Time limit
● Memory requirements of the process
● Ratio of average I/O to average CPU burst time

There may be more criteria used to determine the priority of a process or task. The scheduler
assigns the processes this priority.

There are two types of priority scheduling algorithms in OS:

1. Non-Preemptive Priority Scheduling


2. Preemptive Priority Scheduling

Characteristics of Priority Scheduling

● It is an algorithm for scheduling tasks that prioritizes the incoming processes before
scheduling them.

● Operating systems employ it to carry out batch operations.



● Priority scheduling executes processes on a first-come, first-served basis if there are two
jobs or processes in the ready state (ready for execution) that have the same priority.
Every job has a priority number associated with it that denotes its level of importance.

● The process has higher priority if the integer value of the priority number is lower. Low
numbers indicate high importance.

Completion Time: The time at which the process completes its execution.

Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

Waiting Time(W.T): Time Difference between turn around time and burst time.

Waiting Time = Turn Around Time – Burst Time

NON-PREEMPTIVE PRIORITY SCHEDULING


This kind of scheduling involves:

● The process that is currently running is not interrupted, even if a higher-priority
process arrives for execution while it is still in progress.
● The newly arrived high-priority process is placed next in line for execution, since it has
a higher priority than the other processes waiting in the ready queue.
● All other processes continue to wait in the ready queue. After the current process has
finished running, the CPU is allocated to the highest-priority waiting process.

PREEMPTIVE PRIORITY SCHEDULING


In preemptive priority scheduling, tasks are assigned to the CPU according to their priorities.
If a higher-priority task arrives while a lower-priority task is still in progress, the
higher-priority task runs first; the lower-priority task is paused and continues once the
higher-priority task has finished its execution.



EXAMPLE OF PRIORITY SCHEDULING

Process    Priority    Burst time    Arrival time
P1         1           4             0
P2         2           3             0
P3         1           7             6
P4         3           4             11
P5         2           2             12

Step 0) At time=0, Process P1 and P2 arrive. P1 has a higher priority than P2. The execution
begins with process P1, which has burst time 4.

Step 1) At time=1, no new process arrive. Execution continues with P1.



Step 2) At time 2, no new process arrives, so you can continue with P1. P2 is in the waiting
queue.

Step 3) At time 3, no new process arrives so you can continue with P1. P2 process still in the
waiting queue.

Step 4) At time 4, P1 has finished its execution. P2 starts execution.



Step 5) At time= 5, no new process arrives, so we continue with P2.

Step 6) At time=6, P3 arrives. P3 is at higher priority (1) compared to P2 having priority (2). P2 is
preempted, and P3 begins its execution.

Process    Priority    Burst time            Arrival time
P1         1           4                     0
P2         2           1 out of 3 pending    0
P3         1           7                     6
P4         3           4                     11
P5         2           2                     12



Step 7) At time 7, no new process arrives, so we continue with P3. P2 is in the waiting queue.

Step 8) At time= 8, no new process arrives, so we can continue with P3.

Step 9) At time= 9, no new process comes so we can continue with P3.



Step 10) At time interval 10, no new process comes, so we continue with P3

Step 11) At time=11, P4 arrives with priority 3. P3 has a higher priority, so it continues its
execution.

Process    Priority    Burst time            Arrival time
P1         1           4                     0
P2         2           1 out of 3 pending    0
P3         1           2 out of 7 pending    6
P4         3           4                     11
P5         2           2                     12



Step 12) At time=12, P5 arrives. P3 has higher priority, so it continues execution.

Step 13) At time=13, P3 completes execution. We have P2,P4,P5 in ready queue. P2 and P5
have equal priority. Arrival time of P2 is before P5. So P2 starts execution.

Process    Priority    Burst time            Arrival time
P1         1           4                     0
P2         2           1 out of 3 pending    0
P3         1           7                     6
P4         3           4                     11
P5         2           2                     12



Step 14) At time =14, the P2 process has finished its execution. P4 and P5 are in the waiting
state. P5 has the highest priority and starts execution.

Step 15) At time =15, P5 continues execution

Step 16) At time= 16, P5 is finished with its execution. P4 is the only process left. It starts
execution.



Step 17) At time=20, P4 has completed execution and no process is left.

Step 18) Let's calculate the average waiting time for the above example.
Waiting Time = start time - arrival time + wait time for the next burst
P1 = 0 - 0 = 0 ms
P2 = (4 - 0) + 7 = 11 ms
P3 = 6 - 6 = 0 ms
P4 = 16 - 11 = 5 ms
P5 = 14 - 12 = 2 ms
Average Waiting Time = (0+11+0+5+2)/5 = 18/5 = 3.6 ms
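
The walkthrough above can be verified with a small tick-based C simulation of preemptive
priority scheduling (lower number = higher priority). The code is an illustrative sketch written
for this example only; ties are broken by array order, which for this data matches arrival
order (P2 before P5).

#include <stdio.h>

struct proc { const char *name; int prio, burst, arrival, remaining; };

int main(void)
{
    struct proc p[] = {
        { "P1", 1, 4,  0, 4 }, { "P2", 2, 3,  0, 3 },
        { "P3", 1, 7,  6, 7 }, { "P4", 3, 4, 11, 4 },
        { "P5", 2, 2, 12, 2 },
    };
    int n = 5, completed = 0, total_wait = 0;

    for (int t = 0; completed < n; t++) {
        int best = -1;
        /* Choose the arrived, unfinished process with the best (lowest)
         * priority number; earlier array position breaks ties. */
        for (int i = 0; i < n; i++)
            if (p[i].remaining > 0 && p[i].arrival <= t &&
                (best < 0 || p[i].prio < p[best].prio))
                best = i;
        if (best < 0) continue;              /* CPU idle this tick */
        if (--p[best].remaining == 0) {      /* run one tick; done? */
            completed++;
            int wait = (t + 1) - p[best].arrival - p[best].burst;
            total_wait += wait;
            printf("%s: completes at %2d, waited %2d ms\n",
                   p[best].name, t + 1, wait);
        }
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}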
Advantages of Priority Scheduling Algorithm
● A high-priority process does not have to wait for its turn behind a currently running
lower-priority process.
● The relative importance of processes can be specified explicitly.
● It is beneficial for applications with varying resource and time requirements.

Disadvantages of Priority Scheduling Algorithm



● Because high-priority processes are always run first, low-priority processes can starve.
Starvation is a phenomenon in which a process is perpetually delayed because the
resources it needs are never allotted to it, since other processes are always executed
first.
● Since the waiting low-priority processes are kept in RAM, they will all be lost if the
system crashes at any point.

KEY TAKEAWAY

➢ Priority Scheduling Algorithm in operating systems assigns a priority to each process and
allocates the CPU to the process with the highest priority. Priorities can be determined
by factors such as time limits, memory requirements, and the ratio of average I/O burst
time to average CPU burst time.
➢ The algorithm can be either preemptive, where the CPU is preempted if a higher-priority
process arrives, or non-preemptive, where the new process is placed at the head of the
ready queue.
➢ Priority scheduling aims to optimize resource usage based on task importance.
However, a major challenge is the potential for indefinite blocking or starvation of low-
priority processes, which can be addressed using aging techniques.
➢ The algorithm's characteristics, both preemptive and non-preemptive, are explored,
with an example illustrating their application.
➢ Priority scheduling offers advantages such as efficient execution of high-priority tasks
but also comes with drawbacks like the possibility of starvation for low-priority
processes.
➢ The average waiting time is calculated for an illustrative example, showcasing the
algorithm's practical implications in process execution.



PROCESS SCHEDULING

SUB LESSON 4.6



ROUND ROBIN SCHEDULING ALGORITHM

ROUND ROBIN SCHEDULING ALGORITHM


Round-robin scheduling is the simplest way of scheduling. All processes form a circular queue,
and the scheduler gives control to each process in turn. It is very easy to implement and
causes almost no overhead compared with other algorithms, but it provides no way to give a
faster response to the processes that most need it. Plain round robin is therefore not ideal for
general-purpose operating systems; it is most useful for batch-processing operating systems,
in which all jobs have the same priority and response time is of minor or no importance. This
notion of priority leads us to the next way of scheduling.



With the help of the CPU scheduling technique known as Round Robin, each process is cyclically
given a defined time slot. It is essentially the First come First Serve CPU Scheduling algorithm in
preemptive mode.

The Round Robin CPU algorithm is closely associated with the time-sharing method.

Time quantum refers to the amount of time that a process or job is permitted to operate when
using a preemptive technique.

Every process or job in the ready queue is given the CPU for the duration of one time
quantum; if the process finishes running during that time, it terminates; otherwise, it returns
to the end of the ready queue and waits for its next turn to run.

Characteristics of Round Robin Scheduling

● Round robin is a preemptive algorithm.

● The CPU is shifted to the next process after a fixed interval of time, called the time
quantum (time slice).

● The process that is preempted is added to the end of the queue.

● Round robin is a hybrid, clock-driven model.

● The time slice should be the minimum that lets a task make useful progress; its value
differs from OS to OS.

● It is a real-time algorithm in the sense that it responds to events within a specific time
limit.

● Round robin is one of the oldest, fairest, and simplest algorithms.

● It is a widely used scheduling method in traditional operating systems.



Completion Time: The time at which the process completes its execution.

Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

Waiting Time(W.T): Time Difference between turn around time and burst time.

Waiting Time = Turn Around Time – Burst Time

EXAMPLE OF ROUND ROBIN SCHEDULING ALGORITHM

Process Queue    Burst time
P1               4
P2               3
P3               5



Step 1) The execution begins with process P1, which has a burst time of 4. Here, the time
quantum is 2, so every process executes for 2 units at a time. P2 and P3 are still in the
waiting queue.

Step 2) At time =2, P1 is added to the end of the Queue and P2 starts executing

Step 3) At time=4, P2 is preempted and added to the end of the queue. P3 starts executing.



Step 4) At time=6, P3 is preempted and added to the end of the queue. P1 starts executing.

Step 5) At time=8 , P1 has a burst time of 4. It has completed execution. P2 starts execution



Step 6) P2 has a burst time of 3. It has already executed for 2 units, so at time=9, P2
completes execution. Then, P3 starts execution till it completes.

Step 7) Let's calculate the average waiting time for the above example.
Wait time (initial wait + wait after preemption)
P1 = 0 + 4 = 4 ms
P2 = 2 + 4 = 6 ms
P3 = 4 + 3 = 7 ms
Average Waiting Time = (4 + 6 + 7)/3 ≈ 5.67 ms
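
Round Robin with a time quantum of 2 can be simulated with a FIFO queue of process
indices. The C sketch below was written for this example (all processes are assumed to arrive
at time 0; it is not part of any real scheduler) and reproduces the waiting times above.

#include <stdio.h>

struct proc { const char *name; int burst, remaining; };

int main(void)
{
    struct proc p[] = { { "P1", 4, 4 }, { "P2", 3, 3 }, { "P3", 5, 5 } };
    int n = 3, quantum = 2, clock = 0;

    /* Simple FIFO of process indices; all processes arrive at time 0. */
    int queue[16], head = 0, tail = 0;
    for (int i = 0; i < n; i++) queue[tail++] = i;

    while (head < tail) {
        int i = queue[head++];
        int slice = p[i].remaining < quantum ? p[i].remaining : quantum;
        clock += slice;                    /* run for one time slice       */
        p[i].remaining -= slice;
        if (p[i].remaining > 0)
            queue[tail++] = i;             /* preempted: back of the line  */
        else                               /* waiting = completion - burst */
            printf("%s: completes at %2d, waited %d ms\n",
                   p[i].name, clock, clock - p[i].burst);
    }
    return 0;
}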

Advantage of Round-robin Scheduling


Here are some advantages and disadvantages of round-robin scheduling:



● It is not affected by the convoy effect or starvation.
● A fair amount of CPU time is allotted to each job.
● It handles every process without regard to priority.
● If you know the total number of processes in the run queue, you can estimate the
worst-case response time for a process.
● This scheduling approach does not depend on burst time, so it is simple to integrate
into a system.
● Once a process has run for its time quantum, it is preempted and another process runs
for the next time slice.
● It enables the OS to save the states of preempted processes using the
context-switching approach.
● It performs well in terms of average response time.

Disadvantages of Round-robin Scheduling


The following are the disadvantages/cons of round-robin scheduling:
● A very small time slice results in decreased CPU output.
● This approach spends more time on context switching.
● Its performance depends heavily on the time quantum.
● Processes cannot be assigned priorities.
● Round-robin scheduling does not give more significant jobs a higher priority.
● It offers little insight into which job will complete when.
● The system's context-switching overhead increases as the time quantum decreases.
● Determining the correct time quantum is a very challenging issue.

KEY TAKEAWAY



➢ Round Robin Scheduling Algorithm is a simple and widely used CPU scheduling
technique where processes are arranged in a circular queue, and each process is given a
fixed time slot for execution.
➢ It operates in a preemptive manner, with the CPU shifting to the next process after a
specified time quantum.
➢ This algorithm is particularly suitable for batch-processing Operating Systems where all
jobs have the same priority and response time is of minimal importance. Round Robin is
known for its fairness, providing each process with an equal share of CPU time.
➢ Despite its simplicity and fairness, there are challenges associated with determining the
optimal time quantum, and low slicing times can impact CPU efficiency.
➢ Context switching overhead increases with shorter time quantum, making it essential to
strike a balance.
➢ The algorithm's advantages include avoiding the convoy effect and providing a fair
allocation of CPU time, but disadvantages include the challenge of setting an
appropriate time quantum and potential reduction in CPU output with low slicing times.
➢ Overall, Round Robin Scheduling is a well-established and widely used method with
both strengths and limitations.



PROCESS SCHEDULING

SUB LESSON 4.7



MULTILEVEL QUEUE AND MULTILEVEL FEEDBACK QUEUE
SCHEDULING

MULTILEVEL QUEUE SCHEDULING ALGORITHM

Each scheduling algorithm suits a different kind of process, and in a general system some
processes require priority-based scheduling. Some processes are interactive and must remain
responsive, while others are background processes whose execution can be delayed. The
scheduling algorithms used between queues and within queues may differ from system to
system; a round-robin method with various time quanta is typically utilized within queues.
Multilevel queue scheduling is designed for circumstances where processes can be readily
separated into groups. For example, foreground (interactive) processes and background
(batch) processes have different response-time requirements and resource needs, and
therefore require different scheduling algorithms; foreground processes take priority over
background processes. Under the multilevel queue scheduling technique, the ready queue is
partitioned into several separate queues. Processes are assigned to one queue based on
properties such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm: some queues are used for foreground processes and others for
background processes. The foreground queue may be scheduled using a round-robin method,
while the background queue may be scheduled using an FCFS strategy.

EXAMPLE

Let's take an example of a multilevel queue-scheduling algorithm with five queues to


understand how this scheduling works:



1. System process
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Every queue has absolute priority over the lower-priority queues. No process may execute
until all higher-priority queues are empty. In the above instance, no batch or student process
may execute until the queues for system, interactive, and interactive editing processes are
empty. If an interactive editing process enters the ready queue while a batch process is
underway, the batch process is preempted.

The processes used in the above example are described below:

System Process

The OS has its own processes to execute, which are referred to as system processes.

Interactive Process

An interactive process is one that involves ongoing user interaction and therefore needs quick
responses.



Batch Process

Batch processing is an operating system feature that collects programs and data into a batch
before processing starts.

Student Process

The system process is always given the highest priority, whereas the student processes are
always given the lowest.

EXAMPLE PROBLEM

Let's take an example of a multilevel queue-scheduling (MQS) algorithm that shows how
multilevel queue scheduling works. Consider the four processes listed in the table below. The
queue number denotes the queue to which the process is assigned.

Process    Arrival Time    CPU Burst Time    Queue Number
P1         0               4                 1
P2         0               3                 1
P3         0               8                 2
P4         10              5                 1

Queue 1 has a higher priority than queue 2. Round Robin is used in queue 1 (Time Quantum =
2), while FCFS is used in queue 2.



Working:

1. Both queues hold processes at the start. Queue 1 (P1, P2) therefore runs first (because
of its greater priority) in a round-robin way and finishes after 7 units.
2. The process in queue 2 (process P3) then starts running (since there is no process left
in queue 1). While it is executing, P4 enters queue 1 and preempts P3; after P4
completes, P3 takes the CPU back and finishes its execution.
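
A tick-level C sketch of this two-queue example follows. It assumes queue 1 runs round robin
with quantum 2 and has absolute, preemptive priority over queue 2, which runs FCFS (with
only one queue-2 process, re-selecting it each slice is equivalent to FCFS). The program is
illustrative only; it prints the completion times P1 at 6, P2 at 7, P4 at 15, and P3 at 20.

#include <stdio.h>

struct proc { const char *name; int arrival, burst, queue, remaining; };

int main(void)
{
    struct proc p[] = {
        { "P1",  0, 4, 1, 4 }, { "P2",  0, 3, 1, 3 },
        { "P3",  0, 8, 2, 8 }, { "P4", 10, 5, 1, 5 },
    };
    int n = 4, quantum = 2, completed = 0, rr = 0, t = 0;

    while (completed < n) {
        int pick = -1;
        /* Queue 1 (round robin) has absolute priority over queue 2 (FCFS). */
        for (int k = 0; k < n && pick < 0; k++) {
            int i = (rr + k) % n;                /* rotate for RR fairness */
            if (p[i].queue == 1 && p[i].remaining > 0 && p[i].arrival <= t)
                pick = i;
        }
        for (int i = 0; i < n && pick < 0; i++)  /* otherwise: FCFS queue 2 */
            if (p[i].queue == 2 && p[i].remaining > 0 && p[i].arrival <= t)
                pick = i;
        if (pick < 0) { t++; continue; }         /* CPU idle */

        /* Run for at most one quantum; a queue-2 process is also preempted
         * the moment a queue-1 process arrives. */
        for (int s = 0; s < quantum && p[pick].remaining > 0; s++) {
            t++;
            p[pick].remaining--;
            if (p[pick].queue == 2) {
                int preempt = 0;
                for (int i = 0; i < n; i++)
                    if (p[i].queue == 1 && p[i].remaining > 0 &&
                        p[i].arrival <= t)
                        preempt = 1;
                if (preempt) break;
            }
        }
        rr = (pick + 1) % n;                     /* advance the RR pointer */
        if (p[pick].remaining == 0) {
            completed++;
            printf("%s completes at time %2d\n", p[pick].name, t);
        }
    }
    return 0;
}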

Advantages

1. You can use multilevel queue scheduling to apply different scheduling methods to
distinct processes.
2. It will have low overhead in terms of scheduling.

Disadvantages

1. There is a risk of starvation for lower-priority processes.


2. It is rigid in nature

MULTILEVEL QUEUE FEEDBACK SCHEDULING ALGORITHM



In this CPU scheduling scheme, a process is allowed to move between queues. If a process
uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound
and interactive processes in the higher-priority queues. Similarly, a process that waits too long
in a lower-priority queue may be moved to a higher-priority queue.

In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue
upon entry into the system. Processes do not move between queues. This setup has the
advantage of low scheduling overhead, but the disadvantage of being inflexible.

Multilevel feedback queue scheduling, however, allows a process to move between queues.
The idea is to separate processes with different CPU-burst characteristics. If a process uses too
much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too
long in a lower-priority queue may be moved to a higher-priority queue. This form of aging
prevents starvation.

In general, a multilevel feedback queue scheduler is defined by the following parameters:

● The number of queues.

● The scheduling algorithm for each queue.

● The method used to determine when to upgrade a process to a higher-priority queue.

● The method used to determine when to demote a process to a lower-priority queue.

● The method used to determine which queue a process will enter when that process
needs service.

The definition of a multilevel feedback queue scheduler makes it the most general CPU-
scheduling algorithm. It can be configured to match a specific system under design.
Unfortunately, it also requires some means of selecting values for all the parameters to define
the best scheduler. Although a multilevel feedback queue is the most general scheme, it is also
the most complex.



Figure 4.7.4 Multilevel feedback queue scheduling (figure not reproduced here)

Explanation:

Suppose that queues 1 and 2 follow round robin with time quanta of 8 and 16 units
respectively, and queue 3 follows FCFS. One implementation of multilevel feedback queue
scheduling is as follows:

1. When a process starts executing, it first enters queue 1.

2. In queue 1, the process executes for up to 8 units. If it completes within these 8 units, or
gives up the CPU for an I/O operation within them, its priority does not change; if it later
re-enters the ready queue, it resumes execution in queue 1.
3. If a process in queue 1 does not complete in 8 units, its priority is reduced and it is
shifted to queue 2.
4. Points 2 and 3 also hold for processes in queue 2, except that the time quantum is 16
units. In general, any process that does not complete within its time quantum is shifted
to the next lower-priority queue.
5. In the last queue, all processes are scheduled in an FCFS manner.



6. It is important to note that a process in a lower-priority queue can execute only when
the higher-priority queues are empty.
7. Any running process in a lower-priority queue can be interrupted by a process arriving
in a higher-priority queue.

The implementation may also vary; for example, the last queue could follow round-robin
scheduling instead of FCFS.

The above implementation has a problem: a process in the lower-priority queues may suffer
starvation when a stream of short processes keeps taking all the CPU time in the higher-priority
queues.

The solution to this problem is to boost the priority of all processes at regular intervals,
placing them all back in the highest-priority queue.
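
A minimal C-style sketch of the demotion and periodic priority-boost logic described above (the
number of levels, the quanta, and all names here are assumptions for illustration, not a
definitive implementation):

#define LEVELS 3
static const int quantum[LEVELS] = {8, 16, 0};  /* 0 marks FCFS at the last level */

/* Called when a process exhausts the quantum of its current level:
   its priority drops by one level, but never below the last queue. */
int demote(int level) {
    return (level < LEVELS - 1) ? level + 1 : level;
}

/* Called at a regular interval to prevent starvation: every process
   is boosted back to the highest-priority queue (level 0). */
void priority_boost(int level_of[], int nprocs) {
    for (int i = 0; i < nprocs; i++)
        level_of[i] = 0;
}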

THE NEED FOR MULTILEVEL FEEDBACK QUEUE SCHEDULING(MFQS)

Following are some points to understand the need for such complex scheduling:

● This scheduling is more flexible than Multilevel queue scheduling.

● This algorithm helps in reducing the response time.

● To optimize turnaround time we would like the SJF algorithm, but SJF requires the
running time of processes in advance in order to schedule them, and this running time
is not known. Multilevel feedback queue scheduling instead runs each process for a
time quantum and lowers the process's priority if it turns out to be long. The algorithm
thus learns from the past behavior of processes and predicts their future behavior, so it
tends to run shorter processes first, which in turn optimizes turnaround time.

ADVANTAGES OF MFQS



● This is a flexible Scheduling Algorithm

● This scheduling algorithm allows different processes to move between different queues.

● In this algorithm, A process that waits too long in a lower priority queue may be moved
to a higher priority queue which helps in preventing starvation.

DISADVANTAGES OF MFQS

● This algorithm is too complex.

● As processes move between different queues, the algorithm incurs more CPU
overhead.
● In order to select the best scheduler, this algorithm requires some means of selecting
values for all of its parameters.

KEY TAKEAWAY

➢ Multilevel Queue (MLQ) and Multilevel Feedback Queue (MFQ) scheduling algorithms
offer flexible solutions to the diverse needs of various processes within an operating
system.
➢ In MLQ, different processes are assigned to specific queues based on priorities, allowing
for the separation of foreground and background tasks.
➢ Each queue is scheduled independently, accommodating processes with varying
response times and resource requirements.
➢ While MLQ has low scheduling overhead, it may pose a risk of starvation for lower-
priority processes and is inherently rigid.
➢ On the other hand, MFQ introduces flexibility by allowing processes to move between
queues based on their CPU usage and waiting times. This dynamic approach prevents
starvation, but the algorithm is more complex and introduces additional CPU overhead.
➢ MFQ optimizes turnaround time by learning from past process behavior, making it a
valuable scheduling algorithm despite its intricacies.
➢ The need for such complexity arises from the flexibility required to handle diverse
processes efficiently.



➢ While MFQ offers advantages in terms of flexibility and prevention of starvation, its
complexity and additional overhead pose challenges in selecting optimal parameter
values for specific systems.



INTERPROCESS COMMUNICATION



SUB LESSON 5.1

CRITICAL SECTION AND RACE CONDITION

PROCESS SYNCHRONIZATION
Modern operating systems, such as Unix, execute processes concurrently. Although a single
Central Processor (CPU) executes the instructions of only one program at a time, the operating
system rapidly switches the processor between different processes (usually allowing a single
process a few hundred microseconds of CPU time before replacing it with another process.)

Some of these resources (such as memory) are shared simultaneously by all processes; such
resources are used in parallel by every running process on the system. Other resources must be
used by one process at a time, and so must be carefully managed so that all processes
eventually get access; such resources are used concurrently, in turn, by the running processes.
The most important example of a shared resource is the CPU, although most of the I/O devices
are also shared. For many of these shared resources, the operating system apportions the time
each process gets on the resource, to ensure reasonable access for all processes. Consider the
CPU: the operating system has a clock that raises an alarm every few hundred microseconds. At
this time the operating system stops the CPU, saves all the information needed to restart the
process exactly where it last left off (including the current instruction being executed, the state
of the CPU's registers, and other data), and removes the process from the CPU. The operating
system then selects another process to run, restores the CPU state saved the last time that new
process ran, and starts the CPU again. Later in this unit we shall also discuss deadlock. A
deadlock is a situation wherein two or more competing actions are each waiting for the other
to finish, and thus neither ever does; it is often seen as a paradox like ‘the chicken or the egg’.
This situation may be likened to two people who are drawing diagrams, with only one pencil
and one ruler between them. If one person takes the pencil and the other takes the ruler, a
deadlock occurs when the person with the pencil needs the ruler and the person with the ruler
needs the pencil before giving up the ruler. Neither request can be satisfied, so a deadlock
occurs.

This chapter introduces process synchronization: what it is, and why it is a useful and very
important part of the operating system.

Process synchronization means that we need some kind of synchronization between processes.
To understand this, we first need to understand what cooperating processes are. Basically,
there are two types of processes:

1. Independent processes
2. Cooperating processes

An independent process is unaffected by the execution of other processes, while a cooperating
process can be affected by other executing processes. Although one might assume that
processes operating independently would function most efficiently, there are many
circumstances in which cooperation can be used to boost computational speed, convenience,
and modularity. Processes can communicate with one another and coordinate their actions
through a technique called inter-process communication (IPC); this communication can be
thought of as a means of cooperation. Processes can cooperate in two ways:

1. Cooperation by sharing

The processes may cooperate by sharing data, including variables, memory, databases, etc. The
critical section provides data integrity, and writing is mutually exclusive to avoid inconsistent
data.



Here, you see a diagram that shows cooperation by sharing. In this diagram, Process P1 and P2
may cooperate by using shared data like files, databases, variables, memory, etc.

2. COOPERATION BY COMMUNICATION

The cooperating processes may cooperate by using messages. If every process waits for a
message from another process to execute a task, it may cause a deadlock. If a process does not
receive any messages, it may cause starvation.



Here, you have seen a diagram that shows cooperation by communication. In this diagram,
Process P1 and P2 may cooperate by using messages to communicate.

ADVANTAGES OF COOPERATING PROCESS IN OPERATING SYSTEM

There are various advantages of cooperating processes in the operating system. Some
advantages of the cooperating system are as follows:

1. Information Sharing: Cooperating processes can be used to share data between processes.
That could involve having access to the same files. A technique is required to allow the
processes to access the files concurrently.

2. Modularity: The division of large tasks into smaller subtasks is referred to as modularity.
These smaller subtasks can be completed by multiple cooperating processes, so the necessary
tasks are finished faster and more efficiently.

3. Computation Speedup: The subtasks of a single task can be executed simultaneously by
cooperating processes, which improves computation speed by allowing the task to be
accomplished faster. This is only possible, however, if the system has multiple processing
elements.



4. Convenience: A user must perform a variety of tasks, such as printing, compiling, editing,
etc. It is more convenient if these operations can be managed through cooperating processes.

Systems that allow processes to communicate and synchronize their actions are necessary for
the concurrent execution of cooperating processes.

CRITICAL SECTION IN SYNCHRONIZATION

The key to preventing trouble involving shared storage is to find some way to prohibit more
than one process from reading and writing the shared data simultaneously. The part of the
program where the shared memory is accessed is called the Critical Section. To avoid race
conditions and flawed results, the code that forms the critical section of each thread must be
identified.

Here, the important point is that when one process is executing shared modifiable data in its
critical section, no other process is to be allowed to execute in its critical section. Thus, the
execution of critical sections by the processes is mutually exclusive in time.

Mutual exclusion is a way of making sure that if one process is using shared modifiable data,
the other processes are excluded from doing the same thing. Formally, while one process
accesses the shared variable, all other processes desiring to do so at the same moment are kept
waiting; when that process has finished accessing the shared variable, one of the waiting
processes is allowed to proceed. In this fashion, each process accessing the shared data
(variables) excludes all others from doing so simultaneously. This is called Mutual Exclusion.

The critical section is a code segment that accesses the shared variables. In a critical section,
atomic action is required: only one process can execute its critical section at a time, while all
the other processes have to wait to enter their own critical sections.

A diagram that demonstrates the critical section is as follows −



In the above diagram, the entry section handles the entry into the critical section. It acquires
the resources needed for the execution of the process. The exit section handles the exit from
the critical section. It releases the resources and also informs the other processes that the
critical section is free.

Example:

In the clothes section of a supermarket, two people are shopping for clothes. Suppose boy A
enters the changing room; the sign on the door reads ‘occupied’.

Once boy A comes out of the changing room, the sign on it changes from ‘occupied’ to ‘vacant’
– indicating that another person can use it. Hence, girl B proceeds to use the changing room,
while the sign displays ‘occupied’ again.

The changing room is nothing but the critical section, boy A and girl B are two different
processes, while the sign outside the changing room indicates the process synchronization
mechanism being used.



A SOLUTION TO THE CRITICAL SECTION PROBLEM

The critical section problem needs a solution to synchronize the different processes. The
solution to the critical section problem must satisfy the following conditions −

● Mutual Exclusion: Mutual exclusion implies that only one process can be inside
the critical section at any time. If any other processes require the critical section,
they must wait until it is free.
● Progress: Progress means that if a process is not using the critical section, then it
should not stop any other process from accessing it. In other words, any process
can enter a critical section if it is free.
● Bounded Waiting: Bounded waiting means that each process must have a limited
waiting time. It should not wait endlessly to access the critical section.

RACE CONDITION

What is a race condition?

A race condition is an undesirable scenario that arises when a device or system attempts to do
two or more actions at the same time, but because of the nature of the device or system, the
activities must be done in the proper sequence to be done correctly. Race conditions are most
typically connected with computer science and programming. They arise when two computer
programme processes, or threads, try to access the same resource at the same time, causing
system issues. Race conditions are a prevalent problem in multithreaded programmes.

What are examples of race conditions?

A light switch is a basic example of a race condition. In certain homes, multiple light switches
are linked to a single ceiling light. In such circuits, the switch position no longer indicates
whether the light is on or off: if the light is on, moving either switch from its current position
turns it off, and if the light is off, moving either switch turns it on. Now consider what would
happen if two people tried to turn on the light using two different switches at exactly the same
time. One instruction may negate the other, or the two actions may
cause the circuit breaker to trip. A race condition may arise in computer memory or storage if
requests to read and write a large amount of data are received almost simultaneously, and the
machine attempts to overwrite some or all of the old data while that old data is still being read.
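
A minimal C sketch of such a race, assuming POSIX threads (the counter and function names
are illustrative): two threads increment a shared counter without synchronization, so some
updates are lost.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                      /* shared, unprotected data */

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                    /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (often less than 200000)\n", counter);
    return 0;
}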



KEY TAKEAWAY

➢ Process synchronization is a crucial aspect of modern operating systems, enabling


concurrent execution of processes while managing shared resources efficiently.
➢ Processes can operate independently or cooperatively, with cooperation involving
communication or sharing of resources.
➢ The critical section, a part of the program accessing shared memory, is pivotal in
avoiding race conditions and ensuring data integrity.
➢ Mutual exclusion, progress, and bounded waiting are essential conditions for solving the
critical section problem.
➢ However, race conditions, undesirable scenarios arising from simultaneous attempts to
access resources, can still occur, often leading to system issues.
➢ An everyday example is the race condition in a light switch, demonstrating the
importance of proper sequence execution.
➢ In computer science, race conditions are prevalent in multithreaded programs,
emphasizing the need for robust synchronization mechanisms.



SUB LESSON 5.2

DISABLING INTERRUPTS AND SHARED LOCK VARIABLE

MUTUAL EXCLUSION WITH BUSY WAITING


Busy waiting, also known as spinning or busy looping, is a process synchronisation strategy in
which a process/task waits for a condition to be satisfied before continuing with its execution. A
busy waiting process executes instructions that check for the existence of an entry condition,
such as the availability of a lock or resource in the computer system.

Consider a case in which a process requires a resource. However, the resource is currently in
use and unavailable, so the process must wait for the resource to become available before
proceeding. This is referred to as busy waiting, as depicted below:



In operating systems, there are two general approaches to waiting. First, a process/task can
continually check for the condition to be satisfied while consuming the CPU. Second, a process
can wait without using up the processor: when the condition is satisfied, the process/task is
notified or woken. The latter is called sleeping, blocked waiting, or sleep waiting.

In operating systems, busy looping is commonly employed to establish mutual exclusion.
Mutual exclusion prohibits multiple processes from accessing a shared resource at the same
time. Under mutual exclusion, a process is given exclusive control of a resource in its critical
section, with no interference from other processes. (A critical section is a part of the software
code in which concurrent access must be avoided.) The common mechanisms for achieving
mutual exclusion are:

1. Disabling interrupts (hardware approach)
2. Shared lock variable (software approach)
3. Strict alternation (software approach)
4. TSL (Test and Set Lock) instruction (hardware approach)
5. Peterson’s solution (software approach)

DISABLING INTERRUPTS

Each process disables all interrupts just after entering its critical section and re-enables them
just before leaving the critical section. With interrupts turned off, the CPU cannot be switched
to another process. Hence, no other process will enter its critical section, and mutual exclusion
is achieved.

The easiest solution is for each process to disable all interrupts as soon as it enters its critical
region and re-enable them as soon as it exits it. No clock interrupts can occur while interrupts
are disabled. After all, the CPU is only shifted from process to process as a result of a clock or
other interruptions, and if interrupts are disabled, the CPU will not be switched to another
process. As a result, once a process has deactivated interrupts, it can examine and update
shared memory without risk of interference from other processes.



This solution is often undesirable since it is risky to provide user processes the ability to disable
interrupts. Assume one of them did it and never switched them back on again. That might be
the system's demise. Furthermore, if the system has two or more CPUs, disabling interrupts
only affects the CPU that performed the disable command. The others will continue to run and
have access to the shared memory.

Yet, it is frequently advantageous for the kernel to suppress interrupts for a few instructions
while updating variables or lists. Race situations may develop if an interrupt happened while
the list of ready processes, for example, was in an inconsistent state. The conclusion is that
while deactivating interrupts is frequently useful within the operating system, it is not suited as
a broad mutual exclusion mechanism for user programs.

while (true) {
    < disable interrupts >;
    < critical section >;
    < enable interrupts >;
    < remainder section >;
}



Problems in Disabling interrupts (Hardware approach)
● It is unattractive and unwise to give user processes the power to turn off interrupts.
● What if one of the processes disabled interrupts and never enabled them again? That
could be the end of the system.
● If the system is a multiprocessor, with two or more CPUs, disabling interrupts affects
only the CPU that executed the disable instruction. The other CPUs will continue running
and can access the shared memory.

Conclusion
Disabling interrupts is sometimes a useful technique within the kernel of an operating system,
but it is not appropriate as a general mutual exclusion mechanism for user processes. The
reason is that it is unwise to give user processes the power to turn off interrupts.

SHARED LOCK VARIABLE (SOFTWARE APPROACH)

In this solution, consider a single, shared (lock) variable, initially 0. When a process wants to
enter its critical section, it first tests the lock. If the lock is 0, the process first sets it to 1 and
then enters the critical section. If the lock is already 1, the process just waits until the (lock)
variable becomes 0. Thus, a 0 means that no process is in its critical section, and a 1 means
hold your horses - some process is in its critical section.

As a second attempt, let us look for a software solution. Consider a single shared (lock)
variable that starts at 0. When a process wishes to reach its critical region, it first tests the lock.
If the lock is 0, the process sets it to 1 and enters the critical region. If the lock is already 1, the
process simply waits for it to become 0. As a result, a 0 indicates that no process is in its
critical region, whereas a 1 indicates that some process is in its critical region.

Sadly, this idea suffers from exactly the same fatal flaw that we discovered in the spooler
directory. Suppose one process reads the lock and discovers that it is 0. Before it can set the
lock to 1, another process is scheduled, runs, and sets the lock to 1. When the first process
runs again, it also sets the lock to 1, and two processes are in their critical regions at the
same time.

You might think we could avoid this problem by first reading out the lock value and then
checking it again just before storing into it, but that does not help: the race now happens if the
second process modifies the lock just after the first process has completed its second check.

while (true) {
    < wait until the shared lock variable is 0 >;
    < set the shared lock variable to 1 >;
    < critical section >;
    < set the shared lock variable to 0 >;
    < remainder section >;
}

Problem:

• Process A sees the value of the lock variable as 0, and before it can set it to 1, a context
switch occurs.
• Process B then runs, finds the value of the lock variable 0, sets it to 1, and enters the
critical region.
• At some point process A resumes, sets the value of the lock variable to 1, and enters the
critical region.
• Now two processes are in their critical regions accessing the same shared memory, which
violates the mutual exclusion condition.



Conclusion
The flaw in this proposal can be best explained by example. Suppose process A sees that the
lock is 0. Before it can set the lock to 1 another process B is scheduled, runs, and sets the lock
to 1. When the process A runs again, it will also set the lock to 1, and two processes will be in
their critical section simultaneously.

KEY TAKEAWAY

➢ The concept of busy waiting, specifically the approach of disabling interrupts and using a
shared lock variable, is explored in interprocess communication for achieving mutual
exclusion.
➢ Disabling interrupts involves preventing the CPU from switching to other processes by
turning off interrupts while a process is in its critical section.
➢ While this ensures mutual exclusion, it poses risks and is not a suitable general solution
for user programs. On the other hand, the shared lock variable approach utilizes a single
variable that is initially set to 0.
➢ Processes attempt to set it to 1 before entering the critical section, but this method
suffers from a race condition where multiple processes may enter the critical section
simultaneously, violating mutual exclusion.
➢ These approaches demonstrate the challenges of busy waiting and the need for more
robust synchronization mechanisms in interprocess communication systems.



SUB LESSON 5.3
TSL INSTRUCTION AND STRICT ALTERNATION

TSL (TEST AND SET LOCK) INSTRUCTION (HARDWARE APPROACH)

● The Test and Set Lock (TSL) mechanism is a synchronization method.


● It employs a test and set instruction to achieve synchronization across processes that
are running concurrently.

TEST-AND-SET INSTRUCTION

● In a single atomic operation, it returns the old value of a memory location and sets the
memory location value to 1.
● If one process is currently running a test-and-set, no other process may start a new test-
and-set until the first process's test-and-set is completed

It is implemented as follows (a minimal sketch appears after the list).

Initially, the lock value is set to 0.

●Lock value = 0 means the critical section is currently vacant and no process is present
inside it.
●Lock value = 1 means the critical section is currently occupied and a process is present
inside it.
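
A minimal C-style sketch of this mechanism (illustrative only: on real hardware, test-and-set is
a single atomic machine instruction, not a C function):

int lock = 0;                         /* 0 = vacant, 1 = occupied */

int test_and_set(int *target) {       /* assumed to execute atomically */
    int old = *target;                /* return the old value...       */
    *target = 1;                      /* ...and set the location to 1  */
    return old;
}

void enter_critical(void) {
    while (test_and_set(&lock) == 1)
        ;                             /* busy-wait while the lock was 1 */
}

void leave_critical(void) {
    lock = 0;                         /* release the critical section */
}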



WORKING-

This synchronization mechanism works as explained in the following scenes-

SCENE-01:

●Process P0 arrives.
●It executes the test-and-set(Lock) instruction.
●Since the lock value is set to 0, it returns the value 0 to the while loop and sets the
lock value to 1.
●The returned value 0 breaks the while loop condition.
●Process P0 enters the critical section and executes.
●Now, even if process P0 gets preempted in the middle, no other process can enter the
critical section.
●Any other process can enter only after process P0 completes and sets the lock value to 0.

SCENE-02:

●Another process P1 arrives.


●It executes the test-and-set(Lock) instruction.
●Since the lock value is now 1, it returns the value 1 to the while loop and sets the lock
value to 1 (leaving it unchanged).
●The returned value 1 does not break the while loop condition.
●The process P1 is trapped inside an infinite while loop.
●The while loop keeps the process P1 busy until the lock value becomes 0 and its
condition breaks.

SCENE-03:

●Process P0 comes out of the critical section and sets the lock value to 0.



●The while loop condition breaks.
●Now, process P1 waiting for the critical section enters the critical section.
●Now, even if process P1 gets preempted in the middle, no other process can enter the
critical section.
●Any other process can enter only after process P1 completes and sets the lock value to 0.

CHARACTERISTICS-

The characteristics of this synchronization mechanism are-

● It guarantees mutual exclusion.
● There is no deadlock.
● It does not ensure bounded waiting and may result in starvation.
● It has a spin lock problem.
● It is not architecturally neutral, because it requires the platform to provide a
test-and-set instruction.
● It is a busy waiting solution that keeps the CPU busy while the process waits.

EXPLANATIONS-
POINT-01:



This synchronization mechanism guarantees mutual exclusion.

EXPLANATION-
● The test-and-set instruction is crucial to the mechanism's success in establishing
mutual exclusion.
● The test-and-set instruction returns the previous value of a memory location
(lock) while also updating its value to 1.
● Because these two steps are carried out as a single atomic operation, mutual
exclusion is guaranteed.
● The lock variable synchronization technique failed largely because of preemption
after reading the lock value.
● With test-and-set, preemption can no longer occur between reading the lock
value and updating it.

POINT-02:

This synchronization mechanism guarantees freedom from deadlock.

EXPLANATION-
● When the process arrives, it runs the test-and-set instruction, which returns 0 to
the while loop and sets the lock value to 1.
● No other process can then enter the critical section until the process that started
the test-and-set completes the critical section.
● Additional processes can enter only after that process has completed and set the
lock value back to 0.
● This prevents a deadlock from occurring.

POINT-03:

This synchronization mechanism does not guarantee bounded waiting.



EXPLANATION-
● This synchronization mechanism may cause a process to starve for the CPU.
● An unlucky process may find the critical section occupied every time it gets the
CPU.
● As a result, it keeps waiting in the while loop until it is preempted.
● When it is rescheduled and tries to enter the critical section, it may discover
that another process is already running it.
● So, once again, it waits in the while loop until it is preempted.
● This can happen numerous times, causing the unfortunate process to starve.

POINT-04:

This synchronization mechanism suffers from the spin lock problem, where the execution of
processes can be blocked.

EXPLANATION-

Consider a scenario where-

● The processes are scheduled using the priority scheduling algorithm.
● A lower-priority process is preempted while it is inside the critical section.
● A higher-priority process then arrives and tries to enter the critical section.
● The synchronization mechanism keeps it spinning in the while loop, because the
lower-priority process still holds the lock.
● But the lower-priority process cannot run, and so cannot finish and release the
lock, while the higher-priority process occupies the CPU.
● As a result, the execution of both processes is halted.

STRICT ALTERNATION (SOFTWARE APPROACH)



In this proposed solution, the integer variable ‘turn’ keeps track of whose turn it is to enter the
critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section.
Process B also finds it to be 0 and sits in a loop, continually testing ‘turn’ to see when it becomes
1. Continuously testing a variable while waiting for some value to appear is called busy waiting.

●The turn variable is a synchronization mechanism that provides synchronization
between two processes, using a single shared variable called turn.

It is implemented as follows (a minimal sketch appears after the list).

Initially, the turn value is set to 0.

●Turn value = 0 means it is the turn of process P0 to enter the critical section.
●Turn value = 1 means it is the turn of process P1 to enter the critical section.
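
A minimal C-style sketch of the turn-variable mechanism for two processes (the wrapper
function and its name are assumptions for illustration):

int turn = 0;                         /* whose turn it is to enter */

void process(int self) {              /* self is 0 or 1 */
    while (1) {
        while (turn != self)
            ;                         /* busy-wait until it is our turn */
        /* critical section */
        turn = 1 - self;              /* hand the turn to the other process */
        /* remainder section */
    }
}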

Working- This synchronization mechanism works as explained in the following scenes-

SCENE-01:



● Process P0 arrives.
● It executes the while (turn != 0) check.
● Because the turn value is 0, the condition is false and the while loop breaks.
● Process P0 enters and executes the critical section.
● Even if process P0 is preempted in the middle, process P1 cannot enter the
critical section.
● Process P1 cannot enter until process P0 completes and sets the turn value to 1.

SCENE-02:
● Process P1 arrives.
● It executes the while (turn != 1) check.
● Because the turn value is still 0, the condition is true.
● Process P1 is trapped inside the while loop.
● The while loop keeps process P1 busy until the turn value becomes 1, at which
point the condition breaks.

SCENE-03:
● Process P0 exits the critical section and changes the turn value to 1.
● Process P1's while loop condition breaks.
● The waiting process P1 now enters and executes the critical section.
● Even if process P1 is preempted in the middle, process P0 cannot enter the
critical section.
● Process P0 cannot enter until process P1 finishes and sets the turn value to 0.

CHARACTERISTICS-

The characteristics of this synchronization mechanism are-

●It ensures mutual exclusion.



●It follows the strict alternation approach.

STRICT ALTERNATION APPROACH

In a strict alternation approach,

●Processes have to compulsorily enter the critical section alternately whether they
want it or not.
●This is because if one process does not enter the critical section, then other
processes will never get a chance to execute again.

● It does not ensure progress, because it follows the strict alternation approach.


● It ensures bounded waiting since processes are executed one by one and each process is
guaranteed to get a chance.
● It ensures that processes are not starved for CPU resources.
● It is architecturally neutral because it does not require operating system support.
● It is free of deadlock.
● It is a busy waiting solution that keeps the CPU busy while the process is waiting.

KEY TAKEAWAY

➢ The Test and Set Lock (TSL) instruction, as a hardware approach to synchronization,
utilizes a test-and-set instruction to manage concurrent processes.
➢ This mechanism ensures mutual exclusion by employing a lock variable that is set to 0
when the critical section is vacant and 1 when occupied.
➢ While effective in guaranteeing mutual exclusion and preventing deadlocks, TSL lacks
bounded waiting, may result in starvation, and introduces a spin lock problem, keeping
the CPU busy during the waiting process.
➢ On the other hand, the Strict Alternation, a software-based approach, employs a turn
variable to synchronize two processes.



➢ While ensuring mutual exclusion and bounded waiting, it follows a strict alternation
approach, requiring processes to enter the critical section alternately.
➢ This method avoids starvation, is architecturally neutral, and is free of deadlocks.
However, it relies on busy waiting, keeping the CPU occupied while waiting for the
critical section.
➢ Both approaches provide insights into synchronization mechanisms, highlighting their
characteristics, advantages, and limitations in interprocess communication systems.



SUB LESSON 5.4

PETERSON’S SOLUTION

PETERSON’S SOLUTION
Peterson's solution is elegant, making it an interesting approach that provides these three
crucial features. This strategy, however, is rarely employed in modern systems. Peterson's
solution is based on the premise that instructions are executed in program order and that
memory accesses are performed atomically. On current hardware, both of these assumptions
can fail: instructions may be executed in a different order due to the complications of pipelined
CPU design, and if the threads run on different cores that do not ensure immediate cache
coherency, the threads' views of memory may diverge.

Although hardware solutions exist to address these challenges, Peterson's solution also suffers
from a lack of abstraction. It is preferable for a systems programmer to use abstractions such
as locking and unlocking a critical region rather than manipulating memory bits. As a result,
the remainder of this chapter concentrates on higher-level abstractions, known as
synchronization primitives, that enable more robust and adaptable solutions to these kinds of
challenges. In general, these primitives do not provide fairness and therefore cannot give the
same theoretical guarantees as Peterson's solution; their higher level of abstraction, on the
other hand, makes their practical use much clearer.

Peterson's solution is a classic solution to the critical section problem, which requires that no
two processes update or modify the value of a shared resource at the same time.



Let's say int a = 5, and there are two processes p1 and p2 that can change the value of a: p1
adds 2 (a = a + 2), and p2 multiplies a by 2 (a = a * 2). If both processes modify the value, the
result depends on the order in which the processes are executed: if p1 executes first, a is 14; if
p2 executes first, a is 12. This change in values caused by simultaneous access by two
processes is the critical section problem.

The critical section is the section in which the shared values are changed. Besides the critical
section itself, there are three other sections: the entry section, the exit section, and the
remainder section.

● A process that wants to enter the critical section must first pass through the entry
section, where it requests access to the critical section.
● A process that leaves the critical section passes through the exit section.
● The remainder section contains the rest of the code.

PETERSON'S SOLUTION PROVIDES A SOLUTION TO THE FOLLOWING PROBLEMS,



● It guarantees that if a process is in the critical section, no other process may enter it. This
is known as mutual exclusion.
● If more than one process wishes to enter the critical section, it determines which process
should enter first. This is referred to as progress.
● There is a bound on the number of times other processes may enter the critical section
after a process has requested entry and before that request is granted. This is known as
bounded waiting.
● It supports platform neutrality, because the solution runs in user mode and requires no
kernel authorization.

Now we will first see Peterson's solution algorithm and then see how any two processes P and
Q get mutual exclusion using Peterson's solution.

#define N 2
#define TRUE 1
#define FALSE 0

int interested[N] = {FALSE, FALSE};
int turn;

void entry_section(int process)
{
    int other = 1 - process;            /* number of the other process */

    interested[process] = TRUE;         /* announce interest */
    turn = process;                     /* record whose turn it is */

    while (interested[other] == TRUE && turn == process)
        ;                               /* busy-wait */
}

void exit_section(int process)
{
    interested[process] = FALSE;        /* leave the critical section */
}
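
A hypothetical usage sketch (the wrapper function is an assumption for illustration; each
process brackets its critical section with the entry and exit sections):

void process(int self)                  /* self is 0 or 1 */
{
    while (1) {
        entry_section(self);            /* request entry */
        /* critical section */
        exit_section(self);             /* release */
        /* remainder section */
    }
}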

EXPLANATION

There are two processes: the first process has process number 0 and the second process has
process number 1.

As a result, if process 1 calls the entry section, other = 1 - process = 1 - 1 = 0; if process 0 calls
it, other = 1 - process = 1 - 0 = 1.

A process calls the entry section because it wants to run its critical section, so it announces its
interest by setting interested[process] = TRUE. Thus, if process 1 calls the entry section,
interested[1] = TRUE; if process 0 calls it, interested[0] = TRUE.

After announcing that it is interested, the process sets the turn to itself: if process 1 is the
caller, turn = 1.

The line while (interested[other] == TRUE && turn == process); is then executed.

In this line, the process checks whether the other process is also interested. If the other
process is interested, then interested[other] == TRUE holds, and the process concludes that
the other process is executing the critical section; it therefore spins in the loop until the other
process is no longer interested (or until turn no longer equals process).

When interested[other] becomes FALSE, the condition fails and the process enters the critical
section. As a result, only one process may be in the critical section at a time, so mutual
exclusion is ensured in Peterson's solution. While exiting through the exit section, the process
sets its interested flag back to FALSE.

Mutual Exclusion: The approach certainly provides mutual exclusion. Because the while
condition in the entry section tests both variables, a process cannot enter the critical section
while the other process is interested and it itself was the last to update the turn variable.

Progress: A process that is not interested will never prevent another interested process from
entering the critical section. If the other process is also interested, one of the two is put on
hold only briefly.

Bounded waiting: The plain interested-variable mechanism failed because it did not provide
bounded waiting. In Peterson's technique, however, a process can never be locked out
indefinitely, because the process that set the turn variable first enters the critical section first.
So if a process is preempted just after setting the turn variable in the entry section, it will still
enter the critical section on its next chance.

Portability: Because this is a pure software solution, it can run on any hardware.

KEY TAKEAWAY



➢ Peterson's solution, although an elegant approach to achieving mutual exclusion and
progress in the critical section problem, is rarely utilized in modern systems due to
challenges arising from changes in hardware and lack of abstraction.
➢ The solution relies on assumptions about the execution order of instructions and atomic
memory accesses, which may not hold true in contemporary pipelined CPUs and multi-
core systems with potential cache coherency issues.
➢ While Peterson's solution guarantees mutual exclusion and determines the order of
entry into the critical section, it lacks fairness and can face challenges in achieving
bounded waiting.
➢ Moreover, its low-level manipulation of memory bits makes it less preferable in
comparison to higher-level synchronization primitives like locking and unlocking vital
regions.
➢ This chapter emphasizes the importance of more robust and adaptable synchronization
solutions in modern interprocess communication systems.



SUB LESSON 5.5

SEMAPHORES AND MONITORS

SEMAPHORES
A semaphore is a classic solution to the critical section problem. A semaphore S is an integer
variable that, apart from initialization, is accessed only through two standard atomic
operations: wait() and signal(). The wait() operation was originally termed P, and signal() was
termed V. They are defined as follows:

wait(S) {
    while (S <= 0)
        ;          /* busy-wait */
    S--;
}

signal(S) {
    S++;
}

All modifications to the integer value of the semaphore in the wait() and signal() operations
must be executed indivisibly: while one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value.

A semaphore is basically a non-negative variable that is shared between threads. It is a
signaling mechanism: a thread waiting on a semaphore can be signaled by another thread. For
process synchronization, it employs two atomic operations: 1) wait and 2) signal. Depending
on how it is configured, a semaphore either grants or denies access to the resource.



CHARACTERISTICS OF SEMAPHORE

The following are the features of a semaphore:


● It is a mechanism that can be utilized to ensure task synchronization.
● It is a low-level synchronization technique.
● A semaphore's value is always a non-negative integer.
● Semaphores can be implemented using atomic test operations and interrupts.

TYPES OF SEMAPHORES

The two common kinds of semaphores are


● Counting semaphores
● Binary semaphores.
COUNTING SEMAPHORES

This type of semaphore uses a count that allows a resource to be acquired or released
numerous times. If the initial count = 0, the counting semaphore is created in the unavailable
state.

However, if the count is > 0, the semaphore is created in the available state, and the number
of tokens it holds equals its count.



BINARY SEMAPHORES

Binary semaphores are quite similar to counting semaphores, but their value is restricted to
0 and 1. In this type of semaphore, the wait operation succeeds only when the semaphore is 1,
and the signal operation succeeds when the semaphore is 0. Binary semaphores are easier to
implement than counting semaphores.

EXAMPLE OF SEMAPHORE

The program below is a step-by-step sketch showing the declaration and use of a semaphore to
protect a critical section:

shared var mutex: semaphore = 1;

process i:
begin
    P(mutex);        { wait: acquire the semaphore }
    execute CS;      { critical section }
    V(mutex);        { signal: release the semaphore }
end;
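
For comparison, here is a runnable sketch of the same pattern using POSIX semaphores,
assuming a POSIX system (sem_init, sem_wait, and sem_post come from <semaphore.h>; the
worker and counter names are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                          /* binary semaphore guarding the CS */
int shared_counter = 0;               /* shared data */

void *worker(void *arg) {
    sem_wait(&mutex);                 /* P(mutex): enter critical section */
    shared_counter++;                 /* critical section */
    sem_post(&mutex);                 /* V(mutex): leave critical section */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);           /* initial value 1: acts as a lock */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}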

WAIT AND SIGNAL OPERATIONS IN SEMAPHORES

Both of these operations are used to implement process synchronization; their goal is to
achieve mutual exclusion.

WAIT OPERATION

This semaphore operation is used to control the entry of a task into the critical section. If the
semaphore value S is positive, wait decrements it and the caller proceeds; if the value is 0 or
negative, the caller keeps waiting until the condition is met. It is also known as the P(S)
operation.

P(S):
    while (S <= 0)
        ;        /* busy-wait */
    S--;

SIGNAL OPERATION

This semaphore operation is used to control the exit of a task from a critical section. It
increments the value of the semaphore by 1 and is denoted V(S).

V(S):
    S++;

COUNTING SEMAPHORE VS. BINARY SEMAPHORE

Here are some major differences between counting and binary semaphores:

Counting Semaphore                            Binary Semaphore
Does not by itself give mutual exclusion      Provides mutual exclusion
Can take any non-negative integer value       Takes only the values 0 and 1
Manages more than one slot                    Manages only one slot
Coordinates a set of processes                Acts as a mutual exclusion mechanism

DIFFERENCE BETWEEN SEMAPHORE VS. MUTEX

● Mechanism: A semaphore is a type of signaling mechanism; a mutex is a locking
mechanism.
● Data type: A semaphore is an integer variable; a mutex is just an object (lock).
● Modification: A semaphore is modified by the wait and signal operations; a mutex is
modified only by the process that requests or releases the resource.
● Resource management: With a semaphore, if no resource is free, the requesting process
executes a wait operation and waits until the semaphore count becomes greater
than 0. With a mutex, if the object is locked, the process has to wait; it is kept in a
queue and may access the resource only when the mutex is unlocked.
● Threads: Multiple program threads can use a semaphore at once; multiple threads can
use a mutex, but not simultaneously.
● Ownership: A semaphore's value can be changed by any process releasing or obtaining
the resource; a mutex's lock is released only by the process that obtained it.
● Types: Semaphores come in two types, counting semaphores and binary semaphores;
a mutex has no subtypes.
● Operation: A semaphore's value is modified using the wait() and signal() operations;
a mutex object is locked or unlocked.
● Resource occupancy: A semaphore is occupied when all resources are being used; the
process requesting a resource performs a wait() operation and blocks itself until the
semaphore count becomes greater than 0. If a mutex object is already locked, the
process requesting the resource waits and is queued by the system until the lock is
released.

ADVANTAGES OF SEMAPHORES

Here are the pros/benefits of using semaphores:

● They allow more than one thread (up to the semaphore's count) to access the shared
resource.
● Semaphores are machine-independent, because they are implemented in the
machine-independent code of the microkernel.
● They do not permit multiple processes to enter the critical section at the same time.
● When implemented with a waiting queue rather than busy waiting, no process time or
resources are wasted.
● They enable flexible management of resources.

DISADVANTAGES OF SEMAPHORES

Here are the cons/drawbacks of semaphores:

● Priority inversion is one of the most significant limitations of a semaphore.
● The operating system must keep track of all wait and signal semaphore calls.
● Their correct use is not enforced; it is only a programming convention.
● Wait and signal operations must be performed in the correct order to avoid deadlocks.
● Because semaphore programming is difficult, there is a chance that mutual exclusion
will not actually be achieved.
● Semaphores are not a practical strategy for large-scale applications, because they
reduce modularity.
● Semaphores are susceptible to programming errors; a programmer's mistake may
result in a deadlock or a violation of mutual exclusion.

MONITORS

Monitors are a programming language component that helps to control shared data access. The
Monitor is a set of shared data structures, operations, and synchronization between concurrent
procedure calls. As a result, a monitor is frequently referred to as a synchronization tool.
Among the languages that support monitors are Java, C#, Visual Basic, Ada, and concurrent
Euclid. Outside processes cannot access the monitor's internal variables, but they can invoke
the monitor's operations.

In the Java programming language, for example, synchronization techniques such as wait() and
notify() are provided.

SYNTAX OF A MONITOR IN OS

A monitor in an OS has a simple syntax, similar to how we define a class. It is as follows:

monitor monitorName {

    variables_declaration;
    condition_variables;

    procedure p1 { ... };
    procedure p2 { ... };
    ...
    procedure pn { ... };

    initializing_code;
}

A monitor in an operating system is simply a class-like construct containing variable
declarations, condition variables, various procedures (functions), and an initializing code
block, all used for process synchronization.

CHARACTERISTICS OF MONITORS IN OS

A monitor in os has the following characteristics:

● Only one process can run inside the monitor at a time.
● Monitors in an operating system are defined as a collection of procedures and
fields grouped together in a special kind of module or package.
● A program executing outside the monitor cannot access the monitor's internal
variables; it can, however, call the monitor's procedures.
● Monitors were intended to simplify synchronization problems.
● Monitors give a high level of process synchronization.

COMPONENTS OF MONITOR IN AN OPERATING SYSTEM

The monitor is made up of four primary parts:

1. Initialization: The initialization code is supplied in the package and is required only
once, when the monitor is created.

2. Private Data: A feature of an operating system's monitor that makes data private. It
contains all of the monitor's private data, including private procedures that may be
used only within the monitor. These private fields and functions are therefore hidden
from the outside.

3. Monitor Procedures: Monitor procedures are procedures or functions that can be
invoked from outside the monitor.

4. Monitor Entry Queue: The monitor entry queue is another critical component of the
monitor. It contains all the threads that are waiting to call one of the monitor's
procedures.

CONDITION VARIABLES

There are two sorts of operations we can perform on the monitor's condition variables:

1. Wait
2. Signal

Consider a condition variable (y) is declared in the monitor:



y.wait(): The process that applies the wait operation on a condition variable is suspended and
placed in the condition variable's blocked queue.

y.signal(): When a process applies the signal operation on the condition variable, one of the
processes blocked in the monitor is given a chance to execute.
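
In C, which has no built-in monitors, the closest analogue is a mutex plus a condition variable
from POSIX threads. A minimal sketch, assuming a POSIX system (the flag name ready and the
function names are illustrative):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* monitor lock */
pthread_cond_t  y = PTHREAD_COND_INITIALIZER;    /* condition variable */
int ready = 0;                                   /* condition being waited on */

void waiter(void) {
    pthread_mutex_lock(&m);           /* enter the "monitor" */
    while (!ready)                    /* y.wait(): suspend until signaled */
        pthread_cond_wait(&y, &m);    /* atomically releases m while blocked */
    /* proceed: the condition now holds */
    pthread_mutex_unlock(&m);         /* leave the "monitor" */
}

void signaler(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&y);          /* y.signal(): wake one blocked thread */
    pthread_mutex_unlock(&m);
}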

ADVANTAGES OF MONITOR IN OS

● Monitors make concurrent or parallel programming simpler and less error-prone than
semaphore-based methods.
● It aids in the synchronization of processes in the operating system.
● Mutual exclusion is integrated into monitors.
● Monitors are simpler to configure than semaphores.
● Monitors may be able to compensate for the timing errors caused by semaphores.

DISADVANTAGES OF MONITOR IN OS

● Monitors must be supported by the programming language.
● Monitors increase the workload on the compiler.
● The programmer must understand which operating system features are available
for controlling critical sections in parallel procedures.

KEY TAKEAWAY

➢ In the realm of interprocess communication, semaphores and monitors play crucial roles
in managing shared resources and ensuring synchronization among concurrent
processes.
➢ Semaphores, as classic solutions to the critical section problem, utilize atomic
operations such as wait() and signal() to control access to resources.
➢ They can be categorized as counting or binary semaphores, each serving specific
synchronization needs. Despite their advantages, semaphores come with challenges,
including priority inversion and the risk of deadlocks.
➢ On the other hand, monitors offer a higher-level abstraction for synchronizing
concurrent processes. Monitors encapsulate shared data, operations, and synchronization
within a defined structure, simplifying parallel programming compared
to semaphore-based approaches.
➢ They use condition variables, such as wait() and signal(), to manage process execution
within the monitor. While monitors provide advantages like enhanced mutual exclusion,
they may introduce additional complexity to the compiler and require a deeper
understanding of operating system features.
➢ In summary, semaphores and monitors are indispensable tools in the design and
implementation of effective interprocess communication systems, each with its unique
strengths and considerations.



SUB LESSON 5.6

THE PRODUCER\ CONSUMER PROBLEM

THE PRODUCER\ CONSUMER PROBLEM

Because the buffer pool has a maximum size, this problem is often called the bounded buffer
problem. It is generalized as the producer-consumer problem, where a finite buffer pool is
used to exchange messages between producer and consumer processes. One solution is to
create two counting semaphores, "full" and "empty", to keep track of the current number of
full and empty buffers respectively. The producer produces items and the consumer consumes
them, but both use the same buffer slots. The main complexity of this problem is that we must
maintain counts of both the empty and the full buffers that are available.

The Producer-Consumer problem is a classic multi-process synchronization problem, which
means we're attempting to synchronize more than one process.

In the producer-consumer problem, there is one Producer who produces some products and
one Consumer who consumes the items generated by the Producer. Both producers and
consumers share the same memory buffer, which has a fixed size.

The Producer's job is to make the item, store it in the memory buffer, and then start making
more items. The Consumer's task is to consume the item from the memory buffer.

LET'S UNDERSTAND THE PROBLEM

Below are a few points that are considered as the problems that occur in Producer-Consumer:

● When the buffer is not full, the producer should produce data. If the buffer is found to
be full, the producer is not permitted to put any data in the memory buffer.



● The consumer can only consume data if and only if the memory buffer is not empty. If
the buffer is found to be empty, the consumer is not permitted to use any data from the
memory buffer.
● Producer and consumer should not be able to access the memory buffer at the same
time.

Let's see the code for the above problem:

PRODUCER CODE
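
The original producer listing appeared as an image and is not recoverable verbatim; what
follows is a hedged reconstruction of the classic count-based producer loop, written to match
the terms in, out, and count explained below (the buffer size of 8 matches the later snapshot;
produce_item is a hypothetical helper).

int buffer[8];
int in = 0, out = 0, count = 0;        /* shared with the consumer      */

extern int produce_item(void);         /* hypothetical item source      */

void producer(void)
{
    while (1) {
        int item = produce_item();     /* make the next item            */
        while (count == 8)             /* buffer full: wait             */
            ;
        buffer[in] = item;             /* store item in next empty slot */
        in = (in + 1) % 8;
        /* count++ is really three separate steps: */
        int r = count;                 /* 1. load count                 */
        r = r + 1;                     /* 2. increment                  */
        count = r;                     /* 3. store back                 */
    }
}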

CONSUMER CODE
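
The consumer listing was likewise an image; this hedged reconstruction is the matching half of
the producer sketch above and shares its buffer, in, out, and count variables (consume_item is
a hypothetical helper).

extern void consume_item(int);         /* hypothetical item sink        */

void consumer(void)
{
    while (1) {
        while (count == 0)             /* buffer empty: wait            */
            ;
        int item = buffer[out];        /* take the first filled slot    */
        out = (out + 1) % 8;
        /* count-- is also three separate steps: */
        int r = count;                 /* 1. load count                 */
        r = r - 1;                     /* 2. decrement                  */
        count = r;                     /* 3. store back                 */
        consume_item(item);            /* use the item                  */
    }
}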

LET'S UNDERSTAND THE ABOVE PRODUCER AND CONSUMER CODES:



Before starting the explanation of the code, first understand the few terms used in the above code:

1. "in", used in the producer code, represents the next empty buffer slot.
2. "out", used in the consumer code, represents the first filled buffer slot.
3. count keeps the number of elements currently in the buffer.
4. The update of count is expanded into three lines of code in the blocks in both the
producer and consumer code; because those three steps are not atomic, a context
switch between them can leave count inconsistent.

THE SOLUTION OF THE PRODUCER-CONSUMER PROBLEM USING SEMAPHORE

Semaphores can solve the race condition in the Producer-Consumer problem, in which context
switches during the update of shared variables deliver inconsistent results.

To overcome the above-mentioned race condition, we will employ binary semaphores and
counting semaphores.

Binary Semaphore: a binary semaphore takes only the values 0 and 1, so at most one process
can be inside its critical section at any moment in time; the criterion of mutual exclusion is
preserved.

Counting Semaphore: a counting semaphore can take any non-negative integer value, so many
processes can compete to enter the controlled section at a time, while mutual exclusion is still
maintained where required.

Semaphore: A semaphore is an integer variable S that, apart from initialization, is accessed by
only two standard atomic operations, wait and signal, whose definitions are as follows:

1. wait( S )

   while( S <= 0 )
       ;        // busy-wait until S becomes positive

   S--;

2. signal( S )

   S++;

From the above definition of wait, it is clear that while the value of S <= 0 the process keeps
spinning in the while loop (because of the semicolon after it) and cannot proceed until another
process performs signal. The job of signal is to increment the value of S.
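
In practice the test-and-decrement inside wait must itself be atomic, or two processes could
both see S = 1 and both decrement it. A minimal sketch using C11 atomics (the type and
function names are ours):

#include <stdatomic.h>

typedef struct { atomic_int value; } semaphore;

void wait_sem(semaphore *s)                    /* wait(S)  */
{
    for (;;) {
        int v = atomic_load(&s->value);
        if (v > 0 &&
            atomic_compare_exchange_weak(&s->value, &v, v - 1))
            return;                            /* S-- succeeded atomically */
        /* S <= 0, or another process beat us: keep spinning */
    }
}

void signal_sem(semaphore *s)                  /* signal(S) */
{
    atomic_fetch_add(&s->value, 1);            /* S++ */
}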

Let's see the code as a solution to the producer and consumer problem using semaphores
(both binary and counting semaphores):

PRODUCER CODE- SOLUTION

do {
    wait(empty);    // wait until empty > 0
    wait(s);        // acquire lock

    /* add data to buffer */

    signal(s);      // release lock
    signal(full);   // increment full
} while (TRUE);

CONSUMER CODE- SOLUTION

do {
    wait(full);     // wait until full > 0
    wait(s);        // acquire lock

    /* remove data from buffer */

    signal(s);      // release lock
    signal(empty);  // increment empty
} while (TRUE);

LET'S UNDERSTAND THE ABOVE SOLUTION OF PRODUCER AND CONSUMER CODE:

Before starting the explanation of the code, first understand the few terms used in the above code:

1. s: a binary semaphore (mutex) which is used to acquire and release the lock.
2. empty: a counting semaphore whose initial value is the number of slots in the buffer,
since initially all slots are empty.
3. full: a counting semaphore whose initial value is 0, since initially no slot is full.

If we look at the current situation of the buffer:

S = 1 (initial value of the binary semaphore)

in = 5 (next empty buffer slot)

out = 0 (first filled buffer slot)

The buffer has a total of 8 slots, of which the first 5 are filled.

Semaphores used in Producer Code:

1. wait(empty) decreases the value of the counting semaphore variable empty by one;
that is, when the producer adds an element, the number of empty slots in the buffer is
decreased by one. If the buffer is full, that is, the value of the counting
semaphore variable empty is 0, then wait(empty); will trap the process (as defined by
wait) and prevent it from proceeding.

2. wait(s) sets the binary semaphore variable s to 0, preventing any other process from
entering its critical section.

3. signal(s) sets the binary semaphore variable s to 1, allowing other processes waiting to
enter their critical section to do so.

Semaphores used in Consumer Code:

1. wait(full) decreases the value of the counting semaphore variable full by one; that is,
when the consumer removes an element, the number of full slots in the buffer is
decreased by one. If the buffer is empty, that is, the value of the counting
semaphore variable full is 0, then wait(full); will trap the process (as defined by wait)
and prevent it from proceeding.



2. wait(s) sets the binary semaphore variable s to 0, preventing any other process from
entering its critical section.
3. signal(s) sets the binary semaphore variable s to 1, allowing other processes that want
to enter their critical section to do so.
4. signal(empty) increments the counting semaphore variable empty by one, since when
an item is removed from the buffer, one slot becomes vacant and empty must be
updated correspondingly. (A compilable version of the whole solution follows.)
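
Putting the two fragments together, here is a compilable POSIX sketch of the same solution; it
is an illustration under our own assumptions (a fixed number of items, an 8-slot buffer), with
sem_t s playing the binary semaphore and empty/full the counting semaphores. Compile with
-pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                       /* buffer slots */
int buffer[N], in = 0, out = 0;
sem_t s, empty, full;

void *producer(void *arg)
{
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);         /* wait until empty > 0    */
        sem_wait(&s);             /* acquire lock            */
        buffer[in] = item;        /* add data to buffer      */
        in = (in + 1) % N;
        sem_post(&s);             /* release lock            */
        sem_post(&full);          /* increment full          */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);          /* wait until full > 0     */
        sem_wait(&s);             /* acquire lock            */
        int item = buffer[out];   /* remove data from buffer */
        out = (out + 1) % N;
        sem_post(&s);             /* release lock            */
        sem_post(&empty);         /* increment empty         */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&s, 0, 1);           /* binary semaphore = 1    */
    sem_init(&empty, 0, N);       /* all N slots empty       */
    sem_init(&full, 0, 0);        /* no slot full initially  */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}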

Conclusion:

The naive count-based approach shown earlier leads to race conditions because access to
'count' is unconstrained; the essence of that problem is that a wake-up call, sent to a process
that is not (yet) sleeping, is lost. Because wait and signal are atomic, the semaphore-based
solution above does not suffer from this race.

KEY TAKEAWAY

➢ The Producer-Consumer problem, also known as the Bounded Buffer problem,


addresses the synchronization challenges between processes involved in producing and
consuming items shared in a finite buffer pool.
➢ To resolve this, counting semaphores, specifically "full" and "empty," are employed to
manage the availability of buffers.
➢ The Producer generates items and stores them in the buffer, while the Consumer
consumes items from the buffer.
➢ The problem mandates that the Producer should produce when the buffer is not full,
and the Consumer can only consume when the buffer is not empty.
➢ Mutual exclusion ensures that both Producer and Consumer cannot access the buffer
simultaneously.
➢ Binary and counting semaphores are utilized in the solution to handle critical sections
effectively. Naive solutions without atomic operations leave race conditions,
emphasizing the need for robust synchronization mechanisms.
➢ The solution offers insights into overcoming context-switch-related issues and achieving
consistency in shared resource access through semaphore implementation.



INTERPROCESS COMMUNICATION


SUB LESSON 5.7

CLASSICAL IPC PROBLEMS (READERS & WRITERS PROBLEM)

READERS & WRITERS PROBLEM


In this problem there are some processes (called readers) that only read the shared data and
never change it, and there are other processes (called writers) that may change the data in
addition to, or instead of, reading it. There are various types of readers-writers problems, most
centered on the relative priorities of readers and writers. The main complexity of this problem
comes from allowing more than one reader to access the data at the same time.

There is a common resource that numerous processes should be able to access. In this context,
there are two sorts of processes: readers and writers. Any number of readers can read from
the shared resource at the same time, but only one writer can write to it. No other process can
access the resource while a writer is writing data to it. If a non-zero number of readers are
accessing the resource at the time, a writer cannot write to it.

There are four cases that can occur here.

Case      Process 1    Process 2    Allowed / Not Allowed

Case 1    Writing      Writing      Not Allowed

Case 2    Writing      Reading      Not Allowed

Case 3    Reading      Writing      Not Allowed

Case 4    Reading      Reading      Allowed

Here priority means that no reader should wait if the shared resource is currently open for reading.

Three variables are used to implement the solution: mutex, wrt, and readcnt.

1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion
when readcnt is updated, i.e. when any reader enters or exits the critical section;
semaphore wrt is used by both readers and writers
2. int readcnt; // readcnt gives the number of processes performing reads in the
critical section, initially 0

Functions for semaphore :

– wait() : decrements the semaphore value.

– signal() : increments the semaphore value.



THE SOLUTION

It is clear from the problem description that readers take precedence over writers: a writer
that wants to write to the resource must wait until no readers are currently using it.

We utilize one mutex m and one semaphore w here. The integer variable read_count keeps
track of the number of readers currently accessing the resource; it is initialized to 0. m and w
are both initialized to 1.

Rather than making every reader lock the shared resource itself, we use the mutex m so that a
process acquires and releases a lock whenever the read_count variable is updated.

The code for the writer process looks like this:

while(TRUE)
{
    wait(w);

    /* perform the write operation */

    signal(w);
}

And, the code for the reader process looks like this:

while(TRUE)
{
    // acquire lock
    wait(m);
    read_count++;
    if(read_count == 1)
        wait(w);
    // release lock
    signal(m);

    /* perform the reading operation */

    // acquire lock
    wait(m);
    read_count--;
    if(read_count == 0)
        signal(w);
    // release lock
    signal(m);
}



HERE IS THE CODE EXPLAINED

● As seen in the code for the writer above, the writer just waits on the w semaphore until it
is given the opportunity to write to the resource.
● It increments w after performing the write operation so that the following writer can
access the resource.
● With the reader's code, on the other hand, the lock is obtained whenever the read count
is updated by a process.
● When a reader wishes to access a resource, it first increases the read count value, then
accesses the resource, and finally decrements the read count value.
● The first reader that enters the critical section and the last reader that exits it use the
semaphore w.
● This is because when the first reader enters the critical section, the writer is stopped
from accessing the resource; from that point on, only further readers can access it.
● Similarly, when the final reader exits the critical section, it signals the writer with the w
semaphore, since there are no more readers and the resource is now available to the
writer. (A compilable sketch follows.)
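
For reference, the same logic compiles directly against POSIX semaphores; this is a hedged
sketch with our own scaffolding (one reader and one writer thread, empty critical sections),
not the text's exact program. Compile with -pthread.

#include <pthread.h>
#include <semaphore.h>

sem_t m, w;                 /* both initialized to 1 */
int read_count = 0;

void *writer(void *arg)
{
    sem_wait(&w);           /* wait(w)   */
    /* perform the write operation */
    sem_post(&w);           /* signal(w) */
    return NULL;
}

void *reader(void *arg)
{
    sem_wait(&m);           /* acquire lock on read_count     */
    read_count++;
    if (read_count == 1)    /* first reader locks out writers */
        sem_wait(&w);
    sem_post(&m);           /* release lock                   */

    /* perform the reading operation */

    sem_wait(&m);           /* acquire lock on read_count     */
    read_count--;
    if (read_count == 0)    /* last reader readmits writers   */
        sem_post(&w);
    sem_post(&m);           /* release lock                   */
    return NULL;
}

int main(void)
{
    pthread_t r, wr;
    sem_init(&m, 0, 1);
    sem_init(&w, 0, 1);
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&wr, NULL, writer, NULL);
    pthread_join(r, NULL);
    pthread_join(wr, NULL);
    return 0;
}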

Let's verify each of the cases listed in the table above.

CASE 1: WRITING - WRITING → NOT ALLOWED. That is, when two or more processes want to
write, it is not allowed. Let us see whether our code works accordingly.

Explanation :

The initial value of semaphore write = 1

Suppose two processes P0 and P1 want to write; let P0 enter the writer code first. The
moment P0 enters,

Wait( write ); will decrease semaphore write by one, now write = 0



and P0 continues to WRITE INTO THE FILE.

Now suppose P1 wants to write at the same time. Will it be allowed? Let's see.

P1 does Wait( write ); since the value of write is already 0, by the definition of wait it will spin
in the loop (i.e. be trapped), hence P1 can never write anything while P0 is writing.

Now suppose P0 has finished the task; it will execute

signal( write ); which will increase semaphore write by 1, now write = 1

If P1 now wants to write, it can, since semaphore write > 0.

This proves that if one process is writing, no other process is allowed to write.

CASE 2: READING - WRITING → NOT ALLOWED. That is, when one or more processes are
reading the file, writing by another process is not allowed. Let us see whether our code works
accordingly.

Explanation:

Initial value of semaphore mutex = 1 and variable readcount = 0

Suppose two processes P0 and P1 are in the system; P0 wants to read while P1 wants to write.
P0 enters the reader code first. The moment P0 enters,

Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0

Increment readcount by 1, now readcount = 1, next

if (readcount == 1) // evaluates to TRUE

wait (write); // decrement write by 1, i.e. write = 0, which proves that while one or more
readers are reading, no writer will be allowed

signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are
allowed to enter.

And the reader continues to READ THE FILE.

Suppose now some writer wants to enter its code. Then:

As the first reader has executed wait (write);, the value of write is 0; therefore the writer's
wait( write ); will spin in its loop and no writer will be allowed.

This proves that, if one process is reading, no other process is allowed to write.

Now suppose P0 wants to stop reading and exit; then the following sequence of instructions
takes place:

wait(mutex); // decrease mutex by 1, i.e. mutex = 0

readcount --; // readcount = 0, i.e. no one is currently reading

if (readcount == 0) // evaluates TRUE

signal (write); // increase write by one, i.e. write = 1

signal(mutex);// increase mutex by one, i.e. mutex = 1

Now if again any writer wants to write, it can do it now, since write > 0



CASE 3: WRITING - READING → NOT ALLOWED. That is, when one process is writing into the
file, reading by another process is not allowed. Let us see whether our code works
accordingly.

Explanation:

The initial value of semaphore write = 1

Suppose two processes P0 and P1 are in the system; P0 wants to write while P1 wants to read.
P0 enters the writer code first. The moment P0 enters,

Wait( write ); will decrease semaphore write by 1, now write = 0

and P0 continues to WRITE INTO THE FILE.

Now suppose P1 wants to read at the same time. Will it be allowed? Let's see.

P1 enters reader's code

Initial value of semaphore mutex = 1 and variable readcount = 0

Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0

Increment readcount by 1, now readcount = 1, next

if (readcount == 1) // evaluates to TRUE

wait (write); // since the value of write is already 0, the reader will spin in the loop and not be
allowed to proceed further, which proves that while one writer is writing, no reader will be
allowed



This proves that, if one process is writing, no other process is allowed to read.

The moment the writer stops writing and is willing to exit, it will execute:

signal( write ); which will increase semaphore write by 1, now write = 1

If P1 now wants to read, it can, since semaphore write > 0.

CASE 4: READING - READING → ALLOWED. That is, when one process is reading the file and
other processes also want to read, they are all allowed, i.e. reading - reading is not mutually
exclusive. Let us see whether our code works accordingly.

Explanation:

Initial value of semaphore mutex = 1 and variable readcount = 0

Suppose three processes P0, P1, and P2 are in the system, and all three want to read. Let P0
enter the reader code first; the moment P0 enters,

Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0

Increment readcount by 1, now readcount = 1, next

if (readcount == 1) // evaluates to TRUE

wait (write); // decrement write by 1, i.e. write = 0, which proves that while one or more
readers are reading, no writer will be allowed



signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are
allowed to enter.

And P0 continues to READ THE FILE.

→Now P1 wants to enter the reader code

current value of semaphore mutex = 1 and variable readcount = 1

let P1 enter into the reader code, the moment P1 enters

Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0

Increment readcount by 1, now readcount = 2, next

if (readcount == 1)// eval. to False, it will not enter if block

signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are
allowed to enter.

Now P0 and P1 continue to READ THE FILE.

→Now P2 wants to enter the reader code

current value of semaphore mutex = 1 and variable readcount = 2

let P2 enter into the reader code, The moment P2 enters

Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0

Increment readcount by 1, now readcount = 3, next

if (readcount == 1)// eval. to False, it will not enter if block

signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are
allowed to enter.

Now P0, P1, and P2 continue to READ THE FILE.

Suppose now some writer wants to enter its code. Then:



As the first reader P0 has executed wait (write);, the value of write is 0; therefore the writer's
wait( write ); will spin in its loop and no writer will be allowed.

Now suppose P0 wants to come out of the system (stop reading); then

wait(mutex); //will decrease semaphore mutex by 1, now mutex = 0

readcount --; // on every exit of reader decrement readcount by

one i.e. readcount = 2

if (readcount == 0)// eval. to FALSE it will not enter if block

signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are
allowed to exit

→ Now suppose P1 wants to come out of the system (stop reading); then

wait(mutex); //will decrease semaphore mutex by 1, now mutex = 0

readcount --; // on every exit of reader decrement readcount by

one i.e. readcount = 1

if (readcount == 0)// eval. to FALSE it will not enter if block

signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are
allowed to exit

→ Now suppose P2 (the last process) wants to come out of the system (stop reading); then

wait(mutex); //will decrease semaphore mutex by 1, now mutex = 0

readcount --; // on every exit of reader decrement readcount by

one i.e. readcount = 0

if (readcount == 0) // evaluates to TRUE, so it enters the if block

signal (write); // will increment semaphore write by one, i.e. now write = 1; since P2 was the
last process that was reading and is now leaving, making write = 1 allows the writer to write

signal(mutex); // will increase semaphore mutex by 1, now mutex = 1

The above explanation proves that one or more processes can read simultaneously.

KEY TAKEAWAY

➢ The Reader's and Writer's problem in Interprocess communication involves managing


access to a shared resource by readers and writers.
➢ The challenge is to prioritize readers over writers while ensuring mutual exclusion to
prevent conflicts.
➢ The solution employs three variables - mutex, wrt, and readcnt - and uses semaphores
for synchronization.
➢ Readers take precedence over writers, and the code ensures that while a writer is
writing to the resource, no readers can access it.
➢ The solution effectively handles cases where writing is not allowed when another
process is writing, reading is not allowed while writing is in progress, and writing is not
allowed during reading.
➢ Reading by multiple processes is permitted, demonstrating the mutual exclusivity of
reading and writing.
➢ The implementation involves careful use of semaphores and mutex to achieve
synchronization and prevent conflicts, ensuring a robust solution to the classical IPC
problem.



CLASSICAL IPC PROBLEM

SUB LESSON 5.8

CLASSICAL IPC PROBLEMS (DINING PHILOSOPHER PROBLEM)

DINING PHILOSOPHER PROBLEM


Since 1965 (when Dijkstra posed and solved this synchronization problem), everyone inventing
a new synchronization primitive has tried to demonstrate its abilities by solving the dining
philosophers problem. The problem can be stated quite simply as follows. Five philosophers are
seated around a circular table. Each philosopher has a plate of spaghetti. The spaghetti is so
slippery that a philosopher needs two forks to eat it. Between each pair of plates, there is only
one fork. The life of a philosopher consists of alternate periods of eating and thinking. This is
something of an abstraction, even for philosophers, but the other activities are irrelevant here.
When a philosopher gets hungry, she tries to acquire her left and right forks, one at a time, in
either order. If successful in acquiring two forks, she eats for a while, then puts down the forks,
and continues to think.

The dining philosopher's issue is a classic synchronization problem in which five philosophers sit
around a circular table and alternate between thinking and eating. A bowl of noodles, as well as
five chopsticks for each of the philosophers, is placed in the center of the table. A philosopher
requires both a right and a left chopstick to dine. A philosopher can only eat if both his or her
immediate left and right chopsticks are available. If the philosopher's immediate left and right
chopsticks are not available, the philosopher sets down their (either left or right) chopstick and
resumes pondering. When a resource is shared by multiple processes at the same time, data
inconsistency might occur.

The Dining Philosophers Issue is a common example of process synchronization difficulties in


systems with several processes and limited resources. Assume there are K philosophers seated
around a circular table, with one chopstick placed between each pair of neighbors, according
to the Dining Philosopher Problem. This means that a philosopher can eat only if both
chopsticks adjacent to him/her are available. A neighboring philosopher may be using one of
those chopsticks, but never both.



For example, let’s consider P0, P1, P2, P3, and P4 as the philosophers or processes and C0, C1,
C2, C3, and C4 as the 5 chopsticks or resources between each philosopher. Now if P0 wants to
eat, both resources/chopstick C0 and C1 must be free, which would leave in P1 and P4 void of
the resource and the process wouldn't be executed, which indicates there are limited
resources(C0,C1..) for multiple processes(P0, P1..), and this problem is known as the Dining
Philosopher Problem.

THE SOLUTION OF THE DINING PHILOSOPHERS PROBLEM

Semaphores are the solution to this process synchronization problem. A semaphore is an
integer variable used in solving the critical section problem.



The critical section is a program segment that accesses shared variables or resources. Atomic
action is required in a critical section, which means that only one process can execute in that
region at a time.

The atomic operations of a semaphore are wait() and signal(). If the value of its argument S is
positive, the wait() operation decrements it; it is used to acquire a resource on entry, and
while S is zero or negative the caller must wait. When the critical section is exited, the signal()
operation increments the value of its argument S.

Here's a simple explanation of the solution:

void philosopher(int x)
{
    while (1) {
        // Section where the philosopher picks up and uses the chopsticks
        wait(chopstick[x]);
        wait(chopstick[(x + 1) % 5]);

        /* eat */

        // Section where the philosopher puts the chopsticks down and thinks
        signal(chopstick[x]);
        signal(chopstick[(x + 1) % 5]);
    }
}

Explanation:

● The wait() operations are used when the philosopher picks up the chopsticks while the
others are thinking; wait(chopstick[x]) and wait(chopstick[(x + 1) % 5]) acquire the
philosopher's left and right chopsticks.

● After using the resources, the signal() operations represent the philosopher putting the
chopsticks down and returning to thinking; signal(chopstick[x]) and
signal(chopstick[(x + 1) % 5]) make them available again.

THE DRAWBACK OF THE ABOVE SOLUTION OF THE DINING PHILOSOPHER PROBLEM

The solution above guarantees that no two neighbouring philosophers can eat at the same
time. Its disadvantage is that it may result in a deadlock: if all of the philosophers pick up their
left chopstick at the same time, none of them can acquire a second chopstick, none can eat,
and the system deadlocks.

We can also avoid deadlock through the following methods in this scenario -

1. The maximum number of philosophers at the table should not be more than four. Let's
look at why this works with four processes:
○ Philosopher P3 will be able to use chopstick C4, so P3 will begin eating and
then set down both chopsticks C3 and C4, meaning semaphores C3 and C4
are signaled back to one.



○ Now philosopher P2, who was holding chopstick C2, can also acquire
chopstick C3; he will put both chopsticks down after eating to make room for
the other philosophers.
2. The four beginning philosophers (P0, P1, P2, and P3) must pick the left chopstick first,
then maybe the right, whereas the last philosopher (P4) must pick the right chopstick
first, then the left. Take a look at what happens in this scenario:
○ This forces P4 to try to pick up its right chopstick first; that chopstick is C0,
which is already held by philosopher P0 and whose semaphore value is 0, so P4
is trapped waiting and chopstick C4 stays free.
○ As a result, philosopher P3 can acquire both its left (C3) and right (C4)
chopsticks; it will begin eating and then put both chopsticks down once
finished, allowing others to eat and thereby breaking the deadlock.
3. If the philosopher is in an even position, the right chopstick should be picked first,
followed by the left; if the philosopher is in an odd position, the left chopstick
should be picked first, followed by the right (a runnable sketch of this rule follows
this list).

4. A philosopher should be allowed to choose his or her chopsticks only if both chopsticks
(left and right) are available at the same moment.
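
The sketch below is our own POSIX illustration of method 3, not part of the text: even-numbered
philosophers pick up the right chopstick first while odd-numbered ones pick up the left first, so
the circular wait can never close. Compile with -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
sem_t chopstick[N];                    /* one binary semaphore per chopstick */

void *philosopher(void *arg)
{
    int x = *(int *)arg;
    int left = x, right = (x + 1) % N;
    int first  = (x % 2 == 0) ? right : left;   /* even: right first */
    int second = (x % 2 == 0) ? left  : right;  /* odd: left first   */

    for (int round = 0; round < 3; round++) {
        /* think ... */
        sem_wait(&chopstick[first]);   /* pick the first chopstick  */
        sem_wait(&chopstick[second]);  /* pick the second chopstick */
        printf("philosopher %d eating\n", x);
        sem_post(&chopstick[first]);   /* put both chopsticks down  */
        sem_post(&chopstick[second]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}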

KEY TAKEAWAY

➢ The Dining Philosopher Problem, a classic synchronization challenge, involves five


philosophers seated around a circular table, each with a plate of spaghetti.
➢ The philosophers alternate between periods of thinking and eating, needing two forks
(chopsticks) to consume their slippery noodles. The problem highlights the complexity
of managing shared resources, with each philosopher requiring both left and right
chopsticks to eat.
➢ The solution utilizes semaphores to synchronize access to the chopsticks, ensuring that
philosophers can only dine if both required chopsticks are available. However, this
solution presents a potential deadlock issue, where all philosophers may simultaneously
choose their left chopsticks, leading to a standstill.



➢ Mitigations involve limiting the number of philosophers to four and introducing rules for
chopstick selection based on philosopher position, breaking deadlocks and ensuring a
more efficient dining process.
➢ The Dining Philosopher Problem serves as a valuable illustration of challenges in
concurrent programming and resource management.



DEADLOCKS


SUB LESSON 6.1
DEADLOCKS

DEADLOCKS
Deadlock occurs when you have a set of processes [not necessarily all the processes in the
system], each holding some resources, each requesting some resources, and none of them is
able to obtain what it needs, i.e. to make progress. Those processes are deadlocked because all
the processes are waiting. None of them will ever cause any of the events that could wake up
any of the other members of the set, and all the processes continue to wait forever. For this
model, we assume that processes have only a single thread and that there are no interrupts
possible to wake up a blocked process. The no-interrupts condition is needed to prevent an
otherwise deadlocked process from being awakened by, say, an alarm, and then causing events
that release other processes in the set. In most cases, the event that each process is waiting for
is the release of some resource currently possessed by another member of the set. In other
words, each member of the set of deadlocked processes is waiting for a resource that is owned
by another deadlocked process. None of the processes can run, none of them can release any
resources, and none of them can be awakened. The number of processes and the number and
kind of resources possessed and requested are unimportant. This result holds for any kind of
resource, including both hardware and software.

A deadlock occurs when a collection of processes is stalled because each process is holding a
resource and waiting for another resource to be acquired by another process.

Consider this: if two trains are approaching each other on the same track and there is only one
track, neither train can move once they are in front of each other. A similar situation happens
in operating systems when two or more processes each hold some resources and wait for
resources held by the other(s). For example, Process 1 is holding Resource 1 and waiting for
Resource 2, which is held by Process 2, while Process 2 is waiting for Resource 1.



Necessary Conditions

A deadlock situation can arise if the following four conditions hold simultaneously in a system:

1. Resources are used in mutual exclusion.
2. Resources are acquired piecemeal (i.e. not all the resources needed to complete an
activity are obtained at the same time in a single indivisible action).
3. Resources are not preempted (i.e. a process does not take away resources being held
by another process).
4. Resources are not spontaneously given up by a process until it has satisfied all its
outstanding requests for resources (i.e. a process that cannot obtain some needed
resource does not kindly give up the resources it is currently holding).

EXAMPLE OF DEADLOCK

● A real-world example is traffic on a narrow bridge that can only move in one direction
at a time.
● The bridge is seen as a resource in this context.
● Hence, if one automobile backs up, the deadlock can be resolved (preempt resources
and roll back).
● If a deadlock occurs, several automobiles may have to be backed up.
● As a result, starvation is a possibility.

NECESSARY CONDITIONS FOR DEADLOCKS

1. Mutual Exclusion: A resource can only be shared in a mutually exclusive manner. It


implies that two processes cannot use the same resource at the same time.

2. Hold and Wait: A process waits for some resources while holding another resource at
the same time.

3. No preemption: A resource cannot be forcibly taken away from the process holding it;
the process releases it only voluntarily, after it has finished with it.

4. Circular Wait: One process is waiting for a resource that is being held by the second
process, which is also waiting for a resource that is being held by the third process, and
so on. This will continue until the last process needs a resource that the first process
has. This results in a circular chain.

For example, Process A holds Resource B and requests Resource A, while Process B holds
Resource A and requests Resource B. This results in a circular wait loop.



EXAMPLE OF CIRCULAR WAIT

A computer, for example, contains three USB drives and three processes. Each of the three
processes holds one of the USB drives. As a result, when each process requests another drive,
the three processes will be stuck in a deadlock, since each is waiting for a USB drive that is
presently in use to be released. This produces a circular chain.
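
To make the circular wait concrete in code, the hypothetical sketch below (our own, not from
the text) has two threads lock two pthread mutexes in opposite orders; once each thread holds
its first lock, all four conditions hold and the program usually hangs. Compile with -pthread.

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;  /* Resource A */
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;  /* Resource B */

void *p1(void *arg)
{
    pthread_mutex_lock(&A);   /* hold A ...              */
    sleep(1);                 /* give p2 time to grab B  */
    pthread_mutex_lock(&B);   /* ... and wait for B      */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *p2(void *arg)
{
    pthread_mutex_lock(&B);   /* hold B ...              */
    sleep(1);                 /* give p1 time to grab A  */
    pthread_mutex_lock(&A);   /* ... and wait for A      */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);   /* never returns once deadlocked */
    pthread_join(t2, NULL);
    return 0;
}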

ADVANTAGES OF DEADLOCK HANDLING

Here are the pros/benefits of this approach:

● It is well suited to processes that carry out a single burst of activity.
● No preemption is required.
● The strategy is convenient when applied to resources whose state can be easily saved
and restored.
● It can be enforced through compile-time checks.
● Because the problem is solved during system design, no run-time computation is
required.

DISADVANTAGES OF DEADLOCK HANDLING

Here are the cons/drawbacks of this approach:

● It delays the start of processes.
● Processes must anticipate their future resource requirements.
● It preempts more frequently than necessary.
● It allows or denies incremental resource requests.
● Preemption losses are inherent.

RESOURCE ALLOCATION GRAPH

The resource allocation graph is a graphical representation of a system's status. The resource
allocation graph, as the name implies, contains all of the information about all of the activities
that are holding or waiting for resources.

It also includes information on all instances of all resources, regardless of whether they are
available or being used by the processes.

The process is represented by a circle in the Resource allocation graph, whereas the resource is
represented by a rectangle. Let's take a closer look at the various types of vertices and edges.



Vertices are mainly of two types: resource and process. Each of them is represented by a
different shape: the circle represents a process, while the rectangle represents a resource.

A resource can have more than one instance. Each instance will be represented by a dot inside
the rectangle.



Edges in RAG are also classified into two types: those that represent assignment and those that
show the wait of a process for a resource. Each of them is depicted in the image above.

If the tail of the arrow is attached to an instance of the resource and the head is attached to a
process, the resource is shown as assigned to that process.

If the tail of an arrow is attached to the process while the head is pointing toward the resource,
the process is shown to be waiting for a resource.

EXAMPLE

Consider three processes P1, P2, and P3, as well as two types of resources R1 and R2. Each
resource has a single instance.

According to the graph, P1 is using R1, P2 is holding R2 while waiting for R1, and P3 is waiting
for both R1 and R2.

Because no cycle is generated in the graph, there is no deadlock.
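
On a graph like this, where every resource has a single instance, checking for deadlock reduces
to finding a cycle in the corresponding wait-for graph (processes only, with an edge Pi → Pj
when Pi waits for a resource held by Pj). The sketch below encodes the example above with
our own adjacency-matrix layout and checks it by depth-first search:

#include <stdio.h>

#define NPROC 3
/* wait_for[i][j] == 1 means process i waits for a resource held by j */
int wait_for[NPROC][NPROC] = {
    /* P1 */ {0, 0, 0},       /* P1 waits for nobody        */
    /* P2 */ {1, 0, 0},       /* P2 waits for P1 (via R1)   */
    /* P3 */ {1, 1, 0},       /* P3 waits for P1 and P2     */
};
int state[NPROC];             /* 0 = unvisited, 1 = on stack, 2 = done */

int dfs(int u)
{
    state[u] = 1;                          /* u is on the DFS stack */
    for (int v = 0; v < NPROC; v++) {
        if (!wait_for[u][v]) continue;
        if (state[v] == 1) return 1;       /* back edge: cycle      */
        if (state[v] == 0 && dfs(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void)
{
    for (int u = 0; u < NPROC; u++)
        if (state[u] == 0 && dfs(u)) {
            printf("deadlock: cycle found\n");
            return 0;
        }
    printf("no cycle, no deadlock\n");     /* matches the example   */
    return 0;
}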



KEY TAKEAWAY

➢ Deadlocks in operating systems occur when a set of processes, each holding certain
resources and requesting others, cannot make progress. This situation arises from four
necessary conditions: mutual exclusion, hold and wait, no preemption, and circular wait.
➢ A deadlock can be likened to two trains approaching each other on a single track,
neither able to move once they meet.
➢ To prevent deadlocks, operating systems employ strategies such as violating mutual
exclusion, addressing hold and wait situations, and implementing preemption. However,
the feasibility of these techniques varies due to resource characteristics and potential
inconsistencies.
➢ Deadlock prevention aims to design systems that minimize the risk of deadlocks,
ensuring safety and efficient resource allocation.
➢ In terms of disadvantages, deadlock methods may delay process starts, require
processes to anticipate future resource needs, and lead to more frequent preemption.
➢ The resource allocation graph is a useful tool for representing the status of resources
and processes, aiding in the analysis and prevention of deadlocks.



DEADLOCKS


SUB LESSON 6.2
DEADLOCK PREVENTION

DEADLOCK HANDLING METHODS


● Prevention: design the system so that at least one of the four necessary conditions
for deadlock is violated, ensuring that deadlock cannot occur.
● Avoidance: the system keeps information (such as maximum resource claims) from
which it decides, for each new request, whether granting it keeps the system in a safe
state.
● Detection & recovery: in this case, we wait until a deadlock arises and then recover
from it.

DEADLOCK PREVENTION IN OPERATING SYSTEM


A process is a series of steps. When a process runs, it requires resources like CPU cycles,
files, or access to peripheral devices, and certain resource requests may result in a deadlock.
Deadlock prevention means removing one of the necessary conditions for deadlock, so that
only safe requests are made to the OS and the possibility of deadlock is excluded before
requests are even made. Because requests are now made with care, the operating system can
grant all of them safely. The operating system does not need to perform any additional work
here, as it does in deadlock avoidance, where it runs an algorithm on each request to check for
the risk of deadlock.

DEADLOCK PREVENTION TECHNIQUES

A deadlock prevention strategy works by violating one of the four necessary conditions. We'll
look at each condition individually to see how it can be violated in order to make only safe
requests, and which strategy is the most practical for preventing deadlock.



MUTUAL EXCLUSION

Certain resources, such as printers, are intrinsically unshareable, and processes require
exclusive control of them. Mutual exclusion means that processes cannot access an
unshareable resource at the same time. Shared resources do not cause deadlock; the trouble
is that some resources cannot be shared among processes, and these can give rise to deadlock.

For Example, Several processes can perform read operations on a file at the same time, but not
write operations. Because write operations need sequential access, some processes must wait
while another process performs a write action. Mutual exclusion cannot be eliminated since
some resources are intrinsically non-shareable.

HOLD AND WAIT

A hold-and-wait condition occurs when a process holds one resource while simultaneously
waiting for another process to hold another resource. The process cannot proceed until all of
the necessary resources are obtained. As shown in the diagram below:

● Resource 1 is allocated to Process 2


● Resource 2 is allocated to Process 1



● Resource 3 is allocated to Process 1
● Process 1 is waiting for Resource 1 and holding Resource 2 and Resource 3
● Process 2 is waiting for Resource 2 and holding Resource 1

There are two ways to eliminate hold and wait:-

1. By eliminating wait: The process specifies the resources it requires in advance so that it
does not have to wait for allocation after execution starts.

For Example, Process1 declares in advance that it requires both Resource1 and
Resource2

2. By eliminating hold: The process has to release all resources it is currently holding
before making a new request.
For Example, Process1 has to release Resource2 and Resource3 before making a request
for Resource1

Challenges:

● Because a process executes instructions one by one, it cannot be aware of all necessary
resources prior to execution.
● Releasing all the resources a process is currently holding is also problematic, because
other processes may not even need them, so they are released unnecessarily.
● For example, if Process1 releases both Resource2 and Resource3, Resource3 is released
unnecessarily because Process2 does not require it.

NO PREEMPTION

Preemption is temporarily interrupting an executing task and later resuming it. For example, if
process P1 is using a resource and a high-priority process P2 requests for the resource, process
P1 is stopped and the resources are allocated to P2.



There are two ways to eliminate this condition by preemption:

1. If a process is holding some resources while waiting for others, it should release all
previously held resources and make a fresh request for the necessary resources. When
all of the necessary resources are available, the procedure can be restarted.

As an example: If a process has resources R1, R2, and R3 and is waiting for resource R4,
it must relinquish resources R1, R2, and R3 and make a new request for all resources.
2. If a process P1 is waiting for some resource, and there is another process P2 that is
holding that resource and is blocked waiting for some other resource. Then the resource
is taken from P2 and allocated to P1. This way process P2 is preempted and it requests
again for its required resources to resume the task. The above approaches are possible
for resources whose states are easily restored and saved, such as memory and registers.

Challenges:

● These approaches are troublesome since the process may be actively utilizing these
resources, and stopping it via preempting can result in inconsistencies. For example, if a
process starts writing to a file and its access is denied before it completes the update,
the file will remain unusable and inconsistent.
● It is inefficient and time-consuming to submit requests for all resources again.

CIRCULAR WAIT

In a circular wait, two or more processes wait for resources in a circular order. We can
understand this better by the diagram given below:



To eliminate circular wait, we assign a priority to each resource. A process can only request
resources in increasing order of priority.

In the example above, process P3 requests resource R1, whose number is lower than that of
resource R3, which is already allocated to P3. Because requests must be made in increasing
order of priority, this request is invalid and cannot be made; R1 is, in any case, already
allocated to process P1.
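
In code, this rule is commonly enforced by always acquiring locks in one fixed global order; the
helper below (our own illustration, ordering by address) shows the idea with POSIX mutexes:

#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a fixed global order (by address), so that
 * no two processes can ever wait on each other in a cycle. */
void lock_both(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if ((uintptr_t)a > (uintptr_t)b) {  /* normalize the order           */
        pthread_mutex_t *t = a; a = b; b = t;
    }
    pthread_mutex_lock(a);              /* lower-numbered resource first */
    pthread_mutex_lock(b);              /* then the higher-numbered one  */
}

void unlock_both(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a);            /* release order does not matter */
    pthread_mutex_unlock(b);
}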

Challenges:

● It is difficult to assign a relative priority to resources, as one resource can be prioritized


differently by different processes.
For Example, A media player will give a lesser priority to a printer while a document
processor might give it a higher priority. The priority of resources is different according
to the situation and use case.

FEASIBILITY OF DEADLOCK PREVENTION

● Mutual exclusion cannot be totally eradicated since some resources are intrinsically
unshareable.



● Hold and wait cannot be abolished since we cannot predict the resources required to
avoid waiting. Preventing a hold by releasing all resources while requesting a new one is
inefficient.

● Preempting processes can lead to inconsistency because restarting the process by


requesting all resources again is wasteful.

● The only feasible approach to avoid deadlock is to eliminate the circular wait.

KEY TAKEAWAY

➢ Deadlock prevention in operating systems involves designing systems that violate at


least one of the four necessary conditions for deadlock, ensuring a deadlock-free
environment.
➢ This proactive approach aims to eliminate the possibility of deadlocks before resource
requests are made, unlike deadlock avoidance, which assesses the safety of requests
through simulations.
➢ Various techniques are employed to prevent deadlocks, including violating mutual
exclusion, addressing hold and wait situations by either eliminating the wait or the hold,
and implementing preemption strategies to temporarily interrupt processes.
➢ The challenging aspect lies in the feasibility of these prevention techniques. Mutual
exclusion and hold and wait conditions cannot be entirely eliminated due to the nature
of certain resources.
➢ Preemption introduces efficiency concerns and potential inconsistencies, making it
challenging to implement in all cases.
➢ The most feasible approach involves eliminating circular wait conditions by assigning
priorities to resources and restricting processes to request resources in an increasing
order of priority.
➢ Overall, deadlock prevention strategies aim to create systems that prioritize safety and
minimize the risk of deadlock occurrences.



DEADLOCKS


SUB LESSON 6.3
DEADLOCK AVOIDANCE: BANKER’S ALGORITHM

DEADLOCK AVOIDANCE
In deadlock avoidance, assuming the system is in a safe state (i.e. a state from which there is a
sequence of allocations and releases of resources that allows all processes to terminate) and
certain resources are requested, the system simulates the allocation of those resources and
determines whether the resultant state is safe. If it is safe the request is granted; otherwise, it
is delayed until it becomes safe. The Banker's Algorithm is used to determine if a request can
be satisfied. It requires knowledge of which processes are competing and what their resource
needs are. Deadlock avoidance is essentially not used in distributed systems.

The operating system employs deadlock avoidance to avoid deadlock. It does so by knowing in
advance the maximum resource requirements of the processes, as well as the free resources
available at the time. The operating system attempts to allocate resources according to the
process requirements and checks whether the allocation leads to a safe or an unsafe state. If
allocating the resources would result in an unsafe state, the operating system does not
proceed with the allocation.



HOW DOES DEADLOCK AVOIDANCE WORK?

Let's understand the working of deadlock avoidance with the help of a simple example.

Process    Maximum Required    Currently Allocated    Need

P1         9                   5                      4

P2         5                   2                      3

P3         3                   1                      2

Let's consider three processes P1, P2, P3. Some more information on which the processes tell
the Operating System are:

● To finish its execution, the P1 process requires a maximum of 9 resources (which can be
any software or hardware resource such as a tape drive or printer). P1 presently has 5
Resources and needs 4 more to complete its execution.
● The P2 process requires a maximum of 5 resources and is presently allocated two. As a
result, it requires three extra resources to finish its execution.
● The P3 process requires a maximum of three resources and is presently given one. As a
result, it requires two additional resources to finish its execution.
● The operating system is aware that just two of the total available resources are now free.



But only 2 resources are free now. Can P1, P2, and P3 satisfy their requirements? Let's try to
find out.

Because just two resources are currently available, only P3 can meet its requirement of two
more resources. If P3 takes those two resources and completes its execution, it can release
three (1+2) resources. The three resources freed by P3 can then meet P2's needs, so P2 can
finish its execution and release 5 (2+3) resources. Five resources are now free, and P1 can use
four of them to complete its execution. Hence, with only two free resources available at the
start, all processes were able to finish, resulting in a safe state. The processes were executed
in the order P3 -> P2 -> P1.

What if initially only 1 free resource had been available? None of the processes would have
been able to complete its execution, leading to an unsafe state.

We use two words, safe and unsafe states. What are those states? Let's understand these
concepts.

SAFE STATE AND UNSAFE STATE

Safe State - In the preceding example, we observed that the Operating System was able to meet
the resource requirements of all three processes, P1, P2, and P3. As a result, all of the processes
were able to complete their execution in the order P3->P2->P1. As a result, if the operating
system can allocate or satisfy the maximum resource requirements of all processes in any
sequence, the system is considered to be in a safe state. As a result, the safe state does not
result in a deadlock.

Unsafe State - If the operating system cannot prevent resource requests from leading toward
deadlock, the system is said to be in an unsafe state. An unsafe state does not always result in
deadlock; it may or may not.



A system can therefore be in one of three states: safe, unsafe, or deadlocked. An unsafe state
does not always cause deadlock, but every deadlock arises from an unsafe state.

DEADLOCK AVOIDANCE EXAMPLE

Let's take an example that has multiple resources requirement for every Process. Let there be
three Processes P1, P2, P3, and 4 resources R1, R2, R3, R4. The maximum resource
requirements of the Processes are shown in the below table.

Process R1 R2 R3 R4

P1 3 2 3 2

P2 2 3 1 4

P3 3 1 5 0



A number of currently allocated resources to the processes are

Process R1 R2 R3 R4

P1 1 2 3 1

P2 2 1 0 2

P3 2 0 1 0

The total number of resources in the System are:

R1 R2 R3 R4

7 4 5 4

We can find out the no of available resources for each of P1, P2, P3 by subtracting the currently
allocated resources from the total resources.

Available Resources are :

R1 R2 R3 R4

2 1 1 1



Now, The need for the resources for the processes can be calculated by:

Need = Maximum Resources Requirement - Currently Allocated Resources.

The need for the Resources is shown below:

Process R1 R2 R3 R4

P1 2 1 0 1

P2 0 2 1 2

P3 1 1 4 0

The available free resources are <2,1,1,1> of R1, R2, R3, and R4, which can initially satisfy only
the requirements of process P1, because process P2 requires two R2 resources, which are not
available. The same is true for process P3, which requires four R3 resources that are not
initially available.

The steps for resource allotment are explained below:

1. Initially, process P1 will use the available resources to meet its resource requirements,
then complete its execution and release all of its allotted resources. Process P1 is
initially assigned <1,2,3,1> resources of R1, R2, R3, and R4 and needs <2,1,0,1> more
to finish its execution. Therefore, process P1 takes the available free resources
<2,1,1,1> of R1, R2, R3, R4, completes its execution, and then releases both its
originally allocated resources and the free resources it used. Hence, P1 releases
<1+2, 2+1, 3+1, 1+1> = <3,3,4,2> resources of R1, R2, R3, and R4.



2. Now the available resources are <3,3,4,2>. Process P2 needs <0,2,1,2> resources of R1,
R2, R3, and R4, which the available resources can satisfy, so P2 completes its execution
and releases its allocated <2,1,0,2> resources. The available resources become
<5,4,4,4>.

3. The only process still waiting to be executed is process P3, which needs <1,1,4,0>
resources of R1, R2, R3, and R4. It can readily use the available resources and complete
its execution. After P3, the available resources are <7,4,5,4>, which equals the total
resources in the system.

Hence, in the preceding example, the process execution sequence was <P1, P2, P3>. It could
also have been <P1, P3, P2>, had process P3 been executed before process P2; this was
possible because after step 1 above there were enough resources available to meet the needs
of both P2 and P3.

DEADLOCK AVOIDANCE SOLUTION

Deadlock Avoidance can be solved by two different algorithms:

● Resource allocation Graph


● Banker's Algorithm

BANKER'S ALGORITHM
This is modeled on the way a small-town banker might deal with customers' lines of credit. In
the course of conducting business, the banker would observe that customers rarely draw their
credit lines to the limit. This, of course, suggests the idea of extending more credit than the
amount the banker actually holds in her coffers.

The Banker's algorithm does the same thing we showed in the deadlock avoidance example: it
predicts whether or not the system will remain in a safe state by simulating the allocation of
resources to processes, based on the declared maximum resource needs and the currently
available resources. The Banker's Algorithm is especially useful when there are many
processes and resources.



In this example, we have a process table with a number of processes. It includes an Allocation
field (showing how many resources of types A, B, and C are currently allocated to each
process), a Max field (showing the maximum number of resources of types A, B, and C that
each process may need), and an Available field (showing the currently available resources of
each type).

Processes    Allocation    Max      Available
             A B C         A B C    A B C

P0           1 1 2         5 4 4    3 2 1

P1           2 1 2         4 3 3

P2           3 0 1         9 1 3

P3           0 2 0         8 6 4

P4           1 1 2         2 2 3

Considering the above process table, we need to work out the following two things:

Q.1 Calculate the Need matrix. Q.2 Is the system in a safe state?



Ans.1 The Need matrix is calculated as (Need)i = (Max)i − (Allocation)i :

Process    Need
           A B C

P0         4 3 2

P1         2 2 1

P2         6 1 2

P3         8 4 4

P4         1 1 1

Ans.2 Let us check for a safe sequence:

1. For process P0, Need = (4, 3, 2) and Available = (3, 2, 1) Clearly, the resources needed are
more in number than the available ones. So, now the system will move to process the next
request.

2. For Process P1, Need = (2, 2, 1) and Available = (3, 2, 1) Clearly, the resources needed are less
than equal to the available resources within the system. Hence, the request of P1 is granted.

Available=Available+Allocation = (3, 2, 1) + (2, 1, 2) = (5, 3, 3) (New Available)



3. For Process P2, Need = (6, 1, 2) and Available = (5, 3, 3) Clearly, the resources needed are
more in number than the available ones. So, now the system will move to process the next
request.

4. For Process P3, Need = (8, 4, 4) and Available = (5, 3, 3) Clearly, the resources needed are
more in number than the available ones. So, now the system will move to process the next
request.

5. For Process P4, Need = (1, 1, 1) and Available = (5, 3, 3) Clearly, the resources needed are less
than equal to the available resources within the system. Hence, the request for P4 is granted.

Available=Available+Allocation = (5, 3, 3) + (1, 1, 2) = (6, 4, 5) (New Available)

6. Now again check for Process P2, Need = (6, 1, 2) and Available = (6, 4, 5) Clearly, resources
needed are less than equal to the available resources within the system. Hence, the request of
P2 is granted.

Available=Available+Allocation = (6, 4, 5) + (3, 0, 1) = (9, 4, 6) (New Available)

7. Now again check for Process P3, Need = (8, 4, 4) and Available = (9, 4, 6) Clearly, resources
needed are less than equal to the available resources within the system. Hence, the request of
P3 is granted.

Available=Available+Allocation = (9, 4, 6) + (0, 2, 0) = (9, 6, 6) (New Available)

8. Now again check for Process P0, Need = (4, 3, 2), and Available (9, 6, 6) Clearly, the request
for P0 is also granted.

Safe sequence: <P1,P4,P2,P3,P0>



The system has allocated all the required number of resources to each process in a particular
sequence. Therefore, it is proved that the system is in a safe state.
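
The safety check above can be automated. The program below is a sketch under our own array
layout, hard-coding the example's Allocation, Max, and Available; it computes Need and
greedily finds a safe sequence. Note that several safe sequences can exist, so its output need
not match the text's <P1, P4, P2, P3, P0> exactly; finding any safe sequence proves safety.

#include <stdio.h>

#define P 5   /* processes */
#define R 3   /* resource types A, B, C */

int alloc[P][R] = {{1,1,2},{2,1,2},{3,0,1},{0,2,0},{1,1,2}};
int maxm [P][R] = {{5,4,4},{4,3,3},{9,1,3},{8,6,4},{2,2,3}};
int avail[R]    = {3,2,1};

int main(void)
{
    int need[P][R], finished[P] = {0}, seq[P], n = 0;

    for (int i = 0; i < P; i++)              /* Need = Max - Allocation */
        for (int j = 0; j < R; j++)
            need[i][j] = maxm[i][j] - alloc[i][j];

    while (n < P) {
        int progress = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)      /* Need_i <= Available?    */
                if (need[i][j] > avail[j]) { ok = 0; break; }
            if (!ok) continue;
            for (int j = 0; j < R; j++)      /* P_i runs and releases   */
                avail[j] += alloc[i][j];
            finished[i] = 1;
            seq[n++] = i;
            progress = 1;
        }
        if (!progress) { printf("unsafe state\n"); return 1; }
    }
    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", seq[i]);
    printf("\n");                            /* one valid safe sequence */
    return 0;
}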

CONSTRAINTS OF BANKERS ALGORITHM

The single constraint in the Banker's algorithm is that Available must always satisfy at least one
process's resource needs, so that the system does not become unsafe and the program does
not have to roll back to the original allocation state.

ADVANTAGES OF BANKERS ALGORITHM IN OS

● The [MAX] array attribute in the Banker's algorithm in OS indicates the maximum
number of resources of each category that a process may hold. Using the [MAX]
array, we can always compute the remaining resource need of a process.
● [Need]=[MAX]−[Allocated]
● This algorithm aids in the detection and avoidance of deadlock, as well as the
management and control of process requests for each type of resource inside the
system.
● Each process should inform the operating system about impending resource demands,
the number of resources available, any delays, and how long the resources will be held
by the process before being released. This is also one of the key features of the Bankers
algorithm.
● Various types of resources are maintained by the system while using this algorithm,
which can fulfill the needs of at least one process type.
● This algorithm also consists of two other advanced algorithms for maximum resource
allocation.



DISADVANTAGES OF BANKER'S ALGORITHM IN OS

● The Banker's algorithm does not allow a process to change its maximum resource requirement while it is executing.

● Another downside of this approach is that the maximum resource requirement of every process must be known to the system in advance.

● It requires a fixed number of processes, and no further processes can be launched in the meantime.

● This technique guarantees that all process resource requests are granted within a finite time, although the maximum period for assigning resources may be as long as one year.

KEY TAKEAWAY

➢ Deadlock Avoidance, implemented through the Banker's Algorithm in operating


systems, plays a crucial role in preventing deadlock situations. This strategy ensures that
the system remains in a safe state, allowing processes to terminate through a sequence
of resource allocations and releases.
➢ The Banker's Algorithm simulates resource allocation requests and assesses whether the
resulting state is safe; if not, the request is delayed until safety is guaranteed.
➢ The algorithm relies on knowledge of competing transactions and their resource needs.
By understanding maximal resource requirements and available resources, the
operating system allocates resources judiciously, avoiding unsafe conditions that may
lead to deadlocks.
➢ The algorithm's effectiveness is demonstrated through examples, showcasing scenarios
where processes can or cannot satisfy their resource requirements based on the
available resources.
➢ Despite its advantages in deadlock prevention and resource management, the Banker's
Algorithm has limitations, such as the need for predetermined maximal resource
requirements and a fixed number of processes.



DEADLOCKS



SUB LESSON 6.4

DEADLOCK DETECTION AND RECOVERY

DEADLOCK DETECTION
Often neither deadlock avoidance nor deadlock prevention may be used. Instead, deadlock
detection and recovery are used by employing an algorithm that tracks resource allocation and
process states, and rolls back and restarts one or more of the processes in order to remove the
deadlock. Detecting a deadlock that has already occurred is easily possible since the resources
that each process has locked and/or currently requested are known to the resource scheduler
or OS. Detecting the possibility of a deadlock before it occurs is much more difficult and is, in
fact, generally undecidable, because the halting problem can be rephrased as a deadlock
scenario. However, in specific environments, using specific means of locking resources,
deadlock detection may be decidable. In the general case, it is not possible to distinguish
between algorithms that are merely waiting for a very unlikely set of circumstances to occur
and algorithms that will never finish because of deadlock.

Deadlocks must be detected and resolved by the operating system if they occur. Deadlock
detection techniques like the Wait-For Graph are used to detect deadlocks, and recovery
algorithms like the Rollback and Abort algorithm are used to resolve them. The recovery
algorithm frees resources held by one or more processes, allowing the system to keep working.



There are two deadlock detection methods depending upon the number of instances of each
resource:

1. Single instance of each resource

2. Multiple instances of each resource

SINGLE INSTANCE OF EACH RESOURCE: WAIT-FOR-GRAPH

When there is a single instance of each resource, the system can use a wait-for-graph for deadlock detection. The key points about the wait-for-graph are:

1. It is obtained from the resource allocation graph.

2. Remove the resource nodes from the resource allocation graph.

3. Collapse the corresponding edges.

WORKING OF WAIT-FOR-GRAPH



A Pi-Pj Edge indicates that process Pi is waiting for a resource owned by process Pj. Now, if the
resource allocation graph has two edges Pi -> Rq and Rq -> Pj, for some resource Rq, combine
these into a single edge from Pi -> Pj to create the wait-for-graph. Lastly, if there is a cycle in the
wait-for-graph, the system is in a deadlock; otherwise, it is not.

In the example above, P2 is seeking R3, which is owned by P5. As a result, we eliminate R3 and
collapse the edge between P2 and P5. This is reflected in the wait-for-graph. Similarly, the
edges between P4 and P1 have collapsed, as have all other processes.
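The collapsing step and the cycle check can be sketched in a few lines of Python. The process and resource edges below are hypothetical, chosen to produce a two-process deadlock; the cycle test itself is an ordinary depth-first search.

    # Resource allocation graph edges (hypothetical):
    requests = {"P1": ["R1"], "P2": ["R2"]}     # Pi -> Rq: Pi requests Rq
    assignments = {"R1": "P2", "R2": "P1"}      # Rq -> Pj: Rq is held by Pj

    # Collapse the resource nodes: Pi -> Pj whenever Pi requests an Rq held by Pj.
    wait_for = {p: [assignments[r] for r in rs if r in assignments]
                for p, rs in requests.items()}

    def has_cycle(graph):
        WHITE, GREY, BLACK = 0, 1, 2            # unvisited / on stack / done
        color = {n: WHITE for n in graph}

        def dfs(n):
            color[n] = GREY
            for m in graph.get(n, []):
                if color.get(m, WHITE) == GREY:
                    return True                 # back edge: a cycle exists
                if color.get(m, WHITE) == WHITE and dfs(m):
                    return True
            color[n] = BLACK
            return False

        return any(color[n] == WHITE and dfs(n) for n in graph)

    print(has_cycle(wait_for))  # True: P1 and P2 wait on each other -> deadlock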

MULTIPLE INSTANCES OF EACH RESOURCE

When resources have multiple instances, detecting a cycle is a necessary but not a sufficient condition for deadlock; the system may or may not be in deadlock, depending on the situation.

DEADLOCK RECOVERY



Because deadlock recovery is time and space-intensive, standard operating systems such as
Windows do not support it. Deadlock recovery is used in real-time operating systems.

1. Killing the process –

Kill the processes that are engaged in the deadlock, one by one. After terminating each process, check whether the deadlock still holds, and continue until the system recovers. Killing processes one at a time helps the system break the circular-wait condition.

2. Resource Preemption –
Resources preempted from the processes involved in the deadlock are assigned to other processes, giving the system a chance to recover. In this case, however, the preempted processes may suffer starvation.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES OF DEADLOCK DETECTION AND RECOVERY IN OPERATING SYSTEMS:

1. Improved System Stability: System-wide stalls can be caused by deadlocks, and detecting and resolving deadlocks helps to enhance system stability.
2. Better Resource Utilization: The operating system can ensure that resources are
used efficiently and that the system stays responsive to user demands by identifying
and resolving deadlocks.
3. Better System Design: Deadlock detection and recovery methods can provide
insight into system behaviour and the linkages between processes and resources,
assisting in informing and improving system design.



DISADVANTAGES OF DEADLOCK DETECTION AND RECOVERY IN OPERATING
SYSTEMS:

1. Performance Overhead: Deadlock detection and recovery procedures can impose a considerable performance overhead, since the system must check for deadlocks frequently and take the necessary measures to resolve them.
2. Complexity: Deadlock detection and recovery algorithms can be difficult to build,
especially if advanced techniques like the Resource Allocation Graph or
timestamping are used.
3. False Positives and Negatives: Deadlock detection algorithms are not flawless and
may generate false positives or negatives, suggesting the presence of deadlocks
when none exist or failing to detect deadlocks when they do exist.
4. Risk of Data Loss: In some situations, recovery procedures may require reverting one or more processes to a previous state, resulting in data loss or corruption.

Generally, the approach used to identify and recover from deadlocks is determined by the
system's specific requirements, the trade-offs between performance, complexity, and accuracy,
and the system's risk tolerance. To efficiently detect and resolve deadlocks, the operating
system must balance these aspects.



MEMORY MANAGEMENT



SUB LESSON 7.1
INTRODUCTION OF MEMORY MANAGEMENT

MEMORY MANAGEMENT IN OPERATING SYSTEM


Memory is the electronic holding place for instructions and data that the computer's microprocessor can reach quickly. When the computer is in normal operation, its memory usually contains the main parts of the operating system and some or all of the application programs and related data that are being used. Memory is often used as a shorter synonym for random access memory (RAM). This kind of memory is located on one or more microchips that are physically close to the microprocessor in the computer. Most desktop and notebook computers sold today include at least 16 megabytes of RAM and are upgradeable to include more. The more RAM you have, the less frequently the computer has to access instructions and data from the more slowly accessed hard disk form of storage.

Memory is sometimes distinguished from storage, the physical medium that holds the much larger amounts of data that won't fit into RAM and may not be immediately needed there. Storage devices include hard disks, floppy disks, CD-ROM, and tape backup systems. The terms auxiliary storage, auxiliary memory, and secondary memory have also been used for this kind of data repository. Additional kinds of integrated and quickly accessible memory are read-only memory (ROM), programmable ROM (PROM), and erasable programmable ROM (EPROM). These are used to keep special programs and data, such as the basic input/output system, that need to be in the computer all the time.

Memory is a resource that needs to be managed carefully. Most computers have a memory hierarchy, with a small amount of very fast, expensive, volatile cache memory, some number of megabytes of medium-speed, medium-price, volatile main memory (RAM), and hundreds of thousands of megabytes of slow, cheap, non-volatile disk storage. It is the job of the operating system to coordinate how these memories are used.



Memory management is a means of controlling and managing the operation of random access memory (primary memory) in an operating system. It is used to boost concurrency, system performance, and memory utilization.

The practice of shifting processes from primary memory to secondary memory and vice versa is
known as memory management. It also keeps track of available memory, memory allocation,
and free memory.

WHY MEMORY MANAGEMENT IS REQUIRED:

● Memory management keeps track of each memory location's status, whether it is


allocated or free.
● Memory management allows computer systems to run programs that require more
main memory than the system's available free main memory. This is accomplished by
transferring data between primary and secondary memory.
● Memory management deals with the system's primary memory by providing
abstractions that make the applications running on the system believe a significant
amount of memory has been allocated to them.
● Memory management's job is to keep the memory allotted to all processes from being
corrupted by other processes. If this is not done, the computer may behave
unexpectedly or incorrectly.
● Memory management allows processes to share memory spaces, allowing numerous
programs to live in the same memory address (although only one at a time).



Next, we discuss the concepts of logical address space and physical address space:

LOGICAL AND PHYSICAL ADDRESS SPACE:

Logical address space: A "logical address" is an address generated by the CPU. It is often referred to as a virtual address. The logical address space is the set of all logical addresses generated for a process, so it describes the size of the process. A logical address can be altered.

Physical address space: A "physical address" is an address seen by the memory unit (i.e., one loaded into the memory address register). It is also called a real address. Physical address space refers to the collection of all physical addresses that correspond to these logical addresses. The Memory Management Unit (MMU), a hardware device, performs the run-time mapping from virtual to physical addresses and thus generates the physical address. The physical address never changes.

STATIC AND DYNAMIC LOADING:

Loading a process into the main memory is done by a loader. There are two different types of loading:

Static loading: the entire program is loaded into a fixed address. It requires more memory space.

Dynamic loading: to execute a process, the complete program and its data must be in physical memory, so the size of a process is ordinarily restricted by the amount of physical memory available. To maximize memory utilization, dynamic loading loads a routine only when it is called; all routines are stored on disk in a relocatable load format. One benefit of dynamic loading is that unused routines are never loaded, which makes it particularly useful when a large amount of code (such as error-handling routines) is needed only occasionally.



Static and Dynamic linking: A linker is used to conduct linking tasks. A linker is a program that
merges one or more object files produced by a compiler into a single executable file.

Static linking: The linker in static linking integrates all required program modules into a single
executable program. As a result, there is no runtime reliance. Certain operating systems only
enable static linking, which treats system language libraries like any other object module.

Dynamic linking: The fundamental principle of dynamic linking is similar to that of dynamic
loading. "Stub" is included in dynamic linking for each eligible library procedure reference. A
stub is a short section of code. When the stub is executed, it checks to see if the required
routine is already in memory. If not available then the program loads the routine into memory.

SWAPPING:

Any operating system has a fixed amount of physical memory available. Usually, applications
need more than the physical memory installed on your system, for that purpose the operating
system uses a swap mechanism: instead of storing data in physical memory, it uses a disk file.
Swapping is the act of moving processes between memory and a backing store. This is done to
free up available memory. Swapping is necessary when there are more processes than available
memory. At the coarsest level, swapping is done a process at a time. To move a program from
fast-access memory to slow-access memory is known as “swap out”, and the reverse operation
is known as “swap in”. The term often refers specifically to the use of a hard disk (or a swap file)
as virtual memory or “swap space”. When a program is to be executed, possibly as determined
by a scheduler, it is swapped into core for processing; when it can no longer continue executing
for some reason, or the scheduler decides its time slice has expired, it is swapped out again.

When a process is executed, it must be in memory. Swapping is the process of temporarily moving a process from the main memory to the secondary memory, which is slower than the main memory. Swapping allows more processes to run and fit into memory at the same



time. The transfer time is the most important aspect of swapping, and the overall time is directly proportional to the amount of memory swapped. When a higher-priority process arrives and requests service, the memory manager can swap out a lower-priority process and then load and run the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and its execution continues.

KEY TAKEAWAY
➢ Memory management in an operating system is a crucial aspect responsible for
controlling and optimizing the usage of random access memory (RAM).



➢ It involves coordinating the allocation and deallocation of memory, ensuring efficient
utilization, and enhancing system performance.
➢ Memory management keeps track of the status of each memory location, facilitates the
transfer of processes between primary and secondary memory, and provides
abstractions for applications to perceive significant memory allocations.
➢ The management of logical and physical address spaces is fundamental, where logical
addresses, generated by the CPU, represent a process's virtual address space, while
physical addresses, viewed by the memory unit, correspond to the actual memory
locations.
➢ The concepts of static and dynamic loading, as well as linking, are employed to load
processes into memory efficiently.
➢ Swapping, a mechanism utilized by the operating system, involves moving processes
between main memory and secondary storage, enabling the execution of more
processes than the available physical memory can accommodate.
➢ This dynamic swapping process contributes to efficient multitasking and overall system
performance.



MEMORY MANAGEMENT



SUB LESSON 7.2
CONTIGUOUS MEMORY MANAGEMENT SCHEMES

MEMORY MANAGEMENT TECHNIQUES

The memory management techniques can be classified into the following main categories:

● Contiguous memory management schemes


● Non-Contiguous memory management schemes

CONTIGUOUS MEMORY MANAGEMENT SCHEMES

The real challenge of efficiently managing memory is seen in the case of a system which has
multiple processes running at the same time. Since primary memory can be space-multiplexed,



the memory manager can allocate a portion of primary memory to each process for its own
use. However, the memory manager must keep track of which processes are running in which
memory locations, and it must also determine how to allocate and deallocate available memory
when new processes are created and when old processes complete execution. While various
different strategies are used to allocate space to processes competing for memory, three of the
most popular are Best fit, Worst fit, and First fit.

In a contiguous memory management scheme, each program occupies a single contiguous block of storage locations, i.e., a set of memory locations with consecutive addresses.

SINGLE CONTIGUOUS MEMORY MANAGEMENT SCHEMES:

The simplest memory management strategy, employed in the first generation of computer systems, was single contiguous memory management. In this approach the main memory is separated into two contiguous sections or partitions. The operating system resides permanently in one partition, usually in lower memory, and user processes are loaded into the other.

Advantages of Single contiguous memory management schemes:

● Simple to implement.
● Easy to manage and design.
● In a single contiguous memory management scheme, once a process is loaded, it is given full processor time, and no other process will interrupt it.

Disadvantages of Single contiguous memory management schemes:

● Memory is wasted, since a process is unlikely to use all of the memory space allocated to it.
● The CPU remains idle while the disk loads the binary image into main memory.
● A program cannot run if it requires more memory than the allotted main memory space.



● It does not allow multiprogramming, which means it cannot run numerous applications
at the same time.

MULTIPLE PARTITIONING:

The single contiguous memory management scheme is inefficient, as it limits computers to executing only one program at a time, wasting both memory space and CPU time. The problem of inefficient CPU use can be overcome using multiprogramming, which allows more than one program to run concurrently. To switch between two processes, the operating system needs to load both processes into the main memory, so it must divide the available main memory into multiple parts. Thus multiple processes can reside in the main memory simultaneously.

The multiple partitioning schemes can be of two types:

● Fixed Partitioning
● Dynamic Partitioning

FIXED PARTITIONING

In the fixed partition memory management technique, or static partitioning, the main memory is divided into a number of fixed-sized partitions. These partitions may be of the same or different sizes. Each partition can hold only one process. The number of partitions defines the degree of multiprogramming, i.e., the maximum number of processes that can be kept in memory. The partitions are created at system generation time and remain fixed afterward.

Advantages of Fixed Partitioning memory management schemes:

● Simple to implement.
● Easy to manage and design.

Disadvantages of Fixed Partitioning memory management schemes:



There are various cons of using this technique.

1. Internal Fragmentation



If the size of the process is less than the total size of the partition, some of the partition's space is wasted and remains unused. This wasted memory is known as internal fragmentation.

As illustrated in the figure below, a 4 MB partition used to load a 3 MB process wastes the remaining 1 MB.

2. External Fragmentation

Even when there is space available but not in a contiguous manner, the entire unused space of
multiple partitions cannot be used to load the processes.

The remaining 1 MB space of each partition, as illustrated in the graphic below, cannot be used
as a unit to contain a 4 MB process. Despite the fact that there is enough space to load the
process, it will not be loaded.

3. Limitation on the size of the process

If the process size is more than the maximum partition size, the process cannot be loaded into
memory. As a result, the process size can be restricted to no greater than the size of the largest
partition.

4. Degree of multiprogramming is less

The greatest number of processes that can be loaded into memory at the same time is referred
to as the degree of multiprogramming. Because the size of the partition cannot be modified
according to the size of the processes, the degree of multiprogramming in fixed partitioning is
fixed and very low.

DYNAMIC PARTITIONING

Dynamic partitioning was created to address the shortcomings of a fixed partitioning system.
With a dynamic partitioning strategy, each process only takes up as much RAM as it needs
when it is loaded for processing. Requested processes are given memory until the physical



memory is depleted or the remaining space is insufficient to hold the requesting process. The
partitions utilised in this scheme are of variable size, and the number of partitions is not
specified at the time the system is created.

Advantages of Dynamic Partitioning memory management schemes:

1. NO INTERNAL FRAGMENTATION



Given that dynamic partitioning creates partitions based on the needs of the process, it is
obvious that there would be no internal fragmentation because there will be no unused
residual space in the partition.

2. NO LIMITATION ON THE SIZE OF THE PROCESS

In fixed partitioning, processes larger than the largest partition could not be executed due to the lack of adequate contiguous memory. In dynamic partitioning the process size cannot be restricted in this way, since the partition size is determined by the process size.

3. DEGREE OF MULTIPROGRAMMING IS DYNAMIC

Because there would be no unused space in the partition due to the lack of internal
fragmentation, more processes can be loaded into memory at the same time.

Disadvantages of Dynamic Partitioning memory management schemes:

1. EXTERNAL FRAGMENTATION

The absence of internal fragmentation does not rule out external fragmentation.



Consider three processes, P1 (1 MB), P2 (3 MB), and P3 (1 MB), loaded into their respective main memory partitions.

P1 and P3 complete after some time, and their assigned space is freed. There are now two unused partitions (1 MB and 1 MB) available in main memory, but they cannot be used to load a 2 MB process because they are not contiguous.

The rule states that a process must be wholly present in main memory in order to execute. To avoid external fragmentation, we must relax this rule.

2. COMPLEX MEMORY ALLOCATION

The list of partitions is created once and never changes in fixed partitioning, but in dynamic partitioning the allocation and deallocation are quite complex, since the partition size varies every time memory is assigned to a new process. The operating system must keep track of all partitions.



Because dynamic memory allocation involves frequent allocation and deallocation, and the
partition size changes with each allocation, it will be extremely difficult for the operating
system to manage everything.

PARTITION ALLOCATION

Memory is partitioned or divided into blocks, and each process is assigned a block based on its requirements. Allocating partitions according to process size is an effective strategy for limiting internal fragmentation.
Below are the various partition allocation schemes:
● First Fit: allocates the first sufficiently large block found from the beginning of main memory.
● Best Fit: assigns the process to the smallest free partition that is large enough.
● Worst Fit: assigns the process to the partition with the biggest sufficient free space in main memory.
● Next Fit: similar to First Fit in many ways, except that it looks for the first sufficient partition starting from the last allocation point.

KEY TAKEAWAY

➢ Memory management in operating systems employs various techniques, primarily


categorized into contiguous and non-contiguous memory management schemes.
➢ In contiguous memory management, programs are allocated in a single, uninterrupted
block of storage locations.
➢ Single contiguous memory management involves partitioning the main memory into
two sections, with the operating system permanently residing in one and user processes
in the other.
➢ However, this approach suffers from limitations like memory waste and an inability to
support multiprogramming. To address these issues, multiple partitioning schemes, such
as fixed partitioning and dynamic partitioning, are employed.
➢ Fixed partitioning divides memory into fixed-sized partitions, leading to challenges like
internal and external fragmentation. On the other hand, dynamic partitioning
overcomes these drawbacks by allocating memory based on process requirements,
eliminating internal fragmentation.



➢ However, it introduces new challenges, including external fragmentation and complex
memory allocation. Various partition allocation schemes, such as First Fit, Best Fit,
Worst Fit, and Next Fit, are utilized to optimize the allocation process based on different
criteria.
➢ Overall, memory management techniques aim to enhance system performance by
efficiently utilizing available resources.



MEMORY MANAGEMENT



SUB LESSON 7.3
PARTITION ALLOCATION

PARTITION ALLOCATION
Memory is partitioned or divided into blocks, and each process is assigned a block based on its requirements. Allocating partitions according to process size is an effective strategy for limiting internal fragmentation.
Below are the various partition allocation schemes:
● First Fit: allocates the first sufficiently large block found from the beginning of main memory.
● Next Fit: similar to First Fit in many ways, except that it looks for the first sufficient partition starting from the last allocation point.
● Best Fit: assigns the process to the smallest free partition that is large enough.
● Worst Fit: assigns the process to the partition with the biggest sufficient free space in main memory.

HOW DOES FIRST FIT WORK?


Whenever a process (P1) arrives with a memory allocation request, the following happens –

● OS sequentially searches available memory blocks from the first index


● Assigns the first memory block large enough to accommodate the process

Whenever a new process P2 arrives, the OS does the same thing: it searches from the first index again.



WORKING EXAMPLE FOR FIRST FIT

Example: the memory blocks available, as shown in the figure, are {100, 50, 30, 120, 35}

PROCESS P1, SIZE: 20



● The OS searches memory sequentially from the start
● Block 1 fits, so P1 is assigned to block 1

PROCESS P2, SIZE: 60

● The OS searches memory sequentially from block 1 again
● Block 1 is unavailable; blocks 2 and 3 can't fit
● Block 4 fits, so P2 is assigned to block 4

PROCESS P3, SIZE: 70

● The OS searches memory sequentially from block 1 again
● Block 1 is unavailable; blocks 2 and 3 can't fit. Block 4 is unavailable; block 5 can't fit
● P3 remains unallocated

Similarly, P4 is assigned to block 2
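The walk-through above can be condensed into a short sketch. The following Python is a minimal illustration of first fit using the same block sizes and process requests; it simply scans the free blocks from the start and takes the first one that is large enough.

    blocks = [100, 50, 30, 120, 35]       # free block sizes from the example
    allocated = [None] * len(blocks)      # which process occupies each block

    def first_fit(process, size):
        for i, block in enumerate(blocks):
            if allocated[i] is None and block >= size:
                allocated[i] = process
                return i                  # index of the chosen block
        return None                       # no free block is large enough

    for p, size in [("P1", 20), ("P2", 60), ("P3", 70), ("P4", 40)]:
        i = first_fit(p, size)
        print(p, "->", "block %d" % (i + 1) if i is not None else "unallocated")
    # P1 -> block 1, P2 -> block 4, P3 -> unallocated, P4 -> block 2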

ADVANTAGES

● Easy to implement.
● The OS can allocate processes quickly, since the search is simpler and faster than in the other methods (Best Fit, Worst Fit, Next Fit, etc.).

DISADVANTAGES

● Causes huge internal fragmentation.
● Smarter allocation may be achieved by the best-fit algorithm.
● High chance that some processes remain unallocated because of the simple search strategy.
● More overhead as compared to next fit.

HOW DOES NEXT FIT WORK?

Next fit is a variant of first fit in which memory is searched for empty places in a manner similar to the first fit allocation scheme. The only difference between next fit and first fit is that the search resumes from the position where the previous search stopped, rather than from the beginning.

Next fit can therefore be described as a modified first fit: it uses a moving pointer that travels along the empty memory slots to look for the next sufficient block, and so it avoids always allocating memory from the start of the memory space. A sketch of this scheme follows.
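Below is a minimal sketch of next fit, reusing the same hypothetical block list as the first fit example; the variable last is the moving pointer that records where the previous search stopped.

    blocks = [100, 50, 30, 120, 35]
    allocated = [None] * len(blocks)
    last = 0                              # moving pointer: where the last search ended

    def next_fit(process, size):
        global last
        n = len(blocks)
        for step in range(n):             # scan at most one full circle
            i = (last + step) % n
            if allocated[i] is None and blocks[i] >= size:
                allocated[i] = process
                last = i                  # the next search resumes from here
                return i
        return None                       # no free block is large enough

    print(next_fit("P1", 20))   # block index 0
    print(next_fit("P2", 40))   # resumes near block 0 and finds block index 1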



HOW DOES BEST FIT WORK?
In the best fit memory allocation scheme, the operating system searches for a free block that can –

● Accommodate the process

● Leave the minimum memory wastage

Example –

In the figure for this example, the process size is 40.

While blocks 1, 2 and 4 can accommodate the process, block 2 is chosen because it leaves the lowest memory wastage.
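A minimal sketch of best fit over the same hypothetical block list: among all free blocks that can hold the process, it picks the one that leaves the least leftover space.

    blocks = [100, 50, 30, 120, 35]
    allocated = [None] * len(blocks)

    def best_fit(process, size):
        # indices of free blocks large enough for the process
        candidates = [i for i in range(len(blocks))
                      if allocated[i] is None and blocks[i] >= size]
        if not candidates:
            return None
        i = min(candidates, key=lambda i: blocks[i] - size)   # least leftover
        allocated[i] = process
        return i

    print(best_fit("P1", 40))   # index 1 (block 2): the 50-unit block wastes only 10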



ADVANTAGE:

The block that leaves the least leftover space is allocated to the process. This scheme is considered the best approach, as it results in the most optimized memory allocation.

It also reduces internal fragmentation.

DISADVANTAGE:

However, finding the best-fit memory allocation may be time-consuming.

HOW DOES WORST FIT WORK?


Worst fit works in the following way for any given process Pn.

The algorithm searches sequentially, starting from the first memory block, for the memory block that fulfills the following conditions (a sketch follows the list) –

● Can accommodate the process size

● Leaves the largest wasted space (fragmentation) after the process is allocated to the memory block
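A minimal sketch of worst fit, again over the same hypothetical block list: among all free blocks that can hold the process, it picks the largest one, deliberately leaving the biggest leftover hole.

    blocks = [100, 50, 30, 120, 35]
    allocated = [None] * len(blocks)

    def worst_fit(process, size):
        # indices of free blocks large enough for the process
        candidates = [i for i in range(len(blocks))
                      if allocated[i] is None and blocks[i] >= size]
        if not candidates:
            return None
        i = max(candidates, key=lambda i: blocks[i])   # largest block
        allocated[i] = process
        return i

    print(worst_fit("P1", 40))  # index 3 (block 4): the 120-unit block leaves 80 unused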



KEY TAKEAWAY

➢ Memory partition allocation plays a crucial role in optimizing system performance, and
various schemes are employed to efficiently assign memory blocks to processes.
➢ First Fit allocates the first available memory block that can accommodate a process, but
it leads to significant internal fragmentation.
➢ Next Fit, a variant of First Fit, resumes its search from the last position, aiming to reduce
allocation overhead.
➢ Best Fit allocates memory blocks that not only fit the process but also minimize memory wastage, resulting in optimized memory allocation and reduced internal fragmentation.
➢ Worst Fit, on the other hand, selects a memory block that accommodates the process while leaving the largest wasted space.
➢ Although Best Fit minimizes fragmentation, it can be time-consuming, while Worst Fit
emphasizes efficient space utilization over allocation speed.
➢ Each allocation scheme has its advantages and disadvantages, influencing the overall
performance of memory management in an operating system.



MEMORY MANAGEMENT



SUB LESSON 7.4
NON-CONTIGUOUS MEMORY MANAGEMENT SCHEMES

NON-CONTIGUOUS MEMORY MANAGEMENT SCHEMES

With this sort of memory allocation, a process can acquire several memory blocks at different locations in memory. Unlike contiguous memory allocation, which places a process in one free area, the space given to a process may be scattered throughout the system.
Non-contiguous memory allocation is slower than contiguous memory allocation, but it largely avoids wasted memory. Its two main forms are segmentation and paging, and the pieces of a process may be placed at any location in memory.
For example, a process with three segments, say P1, P2, and P3, may be allocated in RAM in non-adjacent memory blocks, such as block 1 (-> P1), block 3 (-> P2), and block 5 (-> P3).



PAGING
It is a technique for increasing the memory space available by moving infrequently-used parts of a program's working memory from RAM to a secondary storage medium, usually a hard disk. The unit of transfer is called a page.

A memory management unit (MMU) monitors access to memory and splits each address into a page number (the most significant bits) and an offset within that page (the lower bits). It then looks up the page number in its page table. The page may be marked as paged in or paged out. If it is paged in, the memory access can proceed after translating the virtual address to a physical address. If the requested page is paged out, space must be made for it by paging out some other page, i.e. copying it to disk. The requested page is then located in the area of the disk allocated for "swap space" and is read back into RAM. The page table is updated to indicate that the page is paged in and its physical address is recorded. The MMU also records whether a page has been modified since it was last paged in; if it has not been modified, there is no need to copy it back to disk and the space can be reused immediately.

Paging allows the total memory requirements of all running tasks (possibly just one) to exceed the amount of physical memory, whereas swapping simply allows multiple processes to run concurrently, so long as each process on its own fits within physical


memory. On operating systems such as Windows NT, Windows 2000, or UNIX, the memory is logically divided into pages. When the system needs a portion of memory that is currently in the swap (this is called a page fault), it loads the corresponding pages into RAM. When a page is not accessed for a long time, it is saved back to disk and discarded.

In a virtual memory system, it is common to map between virtual addresses and physical addresses by means of a data structure called a page table. A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process; physical addresses are those unique to the CPU, i.e., RAM. The page number of an address is usually found from the most significant bits of the address; the remaining bits yield the offset of the location within the page. The page table is normally indexed by page number and contains information on whether the page is currently in main memory, and where it is in main memory or on disk. Conventional page tables are sized to the virtual address space and store the entire virtual address space description of each process. Because of the need to keep the virtual-to-physical translation time low, a conventional page table is structured as a fixed, multi-level hierarchy, and can be very inefficient at representing a sparse virtual address space unless the allocated pages are carefully aligned to the page table hierarchy.

Paging is thus a storage strategy used in operating systems to retrieve processes from secondary storage and store them in main memory as pages. The primary idea behind paging is to divide each process into separate pages, while the main memory is divided into frames. Each page of a process is stored in one of the memory frames. The pages can be stored anywhere in memory, although the system prefers contiguous frames or holes where possible. Process pages are brought into main memory only when they are required; otherwise, they remain in secondary storage.

The size of a frame varies based on the operating system, but every frame must be the same size. Because pages are mapped to frames, the page size must be the same as the frame size.
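To make the page-number/offset split concrete, here is a minimal Python sketch of the translation, assuming a tiny 16-byte page size and a small hypothetical page table (real systems use page sizes such as 4 KB and hardware support):

    PAGE_SIZE = 16                       # assumed tiny page size, for illustration only
    page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (hypothetical)

    def translate(logical_address):
        page = logical_address // PAGE_SIZE    # most significant part of the address
        offset = logical_address % PAGE_SIZE   # position within the page
        if page not in page_table:
            raise Exception("page fault: page %d is not in main memory" % page)
        frame = page_table[page]
        return frame * PAGE_SIZE + offset      # the offset is reused within the frame

    print(translate(35))   # page 2, offset 3 -> frame 7 -> physical address 115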



Figure 7.4.2: Paging

Example:
One of the simplest implementations is to build the page table as a collection of registers. However, because the size of the page table is often enormous and register capacity is limited, the page table is instead kept in main memory.
This strategy results in no external fragmentation, because any free frame can be allocated to a process that demands it. Internal fragmentation, on the other hand, persists.
● If a process requires 'n' pages, 'n' frames must be employed.
● The first frame on the free-frame list is loaded with the process's initial page, and the frame number is then inserted into the page table.



The frame table is a data structure that records which frames are allocated and which are available, along with other bookkeeping information. Each physical page frame is represented by one entry in this table.

The operating system preserves a copy of the instruction counter and registers contents for
each process in the same way that it keeps a copy of the page table for each process. This copy



is also used to transform logical addresses to physical addresses when the operating system
manually maps a logical address to a physical address.
The CPU dispatcher utilizes this copy to define the hardware page table when a process has to
be assigned to the CPU.

Advantages of paging:

● Pages reduce external fragmentation.


● Simple to implement.
● Memory efficient.
● Due to the equal size of frames, swapping becomes very easy.
● It is used for faster access of data.

SEGMENTATION
It is very common for the size of program modules to change dynamically. For instance, the programmer may have no knowledge of the size of a growing data structure. If a single address space is used, as in the paging form of virtual memory, once memory is allocated for modules they cannot vary in size. This restriction results in either wastage or shortage of memory.

To avoid this problem, some computer systems provide many independent address spaces, each of which is called a segment. The address of each segment begins with 0, and segments may be compiled separately. In addition, segments may be protected individually or shared between processes. However, segmentation is not transparent to the programmer the way paging is: the programmer is involved in establishing and maintaining the segments.

Segmentation, like paging, is one of the most common ways to achieve memory protection. An instruction operand that refers to a memory location includes a value that identifies a segment and an offset within that segment. A segment has a set of permissions, and a length, associated with it. If the currently running process is allowed by the permissions to make the type of reference to memory that it is attempting to make, and the offset within the segment is within the range specified by the length of the segment, the reference is permitted; otherwise, a hardware exception is delivered.

In addition to the set of permissions and length, a segment also has associated with it information indicating where the segment is located in memory. It may also have a flag indicating whether the segment is present in main memory; if the segment is not present, an exception is delivered and the operating system reads the segment into memory from secondary storage. The location information might be the address of the first location in the segment, or the address of a page table for the segment. In the first case, the offset within the segment is added to the address of the first location in the segment to give the address in memory of the referred-to item; in the second case, the offset is translated to a memory address using the page table. A memory management unit (MMU) is responsible for translating a segment and offset within that segment into a memory address, and for performing checks to make sure the translation can be done and that the reference to that segment and offset is permitted.

Segmentation is thus a memory management technique that splits memory into varying-sized pieces. Each piece is known as a segment, and each segment can be associated with a process.
Each segment is recorded in a table called the segment table, which is itself contained in one (or more) segments. The segment table primarily holds two pieces of information about each segment:
1. Base: the segment's starting address.
2. Limit: the length of the segment.

WHY IS SEGMENTATION REQUIRED?


Until now, we have relied on paging as our primary memory management strategy. Paging, however, reflects the operating system's view rather than the user's. It divides all processes into pages even when a process has related pieces of functionality that ought to be loaded on the same page.

The operating system is indifferent to the user's view of the process: it may split the same function across many pages, which may or may not be loaded into memory concurrently, and the system's efficiency declines as a result.

It is better to divide the process into segments, where each segment groups related functions together. For example, the main function can be placed in one segment and the library functions in another.

Example:
The segmentation example below shows five segments numbered 0 to 4, which are kept in physical memory as shown. Each segment has its own entry in the segment table, which contains both the segment's starting address in physical memory (referred to as the base) and its length (denoted as the limit).



Segment 2 begins at location 4300 and is 400 bytes long. A reference to byte 53 of segment 2 is therefore mapped to location 4300 + 53 = 4353. A reference to byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852 = 4052.
A reference to byte 1222 of segment 0 would result in a trap to the OS, because this segment is only 1000 bytes long.
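A minimal sketch of this base/limit translation follows. The base of segment 0 and the limit of segment 3 are not stated in the example above, so the values 1400 and 1100 used here are assumptions.

    # segment -> (base, limit); 1400 and 1100 are assumed values
    segment_table = {0: (1400, 1000),
                     2: (4300, 400),
                     3: (3200, 1100)}

    def translate(segment, offset):
        base, limit = segment_table[segment]
        if offset >= limit:                     # out of range: deliver a trap
            raise Exception("trap: offset %d beyond segment limit %d" % (offset, limit))
        return base + offset

    print(translate(2, 53))     # 4300 + 53  = 4353
    print(translate(3, 852))    # 3200 + 852 = 4052
    print(translate(0, 1222))   # raises a trap: segment 0 is only 1000 bytes long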

SEGMENTATION WITH PAGING


Pure segmentation is not extensively utilized, and many operating systems do not support it.
Segmentation and paging, on the other hand, can be combined to optimize the benefits of both
tactics.



In segmented paging, the primary memory is partitioned into variable-size segments, which are then divided into fixed-size pages.
1. Pages are smaller than segments.
2. Because each segment has its own page table, each program has numerous page tables.
3. The logical address consists of a Segment Number (base address), a Page Number, and a Page Offset.

Segment Number: selects the relevant segment, and hence its page table.
Page Number: selects the specific page inside that segment.
Page Offset: locates the word within the frame; it is used unchanged as the offset into the frame.

Each page table holds information about every page of its segment, while the segment table holds information for each segment. Each page table entry corresponds to one page contained inside a segment, and each segment table entry points to a page table.
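A minimal sketch of this two-level lookup, with hypothetical sizes and table contents: the segment number selects a per-segment page table, the page number selects a frame, and the offset is used unchanged within the frame.

    PAGE_SIZE = 1024                     # assumed page/frame size

    # segment number -> that segment's own page table (page number -> frame number)
    segment_page_tables = {
        0: {0: 3, 1: 8},                 # hypothetical contents
        1: {0: 1},
    }

    def translate(segment, page, offset):
        frame = segment_page_tables[segment][page]   # two-level lookup
        return frame * PAGE_SIZE + offset            # offset is kept as-is

    print(translate(0, 1, 100))  # frame 8 -> physical address 8292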

KEY TAKEAWAY
➢ Memory management in operating systems involves various strategies for efficient
allocation and use of system memory.
➢ In non-contiguous memory management schemes, processes can acquire memory
blocks at different locations, unlike contiguous allocation where all available space is in
one area.
➢ Two main non-contiguous memory management techniques are segmentation and
paging. Segmentation involves dividing memory into varying-sized segments, each
associated with a process, offering flexibility but requiring manual management.
➢ Paging, on the other hand, divides processes into fixed-size pages, simplifying allocation
but potentially leading to internal fragmentation.
➢ Paging allows the efficient use of main memory by swapping infrequently-used parts of
a program between RAM and secondary storage.



➢ It uses a page table to map virtual addresses to physical addresses, facilitating dynamic
memory allocation. Combining segmentation and paging can optimize the benefits of
both approaches.
➢ In Segmented Paging, primary memory is divided into variable-size segments, which are
further divided into fixed-size pages.
➢ Each segment has its own page table, enhancing memory management efficiency.
Overall, these non-contiguous memory management techniques contribute to effective
memory utilization in operating systems.



MEMORY MANAGEMENT



SUB LESSON 7.5

VIRTUAL MEMORY

VIRTUAL MEMORY
Many of us use computers on a daily basis. Although we use them for many different purposes in many different ways, we share one common reason for using them: to make our jobs more efficient and easier. However, there are times when computers cannot run as fast as we want them to, or simply cannot handle certain processes effectively, due to a shortage of system resources. When the limitations of system resources become a major barrier to achieving maximum productivity, we often consider the obvious ways of upgrading the system, such as switching to a faster CPU, adding more physical memory (RAM), installing utility programs, and so on. As a computer user, you want to make the most of the resources available; the process of preparing plans to coordinate the total system to operate in the most efficient manner is called system optimization.

When it comes to system optimization, there is one great invention of modern computing called virtual memory. It is an imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. You can think of virtual memory as an alternate set of memory addresses. Programs use these virtual addresses rather than real addresses to store instructions and data. When the program is actually executed, the virtual addresses are converted into real memory addresses.

The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize. To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on disk until it is needed; when it is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses. The process of translating virtual addresses into real addresses is called mapping, and the copying of virtual pages from disk to main memory is known as paging or swapping.

Some physical memory is used to keep a list of references to the most recently accessed information on an I/O (input/output) device, such as the hard disk. The optimization it provides



is that it is faster to read the information from physical memory than use the relevant I/O
channel to get that information. This is called caching. It is implemented inside the OS.

Virtual memory is a memory management solution that employs both hardware and software.
It is a component of secondary storage that provides the user with the impression that it is a
component of the main memory. It facilitates the execution of several applications with limited
main memory and enhances the degree of multiprogramming in systems. Demand paging is
typically used to implement it.

WHAT IS VIRTUAL MEMORY?


Consider your computer's physical memory to be the front section of your wardrobe: you keep the clothes you want to wear at the front so they are easily accessible. In this analogy you are the operating system, and the back of the wardrobe is the hard drive. The garments at the back can also be accessed when necessary, although it may take a little longer. You keep track of where you have put each item so it is easier to find later; the page table is exactly this kind of mapping mechanism.

Virtual memory is a component of the system's secondary memory that acts and seems to be a
component of the main memory. Virtual memory enables a system to run heavy applications or
several applications concurrently without running out of RAM (Random Access Memory). The
system can, for example, act as if its total RAM resources were equal to the total amount of
physical RAM plus the total amount of virtual Memory.



When numerous demanding apps are operating at the same time, the system's RAM may get
overwhelmed. To address this issue, certain data in RAM that isn't currently being used can be
temporarily transferred to virtual memory. This frees up RAM space, which can then be used to
store data that the system needs.

A system can function normally with far less physical RAM than it would ordinarily require. This
can be accomplished by transferring data between RAM and virtual memory as needed.

THE HISTORY OF VIRTUAL MEMORY

Early computers had only primary and secondary storage. Storage was extremely expensive back then, so expanding it was problematic. The system ran out of memory when programs grew and requested more memory than was available; when this occurred, programs were compelled to use the hard drive's storage. But because access to the disk was sluggish, designers reasoned that it would be beneficial to have some form of fast, cheap memory mechanism to resolve this dilemma.



The first virtual memory was created in 1959. The criticism that the concept received at the time led to refinements that produced the virtual memory that exists today. By the late 1970s, the concept of virtual memory had been thoroughly researched and incorporated into commercial computers.

Yet, it was not until 1985, when Intel developed virtual memory, that virtual memory in desktop
computers became a reality. That is how virtual memory entered our daily lives.

HOW VIRTUAL MEMORY WORKS?

Let's understand the working of virtual memory using the example shown below.

Imagine that an operating system needs 500 MB of memory to keep all its processes running, but only 10 MB of actual RAM is available. The operating system then allocates the remaining 490 MB as virtual memory and manages it using the Virtual Memory Manager (VMM): the VMM creates a 490 MB file on the hard disk to hold the additional memory required. The system proceeds to access memory as if 500 MB of real memory were present in RAM, even though only 10 MB of physical space is available. It is the VMM's obligation to manage 500 MB of memory while having only 10 MB of real RAM.

PAGE FRAMES

A page frame is used to structure physical memory. A page frame's size is a power of two bytes
and varies between platforms.

The CPU accesses processes using their logical addresses, whereas the main memory only
recognizes the physical address. This is made easier by the Memory Management Unit, which
converts page numbers (logical addresses) to frame numbers (physical address). Both have the
same offset.

The page table associates the page number with the frame number. The graphic above shows
how the frame number and frame offset work together to help us find the needed word.



PAGE TABLE

A page table is a logical data structure that virtual memory uses to store the mapping between
virtual and physical locations. The accessing process's programs use virtual addresses, but the
hardware, particularly the random-access memory (RAM) subsystem, employs physical
addresses. The page table is an essential component of virtual address translation, which is
necessary to access memory data.

Let us suppose, as shown in the picture above, that the physical address space, logical address
space, and page size are M, L, and P words, respectively. The physical address, logical address,
and page offset can then be specified as follows:

Physical Address = m bits, where m = log2 M
Logical Address = l bits, where l = log2 L
Page offset = p bits, where p = log2 P

Removing the page offset from an address leaves the frame number (m − p bits) of a physical address and the page number (l − p bits) of a logical address.



Because the page table is maintained per process, the number of entries in the page table equals the number of pages in the process, i.e. 2^(l − p), and each entry stores a frame number of m − p bits.
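As a small worked example of this bit arithmetic (the sizes below are assumptions, not values from the text):

    import math

    M, L, P = 2**24, 2**16, 2**10       # physical space, logical space, page size (assumed, in words)
    m = int(math.log2(M))               # physical address bits -> 24
    l = int(math.log2(L))               # logical address bits  -> 16
    p = int(math.log2(P))               # page offset bits      -> 10

    print("frame number bits:", m - p)           # 14
    print("page number bits:", l - p)            # 6
    print("page table entries:", 2 ** (l - p))   # 64, one entry per page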

WHAT IS DEMAND PAGING?

Consider the wardrobe analogy that we discussed before. Perhaps you're browsing an e-
commerce site and fall in love with certain clothes. But, because you don't have enough space
in your wardrobe, you put it on your wishlist and intend to buy it only when you need it.
Demand paging, in contrast, is a method that saves pages of a process that are infrequently
utilized in secondary memory and draws them only when needed to satisfy the demand.

As a result, when a context switch occurs, the system begins executing the new program after loading its first page, and it fetches further pages of the application only as they are referenced. If the software attempts to access a page that is no longer present in main memory because it was swapped out, the CPU treats the access as an invalid memory reference (a page fault), and the page must be brought back in.



After Program A completes its execution, its pages are swapped out of the RAM they previously occupied. Program B then swaps in the pages it needs. This is a common application of demand paging.
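A minimal sketch of this load-on-first-access behaviour follows; the frame pool and page numbers are purely illustrative.

    page_table = {}             # page -> frame, filled only on demand
    free_frames = [0, 1, 2]     # hypothetical pool of free frames

    def access(page):
        if page in page_table:              # page already resident: no fault
            return page_table[page]
        # page fault: bring the page in from secondary storage
        if not free_frames:
            raise Exception("no free frame; a replacement policy would evict a page here")
        frame = free_frames.pop(0)
        page_table[page] = frame            # record the new mapping
        print("page fault: loaded page", page, "into frame", frame)
        return frame

    access(5)   # first access: page fault, the page is loaded on demand
    access(5)   # second access: already resident, no fault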

SWAP IN AND SWAP OUT

When primary memory (RAM) is insufficient to store the data required by several applications, we swap out: certain programs are transferred from RAM to the hard drive. Similarly, as RAM becomes available, we swap applications back in from the hard disk to RAM. With swapping, we can manage multiple processes within the same memory, and it is a low-cost way of creating virtual memory.



For example, Process 1 may swap in, moving pages from the hard disc to RAM as needed, while Process 2 swaps out when available memory is low, shifting some of its pages from RAM to the hard disc.

ADVANTAGES OF VIRTUAL MEMORY

● Improves the degree of multiprogramming: You can run multiple apps at the same time
without purchasing additional memory RAMs.
● Data Sharing: Memory regions mapped into several address spaces allow processes to share data.
● Avoids Relocation: The code in physical memory can be accessed whenever it is needed
without the need for relocation.

● Increases memory: Users can run larger apps on computers that don't have enough
RAM. As a result, it may run applications that require more memory than physical
memory.
● Improves CPU effectiveness: Because more processes can be kept in memory, the CPU is utilized more efficiently. Pages not currently needed are temporarily kept on disc and dropped from RAM.

DISADVANTAGES OF VIRTUAL MEMORY

1. Slows down the system: Swapping takes time, which reduces the system's speed; moving pages between RAM and disc makes switching between applications slower.

2. Less hard disk space: Virtual memory consumes storage space that could otherwise be used to store long-term data.

3. Reduces System Stability: Heavy paging can have a detrimental impact on a system's overall stability and performance.

KEY TAKEAWAY
➢ Virtual memory is a crucial aspect of modern computing that plays a significant role in
optimizing system performance.
➢ It acts as an imaginary memory area supported by the operating system and hardware,
providing an extended address space for programs to utilize.
➢ In situations where physical memory (RAM) is limited, virtual memory enables the
execution of multiple applications simultaneously by temporarily storing unused data on
the hard disk.
➢ The process involves mapping virtual addresses to real memory addresses, known as
mapping, and transferring virtual pages between disk and main memory, known as
paging or swapping.
➢ This dynamic allocation of memory allows systems to function efficiently with less
physical RAM, enhancing the degree of multiprogramming.
➢ Virtual memory has evolved since its inception in 1959, becoming a standard feature in
contemporary operating systems.

➢ Despite its advantages, such as improved CPU effectiveness and the ability to run larger
applications, virtual memory has drawbacks, including potential system slowdowns,
reduced hard disk space, and an impact on overall system stability.
➢ In summary, virtual memory is a crucial mechanism for efficient memory management,
contributing to enhanced system performance and multitasking capabilities in modern
computing environments.

MEMORY MANAGEMENT

SUB LESSON 7.6
PAGE REPLACEMENT ALGORITHMS

PAGE REPLACEMENT ALGORITHMS


When the number of available real-memory frames on the free list becomes low, a page stealer is invoked. The page stealer moves through the Page Frame Table (PFT) looking for pages to steal. The PFT includes flags to signal which pages have been referenced and which have been modified. If the page stealer encounters a page that has been referenced, it does not steal that page but instead resets the page's reference flag. The next time the clock hand (the page stealer) passes that page, if the reference bit is still off, the page is stolen. A page that was not referenced in the first pass is stolen immediately.

The modified flag indicates that the data on a page has changed since it was brought into memory. When a page is to be stolen and its modified flag is set, a page-out call is made before the page is stolen: pages that are part of working segments are written to paging space, while persistent segments are written to disk.

All paging algorithms are built from three basic policies: a fetch policy, a replacement policy, and a placement policy. In the case of static paging, the placement policy reduces to a shortcut: the page that has been removed is always replaced by the incoming page, so the placement policy is fixed. Since demand paging is also assumed, the fetch policy is likewise a constant: the page fetched is the one requested by a page fault. This leaves only the examination of replacement methods.

A page replacement situation in an operating system is one in which a page in main memory is replaced by a page from secondary memory. Page faults cause page replacement. Several page replacement algorithms exist, such as:

● FIFO
● Optimal page replacement
● LRU
● LIFO

● Random page replacement

WHAT IS PAGE REPLACEMENT IN OPERATING SYSTEMS?

In operating systems that use virtual memory with Demand Paging, page replacement is
required. As we know, demand paging loads only a subset of a process's pages into memory.
This is done so that multiple processes can run concurrently in memory.

When a process requests a page from virtual memory for execution, the Operating System must
determine which page will be replaced by this requested page. This is known as page
replacement, and it is an important part of virtual memory management.

WHY NEED PAGE REPLACEMENT ALGORITHMS?

To understand why we require page replacement techniques, we must first understand page
faults. Let's look at what a page fault is.

Page Fault: A Page Fault happens when a program executing in the CPU attempts to access a
page that is in that program's address space, but the requested page is currently not loaded
into the system's main physical memory, the RAM.

Page faults occur because the actual RAM is much smaller than the virtual memory. As a result, if a
page fault occurs, the operating system must replace a previously requested page in RAM with
the newly requested page. Page replacement algorithms aid the Operating System in
determining which page to replace in this case. The primary objective of all the page
replacement algorithms is to minimize the number of page faults.

PAGE REPLACEMENT ALGORITHMS IN OPERATING SYSTEMS

FIRST IN FIRST OUT (FIFO)

First-in, first-out (FIFO) is as easy to implement as random replacement, and although its performance is just as unreliable or worse, its behaviour follows a predictable pattern: rather than choosing a victim page at random, the oldest page (the first in) is the first to be removed. Conceptually, FIFO works like a limited-size queue, with items added at the tail. When the queue fills (all of the physical memory has been allocated), the first page to enter is pushed out of the head of the queue. Like random replacement, FIFO blatantly ignores usage trends; although it produces fewer page faults, it still does not take advantage of locality unless pages happen to benefit by coincidence as they move along the queue.

A modification to FIFO that makes its operation much more useful is First-In Not-Used First-Out (FINUFO). The only modification is that a single bit identifies whether or not a page has been referenced during its time in the FIFO queue. This utility, or referenced, bit is then used to determine whether a page is selected as a victim: if the page has been referenced at least once since it was fetched, its bit is set. When a page must be swapped out, the first page in the queue whose bit is not set is removed; if every active page has been referenced — a likely occurrence given locality — all of the bits are reset. In a worst-case scenario this can cause minor and temporary thrashing, but the technique is generally very effective given its low cost.

The FIFO method is the most basic of the page replacement algorithms. In this case, we keep a
queue of all the pages that are currently in memory. The oldest page in memory is at the head
of the queue, and the most recent page is at the tail end.

When a page fault occurs, the operating system examines the front end of the queue to
determine which page should be replaced by the newly requested page. It also adds the newly
requested page to the back end of the queue and removes the oldest page from the front end.

Consider the following page reference string: 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames. Let's try to find the number of page faults:

● Initially, all of the slots are empty so page faults occur at 3,1,2.

Page faults = 3

● When page 1 comes, it is in the memory so no page fault occurs.

Page faults = 3

● When page 6 comes, it is not present and a page fault occurs. Since there are no empty
slots, we remove the front of the queue, i.e 3.

Page faults = 4

● When page 5 comes, it is also not present and hence a page fault occurs. The front of
the queue i.e 1 is removed.

Page faults = 5

● When page 1 comes, it is not found in memory and again a page fault occurs. The front
of the queue i.e 2 is removed.

Page faults = 6

● When page 3 comes, it is again not found in memory; a page fault occurs, and page 6 is removed from the front of the queue.

Total page faults = 7
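The trace above can be checked with a short C simulation of FIFO replacement; the circular index stands in for the queue's head:

#include <stdio.h>

/* Simulate FIFO page replacement for the reference string used above. */
int main(void)
{
    int refs[] = {3, 1, 2, 1, 6, 5, 1, 3};
    int frames[3] = {-1, -1, -1};   /* -1 marks an empty frame */
    int head = 0, faults = 0;       /* head = index of the oldest frame */

    for (int i = 0; i < 8; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (!hit) {
            frames[head] = refs[i]; /* evict the oldest (front of queue) */
            head = (head + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);   /* prints 7 */
    return 0;
}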

Belady's anomaly: Generally, increasing the number of frames in memory should decrease the number of page faults, since more pages can be resident at once. Belady's anomaly refers to the phenomenon where increasing the number of frames increases the number of page faults instead.

Advantages

● Simple to understand and implement


● Does not cause more overhead

Disadvantages

● Poor performance
● Doesn’t use the frequency of the last used time and just simply replaces the oldest page.
● Suffers from Belady’s anomaly.

OPTIMAL PAGE REPLACEMENT IN OS

The optimal page replacement algorithm is the best page replacement algorithm, since it produces the fewest page faults. It replaces the page that will not be used for the greatest period of time in the future — in other words, the page whose next reference lies farthest ahead.

Example:

Take the same page reference string as in FIFO: 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames. This also
assists you in understanding how Optimal Page Replacement works best.

● Initially, since all the slots are empty, pages 3, 1, 2 cause a page fault and take the empty
slots.

Page faults = 3

● When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

● When page 6 comes, it is not in the memory, so a page fault occurs and 2 is removed as
it is not going to be used again.

Page faults = 4

● When page 5 comes, it is also not in the memory and causes a page fault. Similarly, 6 is removed as it is not going to be used again.

page faults = 5

● When page 1 and page 3 come, they are in the memory so no page fault occurs.

Total page faults = 5
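Because the optimal policy needs the entire future reference string, it can only be simulated offline. A minimal C sketch for this example:

#include <stdio.h>

#define N 8
#define F 3

/* Index of the next use of 'page' after position 'pos' (N = never again). */
static int next_use(const int refs[], int pos, int page)
{
    for (int k = pos + 1; k < N; k++)
        if (refs[k] == page) return k;
    return N;
}

int main(void)
{
    int refs[N] = {3, 1, 2, 1, 6, 5, 1, 3};
    int frames[F] = {-1, -1, -1};
    int used = 0, faults = 0;

    for (int i = 0; i < N; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (hit) continue;
        faults++;
        if (used < F) { frames[used++] = refs[i]; continue; }
        int victim = 0, farthest = -1;      /* evict the page used latest */
        for (int j = 0; j < F; j++) {
            int nu = next_use(refs, i, frames[j]);
            if (nu > farthest) { farthest = nu; victim = j; }
        }
        frames[victim] = refs[i];
    }
    printf("Optimal page faults: %d\n", faults);   /* prints 5 */
    return 0;
}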

Advantages

● Excellent efficiency
● Less complexity
● Easy to use and understand
● Simple data structures can be used to implement
● Used as the benchmark for other algorithms

Disadvantages

● More time consuming


● Difficult for error handling
● Need future awareness of the programs, which is not possible every time

LEAST RECENTLY USED (LRU) PAGE REPLACEMENT ALGORITHM

The least recently used (LRU) page replacement method keeps track of page usage over time. The algorithm is based on the principle of locality of reference: a program tends to access the same set of memory locations repeatedly over a short period of time, so pages that have been heavily used in the recent past are likely to be heavily used in the near future as well.

When a page fault occurs in this algorithm, the page that has not been used for the longest period of time is replaced by the newly requested page.

Example: Let’s see the performance of the LRU on the same reference string of 3, 1, 2, 1, 6, 5, 1,
3 with 3-page frames:

● Initially, since all the slots are empty, pages 3, 1, 2 cause a page fault and take the empty
slots.

Page faults = 3

● When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

● When page 6 comes, it is not in the memory, so a page fault occurs and the least
recently used page 3 is removed.

Page faults = 4

● When page 5 comes, it again causes a page fault, and page 2 is removed: page 2 was last used at the third reference, making it the least recently used page (page 1 was used more recently, at the fourth reference).

Page faults = 5

● When page 1 comes again, it is still in the memory, so no page fault occurs.

Page faults = 5

● When page 3 comes, the page fault occurs again and this time page 6 is removed as the
least recently used one.

Total page faults = 6

In the preceding example, LRU produces fewer page faults than FIFO (6 versus 7), but this is not always the case: the outcome depends on the reference series, the number of frames available in memory, and so on. In practice, LRU is preferable to FIFO in most circumstances.
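A minimal C simulation of LRU using per-frame last-use timestamps reproduces the count above (a timestamp array is just one simple way to implement LRU; real systems usually approximate it in hardware):

#include <stdio.h>

#define N 8
#define F 3

int main(void)
{
    int refs[N] = {3, 1, 2, 1, 6, 5, 1, 3};
    int frames[F], last[F];         /* page in frame j, time of last use */
    int used = 0, faults = 0;

    for (int t = 0; t < N; t++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[t]) hit = j;
        if (hit >= 0) { last[hit] = t; continue; }
        faults++;
        if (used < F) { frames[used] = refs[t]; last[used++] = t; continue; }
        int victim = 0;                 /* evict smallest last-use time */
        for (int j = 1; j < F; j++)
            if (last[j] < last[victim]) victim = j;
        frames[victim] = refs[t];
        last[victim]   = t;
    }
    printf("LRU page faults: %d\n", faults);   /* prints 6 */
    return 0;
}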

Advantages

● It is open for full analysis


● Doesn’t suffer from Belady’s anomaly
● Often more efficient than other algorithms

Disadvantages

● It requires additional data structures to be implemented


● More complex
● High hardware assistance is required

LAST IN FIRST OUT (LIFO) PAGE REPLACEMENT ALGORITHM

This method is based on the Last in First Out (LIFO) concept. The requested page replaces the
newest page in this algorithm. Typically, this is accomplished using a stack, in which we retain a

stack of pages currently in memory, with the most recent page at the top. The page at the top
of the stack is replaced whenever a page fault occurs.

Example: Let’s see how the LIFO performs for our example string of 3, 1, 2, 1, 6, 5, 1, 3 with 3-
page frames:

● Initially, since all the slots are empty, pages 3, 1, 2 cause page faults and take the empty slots.

Page faults = 3

● When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

● When page 6 comes, the page fault occurs and page 2 is removed as it is on the top of
the stack and is the newest page.

Page faults = 4

● When page 5 comes, it is not in the memory, which causes a page fault, and hence page
6 is removed being on top of the stack.

Page faults = 5

● When page 1 and page 3 come, they are in memory already, hence no page fault occurs.

Total page faults = 5

This is, as you may have seen, the same number of page faults as the Optimal page
replacement algorithm. Hence, for this series of pages, we can state that this is the best
method that can be implemented without previous knowledge of future references.

Advantages

● Simple to understand
● Easy to implement
● No overhead

Disadvantages

● Does not consider the locality principle, and hence may perform poorly
● The old pages may reside in memory forever even if they are not used

RANDOM PAGE REPLACEMENT IN OS

As the name implies, this technique selects any random page in memory to be replaced by the
desired page. Based on the random page picked to be replaced, this method can act like any of
the approaches.

Example: Suppose we choose to replace the middle frame every time a page fault occurs. Let’s
see how our series of 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames perform with this algorithm:

● Initially, since all the slots are empty, pages 3, 1, 2 cause page faults and take the empty slots.

Page faults = 3

● When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

● When page 6 comes, the page fault occurs, we replace the middle element i.e 1 is
removed.

Page faults = 4

● When page 5 comes, the page fault occurs again and middle element 6 is removed

Page faults = 5

● When page 1 comes, there is again a page fault and again the middle element 5 is
removed

Page faults = 6

● When page 3 comes, it is in memory, hence no page fault occurs.

Total page faults = 6

As we can see, the performance is not the best, but it is also not the worst. The performance of the random replacement algorithm depends on which page happens to be chosen at random.

Advantages

● Easy to understand and implement


● No extra data structure needed to implement
● No overhead

Disadvantages

● Cannot be analyzed reliably: it may produce different performance for the same series
● Can suffer from Belady’s anomaly

KEY TAKEAWAY
➢ Page replacement algorithms are crucial for optimizing memory management in
operating systems, particularly in scenarios where available real memory frames
become limited.

➢ When page stealer mechanisms are invoked due to low available memory frames,
various page replacement algorithms come into play, such as FIFO, Optimal, LRU, LIFO,
and Random.
➢ These algorithms determine which page in the main memory should be replaced by a
page from the secondary memory, helping manage page faults efficiently.
➢ Page faults occur when a program attempts to access a page not currently in the
system's main physical memory, leading to the need for page replacement.
➢ Each algorithm follows different strategies; for example, FIFO replaces the oldest page,
Optimal replaces the page not used for the longest time, LRU replaces the least recently
used page, LIFO replaces the newest page, and Random selects a page randomly for
replacement.
➢ Each algorithm has its advantages and disadvantages, impacting factors such as
simplicity, efficiency, and the ability to handle future program awareness.
➢ The choice of a page replacement algorithm depends on specific system requirements
and performance considerations.

I/O MANAGEMENT

SUB LESSON 8.1

I/O DEVICES AND ORGANIZATION OF THE I/O FUNCTION

I/O HARDWARE

An Operating System is responsible for managing numerous I/O devices such as the mouse,
keyboards, touchpad, disc drives, display adapters, USB devices, Bit-mapped screens, LEDs,
Analog-to-digital converters, On/off switches, network connections, audio I/O, printers, and so
on.

An I/O system must receive an application I/O request and send it to the physical device, then
receive the device's response and transmit it back to the application. I/O devices are classified
into two types:

● Block devices − The driver connects with a block device by transmitting whole blocks of
data. Hard drives, USB cameras, Disk-On-Key, and other similar devices are examples.
● Character devices − The driver communicates with a character device by transmitting
and receiving single characters (bytes, octets). For instance, serial ports, parallel ports,
sound cards, and so on.

DEVICE CONTROLLERS

Device drivers are software modules that can be put into an operating system to manage a
specific device. The operating system relies on device drivers to manage all I/O devices.

The Device Controller acts as a bridge between a device and its driver. I/O units (keyboard,
mouse, printer, etc.) are normally made up of a mechanical component and an electrical
component, the latter of which is known as the device controller.

Each device has a device controller and a device driver to connect it with the operating system. A device controller may be capable of controlling many devices. Its primary function as an interface is to convert a serial bit stream to a block of bytes and to perform error correction as needed.

A plug and socket connect any device to the computer, and the socket connects to a device
controller. The following is a model for linking the CPU, memory, controllers, and I/O devices, in
which the CPU and device controllers communicate via a shared bus.

SYNCHRONOUS VS ASYNCHRONOUS I/O

● Synchronous I/O − In this scheme CPU execution waits while I/O proceeds
● Asynchronous I/O − I/O proceeds concurrently with CPU execution

COMMUNICATION TO I/O DEVICES

The CPU must have a way to pass information to and from an I/O device. Three approaches are available for communication between the CPU and a device:

● Special Instruction I/O


● Memory-mapped I/O
● Direct memory access (DMA)

SPECIAL INSTRUCTION I/O

This makes use of CPU instructions designed expressly for manipulating I/O devices. These instructions typically allow data to be sent to or read from an I/O device.

MEMORY-MAPPED I/O

Memory and I/O devices share the same address space when utilizing memory-mapped I/O.
The device is directly attached to certain primary memory locations, allowing the I/O device to
transmit data blocks to and from memory without passing via the CPU.

By employing memory-mapped IO, the operating system allocates a buffer in memory and
instructs the I/O device to use that buffer to transfer data. The I/O device operates asynchronously with the CPU and interrupts the CPU when it has finished.

This technique has the advantage of allowing any instruction that can access memory to be
used to operate an I/O device. Most high-speed I/O devices, such as discs and communication
interfaces, use memory-mapped IO.

DIRECT MEMORY ACCESS (DMA)

For every byte transferred, a slow device such as a keyboard generates an interrupt to the main CPU. If a fast device, such as a disc, generated an interrupt for every byte, the operating system would spend the majority of its time dealing with these interrupts. To eliminate this overhead, a typical computer employs direct memory access (DMA) technology.

Direct Memory Access (DMA) refers to the CPU granting I/O modules permission to read from
or write to memory without involving the CPU. The DMA module controls data interchange
between the main memory and the I/O device. The CPU is only involved at the start and finish
of the transfer, and it is only interrupted after the full block has been transferred.

Direct Memory Access necessitates the use of specialized hardware known as a DMA controller
(DMAC), which handles data transfers and arbitrates access to the system bus. The controllers
are pre-programmed with source and destination pointers (where to read/write data), counters
to track the number of transmitted bytes, and parameters such as I/O and memory types,
interrupts, and CPU cycle states.

The operating system uses the DMA hardware as follows −

1. The device driver is instructed to transfer disk data to a buffer at address X.

2. The device driver then instructs the disk controller to transfer the data to the buffer.

3. The disk controller starts the DMA transfer.

4. The disk controller sends each byte to the DMA controller.

5. The DMA controller transfers the bytes to the buffer, incrementing the memory address and decrementing the counter C until C becomes zero.

6. When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.

POLLING VS INTERRUPTS I/O

A computer must be able to detect the presence of any sort of input. There are two ways for
this to happen: polling and interruptions. Both of these strategies enable the processor to deal
with events that can occur at any time and are unrelated to the present operation.

● POLLING I/O

Polling is the most basic method for I/O devices to communicate with the processor. Polling is
the process of periodically monitoring the device's state to see if it is time for the next I/O
operation. The I/O device merely stores the information in a Status register, and the processor
must retrieve it.

Most of the time, devices do not require attention, and when they do, they must wait until the polling program interrogates them again. This is an inefficient strategy that wastes a significant amount of processor time on useless polls.
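A minimal C sketch of busy-wait polling; read_status() and DEVICE_READY are invented stand-ins for a real memory-mapped status register, used here only to illustrate the wasted polls:

#include <stdio.h>

#define DEVICE_READY 0x01

/* Stand-in for reading a device status register; a real driver would
   read a memory-mapped or port-mapped register instead. */
static int poll_count;
unsigned read_status(void)
{
    return (++poll_count >= 5) ? DEVICE_READY : 0;  /* ready on 5th poll */
}

int main(void)
{
    /* Busy-wait until the device reports ready: CPU time is burned
       on every unsuccessful poll. */
    while (!(read_status() & DEVICE_READY))
        ;                               /* wasted polls happen here */
    printf("device ready after %d polls\n", poll_count);
    return 0;
}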

Compare this strategy to a teacher asking every student in a class, one by one, whether they
need assistance. The more effective technique would obviously be for a student to notify the
teacher whenever they require assistance.

● INTERRUPTS I/O

The interrupt-driven method is an alternative way of dealing with I/O. An interrupt is a signal sent to the microprocessor by a device that requires attention.

When a device controller needs the CPU's attention, it sends an interrupt signal to the bus.
When the CPU gets an interrupt, it saves its current state and runs the appropriate interrupt
handler using the interrupt vector (addresses of OS routines to handle various events). After
dealing with the interrupting device, the CPU returns to its original task as if it had never been
stopped.

I/O SOFTWARE

I/O software is often organized in the following layers −

● User Level Libraries − This provides a simple interface to the user program to perform
input and output. For example, stdio is a library provided by C and C++ programming
languages.
● Kernel Level Modules − This provides the device driver to interact with the device
controller and device-independent I/O modules used by the device drivers.
● Hardware − This layer includes actual hardware and hardware controller which interact
with the device drivers and makes hardware alive.

A crucial idea in the design of I/O software is device independence, which means that programs
should be able to access any I/O device without having to identify the device in advance. A
program that reads a file as input, for example, should be able to read a file on a floppy disc, a
hard drive, or a CD-ROM without having to alter the program for each device.

DEVICE DRIVERS

Device drivers are software modules that can be put into an operating system to manage a
specific device. The operating system relies on device drivers to manage all I/O devices. Device
drivers encapsulate device-specific code — such as the device-specific register reads and writes — behind a standard interface. Device drivers are typically
created by the device's manufacturer and distributed on a CD-ROM with the device.

A device driver performs the following jobs −

● To accept requests from the device independent software above it.


● Interact with the device controller to take and give I/O and perform required error
handling

● Making sure that the request is executed successfully

The following is how a device driver handles a request: Assume a request is received to read
block N. If the driver is idle when a request arrives, it begins processing the request
immediately. If the driver is already occupied with another request, the new request is added
to the queue of pending requests.

INTERRUPT HANDLERS

An interrupt handler, also known as an interrupt service routine (ISR), is a callback function in an operating system or, more specifically, in a device driver, whose execution is initiated by the receipt of an interrupt.


DEVICE-INDEPENDENT I/O SOFTWARE

The device-independent software's primary job is to perform I/O functions shared by all devices
and to provide a consistent interface to the user-level program. Although writing totally device-independent software is difficult, we can develop some modules that are shared by all devices.
The following is a list of device-independent I/O Software functions.

● Uniform interfacing for device drivers


● Device naming - Mnemonic names mapped to Major and Minor device numbers
● Device protection
● Providing a device-independent block size

● Buffering, because data coming off a device cannot always be stored directly in its final destination.
● Storage allocation on block devices
● Allocation and releasing dedicated devices
● Error Reporting

USER-SPACE I/O SOFTWARE

These are the libraries that provide a simpler interface for accessing kernel functionality or, ultimately, for interacting with device drivers. With the exception of the spooling system, which is a method of dealing with dedicated I/O devices in a multiprogramming system, most user-level I/O software consists of library functions.

I/O libraries (for example, stdio) exist in user space and serve as an interface to the OS's device-independent I/O software. putchar(), getchar(), printf(), and scanf() are examples of user-level stdio library functions available in C programming.
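For instance, this standard C program copies its input to its output one character at a time through the stdio layer:

#include <stdio.h>

/* Copy stdin to stdout one character at a time: each call goes through
   the user-space stdio buffer before reaching the kernel's I/O layers. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}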

KERNEL I/O SUBSYSTEM

The Kernel I/O Subsystem is in charge of providing several I/O-related services. The following
are some of the services offered.

● Scheduling − A series of I/O requests are scheduled by the kernel to identify the best
order in which to execute them. When an application makes a blocking I/O system call,
the request is routed to the device's queue. The Kernel I/O scheduler reorders the
queue in order to optimize overall system efficiency and the average response time
observed by applications.
● Buffering − The Kernel I/O Subsystem maintains a buffer memory space that retains
data while it is transported between two devices or between a device and an
application action. Buffering is used to compensate for a speed difference between the
producer and consumer of a data stream, as well as to adjust across devices with varied
data transmission sizes.

● Caching − The kernel manages cache memory, which is a fast memory space that stores
data copies. Accessing the cached copy is faster than accessing the original.
● Spooling and Device Reservation − A spool is a buffer that holds output for a device that
cannot accept interleaved data streams, such as a printer. The queued spool files are
copied to the printer one at a time by the spooling mechanism. Spooling is managed by
a system daemon process in several operating systems. It is handled by an in-kernel
thread in other operating systems.
● Error Handling − A protected memory operating system can protect against a wide
range of hardware and application errors.

KEY TAKEAWAY

➢ In the realm of I/O hardware management, an operating system plays a critical role in
overseeing a diverse array of devices, ranging from keyboards and display adapters to
disk drives and network connections.
➢ Device controllers act as intermediaries between devices and their respective drivers,
ensuring seamless communication.
➢ Two broad categories of I/O devices, block devices and character devices, necessitate
different handling approaches. Synchronous and asynchronous I/O strategies determine
whether CPU execution waits for I/O completion or proceeds concurrently.
➢ Various communication methods include special instruction I/O, memory-mapped I/O,
and Direct Memory Access (DMA), each with its advantages and use cases.
➢ The crucial choice between polling and interrupts I/O methods defines how a computer
detects input, with interrupts offering more efficient event handling by signaling the
CPU to address specific device needs.
➢ I/O software operates at different layers, from user-level libraries providing a simple
interface to kernel-level modules managing device drivers and hardware interactions.
➢ Device drivers play a pivotal role in interacting with specific devices, encapsulating
device-specific code and implementing standard interfaces.
➢ Interrupt handlers respond to interrupts, initiating specific actions upon receipt. Device-
independent I/O software facilitates uniform interfacing for device drivers, device
naming, protection, block size determination, buffering, storage allocation, error
reporting, and more.

➢ The user-space I/O software consists of libraries offering interfaces for interacting with
device-independent I/O software.
➢ The kernel I/O subsystem provides essential services such as scheduling, buffering,
caching, spooling, device reservation, and error handling.
➢ In summary, effective I/O management in an operating system involves a
comprehensive stack of layers and functionalities to ensure efficient communication
and interaction with a diverse range of I/O devices.

I/O MANAGEMENT

SUB LESSON 8.2

OS DESIGN ISSUES AND I/O BUFFERING

OS DESIGN ISSUES
Following are the design issues in a distributed operating system:

Transparency

A transparent distributed system's user interface must be consistent across local and remote
resources. That is, users should be able to access remote resources as if they were local, and
the distributed system should be in charge of locating the resources and coordinating their
precise interaction. Another facet of transparency is user mobility.

Users should not be forced to utilize a specific machine, but the system should allow them to
log into any machine and access its services. A transparent distributed system enables user
mobility by copying the user's environment (for example, the home directory) to whichever machine the user logs into.

The concept of transparency can be applied to many facets of a distributed system. These are
given below.

a) Location Transparency: Users cannot tell where resources are located.

b) Migration Transparency: Resources can be moved from one location to another without losing their identities, such as names.

c) Replication Transparency: Users have no idea how many copies of the same file or directory exist on various systems.

d) Concurrency Transparency: Several users can automatically share resources at the same time.

e) Parallelism Transparency: Tasks can run in parallel without the users being aware of it.

Scalability

Another challenge in a distributed system is scalability. Scalability is simply a system's ability to handle an increased service load. A system's resources are finite, and under greater load they may become entirely consumed or saturated.

Take, for example, a file system; saturation occurs when a server's CPU is overburdened or
when discs are nearly full. Scalability is a relative trait, yet it may be precisely measured. A
scalable system should be able to handle more load while maintaining the same performance
as a non-scalable system.

In practice, however, performance first declines moderately, and then the system's resources saturate; even the most ideal design cannot avoid this. Scalability and fault tolerance are linked: a heavily loaded component can become paralyzed and behave as if it were faulty.

When we shift the load from a faulty component to the backup component, the latter can
become saturated. In general, having extra resources is critical for assuring reliability and
readily handling peak loads. Because of the diversity of resources, a distributed system has the
potential for fault tolerance and scalability.

Flexibility

Another critical aspect is adaptability. Because we are only beginning to discover how to create
a distributed system, it is critical that the system be flexible.

Reliability

One of the primary goals of distributed operating systems was to make them more reliable than
single-processor systems. If some machines fail, other machines will continue to function
normally. A highly reliable system must also be highly available, but this is insufficient. Trusted
data to the system must not be lost, and if files are stored on several computers, consistency
must be maintained. The more copies retained, the greater the availability.

Performance

Performance is critical in developing a transparent, flexible, and dependable distributed system: the machinery that provides these properties must stay concealed in the background without imposing visible cost. Running a specific application on a distributed system should not be noticeably worse than running the same application on a single CPU. Unfortunately, this is easier said than done.

Fault Tolerance

Computer systems occasionally fail. When hardware or software fails, programs may produce unexpected or incorrect results, or may stop before completing the relevant computation. However, failures in a distributed system are partial: while some parts of the system fail, others continue to function. As a result, dealing with failures can be challenging. Because it uses replicated servers, a distributed system can achieve a high level of availability, and data may be retrieved quickly.

I/O BUFFERING

WHAT IS A BUFFER

A data buffer (or simply buffer) is a physical memory storage region used to temporarily hold
data while it is being transported from one location to another. Data is typically kept in a buffer
as it is retrieved from an input device (such as a microphone) or immediately before it is
transferred to an output device (such as speakers). When transporting data between processes
on a computer, a buffer may be used. This is analogous to buffers in telecommunications.
Buffers can be implemented in hardware as a fixed memory region, or in software as a virtual
data buffer pointing to a physical memory location. The data stored in a data buffer always resides on some physical storage medium. Most buffers are implemented in software, which typically keeps temporary data in faster RAM because its access time is significantly shorter than that of hard disc drives. Buffers are often employed when there is a gap between the rate at which
data is received and the pace at which it can be processed, or when these rates are changing, as

in a printer spooler or online video streaming. In a distributed computing context, data buffers
are frequently implemented as burst buffers that provide distributed buffering functionality.

USES OF BUFFERS

Buffers are frequently used in conjunction with I/O to hardware, such as disc drives, to send or
receive data to or from a network, or to play sound on a speaker. A line to a rollercoaster at an
amusement park is very similar. Riders on the roller coaster arrive at an unknown and
frequently unpredictable rate, however, the roller coaster will be able to load riders in bursts
(as a coaster arrives and is loaded). The queue area serves as a buffer—a temporary space
where individuals who want to ride can wait until the ride is ready. Buffers are typically
employed in a FIFO (first in, first out) technique, which outputs data in the order in which it
arrived.

Buffers can boost application performance by allowing synchronous operations like file reads
and writes to complete quickly rather than blocking while waiting for hardware interrupts to
access a physical disc subsystem; instead, an operating system can immediately return a
successful result from an API call, allowing an application to continue processing while the
kernel completes the disc operation in the background. If the application is reading or writing
small blocks of data that do not correspond to the disc subsystem's block size, a buffer can be
utilized to aggregate multiple smaller read or write operations into block sizes that are more
efficient for the disc subsystem, or in the case of a read, sometimes to completely avoid having
to physically access a disk.

TYPES OF BUFFERS

There are several common types of buffering used. We will talk about just a couple.

SINGLE I/O BUFFER

A single buffer implementation is the most basic form of buffering. As a user process runs, it
sends out an I/O request, which causes the operating system to allocate a buffer in the system

portion of the main memory to the I/O operation. The request is then routed to the appropriate
device by the operating system. In response to the request, the I/O device sends data. The OS
just copies the data into the already constructed buffer. The data is delivered to the process in
a single transfer when the buffer is full or at some OS-specified scheduled period.

This procedure can also work in the opposite direction. The process may involve the generation
of data that must be saved on a disc drive or transferred to another output device. As the
process generates data, it stores it in the buffer until the device is ready to accept it, at which
point the OS transfers the buffer's contents to the output device.

DOUBLE I/O BUFFER

This technique adds a second buffer to improve efficiency. When the first buffer is full, the operating system automatically switches to the second buffer; while the second buffer fills, the OS empties the first. As a result, by the time the first buffer is needed again, it is empty, and the OS begins filling it while the other buffer is emptied.

This notion, once again, works with either a read or a write request from the user process. The
above example shows a read request, but the identical process occurs with a write request.

CIRCULAR BUFFER

A circular buffer has a fixed length and is initially empty; suppose it has seven slots.

Assume a value 1 is written somewhere in the buffer (the exact starting point in a circular buffer is unimportant).

Assume two more elements — 2 and 3 — are then added after the 1.

If two elements are removed, the two oldest values in the circular buffer are removed. Circular buffers use FIFO (First In, First Out) logic: because 1 and 2 entered the circular buffer first, they are the first to be removed, leaving only 3 inside the buffer.

If writing continues until all seven slots are occupied, the buffer is completely full.

When the circular buffer is full and a subsequent write occurs, the buffer begins overwriting the oldest data. In this example, two new elements — A and B — are added, replacing the two oldest remaining elements, 3 and 4.

Alternatively, the buffer-management procedures could prevent overwriting and return an error or raise an exception. Whether or not data is overwritten is determined by the semantics of the buffer procedures or of the application using the circular buffer.

Finally, if two elements are removed, what is returned is not 3 and 4 but 5 and 6, because A and B overwrote the 3 and 4.
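A minimal C implementation of such an overwrite-on-full circular buffer, reproducing the seven-slot behaviour described above (the structure layout is one common choice, not the only one):

#include <stdio.h>

#define CAP 7                     /* seven-slot buffer, as in the example */

/* FIFO circular buffer that overwrites the oldest element when full. */
typedef struct {
    int data[CAP];
    int head, count;              /* head = index of the oldest element */
} CircBuf;

void cb_put(CircBuf *b, int v)
{
    int tail = (b->head + b->count) % CAP;
    b->data[tail] = v;
    if (b->count < CAP) b->count++;
    else b->head = (b->head + 1) % CAP;   /* full: overwrite the oldest */
}

int cb_get(CircBuf *b, int *v)            /* returns 0 if empty */
{
    if (b->count == 0) return 0;
    *v = b->data[b->head];
    b->head = (b->head + 1) % CAP;
    b->count--;
    return 1;
}

int main(void)
{
    CircBuf b = { {0}, 0, 0 };
    for (int i = 1; i <= 9; i++)          /* write 1..9: 8 and 9 overwrite */
        cb_put(&b, i);
    int v;
    while (cb_get(&b, &v))
        printf("%d ", v);                 /* prints: 3 4 5 6 7 8 9 */
    printf("\n");
    return 0;
}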

KEY TAKEAWAY

➢ In the context of distributed operating systems, several design issues must be


addressed, including transparency, scalability, flexibility, reliability, performance, and
fault tolerance.
➢ Transparency ensures a consistent user interface across local and remote resources,
with users able to access resources seamlessly.
➢ Scalability challenges the system's ability to handle increased service loads, and
flexibility is crucial for adapting to the evolving nature of distributed systems.
➢ Reliability aims to surpass single-processor systems by maintaining functionality even in
the face of failures.
➢ Performance remains a critical concern, demanding efficient execution of applications
on distributed systems.
➢ Fault tolerance addresses the challenge of dealing with partial failures in distributed
systems.
➢ Within this framework, I/O buffering, involving the temporary storage of data during
transportation, plays a vital role.
➢ Buffers, such as single I/O buffers, double I/O buffers, and circular buffers, enhance
system efficiency, allowing for smoother data transfers and improved application
performance.
➢ Understanding the types and uses of buffers is essential for optimizing I/O operations in
distributed systems.

I/O MANAGEMENT

SUB LESSON 8.3

DISK ORGANISATION

DISK ORGANISATION
Disks come in a variety of shapes and sizes. The most visible difference between floppy discs and hard discs is that a floppy disc or diskette consists of a single magnetic disc, whereas hard drives are usually made up of several stacked on top of one another. Hard drives are completely enclosed units, significantly more finely built, and therefore require protection from dust. A hard disc spins at a constant speed, whereas floppy drives spin up and stop between accesses. Floppy drives on the Macintosh operated at varying speeds, whereas other floppy drives rotate at only one speed. As hard drives and tape units have become more efficient and less expensive to manufacture, the floppy disk's importance has waned.

A hard drive is made up of numerous physical discs stacked on top of each other. Consider, for example, a disc unit with three platters and six recording surfaces (two on each
platter). Each surface is fitted with its own read head. Although the discs are built from a
continuous magnetic material, the density of information that may be stored on the disc is
limited. A stepper motor drives the heads across each surface, moving them in fixed-distance
intervals. That is, each surface has a defined number of tracks. The tracks on all surfaces are
aligned, and the set of all tracks at a particular distance from the disk's edge is known as a cylinder. Tracks are frequently broken up into sectors — fixed-size areas lying along the tracks — to make disc access faster. Data is written to a disc in units of a whole number of sectors. (In
this respect, they are equivalent to pages or frames in physical memory). The sizes of sectors on
some discs are determined by the manufacturers of hardware.

On other systems (often microcomputers), it may be selected in the software when the disc is
ready for usage (formatting). We can improve read-write efficiency by assigning blocks in
parallel across all surfaces because the heads of the disc move together on all surfaces. As a
result, if a file is stored in sequential blocks on a disc with n surfaces and n heads, it can read n
sectors per track without moving the head. When a manufacturer ships a disc, the physical
properties of the disc (number of tracks, number of heads, sectors per track, speed of
revolution) are included. An operating system must be able to adapt to various disc types.
Clearly, the number of sectors per track and the number of tracks are not constants. The
numbers mentioned are only a convention used to generate a consistent set of addresses on a
disc and may have nothing to do with the disk's hard and fast physical limits. To address any
part of a disc, we require a three-component address made up of (surface, track, and sector).

The seek time is the time required for the disk arm to move the head to the cylinder with the
desired sector. The rotational latency is the time required for the disk to rotate the desired
sector until it is under the read-write head.

DEVICE DRIVERS AND IDS

A hard drive is a device, and as such, an operating system must communicate with it via a
device controller. Some device controllers are basic microprocessors that convert numerical

addresses into head motor movements, whereas others incorporate their own little decision-
making computers. The SCSI (Small Computer System Interface) drive is the most common form
of drive for larger personal computers and workstations. SCSI (pronounced scuzzy) is a protocol
that is now available in four variants: SCSI 1, SCSI 2, fast SCSI 2, and SCSI 3. SCSI drives are
connected to the CPU and memory through a data bus, which functions similarly to a very short
network. Each bus-connected drive is identified by a SCSI address, and each SCSI controller can
address up to seven units. A second controller is necessary if more discs are required. SCSI is
more efficient than other disc types for microcomputers in sharing numerous accesses. A SCSI
device driver is required for an operating system to communicate with a SCSI disc. This is a
software layer that converts disc requests from the operating system's abstract command layer
into the language of signals understood by the SCSI controller.

CHECKING DATA CONSISTENCY AND FORMATTING

Hard drives are not perfect: they develop flaws as a result of magnetic dropout and poor
manufacturing. On older discs, flaws are detected when the disc is formatted, and damaged sectors are avoided. If a sector becomes damaged while in use, the disc structure must be repaired
using a repair program. Typically, the data is lost. On more intelligent drives, such as SCSI drives,
the disc itself maintains a defect list that includes a record of all defective sectors. A new disc
from the manufacturer has a starting list, which is updated as more problems occur. Formatting
is the process through which the disk's sectors are:

● (If necessary) created by setting out ‘signposts’ along the tracks,


● Labeled with an address, so that the disk controller knows when it has found the correct
sector.

Formatting is done manually on basic discs used by microcomputers. Some varieties, such as
SCSI drives, have low-level formatting already on the disc when it is shipped from the
manufacturer. In certain ways, this is part of the SCSI protocol. High-level formatting is not
required because an advanced enough filesystem will be able to control the hardware sectors.

Data consistency can be verified by writing to the disc and reading the result back; an error has occurred if the two disagree. This process is best handled within the disk's hardware — modern disc
drives are tiny computers in their own right. Another less expensive method of ensuring data
consistency is to compute and keep a number for each sector based on the data in the sector.
When the data is read back, the number is recalculated, and an error is signaled if there is a
difference. A cyclic redundancy check (CRC) or error-correcting code is used for this. Some
device controllers are intelligent enough to detect bad sectors and transfer data to a spare
'good' sector if an error occurs. The disk design is still an active area of study, and discs are
advancing in terms of speed and durability by leaps and bounds.

KEY TAKEAWAY

➢ Disk organization is a critical aspect of I/O management in computing systems, where


various types of disks, such as floppy disks, diskettes, and hard drives, exhibit distinct
characteristics.
➢ Hard drives, comprised of multiple physical discs stacked together, have become more
prominent due to their efficiency and cost-effectiveness.
➢ The design of hard drives involves platters with recording surfaces, each equipped with
its read head.
➢ Stepper motors drive the heads across these surfaces, and tracks, cylinders, and sectors
are employed for efficient data storage and access.
➢ The seek time and rotational latency are crucial metrics in evaluating disk performance.
Device controllers facilitate communication between the operating system and hard
drives, with SCSI being a common protocol for larger computers.
➢ Disk reliability is addressed through the formatting process, where sectors are labeled,
and defect lists are maintained to manage flawed sectors.
➢ Modern disk drives act as miniature computers, employing techniques like cyclic
redundancy checks to ensure data consistency and error detection.
➢ Continuous advancements in disk design contribute to improved speed and durability,
underscoring the evolving nature of I/O technology.

I/O MANAGEMENT

SUB LESSON 8.4

DISK SCHEDULING

A process, as we all know, requires two types of time: CPU time and IO time. It asks the
operating system to access the disc for I/O.

Yet the operating system must be fair enough to satisfy each request while also maintaining the efficiency and speed of process execution.

Disk scheduling refers to the technique used by the operating system to select which request
should be fulfilled next.

Let's discuss some important terms related to disk scheduling.

● SEEK TIME

Seek time is the time it takes to position the disc arm over the specific track where the read/write request will be fulfilled.

● ROTATIONAL LATENCY

It is the amount of time it takes for the targeted sector to rotate into position under the read/write head.

● TRANSFER TIME

Transfer Time is the amount of time it takes to transfer data.

● DISK ACCESS TIME

Disk access time is calculated as follows (a worked numeric example appears after this list of terms):

Disk Access Time = Rotational Latency + Seek Time + Transfer Time

● DISK RESPONSE TIME

It is the average amount of time each request spends waiting for the IO process.

● PURPOSE OF DISK SCHEDULING

The disc scheduling algorithm's main goal is to select a disc request from a queue of IO requests
and determine when this request will be handled.
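To make the Disk Access Time formula concrete, consider a hypothetical drive with an average seek time of 9 ms that spins at 7,200 RPM and needs 0.1 ms to transfer the requested sector (all three figures are illustrative assumptions, not measurements of any particular drive). One full revolution takes 60,000 ms / 7,200 ≈ 8.33 ms, so the average rotational latency is half a revolution, ≈ 4.17 ms. Then:

Disk Access Time = 4.17 ms + 9 ms + 0.1 ms ≈ 13.3 ms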

GOAL OF DISK SCHEDULING ALGORITHM

○ Fairness
○ High throughput
○ Minimal traveling head time

DISK SCHEDULING ALGORITHMS

The following is a list of various disc scheduling algorithms. Each algorithm has advantages and disadvantages, and the limitations of each led to the evolution of the next.

○ FCFS scheduling algorithm


○ SSTF (shortest seek time first) algorithm
○ SCAN scheduling
○ C-SCAN scheduling
○ LOOK Scheduling
○ C-LOOK scheduling

FCFS SCHEDULING ALGORITHM

It is the most fundamental disk scheduling algorithm. It processes IO requests in the order in which they arrive. This algorithm is free of starvation: every request is eventually serviced.

DISADVANTAGES

○ The strategy does not reduce seek time.


○ The request may originate from different processes so there is a chance of improper
movement of the head.

EXAMPLE

Consider the following disc request sequence for a disc with 100 tracks: 45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25.

Head pointer begins at 50 and travels in the left direction. Find the number of head movements
in cylinders using FCFS scheduling.

Number of cylinders moved by the head

= (50-45) + (45-21) + (67-21) + (90-67) + (90-4) + (50-4) + (89-50) + (89-52) + (61-52) + (87-61) + (87-25)

= 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62

= 403
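The corrected total can be checked with a short C simulation of FCFS head movement (the request list and start position are those of the example above):

#include <stdio.h>
#include <stdlib.h>

/* Total head movement for FCFS: service requests in arrival order. */
int main(void)
{
    int reqs[] = {45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25};
    int n = sizeof reqs / sizeof reqs[0];
    int pos = 50, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(reqs[i] - pos);
        pos = reqs[i];
    }
    printf("FCFS head movement: %d cylinders\n", total);   /* prints 403 */
    return 0;
}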

SSTF SCHEDULING ALGORITHM

The algorithm that selects the disc I/O request that requires the least amount of disc arm
movement from its current position, independent of direction, is known as the shortest seek
time first (SSTF). It reduces the total seek time as compared to FCFS.

It allows the head to move to the closest track in the service queue.

DISADVANTAGES

○ It may cause starvation for some requests.


○ Switching directions frequently slows the algorithm's performance.
○ That is not the most optimal algorithm.

EXAMPLE

Consider the following disk request sequence for a disk with 100 tracks

45, 21, 67, 90, 4, 89, 52, 61, 87, 25

Head pointer starting at 50. Find the number of head movements in cylinders using SSTF
scheduling.

Servicing order: 52, 45, 61, 67, 87, 89, 90, 25, 21, 4. Number of cylinders = 2 + 7 + 16 + 6 + 20 + 2 + 1 + 65 + 4 + 17 = 140
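A short C simulation of the greedy nearest-request rule confirms this order and total (the tie-breaking here simply takes the first closest request found; other tie-breaks are possible):

#include <stdio.h>
#include <stdlib.h>

#define N 10

/* Total head movement for SSTF: always service the closest request. */
int main(void)
{
    int reqs[N] = {45, 21, 67, 90, 4, 89, 52, 61, 87, 25};
    int done[N] = {0};
    int pos = 50, total = 0;

    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (best < 0 ||
                             abs(reqs[i] - pos) < abs(reqs[best] - pos)))
                best = i;
        total += abs(reqs[best] - pos);
        pos = reqs[best];
        done[best] = 1;
    }
    printf("SSTF head movement: %d cylinders\n", total);   /* prints 140 */
    return 0;
}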

SCAN ALGORITHM

It is also referred to as the Elevator Algorithm. In this algorithm, the disc arm moves in a
particular direction till the end, satisfying all the requests coming in its path, and then it turns
back and moves in the reverse direction satisfying requests coming in its path.

It operates similarly to an elevator in that it moves in one direction until it reaches the last floor
in that direction and then reverses direction.

EXAMPLE

Consider the following disc request sequence for a disc with 200 tracks (numbered 0-199):

98, 137, 122, 183, 14, 133, 65, 78

Starting at 54, the head pointer moves to the left. Using SCAN scheduling, determine the
number of head movements in cylinders.

Number of Cylinders = 40 + 14 + 65 + 13 + 20 + 24 + 11 + 4 + 46 = 237
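The following C sketch simulates this example's leftward-then-rightward SCAN pass; it assumes, as in the example, that the head travels all the way down to cylinder 0 before reversing:

#include <stdio.h>
#include <stdlib.h>

/* SCAN: move left servicing requests down to cylinder 0,
   then reverse and service the remaining requests moving right. */
int compare(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int reqs[] = {98, 137, 122, 183, 14, 133, 65, 78};
    int n = sizeof reqs / sizeof reqs[0];
    int head = 54, total = 0, pos = head;

    qsort(reqs, n, sizeof(int), compare);

    for (int i = n - 1; i >= 0; i--)         /* leftward pass */
        if (reqs[i] <= head) { total += pos - reqs[i]; pos = reqs[i]; }
    total += pos;                            /* continue to cylinder 0 */
    pos = 0;
    for (int i = 0; i < n; i++)              /* rightward pass */
        if (reqs[i] > head) { total += reqs[i] - pos; pos = reqs[i]; }

    printf("SCAN head movement: %d cylinders\n", total);   /* prints 237 */
    return 0;
}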

C-SCAN ALGORITHM

The arm of the disc moves in one direction, servicing requests until it reaches the last cylinder in that direction; it then jumps to the far cylinder at the opposite end without processing any requests, and resumes servicing the remaining requests while moving in the same direction as before.

EXAMPLE

Consider the following disc request sequence for a disc with 200 tracks (numbered 0-199):

98, 137, 122, 183, 14, 133, 65, 78

Starting at 54, the head pointer moves to the left. Find the number of head movements in
cylinders using C-SCAN scheduling.



No. of cylinders crossed = 40 + 14 + 199 + 16 + 46 + 4 + 11 + 24 + 20 + 13 = 387 (the 199 is the full-stroke jump from track 0 back to track 199)
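The C-SCAN total can be sketched the same way, under the same assumptions (head moving
toward track 0 first, tracks 0–199, and the wrap-around jump counted as head movement):

    def cscan_seek(head, requests, max_track=199):
        # Sweep down to track 0, jump to the opposite end (counted), then
        # continue in the original direction down to the nearest request
        # that lies above the starting head position.
        total = head                               # 54 -> 0
        total += max_track                         # jump 0 -> 199
        upper = [r for r in requests if r > head]
        if upper:
            total += max_track - min(upper)        # 199 -> 65
        return total

    print(cscan_seek(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # prints 387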

LOOK SCHEDULING

It is similar to the SCAN scheduling algorithm, except that the disc arm stops and reverses as
soon as there are no more requests in the current direction. This technique avoids the overhead
of the SCAN algorithm, which compels the disc arm to proceed to the end of the disc regardless
of whether any requests remain in that direction.

EXAMPLE

Consider the following disc request sequence for a 200-track disc (tracks numbered 0–199):



98, 137, 122, 183, 14, 133, 65, 78

Starting at 54, the head pointer moves to the left. Using LOOK scheduling, determine the
number of head movements in cylinders.

Number of cylinders crossed = 40 + 51 + 13 + 20 + 24 + 11 + 4 + 46 = 209 (a code sketch verifying this total follows the C-LOOK example below)

C LOOK SCHEDULING

The C-LOOK algorithm is comparable to the C-SCAN algorithm to some extent. In this algorithm,
the arm of the disc moves in one direction, servicing requests until it reaches the highest
request cylinder; it then jumps to the lowest request cylinder without servicing any requests
along the way, and again moves in the original direction servicing the remaining requests.

It differs from the C SCAN algorithm in that C SCAN forces the disc arm to travel to the last
cylinder regardless of whether or not any requests are to be handled on that cylinder.



EXAMPLE

Consider the following disc request sequence for a 200-track disc (tracks numbered 0–199):

98, 137, 122, 183, 14, 133, 65, 78

Starting at 54, the head pointer moves toward the higher-numbered tracks. Find the number of
head movements in cylinders using C-LOOK scheduling.

Number of cylinders crossed = 11 + 13 + 20 + 24 + 11 + 4 + 46 + 169 = 298 (the 169 is the jump from the highest request, 183, back to the lowest, 14)
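Both LOOK totals can be checked with the same kind of sketch. The Python functions below
assume requests exist on both sides of the head (true for this example) and, for C-LOOK, that
the wrap-around jump is counted; they reproduce the LOOK total (209) and the C-LOOK total (298):

    def look_seek(head, requests):
        # LOOK, head initially moving toward lower tracks: go down only as far
        # as the lowest request, then reverse up to the highest request.
        lower = [r for r in requests if r < head]
        upper = [r for r in requests if r > head]
        return (head - min(lower)) + (max(upper) - min(lower))

    def clook_seek(head, requests):
        # C-LOOK, head moving toward higher tracks: go up to the highest
        # request, then jump back to the lowest request (jump counted).
        lower = [r for r in requests if r < head]
        upper = [r for r in requests if r > head]
        return (max(upper) - head) + (max(upper) - min(lower))

    reqs = [98, 137, 122, 183, 14, 133, 65, 78]
    print(look_seek(54, reqs))   # prints 209
    print(clook_seek(54, reqs))  # prints 298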

KEY TAKEAWAY

➢ Disk scheduling is a crucial aspect of operating system design, optimizing the order in
which I/O requests are fulfilled to enhance efficiency.
➢ Several terms are central to disk scheduling, including seek time, rotational latency,
transfer time, disk access time, and disk response time.



➢ The goal of disk scheduling algorithms is to achieve fairness, high throughput, and
minimal head movement during disk operations.
➢ Various disk scheduling algorithms have been developed, each with its advantages and
limitations.
➢ The First-Come-First-Serve (FCFS) algorithm processes requests in the order of arrival
but may not minimize seek time.
➢ The Shortest Seek Time First (SSTF) algorithm prioritizes requests with the least arm
movement but can lead to starvation for some requests.
➢ SCAN and C-SCAN algorithms involve the disk arm moving in specific directions,
addressing requests as it reaches the end and then reversing.
➢ LOOK scheduling is similar to SCAN but stops when no requests exist in the current
direction.
➢ C-LOOK scheduling is akin to C-SCAN but does not force the arm to travel to the last
cylinder.
➢ Each algorithm aims to strike a balance between efficient utilization of disk resources
and minimizing access times based on varying scenarios and requirements.



I/O MANAGEMENT

SUB LESSON 8.5



RAID AND DISK CACHE

RAID (REDUNDANT ARRAY OF INDEPENDENT DISKS)


RAID (redundant array of independent discs) is a method of storing the same data on numerous
hard discs or solid-state drives (SSDs) in separate locations to protect data in the event of a
drive failure. However, there are several RAID levels, and not all of them are designed to
provide redundancy.

Disks have high failure rates, and hence there is a risk of data loss and considerable downtime
for restoring data and replacing disks. Many techniques have been introduced to improve disk
usage. One such technology is RAID (Redundant Array of Inexpensive Disks, the original
expansion of the acronym). Its organization is based on disk striping (or interleaving), which
uses a group of disks as one storage unit.

Disk striping is a way of increasing the disk transfer rate by up to a factor of N, by splitting
files across N different disks. Instead of saving all the data from a given file on one disk, it is
split across many. Since the N heads can now seek independently, the speed of transfer is, in
principle, increased manifold. Logical disk data/blocks can be written on two or more separate
physical disks, which can then transfer their sub-blocks in parallel. The total transfer rate of
the system is directly proportional to the number of disks: the larger the number of physical
disks striped together, the larger the total transfer rate of the system. Hence, overall
performance and disk-access speed are also enhanced.

An enhanced version of this scheme is mirroring or shadowing. In this RAID organization, a
duplicate copy of each disk is kept; it is costly, but a much faster and more reliable approach.
The disadvantage of plain disk striping is that, if one of the N disks becomes damaged, the
data on all N disks is lost. Striping therefore needs to be combined with a reliable form of
backup in order to be successful.

Another RAID scheme uses some disk space for holding parity blocks. Suppose three or more
disks are used; then one of the disks acts as a parity disk, each of whose blocks stores the
parity of the corresponding bit positions of the blocks on the other disks. If an error occurs or
a disk develops problems, all of its data bits can be reconstructed from the remaining disks
and the parity. This technique is known as disk striping with parity, or block-interleaved
parity; it retains the speed benefit of striping while adding redundancy. However, writing or
updating any data on a disk requires corresponding recalculation of, and changes to, the parity
block. To overcome this bottleneck, the parity blocks can be distributed over all the disks.
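The parity idea is easy to demonstrate. The following Python sketch (with three made-up byte
blocks standing in for data disks) computes a parity block as the byte-wise XOR of the data
blocks and then rebuilds a "failed" disk from the survivors and the parity:

    # Three data "disks", each holding one block of bytes (illustrative values).
    data_disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]

    # The parity block is the byte-wise XOR of the corresponding bytes.
    parity = bytes(a ^ b ^ c for a, b, c in zip(*data_disks))

    # Pretend disk 1 failed: XOR the surviving disks with the parity block.
    lost = 1
    survivors = [d for i, d in enumerate(data_disks) if i != lost]
    rebuilt = bytes(x ^ y ^ p for x, y, p in zip(survivors[0], survivors[1], parity))
    assert rebuilt == data_disks[lost]   # the lost block is fully reconstructed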

HOW RAID WORKS


RAID improves performance by storing data on several discs and allowing input/output (I/O)
operations to overlap in a balanced manner. Storing data redundantly also increases the mean
time between failures and improves fault tolerance.

The operating system (OS) sees a RAID array as a single logical drive.

RAID involves disc mirroring or disc striping techniques. Mirroring copies identical data onto
more than one drive. Striping partitions the data and distributes it over several disc drives:
the storage space on each disc is divided into chunks ranging from 512-byte sectors to several
megabytes, and the stripes of all the discs are interleaved and addressed sequentially. Disk
mirroring and disc striping can also be combined in a RAID array.

On a single-user system where large records are kept, stripes are often configured to be small
(512 bytes, for example) so that a single record spans all the discs and can be accessed rapidly
by reading all the discs at the same time.

Better performance in a multiuser system requires a stripe wide enough to hold the typical or
maximum-size record, allowing overlapped disc I/O across the drives.

RAID CONTROLLER
A RAID controller is a piece of hardware that manages the hard disc drives in a storage array. It
can be used as a layer of abstraction between the operating system and the actual discs,
presenting disc groups as logical units. Utilizing a RAID controller can boost performance while
also protecting data in the event of a crash.

RAID controllers can be either hardware- or software-based. In a hardware-based RAID device, a
physical controller oversees the entire array. The controller can be configured to support drive
formats such as Serial Advanced Technology Attachment (SATA) and Small Computer System
Interface (SCSI). A physical RAID controller can also be built into the motherboard of a server.

The controller in software-based RAID makes use of the physical system's resources, such as the
central CPU and memory. While it performs the same responsibilities as a hardware-based RAID
controller, software-based RAID controllers may not provide as much of a speed gain and may
interfere with the performance of other server applications.

If a software-based RAID implementation is incompatible with a system's boot process, and
hardware-based RAID controllers are too expensive, firmware- or driver-based RAID is a viable
alternative.

Similar to software-based RAID, firmware-based RAID controller chips are situated on the
motherboard, and all operations are executed by the central processing unit (CPU). With
firmware, however, the RAID system is only active at the start of the boot procedure; after the
operating system has been loaded, the controller driver takes over the RAID operation. A
firmware RAID controller is less expensive than a hardware RAID controller, but it places more
load on the computer's CPU. Firmware-based RAID is also known as hardware-assisted software
RAID, hybrid-model RAID, and fake RAID.

RAID LEVELS
RAID devices employ various versions known as levels. The original study that originated the
name and introduced the RAID configuration idea defined six RAID levels ranging from 0 to 5.
This numbering system allowed IT personnel to distinguish between RAID versions. Since then,
the number of levels has grown and been divided into three categories: standard, nested, and
nonstandard RAID levels.

DISK CACHE

Disk caching is an extension of buffering. The word cache is derived from the French cacher,
meaning to hide. In this context, a cache is a collection of blocks that logically belong on the
disk but are kept in memory for performance reasons. Disk caches are used in multiprogramming
environments and in disk file servers, which maintain a separate section of main memory called
the disk cache: a set of buffers containing recently used blocks.

The cached buffers in memory are copies of the disk blocks, and if any data in them is modified,
only the local copy is updated at first; to maintain integrity, updated blocks must eventually
be transferred back to the disk. Caching is based on the assumption that recently accessed
blocks are likely to be accessed again soon. When a new block must be brought into a full
cache, some block already there is selected for "flushing" to the disk. Also, to avoid loss of
updated information in the event of failures or loss of power, the system can periodically flush
cache blocks to the disk. The key to disk caching is keeping frequently accessed blocks in the
disk cache buffer in primary storage.
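As a concrete illustration of these ideas, here is a minimal Python sketch of a write-back disk
cache with least-recently-used eviction. The disk is modeled as a plain dictionary mapping block
numbers to data; all names and sizes are illustrative, not a real OS interface:

    from collections import OrderedDict

    class DiskCache:
        def __init__(self, disk, capacity=64):
            self.disk = disk                    # dict: block number -> data
            self.capacity = capacity
            self.cache = OrderedDict()          # block number -> (data, dirty)

        def read(self, block):
            if block in self.cache:             # hit: serve from memory
                self.cache.move_to_end(block)
                return self.cache[block][0]
            data = self.disk[block]             # miss: fetch from the "disk"
            self._insert(block, data, dirty=False)
            return data

        def write(self, block, data):
            self._insert(block, data, dirty=True)   # update only the cached copy

        def _insert(self, block, data, dirty):
            self.cache[block] = (data, dirty)
            self.cache.move_to_end(block)
            if len(self.cache) > self.capacity:     # evict least recently used
                old, (old_data, old_dirty) = self.cache.popitem(last=False)
                if old_dirty:
                    self.disk[old] = old_data       # flush modified block back

        def flush(self):
            # Periodic flush: write all modified blocks back to the disk.
            for block, (data, dirty) in self.cache.items():
                if dirty:
                    self.disk[block] = data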

A disc cache (cache memory) is a temporary holding space in a hard drive or random access
memory (RAM) where the computer saves frequently needed information. The computer can
use it to speed up the process of storing and accessing information from the disc cache
considerably faster than if the information was stored in the usual area (which could be on a
disc or in a slower-accessing part of the computer's memory). A disc cache may also refer to a
disc buffer or a cache buffer.

Working with information stored in memory (RAM) is significantly faster than working with
information saved on a disc; that is the underlying idea behind a disc cache. A disc cache is a
software mechanism that reserves a region of memory in which it stores copies of information
previously read from your drives. When your computer needs that information again, it can get
it directly from the cache rather than from the slower disc. If, however, the CPU (central
processing unit, the chip that runs the computer) requires data that is not in the disc cache,
it must still go to the disc and fetch it; the cache only helps for data it already holds.

Because related data is frequently physically contiguous on a disc, the cache may also make a
copy of information near previously utilized data in the hope that extra data may be needed in
the future. Nevertheless, because cache space is limited, only a portion of the information on



your disc is in the cache at any given time. The caching utility, on the other hand, arranges
things such that the information your software most frequently uses remains in the cache.

Some disc caching tools also cache files that you want to save as well as other data that your
computer is attempting to save to a disc. Because the cache delivers this data to the disc in
little drips and drabs rather than all at once, you don't have to wait for the saving procedure to
complete before returning to work. Although advertisements claim that a disc cache "speeds
up your hard disc," the disc does not operate any faster—it simply isn't used as frequently.

The Memory Control Panel on a Mac allows you to specify how much memory to set aside as a
disc cache. In System 6 you can disable the cache entirely; in System 7 you can reduce it to
16 K. Bear in mind that whatever amount is selected as disc cache depletes your main RAM. If
you only have 2 MB in your computer, turn the cache off or down to 16 K, because you won't
notice a difference with less than a 256 K cache, and you really can't spare any RAM if you only
have 2 MB. Try setting your cache between 256 K and 1,024 K if you have 4 MB or more of
memory and see if you observe a performance improvement. If you don't, reduce it as much as
possible, because there's no point reserving that RAM if it's not helping you.

On IBM-compatible PCs, you must run your disc caching utility as a device driver (through your
CONFIG.SYS file) or as a memory-resident program (usually by including it in your
AUTOEXEC.BAT file). With Windows and MS-DOS versions 5.0 and higher, Microsoft included a
good caching application called SMARTDrive. If you're prepared to spend more, you can
purchase faster caching utilities from various vendors, the best known of which is Super
PC-Kwik.



FILE MANAGEMENT



SUB LESSON 9.1

INTRODUCTION AND FILE ORGANIZATION

INTRODUCTION OF FILE MANAGEMENT


File management is one of the operating system's fundamental and most significant features.
File management in an operating system is simply the software that handles or manages the
files (binary, text, pdf, documents, audio, video, and so on) present on a computer system. The
file system in the operating system can manage both individual files and groups of files on the
computer. The file system also informs us about the location, owner, date and time of creation
and modification, type, and state of a file on the computer system.

Let us first review operating systems and files before diving into the file system in the operating
system.

Files are collections of related information kept on various storage devices such as flash
drives, hard disc drives (HDD), magnetic tapes, optical discs, cassettes, and so on. Files can be
read-only or write-only. Files are simply used to provide input(s) and obtain output(s).

An operating system is simply a software program that serves as a bridge between hardware,
application software, and users. An operating system's primary goal is to manage all computer
resources. Simply put, the operating system provides a platform for application software and
other system software to perform their functions.


The features of an operating system are:

1. providing security to the system and application software.


2. memory management.



3. disk management.
4. I/O operations.
5. file management, etc.

As noted above, file management is one of the operating system's fundamental but vital
functionalities. It manages all of the files, with their various extensions, that are present in
the computer system (such as .exe, .pdf, .txt, .docx, etc.).

We may also use the operating system's file system to retrieve information about any file(s) on
our system. Details can include:

● location of the file (the logical location where the file is stored in the computer system)
● the owner of the file (who can read or write on the particular file)
● when was the file created (time of file creation and modification time)
● a type of file (format of the file for example text, pdfs, docs, etc.)
● state of completion of the file, etc.

The file must have a predefined structure or format in order to be managed by the operating
system or to be understood by the operating system. In most operating systems, there are
three types of file structures:

1. text file: Text files are non-executable files that include a series of numbers, symbols,
and letters structured in the form of lines.
2. source file: A source file is an executable file that comprises a series of routines and
processes. In layman's terms, a source file is a file that contains the program's
instructions.



3. object file: A file containing object codes in the form of assembly language code or
machine language code is known as an object file. Object files, in simple words, contain
program instructions in the form of a series of bytes and are structured in the form of
blocks.

OBJECTIVES OF FILE MANAGEMENT IN OPERATING SYSTEMS


We learned a lot about files, operating systems, and file management in operating systems in
the previous section. Let's look at some of the goals of file management in operating systems.

● The operating system's file management allows users to create new files, as well as
change and delete existing files located across the computer system.
● The file management program in the operating system keeps track of the locations where
files are stored so that they can be quickly retrieved.
● Because processes share files, making files shareable between processes is one of the
most significant functions of file management in operating systems. It enables several
processes to securely access the information they require from a file.
● The operating system file management software also handles the files, reducing the
possibility of data loss or destruction.
● The file management in the operating system gives input-output operation support to
the files so that data can be written to, read from, or extracted from the file(s).
● It also serves as a standard input-output interface for both the user and the system
operations. The intuitive interface enables quick and easy data editing.
● The operating system's file management system also manages the various user
permissions that are present on a file. The operating system provides three types of user
permissions: read, write, and execute.
● The operating system's file management supports many types of storage devices such as
flash drives, hard disc drives (HDD), magnetic tapes, optical discs, tapes, and so on, and
it also allows the user(s) to quickly store and remove them.



● It also organizes the files hierarchically in the form of files and folders (directories),
making file administration easier from the user's perspective.

PROPERTIES OF FILE MANAGEMENT IN OPERATING SYSTEMS


Having covered the objectives of file management in operating systems, let us now look at its
properties.

1. The files are organized or grouped into a more complex structure, such as a tree, to
reflect the relationships between the various files. File systems function much as
libraries organize books: a special directory is normally located at the root of a
hierarchical file system, which is similar to a tree in appearance.

2. Every file has a name and access rights that indicate who can access the file and in
which mode (read or write).

For example, consider a permission string such as -rwxr-xr-- (a small decoding sketch
follows this list). Here r indicates that the file is readable, w indicates that it is
writeable, and x indicates that it is executable; the dash (-) symbol indicates that the
permission is not granted. As we all know, there are three permission groups in
LINUX-based operating systems (owner, group, and other). The first character indicates
the file type (file or directory). The following three characters reflect the owner's
permissions, the next three represent the permissions of the group, and the last three
represent the permissions of the others.

3. When a user signs off, files saved on secondary storage devices are not deleted; data
stored only in primary memory, such as RAM, is lost.
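Here is the decoding sketch promised above: a few lines of Python that break an illustrative
permission string into its file-type character and its three permission groups. The string
itself is just an example value, not output from a real system:

    def decode_permissions(perm):
        kinds = {"-": "file", "d": "directory", "l": "symbolic link"}
        print(kinds.get(perm[0], "special"), "with permissions:")
        groups = [("owner", perm[1:4]), ("group", perm[4:7]), ("other", perm[7:10])]
        for who, bits in groups:
            modes = [name for flag, name in zip(bits, ("read", "write", "execute"))
                     if flag != "-"]
            print(" ", who + ":", ", ".join(modes) or "none")

    decode_permissions("-rwxr-xr--")
    # file with permissions:
    #   owner: read, write, execute
    #   group: read, execute
    #   other: read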

FUNCTIONS OF FILE MANAGEMENT IN OPERATING SYSTEMS


Let us now discuss some of the most significant file management functionalities in operating
systems.

● Users can use this program to create, change, and remove files on the computer system.
● Manages the locations of files stored in secondary or primary memory.



● Manages and handles the permissions of a specific file for different users and groups.
● Organizes files into a tree-like arrangement for easier viewing.
● Interfaces with I/O operations.
● Files are protected from unauthorized access by hackers.

KEY TAKEAWAY

➢ File management is a crucial feature of operating systems, serving as the software that
oversees various types of files within a computer program.
➢ It handles individual and grouped files, providing information about their location,
ownership, creation and modification timestamps, types, and states.
➢ Operating systems act as a bridge between hardware, applications, and users, with file
management being one of its fundamental functionalities.
➢ Files, including binary, text, pdfs, and others, are stored on various devices, and file
management ensures efficient organization and retrieval.
➢ The objectives of file management in operating systems encompass creating, modifying,
and deleting files, facilitating file sharing among processes, reducing the risk of data loss,
supporting input-output operations, and managing user permissions.
➢ The properties of file management involve organizing files hierarchically and assigning
names and access rights to each file.
➢ The functions of file management include creating, changing, and removing files,
managing file locations, handling permissions, organizing files into a tree structure,
interfacing with I/O operations, and safeguarding files from unauthorized access.
➢ Overall, effective file management is essential for streamlined and secure computing
operations.



FILE MANAGEMENT

SUB LESSON 9.2



FILE DIRECTORIES AND FILE SHARING

FILE DIRECTORIES

A file directory is an organized collection of directory entries, each of which refers to a file
or to another directory; as a result, a tree structure/hierarchy can be created. Directories are
used to organize files belonging to various applications/users. Large-scale time-sharing and
distributed systems hold thousands of files and large amounts of data, and a file system must be
correctly organized for this type of setting. A file system can be partitioned or divided into
volumes, which provide different sections within a single disc that are handled as separate
storage devices for files and directories. Hence, directories allow files to be divided based on
user and user application, simplifying system administration issues such as backups, recovery,
security, integrity, name collisions (file name clashes), file cleaning, and so on.

The device directory stores information about all the files on the partition, such as their
name, location, size, and type. The root directory is the area of the disc from which the
directory tree points to the user directories; it differs from subdirectories in that it has a
defined position and size. The directory thus functions similarly to a symbol table, converting
file names into their associated directory entries.

The operations performed on a directory or file system are:

1) Create, delete, and modify files.

2) Search for a file.

3) Mechanisms for sharing files should provide controlled access like read, write, execute or
various combinations.

4) List the files in a directory and also the contents of the directory entry.

5) Renaming a file when its contents or uses change or file position needs to be changed.



6) Backup and recovery capabilities must be provided to prevent accidental loss or malicious
destruction of information.

7) Traverse the file system.

The most common schemes for describing logical directory structure are

I. Single-level directory

All of the files are contained in the same directory, which is simple and easy to grasp;
however, all files must have unique names. Furthermore, even with a single user, as the number
of files grows it becomes difficult to remember and track all of the file names. The figure
depicts this hierarchy.

II. Two-level directory

We can overcome the limitations of the previous scheme by creating a separate directory, called
a User File Directory (UFD), for each user. When a user first logs in, the system's Master File
Directory (MFD) is searched; the MFD is indexed by the user's username/account and holds a
reference to that user's UFD. Hence, different users may use the same file names, but names must
be unique within each UFD. This fixes the name-collision problem to some extent, but the
directory structure isolates one user from another, which is not always desirable when users
need to communicate or collaborate on a job. The figure clearly depicts this scheme.



III. Tree-structured directory

The two-level directory structure is like a two-level tree. To generalize, we can extend the
directory structure to a tree of any height. As a result, the user can construct his or her own
directories and subdirectories, as well as organize files. One bit in each directory entry
specifies whether the entry is a file (0) or a subdirectory (1).

The tree contains a root directory, and each file within it has a distinct path name (the path
from the root, through all subdirectories, to the specified file). The pathname precedes the
filename and aids in navigating to the needed file from a base directory. Depending on the base
directory, pathnames can be of two types: absolute path names or relative path names. An
absolute path name starts at the root and proceeds to the specific file location; it is a
complete pathname with the root directory as its reference point. A relative path name defines
the path relative to the current directory.



IV. Acyclic-graph directory:

This scheme, as the name implies, features a graph with no cycles. The approach introduces the
concept of a shared common subdirectory/file, which exists in the file system in two (or more)
locations at the same time. With two separate copies of a file, changes in one copy are not
reflected in the other; in a shared file, by contrast, only one actual file is used, so
modifications are immediately apparent. An acyclic graph is thus a generalization of a
tree-structured directory system. This is important when several people are working as a team
and need access to shared files housed in a single directory. The strategy can be implemented by
adding a new directory entry, known as a link, that points to another file or subdirectory. This
arrangement for directories is depicted in the figure.



One limitation of this strategy is that a shared file may now be reached through several
different absolute path names, which complicates traversing the file system as a whole. Another
issue is the occurrence of dangling pointers to deleted files, which can be avoided by keeping
the file until all references to it are removed. A new entry is added to the file-reference list
each time a link or a copy of a directory entry is established; because such a list can become
too long, only a count of the number of references is usually preserved. As references to the
file are added or deleted, this count is incremented or decremented.

V. General graph Directory:

Cycles are not permitted in acyclic graphs. When there are cycles, the reference count may be
non-zero even if the directory or file is no longer referenced. In such cases, garbage
collection is needed. This technique necessitates traversing the entire file system and marking
only the accessible entries; a second pass then collects everything that is not marked onto a
free-space list. The figure illustrates this.



FILE SHARING

The accessing or sharing of files by one or more users is known as file sharing, sometimes
known as file swapping. It is used on computer networks to transmit data quickly. A file-sharing
system typically has more than one administrator, with users having the same or different
access privileges. It also involves having a set number of personal files in common storage.

For many years, file sharing has been used in mainframe and multi-user computer systems, and
now that the internet is widely available, a file transmission method known as the File Transfer
Protocol (FTP) is extensively used.

The internet can be thought of as a large-scale file-sharing system in which files are constantly
downloaded or viewed by online users.

With a file-sharing system, users can both read and write shared files, subject to the access
rights granted by the file's owner.



When numerous people have access to the files, certain complications develop. The system can
either grant users access by default or require the user to grant access to the files explicitly.
This poses control and security issues, and in order to implement sharing and security, the
system must keep more file and directory attributes than a single-user system would.

KEY TAKEAWAY

➢ File directories play a crucial role in organizing files within an operating system, forming
a tree structure or hierarchy.
➢ They help manage large amounts of data and prevent issues such as name collisions,
providing a structured approach to system administration tasks like backups, recovery,
security, and integrity.
➢ File systems can be partitioned into volumes, allowing for efficient storage device
management.
➢ Operations performed on directories and file systems include creating, deleting,
modifying, searching, sharing, listing, renaming, backing up, recovering, and traversing.
➢ Different logical directory structures, such as single-level, two-level, tree-structured,
acyclic-graph, and general graph, offer varying degrees of organization and user
isolation.
➢ File sharing, an essential aspect of file management, involves the access and
transmission of files among users, typically facilitated by systems like FTP.
➢ However, managing file access in shared environments requires careful consideration of
control and security aspects.



FILE MANAGEMENT

SUB LESSON 9.3



RECORD BLOCKING AND DISK SPACE MANAGEMENT

RECORD BLOCKING

Records are the logical units of a file, whereas blocks are the units of I/O with secondary
storage. I/O is performed one block at a time, so for I/O purposes the records must be organized
into blocks. Blocking is the process of grouping records into blocks. Generally, larger blocks
reduce the total I/O transfer time, but larger blocks also require larger I/O buffers.

Types of Record Blocking in OS:

Generally, there are three types of record-blocking methods.

1. Fixed blocking
2. Variable-length spanned blocking
3. Variable-length unspanned blocking

FIXED BLOCKING:

In this method, record lengths are fixed, and a prescribed number of records is stored in each
block. Internal fragmentation may occur as unused space at the end of each block. Fixed blocking
is common for sequential files with fixed-length records.
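The arithmetic of fixed blocking is simple. This small Python sketch, using assumed block and
record sizes, computes the blocking factor (records per block) and the internal fragmentation
left at the end of each block:

    block_size = 512    # bytes per block (assumed)
    record_size = 60    # bytes per fixed-length record (assumed)

    records_per_block = block_size // record_size                    # 8 records
    wasted_per_block = block_size - records_per_block * record_size  # 32 bytes

    print(records_per_block, wasted_per_block)   # prints: 8 32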



VARIABLE-LENGTH SPANNED BLOCKING:

In this method, record sizes are not all the same. Variable-length records are packed into
blocks with no unused space, so some records may be divided across two blocks; in that case a
pointer passes from one block to the block holding the record's continuation. This method makes
efficient use of storage and does not limit record size, but it is more complicated to
implement, and the files are more difficult to update.

VARIABLE-LENGTH UNSPANNED BLOCKING:



Here, records are variable in length, but they are not allowed to span blocks. Wasted space can
be a serious problem, because the remainder of a block cannot be used if the next record is
larger than the remaining unused space. This blocking method therefore wastes space and limits
record size to the size of a block.

DISK SPACE MANAGEMENT

Direct access to disks and keeping files in adjacent areas of the disk are highly desirable. But
the problem is how to allocate space to files for effective disk space utilization and quick
access. Also, as files are allocated and freed, the space on a disk becomes fragmented. The
major methods of allocating disk space are:

i) Continuous

ii) Non-continuous (indexing and chaining)

i) Continuous

Because each file in this method occupies a set of contiguous blocks on the disc, it is also
known as contiguous allocation. On the disc, there is a linear ordering of disc addresses. It was
used in the early interactive system VM/CMS. The benefit of this method is that subsequent
logical records are physically adjacent and do not require head movement; as a result, disc seek
time is reduced, which speeds up record access. The technique is also reasonably
straightforward to implement.

Dynamic disc space allocation is a mechanism in which the operating system gives out units of
file space on demand to user-run applications. In general, space is allotted in fixed-size units
known as allocation units, or 'clusters' in MS-DOS. A cluster is a simple multiple of the
physical sector size of a disc, which is typically 512 bytes. Disk space can be thought of as a
one-dimensional array of data stores, with each store representing a cluster. A greater cluster
size minimizes the possibility of fragmentation while increasing the likelihood of unused space
within clusters. Utilizing clusters larger than one sector minimizes fragmentation and the
amount of disc space required to record information about the disk's used and vacant portions.

Contiguous allocation keeps only the first block's disc address (the start of the file) and the
length (in block units). If a file is n blocks long and starts at block b, it will occupy blocks
b, b+1, b+2, ..., b+n-1. To choose a free hole from the available ones, first-fit and best-fit
procedures may be used (a small sketch of both follows). However, the main issue here is finding
enough space for a new file. This scheme is depicted in the figure.
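Here is the promised Python sketch of the hole-selection step; the free-hole list and the
requested size are made-up values. First-fit takes the first hole that is large enough, while
best-fit takes the smallest hole that still fits:

    holes = [(0, 5), (12, 40), (60, 12), (90, 25)]  # (start block, length), assumed
    need = 10                                       # blocks required by the new file

    fits = [h for h in holes if h[1] >= need]
    first_fit = fits[0] if fits else None                        # (12, 40)
    best_fit = min(fits, key=lambda h: h[1]) if fits else None   # (60, 12)

    start, length = best_fit
    file_blocks = list(range(start, start + need))  # blocks b, b+1, ..., b+n-1
    print(first_fit, best_fit, file_blocks)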

This technique suffers from fragmentation issues similar to those of variable memory
partitioning. Repeated allocation and deallocation can leave regions of free disc space
fragmented into chunks (parts) between the active areas, which is known as external
fragmentation. The problem can be solved by a repacking procedure known as compaction: the
entire file system is copied to tape or to another disc, the original disc is then completely
freed, and the files are restored to the original disc using contiguous space. However, this
strategy can be highly time-consuming. Furthermore, declaring the size in advance is
problematic, because the final size of a file is often unpredictable. The scheme does, however,
allow both sequential and direct access: almost no seeks are required for sequential access,
direct access using seek and read is also quick, and calculating the location of a data block is
quick and easy because we only require an offset from the beginning of the file.

ii) Non-Continuous (Chaining and Indexing)

These schemes have largely replaced the previous one. The following are the most common non-
contiguous storage allocation schemes:

• Linked/Chained allocation

• Indexed Allocation.

Linked/Chained allocation:

All files are stored in fixed-size blocks that are linked together in the same way as a linked
list. The disk blocks can be located anywhere on the disc. The directory contains pointers to
the file's first and last blocks, and each block contains a pointer to the following block;
these pointers are not visible to the user. There is no external fragmentation, because any free
block can be used for storage, so compaction and repositioning are unnecessary. However, because
the blocks are scattered over the disc and pointers must be followed from one disc block to the
next, the scheme is potentially inefficient for direct-access files. It is effective for
sequential access, although it may still create long seeks between blocks. Another consideration
is the additional storage space required for the pointers. There is also a reliability issue,
owing to the possible loss or damage of any pointer. Doubly linked lists could be a solution to
this problem, but they introduce additional overhead for each file; on the other hand, a doubly
linked list makes searching easier because blocks are threaded forward and backward. The figure
shows a linked/chained allocation in which each block provides information about the next block
(i.e., a pointer to the next block).

The File Allocation Table (FAT), used by MS-DOS and OS/2, is another variation on the linked
list. Each partition begins with a table that has one entry for each disc block, indexed by
block number. The directory entry gives the block number of the file's first block; the table
entry indexed by that block number contains the block number of the next block in the file, and
so on. The table entry for the last block in the file holds an end-of-file (EOF) value, so the
chain is followed until an EOF table entry is met. We still have to follow the pointers
sequentially, but we do not have to go to the disc for each one. A table value of 0 (zero)
indicates an unused block, so allocating free blocks with the FAT scheme is simple: just look for
the first entry with a 0 table value. The figure depicts this system.
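Following a FAT chain is just repeated table lookup. The Python sketch below uses a tiny,
made-up table: keys are block numbers, values are the next block in the file, "EOF" ends a
chain, and 0 marks a free block:

    FAT = {1: 0, 2: 5, 3: 0, 5: 9, 9: "EOF"}   # illustrative table contents
    first_block = 2                            # taken from the directory entry

    blocks, b = [], first_block
    while b != "EOF":
        blocks.append(b)        # visit this block of the file
        b = FAT[b]              # the table entry gives the next block number

    print(blocks)               # prints [2, 5, 9]: the file's blocks, in order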



Indexed Allocation:

Each file here has its own index block. The index keeps an array of block pointers for each
file, with each entry pointing to a disc block containing actual file data; thus an index block
is an array of disc block addresses, and the i-th entry of the index block points to the i-th
block of the file. The main directory holds the address of the index block on the disc, and all
of the pointers in the index block are initially set to NIL. This approach has the advantage of
supporting both sequential and random access. Searching can be conducted within the index
blocks, and index blocks can be stored close together in secondary storage to reduce seek time.
Additionally, only a small amount of space is wasted on the index, and there is no external
fragmentation. However, several constraints remain, such as the necessity to establish a maximum
file length and to provide an overflow scheme for files larger than the anticipated value.
Insertions may necessitate the complete reconstruction of the index blocks. The figure depicts
the indexed allocation scheme diagrammatically.
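Direct access under indexed allocation is a single array lookup, as this short Python sketch
shows; the index-block contents and block size are illustrative values:

    NIL = None
    index_block = [17, 4, 22, 31, NIL, NIL, NIL, NIL]  # i-th entry -> i-th data block

    def data_block_for(offset, block_size=512):
        # Map a byte offset within the file to the disk block holding it.
        return index_block[offset // block_size]

    print(data_block_for(1200))  # offset 1200 is in logical block 2 -> disk block 22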



KEY TAKEAWAY

➢ Record blocking and disk space management are essential components of file
management in operating systems.
➢ Record blocking involves organizing records into blocks for efficient I/O operations with
secondary storage.
➢ Three types of record-blocking methods include fixed blocking, variable-length spanned
blocking, and variable-length unspanned blocking, each with its own advantages and
complexities.
➢ Disk space management is crucial for effective utilization and quick access to disk space.
➢ Two major methods for allocating disk space are continuous and non-continuous.
Continuous allocation, also known as contiguous allocation, assigns a set of contiguous
blocks to each file, minimizing fragmentation and reducing disc seek time.
➢ However, it suffers from issues like fragmentation and requires size declaration in
advance. Non-continuous allocation includes linked/chained allocation and indexed
allocation.
➢ Linked/chained allocation uses linked blocks like a linked list, suitable for sequential
access but potentially inefficient for direct access. Indexed allocation assigns an index



block to each file, supporting both sequential and random access with minimal wasted
space.
➢ However, it comes with constraints like establishing a maximum file length and potential
reconstruction for insertions.
➢ Each method has its advantages and challenges, impacting system performance and
efficiency.



LINUX SHELL PROGRAMMING

SUB LESSON 10.1



INTRODUCTION OF LINUX AND SHELL


An open-source operating system called Linux has completely changed the computing industry.
Linux has established itself as the foundation of many devices, including personal computers,
servers, and even embedded systems, because of its stability, security, and adaptability. In
addition to Linux, shell programming is essential for automating processes and maximizing the
operating system's functionality.

Linux was created in 1991 by Linus Torvalds and is based on the ideas of open-source
development. This indicates that the public can see, alter, and share its source code without
paying any fees. An operating system that can compete with commercial alternatives is the
result of the international cooperation of thousands of developers.

There are several benefits to using Linux. First off, Linux is known for being incredibly stable. It
is a great option for crucial systems and servers because it is known to operate for extended
periods of time without needing to reboot. The effective memory management, process
separation, and error handling methods built into the Linux kernel are responsible for its
stability.

Second, security is an area where Linux shines. Because Linux is open-source, extensive code
review by security professionals is made possible, guaranteeing that vulnerabilities are found
and fixed right away. Furthermore, Linux offers fine-grained access restrictions that let
administrators give users particular permissions in order to maintain a secure computer
environment.

Third, Linux offers much customizability. Users are free to customize and enhance the operating
system to meet their own requirements. Linux distributions like Ubuntu, Fedora, and Debian



offer a variety of alternatives for desktop, server, and specialized computing environments to
accommodate various user needs.

Additionally, Linux has a huge software ecosystem. The Linux community has created sizable
repositories of software packages that include a variety of tools, libraries, and applications.
Users may easily identify and install software thanks to package management systems like apt,
yum, and pacman, including web servers and scientific computing frameworks.

A basic feature of Linux is shell programming, which enables users to communicate with the
operating system via a command-line interface. A programme called the shell interprets user
commands and carries them out appropriately. The Bash shell, which stands for "Bourne Again
SHell," is among the most well-liked types of shells. It offers a robust and feature-rich scripting
environment and is the default shell for many Linux distributions.

Shell programming helps users to effectively manage system setups, automate jobs, and carry
out a variety of procedures. Shell scripts are collections of commands that the shell interprets
from scripts written in a scripting language. By combining several commands, loops,
conditionals, and variables, these scripts can be utilized to carry out complex processes.

Shell scripting is an excellent resource for both inexperienced and seasoned users due to its
simplicity and flexibility. Users can use it to automate repetitive tasks like file manipulation,
backup procedures, and system monitoring by writing scripts. Users can save time and effort
while assuring consistency and accuracy in task execution by encapsulating a set of commands
within a script.

Shell programming also makes it easier to create interactive programmes and utilities. Shell
scripts can prompt users for information, validate inputs, and perform suitable actions based on
the provided conditions with the use of user input and conditional statements. Shell
programming is a crucial ability for system administrators, developers, and power users
because of its versatility.



WHAT IS SHELL?
A shell is a command-line interface (CLI) program that provides a way for users to interact with
an operating system. It acts as an intermediary between the user and the operating system,
accepting commands entered by the user and executing them accordingly.

The shell takes textual commands and translates them into instructions that the operating
system can understand and execute. It provides a command-line prompt where users can type
commands and receive responses or perform actions based on those commands.

In addition to accepting and executing commands, the shell also provides various features and
functionalities to enhance the user experience. These include:

1. Command History: The shell normally keeps track of all commands that have already
been run, making it simple for users to recall and reuse them.
2. Tab Completion: Shells frequently offer tab completion, which allows users to start
typing a command or file name and then press the Tab key to have the rest of the text
automatically completed or suggested. This feature speeds up typing and reduces errors.
3. Redirection of Command Input and Output: The shell lets users reroute the input or
output of commands. For instance, a command can take its input from a file rather than
from the keyboard, or send its output to a file rather than displaying it on the screen.
4. Pipelines: Shell programmes give users the ability to combine numerous instructions.
Pipes enable the creation of strong and complicated command sequences by allowing
the output of one command to be transferred as input to another.
5. Environment management and variables: The shell has the ability to define and work
with variables, which can be used in commands and scripts to store data. Additionally, it
controls the shell session's environment, including environment variables that dictate
how scripts and programmes operate.



6. Scripting: The shell's scripting capabilities are one of its main advantages. Users can
automate activities, carry out complex operations, and produce reusable scripts for
different purposes by writing scripts that contain a succession of shell commands.

The Bash shell (Bourne Again SHell) is the most common shell on Unix-like operating systems. It
comes with a wealth of functionality and is compatible with existing Bourne shell scripts,
making it the default shell for many Linux distributions.

The Z shell (zsh), the Korn shell (ksh), and other frequently used shells each have their own
syntax, features, and capabilities.

In conclusion, a shell is a command-line interface program that enables users to communicate
with an operating system by typing commands, which the shell interprets and executes. A
sophisticated tool for interacting with and managing the operating system, it offers a variety
of features and functionalities, including command history, tab completion, input/output
redirection, pipelines, variables, environment management, and scripting.

KEY TAKEAWAY

➢ Linux, an open-source operating system created in 1991 by Linus Torvalds, has


revolutionized the computing industry due to its stability, security, and adaptability.
➢ It serves as the foundation for various devices, from personal computers to servers. The
shell programming, a crucial aspect of Linux, enables users to interact with the
operating system through a command-line interface.
➢ The Bash shell, one of the most popular types, provides a robust scripting environment.
Linux boasts stability, security, customizability, and a vast software ecosystem, making it
a preferred choice.
➢ Shell programming, achieved through scripting with commands, loops, and conditionals,
allows users to automate tasks, manage system setups, and create interactive programs.
➢ Shells, such as Bash, act as intermediaries between users and the operating system,
offering features like command history, tab completion, input/output redirection,
pipelines, and scripting capabilities.



➢ Shell programming is a versatile skill crucial for system administrators, developers, and
power users due to its simplicity and flexibility, allowing for efficient task automation
and improved user experiences.



LINUX SHELL PROGRAMMING

SUB LESSON 10.2



LINUX OPERATING SYSTEM AND ARCHITECTURE

LINUX OPERATING SYSTEM

The open-source operating system Linux has become quite popular and is used extensively in
many different fields, including personal computers, servers, embedded systems, and even
supercomputers. Linux, an operating system designed to resemble Unix, is recognised for its
reliability, security, adaptability, and scalability. This in-depth overview of Linux will cover its
history, architecture, features, distributions, and community, demonstrating why it has become
a popular operating system for many people.

1. History of Linux:

The origins of Linux may be traced to a Finnish computer science student named Linus Torvalds
who began developing a Unix-like operating system kernel in the early 1990s. It was created by
Torvalds as a side project at first, but he eventually made it available to the online community
for participation and enhancement. The Linux kernel, the central element of the Linux
operating system, was created as a result of this cooperative effort.

2. Linux Architecture:

The kernel and user space are the two fundamental parts that make up Linux's architecture.
Low-level features including process control, memory management, device drivers, and security
are offered by the Linux kernel. By controlling the system's resources and providing
communication between the hardware and applications, it serves as a bridge between the
hardware and the software.

A complete operating system environment is provided by the user space, which sits on top of
the kernel and is made up of a variety of software components and libraries that communicate
with the kernel. Utilities, libraries, shells, and graphical user interfaces (GUIs) are all
components of the user space, which enables efficient user interaction with the system.



3. Key Features of Linux:

3.1. Open Source Nature: Since Linux is an open-source operating system, anyone can access its
source code without charge. This transparency enables developers to examine, edit, and share
the code, leading to ongoing development, security upgrades, and quick bug patches.

3.2. Stability and Reliability: The dependability and stability of Linux are well known. Because
of its strong architecture and developed design, it can function for long stretches of time
without needing to be rebooted and manage heavy workloads. When stability is crucial, Linux
systems are frequently utilized in critical infrastructure like servers and embedded systems.

3.3. Security: In addition to process isolation, user-based file permissions, and the ability to run
on secure hardware designs, Linux offers increased security capabilities. Linux is a popular
choice for people and organizations who are concerned about security because of its open-
source nature, which enables speedy identification and patching of flaws.

3.4. Flexibility and Customizability: Linux offers consumers a great deal of flexibility and
customization. Users can select and install only the essential components thanks to the
modular design, creating systems that are light and efficient. In order to create a customized
computing experience, users can choose from a variety of desktop environments, window
managers, and software applications.

3.5. Scalability: Linux is highly scalable and can be configured to run on a variety of hardware
platforms, including powerful servers, supercomputers, embedded systems, and mobile
devices. Due to its scalability, it may be used with a variety of hardware and software.

3.6. Multitasking and Multithreading: Linux enables multitasking, enabling the execution of
many tasks simultaneously. Additionally, it has strong multithreading capabilities that enable
effective use of multi-core processors and enhance overall system performance.

4. Linux Distributions:



Because it is an open-source operating system, Linux offers a huge variety of distributions, or
"distros." A Linux distribution is an all-inclusive package that comes with the Linux kernel, a
package manager, tools, programmes, and frequently a desktop environment. Several well-known
Linux distributions are openSUSE, Fedora, Debian, CentOS, and Ubuntu. Each distribution has
distinct qualities, a distinct target market, and a distinct philosophy to which it caters.

ARCHITECTURE OF LINUX
Linux architecture has the following components:

Figure 10.2.1: Linux architecture

1. Kernel: The kernel is the foundation of an operating system built on Linux. It virtualizes
the computer's shared hardware resources to provide each process access to its own
virtual resources. This gives the impression that the process is the only one active on the
computer. Additionally, the kernel is in charge of avoiding and resolving conflicts
between various programmes.



The various types of kernels include:

● Monolithic kernels
● Hybrid kernels
● Exokernels
● Microkernels

2. System Library: A set of special functions through which application programs and
system utilities access the operating system's capabilities.

3. Shell: An interface to the kernel that conceals the intricacy of the kernel's functions
from users. It accepts user commands and invokes the corresponding kernel operations.

4. Hardware Layer: This layer consists of all peripheral devices such as RAM, HDD, CPU, etc.

5. System Utility: It provides the functionalities of an operating system to the user.

Advantages of Linux

● Linux's primary benefit is that it is an open-source operating system. This indicates that
the source code is easily accessible to everyone and that you are free to contribute,
change, and share the code with anyone without obtaining any special authorizations.
● In terms of security, Linux is among the safest operating systems available. Although
malware for Linux exists, it is less vulnerable than most other operating systems. This
does not imply that Linux is completely secure, but in practice anti-virus software is
rarely needed.
● Linux offers simple and regular software updates.
● There are numerous Linux distributions available, and you can use any of them
depending on your needs or preferences.
● On the internet, Linux can be used without charge.
● The community as a whole supports it.



● It offers very strong stability. It rarely freezes or slows down, so there's no need to
restart it after a short while.
● It protects the user's privacy.
● Compared to other operating systems, the performance of the Linux system is
significantly higher. It manages a huge number of people working simultaneously and at
the same time effectively.
● It is suitable for networks.
● Linux offers a lot of versatility. You are permitted to install only the necessary
components; you are not required to install a full Linux suite.
● Numerous file types are compatible with Linux.
● Using the internet to install it is quick and simple. It may be set up on any hardware,
including an outdated computer system.
● Even on a machine with limited hard disk space, it completes all jobs correctly.

Disadvantages of Linux

● It is not especially user-friendly, so newcomers may find it complicated.


● In comparison to Windows, it contains less peripheral hardware drivers.

Is There Any Difference between Linux and Ubuntu?

To that, I say YES. Ubuntu is a free, open-source operating system and a Debian-based Linux
distribution, whereas Linux refers to the whole family of open-source operating systems built
around the Linux kernel. This is the main distinction between the two. Or, to put it another way,
Ubuntu is one distribution of Linux, not the operating system family as a whole. Ubuntu was
created by Canonical Ltd. and released in 2004, whereas Linux was created by Linus Torvalds
and released in 1991.

KEY TAKEAWAY



➢ Linux, an open-source operating system developed by Linus Torvalds in the early 1990s,
has become widely popular due to its reliability, security, adaptability, and scalability.
➢ The architecture of Linux comprises the kernel and user space, where the kernel handles
low-level functionalities, acting as a bridge between hardware and software, while user
space provides an environment for user interaction with the system.
➢ Key features of Linux include its open-source nature, stability, security, flexibility,
customizability, and scalability. Linux distributions, such as Ubuntu, Fedora, and Debian,
cater to various user needs.
➢ The Linux architecture involves the kernel, system library, shell, hardware layer, and
system utility.
➢ The advantages of Linux include being open-source, secure, regularly updated,
supported by a community, stable, privacy-focused, and versatile.
➢ However, it may lack user-friendliness and peripheral hardware drivers compared to
other operating systems.
➢ Ubuntu, a Linux distribution, is not a separate operating system but a part of the Linux
family, emphasizing the distinction between Linux and Ubuntu.



LINUX SHELL PROGRAMING

SUB LESSON 10.3



COMPONENT OF LINUX SYSTEM, BASIC FUNCTION OF KERNEL

COMPONENT OF LINUX SYSTEM


Servers, PCs, embedded systems, and other types of devices all use the Linux operating system, which is
strong and adaptable. It is made up of a number of essential parts that combine to offer a
comprehensive and effective computing environment. The main parts of the Linux system, their
functions, and how they contribute to the system's overall functionality are all covered in this
lesson.

1. Kernel: The fundamental building block of the operating system is the Linux kernel. It serves as a
bridge between hardware and software, controlling system resources, offering fundamental
functions, and promoting interaction between various parts. It manages networking, device
drivers, filesystems, memory management, and scheduling of processes.
2. Shell: The shell is an operating system command-line interface that enables users to
communicate with it. It provides access to many system functions and programmes by
interpreting user commands and carrying them out. Linux users frequently use the Bourne Again
SHell (Bash), Zsh (Z shell), and Tcsh (TENEX C shell) shells.
3. Filesystem: Files and folders are organized and managed by the Linux filesystem. It offers a
hierarchical structure for the storage and access of files. Ext4, XFS, Btrfs, and other file system
formats are supported by Linux. The filesystem also includes components like access control,
symbolic links, and file permissions.
4. Process Management: Linux supports running several processes at once. The creation,
scheduling, execution, and termination of processes are all handled by the process management
component. It allocates memory, controls system resources, and enforces process priorities.
Process management is done using system calls such as fork(), exec(), and kill().
5. Device Drivers: Device drivers allow the Linux kernel to connect with hardware components
such input/output devices, graphics cards, storage devices, and network adapters. They offer an
abstraction layer that enables interaction between the kernel and devices via common
interfaces and protocols.
6. Networking: Linux's networking module enables system-to-system communication over
networks. It features network stack functions and protocols like TCP/IP, UDP, and ICMP.



Routing, firewalling, network address translation (NAT), and virtual networking interfaces are
just a few of the networking technologies that Linux enables.
7. System Libraries: System libraries are sets of precompiled code that give programmes common
features and services. They offer high-level programming interfaces, access to hardware
resources, and system call encapsulation. The GNU C Library (glibc), libstdc++, and libpthread
are typical Linux system libraries.
8. Command-line Utilities: Linux comes with a wide range of command-line tools and utilities for
carrying out different operations. These utilities include text processing commands (grep, sed,
and awk), network administration commands (ifconfig, netstat), and commands for
manipulating files (ls, cp, and mv). They offer robust and adaptable ways to communicate with
the system and automate processes.
9. Graphical User Interface (GUI): Linux comes with a number of GUI environments, including
GNOME, KDE, Xfce, and Unity, that offer intuitive ways for users to interact with the system. It is
simpler to complete activities and manage files and apps thanks to the GUI, which enables users
to interact with applications through windows, menus, icons, and graphical components.
10. Package Management: Package management systems are used by Linux distributions to make it
easier to install, update, and remove software. These systems, like apt (used by Debian and
Ubuntu) or yum/dnf (used by Fedora), handle package retrieval, installation, and dependency
resolution procedures, ensuring the system is secure and up-to-date.
11. Security: To safeguard the system and its users, Linux has a number of security mechanisms.
This covers file permissions, access control lists (ACLs), firewalls, encryption, and user
authentication methods (such as passwords and key-based authentication). The open-source
nature of Linux makes it easier to quickly find and fix security flaws.
12. X Window System: The Linux operating system's fundamental architecture for displaying
graphical user interfaces is the X Window System, sometimes known as X11 or X. It enables the
creation of windows, the creation of graphics, the handling of input events, and the
management of display devices. The usage of remote graphical applications is made possible by
X, which also supports a variety of desktop environments and window managers.
13. Init System: The management of the boot process and initiating and halting of system services
are under the purview of the init system. Runlevels and scripts are used by conventional init
systems like SysV init, while more recent ones like systemd have replaced them with more
sophisticated capabilities like parallel service startup, dependency tracking, and logging.



14. Virtualization: Linux is compatible with a number of virtualization technologies, including Xen,
Docker, and Kernel-based Virtual Machine (KVM). These technologies make it possible to
create and manage virtual machines (VMs) and containers, allowing multiple instances of
operating systems or applications to run simultaneously on a single physical computer.
15. System Logs: Linux creates system logs that record significant events, errors, and
warnings. The /var/log directory houses log files that offer information about system performance,
administration, and troubleshooting. /var/log/messages, /var/log/syslog, and /var/log/auth.log
are examples of common log files.
16. Printing Subsystem: The Linux printing subsystem offers assistance in managing printers and
printing documents. It contains the Common UNIX Printing System (CUPS), which manages
print queues, printer detection, and spooling.
17. Package Build Systems: To make the development and distribution of software packages easier,
Linux distributions offer build systems including the GNU build system (Autotools), CMake, and
Meson. These build methods streamline the configuration, compilation, and installation
procedures to guarantee system compatibility.
18. GNU Core Utilities: Linux distributions frequently include a set of command-line utilities called
the GNU Core Utilities (coreutils). These programmes, which include ls, cp, mv, rm, and mkdir,
offer fundamental file and directory manipulation features.
19. GNU Compiler Collection (GCC): C, C++, Fortran, Ada, and other programming languages are
supported by the GCC, a collection of compilers and development tools. It enables the
compilation, optimisation, and debugging of Linux software programmes.
20. Graphical Subsystems: To assist graphics rendering and display management, Linux has graphical
subsystems including Direct Rendering Manager (DRM) and Xorg (X11). These components offer
the foundation for windowing systems and graphical programmes as well as effective interface
with graphics hardware.

These are just a few of the main Linux system components. Due to its modular structure, which enables
flexibility and customization, Linux can be used in a variety of applications and contexts. Because Linux is
an open-source operating system, community contributions and ongoing development are encouraged,
creating a powerful and feature-rich operating system.



BASIC FUNCTION OF KERNEL

1. Process Management: The kernel controls processes, which are active instances of executing
programmes. It assures fair and effective execution by scheduling and allocating system
resources (such as CPU time, memory, and I/O devices) to processes according to their priority. The
kernel is also in charge of starting, stopping, and switching contexts for processes.
2. Memory Management: The kernel is responsible for allocating and releasing memory to
processes as needed. Because of the virtual memory features it offers, each process can have its
own, isolated address space. To effectively employ the physical memory resources available, the
kernel manages memory protection, paging, swapping, and demand paging.
3. Device Management: The kernel coordinates the communication of hardware and software
devices as well as the management of device drivers. By abstracting away the physical
details, it offers a consistent interface for accessing devices. The startup, setup, and control of
devices are handled by the kernel, ensuring correct data transit between devices and processes.
4. Filesystem Management: The filesystem layer offered by the kernel enables the organizing,
creating, modifying, and deleting of files and directories. It manages access control,
permissions, and file metadata. To enhance performance and offer dependable storage, the
kernel controls file I/O operations, caching, and buffering.
5. Networking: The kernel offers networking features like sockets, device drivers, and network
protocols. It takes care of packet forwarding, network stack implementation, routing, and
network setup. The kernel controls network interfaces and handles data transmission to make it
possible for processes to communicate with one another over networks.
6. Interrupt Handling: Timer interrupts, I/O interrupts, and hardware exceptions are among the
interrupts handled by the kernel. It prioritises and handles interrupts, ensuring prompt
responses and appropriate event management. The kernel's interrupt handlers control interrupt-
driven operations and coordinate with other components.
7. Security and Access Control: To safeguard the system and its resources, the kernel imposes
security regulations and implements access control systems. It controls user identification,
permissions, and authorization. To maintain system integrity and guard against unauthorized
access, the kernel manages user and group administration, file permissions, and security
modules.



8. System Calls: The kernel gives user-space programmes an interface to use in order to access its
functionality. User programmes can seek services from the kernel by calling system calls. The
creation of processes, file I/O, memory allocation, and network connectivity are among the
activities they cover.

These are a few of the fundamental tasks that the kernel carries out. The operating system's main
component, the kernel, manages resources, offers services, and ensures the smooth operation of the
system as a whole.

KEY TAKEAWAY

➢ The Linux operating system consists of several essential components that collectively
provide a robust and versatile computing environment.
➢ The kernel acts as the foundation, managing system resources, process control,
memory, and device drivers.
➢ The shell, a command-line interface, facilitates user interaction by interpreting
commands.
➢ Other crucial components include the filesystem for organizing files, process
management for multitasking, device drivers for hardware communication, networking
for system communication, system libraries for common features, command-line
utilities, and graphical user interfaces.
➢ Linux also incorporates package management, security mechanisms, virtualization
technologies, system logs, printing subsystems, and various build systems.
➢ The modular structure of Linux enables flexibility, customization, and application in
diverse contexts.
➢ The kernel, as the core component, plays key roles in process management, memory
allocation, device coordination, filesystem management, networking, interrupt handling,
security, access control, and system calls, ensuring the smooth operation of the entire
system.



LINUX SHELL PROGRAMING



SUB LESSON 10.4

BASIC COMMANDS

BASIC COMMANDS

1. cd (Change Directory) Command

The cd command is used to change the current directory (i.e., the directory in which the user is
currently working).
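For example (the directory names are illustrative):

$ cd /home/user/projects    # switch to an absolute path
$ cd ..                     # move up one directory level
$ cd ~                      # return to your home directory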



2. ls Command

List directory contents.

Syntax :

ls [Options] [file/dir]

Example:
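A typical session (file names and output are illustrative):

$ ls
file1.txt  notes.txt  projects

$ ls -l notes.txt
-rw-r--r-- 1 user user 120 Jan 10 10:00 notes.txt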



3. man Command

It is the interface used to view the system’s reference manuals.

Syntax :

man [ command name ]

Example
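For instance, to read the manual page for the ls command (press q to quit the viewer):

$ man ls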



4. echo Command

Display a line of text/string on standard output or a file.

Syntax :

echo [ option ] [ string ]

Example :
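For example:

$ echo "Hello, World!"
Hello, World!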



5. cal Command

Displays a simple, formatted calendar in your terminal.

Syntax :

cal [ option ] [[[ day ] month ] year ]

Example :
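For example (the month and year shown are illustrative):

$ cal            # calendar for the current month
$ cal 8 2023     # calendar for August 2023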



6. date Command

Print or set the system date and time.

Syntax :

date [ option ]... [ +format ]

Example :
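For example (output format varies with locale; the values shown are illustrative):

$ date
Mon Jan  1 10:30:00 UTC 2024

$ date +%d/%m/%Y
01/01/2024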



7. clear Command

Clear the terminal screen.

If you take a detailed look after running the clear command, you will find that it doesn’t really clear the
terminal. The tool just shifts the text upwards, out of the viewable area.

Syntax :

clear

8. cat Command

It is used to create, display and concatenate file contents.

Syntax :

cat [ option ] [ file ]



Example :

Example:

$ cat > file1.txt

It creates file1.txt and allows us to insert content for this file.

After inserting the content, press Ctrl+D (end-of-file) to finish and return to the prompt.

$ cat file.txt > newfile.txt

Read the contents of file.txt and write them to newfile.txt, overwriting anything newfile.txt previously
contained. If newfile.txt does not exist, it will be created.

$ cat file.txt >> newfile.txt



Read the contents of file.txt and append them to the end of newfile.txt. If newfile.txt does not exist, it will
be created.

$ cat file1.txt file2.txt

It will read the contents of file1.txt and file2.txt and display the result in the terminal.

$ cat file1.txt file2.txt > combinedfile.txt

It will concatenate the contents of file1.txt and file2.txt and write them to a new file,
combinedfile.txt, using the (>) operator.

If the combinedfile.txt file doesn't exist, the command will create it. Otherwise it will overwrite the file.



9. pwd (Print working directory) command

It prints the current working directory name with the complete path starting from root (/).

Syntax :

pwd [- option ]

Example :
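A typical run (the path shown is illustrative):

$ pwd
/home/user/projects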

10. who Command

It displays the users that are currently logged into your Unix computer system.

Syntax :

who [-option ] [filename ]

Example :
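Sample output (the user name, terminal, and time are illustrative):

$ who
user1    tty1         2024-01-01 09:15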



11. whoami Command

This command prints the username associated with the current effective user ID.

Syntax :

whoami [-option ]

Example :
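For example (the username is illustrative):

$ whoami
user1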

12. passwd Command

The passwd command is used to change the password of a user account.

Syntax :

passwd [-options ] [ username ]

Example :
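A typical interaction (the exact prompts vary slightly between distributions):

$ passwd
Changing password for user1.
Current password:
New password:
Retype new password:
passwd: password updated successfully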



13. mkdir Command

This command is used to make directories.

Syntax :

mkdir [-option ] Directory

Example :
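For example (the directory names are illustrative):

$ mkdir projects          # create a single directory
$ mkdir -p work/2024/jan  # -p creates missing parent directories as well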

14. rmdir Command



This command removes empty directories from your filesystem.

Syntax :

rmdir [-option ] Directory

Example :
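For example (the directory name is illustrative):

$ rmdir projects    # succeeds only if the directory is empty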

15. cp (Copy) Command

This Command is used to copy files and directories.

Syntax :
cp [option ] source destination/directory

Example:
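For example (the file and directory names are illustrative):

$ cp notes.txt backup.txt        # copy a file
$ cp -r projects projects_bak    # -r copies a directory recursively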



16. mv (move) Command

mv command is used to move files and directories.

Syntax :
mv [options] source dest

Example :
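For example (the file names are illustrative):

$ mv old.txt new.txt      # rename a file
$ mv notes.txt /tmp/      # move a file into /tmp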

17. rm (remove) Command

The ‘rm’ command is used to delete files and directories.

Syntax:
rm [option] filename

Example:
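For example (the file and directory names are illustrative):

$ rm notes.txt       # delete a file
$ rm -r projects     # delete a directory and everything inside it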

18. cut Command

The cut command extracts selected characters, bytes, or fields (columns) from each line of a file.



Syntax:
cut [option] [file]

Example:
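For example (notes.txt is an illustrative file; /etc/passwd is colon-delimited):

$ cut -c 1-4 notes.txt          # first four characters of each line
$ cut -d ':' -f 1 /etc/passwd   # first field of each line, using ':' as the delimiter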

19. paste Command

The paste command displays the corresponding lines of multiple files side by side.

Syntax:

paste [options] [file]

Example :
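For example (assuming two illustrative files, names.txt and ages.txt):

$ paste names.txt ages.txt      # corresponding lines joined side by side, tab-separated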

20. wc Command

It is used to count the number of newlines, words, bytes, and characters in the files specified by
the file arguments.

Syntax:

wc [options] filenames
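Example (the counts shown are illustrative):

$ wc notes.txt
 3 12 64 notes.txt

Here 3 is the line count, 12 the word count, and 64 the byte count; wc -l prints the line count only.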

KEY TAKEAWAY



➢ In Linux shell programming, understanding basic commands is fundamental for effective
interaction with the operating system.
➢ The "cd" command facilitates changing the current directory, while "ls" is employed to
list directory contents.
➢ The "man" command provides access to system reference manuals, and "echo" displays
text on the standard output.
➢ The "cal" command presents a formatted calendar, and "date" is used to print or set
system date and time.
➢ The "clear" command clears the terminal screen, and "cat" is versatile for creating,
displaying, and concatenating file contents.
➢ The "pwd" command prints the current working directory path, and "who" displays
currently logged-in users.
➢ "whoami" reveals the username associated with the current effective user ID. Managing
user passwords is done using the "passwd" command.
➢ Creating directories is accomplished with "mkdir," and "rmdir" removes empty
directories.
➢ File operations involve "cp" for copying, "mv" for moving, and "rm" for deleting. "cut"
extracts characters or columns, "paste" aligns lines from multiple files, and "wc"
provides counts for lines, words, bytes, and characters in specified files.



LINUX SHELL PROGRAMING

SUB LESSON 10.5



CONTROL STRUCTURE

OPERATORS

1. Arithmetic Operators

The following table depicts the arithmetic operators commonly used in shell
programming, particularly in Unix-like shells like Bash:

Operator Description Example

+ Addition `$((5 + 3))`

- Subtraction `$((7 - 2))`

* Multiplication `$((4 * 6))`

/ Division `$((10 / 2))`

% Modulus (remainder) `$((11 % 3))`

** Exponentiation `$((2 ** 3))` (Bash)

In shell programming, arithmetic operations are often performed within double parentheses
`$((...))` or using the `expr` command. Note that the `**` exponentiation operator is a Bash
extension and is not part of basic POSIX shell arithmetic; in other shells you may need to use
external tools or functions to perform exponentiation.
Keep in mind that shell arithmetic is typically integer-based, so the division operator `/`
performs integer division, and the result is truncated to an integer. If you need floating-point
division or more advanced mathematical operations, you might need to use external tools like
`bc` or perform calculations in higher-level programming languages.



2. Relational Operators for Numbers

Operator Description Example

-eq Equal to `[ "$a" -eq "$b" ]`

-ne Not equal to `[ "$a" -ne "$b" ]`

-gt Greater than `[ "$a" -gt "$b" ]`

-lt Less than `[ "$a" -lt "$b" ]`

-ge Greater than or equal to `[ "$a" -ge "$b" ]`

-le Less than or equal to `[ "$a" -le "$b" ]`

These operators are used to compare numerical values within conditional statements like `if`,
`while`, and other control structures in shell scripts. Make sure to enclose the expressions
involving these operators within square brackets `[ ]` and provide appropriate spacing.

3. Relational operators for string



Operator Description Example

= Equal to `[ "$str1" = "$str2" ]`

!= Not equal to `[ "$str1" != "$str2" ]`

< Less than (based on ASCII) `[ "$str1" \< "$str2" ]`

> Greater than (based on ASCII) `[ "$str1" \> "$str2" ]`

In these operators, make sure to properly quote the strings within square brackets `[ ]` to
ensure that they are treated as single entities and to prevent word splitting and globbing. Also,
note that the `<` and `>` operators might need to be escaped or enclosed in double quotes to
avoid misinterpretation by the shell.
Remember, the comparison using these operators is based on lexicographical (ASCII) order for
strings.
4. Boolean operators

Operator Description Example

! Logical NOT `! condition`

-a or && Logical AND `condition1 && condition2`

-o or || Logical OR `condition1 || condition2`

These operators are often used within conditional statements like `if`, `while`, and other control
structures to combine multiple conditions and create more complex logical expressions. The `-
a` and `-o` operators can also be replaced by the `&&` and `||` operators, respectively.
Remember to properly enclose conditions in parentheses or square brackets as required and to
use correct spacing for these operators.



5. Boolean operators for string

Operator Description Example

! Logical NOT `! [ -z "$str" ]`

-n Check if string is not empty `[ -n "$str" ]`

-z Check if string is empty `[ -z "$str" ]`

= Equal to `[ "$str1" = "$str2" ]`

!= Not equal to `[ "$str1" != "$str2" ]`

These operators are used to evaluate conditions involving string values within shell scripting.
The `-n` and `-z` operators are specifically used to check if a string is not empty and if it's empty,
respectively. The `=` and `!=` operators are used for string equality and inequality comparisons.
Remember to properly quote strings within square brackets `[ ]` to ensure accurate evaluation
and to prevent issues with word splitting and globbing.

“EXPR” COMMAND

The `expr` command in shell programming is used for evaluating expressions, performing
arithmetic operations, and performing string comparisons. It's a command-line utility that helps



you manipulate and evaluate expressions within shell scripts. Here's an overview of how to use
the `expr` command:

**Arithmetic Operations:**
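A couple of representative invocations (note that `*` must be escaped so the shell does not expand it as a wildcard):

$ expr 5 + 3
8
$ expr 4 \* 6
24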

**String Length:**
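For instance, using the `length` keyword supported by GNU `expr`:

$ expr length "hello"
5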

**String Comparison:**
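A sketch of a comparison; recall that `expr` prints 1 when the comparison is true and 0 when it is false:

$ expr "apple" = "apple"
1
$ expr "apple" = "banana"
0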



- The `expr` command usually requires spaces around operators and operands.
- The comparison operators within `expr` return 1 for true and 0 for false.
- Be cautious with using `expr` for arithmetic operations involving variables. Modern shells like
Bash have built-in arithmetic capabilities that are safer and more versatile.

Keep in mind that while the `expr` command can be useful for basic operations, for more
complex expressions and calculations, you might consider using shell arithmetic (`$((...))`) or
even external tools like `bc` for precision arithmetic or languages like Python for more
advanced operations.

IF STATEMENTS

In shell programming, `if` statements are used to make decisions based on the evaluation of
conditions. They allow you to execute specific code blocks depending on whether a certain
condition is true or false. Here's the basic syntax of an `if` statement in shell scripting:
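A general sketch of the form (the `else` branch is optional):

if [ condition ]
then
    # commands executed when the condition is true
else
    # commands executed when the condition is false
fi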



Here's a more detailed breakdown:

- The `if` keyword starts the conditional statement.

- The condition is enclosed in square brackets `[ ]`. You can use various operators for
comparison, and it's a good practice to enclose variables in double quotes within the condition
to handle cases with spaces and special characters.

- The `then` keyword indicates the start of the code block to execute if the condition is true.

- The code within the `if` block is executed if the condition is true.

- The `else` keyword indicates the start of the code block to execute if the condition is false.

- The code within the `else` block is executed if the condition is false.

- The `fi` keyword marks the end of the `if` statement.

Here's a simple example:
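A minimal sketch (the variable name and messages are illustrative):

#!/bin/bash

name="Linux"

if [ "$name" = "Linux" ]
then
    echo "Welcome to $name"
else
    echo "Unknown system"
fi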

Remember to properly format your script with correct spacing and indentation for clarity. You
can also use nested `if` statements and combine them with other control structures like loops
for more complex logic in your shell scripts.

IF ELSE STATEMENTS
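A sketch of the script discussed below (the starting value of `x` is illustrative):

#!/bin/bash

x=10

if [ "$x" -eq 10 ]
then
    echo "x is equal to 10"
else
    echo "x is not equal to 10"
fi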



In this example, the script checks whether the value of the variable `x` is equal to 10. If the
condition is true, it prints "x is equal to 10". Otherwise, it prints "x is not equal to 10".

You can also add more conditions and nesting to handle different cases:
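For example (again, the value assigned to `x` is illustrative):

#!/bin/bash

x=5

if [ "$x" -eq 10 ]
then
    echo "x is equal to 10"
elif [ "$x" -gt 10 ]
then
    echo "x is greater than 10"
else
    echo "x is less than 10"
fi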

In this extended example, the script first checks if `x` is equal to 10. If not, it checks if `x` is
greater than 10. If neither condition is true, it defaults to the `else` block and indicates that `x`
is less than 10.

NESTED IF ELSE STATEMENTS



Nested `if`-`else` statements are used when you want to have conditional branches
within other conditional branches. This allows you to handle more complex scenarios where
different conditions need to be evaluated in a hierarchical manner. Here's an example with
explanations:
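Below is a sketch reconstructed to match the step-by-step explanation that follows (the values of `x` and `y` are illustrative):

#!/bin/bash

x=10
y=20

if [ "$x" -eq 10 ]
then
    if [ "$y" -eq 20 ]
    then
        echo "Both x and y are 10 and 20"
    else
        echo "x is 10 but y is not 20"
    fi
else
    if [ "$y" -eq 20 ]
    then
        echo "x is not 10 but y is 20"
    else
        echo "Neither x is 10 nor y is 20"
    fi
fi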

In this script:

1. The outer `if` statement checks if `x` is equal to 10.

- If true, it enters the nested `if` statement to check if `y` is equal to 20.

- If both conditions are true, it prints "Both x and y are 10 and 20".

- If `y` is not 20, it prints "x is 10 but y is not 20".

- If `x` is not 10, it enters the nested `else` block.

- If `y` is 20, it prints "x is not 10 but y is 20".



- If `y` is not 20, it prints "Neither x is 10 nor y is 20".

This demonstrates how you can use nested `if`-`else` statements to handle more intricate
conditions step by step. However, it's important to keep your code organized and maintain
readability, especially when dealing with deep levels of nesting.

IF-ELSE-IF LADDER

An if-else-if ladder (also known as a cascading if statement) is a sequence of conditional
branches evaluated one after another: the first `if` is followed by one or more `elif` branches
and, optionally, a final `else`. The ladder helps you choose one specific branch of code based on
the evaluation of multiple conditions. Here's an example with explanations:
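Below is a sketch matching the explanation that follows (the value of `grade` is illustrative):

#!/bin/bash

grade=75

if [ "$grade" -ge 90 ]
then
    echo "Grade: A"
elif [ "$grade" -ge 80 ]
then
    echo "Grade: B"
elif [ "$grade" -ge 70 ]
then
    echo "Grade: C"
elif [ "$grade" -ge 60 ]
then
    echo "Grade: D"
else
    echo "Grade: F"
fi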

In this script:

1. The first `if` statement checks if the `grade` is greater than or equal to 90.

- If true, it prints "Grade: A".



- If false, it moves to the next `elif` statement.

2. The second `elif` statement checks if the `grade` is greater than or equal to 80.

- If true, it prints "Grade: B".

- If false, it moves to the next `elif` statement.

3. The third `elif` statement checks if the `grade` is greater than or equal to 70.

- If true, it prints "Grade: C".

- If false, it moves to the next `elif` statement.

4. The fourth `elif` statement checks if the `grade` is greater than or equal to 60.

- If true, it prints "Grade: D".

- If false, it moves to the `else` block.

5. The `else` block is reached if none of the conditions in the `if` and `elif` statements are true. It
prints "Grade: F".

An if-else-if ladder allows you to handle various cases in a structured manner. However, keep in
mind that the conditions are evaluated sequentially, so once a condition is satisfied, the
subsequent conditions are not checked.

CASE STRUCTURE

The `case` statement in shell programming is used to perform multiway branching
based on the value of a variable. It's an alternative to using a long sequence of `if`-`elif`
statements when you want to match a variable's value against multiple possible values. Here's
an example with explanations:
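Below is a sketch matching the explanation and the sample outputs that follow:

#!/bin/bash

fruit="apple"

case $fruit in
    apple)
        echo "It's an apple."
        ;;
    banana)
        echo "It's a banana."
        ;;
    *)
        echo "It's something else."
        ;;
esac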



In this script:

1. The `case` statement starts with the value of the variable `$fruit`.

2. Each possible value is followed by `)` and a block of code delimited by `;;`. This block is
executed if the variable's value matches the given value.

3. `*)` is a pattern that matches any value that didn't match the previous cases.

4. It therefore acts as the default case: if none of the other patterns match, this block of code is executed.

Here's the output for the above script with `fruit="apple"`:

It's an apple.

If you change `fruit` to `"banana"`:

It's a banana.

If you change `fruit` to `"grape"`:



It's something else.

The `case` statement makes code more readable when you have multiple possible values to
match against. It's especially useful when you need to compare a variable to different values
and execute different actions based on the match.

REPETITION CONSTRUCTS

Repetition constructs, also known as loops, are essential in shell programming to execute a
certain block of code repeatedly. Shell scripts use loops to iterate through lists of items,
perform actions a specified number of times, or until a certain condition is met.

1. While Loop

A `while` loop is a fundamental construct in shell programming that allows you to
repeatedly execute a block of code as long as a certain condition remains true. Here's how a
`while` loop works, along with an example for explanation:
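Below is a sketch matching the step-by-step explanation that follows:

count=1

while [ $count -le 5 ]
do
    echo "Count: $count"
    count=$((count + 1))
done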



In this example:

1. `count=1` initializes the variable `count` to 1.

2. The `while` keyword indicates the start of the loop.

3. `[ $count -le 5 ]` is the condition being evaluated. It checks whether the value of `count` is
less than or equal to 5.

4. The `do` keyword indicates the beginning of the loop's code block.

5. `echo "Count: $count"` prints the current value of `count`.

6. `count=$((count + 1))` increments the value of `count` by 1 in each iteration.

7. The `done` keyword marks the end of the loop.

The loop continues to run as long as the condition `[ $count -le 5 ]` remains true. In each
iteration, it prints the value of `count` and then increments it. The loop stops when `count`
reaches 6, as 6 is not less than or equal to 5.

The output of the above script will be:
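Count: 1
Count: 2
Count: 3
Count: 4
Count: 5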

Once the condition becomes false, the loop terminates, and the script execution continues with
the next command after the loop. This demonstrates the basic structure and operation of a
`while` loop in shell programming.

2. Until Loop

An `until` loop is similar to a `while` loop in shell programming, but it repeats a block
of code as long as a condition remains false. This is in contrast to a `while` loop, which repeats
as long as a condition remains true. Here's how an `until` loop works, along with an example for
explanation:
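Below is a sketch matching the step-by-step explanation that follows:

count=1

until [ $count -gt 5 ]
do
    echo "Count: $count"
    count=$((count + 1))
done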



In this example:

1. `count=1` initializes the variable `count` to 1.

2. The `until` keyword indicates the start of the loop.

3. `[ $count -gt 5 ]` is the condition being evaluated. It checks whether the value of `count` is
greater than 5.

4. The `do` keyword indicates the beginning of the loop's code block.

5. `echo "Count: $count"` prints the current value of `count`.

6. `count=$((count + 1))` increments the value of `count` by 1 in each iteration.

7. The `done` keyword marks the end of the loop.

The loop continues to run as long as the condition `[ $count -gt 5 ]` remains false. In each
iteration, it prints the value of `count` and then increments it. The loop stops when `count`
becomes greater than 5.

The output of the above script will be:
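Count: 1
Count: 2
Count: 3
Count: 4
Count: 5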



Once the condition becomes true, indicating that `count` is greater than 5, the loop terminates,
and the script execution continues with the next command after the loop. This demonstrates
the basic structure and operation of an `until` loop in shell programming.

3. For … in Loop

The `for` loop in shell programming is used to iterate over a list of items, performing
a set of actions for each item in the list. Here's how a `for` loop works, along with an example
for explanation:
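Below is a sketch matching the step-by-step explanation that follows:

fruits=("apple" "banana" "orange" "grape")

for fruit in "${fruits[@]}"
do
    echo "Fruit: $fruit"
done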

In this example:

1. `fruits=("apple" "banana" "orange" "grape")` initializes an array called `fruits` containing four
different fruit names.

2. The `for` keyword indicates the start of the loop.

3. `fruit` is a variable that will hold the current item from the list in each iteration.

4. `in "${fruits[@]}"` specifies the list of items to iterate over. The `"${fruits[@]}"` syntax
expands the array elements as separate items.

5. The `do` keyword indicates the beginning of the loop's code block.

6. `echo "Fruit: $fruit"` prints the current value of the `fruit` variable.

7. The `done` keyword marks the end of the loop.

The loop iterates over each element in the `fruits` array and prints the name of each fruit.

The output of the above script will be:
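Fruit: apple
Fruit: banana
Fruit: orange
Fruit: grape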



The loop continues until all items in the list have been processed. Once the loop finishes, the
script execution continues with the next command after the loop. This demonstrates the basic
structure and operation of a `for` loop in shell programming.

4. Select Loop

The `select` loop is a special construct in shell programming, specifically used to create simple
menus in interactive shell scripts. It prompts the user to choose an option from a list, allowing
them to make a selection by entering a number corresponding to the options. Here's how a
`select` loop works, along with an example for explanation:
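Below is a sketch matching the explanation that follows (the option labels and responses are illustrative):

#!/bin/bash

options=("Option 1" "Option 2" "Option 3" "Quit")
PS3="Select an option: "

select choice in "${options[@]}"
do
    case $REPLY in
        1) echo "You chose Option 1" ;;
        2) echo "You chose Option 2" ;;
        3) echo "You chose Option 3" ;;
        4) echo "Quitting..."
           break ;;
        *) echo "Invalid option" ;;
    esac
done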



In this example:

1. `options=("Option 1" "Option 2" "Option 3" "Quit")` initializes an array called `options` with
four menu options.

2. `PS3="Select an option: "` sets the prompt string that will be displayed before each menu
iteration.

3. The `select` keyword starts the loop.



4. `choice` is the variable that will hold the selected option in each iteration.

5. `in "${options[@]}"` specifies the list of options to present to the user.

6. The `do` keyword indicates the beginning of the loop's code block.

7. The `case` statement matches the value of `$REPLY`, which corresponds to the user's input.

8. `1)`, `2)`, and `3)` represent the options numbered 1, 2, and 3 respectively.

9. `4)` represents the "Quit" option.

10. Inside each case, the script provides the appropriate response.

11. `break` is used to exit the loop when the "Quit" option is selected.

12. `*)` is the default case for an invalid input.

When the user selects an option by entering a number, the corresponding code block is
executed. For instance, if the user selects option 2, the output will be:
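You chose Option 2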

The `select` loop continues to prompt the user until they select the "Quit" option. It's a great
way to create interactive menus in shell scripts for tasks that require user input.

