Students Copy Operating System I CTE 243
Uploaded by Otobong Okpon

CTE 243

OPERATING SYSTEMS I
General Objectives:
On completion of this course the student should be able to:
1. Know the concepts of an operating system
2. Know the classification and different types of Operating System
3. Know the functions, characteristics, and components of Operating System
4. Know services, properties, and structure of an Operating System
5. Understand the general concept of system programming
6. Understand the use of utilities and libraries
7. Understand Input / Output devices handlers

THE OPERATING SYSTEM (OS)


Definition:
An operating system is software that acts as an interface between the user
and the hardware. It is responsible for handling various critical functions of the
computer or any other machine. Tasks handled by the OS include file
management, task management, memory management, process management, disk
management, I/O management, peripheral management, etc.
Operating system (OS) is a program that manages a computer’s resources, especially
the allocation of those resources among other programs. Typical resources include
the central processing unit (CPU), computer memory, file storage, input/output (I/O)
devices, and network connections.
An Operating System is software that communicates with the hardware and allows
other programs to run. It comprises system software: the fundamental files the
computer needs to boot up and function.
Common desktop operating systems include Windows, macOS, and Linux. Mobile
devices, such as tablets and smartphones, also include operating systems that
provide a GUI and can run applications. Common mobile OSes include Android, iOS,
and Windows Phone.

HISTORY OF OPERATING SYSTEM


Generation of Operating System
Below are four generations of operating systems.

1. The First Generation (1940s to early 1950s): The first electronic


computers of the 1940s shipped without any operating system. Early computer users
had complete control over the machine and wrote programs in pure machine
language for every task.
2. The Second Generation (1955 – 1965)
The first operating system, GM-NAA I/O (sometimes cited simply as the GM OS), was
developed in the mid-1950s by General Motors for IBM's 704 computer. Its mode of
operation was to gather all related jobs into groups or batches and then submit the
batched jobs to the operating system; punched cards were used to submit all jobs to
the machine.
3. The Third Generation (1965 – 1980)
In the late 1960s, operating system designers were able to create systems capable
of multiprogramming: keeping several programs in memory at once so that the CPU
always has useful work to do. These operating systems were used on mainframes
and minicomputers.
4. The Fourth Generation (1980 – Present Day)
The fourth generation is the era of the personal computer, which grew out of
minicomputers such as the PDP line; its operating systems are designed for
individual users.

EVOLUTION OF OPERATING SYSTEMS


The following are the changes made to operating systems by the years:
1. No OS – (before the 1940s)
Before the 1940s there was no OS at all. Lacking an operating system, users had to
manually type the instructions for each task in machine language (a 0/1-based
language).
2. Batch Processing Systems – (1940s to 1950s)
With time, batch processing systems came onto the market. Users could now write
their programs on punched cards and hand them to the computer operator for
loading.

3. Multiprogramming Systems – (1950s to 1960s)


Multiprogramming systems were the operating systems with which the real
revolution began. They gave users the facility to load multiple programs into memory
and gave a specific portion of memory to each program.

4. Time-Sharing Systems -(1960s to 1970s)


Time-sharing systems are an extended version of multiprogramming systems. One
extra feature was added: to avoid any single program using the CPU for a long time,
every program is given access to the CPU after a certain interval of time.
5. Introduction of GUI – (1970s to 1980s)
With time, Graphical User Interfaces (GUIs) arrived. For the first time, the OS
became more user-friendly and changed the way people interact with
computers.
6. Networked Systems – (1980s to 1990s)
In the 1980s, the popularity of computer networks was at its peak, and a special
type of operating system was needed to manage network communication.
7. Mobile Operating Systems – (Late 1990s to Early 2000s)
The invention of smartphones caused a big revolution in the software industry; to
handle the operation of smartphones, a special type of operating system was
developed.
8. AI Integration – (2010s to ongoing)
With time, artificial intelligence came into the picture. Operating systems now
integrate AI features such as Siri, Google Assistant, and Alexa and have become
more powerful and efficient in many ways.

MERITS AND DEMERITS OF OPERATING SYSTEM


Advantages of Operating System:
1. The OS provides the interface between the user and the computer.
2. It controls all functions of the PC.
3. The OS uses various memory management techniques to manage the system's
memory.
4. The OS gives information about the devices attached to the PC.
Disadvantages of Operating System:
1. Expertise is needed to manage it.
2. It is slower in operation than a hard-wired system.

Resource Management in Operating System


Resource Management in an Operating System is the process of efficiently managing
all the resources, such as the CPU, memory, input/output devices, and other
hardware, among the various programs and processes running on the computer.
Features or characteristics of resource management in an operating system:
 Resource scheduling
 Resource Monitoring
 Resource Protection
 Resource Sharing
 Deadlock prevention
 Resource accounting
 Performance optimization

CLASSIFICATION AND DIFFERENT TYPES OF OPERATING
SYSTEMS
OPEN AND CLOSED SOURCE OPERATING SYSTEMS
Open-Source Operating System
An open-source operating system allows the public to make changes to the source
code of the software. This permission or access is granted by the author of the
operating system, and such software is usually available free online. Examples of
open-source operating systems include Linux distributions such as Ubuntu.
Closed-Source Operating System
A closed-source operating system does not authorize public users to make changes
to the source code of the software; the production company makes the modifications
itself. Examples of closed-source operating systems are Microsoft Windows, macOS,
and iOS.

Advantages and Disadvantages of Operating Systems


1. Windows: Generally referred to as Microsoft Windows, these OS are
manufactured and developed by the tech-giant Microsoft and are the most
commonly used OS for personal computers and to some extent in mobile
phones or the Windows phone.
Advantages of Windows
 Hardware compatibility.
 Pre-loaded and available Software
 Ease of Use
 Game Runner
Disadvantages of Windows
 Expensive
 Poor Security
 Not reliable

2. UNIX: Developed around 1970 at the Bell Labs research centre by Ken
Thompson, Dennis Ritchie, and a few others, UNIX became a multitasking and
multiuser operating system, reaching numerous platforms. AT&T later licensed
UNIX, which led to the development of many variants of Unix.
Advantages of UNIX
 The OS is available on a wide variety of machines, making it arguably the
most portable operating system.
 It has a very efficient virtual memory system, which allows many programs to
run simultaneously with a modest amount of physical memory.
 The OS was primarily built for complete multitasking with protected memory,
so one program cannot corrupt another program's data.
 It has a strong authentication system and a well-secured environment.
Disadvantages of UNIX

 This OS was primarily designed for programmers and techies, not for
personal and casual use.
 It is a command-driven OS, with commands supplied through the shell that
often have cryptic names which normal users find difficult to keep up with.
 To work comfortably with a UNIX system, one needs to understand its main
design features and how to command and interact with the OS.

3. Linux: Primarily derived from the concepts of Unix, Linux became the most
prominent free and open-source OS available to everyone in the world. It is
built around the Linux kernel and serves both desktop and server use. Popular
Linux distributions include Ubuntu, Fedora, openSUSE, Red Hat, and many more.
Advantages of LINUX
 The OS is open-source and available free of cost to every computer user.
There are large repositories from which anyone can freely download high-
quality software for almost any task.
 Linux provides high performance for a longer time and does not require a
periodic reboot to maintain the system.
 It is one of the most secure OS families and is far less susceptible to
malware and viruses than most alternatives.
 It is designed to multitask and can perform multiple processes at the same
time, without hampering the performance of the OS.
 The OS is highly compatible and flexible to run on all modern PCs and
networks.
Disadvantages of LINUX
 It is not as user-friendly as Windows, and users may need to struggle for a few
days before adapting to the behaviour of the OS.
 It has historically been weaker for gaming, since many high-end graphics
games are not released for it.
 Since there is no standard edition of Linux, it comes in many versions,
which can confuse users about which one to adopt.

4. Solaris: This OS was originally developed by Sun Microsystems and is a type
of Unix OS.
Advantages of Solaris
 It provides good and high performance.
 It provides strong protection against viruses and malware.
 It is a multitasking OS and allows multiple tasks at the same time.
 Known for its good and powerful backup tools.
Disadvantages of Solaris
 Although the OS provides a graphic interface, it is not as good as other
graphical user interfaces.
 The OS is available free of cost but the updates are not available for free, so
not completely open-source.
 The OS is not user-friendly.

5. BOSS: It stands for Bharat Operating System Solutions, designed specifically
by India for Indians. It was developed by C-DAC (Centre for Development of
Advanced Computing), Chennai, to promote Free/Open Source Software in
India. It has an enhanced desktop environment integrated with multiple
Indian language support and other software.
Advantages of BOSS
 It is easily available and free to install and use.
 It is a very stable OS and provides free access to many software.
 It supports multiple Indian languages, so user-friendly at least for Indian
society.
Disadvantages of BOSS
 Being a Linux-based OS, it does not run Windows programs and shares the
same disadvantages as other Linux-based OSs.

TYPES OF OPERATING SYSTEM (OS)


1. Batch Operating System
Some computer processes are very lengthy and time-consuming. To speed them up,
jobs with similar needs are batched together and run as a group.
The user of a batch operating system never directly interacts with the computer. In
this type of OS, every user prepares his or her job on an offline device like a punched
card and submits it to the computer operator.
Examples of batch processing are payroll systems, bank statement generation, etc.
Advantages of Batch Operating System
 Unlike interactive systems, where it is very difficult to guess or know the
time required for any job to complete, the processors of batch systems know
how long a job will take while it is in the queue.
 Multiple users can share the batch systems.
 The idle time for a batch system is very low.
 It is easy to manage large, repetitive work in batch systems.

Disadvantages of Batch Operating System:


 The computer operators must be well acquainted with batch systems.
 Batch systems are hard to debug.
 They are sometimes costly.
 The other jobs will have to wait for an unknown time if any job fails.
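The first-come, first-served behaviour described above can be sketched in a few lines of Python. This is an illustrative simulation, not real OS code; the job names and durations below are made up.

```python
from collections import deque

def run_batch(jobs):
    """Run queued (name, duration) jobs in submission order (FCFS).

    Returns a list of (name, completion_time) pairs.
    """
    queue = deque(jobs)                   # jobs wait in the input queue
    clock = 0
    completed = []
    while queue:
        name, duration = queue.popleft()  # the operator loads the next job
        clock += duration                 # the job runs to completion, unpreempted
        completed.append((name, clock))
    return completed

# Two batched jobs: a payroll run, then bank-statement generation.
schedule = run_batch([("payroll", 5), ("statements", 3)])
print(schedule)  # [('payroll', 5), ('statements', 8)]
```

Note how a failure-free run is assumed: if a job entered an infinite loop, every job behind it would wait indefinitely, which is exactly the drawback listed above.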

2. Multi-Tasking/Time-sharing Operating systems


A time-sharing operating system enables people located at different terminals
(shells) to use a single computer system at the same time. Sharing processor (CPU)
time among multiple users is termed time sharing.

Advantages of Time-Sharing OS:
 Each task gets an equal opportunity.
 Fewer chances of duplication of
software.
 CPU idle time can be reduced

Disadvantages of Time-Sharing OS:


 Reliability problems.
 One must take care of the security and integrity of user programs and
data.
 Data communication problems.

Examples of Time-Sharing OSs are: Multics, Unix, etc.

3. Real-Time Operating System


In a real-time operating system, the time interval to process and respond to inputs
is very small. Examples of real-time OSs include military software systems and
space software systems.
There are two types of real-time operating system, which are as
follows:
Hard Real-Time Systems: These OSs are meant for
applications where time constraints are very strict and even
the shortest possible delay is not acceptable. These systems
are built for life-saving tasks, like automatic parachutes or
airbags, which must be readily available in case of an
accident. Virtual memory is rarely found in these systems.
Soft Real-Time Systems: These OSs are for applications
where the time constraints are less strict.
Advantages of Real-Time Operating System
 Maximum Consumption
 Task Shifting
 Focus on Application
 Real-time operating system in the embedded system
 Error Free
 Memory Allocation
Disadvantages of RTOS:
 Limited Tasks
 Use heavy system resources
 Complex Algorithms
 Device driver and interrupt signals
 Thread Priority.

Examples of Real-Time Operating Systems are: Scientific experiments, medical
imaging systems, industrial control systems, weapon systems, robots, air traffic
control systems, etc.

4. Network Operating System


A network operating system runs on a server. It provides the capability to manage
data, users, groups, security, applications, and other networking functions.
These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network. One more
important aspect of network operating systems is that all the users are well aware of
the underlying configuration, of all other users within the network, their individual
connections, etc., and that is why these computers are popularly known as tightly
coupled systems.
Advantages of Network Operating System:
1. Highly stable centralized servers.
2. Security concerns are handled through
servers.
3. New technologies and hardware up-
gradation are easily integrated into the
system.
4. Server access is possible remotely from
different locations and types of systems
Disadvantages of Network Operating
System:
1. Servers are costly.
2. User has to depend on a central location for most operations.
3. Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD,
etc.

5. Mobile Operating System: Mobile operating systems are those OSs that are


designed to power smartphones, tablets, and wearable devices. The most famous
mobile operating systems are Android and iOS, but others include BlackBerry OS,
webOS, and watchOS.

Distributed Operating System


Distributed systems use many processors located in different machines to provide
very fast computation to their users.
These types of operating systems are a recent advancement in the world of
computer technology and are being widely adopted all over the world at a great
pace. Various autonomous interconnected computers communicate with each other
using a shared communication network. The independent systems possess their own
memory unit and CPU, and are referred to as loosely coupled systems or distributed
systems. These systems' processors differ in size and function. The major benefit of
working with this type of operating system is that a user can always access files or
software that are not actually present on his own system but on some other system
connected within the network, i.e., remote access is enabled within the devices
connected to that network.
Advantages of Distributed Operating
System:
 Failure of one will not affect the other
network communication, as all systems
are independent from each other.
 Electronic mail increases the data
exchange speed.
 Since resources are being shared,
computation is highly fast and durable.
 Load on host computer reduces.
 These systems are easily scalable as
many systems can be easily added to
the network.
 Delay in data processing reduces.
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication.
 The languages used to establish distributed systems are not yet well
defined.
 These types of systems are not readily available, as they are very expensive.
Moreover, the underlying software is highly complex and not yet well
understood.
Examples of Distributed Operating System are- LOCUS, etc.

COMPUTING ENVIRONMENTS
Types of Computing Environments
The different types of Computing Environments are −
Personal Computing Environment: In the personal computing environment,
there is a single computer system. All the system processes are available on the
computer and executed there. The different devices that constitute a personal
computing environment are laptops, mobiles, printers, computer systems, scanners
etc.
Time-Sharing Computing Environment: The time-sharing computing
environment allows multiple users to share the system simultaneously. Each user is
provided a time slice and the processor switches rapidly among the users according
to it.

Client-Server Computing Environment: In client server computing, the client
requests a resource and the server provides that resource. A server may serve
multiple clients at the same time while a client is in contact with only one server.
Both the client and server usually communicate via a computer network but
sometimes they may reside in the same system.
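The request/response pattern just described can be sketched with a loopback socket in Python. The "resource" here is simply an uppercased string; this echo-style service is a made-up example, not part of any named system.

```python
import socket
import threading

def serve_one(server_sock):
    """Accept a single client, read its request, and send back the resource."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)        # the client's request
        conn.sendall(data.upper())    # the server provides the resource

# Bind to port 0 so the OS picks any free port on the loopback interface.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

# The client contacts exactly one server and waits for the response.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello server")
reply = client.recv(1024)
client.close()
print(reply)  # b'HELLO SERVER'
```

Here client and server happen to reside in the same system, as the paragraph above allows; only the connection address would change over a real network.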
Distributed Computing Environment: A distributed computing environment
contains multiple nodes that are physically separate but linked together using the
network. All the nodes in this system communicate with each other and handle
processes in tandem. Each of these nodes contains a small part of the distributed
operating system software.
Cloud Computing Environment: In a cloud computing environment, computing is
moved away from individual computer systems to a cloud of computers. The cloud
users only see the service being provided, not the internal details of how the service
is provided.
Cluster Computing Environment: The clustered computing environment is
similar to the parallel computing environment, as both have multiple CPUs.
However, a major difference is that clustered systems are created from two or more
individual computer systems merged together, which then work in parallel with each
other.

CHAPTER THREE
FUNCTIONS, CHARACTERISTICS, AND COMPONENTS OF OPERATING
SYSTEMS
An operating system manages resources, and these resources are often shared in
one way or another among the various programs that want to use them: multiple
programs executing concurrently share the use of main memory; they take turns
using the CPU; and they compete for an opportunity to use input and output devices.

1. Memory Management: An Operating System performs the following functions


on Memory Management:
 It helps you to keep track of primary memory.
 It determines which parts of memory are in use and by whom, and which parts
are free.
 In a multiprogramming system, the Operating System decides which process
will get memory and how much memory it gets.
 It allocates memory when a process requests it.
 It also de-allocates memory when a process no longer requires it or has
terminated.

2. Process Management: The process management component is a procedure for


managing the many processes that are running simultaneously on the operating
system. Every software application program has one or more processes associated
with it when running. Process management involves tasks like the creation,
scheduling, and termination of processes, and even deadlock handling.
The Operating System must allocate resources that enable processes to share and
exchange information. It handles operations by performing tasks such as process
scheduling and resource allocation. It also manages the memory allocated to
processes and shuts them down when needed.
Functions of process management by the Operating System
The following are functions of process management:
 Process creation and deletion.
 Process suspension and resumption.
 Process synchronization.
 Inter-process communication.
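The first two pairs of operations can be modeled as transitions on a small process table. The state names and the `ProcessTable` class below are illustrative inventions, a sketch of the bookkeeping rather than a real kernel structure.

```python
class ProcessTable:
    """A toy table mapping pid -> process record."""

    def __init__(self):
        self.table = {}
        self.next_pid = 1

    def create(self, name):
        """Process creation: assign a pid and a fresh table entry."""
        pid = self.next_pid
        self.next_pid += 1
        self.table[pid] = {"name": name, "state": "ready"}
        return pid

    def suspend(self, pid):
        self.table[pid]["state"] = "suspended"

    def resume(self, pid):
        self.table[pid]["state"] = "ready"

    def terminate(self, pid):
        del self.table[pid]       # deletion frees the table entry

pt = ProcessTable()
pid = pt.create("editor")
pt.suspend(pid)                   # e.g. swapped out while waiting
pt.resume(pid)                    # back in the ready state
print(pt.table[pid]["state"])     # ready
pt.terminate(pid)
print(pid in pt.table)            # False
```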

3. Process Scheduling: Process Scheduling is an Operating System task that


schedules processes of different states like ready, waiting, and running. In process
scheduling, the Operating System allocates a time interval of CPU execution for each
process. Another important reason for using a process scheduling system is that it
keeps the CPU busy all the time. This allows the user to get the minimum response
time for programs.
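Time-slice scheduling of this kind can be simulated with a queue of ready processes: each one gets a fixed quantum of CPU, and any process with work remaining re-joins the back of the queue. The burst times and quantum below are invented for illustration; this is the classic round-robin policy, one of several a real scheduler might use.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: remaining_time}. Returns the order of CPU turns."""
    ready = deque(bursts.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        timeline.append(name)                    # this process gets the CPU
        if remaining > quantum:                  # quantum expires: requeue it
            ready.append((name, remaining - quantum))
        # otherwise the process finishes within its slice
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# -> ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

Because every waiting process gets the CPU again within one full cycle of the queue, no single program can monopolize the processor, which is how the scheduler keeps response times low.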
4. Interrupt handling: An interrupt is a signal to the processor about a
process/event that needs immediate attention from the software. An interrupt alerts
the processor and requests that the CPU interrupt the currently executing
program/code when permitted, so that the event can be processed in good time. If
the request is accepted, the processor responds by suspending its current activities
(saving its state) and executing a function called an interrupt handler to deal with
the event.
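On a POSIX system, Python's `signal` module offers a small-scale analogue of this mechanism: we register a handler, deliver a signal to our own process (standing in for the interrupt), and observe that the handler runs and normal execution then resumes. `SIGUSR1` is not available on Windows, so this sketch assumes a Unix-like system.

```python
import signal

events = []

def handler(signum, frame):
    """The 'interrupt handler': deal with the event, then return."""
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register the handler
signal.raise_signal(signal.SIGUSR1)      # deliver the 'interrupt' to ourselves
# Execution resumes here once the handler has dealt with the event.
print(events == [signal.SIGUSR1])        # True
```

Hardware interrupts work at a lower level (the CPU saves its state and jumps via an interrupt vector), but the suspend/handle/resume pattern is the same.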

5. Information Management: Information management is otherwise called File


Management. A file is a named collection of related information that is recorded on
secondary storage such as magnetic disks, magnetic tapes and optical disks. In
general, a file is a sequence of bits, bytes, lines or records whose meaning is defined
by the file’s creator and user.
The operating system has the following important activities in connection with
file management:
 File and directory creation and deletion.
 Support of primitives for manipulating files and directories.
 Mapping files onto secondary storage.
 Backing up files on stable storage media.
 Keeping track of information, its location, usage, status, etc. (the module
called the file system provides these facilities).
 Deciding who gets hold of information, enforcing protection mechanisms, and
providing information access mechanisms.
 Allocating information to a requesting process, e.g., opening a file.
 De-allocating the resource, e.g., closing a file.
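Most of these activities have direct counterparts in Python's `pathlib`, which is one easy way to see them in action. The directory and file names below are arbitrary.

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())      # scratch area on secondary storage
reports = root / "reports"

reports.mkdir()                      # directory creation
notes = reports / "notes.txt"
notes.write_text("CTE 243")          # file creation and writing
text = notes.read_text()             # information access
print(text)                          # CTE 243

notes.unlink()                       # file deletion
reports.rmdir()                      # directory deletion
print(reports.exists())              # False
```

Under the hood each call becomes a system call (`mkdir`, `open`, `unlink`, ...), and the OS's file-management component does the tracking, protection, and mapping onto disk blocks listed above.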

6. I/O Device Management: An OS will have device drivers to facilitate I/O


functions involving I/O devices. These device drivers are software routines that
control respective I/O devices through their controllers. The OS is responsible for the
following I/O Device Management Functions:

 Keep track of the I/O devices, I/O channels, etc. This module is typically called
I/O traffic controller.
 Decide on an efficient way to allocate the I/O resource. If it is to be shared,
then decide who gets it, how much of it is to be allocated, and for how long.
This is called I/O scheduling.
 Allocate the I/O device and initiate the I/O operation.
 Reclaim the device when its use is finished. In most cases, I/O terminates
automatically.
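Spooling, one classic answer to "who gets the device, and when" for a slow shared device like a printer, can be sketched as a queue that the I/O traffic controller drains in order. The class and document names below are invented for illustration.

```python
from collections import deque

class PrintSpooler:
    """A toy spool: jobs queue up; the device processes them one at a time."""

    def __init__(self):
        self.queue = deque()
        self.printed = []

    def submit(self, doc):
        """Processes hand jobs to the spool instead of the device itself."""
        self.queue.append(doc)

    def run_device(self):
        """The device drains the spool in submission order."""
        while self.queue:
            self.printed.append(self.queue.popleft())

spool = PrintSpooler()
spool.submit("thesis.pdf")
spool.submit("memo.txt")
spool.run_device()
print(spool.printed)  # ['thesis.pdf', 'memo.txt']
```

The benefit is that a submitting process can continue immediately after queueing its job; only the spooler waits on the slow device.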

CHARACTERISTICS OF THE OPERATING SYSTEM


The operating system's characteristics can be divided into two groups: the main and
the supporting characteristics. The main characteristics involve concurrency, sharing,
long-term storage, and non-determinacy:

1. Concurrency: In a single-processor multiprogramming system, processes


are interleaved in time to yield the appearance of simultaneous execution.
2. Sharing: The introduction of multiprogramming brought the ability to share
the resources among users. Sharing involves not only the processors but also
the Memory; Input/ output devices, such as discs and printers; Programs; Data.
 Sharing the resources such as disc and printers
 Program and routine sharing
 Data sharing

3. Long-term storage: Many users and applications require a means of storing
information for extended periods, since they need to share data, programs,
and routines, which are kept in RAM or secondary storage. The things to be
considered are:
 Easy access to the data and programs
 Security from any interference
 Protection from any system breakdown

4. Non-determinacy: The result of a particular program should depend only on


the input of the program and not on the activities of other programs in shared
systems. But when programs share memory and their execution is interleaved
by the processor, they may interfere with each other by overwriting common
memory areas in unpredictable ways. Thus, the order in which various
programs are scheduled may affect the outcome of any particular program.
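This interference is easy to reproduce with threads sharing a counter. The sketch below serializes access with a lock so the result is deterministic; remove the lock and the final value can depend on how the two threads happen to be interleaved, which is exactly the non-determinacy described above.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(times):
    """Increment the shared counter, one guarded update at a time."""
    global counter
    for _ in range(times):
        with lock:            # only one thread may touch the counter at once
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000 on every run, thanks to the lock
```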

FEATURES OF OPERATING SYSTEM


The support characteristic of the Operating System involves efficiency, reliability,
maintainability and small size.
1. Efficiency: Efficient software can solve a problem in a short time. The
efficiency criteria are measured by:
 Average time between jobs,
 Processor idle time,
 Turnaround time for batch jobs,
 Response (feedback) time,
 The use of computer resources.
Even though the above factors cannot all be optimized at the same time, an efficient
operating system makes full use of resources and maximizes the use of the system's
processors.

2. Reliability: Reliability is typically far more important for real-time systems


than for non-real-time systems. A reliable operating system offers fail-soft
operation. Fail-soft refers to the ability of a system to fail as gracefully as
possible: for example, a real-time system will attempt to either correct the
problem or minimize its effects while continuing to run. One important aspect
of fail-soft operation is referred to as stability.

3. Maintainability: A good operating system can be troubleshot easily by a


programmer without taking a long time. Its modules are clear and its
documentation is complete, helping the programmer to maintain the Operating
System.

4. Small size: An operating system should be small in size to reduce its use of
memory and storage, since storage occupied by the operating system cannot
be used to run user programs. Reducing the OS's footprint leaves more
memory for users to run their programs.

COMPONENTS OF OPERATING SYSTEM


There are various components of an Operating System that perform well-defined
tasks. Though most Operating Systems differ in structure, logically they have similar
components. Each component is a well-defined portion of the system with clearly
described functions, inputs, and outputs, as listed below.
1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System
9. Multitasking
10. User Interface
11. Interrupt

1. Process Management: The operating system is responsible for the following


activities in connection with process management:
 Create, load, execute, suspend, resume, and terminate processes.
 Switch the CPU among multiple processes in main memory.
 Provide communication mechanisms so that processes can communicate with
each other.
 Provide synchronization mechanisms to control concurrent access to shared
data, keeping shared data consistent.
 Allocate/de-allocate resources properly to prevent or avoid deadlock situations.

2. I/O Device Management: Following are the tasks of I/O Device Management
component:
 Hide the details of H/W devices
 Manage main memory for the devices using cache, buffer, and spooling
 Maintain and provide custom drivers for each device.

3. File Management: The operating system is responsible for the following


activities in connection with file management:
 File creation and deletion
 Directory creation and deletion
 The support of primitives for manipulating files and directories
 Mapping files onto secondary storage
 File backup on stable (non-volatile) storage media

4. Network Management: Network management is the process of keeping your


network healthy for an efficient communication between different computers.
Network management comprises fault analysis, maintaining the quality of
service, provisioning of networks, and performance management. Following are
the features of network management:
 Network administration
 Network maintenance
 Network operation
 Network provisioning
 Network security

5. Main Memory Management: The main motivation behind Memory


Management is to maximize memory utilization on the computer system. The
operating system is responsible for the following activities in connections with
memory management:
 Keep track of which parts of memory are currently being used and by whom.
 Decide which processes to load when memory space becomes available.
 Allocate and deallocate memory space as needed.

6. Secondary Storage Management: The operating system is responsible for


the following activities in connection with disk management:
 Free space management
 Storage allocation
 Disk scheduling

7. Security Management: Security Management refers to a mechanism for


controlling the access of programs, processes, or users to the resources
defined by the computer system, specifying the controls to be imposed,
together with some means of enforcement.
8. Command Interpreter System: The command interpreter system allows human
users to interact with the Operating System and provides a convenient
programming environment to the users. It executes a user command by
calling one or more underlying system programs or system calls.
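A command interpreter's core loop is short: read a line, parse it into a command and arguments, and dispatch to the program or routine that implements it. The two built-in commands below (`echo` and `setvar`) are invented for illustration; a real shell would also launch external programs via system calls.

```python
import shlex

def interpret(line, env):
    """Parse one command line and dispatch to a built-in command."""
    parts = shlex.split(line)             # tokenize, respecting quotes
    if not parts:
        return ""
    cmd, *args = parts
    if cmd == "echo":                     # echo WORDS...
        return " ".join(args)
    if cmd == "setvar":                   # setvar NAME VALUE
        env[args[0]] = args[1]
        return ""
    return f"{cmd}: command not found"

env = {}
print(interpret("echo hello world", env))   # hello world
interpret("setvar USER ada", env)
print(env["USER"])                          # ada
print(interpret("frobnicate", env))         # frobnicate: command not found
```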

PARAMETERS USED TO MEASURE OPERATING SYSTEM PERFORMANCE


Response Time: The response time is defined as the total time elapsed between an
inquiry or demand made on a system resource and the receipt of the response.
Latency: Computer latency is defined as the time it takes to communicate a
message, or the time the message spends traveling the geographical distance ('on
the wire') before it gets to its desired destination. This can be compared to the time
one spends on an aircraft, traveling from one geographical location to another.
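Response time can be measured directly by wrapping a request with a monotonic clock. The `timed` helper and `slow_service` below are invented names; the service simulates 50 ms of latency "on the wire".

```python
import time

def timed(request):
    """Return the request's result and the elapsed response time in seconds."""
    start = time.perf_counter()
    result = request()                    # the demand made on a system resource
    elapsed = time.perf_counter() - start
    return result, elapsed

def slow_service():
    time.sleep(0.05)                      # simulate 50 ms of latency
    return "ok"

result, elapsed = timed(slow_service)
print(result)                             # ok
# elapsed is at least (approximately) the simulated 50 ms of latency
print(round(elapsed, 3))
```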
Speed of execution
The term speed is usually in reference to the clock speed of the processor. The clock
speed is defined as the clock cycles per second, which determines the rate at which
instruction processing takes place. It is usually measured in megahertz (MHz) or
gigahertz (GHz).

SERVICES, PROPERTIES, AND STRUCTURE OF AN OPERATING SYSTEM


Services of Operating System
1. Program execution
2. Input Output Operations
3. Communication between Process
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error Handling
12. Time Management

OPERATING SYSTEM PROPERTIES


1. Batch processing: Batch processing is a technique in which an Operating
System collects the programs and data together in a batch before processing
starts. An operating system does the following activities related to batch
processing −
 The OS defines a job which has a predefined sequence of commands,
programs and data as a single unit.
 The OS keeps a number of jobs in memory and executes them without any
manual intervention.
 Jobs are processed in the order of submission, i.e., first come first served
fashion.
 When a job completes its execution, its memory is released and the
output for the job gets copied into an output spool for later printing or
processing.
Advantages
 Batch processing shifts much of the work of the operator to the
computer.
 Increased performance, as a new job gets started as soon as the previous
job is finished, without any manual intervention.
Disadvantages
 Difficult to debug program.
 A job could enter an infinite loop.
 Due to lack of protection scheme, one batch job can affect pending jobs.

2. Multitasking: Multitasking is when multiple jobs are executed by the CPU,
apparently simultaneously, by switching between them. Switches occur so
frequently that the users may interact with each program while it is running.
An OS does the following activities related to multitasking −
 The user gives instructions to the operating system or to a program
directly, and receives an immediate response.
 The OS handles multitasking in that it can handle multiple
operations / execute multiple programs at a time.
 Multitasking Operating Systems are also known as Time-sharing systems.
 These Operating Systems were developed to provide interactive use of a
computer system at a reasonable cost.
 A time-shared operating system uses the concepts of CPU scheduling and
multiprogramming to provide each user with a small portion of a time-shared CPU.
 Each user has at least one separate program in memory.
 A program that is loaded into memory and is executing is commonly
referred to as a process.
 When a process executes, it typically executes for only a very short time
before it either finishes or needs to perform I/O.
 Since interactive I/O typically runs at slower speeds, it may take a long
time to complete. During this time, the CPU can be utilized by another process.
 The operating system allows the users to share the computer simultaneously.
Since each action or command in a time-shared system tends to be short, only
a little CPU time is needed for each user.
 As the system switches CPU rapidly from one user/program to the next, each
user is given the impression that he/she has his/her own CPU, whereas actually
one CPU is being shared among many users.

3. Multiprogramming: Sharing the processor, when two or more programs reside
in memory at the same time, is referred to as multiprogramming.
Multiprogramming assumes a single shared processor. Multiprogramming
increases CPU utilization by organizing jobs so that the CPU always has one
to execute.

An OS does the following activities related to multiprogramming.
 The operating system keeps several jobs in memory at a time.
 This set of jobs is a subset of the jobs kept in the job pool.
 The operating system picks and begins to execute one of the jobs in the
memory.
 Multiprogramming operating systems monitor the state of all active
programs and system resources using memory management programs
to ensure that the CPU is never idle, unless there are no jobs to process.
Advantages
 High and efficient CPU utilization.
 User feels that many programs are allotted CPU almost simultaneously.

Disadvantages
 CPU scheduling is required.
 To accommodate many jobs in memory, memory management is
required.

4. Interactivity: Interactivity refers to the ability of users to interact with a
computer system. An Operating system does the following activities related to
interactivity −
 Provides the user an interface to interact with the system.
 Manages input devices to take inputs from the user. For example,
keyboard.
 Manages output devices to show outputs to the user. For example,
Monitor.
The response time of the OS needs to be short, since the user submits and waits for
the result.
5. Real Time System: Real-time systems are usually dedicated, embedded
systems. An operating system does the following activities related to real-time
system activity.
 In such systems, Operating Systems typically read from and react to
sensor data.
 The Operating system must guarantee response to events within fixed
periods of time to ensure correct performance.

6. Distributed Environment: A distributed environment refers to multiple
independent CPUs or processors in a computer system. An operating system
does the following activities related to a distributed environment −
 The OS distributes computation logics among several physical
processors.
 The processors do not share memory or a clock. Instead, each processor
has its own local memory.
 The OS manages the communications between the processors. They
communicate with each other through various communication lines.

7. Spooling: Spooling is an acronym for Simultaneous Peripheral Operations On
Line. Spooling refers to putting data of various I/O jobs in a buffer. This buffer is
a special area in memory or hard disk which is accessible to I/O devices.
An operating system does the following activities related to spooling −
 Handles I/O device data spooling, as devices have different data access
rates.
 Maintains the spooling buffer, which provides a waiting station where data
can rest while the slower device catches up.
 Maintains parallel computation through the spooling process, as a
computer can perform I/O in a parallel fashion. It becomes possible to have
the computer read data from a tape, write data to disk, and write out
to a printer while it is doing its computing task.
Advantages
 The spooling operation uses a disk as a very large buffer.
 Spooling is capable of overlapping I/O operations for one job with
processor operations for another job.

OPERATING SYSTEM STRUCTURE
The operating system structure is as listed below.
 Simple Structure
 Monolithic Structure
 Layered Approach Structure
 Micro-Kernel Structure
 Exo-Kernel Structure
 Virtual Machines

Operating system structure can be thought of as the strategy for connecting and
incorporating various operating system components within the kernel. Operating
systems are implemented using many types of structures:

SIMPLE STRUCTURE: It is the most straightforward operating system structure, but
it lacks definition and is only appropriate for use with small and restricted systems.
Since the interfaces and levels of functionality in this structure are not well
separated, application programs are able to access I/O routines, which may result
in unauthorized access to I/O procedures.
This organizational structure is used by the MS-DOS operating system:
 There are four layers that make up the MS-DOS operating system, and
each has its own set of features.
 These layers include ROM BIOS device drivers, MS-DOS device drivers,
application programs, and system programs.
 The MS-DOS operating system benefits from layering because each level
can be defined independently and, when necessary, can interact with one
another.
 If the system is built in layers, it will be simpler to design, manage, and
update. Because of this, simple structures can be used to build constrained
systems that are less complex.
 When a user program fails, the operating system as a whole crashes.
 Because MS-DOS systems have a low level of abstraction, programs and
I/O procedures are visible to end users, giving them the potential for unwanted
access.
Advantages of Simple Structure:
 Because there are only a few interfaces and levels, it is simple to develop.
 Because there are fewer layers between the hardware and the applications,
it offers superior performance.
Disadvantages of Simple Structure:
 The entire operating system breaks if just one user program malfunctions.
 Since the layers are interconnected, and in communication with one
another, there is no abstraction or data hiding.
 The operating system's operations are accessible to layers, which can result
in data tampering and system failure.

MONOLITHIC STRUCTURE: The monolithic operating system controls all aspects of
the operating system's operation, including file management, memory management,
device management, and core operational functions.

The core of an operating system (OS) is called the kernel. All other system
components are provided with fundamental services by the kernel. It is the
main interface between the operating system and the hardware. In a monolithic
design, the kernel can directly access all of the system's resources, such as
a keyboard or mouse.

Advantages of Monolithic Structure:
 Because layering is unnecessary and the kernel alone is responsible for
managing all operations, it is easy to design and execute.
 Because functions like memory management, file management,
process scheduling, etc., are implemented in the same address space, the
monolithic kernel runs rather quickly compared to other systems.
Utilizing the same address space speeds up and reduces the time required for
address allocation for new processes.
Disadvantages of Monolithic Structure:
 The monolithic kernel's services are interconnected in address space and
have an impact on one another, so if any of them malfunctions, the entire
system does as well.
 It is not adaptable. Therefore, launching a new service is difficult.

LAYERED STRUCTURE: The OS is separated into layers or levels in this kind of
arrangement. Layer 0 (the lowest layer) contains the hardware, and layer N (the
highest layer) contains the user interface. These layers are organized
hierarchically, with the top-level layers making use of the capabilities of the
lower-level ones.
The functionalities of each layer are separated in this method, and abstraction is
also an option. Because layered structures are hierarchical, debugging is simpler:
all lower-level layers are debugged before the upper layer is examined. As a
result, only the present layer has to be reviewed, since all the lower layers have
already been examined.
Advantages of Layered Structure:
 Work duties are separated since each layer has its own functionality, and
there is some amount of abstraction.
 Debugging is simpler because the lower layers are examined first, followed
by the top layers.
Disadvantages of Layered Structure:
 Performance is compromised in layered structures due to layering.
 Construction of the layers requires careful design because upper layers only
make use of lower layers' capabilities.

MICRO-KERNEL STRUCTURE: The operating system is created using a micro-kernel
framework that strips the kernel of any non-essential parts. These optional
components are implemented as system and user-level programs, so systems
developed this way are called Micro-Kernels.
Each Micro-Kernel is created separately and is kept apart from the others. As a
result, the system is now more trustworthy and secure. If one Micro-Kernel
malfunctions, the remaining operating system is unaffected and continues to
function normally.
Advantages of Micro-Kernel Structure:
 It enables portability of the operating system across platforms.
 Due to the isolation of each Micro-Kernel, it is reliable and secure.
 The reduced size of Micro-Kernels allows for successful testing.
 The remaining operating system remains unaffected and keeps running
properly even if a component or Micro-Kernel fails.
Disadvantages of Micro-Kernel Structure:
 The performance of the system is decreased by increased inter-module
communication.
 The construction of a system is complicated.

EXOKERNEL
An operating system called Exokernel was created at MIT with the goal of offering
application-level management of hardware resources. The exokernel architecture's
goal is to enable application-specific customization by separating resource
management from protection. Exokernel size tends to be minimal due to its limited
operability.
Exokernel operating systems have a number of features, including:
 Enhanced application control support.
 Splits management and security apart.

 A secure transfer of abstractions is made to an unreliable library operating
system.
 Brings up a low-level interface.
 Operating systems for libraries provide compatibility and portability.
Advantages of Exokernel Structure:
 Application performance is enhanced by it.
 Accurate resource allocation and revocation enable more effective
utilisation of hardware resources.
 New operating systems can be tested and developed more easily.
 Every user-space program is permitted to utilise its own customised
memory management.
Disadvantages of Exokernel Structure:
 A decline in consistency
 Exokernel interfaces have a complex architecture.

VIRTUAL MACHINES (VMs)
The hardware of a personal computer, including the CPU, disk drives, RAM, and NIC
(Network Interface Card), is abstracted by a virtual machine into a variety of
execution contexts based on our needs, giving us the impression that each execution
environment is a separate computer. VirtualBox is an example of this.
Advantages of Virtual Machines:
 Due to total isolation between each virtual machine and every other virtual
machine, there are no issues with security.
 A virtual machine may offer an architecture for the instruction set that is
different from that of actual computers.
 Simple availability, accessibility, and recovery convenience.
Disadvantages of Virtual Machines:
 Depending on the workload, operating numerous virtual machines
simultaneously on a host computer may have an adverse effect on one of
them.
 When it comes to hardware access, virtual computers are less effective than
physical ones.

PROCESS MANAGEMENT
Process management includes various roles and responsibilities, such as allocating
system resources, managing memory, managing input/output devices for running
system processes, and scheduling the execution of processes in a way that
maximizes throughput and minimizes response time.
Process management can help organizations improve their operational efficiency,
reduce costs, increase customer satisfaction, and maintain compliance with

regulatory requirements. It involves analysing the performance of existing processes,
identifying bottlenecks, and making changes to optimize the process flow.
Some of the system calls in this category are as follows.
 Create a child process identical to the parent.
 Terminate a process
 Wait for a child process to terminate
 Change the priority of the process
 Block the process
 Ready the process
 Dispatch a process
 Suspend a process
 Resume a process
 Delay a process
 Fork a process
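A minimal sketch of two of these calls on a POSIX system — creating a child process identical to the parent with fork, then waiting for it to terminate; the exit-status value is arbitrary, chosen only for the example:

```python
import os

# The parent forks a child identical to itself, then waits for the
# child to terminate and collects its exit status.
pid = os.fork()                        # create a child process
if pid == 0:
    # Child branch: do some work, then terminate with status 7
    os._exit(7)
else:
    # Parent branch: block until the child terminates
    _, status = os.waitpid(pid, 0)
    child_exit_code = os.WEXITSTATUS(status)
    print(f"child {pid} exited with code {child_exit_code}")
```

This is the same fork/wait pattern named in the list above; on systems without fork, higher-level facilities such as process-spawning libraries play the equivalent role.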
Explanation of Process
 Text Section: Contains the program code; the current activity is
represented by the value of the Program Counter.
 Stack: The stack contains temporary data, such as function parameters,
return addresses, and local variables.
 Data Section: Contains the global variables.
 Heap Section: Memory dynamically allocated to the process during its
run time.
Key Components of Process Management
Below are some key components of process management.
 Process mapping: Creating visual representations of processes to understand
how tasks flow, identify dependencies, and uncover improvement
opportunities.
 Process analysis: Evaluating processes to identify bottlenecks, inefficiencies,
and areas for improvement.
 Process redesign: Making changes to existing processes or creating new
ones to optimize workflows and enhance performance.
 Process implementation: Introducing the redesigned processes into the
organization and ensuring proper execution.
 Process monitoring and control: Tracking process performance, measuring
key metrics, and implementing control mechanisms to maintain efficiency and
effectiveness.

Characteristics of a Process
A process has the following attributes.
 Process Id: A unique identifier assigned by the operating system.
 Process State: Can be ready, running, etc.
 CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU).
 Accounting information: Amount of CPU used for process execution, time
limits, execution ID, etc.
 I/O status information: For example, devices allocated to the process, open
files, etc.
 CPU scheduling information: For example, priority (different processes may
have different priorities; for example, a shorter process may be assigned high
priority in shortest-job-first scheduling).
All of the above attributes of a process are also known as the context of the
process. Every process has its own process control block (PCB), i.e. each process will
have a unique PCB. All of the above attributes are part of the PCB.
States of Process
A process is in one of the following states:
 New: Newly Created Process (or) being-created process.
 Ready: After the creation process moves to the Ready state, i.e., the process
is ready for execution.
 Run: Currently running process in CPU (only one process at a time can be
under execution in a single processor)
 Wait (or Block): When a process requests I/O access.
 Complete (or terminated): The process completed its execution.
 Suspended Ready: When the ready queue becomes full, some processes are
moved to a suspended ready state
 Suspended Block: When the waiting queue becomes full.
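The five basic states above can be modelled as a small transition table; the set of legal moves below is a simplification for illustration (suspended states omitted):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUN = auto()
    WAIT = auto()
    TERMINATED = auto()

# Allowed transitions between the five basic states described above
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUN},
    State.RUN: {State.READY, State.WAIT, State.TERMINATED},
    State.WAIT: {State.READY},
    State.TERMINATED: set(),
}

def move(current, target):
    """Return the new state, or raise if the transition is illegal."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

# Walk one process through a typical lifetime: created, scheduled,
# blocked on I/O, rescheduled, and finally terminated.
s = State.NEW
for nxt in (State.READY, State.RUN, State.WAIT,
            State.READY, State.RUN, State.TERMINATED):
    s = move(s, nxt)
print(s.name)   # TERMINATED
```

Note that a process can never jump straight from New to Run; it must pass through Ready, which is exactly what the table encodes.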

Context Switching of Process
The process of saving the context of one process and loading the context of another
process is known as Context Switching. In simple terms, it is like loading and
unloading the process from the running state to the ready state.
When Does Context Switching Happen?
1. When a high-priority process comes to a ready state (i.e., with higher priority than
the running process).
2. An Interrupt occurs.
3. A switch occurs between user mode and kernel mode (though this does not always require a full context switch).
4. Pre-emptive CPU scheduling is used.
Context Switching

In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU
in the Process Control block. A context switcher makes it possible for multiple
processes to share a single CPU using this method. A multitasking operating
system must include context switching among its features.
The state of the currently running process is saved into the process control block
when the scheduler switches the CPU from executing one process to another. The
state used to set the computer, registers, etc. for the process that will run next is
then loaded from its own PCB. After that, the second can start processing.
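The save/restore step can be sketched as a toy model; the register names and PCB layout here are invented purely for illustration:

```python
# A toy model of a context switch: the dispatcher saves the "CPU
# registers" of the running process into its PCB and restores the
# registers of the next process from its PCB.

cpu = {"pc": 0, "acc": 0}            # pretend CPU state

pcbs = {
    "P1": {"pc": 100, "acc": 5},
    "P2": {"pc": 200, "acc": 9},
}

def context_switch(old_pid, new_pid):
    if old_pid is not None:
        pcbs[old_pid] = dict(cpu)    # save outgoing context into its PCB
    cpu.update(pcbs[new_pid])        # load incoming context from its PCB

context_switch(None, "P1")           # dispatch P1
cpu["pc"] += 1                       # P1 runs for a while
context_switch("P1", "P2")           # preempt P1, dispatch P2
print(cpu, pcbs["P1"])
```

After the switch, P1's PCB holds exactly where it stopped, so a later switch back to P1 resumes it from the same point.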

PROCESS SCHEDULING ALGORITHMS
The operating system can use different scheduling algorithms to schedule processes.
Here are some commonly used scheduling algorithms:
 First-come, first-served (FCFS): This is the simplest scheduling algorithm,
where processes are executed on a first-come, first-served basis. FCFS is non-
preemptive, which means that once a process starts executing, it continues
until it is finished or waiting for I/O.
 Shortest Job First (SJF): SJF selects the process with the shortest burst
time. The burst time is the time a process takes to complete its execution.
SJF minimizes the average waiting time of processes.
 Round Robin (RR): Round Robin is a preemptive scheduling algorithm that
gives each process a fixed time slice in rotation. If a process does
not complete its execution within the specified time, it is preempted and added to
the end of the queue. RR ensures fair distribution of CPU time to all processes
and avoids starvation.
 Priority Scheduling: This scheduling algorithm assigns priority to each
process and the process with the highest priority is executed first. Priority can
be set based on process type, importance, or resource requirements.
 Multilevel queue: This scheduling algorithm divides the ready queue into
several separate queues, each queue having a different priority. Processes are
queued based on their priority, and each queue uses its own scheduling
algorithm. This scheduling algorithm is useful in scenarios where different
types of processes have different priorities.
PROCESS TABLE AND PROCESS CONTROL BLOCK (PCB)
A process control block (PCB) is a data structure used by operating systems to store
important information about running processes. It contains information such as the
unique identifier of the process (Process ID or PID), current status, program counter,
CPU registers, memory allocation, open file descriptors, and accounting information.
The PCB is critical to context switching because it allows the operating system to
efficiently manage and control multiple processes.
All this information is required and must be saved when the process is switched from
one state to another. When the process makes a transition from one state to another,
the operating system must update the information in the process's PCB. A process
control block (PCB) contains information about the process, i.e. registers, quantum,
priority, etc. The process table is an array of PCBs, which means it logically contains
a PCB for each of the current processes in the system.
Advantages-
1. Efficient process management
2. Resource management
3. Process synchronization
4. Process scheduling

Disadvantages-
1. Overhead
2. Complexity
3. Scalability
4. Security
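A minimal sketch of a PCB and a process table, with fields loosely following the attributes listed earlier; the exact field set is illustrative, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block."""
    pid: int
    state: str = "new"
    program_counter: int = 0
    priority: int = 0
    open_files: list = field(default_factory=list)
    cpu_time_used: float = 0.0

# The process table maps each PID to its unique PCB.
process_table = {}
for pid in (101, 102):
    process_table[pid] = PCB(pid=pid, state="ready")

# A state change is recorded by updating the PCB in the table.
process_table[101].state = "running"
print(process_table[101])
```

Here the table is a dictionary keyed by PID; an array indexed by PID, as described above, works equally well.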

CPU SCHEDULING IN OPERATING SYSTEMS
Scheduling of processes/work is done to finish the work on time. CPU Scheduling is
a process that allows one process to use the CPU while another process is delayed (on
standby) due to unavailability of any resource such as I/O, thus making full use
of the CPU. The purpose of CPU Scheduling is to make the system more efficient,
faster, and fairer.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue for execution. The selection is carried out by the
short-term (CPU) scheduler, which chooses among the processes in memory that are
ready to execute and allocates the CPU to one of them.
CPU scheduling is the process of deciding which process will own the CPU to use
while another process is suspended. The main function of the CPU scheduling is to
ensure that whenever the CPU remains idle, the OS has at least selected one of the
processes available in the ready-to-use line.
What are the different types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:

 Pre-emptive Scheduling: Pre-emptive scheduling is used when a process
switches from running state to ready state or from the waiting state to the
ready state.
 Non-Pre-emptive Scheduling: Non-Pre-emptive scheduling is used when a
process terminates, or when a process switches from running state to waiting
state.

The following are CPU scheduling algorithms in operating systems:
1. First Come First Serve:
FCFS is considered to be the simplest of all operating system scheduling
algorithms. The first come first serve scheduling algorithm states that the process
that requests the CPU first is allocated the CPU first, and it is implemented by
using a FIFO queue.
Characteristics of FCFS:
 FCFS supports non-preemptive and preemptive CPU scheduling algorithms.
 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not much efficient in performance, and the wait time is
quite high.
Advantages of FCFS:
 Easy to implement
 First come, first serve method
Disadvantages of FCFS:
 FCFS suffers from Convoy effect.
 The average waiting time is much higher than the other algorithms.
 FCFS is very simple and easy to implement and hence not much efficient.
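The FCFS behaviour above can be sketched with a short simulation that computes each job's waiting and turnaround time; the job data are made up for the example:

```python
# Non-preemptive FCFS: processes run strictly in arrival order.
# Each job is (name, arrival_time, burst_time).
jobs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

def fcfs(jobs):
    time, rows = 0, []
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        start = max(time, arrival)      # CPU may sit idle until arrival
        finish = start + burst
        # waiting = start - arrival, turnaround = finish - arrival
        rows.append((name, start - arrival, finish - arrival))
        time = finish
    return rows

for name, waiting, turnaround in fcfs(jobs):
    print(name, waiting, turnaround)
```

With these numbers, P3 waits 6 units because it arrived just behind two earlier jobs — a small-scale view of the convoy effect mentioned above.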
2. Shortest Job First (SJF):
Shortest job first (SJF) is a scheduling process that selects the waiting process
with the smallest execution time to execute next. This scheduling method may or
may not be preemptive. It significantly reduces the average waiting time for other
processes waiting to be executed.
Characteristics of SJF:
 Shortest Job first has the advantage of having a minimum average waiting
time among all operating system scheduling algorithms.

 It is associated with each task as a unit of time to complete.
 It may cause starvation if shorter processes keep coming. This problem can
be solved using the concept of ageing.
Advantages of Shortest Job first:
 As SJF reduces the average waiting time thus, it is better than the first come
first serve scheduling algorithm.
 SJF is generally used for long term scheduling

Disadvantages of SJF:
 One of the demerits SJF has is starvation.
 Many times, it becomes complicated to predict the length of the upcoming CPU
request
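A sketch of non-preemptive SJF: at each decision point the scheduler picks the arrived job with the smallest burst time; the job data are invented for illustration:

```python
# Jobs are (name, arrival_time, burst_time) tuples.
def sjf(jobs):
    pending = sorted(jobs, key=lambda j: (j[1], j[2]))
    time, order = 0, []
    while pending:
        ready = [j for j in pending if j[1] <= time]
        if not ready:                         # CPU idle until next arrival
            time = min(j[1] for j in pending)
            continue
        job = min(ready, key=lambda j: j[2])  # shortest burst first
        pending.remove(job)
        time = max(time, job[1]) + job[2]     # run it to completion
        order.append(job[0])
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

Notice that once P1 starts, shorter jobs that arrive later must still wait for it to finish — the non-preemptive property; the preemptive variant (SRTF) is covered later.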
3. Longest Job First (LJF):
The Longest Job First (LJF) scheduling process is just the opposite of Shortest Job
First (SJF); as the name suggests, this algorithm is based upon the fact that the
process with the largest burst time is processed first. Longest Job First is
non-preemptive in its basic form.
Characteristics of LJF:
 Among all the processes waiting in a waiting queue, CPU is always assigned to
the process having largest burst time.
 If two processes have the same burst time then the tie is broken using FCFS i.e.
the process that arrived first is processed first.
 LJF CPU Scheduling can be of both preemptive and non-preemptive types.

Advantages of LJF:
 No other task can be scheduled until the longest job or process executes
completely.
 All the jobs or processes finish at approximately the same time.

Disadvantages of LJF:
 Generally, the LJF algorithm gives a very high average waiting
time and average turn-around time for a given set of processes.
 This may lead to convoy effect.
4. Priority Scheduling:
Preemptive Priority CPU Scheduling is a preemptive method of CPU
scheduling that works based on the priority of a process. In this
algorithm, the scheduler treats the highest-priority process as the most
important, meaning that the most important process must be done first. In the
case of a conflict, that is, where there is more than one process with equal
priority, the algorithm works on the basis of FCFS (First Come First Serve).
Characteristics of Priority Scheduling:
 Schedules tasks based on priority.
 When a higher-priority task arrives while a task with lower priority is
executing, the higher-priority process takes the place of the lower-priority
process, and the latter is suspended until the execution is complete.
 The lower the number assigned, the higher the priority level of a process.

Advantages of Priority Scheduling:
 The average waiting time is less than FCFS
 Less complex
Disadvantages of Priority Scheduling:
 One of the most common demerits of the Pre-emptive priority CPU scheduling
algorithm is the Starvation Problem. This is the problem in which a process has
to wait for a longer amount of time to get scheduled into the CPU. This
condition is called the starvation problem.
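For brevity, the sketch below shows the non-preemptive variant of priority scheduling, with lower numbers meaning higher priority and FCFS tie-breaking as described above; the job data are invented:

```python
# Jobs are (name, arrival_time, burst_time, priority); lower number wins.
def priority_schedule(jobs):
    pending, time, order = list(jobs), 0, []
    while pending:
        ready = [j for j in pending if j[1] <= time] or \
                [min(pending, key=lambda j: j[1])]   # idle: jump to next arrival
        job = min(ready, key=lambda j: (j[3], j[1])) # priority, then arrival (FCFS tie-break)
        pending.remove(job)
        time = max(time, job[1]) + job[2]
        order.append(job[0])
    return order

print(priority_schedule([("P1", 0, 4, 3), ("P2", 0, 3, 1), ("P3", 2, 2, 2)]))
```

A stream of high-priority arrivals would keep pushing P1 back indefinitely — the starvation problem noted above; ageing (gradually raising the priority of waiting jobs) is the usual remedy.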
5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically
assigned a fixed time slot. It is the pre-emptive version of First come First Serve CPU
Scheduling algorithm. Round Robin CPU Algorithm generally focuses on Time Sharing
technique.
Characteristics of Round robin:
 It’s simple, easy to use, and starvation-free as all processes get the balanced
CPU allocation.
 One of the most widely used methods in CPU scheduling as a core.
 It is considered pre-emptive as the processes are given to the CPU for a very
limited time.
Advantages of Round robin:
 Round robin seems to be fair as every process gets an equal share of CPU.
 The newly created process is added to the end of the ready queue.
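The cyclic time-slice behaviour can be sketched as follows; all jobs are assumed to arrive at time 0 to keep the example short:

```python
from collections import deque

# Round Robin with a fixed time quantum. bursts: {name: burst_time}.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())
    timeline = []                      # (name, run_for) slices in order
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))
        if remaining - run > 0:        # unfinished: back of the queue
            queue.append((name, remaining - run))
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
```

Every process gets a slice each cycle, so no process can starve; choosing the quantum trades context-switch overhead against responsiveness.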

6. Shortest Remaining Time First:
Shortest remaining time first is the pre-emptive version of the Shortest job first
which we have discussed earlier where the processor is allocated to the job closest to
completion. In SRTF the process with the smallest amount of time remaining until
completion is selected to execute.
Characteristics of Shortest remaining time first:
 SRTF algorithm makes the processing of the jobs faster than SJF algorithm,
given its overhead charges are not counted.
 The context switch is done a lot more times in SRTF than in SJF and consumes
the CPU’s valuable time for processing. This adds up to its processing time and
diminishes its advantage of fast processing.
Advantages of SRTF:
 In SRTF the short processes are handled very fast.
 The system also requires very little overhead since it only makes a decision
when a process completes or a new process is added.
Disadvantages of SRTF:
 Like the shortest job first, it also has the potential for process starvation.
 Long processes may be held off indefinitely if short processes are continually
added.
7. Longest Remaining Time First:
The longest remaining time first is a pre-emptive version of the longest job first
scheduling algorithm. This scheduling algorithm is used by the operating system to
program incoming processes for use in a systematic way. This algorithm schedules
those processes first which have the longest processing time remaining for
completion.
Characteristics of longest remaining time first:
 Among all the processes waiting in a waiting queue, the CPU is always assigned
to the process having the largest burst time.
 If two processes have the same burst time then the tie is broken
using FCFS i.e., the process that arrived first is processed first.
 LRTF CPU Scheduling can be of both pre-emptive and non-pre-emptive.
 No other process can execute until the longest task executes completely.
 All the jobs or processes finish at the same time approximately.
Disadvantages of LRTF:
 This algorithm gives a very high average waiting time and average turn-around
time for a given set of processes.
 This may lead to a convoy effect.
8. Highest Response Ratio Next:
Highest Response Ratio Next is a non-preemptive CPU Scheduling algorithm and
it is considered as one of the most optimal scheduling algorithms. The name itself
states that we need to find the response ratio of all available processes and select
the one with the highest Response Ratio. A process once selected will run till
completion.
Characteristics of Highest Response Ratio Next:
 The criterion for HRRN is the Response Ratio, and the mode is Non-Preemptive.
 HRRN is considered as the modification of Shortest Job First to reduce the
problem of starvation.
 In comparison with SJF, during the HRRN scheduling algorithm, the CPU is
allotted to the next process which has the highest response ratio and not to
the process having less burst time.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the Burst time of the
process.
Advantages of HRRN:
 HRRN Scheduling algorithm generally gives better performance than
the shortest job first Scheduling.

 There is a reduction in waiting time for longer jobs and also it encourages
shorter jobs.
Disadvantages of HRRN:
 A practical implementation of HRRN scheduling is difficult, as it is not
possible to know the burst time of every job in advance.
 In this scheduling, there may occur an overload on the CPU.
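The response-ratio formula above can be applied directly to pick the next process; the job data are invented for the example:

```python
# HRRN picks the ready process with the highest response ratio
# (W + S) / S, where W = waiting time so far and S = burst time.
def hrrn_pick(now, ready):
    """ready: list of (name, arrival_time, burst_time) tuples."""
    def ratio(job):
        _, arrival, burst = job
        waiting = now - arrival
        return (waiting + burst) / burst
    return max(ready, key=ratio)[0]

# At time 10, P2 has waited long relative to its burst, so its ratio wins.
ready = [("P1", 8, 6), ("P2", 2, 5), ("P3", 9, 2)]
print(hrrn_pick(10, ready))
```

Because waiting time W grows while a job sits in the queue, every job's ratio rises over time, which is exactly how HRRN limits the starvation seen in plain SJF.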

9. Multiple Queue Scheduling:
Processes in the ready queue can be divided into different classes where each class
has its own scheduling needs. For example, a common division is a foreground
(interactive) process and a background (batch) process. These two classes have
different scheduling needs. For this kind of situation Multilevel Queue
Scheduling is used.
 System Processes: The operating system itself has processes to run,
generally termed system processes.
 Interactive Processes: An interactive process is a type of process that
interacts with the user and therefore needs quick response times.
 Batch Processes: Batch processing is a technique in the operating
system that collects programs and data together in the form of
a batch before the processing starts.
Advantages of multilevel queue scheduling:
 The main merit of the multilevel queue is that it has a low scheduling
overhead.
Disadvantages of multilevel queue scheduling:
 Starvation problem
 It is inflexible in nature
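The division into classes can be illustrated with a minimal sketch: a foreground (interactive) queue that always runs before a background (batch) queue. The queue contents are invented examples; real systems have more levels and policies.

```python
from collections import deque

# Minimal multilevel-queue sketch: foreground always beats background.
# Process names are illustrative only.
foreground = deque(["editor", "shell"])
background = deque(["payroll_batch", "report_batch"])

def pick_next():
    # Foreground has absolute priority; background runs only when the
    # foreground queue is empty. This strict priority is exactly what
    # makes starvation of background jobs possible.
    if foreground:
        return foreground.popleft()
    if background:
        return background.popleft()
    return None

order = [pick_next() for _ in range(4)]
```

Because each process lives permanently in one queue, scheduling is just a couple of queue checks, which is why the overhead is low.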
10. Multilevel Feedback Queue Scheduling:
Multilevel Feedback Queue Scheduling (MLFQ) is like Multilevel Queue
Scheduling, but here processes can move between the queues.
Characteristics of Multilevel Feedback Queue Scheduling:
 In a plain multilevel queue-scheduling algorithm, processes are
permanently assigned to a queue on entry to the system and are not
allowed to move between queues.
 Because the processes are permanently assigned to a queue, this setup has
the advantage of low scheduling overhead,
 but on the other hand the disadvantage of being inflexible; the feedback
variant removes this restriction.
Advantages of Multilevel feedback queue scheduling:
 It is more flexible
 It allows different processes to move between different queues
Disadvantages of Multilevel feedback queue scheduling:
 It produces additional CPU overhead.
 It is the most complex algorithm.
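A common MLFQ rule is that a process which uses up its whole time quantum is demoted to a lower-priority queue with a larger quantum. The sketch below models only that demotion step; the quantum sizes and process data are assumed for illustration.

```python
from collections import deque

# MLFQ demotion sketch: three priority levels, bigger quantum lower down.
# All numbers are illustrative.
queues = [deque(), deque(), deque()]   # level 0 = highest priority
quantum = [2, 4, 8]

def run_once(level, name, remaining):
    """Run the process for one quantum; demote it if it doesn't finish.

    Returns the queue level the process was moved to, or None if it finished.
    """
    used = min(quantum[level], remaining)
    remaining -= used
    if remaining > 0:
        new_level = min(level + 1, len(queues) - 1)   # move one level down
        queues[new_level].append((name, remaining))
        return new_level
    return None

# A CPU-bound job with 10 units of work enters at level 0: it burns its
# 2-unit quantum and is demoted to level 1 with 8 units left.
lvl = run_once(0, "cpu_hog", 10)
```

Interactive jobs that finish within the small top-level quantum stay at high priority, while CPU-bound jobs drift downward, which is the behaviour the flexibility above refers to.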
GENERAL CONCEPT OF SYSTEM PROGRAMMING
System Programming can be defined as the act of building systems software
using system programming languages. According to the computer hierarchy,
hardware comes first, then the operating system, system programs, and finally
application programs.
Here are examples of system programs:
1. File Management
2. Command-Line Interfaces (CLIs)
3. Device drivers
4. Status Information
5. File Modification
6. Programming-Language support
7. Program Loading and Execution
8. Communications
APPLICATION PROGRAMMING
Similarly, Application Programming is the act of building application programs
for a specific set of purposes, such as creating documents and spreadsheets or
reading and recording music files.
DIFFERENCE BETWEEN SYSTEM PROGRAMS AND APPLICATION PROGRAMS
System programs are programs that typically aid in the management of the
computer system. Some system programs help external hardware devices run
properly (device drivers), some help in system security management (antiviruses,
disk scanners, firewalls, etc.), and some aid in other management functions
generally handled by the operating system.
On the other hand, application programs help the users to perform specific tasks
such as recording, printing, etc.
UTILITIES AND LIBRARIES
Utility software is a type of software that is designed to help users manage, maintain,
and optimize their computer systems. Utility software includes a wide range of tools
and applications that perform specific tasks to improve the performance, security,
and functionality of a computer system.
Utility software is used to analyse and maintain a computer. It is focused on
how the OS works and, on that basis, performs tasks that enable the smooth
functioning of the computer. Such software may come with the OS, like Windows
Defender and disk clean-up tools. Antivirus programs, backup software, file
managers, and disk compression tools are all utility software.
Types of Utility Software
Some of the most common types of utility software include –
 Antivirus software
 Disk cleaners
 Backup and recovery software
 System optimizers
 Disk defragmenters
 File compression software
 Disk encryption software
Operating System - I/O Software
I/O software is often organized in the following layers −
 User Level Libraries − These provide a simple interface to the user program
to perform input and output. For example, stdio is a library provided by the
C and C++ programming languages.
 Kernel Level Modules − This provides device driver to interact with the
device controller and device independent I/O modules used by the device
drivers.
 Hardware − This layer includes the actual hardware and the hardware
controllers that interact with the device drivers and drive the devices.
A key concept in the design of I/O software is device independence: it should
be possible to write programs that can access any I/O device without having to
specify the device in advance.
Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a
particular device. Operating System takes help from device drivers to handle all I/O
devices. Device drivers encapsulate device-dependent code behind a standard
interface, so that only the driver contains the device-specific register
reads and writes.
A device driver performs the following jobs −
 Accept requests from the device-independent software above it.
 Interact with the device controller to take and give I/O and perform the
required error handling.
 Make sure that the request is executed successfully.
How a device driver handles a request is as follows: Suppose a request comes to
read a block N. If the driver is idle at the time a request arrives, it starts carrying out
the request immediately. Otherwise, if the driver is already busy with some other
request, it places the new request in the queue of pending requests.
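The idle-versus-busy behaviour just described can be modelled with a tiny toy driver. This is a sketch of the idea only; the class, block numbers, and log messages are invented for illustration.

```python
from collections import deque

# Toy model of driver request handling: an idle driver starts a request
# immediately; a busy driver queues it until the current one completes.
class Driver:
    def __init__(self):
        self.busy = False
        self.pending = deque()
        self.log = []

    def request(self, block):
        if self.busy:
            self.pending.append(block)            # busy: queue the request
        else:
            self.busy = True                      # idle: start immediately
            self.log.append(f"start read block {block}")

    def complete(self):
        # Current transfer finished; start the next pending request, if any.
        if self.pending:
            self.log.append(f"start read block {self.pending.popleft()}")
        else:
            self.busy = False

d = Driver()
d.request(5)   # driver idle -> started immediately
d.request(9)   # driver busy -> placed in the pending queue
d.complete()   # block 5 done -> block 9 is started from the queue
```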

Interrupt handlers
An interrupt handler, also known as an interrupt service routine or ISR, is a
piece of software, more specifically a callback function in an operating system
or in a device driver, whose execution is triggered by the reception of an
interrupt.
When the interrupt happens, the interrupt procedure does whatever it has to do
to handle the interrupt, updates data structures, and wakes up the process that
was waiting for the interrupt to happen.
Device-Independent I/O Software
The basic function of the device-independent software is to perform the I/O functions
that are common to all devices and to provide a uniform interface to the user-level
software. Though it is difficult to write completely device-independent
software, we can write some modules which are common among all the devices.
Following is a list of functions of device-independent I/O software −
 Uniform interfacing for device drivers
 Device naming - Mnemonic names mapped to Major and Minor device numbers
 Device protection
 Providing a device-independent block size
 Buffering, because data coming off a device cannot always be stored
directly in its final destination
 Storage allocation on block devices
 Allocating and releasing dedicated devices
 Error Reporting

User-Space I/O Software
These are the libraries which provide a richer and simplified interface to
access the functionality of the kernel or, ultimately, to interact with the
device drivers. Most of the user-level I/O software consists of library
procedures, with some exceptions such as the spooling system, which is a way of
dealing with dedicated I/O devices in a multiprogramming system.

Kernel I/O Subsystem
Kernel I/O Subsystem is responsible to provide many services related to I/O.
Following are some of the services provided.
 Scheduling − Kernel schedules a set of I/O requests to determine a good
order in which to execute them. When an application issues a blocking I/O
system call, the request is placed on the queue for that device.
 Buffering − Kernel I/O Subsystem maintains a memory area known
as buffer that stores data while they are transferred between two devices or
between a device with an application operation. Buffering is done to cope with
a speed mismatch between the producer and consumer of a data stream or to
adapt between devices that have different data transfer sizes.
 Caching − The kernel maintains cache memory, a region of fast memory
that holds copies of data. Access to the cached copy is more efficient than
access to the original.
 Spooling and Device Reservation − A spool is a buffer that holds output for
a device, such as a printer, that cannot accept interleaved data streams. The
spooling system copies the queued spool files to the printer one at a time.
 Error Handling − An operating system that uses protected memory can guard
against many kinds of hardware and application errors.

INPUT/OUTPUT
Operating System - I/O Hardware
An I/O system is required to take an application I/O request and send it to the
physical device, then take whatever response comes back from the device and send
it to the application. I/O devices can be divided into two categories −
 Block devices − A block device is one with which the driver communicates by
sending entire blocks of data. For example, Hard disks, USB cameras, Disk-On-
Key etc.
 Character devices − A character device is one with which the driver
communicates by sending and receiving single characters (bytes, octets). For
example, serial ports, parallel ports, sound cards, etc.

Device Controllers
The device controller works like an interface between a device and its device
driver. I/O units (keyboard, mouse, printer, etc.) typically consist of a
mechanical component and an electronic component, where the electronic
component is called the device controller.

Communication to I/O Devices
The CPU must have a way to pass information to and from an I/O device. There are
three approaches available for the CPU to communicate with a device.
 Special Instruction I/O
 Memory-mapped I/O
 Direct memory access (DMA)

Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices.
These instructions typically allow data to be sent to an I/O device or read from an I/O
device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and
I/O devices. The device is connected directly to certain main memory locations
so that the I/O device can transfer blocks of data to/from memory without going
through the CPU.

While using memory-mapped I/O, the OS allocates a buffer in memory and informs
the I/O device to use that buffer to send data to the CPU. The I/O device
operates asynchronously with the CPU and interrupts the CPU when finished.
The advantage of this method is that every instruction which can access memory
can be used to manipulate an I/O device. Memory-mapped I/O is used for most
high-speed I/O devices like disks and communication interfaces.
Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each
byte is transferred. If a fast device such as a disk generated an interrupt for each
byte, the operating system would spend most of its time handling these interrupts.
So a typical computer uses direct memory access (DMA) hardware to reduce this
overhead.
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to
read from or write to memory without CPU involvement. The DMA module itself
controls the exchange of data between main memory and the I/O device. The CPU
is involved only at the beginning and end of the transfer and is interrupted
only after the entire block has been transferred.
Direct Memory Access needs special hardware called a DMA controller (DMAC) that
manages the data transfers and arbitrates access to the system bus. The
controllers are programmed with source and destination pointers (where to
read/write the data), counters to track the number of transferred bytes, and
settings, which include I/O and memory types, interrupts, and states for the
CPU cycles.

The operating system uses the DMA hardware as follows −
Step 1 − The device driver is instructed to transfer disk data to a buffer at
address X.
Step 2 − The device driver then instructs the disk controller to transfer the
data to the buffer.
Step 3 − The disk controller starts the DMA transfer.
Step 4 − The disk controller sends each byte to the DMA controller.
Step 5 − The DMA controller transfers each byte to the buffer, increases the
memory address, and decreases the counter C until C becomes zero.
Step 6 − When C becomes zero, the DMA controller interrupts the CPU to signal
transfer completion.
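Steps 5 and 6 can be traced with a small simulation of the controller's address/counter bookkeeping. The byte values, buffer size, and start address are assumed purely for the demonstration.

```python
# Simulation of the DMA inner loop: copy bytes into a memory buffer while
# advancing the address and decrementing the counter C, then "interrupt"
# (here, just return a message) when C reaches zero. Values are illustrative.
def dma_transfer(data, buffer, start_addr):
    addr, count = start_addr, len(data)     # programmed pointer and counter C
    for byte in data:
        buffer[addr] = byte                 # step 5: transfer byte to buffer
        addr += 1                           # increase the memory address
        count -= 1                          # decrease the counter C
    return "interrupt: transfer complete"   # step 6: C == 0 -> interrupt CPU

memory = [0] * 8
status = dma_transfer([0xDE, 0xAD, 0xBE, 0xEF], memory, 2)
```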

Polling vs Interrupts I/O
A computer must have a way of detecting the arrival of any type of input. There are
two ways that this can happen, known as polling and interrupts. Both of these
techniques allow the processor to deal with events that can happen at any time and
that are not related to the process it is currently running.
Polling I/O
Polling is the simplest way for an I/O device to communicate with the processor.
The process of periodically checking the status of the device, to see if it is
time for the next I/O operation, is called polling. The I/O device simply puts
the information in a status register, and the processor must come and get it.
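A polling loop is easy to show in code. The device model below is a made-up stand-in for a real status register; the point is that the CPU burns cycles in the busy-wait loop until the device reports ready.

```python
# Simulated device whose status register reads BUSY until a few checks have
# happened. The class and timing are invented for illustration.
class Device:
    def __init__(self, ready_after):
        self.checks = 0
        self.ready_after = ready_after

    def status_register(self):
        self.checks += 1
        return "READY" if self.checks >= self.ready_after else "BUSY"

dev = Device(ready_after=3)
polls = 0
while dev.status_register() != "READY":   # busy-wait: CPU time is spent here
    polls += 1
```

This wasted checking is exactly what the interrupt-driven method below avoids: the CPU does other work and is notified only when the device is ready.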

Interrupts I/O
An alternative scheme for dealing with I/O is the interrupt-driven method. An
interrupt is a signal to the microprocessor from a device that requires attention.
A device controller puts an interrupt signal on the bus when it needs the CPU's
attention. When the CPU receives an interrupt, it saves its current state and
invokes the appropriate interrupt handler using the interrupt vector (the
addresses of OS routines that handle various events). When the interrupting
device has been dealt with, the CPU continues with its original task as if it
had never been interrupted.
Interrupt handling
Interrupts are responses by the processor to a process/event that needs immediate
attention from the software.
Interrupts alert the processor and serve as a request for the CPU to interrupt
the currently executing program when permitted, so that the event can be
processed in good time. If the request is accepted, the processor responds by
suspending its current activities (saving its state) and executing a function
called an interrupt handler to deal with the event.
Types of Interrupts
Interrupt signals, as mentioned above, are a response to software or hardware
events in the system. These events are classified as software interrupts or
hardware interrupts.

A. Hardware Interrupt
A hardware interrupt is an electronic alerting signal sent to the processor from an
external device, like a disk controller or an external peripheral. For example, when
we press a key on the keyboard or move the mouse, they trigger hardware interrupts
which cause the processor to read the keystroke or mouse position.

The three types of Hardware Interrupt
i. Masking: Typically, processors have an internal interrupt mask register
that allows selective enabling and disabling of hardware interrupts. Each
interrupt is linked with a bit in the mask register. In some systems, the
interrupt is enabled when the bit is set and disabled when the bit is clear.
Therefore, when an interrupt is disabled, the linked interrupt signal will
be ignored by the processor.
ii. In some cases, some interrupt signals cannot be affected by the interrupt
mask and so cannot be disabled; these are referred to as Non-Maskable
Interrupts. Such interrupts have an extremely high priority and cannot be
ignored under any circumstance. Non-maskable interrupts are hardware
interrupts that cannot be delayed and therefore require the processor to
process them immediately.
On the other hand, a maskable interrupt is a hardware interrupt that can be
delayed when a higher-priority interrupt occurs at the same time.
iii. There exists another type of interrupt called Spurious Interrupts. These
are invalid, short-duration signals on an interrupt input. They are caused
by glitches that result from electrical interference, race conditions (a
race condition is an undesirable situation that occurs when a device or
system attempts to perform two or more operations at the same time, even
though the operations must be done in a proper sequence to execute
correctly), or malfunctioning devices.
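The mask register described in point (i) is just a bit field, one bit per interrupt line. The sketch below assumes the "set bit = enabled" convention mentioned above; the IRQ line numbers are invented, and real hardware conventions vary.

```python
# Interrupt mask register modelled as an integer bit field. Line numbers
# and the set-bit-means-enabled convention are illustrative assumptions.
KEYBOARD_IRQ, TIMER_IRQ, DISK_IRQ = 0, 1, 2
mask = 0b000                              # all maskable interrupts disabled

def enable(m, irq):
    return m | (1 << irq)                 # set the bit for this line

def disable(m, irq):
    return m & ~(1 << irq)                # clear the bit for this line

def delivered(m, irq):
    # A non-maskable interrupt would bypass this check entirely.
    return bool(m & (1 << irq))

mask = enable(mask, TIMER_IRQ)
mask = enable(mask, DISK_IRQ)
mask = disable(mask, DISK_IRQ)            # disk interrupts now ignored
```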

B. Software Interrupt
A software interrupt is caused either by an exceptional condition or by a
special instruction in the instruction set which causes an interrupt when
executed by the processor. For example, if the processor's arithmetic logic
unit runs a command to divide a number by zero, it causes a divide-by-zero
exception, making the computer abandon the calculation or display an error
message.
Furthermore, software interrupts can also be triggered unexpectedly, by the program
execution errors. These are referred to as traps or exceptions.

Difference between Trap and Interrupt in Operating System
What is the trap?
A trap is a software-produced interrupt that can be caused by various factors,
including an error in an instruction, such as division by zero or illegal
memory access. A trap may also be generated when a user program makes a
definite service request from the OS.
Traps are called synchronous events because they are caused by the execution of
the current instruction. System calls are another type of trap, in which the
program asks the operating system to provide a certain service, and the
operating system subsequently generates an interrupt to allow the program to
access the service.

What is the Interrupt?
Interrupts are signals emitted by software or hardware when a process or event
requires immediate attention. Because both hardware and software generate these
signals, they are referred to as hardware and software interrupts. A hardware
device produces an interrupt: interrupts can be caused by a USB device, a NIC
card, or a keyboard. Interrupts happen asynchronously, and they may happen at
any time.

Trap vs Interrupt
 A trap is a signal raised by a user program instructing the operating
system to perform some functionality immediately; an interrupt is a signal
to the CPU emitted by hardware that indicates an event requiring immediate
attention.
 A trap is a synchronous process; an interrupt is an asynchronous process.
 All traps are interrupts, but not all interrupts are traps.
 A trap may happen only from software; an interrupt may happen from both
hardware and software.
 A trap is generated by a user program instruction; an interrupt is
generated by hardware devices.
 A trap is also known as a software interrupt; a hardware-raised interrupt
is known as a hardware interrupt.
 A trap executes specific functionality in the operating system and gives
control to the trap handler; an interrupt forces the CPU to trigger a
specific interrupt handler routine.

DEADLOCK
A process in an operating system uses resources in the following ways:
 Requests a resource
 Use the resource
 Releases the resource

Deadlock is a situation where a set of processes are blocked because each
process is holding a resource and waiting for another resource acquired by
some other process.
For example, in the diagram below, Process A is holding Resource 1 and waiting
for Resource 2, which is held by Process B; and Process B is waiting for
Resource 1.
The condition for a Deadlock to occur is that these four conditions must hold
simultaneously:
1. Mutual Exclusion: Two or more resources are non-shareable.
2. Hold and Wait: A process is holding at least one resource and waiting for
additional resources.
3. Non-Preemption: A resource cannot be taken from a process unless the
process releases the resource.
4. Circular Wait: A set of processes are waiting for each other in circular form.

METHODS OF HANDLING DEADLOCK
There are three ways to handle deadlock:
1. Deadlock Prevention or Avoidance: The idea is to not allow the system to
slip into a deadlock state. Prevention is done by negating one of the
above-mentioned necessary conditions for deadlock.
Avoidance is forward-looking in nature: it assumes that all information
about the resources a process will need is known before the process
executes. Banker's algorithm can be used to avoid deadlock.
2. Deadlock Detection and Recovery: this method allows a deadlock to occur
and then preemption is done to handle it.
3. Ignore the Problem: In systems where deadlocks rarely occur, deadlock is
allowed, and the system is then rebooted to resolve it. This is the
approach that both Windows and UNIX take.

DEADLOCK DETECTION
If a system employs neither a deadlock-prevention nor a deadlock-avoidance
algorithm, then a deadlock situation may occur. In this case:
 Apply an algorithm to examine state of system to determine whether deadlock
has occurred or not.
 Apply an algorithm to resolve the deadlock.
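One standard detection algorithm checks for a cycle in the wait-for graph, where an edge P → Q means "P waits for a resource held by Q". The sketch below uses depth-first search; the two example graphs mirror the Process A / Process B example given earlier.

```python
# Deadlock detection as cycle detection in a wait-for graph (a dict mapping
# each process to the processes it is waiting on). Graphs are illustrative.
def has_deadlock(wait_for):
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)                       # p is on the current DFS path
        for q in wait_for.get(p, []):
            if q in on_stack or (q not in visited and dfs(q)):
                return True                   # back edge found: a cycle
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

cyclic = {"A": ["B"], "B": ["A"]}    # A waits for B, B waits for A: deadlock
acyclic = {"A": ["B"], "B": []}      # B can finish, then A proceeds
```

A cycle in this graph is exactly the circular-wait condition from the list of four necessary conditions above.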

Race Condition
A race condition is categorized as either critical or non-critical. A critical
race condition occurs when the order in which internal variables change
determines the final state of the machine. A non-critical race condition
occurs when the order in which internal variables change does not determine
the final state of the machine.

SEMAPHORES IN OPERATING SYSTEM
Semaphores are integer variables that are used to solve the critical section
problem by means of two atomic operations, wait and signal, used for process
synchronization.
The definitions of wait and signal are as follows –
Wait: The wait operation decrements the value of its argument S if it is
positive. If S is zero or negative, no operation is performed; the process
waits until S becomes positive.
Signal: The signal operation increments the value of its argument S.

Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary
semaphores.
Counting Semaphores: These are integer-valued semaphores with an unrestricted
value domain. They are used to coordinate resource access, where the semaphore
count is the number of available resources. If resources are added, the
semaphore count is automatically incremented, and if resources are removed,
the count is decremented.

Binary Semaphores: Binary semaphores are like counting semaphores, but their
value is restricted to 0 and 1. The wait operation succeeds only when the
semaphore is 1, and the signal operation succeeds when the semaphore is 0. It
is sometimes easier to implement binary semaphores than counting semaphores.
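As a concrete sketch of a counting semaphore, Python's standard `threading.Semaphore` can model a pool of two identical resources: `acquire()` plays the role of wait and `release()` the role of signal. The worker function and thread counts are assumed for the demonstration.

```python
import threading

# Counting-semaphore sketch: the count starts at 2 (two available resources),
# so at most two workers hold a resource at once. Worker logic is illustrative.
pool = threading.Semaphore(2)
in_use, peak = 0, 0
counter_lock = threading.Lock()      # protects the shared counters themselves

def worker():
    global in_use, peak
    pool.acquire()                   # wait: blocks whenever the count is 0
    with counter_lock:
        in_use += 1
        peak = max(peak, in_use)     # record max simultaneous holders
    with counter_lock:
        in_use -= 1
    pool.release()                   # signal: increments the count

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with six competing threads, `peak` never exceeds the initial count of 2, which is the coordination property the paragraph above describes.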

Advantages of Semaphores
Some of the advantages of semaphores are as follows −
 Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some other
methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled to
allow a process to access the critical section.
 Semaphores are implemented in the machine independent code of the
microkernel. So, they are machine independent.

Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
 Semaphores are complicated so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to loss
of modularity. This happens because the wait and signal operations prevent
the creation of a structured layout for the system.
 Semaphores may lead to a priority inversion where low priority processes may
access the critical section first and high priority processes later.