Os Unit-1 Notes

An Operating System (OS) serves as an interface between users and hardware, managing processes, resources, and providing a user-friendly environment. It encompasses various functions such as process management, memory management, and file management, and can be structured in different ways, including simple, layered, and micro-kernel structures. Different types of operating systems include batch, multiprogramming, multitasking, network, real-time, time-sharing, and distributed systems, each with its own advantages and disadvantages.

Operating System:

An Operating System can be defined as an interface between the user and the hardware. It provides an environment in which the user can perform tasks in a convenient and efficient way.

These notes are divided into parts based on OS functions such as Process Management, Process Synchronization, Deadlocks, and File Management.

Operating System Definition and Function

In a computer system (comprising hardware and software), the hardware can only understand machine code (in the form of 0s and 1s), which makes no sense to a naive user.

We need a system which can act as an intermediary and manage all the processes and
resources present in the system.

An Operating System can be defined as an interface between user and hardware. It is responsible for the execution of all the processes, resource allocation, CPU management, file management, and many other tasks.

The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.

Structure of a Computer System

A Computer System consists of:

o Users (people who are using the computer)
o Application Programs (compilers, databases, games, video players, browsers, etc.)
o System Programs (shells, editors, compilers, etc.)
o Operating System (a special program which acts as an interface between user and hardware)
o Hardware (CPU, disks, memory, etc.)

What does an Operating system do?

1. Process Management
2. Process Synchronization
3. Memory Management
4. CPU Scheduling
5. File Management
6. Security

The Operating System is a program with the following features −

 An operating system is a program that acts as an interface between the software and the computer hardware.
 It is an integrated set of specialized programs used to manage the overall resources and operations of the computer.
 It is specialized software that controls and monitors the execution of all other programs that reside in the computer, including application programs and other system software.
Objectives of Operating System

The objectives of the operating system are −

 To make the computer system convenient to use in an efficient manner.

 To hide the details of the hardware resources from the users.

 To provide users a convenient interface to use the computer system.

 To act as an intermediary between the hardware and its users, making it easier for
the users to access and use other resources.

 To manage the resources of a computer system.

 To keep track of who is using which resource, granting resource requests, and
mediating conflicting requests from different programs and users.

 To provide efficient and fair sharing of resources among users and programs.

Characteristics of Operating System


Here is a list of some of the most prominent characteristic features of Operating Systems −

Memory Management − Keeps track of the primary memory, i.e. what part of it is in use by whom,
what part is not in use, etc. and allocates the memory when a process or program requests it.

Processor Management − Allocates the processor (CPU) to a process and deallocates the processor
when it is no longer required.

Device Management − Keeps track of all the devices. This is also called the I/O controller; it decides which process gets the device, when, and for how much time.

File Management − Keeps track of information, location, use, and status of files; allocates and de-allocates files and decides who gets access to them.

Security − Prevents unauthorized access to programs and data by means of passwords and other similar
techniques.

Job Accounting − Keeps track of time and resources used by various jobs and/or users.

Control Over System Performance − Records delays between a request for a service and the system's response.

Interaction with the Operators − Interaction may take place via the console of the computer in the form of instructions. The Operating System acknowledges the same, performs the corresponding action, and informs the operator via a display screen.

Error-detecting Aids − Production of dumps, traces, error messages, and other debugging and error-
detecting methods.

Coordination Between Other Software and Users − Coordination and assignment of compilers,
interpreters, assemblers, and other software to the various users of the computer systems.

What is an operating System Structure?

Because operating systems have complex structures, we want a clear structure that lets us adapt an operating system to our particular needs. It is easier to create an operating system in pieces, much as we break down larger problems into smaller, more manageable subproblems, with every piece a well-defined part of the system. Operating system structure can be thought of as the strategy for connecting and incorporating the various operating system components within the kernel. Operating systems are implemented using several types of structures, as discussed below:

SIMPLE STRUCTURE

It is the most straightforward operating system structure, but it lacks definition and is only appropriate for usage with small and restricted systems. Since the interfaces and levels of functionality in this structure are not well separated, application programs are able to access basic I/O routines, which may result in unauthorized access to I/O procedures.

This organizational structure is used by the MS-DOS operating system:

o There are four layers that make up the MS-DOS operating system, and each
has its own set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers,
application programs, and system programs.
o The MS-DOS operating system benefits from layering because each level can
be defined independently and, when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update.
Because of this, simple structures can be used to build constrained systems
that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O
procedures are visible to end users, giving them the potential for unwanted
access.

The following figure illustrates layering in simple structure:


Advantages of Simple Structure:

o Because there are only a few interfaces and levels, it is simple to develop.
o Because there are fewer layers between the hardware and the applications, it
offers superior performance.

Disadvantages of Simple Structure:

o The entire operating system breaks if just one user program malfunctions.
o Since the layers are interconnected, and in communication with one another,
there is no abstraction or data hiding.
o The operating system's internal operations are exposed to every layer, which can result in data tampering and system failure.

LAYERED STRUCTURE

The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) is the hardware, and layer N (the highest layer) is the user interface. These layers are organized hierarchically, with the top-level layers making use of the capabilities of the lower-level ones.

The functionalities of each layer are separated in this method, and abstraction is also an option. Because layered structures are hierarchical, debugging is simpler: all lower-level layers are debugged before the upper layer is examined. As a result, only the present layer has to be reviewed, since all the lower layers have already been examined.

The image below shows how OS is organized into layers:


Advantages of Layered Structure:

o Work duties are separated since each layer has its own functionality, and there
is some amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by
the top layers.

Disadvantages of Layered Structure:

o Performance is compromised in layered structures due to layering.
o Construction of the layers requires careful design, because upper layers only make use of lower layers' capabilities.

MICRO-KERNEL STRUCTURE

The operating system is created using a micro-kernel framework that strips the kernel of any unnecessary parts. These optional kernel components are implemented as system and user programs. Systems developed this way are called Micro-Kernels.

Each Micro-Kernel is created separately and is kept apart from the others. As a result,
the system is now more trustworthy and secure. If one Micro-Kernel malfunctions, the
remaining operating system is unaffected and continues to function normally.

The image below shows Micro-Kernel Operating System Structure:


Advantages of Micro-Kernel Structure:

o It enables portability of the operating system across platforms.
o Due to the isolation of each Micro-Kernel, it is reliable and secure.
o The reduced size of Micro-Kernels allows for successful testing.
o The remaining operating system remains unaffected and keeps running properly even if a component or Micro-Kernel fails.

Disadvantages of Micro-Kernel Structure:

o The performance of the system is decreased by increased inter-module communication.
o The construction of such a system is complicated.

Types of Operating Systems (OS):

An operating system is a well-organized collection of programs that manages the computer hardware. It is a type of system software that is responsible for the smooth functioning of the computer system.

Batch Operating System

In the 1970s, batch processing was very popular. In this technique, similar types of jobs were batched together and executed in turn. Users shared a single computer, called a mainframe.

In a batch operating system, access is given to more than one person; users submit their respective jobs to the system for execution.

The system puts all of the jobs in a queue on a first-come-first-serve basis and then executes them one by one. The users collect their respective output when all the jobs have been executed.

The purpose of this operating system was mainly to transfer control from one job to another as soon as a job completed. It contained a small set of programs called the resident monitor that always resided in one part of main memory. The remaining part was used for servicing jobs.

Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates
CPU time between two jobs.

Disadvantages of Batch OS

1. Starvation

Batch processing suffers from starvation. For example:

There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of
J1 is very high, then the other four jobs will never be executed, or they will have to
wait for a very long time. Hence the other processes get starved.
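The starvation effect above can be sketched with a tiny first-come-first-serve simulation (the job names are from the example; the burst times are hypothetical values chosen to make J1 dominate):

```python
# Hypothetical burst times (in time units); J1 is very long.
jobs = {"J1": 1000, "J2": 5, "J3": 5, "J4": 5, "J5": 5}

def fcfs_waiting_times(jobs):
    """Return each job's waiting time under first-come-first-serve."""
    waiting, clock = {}, 0
    for name, burst in jobs.items():
        waiting[name] = clock   # a job waits until all earlier jobs finish
        clock += burst
    return waiting

print(fcfs_waiting_times(jobs))
# J2..J5 each wait at least 1000 time units because J1 runs to completion first
```

Every job behind J1 inherits J1's entire burst time as waiting time, which is exactly the starvation the example describes.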

2. Not Interactive

Batch Processing is not suitable for jobs that are dependent on the user's input. If a job
requires the input of two numbers from the console, then it will never get it in the
batch processing scenario since the user is not present at the time of execution.

Multiprogramming Operating System

Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each process needs two types of system time: CPU time and I/O time.

In a multiprogramming environment, when a process does its I/O, The CPU can start
the execution of other processes. Therefore, multiprogramming improves the
efficiency of the system.
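This improvement can be quantified with the standard textbook approximation, which assumes each of the n resident processes independently spends a fraction p of its time waiting on I/O (the 0.8 figure below is an illustrative assumption, not from these notes):

```python
def cpu_utilization(io_wait_fraction, degree):
    """Approximate CPU utilization as 1 - p**n: the CPU is idle only
    when all n resident processes are waiting on I/O at once."""
    return 1 - io_wait_fraction ** degree

# With 80% I/O wait, one process keeps the CPU only 20% busy,
# but five resident processes push utilization to about 67%.
print(cpu_utilization(0.8, 1))
print(cpu_utilization(0.8, 5))
```

The independence assumption is a simplification, but it shows why keeping several jobs in memory raises CPU utilization.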
Advantages of Multiprogramming OS

o Throughput increases, as the CPU almost always has a program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS

o Multiprogramming systems provide an environment in which various system resources are used efficiently, but they do not provide any user interaction with the computer system.

Multiprocessing Operating System

In multiprocessing, parallel computing is achieved. More than one processor present in the system can execute more than one process simultaneously, which increases the throughput of the system.

Advantages of Multiprocessing operating system:

o Increased reliability: Processing tasks can be distributed among several processors. This increases reliability, as if one processor fails, the task can be given to another processor for completion.
o Increased throughput: As more processors are added, more work can be done in less time.

Disadvantages of Multiprocessing operating System

o A multiprocessing operating system is more complex and sophisticated, as it takes care of multiple CPUs simultaneously.

Multitasking Operating System


The multitasking operating system is a logical extension of a multiprogramming system that enables the execution of multiple programs simultaneously. It allows a user to perform more than one computer task at the same time.

Advantages of Multitasking operating system

o This operating system is better suited to supporting multiple users simultaneously.
o Multitasking operating systems have well-defined memory management.

Disadvantages of Multitasking operating system

o The processor stays busy switching among tasks to complete them all in a multitasking environment, so the CPU generates more heat.
Network Operating System

An operating system which includes software and associated protocols to communicate with other computers via a network conveniently and cost-effectively is called a Network Operating System.

Advantages of Network Operating System


o In this type of operating system, network traffic reduces due to the division
between clients and the server.
o This type of system is less expensive to set up and maintain.

Disadvantages of Network Operating System

o In this type of operating system, the failure of any node in a system affects the
whole system.
o Security and performance are important issues. So trained network
administrators are required for network administration.

Real Time Operating System

In real-time systems, each job carries a certain deadline within which it is supposed to be completed; otherwise there will be a huge loss, or even if the result is produced, it will be completely useless.

Real-time systems are used in cases such as military applications: if you want to launch a missile, the missile must be launched with a certain precision.
Advantages of Real-time operating system:

o It is easy to lay out, develop, and execute real-time applications under a real-time operating system.
o A real-time operating system allows maximum utilization of devices and systems.

Disadvantages of Real-time operating system:

o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

Time-Sharing Operating System

In a time-sharing operating system, computer resources are allocated in a time-dependent fashion to several programs simultaneously. Thus it helps to provide a large number of users direct access to the main computer. It is a logical extension of multiprogramming. In time-sharing, the CPU is switched among multiple programs given by different users on a scheduled basis.
A time-sharing operating system allows many users to be served simultaneously, so
sophisticated CPU scheduling schemes and Input/output management are required.

Time-sharing operating systems are very difficult and expensive to build.
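The scheduled CPU switching described above can be sketched as a round-robin simulation (program names, burst times, and the time quantum are hypothetical):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate time-sharing: each program gets the CPU for at most
    `quantum` time units before being switched out; return the order
    in which programs finish."""
    queue = deque(bursts.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            order.append(name)                          # finishes in this slice
        else:
            queue.append((name, remaining - quantum))   # preempted: back of queue
    return order

print(round_robin({"P1": 7, "P2": 3, "P3": 5}, quantum=2))
# → ['P2', 'P3', 'P1']: short programs finish early even though P1 arrived first
```

Contrast this with the batch FCFS behaviour: here no program can monopolize the CPU for longer than one quantum at a time.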

Advantages of Time Sharing Operating System

o The time-sharing operating system provides effective utilization and sharing of resources.
o This system reduces CPU idle time and response time.

Disadvantages of Time Sharing Operating System

o It requires high data transmission rates in comparison to other methods.
o The security and integrity of user programs and data loaded in memory must be maintained, as many users access the system at the same time.

Distributed Operating System

The distributed operating system is not installed on a single machine; it is divided into parts, and these parts are loaded on different machines. A part of the distributed operating system is installed on each machine to make their communication possible. Distributed operating systems are much more complex, large, and sophisticated than network operating systems because they also have to take care of varying networking protocols.
Advantages of Distributed Operating System

o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.

Disadvantages of Distributed Operating System

o Protocol overhead can dominate computation cost.

Distributed System:

A distributed system is a model where distributed applications run on multiple computers linked by a communications network. It is sometimes also called a loosely coupled system, because each processor has its own local memory and processing units. LOCUS and MICROS are examples of distributed operating systems.

Parallel Systems:
Parallel Systems are designed to speed up the execution of programs by dividing the
programs into multiple fragments and processing these fragments at the same time.
Flynn has classified computer systems into four types based on parallelism in the
instructions and in the data streams.
1. Single Instruction stream, single data stream
2. Single Instruction stream, multiple data stream
3. Multiple Instruction stream, single data stream

4. Multiple Instruction stream, multiple data stream


Advantages of Distributed Systems:
 Scalability: Distributed systems can be easily scaled by adding more
computers to the network.
 Fault Tolerance: Distributed systems can recover from failures by
redistributing work to other computers in the network.
 Geographical Distribution: Distributed systems can be geographically
distributed, allowing for better performance and resilience.
Disadvantages of Distributed Systems:
 Complexity: Distributed systems are more complex to design and maintain
compared to single computer systems.
 Communication Overhead: Communication between computers in a
distributed system adds overhead and can impact performance.
 Security: Distributed systems are more vulnerable to security threats, as the
communication between computers can be intercepted and compromised.
Advantages of Parallel Systems:
 High Performance: Parallel systems can execute computationally intensive
tasks more quickly compared to single processor systems.
 Cost Effective: Parallel systems can be more cost-effective compared to
distributed systems, as they do not require additional hardware for
communication.
Disadvantages of Parallel Systems:
 Limited Scalability: Parallel systems have limited scalability as the number
of processors or cores in a single computer is finite.
 Complexity: Parallel systems are more complex to program and debug
compared to single processor systems.
 Synchronization Overhead: Synchronization between processors in a parallel
system can add overhead and impact performance.

Difference Between Distributed System and Parallel System:

1. Parallel systems process data simultaneously to increase the computational speed of a computer system. In distributed systems, applications run on multiple computers linked by communication lines.

2. Parallel systems work with the simultaneous use of multiple computer resources, which can include a single computer with multiple processors. A distributed system consists of a number of computers that are connected and managed so that they share the job processing load among the various computers distributed over the network.

3. In parallel systems, tasks are performed with a more speedy process. In distributed systems, tasks are performed with a less speedy process.

4. Parallel systems are multiprocessor systems. In distributed systems, each processor has its own memory.

5. A parallel system is also known as a tightly coupled system. Distributed systems are also known as loosely coupled systems.

6. Parallel systems have close communication among their processors. Distributed systems communicate with one another through various communication lines, such as high-speed buses or telephone lines.

7. Parallel systems share a memory, clock, and peripheral devices. Distributed systems do not share memory or clock, in contrast to parallel systems.

8. In parallel systems, all processors share a single master clock for synchronization. In distributed computing there is no global clock; various synchronization algorithms are used instead.

9. Examples of parallel systems: high-performance computing clusters, Beowulf clusters. Examples of distributed systems: Hadoop, MapReduce, Apache Cassandra.
Components of Operating System

An operating system is a large and complex system that can only be created by partitioning it into small parts. These pieces should be well-defined parts of the system, with carefully defined inputs, outputs, and functions.

The components of an operating system play a key role to make a variety of computer
system parts work together. There are the following components of an operating
system, such as:

1. Process Management
2. File Management
3. Network Management
4. Main Memory Management
5. Secondary Storage Management
6. I/O Device Management
7. Security Management
8. Command Interpreter System

Operating system components help you get the correct computing by detecting CPU
and memory hardware errors.

Process Management
The process management component is a procedure for managing the many processes running simultaneously on the operating system. Every running software application program has one or more processes associated with it.

For example, when you use a browser like Chrome, there is a process running for that browser program.

Process management keeps processes running efficiently. It also manages the memory allocated to processes and shuts them down when needed.

The execution of a process is sequential: at any time, at most one instruction is executed on behalf of the process.

Functions of process management

Here are the following functions of process management in the operating system, such
as:

o Process creation and deletion.
o Process suspension and resumption.
o Process synchronization.
o Process communication.
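Process creation and deletion can be illustrated with a minimal sketch using Python's standard subprocess module (the child's one-line program is just an illustration):

```python
import subprocess
import sys

# Process creation: the OS spawns a child process running a small program.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE, text=True,
)

# Wait for the child to terminate and collect its output; when the child
# exits, the OS reclaims its resources (process deletion).
out, _ = child.communicate()
print(out.strip())
print("exit status:", child.returncode)
```

Under the hood the OS performs the creation (fork/exec on UNIX-like systems), scheduling, and cleanup that the functions listed above describe.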

File Management

A file is a set of related information defined by its creator. It commonly represents programs (both source and object forms) and data. Data files can be alphabetic, numeric, or alphanumeric.
Function of file management

The operating system has the following important activities in connection with file
management:

o File and directory creation and deletion.
o Support for manipulating files and directories.
o Mapping files onto secondary storage.
o Backing up files on stable storage media.
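The creation, manipulation, and deletion activities above map directly onto standard library calls; here is a minimal sketch using Python's pathlib (the names `reports` and `notes.txt` are made up for illustration):

```python
import tempfile
from pathlib import Path

# Work inside a scratch directory so the example cleans up after itself.
with tempfile.TemporaryDirectory() as scratch:
    base = Path(scratch)

    sub = base / "reports"            # directory creation
    sub.mkdir()

    f = sub / "notes.txt"             # file creation and manipulation
    f.write_text("hello, file system")
    print(f.read_text())

    f.unlink()                        # file deletion
    sub.rmdir()                       # directory deletion
    print("cleaned up:", not sub.exists())
```

Each pathlib call ultimately becomes a system call that the OS's file management component services.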

Network Management

Network management is the process of administering and managing computer networks. It includes performance management, provisioning of networks, fault analysis, and maintaining the quality of service.

A distributed system is a collection of computers or processors that do not share memory or a clock. In this type of system, all the processors have their own local memory, and the processors communicate with each other using different communication cables, such as fibre optics or telephone lines.

The computers in the network are connected through a communication network, which can be configured in many different ways. The network can be fully or partially connected; network management helps users design routing and connection strategies that overcome connection and security issues.

Functions of Network management

Network management provides the following functions, such as:

o Distributed systems let you combine computing resources that vary in size and function. They may involve minicomputers, microprocessors, and many general-purpose computer systems.
o A distributed system also offers the user access to the various resources the network shares.
o Access to shared resources helps speed up computation and offers data availability and reliability.

Main Memory management

Main memory is a large array of storage or bytes, which has an address. The memory
management process is conducted by using a sequence of reads or writes of specific
memory addresses.

It should be mapped to absolute addresses and loaded inside the memory to execute a
program. The selection of a memory management method depends on several factors.

However, it is mainly based on the hardware design of the system. Each algorithm
requires corresponding hardware support. Main memory offers fast storage that can
be accessed directly by the CPU. It is costly and hence has a lower storage capacity.
However, for a program to be executed, it must be in the main memory.
Functions of Memory management

An operating system performs the following functions for memory management:

o It helps you to keep track of primary memory.
o It determines which parts of memory are in use and by whom, and which parts are not in use.
o In a multiprogramming system, the OS decides which process will get memory and how much.
o It allocates memory when a process requests it.
o It also de-allocates memory when a process no longer requires it or has been terminated.
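As a rough illustration of the allocation step, here is a sketch of a first-fit allocator over a free list of memory holes. This is a simplified teaching model, not how any particular OS implements it; the hole addresses and request size are hypothetical:

```python
def first_fit(free_list, request):
    """Allocate `request` bytes from the first hole large enough.
    free_list: list of (start, size) holes. Returns (start, new_free_list),
    or (None, free_list) when no hole fits."""
    for i, (start, size) in enumerate(free_list):
        if size >= request:
            new_list = list(free_list)
            remaining = size - request
            if remaining:
                new_list[i] = (start + request, remaining)  # shrink the hole
            else:
                del new_list[i]                             # hole fully used
            return start, new_list
    return None, free_list                                  # request denied

holes = [(0, 100), (200, 50), (400, 300)]
addr, holes = first_fit(holes, 120)   # skips the two holes that are too small
print(addr, holes)
```

De-allocation would do the reverse: return the block to the free list and merge it with adjacent holes.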

Secondary-Storage Management

The most important task of a computer system is to execute programs. These programs access data from main memory during execution. However, main memory is too small to store all data and programs permanently, so the computer system offers secondary storage to back up main memory.

Today, modern computers use hard drives/SSDs as the primary storage for both programs and data. However, secondary storage management also works with storage devices such as USB flash drives and CD/DVD drives. Programs like assemblers and compilers are stored on the disk until they are loaded into memory; the disk is then used as both the source and destination of processing.

Functions of Secondary storage management

Here are some major functions of secondary storage management in the operating
system:

o Storage allocation
o Free space management
o Disk scheduling

I/O Device Management

One important role of an operating system is to hide the variations of specific hardware devices from the user.

Functions of I/O management

The I/O management system offers the following functions, such as:

o It offers a buffer caching system.
o It provides general device driver code.
o It provides drivers for particular hardware devices.
o Only the I/O subsystem needs to know the individualities of a specific device.
Security Management

The various processes in an operating system need to be secured from each other's activities. Therefore, various mechanisms ensure that processes wanting to operate on files, memory, the CPU, and other hardware resources have proper authorization from the operating system.

Security refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by the computer system, together with some means of enforcement.

For example, memory-addressing hardware helps to confirm that a process can execute only within its own address space. The timer ensures that no process retains control of the CPU without eventually relinquishing it. Finally, no process is allowed to do its own device I/O, which helps to preserve the integrity of the various peripheral devices.

Security can improve reliability by detecting latent errors at the interfaces between component subsystems. Early detection of interface errors can prevent the contamination of a healthy subsystem by a malfunctioning one. An unprotected resource cannot defend against misuse by an unauthorized or incompetent user.

Command Interpreter System

One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system.

Many commands are given to the operating system by control statements. A program that reads and interprets control statements is automatically executed when a new job is started in a batch system or a user logs in to a time-shared system. This program is variously called:

o The control-card interpreter,
o The command-line interpreter,
o The shell (in UNIX), and so on.

Its function is quite simple: get the next command statement and execute it. Command statements deal with process management, I/O handling, secondary storage management, main memory management, file system access, protection, and networking.
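The get-the-next-command-and-execute-it loop can be sketched in a few lines. This toy interpreter assumes a POSIX `echo` command is available and reads from a fixed list instead of an interactive console:

```python
import shlex
import subprocess

def tiny_shell(commands):
    """A minimal command interpreter: get the next command statement,
    execute it, repeat until 'exit'. `commands` stands in for user input."""
    results = []
    for line in commands:
        argv = shlex.split(line)          # tokenize the line like a shell would
        if not argv or argv[0] == "exit":
            break
        proc = subprocess.run(argv, capture_output=True, text=True)
        results.append(proc.stdout.strip())
    return results

print(tiny_shell(["echo hello", "echo operating system", "exit"]))
```

A real shell adds much more (variables, pipes, job control), but the core loop is exactly this fetch-and-execute cycle.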

Operating system services:

An Operating System provides services to both the users and to the programs.

 It provides programs an environment to execute.
 It provides users the services to execute programs in a convenient manner.

Following are a few common services provided by an operating system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Program execution

Operating systems handle many kinds of activities, from user programs to system programs like printer spoolers, name servers, file servers, etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to
manipulate, registers, OS resources in use). Following are the major activities of an
operating system with respect to program management −

 Loads a program into memory.
 Executes the program.
 Handles the program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.

 I/O operation means read or write operation with any file or any specific I/O
device.
 Operating system provides the access to the required I/O device when required.

File system manipulation

A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose. Examples of storage media
include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of
these media has its own properties like speed, capacity, data transfer rate and data
access methods.
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Following are the major
activities of an operating system with respect to file management −

 A program needs to read a file or write a file.
 The operating system grants the program permission to operate on the file.
 Permission varies: read-only, read-write, denied, and so on.
 The operating system provides an interface to the user to create/delete files.
 The operating system provides an interface to the user to create/delete directories.
 The operating system provides an interface to create a backup of the file system.

Communication

In the case of distributed systems, which are collections of processors that do not
share memory, peripheral devices, or a clock, the operating system manages
communication between all the processes. Multiple processes communicate with one
another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −

 Two processes often require data to be transferred between them.
 The two processes can be on one computer or on different computers connected
through a computer network.
 Communication may be implemented by two methods: Shared Memory or Message
Passing.

Error handling

Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices,
or in the memory hardware. Following are the major activities of an operating system
with respect to error handling −

 The OS constantly checks for possible errors.


 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management

In the case of a multi-user or multi-tasking environment, resources such as main memory,
CPU cycles, and file storage must be allocated to each user or job. Following are the
major activities of an operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of the CPU.

Protection

Considering a computer system having multiple users and concurrent execution of
multiple processes, the various processes must be protected from each other's
activities.
Protection refers to a mechanism or a way to control the access of programs,
processes, or users to the resources defined by a computer system. Following are the
major activities of an operating system with respect to protection −

 The OS ensures that all access to system resources is controlled.
 The OS ensures that external I/O devices are protected from invalid access
attempts.
 The OS provides authentication features for each user by means of passwords.

System calls:

A system call is the method by which a computer program requests a service from the
kernel of the operating system on which it is running; in other words, it is a
program's way of interacting with the operating system.

The Application Programming Interface (API) connects the operating system's functions
to user programs. It acts as a link between the operating system and a process,
allowing user-level programs to request operating system services. The kernel can
only be accessed using system calls, and system calls are required by any program
that uses its resources.

How System Calls Work

Applications run in an area of memory known as user space. A system call connects to
the operating system's kernel, which executes in kernel space. When an application
makes a system call, it must first obtain permission from the kernel. It achieves
this using an interrupt request, which pauses the current process and transfers
control to the kernel.

If the request is permitted, the kernel performs the requested action, such as
creating or deleting a file. When the operation is finished, the kernel copies any
results from kernel space to user space and returns them to the application, which
then resumes execution.

A simple system call, such as retrieving the system date and time, may take only a
few nanoseconds. A more complicated system call, such as connecting to a network
device, may take a few seconds. Most operating systems launch a distinct kernel
thread for each system call to avoid bottlenecks. Modern operating systems are
multi-threaded, which means they can handle multiple system calls at the same time.

Types of System Calls

There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Process Control

Process control system calls are used to direct processes. Examples include
creating, loading, executing, aborting, and terminating processes.

File Management

File management system calls are used to handle files. Examples include creating,
deleting, opening, closing, reading, and writing files.

Device Management

Device management system calls are used to deal with devices. Examples include
requesting and releasing a device, reading from and writing to a device, and
getting or setting device attributes.

Information Maintenance

Information maintenance system calls are used to maintain information. Examples
include getting or setting system data and getting or setting the time or date.
Communication

Communication is a system call that is used for communication. There are some
examples of communication, including create, delete communication connections,
send, receive messages, etc.

Examples of Windows and Unix system calls

There are various examples of Windows and Unix system calls. These are listed
below in the table:

Process                  Windows                          Unix

Process Control          CreateProcess()                  fork()
                         ExitProcess()                    exit()
                         WaitForSingleObject()            wait()

File Manipulation        CreateFile()                     open()
                         ReadFile()                       read()
                         WriteFile()                      write()
                         CloseHandle()                    close()

Device Management        SetConsoleMode()                 ioctl()
                         ReadConsole()                    read()
                         WriteConsole()                   write()

Information Maintenance  GetCurrentProcessID()            getpid()
                         SetTimer()                       alarm()
                         Sleep()                          sleep()

Communication            CreatePipe()                     pipe()
                         CreateFileMapping()              shmget()
                         MapViewOfFile()                  mmap()

Protection               SetFileSecurity()                chmod()
                         InitializeSecurityDescriptor()   umask()
                         SetSecurityDescriptorGroup()     chown()

Here, you will learn about some methods briefly:

open()

The open() system call allows a program to access a file on a file system. It
allocates resources to the file and provides a handle (a file descriptor) that the
process may refer to. A file may be opened by many processes at once or restricted
to a single process, depending on the file system and its structure.

read()
It is used to obtain data from a file on the file system. It accepts three arguments in
general:

o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.

The file to be read is identified by its file descriptor, which is obtained by
opening the file with open() before reading.

wait()

In some systems, a process may have to wait for another process to complete its
execution before proceeding. When a parent process creates a child process, the
parent may suspend its own execution until the child finishes. The wait() system
call is used to suspend the parent process; once the child process has completed
its execution, control is returned to the parent.

write()

It is used to write data from a user buffer to a device like a file. This system call is one
way for a program to generate data. It takes three arguments in general:

o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.

fork()

Processes generate clones of themselves using the fork() system call. It is one of
the most common ways to create processes in operating systems. After the call, the
parent and the child execute concurrently; the parent typically calls wait() to
suspend itself until the child process completes, at which point control returns to
the parent.

close()

It is used to end file system access. When this system call is invoked, it signifies
that the program no longer requires the file; the buffers are flushed, the file
metadata is updated, and the file's resources are de-allocated.

exec()

This system call is invoked when an executable file replaces the earlier executable
file in an already executing process. As a new process is not built, the old process
identification stays, but the new program replaces the old one's code, data, stack,
and heap.
exit()

The exit() system call is used to end program execution. This call indicates that
the thread's execution is complete, which is especially useful in multi-threaded
environments. The operating system reclaims the resources used by the process after
the exit() call.
Process Management
Process:

A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in
the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data ─ described below −

S.N. Component & Description

1
Stack
The process Stack contains the temporary data such as method/function parameters,
return address and local variables.

2
Heap
This is dynamically allocated memory to a process during its run time.

3
Text
This section contains the compiled program code. The current activity is represented
by the value of the program counter and the contents of the processor's registers.

4
Data
This section contains the global and static variables.

Program

A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming
language. For example, here is a simple program written in the C programming
language −
#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when
executed by a computer. When we compare a program with a process, we can
conclude that a process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as
an algorithm. A collection of computer programs, libraries and related data is
referred to as software.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

S.N. State & Description

1
Start
This is the initial state when a process is first started/created.

2
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have
the processor allocated to them by the operating system so that they can run. A process
may come into this state after the Start state, or while running, when it is interrupted
by the scheduler so that the CPU can be assigned to some other process.

3
Running
Once the process has been assigned to a processor by the OS scheduler, the process state
is set to running and the processor executes its instructions.
4
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.

5
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
Operations on Processes
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination.
These are given in detail as follows −

Process Creation

Processes need to be created in the system for different operations. This can be done
by the following events −

 User request for process creation
 System initialization
 Execution of a process creation system call by a running process
 Batch job initialization

A process may be created by another process using fork(). The creating process is
called the parent process and the created process is the child process. A child process
can have only one parent but a parent process may have many children. Both the
parent and child processes have the same memory image, open files, and
environment strings. However, they have distinct address spaces.

Process Preemption

Preemption uses an interrupt mechanism to suspend the currently executing process;
the short-term scheduler then determines the next process to execute.
Preemption makes sure that all processes get some CPU time for execution.
Process Blocking

The process is blocked if it is waiting for some event to occur, such as the
completion of an I/O operation, which is carried out by the device and does not
require the processor. After the event is complete, the process goes back to the
ready state.
Process Termination

After the process has completed the execution of its last instruction, it is terminated.
The resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer
required. The child process sends its status information to the parent process before
it terminates. Also, when a parent process is terminated, its child processes are
terminated as well, since the child processes cannot run if the parent process is
terminated.
Process Scheduling

Definition

The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here a resource can't be taken from a process until the process
completes execution. The switching of resources occurs only when the running process
terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. The process may switch from the running state to the ready state, or from the
waiting state to the ready state. This switching occurs because the CPU may give
priority to other processes and replace the currently running process with a
higher-priority one.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue. When the state of
a process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to the unavailability of an
I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready and run queues; the run queue can have only one entry per processor core on
the system.

Two-State Process Model

Two-state process model refers to running and non-running states which are described
below −

S.N. State & Description

1
Running
When a new process is created, it enters the system in the running state.

2
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows: when a process is
interrupted, it is transferred to the waiting queue; if the process has completed or
aborted, it is discarded. In either case, the dispatcher then selects a process from
the queue to execute.

Schedulers

Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to
decide which process to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs
are admitted to the system for processing. It selects processes from the queue and
loads them into memory, where they await CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such
as I/O bound and processor bound. It also controls the degree of multiprogramming. If
the degree of multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems have no long-term scheduler. The long-term scheduler is used when
a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It carries out the change
of a process from the ready state to the running state: the CPU scheduler selects one
process among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and
thereby reduces the degree of multiprogramming. The medium-term scheduler is in
charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove
the process from memory and make space for other processes, the suspended process
is moved to secondary storage. This is called swapping, and the process is said to
be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Scheduler

S.N.  Long-Term Scheduler                    Short-Term Scheduler                   Medium-Term Scheduler

1     It is a job scheduler.                 It is a CPU scheduler.                 It is a process-swapping scheduler.

2     Speed is lesser than the short-term    Speed is the fastest among the three.  Speed is in between the short-term
      scheduler.                                                                    and long-term schedulers.

3     It controls the degree of              It provides lesser control over the    It reduces the degree of
      multiprogramming.                      degree of multiprogramming.            multiprogramming.

4     It is almost absent or minimal in      It is also minimal in time-sharing     It is a part of time-sharing systems.
      time-sharing systems.                  systems.

5     It selects processes from the pool     It selects those processes which are   It can re-introduce a process into
      and loads them into memory for         ready to execute.                      memory, and its execution can be
      execution.                                                                    continued.

Thread:

A thread is a flow of execution through the process code, with its own program
counter that keeps track of which instruction to execute next, system registers which
hold its current working variables, and a stack which contains the execution history.
A thread shares some information with its peer threads, such as the code segment,
the data segment and open files. When one thread alters a shared memory item, all
its peer threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. They represent a software approach to
improving operating system performance, since a thread involves less overhead than
an equivalent classical process.

Difference between Process and Thread

S.N.  Process                                             Thread

1     Process is heavyweight, or resource intensive.      Thread is lightweight, taking fewer resources
                                                          than a process.

2     Process switching needs interaction with the        Thread switching does not need to interact with
      operating system.                                   the operating system.

3     In multiple processing environments, each process   All threads can share the same set of open files
      executes the same code but has its own memory       and child processes.
      and file resources.

4     If one process is blocked, then no other process    While one thread is blocked and waiting, a
      can execute until the first process is unblocked.   second thread in the same task can run.

5     Multiple processes without using threads use        Multiple threaded processes use fewer
      more resources.                                     resources.

6     In multiple processes, each process operates        One thread can read, write or change another
      independently of the others.                        thread's data.
Advantages of Thread

 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.

Types of Thread

Threads are implemented in the following two ways −

 User Level Threads − User-managed threads.
 Kernel Level Threads − Operating-system-managed threads acting on the kernel,
an operating system core.

User Level Threads

In this case, the kernel is not aware of the existence of threads. The thread
library contains code for creating and destroying threads, for passing messages and
data between threads, for scheduling thread execution, and for saving and restoring
thread contexts. The application starts with a single thread.

Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.

Disadvantages
 In a typical operating system, most system calls are blocking.
 Multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads

In this case, thread management is done by the kernel. There is no thread
management code in the application area. Kernel threads are supported directly by
the operating system. Any application can be programmed to be multithreaded. All of
the threads within an application are supported within a single process.
The kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the kernel is done on a thread
basis. The kernel performs thread creation, scheduling and management in kernel
space. Kernel threads are generally slower to create and manage than user threads.
Advantages
 Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of
the same process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.

Difference between User-Level & Kernel-Level Thread

S.N.  User-Level Threads                                  Kernel-Level Threads

1     User-level threads are faster to create and         Kernel-level threads are slower to create and
      manage.                                             manage.

2     Implementation is by a thread library at the        The operating system supports creation of
      user level.                                         kernel threads.

3     User-level thread is generic and can run on any     Kernel-level thread is specific to the
      operating system.                                   operating system.

4     Multi-threaded applications cannot take             Kernel routines themselves can be
      advantage of multiprocessing.                       multithreaded.
Types of Multi-Threading Models

Multithreading allows the execution of multiple parts of a program at the same time.
These parts are known as threads and are lightweight processes available within the
process. Therefore, multithreading leads to maximum utilization of the CPU by
multitasking.
The main models for multithreading are one to one model, many to one model and
many to many model. Details about these are given as follows −

One to One Model

The one to one model maps each of the user threads to a kernel thread. This means
that many threads can run in parallel on multiprocessors and other threads can run
when one thread makes a blocking system call.
A disadvantage of the one to one model is that the creation of a user thread requires a
corresponding kernel thread. Since a lot of kernel threads burden the system, there is
restriction on the number of threads in the system.

Many to One Model

The many to one model maps many of the user threads to a single kernel thread. This
model is quite efficient, as thread management is handled in user space.
A disadvantage of the many to one model is that a thread blocking system call blocks
the entire process. Also, multiple threads cannot run in parallel as only one thread can
access the kernel at a time.
Many to Many Model

The many to many model maps many of the user threads to an equal or smaller number
of kernel threads. The number of kernel threads depends on the application or machine.
The many to many model does not have the disadvantages of the one to one model or the
many to one model. There can be as many user threads as required and their
corresponding kernel threads can run in parallel on a multiprocessor.
