
Lecture-1

1.1 Operating System:

An operating system (OS) is a program that runs on the hardware and enables the user to communicate with it by sending input and receiving output. It allows users, applications, and system hardware to work with one another; in this sense the operating system acts as a hub. After being initially loaded into the computer by a boot program, the OS manages all of the other application programs in the computer. Application programs make use of the operating system by requesting services through a defined application program interface (API). In addition, users can interact directly with the operating system through a user interface, such as a command-line interface (CLI) or a graphical user interface (GUI). Without an operating system, a computer and its software would be useless. An operating system can therefore be defined as an interface between the user and the hardware. It is responsible for the execution of all processes, resource allocation, CPU management, file management, and many other tasks.
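As a quick illustration of requesting services through the API, the minimal sketch below (assuming a POSIX system such as Linux; the message text is arbitrary) asks the OS to perform output and to report the process identifier it assigned:

/* A minimal sketch (assuming a POSIX system such as Linux) of a program
 * requesting OS services through the defined API: the write() and getpid()
 * calls ask the kernel to perform I/O and report the process ID. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "Hello from user space\n";
    /* write() is a request to the OS: "send these bytes to file descriptor 1 (stdout)" */
    if (write(STDOUT_FILENO, msg, strlen(msg)) == -1) {
        perror("write");
        return 1;
    }
    /* getpid() asks the OS which process identifier it assigned to this program */
    printf("The OS reports my process ID as %d\n", (int)getpid());
    return 0;
}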

1.2 Why use an operating system?

An operating system brings powerful benefits to computer software and software development. Without an operating system, every application would need to include its own UI, as well as the comprehensive code needed to handle all low-level functionality of the underlying computer, such as disk storage, network interfaces and so on. Considering the vast array of underlying hardware available, this would vastly bloat the size of every application and make software development impractical. Instead, many common tasks, such as sending a network packet or displaying text on a standard output device, can be offloaded to system software that serves as an intermediary between the applications and the hardware. The system software provides a consistent and repeatable way for applications to interact with the hardware without the applications needing to know any details about that hardware. As long as each application accesses the same resources and services in the same way, that system software -- the operating system -- can service almost any number of applications. This vastly reduces the time and code required to develop and debug an application, while ensuring that users can control, configure and manage the system hardware through a common and well-understood interface.

Once installed, the operating system relies on a vast library of device drivers to tailor its services to the specific hardware environment. Every application may make a common call to a storage device, but the OS receives that call and uses the corresponding driver to translate it into the actions (commands) needed by the underlying hardware on that specific computer. The operating system thus provides a comprehensive platform that identifies, configures and manages a range of hardware, including processors; memory devices and memory management; chipsets; storage; networking; port communication, such as Video Graphics Array (VGA), High-Definition Multimedia Interface (HDMI) and Universal Serial Bus (USB); and subsystem interfaces, such as Peripheral Component Interconnect Express (PCIe).
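To illustrate the "common call, driver does the translation" point, the sketch below (assuming a Linux-like system where devices appear as file paths; /etc/hostname and /dev/urandom are example paths) reads from a regular file and from a device using exactly the same calls:

/* Illustrative sketch (assuming a Linux-like system): the same open()/read()
 * calls work on a regular file and on a device node such as /dev/urandom;
 * the OS and its drivers translate each read into hardware-specific steps. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void read_some(const char *path) {
    char buf[16];
    int fd = open(path, O_RDONLY);          /* same API regardless of what 'path' is */
    if (fd == -1) { perror(path); return; }
    ssize_t n = read(fd, buf, sizeof buf);  /* the driver does the device-specific work */
    printf("%s: read %zd bytes\n", path, n);
    close(fd);
}

int main(void) {
    read_some("/etc/hostname");   /* a regular file on disk (example path) */
    read_some("/dev/urandom");    /* a character device */
    return 0;
}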

1.3 Characteristics of Operating System:

Virtualization: Operating systems can provide Virtualization capabilities, allowing multiple operating systems or
instances of an operating system to run on a single physical machine. This can improve resource utilization and
provide isolation between different operating systems or applications.

Networking: Operating systems provide networking capabilities, allowing the computer system to connect to other systems and devices over a network. This includes features such as network protocols, network interfaces, and network security.

Scheduling: Operating systems provide scheduling algorithms that determine the order in which tasks are executed on the system. These algorithms prioritize tasks based on their resource requirements and other factors to optimize system performance.

Interprocess Communication: Operating systems provide mechanisms for applications to communicate with each other, allowing them to share data and coordinate their activities (a minimal sketch appears at the end of this section).

Performance Monitoring: Operating systems provide tools for monitoring system performance, including CPU
usage, memory usage, disk usage, and network activity. This can help identify performance bottlenecks and
optimize system performance.

Backup and Recovery: Operating systems provide backup and recovery mechanisms to protect data in the event of
system failure or data loss.

Debugging: Operating systems provide debugging tools that allow developers to identify and fix software bugs and
other issues in the system.
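As a concrete example of the interprocess communication mentioned above, the sketch below (assuming a POSIX system) lets a parent process send a message to a child process through an OS-managed pipe:

/* A minimal interprocess-communication sketch (assuming a POSIX system):
 * a parent process sends a message to its child through a pipe provided
 * and managed by the operating system. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                 /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();         /* create a second process */
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {             /* child: read what the parent sends */
        char buf[64] = {0};
        close(fds[1]);
        read(fds[0], buf, sizeof buf - 1);
        printf("child received: %s\n", buf);
        close(fds[0]);
        return 0;
    }
    /* parent: write a message and wait for the child */
    close(fds[0]);
    const char *msg = "hello via OS-managed pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}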

1.4 Operating System Components

The components of an operating system play a key role in making the various parts of a computer system work together. An operating system involves the following components:
• Hardware
• Application Program
• Operating System
• Users

Hardware: Computer hardware is a collective term used to describe any of the physical components of an analog or
digital computer. Computer hardware can be categorized as being either internal or external components. Generally,
internal hardware components are those necessary for the proper functioning of the computer, while external
hardware components are attached to the computer to add or enhance functionality.

Operating System: The operating system (OS) is the program that runs on the hardware and enables the user to communicate with it by sending input and receiving output. It allows users, applications, and system hardware to work with one another; in this sense the operating system acts as a hub. Without an operating system, a computer and its software would be useless.

User: Users perform computation with the help of application programs. A user is someone or something that wants or needs access to a system's resources; another word for user is client. On the Windows operating system, for example, a user is a person who has an account on the computer or device, and users can have different levels of access and permissions depending on their account type.

Application Program: Application programs are programs written to solve specific problems, to produce specific reports, or to update specific files. An application program performs useful work on behalf of the user of the computer (for example, a word processor or accounting program), as opposed to the system software, which manages the running of the computer itself, or the development software, which programmers use to create other programs. An application program is typically self-contained, storing data within files of a special (often proprietary) format that it can create, open for editing, and save to disk.

(Figure: Abstract view of the components of a computer system)

1.5 Function of Operating System:

Booting: The process of starting or restarting the computer is known as booting. If the computer is switched off completely and then turned on, it is called cold booting; warm booting is the process of using the operating system to restart the computer. During booting, the system:

● Uses diagnostic routines to test the system for equipment failure.
● Copies the BIOS (Basic Input Output System) programs from ROM chips to main memory (RAM).
● Loads the operating system into the computer's main memory (RAM).

Memory Management: The operating system manages the primary memory, or main memory. Main memory is a large array of bytes or words, where each byte or word has its own address. Main memory is fast storage and can be accessed directly by the CPU. For a program to be executed, it must first be loaded into main memory. An operating system performs the following activities for memory management: it keeps track of primary memory, i.e., which bytes of memory are used by which user program, which memory addresses have already been allocated, and which have not yet been used. In multiprogramming, the OS decides the order in which processes are granted memory access, and for how long. It allocates memory to a process when the process requests it and deallocates the memory when the process has terminated or is performing an I/O operation.
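The toy sketch below is not how any real OS implements memory management; it only illustrates the bookkeeping idea described above: which units of a pretend memory belong to which process, allocated on request and released on termination. The sizes, process IDs, and the first-fit policy are illustrative assumptions.

/* A toy sketch (not a real allocator) of the bookkeeping described above:
 * the OS records which regions of main memory are in use and by which
 * process, allocating on request and releasing memory on termination. */
#include <stdio.h>

#define MEM_SIZE 100            /* pretend memory of 100 units */
static int owner[MEM_SIZE];     /* 0 = free, otherwise the owning process ID */

/* First-fit allocation: find 'size' contiguous free units and mark them. */
static int mem_alloc(int pid, int size) {
    for (int start = 0; start + size <= MEM_SIZE; start++) {
        int run = 0;
        while (run < size && owner[start + run] == 0) run++;
        if (run == size) {
            for (int i = 0; i < size; i++) owner[start + i] = pid;
            return start;       /* "address" of the allocated block */
        }
    }
    return -1;                  /* no hole large enough */
}

/* Deallocation: release every unit owned by the terminating process. */
static void mem_free(int pid) {
    for (int i = 0; i < MEM_SIZE; i++)
        if (owner[i] == pid) owner[i] = 0;
}

int main(void) {
    int a = mem_alloc(1, 30);   /* process 1 asks for 30 units */
    int b = mem_alloc(2, 50);   /* process 2 asks for 50 units */
    printf("P1 at %d, P2 at %d\n", a, b);
    mem_free(1);                /* process 1 terminates */
    int c = mem_alloc(3, 20);   /* process 3 reuses the freed hole */
    printf("P3 at %d\n", c);
    return 0;
}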

Processor Management: In a multiprogramming environment, the OS decides the order in which processes have access to the processor, and how much processing time each process gets. This function of the OS is called process scheduling. An operating system performs the following activities for processor management: it keeps track of the status of processes (the program that performs this task is known as the traffic controller), allocates the CPU (the processor) to a process, and deallocates the processor when a process no longer requires it.

Device Management: An OS manages device communication via the respective drivers. It performs the following activities for device management: it keeps track of all devices connected to the system, designates a program responsible for every device (known as the input/output controller), decides which process gets access to a certain device and for how long, allocates devices effectively and efficiently, and deallocates devices when they are no longer required.

Process Management: A process is a program under execution. The operating system manages all processes so that each process gets the CPU for a specific time to execute, and so that the waiting time of each process is kept low. This management is also called process scheduling. For process scheduling, the operating system uses various algorithms: FCFS (first come, first served), SJF (shortest job first), LJF (longest job first), round robin, and priority scheduling.
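As a small worked example of the simplest of these, the sketch below simulates FCFS: each process waits for the total burst time of the processes ahead of it. The burst times are made-up example values.

/* A minimal FCFS (first come, first served) scheduling sketch:
 * processes run in arrival order, and each one waits for the total
 * burst time of the processes ahead of it. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* CPU time needed by P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int waiting = 0, total_waiting = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d: waits %2d, runs %2d\n", i + 1, waiting, burst[i]);
        total_waiting += waiting;
        waiting += burst[i];                  /* the next process waits for this one too */
    }
    printf("average waiting time = %.2f\n", (double)total_waiting / n);
    return 0;
}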

File Management: A file system is organized into directories for efficient and easy navigation and usage. These directories may contain other directories and files. An operating system carries out the following file management activities: it keeps track of where information is stored, user access settings, the status of every file, and more. These facilities are collectively known as the file system.
User Interface or Command Interpreter:
The user interacts with the computer system through the operating system. Hence the OS acts as an interface
between the user and the computer hardware. This user interface is offered through a set of commands or a graphical
user interface (GUI). Through this interface, the user makes interaction with the applications and the machine
hardware.
Security: The operating system uses password protection and similar techniques to protect user data. It also prevents unauthorized access to programs and user data.

Job Accounting: The operating system Keeps track of time and resources used by various tasks and users, this
information can be used to track resource usage for a particular user or group of users.

Error-detecting aids: The operating system constantly monitors the system to detect errors and prevent the computer system from malfunctioning.

Coordination between other software and users: Operating systems also coordinate and assign interpreters,
compilers, assemblers, and other software to the various users of the computer systems.

Performs Basic Computer Tasks: The operating system manages various peripheral devices such as the mouse, keyboard, and printer. Today most operating systems are plug-and-play: they automatically recognize and configure devices with no user interference.
Network Management: The OS provides network connectivity and manages communication between computers
on a network. It also manages network security by providing firewalls and other security measures.

Lecture-2

2.1 Goals of Operating System:

Convenience: An OS makes a computer more convenient (user friendly) to use.

Efficiency: An OS allows the computer system resources to be used in an efficient manner.


Portability: A portable operating system can be carried on a physical drive and is compatible with a wide range of
hardware systems. Most portable operating systems are small and come with a CD or USB drive. The process of
executing an OS from a CD/USB drive is known as using a live CD or USB.
Reliability: We consider an operating system to be reliable if it delivers the expected service without any
interruptions during the normal operating mode, where a normal operating mode is defined as the execution
environment free from external factors, such as a critical hardware failure.
Scalability: Scalability is the measure of a system's ability to increase or decrease in performance and cost in
response to changes in application and system processing demands.
Robustness: Robustness is the ability of a computer system to cope with errors during execution and cope with
erroneous input. Robustness can encompass many areas of computer science, such as robust programming, robust
machine learning, and Robust Security Network.
Ability to evolve: An OS should be constructed in such a way as to permit the effective development, testing, and
introduction of new system functions without interfering with service.

2.3 Classification of Operating System

Batch Operating System

Batch processing was very popular in the 1970s. Jobs were executed in batches. People used to have a single computer, known as a mainframe. Users of batch operating systems do not interact directly with the computer: each user prepares a job using an offline device such as a punch card and submits it to the computer operator. Jobs with similar requirements are grouped and executed as a group to speed up processing. Once the programmers have left their programs with the operator, the operator sorts the programs with similar needs into batches. The batch operating system groups jobs that perform similar functions; these job groups are treated as a batch and executed together. A computer system with this operating system performs the following batch processing activities:

● A job is a single unit that consists of a preset sequence of commands, data, and programs.
● Processing takes place in the order in which jobs are received, i.e., first come, first served.
● Jobs are stored in memory and executed without the need for manual intervention. When a job completes successfully, the operating system releases its memory.

Types of Batch Operating System

There are mainly two types of the batch operating system. These are as follows:

1. Simple Batched System


2. Multi-programmed batched system

Simple Batched System

The user did not directly interact with the computer system for job execution in a simple batch operating system. Instead, the user prepared a job that included the program, control information, and data about the nature of the job on control cards, and submitted it to the computer operator, usually in the form of a deck of punch cards. The program's output included the results and, in the event of a program error, register and memory dumps. The output appeared after some time that could take minutes, hours, or even days. The operating system's main role was to transfer control from one job to the next. Jobs with similar requirements were pooled together and processed through the processor to improve processing speed. The operator created batches of programs with similar needs, and the computer ran the batches one by one as they became available. This system typically reads a sequence of jobs, each with its own control cards and predefined job tasks.

Multi-programmed batched system

Spooling deals with many jobs that have already been read onto disk and are waiting to run. A disk containing a pool of jobs allows the operating system to choose which job to run next in order to maximize CPU utilization. Jobs that arrive directly on magnetic tape or cards cannot be run in a different order; they run sequentially, in a first-come, first-served manner. When jobs are stored on a direct-access device such as a disk, however, job scheduling becomes possible, and multiprogramming is an important feature of job scheduling. Spooling and offline operation have their limits for overlapping I/O with computation: a single user generally cannot keep all of the input/output devices and the CPU busy at all times. In a multiprogrammed batch system, the operating system keeps several jobs in memory at a time. The operating system selects one job and begins executing it; when that job must wait for some task to complete, such as mounting a tape or finishing an I/O operation, the CPU does not sit idle, because the operating system switches to another job. When the wait ends, or the current job completes, the CPU is returned to the earlier job.

Why are Batch Operating Systems used?

Batch operating systems place less load on the CPU and involve minimal user interaction, which is why they are still used today. Another benefit is that large, repetitive jobs can be completed without interacting with the computer each time to tell the system what to do next.

● Old batch operating systems were not interactive, meaning the user did not interact with the program while it executed.
● Modern batch operating systems now support some interaction. For example, you may schedule a job, and when the specified time arrives, the computer acknowledges that the time is up and runs it.

How does the Batch Operating System work?


● The operating system keeps a number of jobs in memory and performs them one at a time.
● Jobs are processed in a first-come, first-served manner.
● Each set of jobs is defined as a batch. When a task is finished, its memory is freed and the job's output is transferred to an output spool for later printing or processing.
● User interaction is limited in a batch operating system: once the system takes the task from the user, the user is free.
● You may also use a batch processing system to update data relating to transactions or records. (A small first-come, first-served sketch follows this list.)
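The sketch below illustrates the first-come, first-served job processing described in the list above; the job names and amounts of work are invented for the example.

/* A small sketch of first-come, first-served batch processing: jobs wait in
 * a queue and the "monitor" runs them one at a time, spooling output and
 * freeing each job's memory when it finishes. */
#include <stdio.h>

struct job { const char *name; int units_of_work; };

int main(void) {
    /* the job pool, in the order the operator submitted the jobs */
    struct job queue[] = {
        {"payroll", 3}, {"monthly report", 5}, {"backup", 2}
    };
    int n = sizeof queue / sizeof queue[0];

    for (int i = 0; i < n; i++) {            /* first come, first served */
        printf("running job '%s' (%d units)\n",
               queue[i].name, queue[i].units_of_work);
        /* ... the job executes to completion with no user interaction ... */
        printf("job '%s' finished; output sent to spool, memory freed\n",
               queue[i].name);
    }
    return 0;
}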

Role of Batch Operating System
● A batch operating system's primary role is to execute jobs in batches automatically.
● The main task of a batch processing system is done by the 'batch monitor', which resides at the low end of main memory.
● This technique was made possible by the development of hard disk drives and card readers. Jobs can now be stored on a disk to form a pool of jobs for batch execution.
● After that, similar jobs are grouped and placed in the same batch. The batch operating system then runs the batched jobs one after the other, saving time by performing common setup only once per batch.
● The result is a better system with reduced turnaround time.

Characteristics of Batch Operating System.

There are various characteristics of the Batch Operating System. Some of them are as follows:

● The CPU executes jobs in the same sequence in which the operator sends them, which means the job sent to the CPU first is executed first. This is also known as 'first come, first served'.
● The word job refers to the unit of work, consisting of the commands, programs, and data, that the user submits for execution.
● A batch operating system runs sets of user-supplied jobs composed of distinct instructions and programs with several similarities.
● When a job is successfully executed, the OS releases the memory space held by that job.
● The user does not interface directly with the operating system in a batch operating system; rather, all instructions are given to the operator.
● The operator evaluates the users' instructions and creates sets of jobs with similar properties.

Advantages

There are various advantages of the Batch Operating System. Some of them are as follows:
● Although it is not easy for users to forecast how long a job will take, the batch system's processors know how long each job in the queue will take to finish.
● This system can easily manage large jobs again and again.
● The batch process can be divided into several stages to increase processing speed.
● When a process is finished, the next job from the job spool is run without any user interaction.
● CPU utilization gets improved.

Disadvantages
There are various disadvantages of the Batch Operating System. Some of them are as follows:
● When a job fails, it must be rescheduled for completion, and it may take a long time to complete the task.
● Computer operators must have full knowledge of batch systems.
● The batch system is quite difficult to debug.
● The computer system and the user have no direct interaction.
● If a job enters an infinite loop, other jobs must wait for an unknown period of time.

Uniprogramming Operating System


Uniprogramming implies that only a single task or program is in main memory at a particular time. It was more common in early computers and mobile devices, where only a single application could run at a time.

Characteristics of Uniprogramming:

● It allows only one program to sit in the memory at one time.


● The size is small as only one program is present.
● The resources are allocated to the program that is in the memory at that time.

Advantages of uniprogramming

● The uniprogramming memory management scheme is simple and relatively free of bugs.
● It also executes with minimal overhead.
● Once an application is loaded, it is guaranteed 100% of the processor's time, since no other process will interrupt it.

Disadvantages of uniprogramming:

● Wastage of CPU time.


● No user interaction.
● No mechanism to prioritize processes.

Multiprogramming Operating System

A multiprogramming OS is one that can execute more than one program on a single-processor machine: more than one task, program, or job is present in main memory at any point in time. Buffering and spooling can overlap I/O and CPU work to improve system performance, but they have the limitation that a single user cannot always keep the CPU or I/O devices busy all the time. To increase resource utilization, multiprogramming is used. The OS picks one of the jobs in memory and starts executing it; whenever that job does not need the CPU because it is performing I/O, the CPU would otherwise be idle, so the OS switches to another job in memory and the CPU executes a portion of it until that job, in turn, issues an I/O request, and so on. Suppose P1 and P2 are two programs present in main memory. The OS picks one program and starts executing it. If, during execution, P1 requires an I/O operation, the OS simply switches over to P2. If P2 then requires I/O, it switches to P3, and so on. If no other program remains after P3, the CPU passes control back to the previous program.

Features of Multiprogramming

● Response time is shorter.
● Resource utilization is at its best; resources are used intelligently.
● It may help to improve turnaround time for any type of task.
● Various users may use the multiprogramming system at once.

How do Multiprogramming Operating Systems Work?

Multiple users can execute tasks simultaneously in a multiprogramming system, and those tasks can be stored in main memory. If one program is busy with I/O operations, the CPU can give its time to other programs rather than sitting idle. While one program waits for an I/O transfer, another program is always ready to use the processor, so many programs may share CPU time. Not all tasks run at the same instant, but many tasks can be in progress concurrently on a single processor: one process runs for a while, then another, and so on. Consequently, the overall goal of a multiprogramming system is to keep the CPU busy as long as there are tasks in the job pool. Many programs can thus be resident on a single-processor computer even though only one of them is using the CPU at any moment.

Advantages

● CPU utilization is high because the CPU rarely sits idle.
● Memory utilization is efficient.
● CPU throughput is high, and multiple interactive user terminals are supported.
● It provides shorter response times.
● It may help to run various jobs in a single application simultaneously.
● It helps to optimize the total job throughput of the computer.
● Various users may use the multiprogramming system at once.
● Short jobs complete quickly in comparison to long jobs.
● It may help to improve turnaround time for short tasks.
● The resources are utilized smartly.

Disadvantages

The disadvantages of multiprogramming operating system are as follows


● CPU scheduling is required because many jobs are ready to run on the CPU simultaneously.
● The user cannot interact with a job while it is executing.
● Programmers also cannot modify a program that is being executed.
● If several jobs are ready in main memory and there is not enough space for all of them, the system has to choose among them; this decision-making process is called job scheduling.
● When the operating system selects a job from the group of jobs and loads it into memory for execution, it needs memory management; if several such jobs are ready, it also needs CPU scheduling.

Multi-tasking Operating System (Time Sharing Operating System)

Multitasking, in an operating system, allows a user to perform more than one computer task (such as operating an application program) at a time. The operating system keeps track of where you are in each of these tasks and can go from one to the other without losing information. Microsoft Windows 2000, IBM's OS/390, and Linux are examples of operating systems that can multitask (almost all of today's operating systems can). When you open your web browser and then open Word at the same time, you are causing the operating system to multitask. Being able to multitask does not mean that an unlimited number of tasks can be juggled at once: each task consumes system storage and other resources, and as more tasks are started, the system may slow down or begin to run out of shared storage. Multitasking is a term used in modern computer systems. It is a logical extension of multiprogramming that enables the execution of multiple programs simultaneously. The multiple tasks, also known as processes, share common processing resources such as the CPU. Early operating systems could load several programs, but multitasking was not fully supported, so a single program could consume the entire CPU while completing a certain activity; basic operating system functions, such as file copying, prevented the user from completing other tasks, such as opening and closing windows. Fortunately, because modern operating systems have complete multitasking capability, numerous programs can run concurrently without interfering with one another, and many operating system processes can run at the same time.

Types of Multitasking
There are mainly two types of multitasking. These are as follows:
1. Preemptive Multitasking
2. Cooperative Multitasking

Preemptive Multitasking

Preemptive multitasking is a form of multitasking in which the operating system decides how long one task may use the processor before another task is given a turn. Because the operating system controls the entire process, it is referred to as 'preemptive'. Preemptive multitasking is used in desktop operating systems. Unix was the first operating system to use this method of multitasking. Windows NT and Windows 95 were the first versions of Windows to use preemptive multitasking, and with OS X the Macintosh acquired preemptive multitasking as well. The operating system notifies programs when it is time for another program to take over the CPU.

Cooperative Multitasking
Cooperative multitasking is also referred to as 'non-preemptive' multitasking. In cooperative multitasking, the currently running task voluntarily releases the CPU so that another process can run; this is typically done by calling a yield function, at which point a context switch is executed. Early versions of Windows and the classic Mac OS used cooperative multitasking: a Windows program would respond to a message by performing some short unit of work before handing the CPU back to the operating system until the program received another message. This worked perfectly as long as all programs were written with other programs in mind and were bug-free.
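The toy sketch below illustrates only the cooperative idea (it is not how Windows or Mac OS implemented it): each task does a short unit of work and then voluntarily returns control, and a simple loop plays the role of the scheduler. A task that never returned would hang everything, which is exactly the weakness noted above.

/* A toy sketch of cooperative multitasking: each task runs a short unit of
 * work and then voluntarily returns control (its "yield"); a simple loop
 * plays the role of the OS and hands the CPU to the next task.
 * There is no preemption. */
#include <stdio.h>

static void task_a(void) { printf("task A does a little work, then yields\n"); }
static void task_b(void) { printf("task B does a little work, then yields\n"); }

int main(void) {
    void (*tasks[])(void) = { task_a, task_b };   /* the "ready" tasks */
    int n = sizeof tasks / sizeof tasks[0];

    for (int round = 0; round < 3; round++)       /* the cooperative "scheduler" */
        for (int i = 0; i < n; i++)
            tasks[i]();                           /* returning == yielding the CPU */
    return 0;
}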

Advantages of Multitasking:

Manage Several Users


This operating system is better suited to supporting multiple users simultaneously, and multiple applications can run smoothly without degrading system performance.

Virtual Memory

Multitasking operating systems have well-developed virtual memory systems. Because of virtual memory, a program does not have to wait a long time for physical memory to complete its tasks; if physical memory runs short, parts of programs are moved out to virtual memory.

Good Reliability

Multitasking operating systems give more flexibility to their users, and users are happier as a result, since each user can run one or several programs simultaneously.

Secured Memory

Multitasking operating systems have well-defined memory management. Because of this, the operating system does not give undesirable programs permission to waste memory.

Time Shareable

All tasks are allotted a specified amount of time so that they do not have to wait for the CPU.

Background Processing

A multitasking operating system provides a better environment for background processes to run. These background
programs are not visible to most users, but they help other programs like firewalls, antivirus software, and others run
well.

Optimize the computer resources

A multitasking operating system manages various computer resources such as I/O devices, RAM, the hard disk, the CPU, and others.

Use Several Programs

Users can run many programs simultaneously, like an internet browser, games, MS Excel, PowerPoint, and other
utilities.

Disadvantages of Multitasking:

Processor Boundation

The system may run programs slowly because of the limited speed of its processor, and reaction time can rise when many programs are being processed at once. Solving this problem requires more processing power.

Memory Boundation

The computer's performance may slow down when multiple programs run at the same time, because main memory becomes overloaded while loading them all. Since the CPU cannot provide separate time for every program, reaction time increases. The primary cause of this issue is low-capacity RAM, so increasing the RAM capacity provides a solution.

CPU Heat Up

The processors are kept busier at the same time to complete the tasks in a multitasking environment, so the CPU generates more heat.

Multi-User Operating System

A multi-user operating system is an operating system that permits several users to access a single system running a single operating system. These systems are frequently quite complex, and they must manage the tasks required by the various users connected to them. Users usually sit at terminals or computers connected to the system via a network, along with other machines such as printers. A multi-user operating system differs from a networked single-user operating system in that each user accesses the same operating system from a different machine. The main goal of developing a multi-user operating system was time-sharing and batch processing on mainframe systems. Multi-user operating systems are now often used in large organizations, the government sector, educational institutions such as large universities, and on servers such as Ubuntu Server or Windows Server. These servers allow several users to access the operating system, kernel, and hardware at the same time. The operating system is responsible for handling memory and processing for the running programs, identifying and using system hardware, and efficiently handling user interaction and data requests. This is especially important for a multi-user operating system, because several users rely on the system to function properly at the same time.

Components of Multi-User Operating System


Memory: The physical memory present inside the system is where storage occurs. It is also known as Random Access Memory (RAM). The system can directly access and modify data that is present in main memory, so every program to be executed must first be copied into main memory from physical storage such as a hard disk. Main memory is an important part of the OS because its size determines how many programs can be executed simultaneously.
Kernel: A multi-user operating system makes use of the kernel component, which is written in a low-level language. This component resides in the computer system's main memory and can interact directly with the system's hardware.

Processor: The CPU (Central Processing Unit) of the computer is sometimes known as the computer's brain. In large machines, the CPU may require multiple integrated circuits; on smaller computers, the CPU is contained in a single chip known as a microprocessor.

User Interface: The user interface is the way of interaction between users and all software and hardware processes.
It enables the users to interact with the computer system in a simple manner.

Device Handler: Each input and output device needs its own device handler. The device handler's primary goal is to serve all requests in the device's request queue. The device handler operates in a continuous cycle, removing one I/O request block from the queue and servicing it on each pass.

Spooler: Spooling stands for 'Simultaneous Peripheral Operations On-Line'. The spooler collects output from running processes and feeds it to slower devices at their own pace, so computation and output can proceed at the same time. Spooling is used with a variety of output devices, including printers.

Types of Multi-User Operating System

Distributed System: A distributed system, also known as distributed computing, is a collection of multiple components spread over multiple computers that interact, coordinate, and appear to the end user as a single coherent system. With the aid of the network, the end user can interact with or operate these components, so the entire distributed operating system is, in effect, a network through which end users communicate and work.

Time-Sliced Systems: This is a system in which each user task is assigned a short period of CPU time; in other words, each task is given a specific time slice. These slices appear too small for the user to notice. An internal component known as the 'scheduler' decides which job runs next, based on a priority cycle. Dividing CPU time this way is known as time slicing, and the scheduling algorithm behind it is called Round Robin scheduling; it gives all processes in the system an equal opportunity to use CPU time. (A small round-robin sketch appears after this list of system types.)

Multiprocessor System: Multiple processors are used in this system, which improves overall performance because all the processors run side by side, at a pace faster than a single-processor operating system. If one of the processors fails, another processor takes over and completes its assigned tasks.
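The sketch below simulates the round-robin time slicing referred to above; the burst times and the quantum of 4 are made-up example values.

/* A small round-robin (time slicing) sketch: each task gets a fixed quantum
 * of CPU time in turn until all tasks finish. */
#include <stdio.h>

int main(void) {
    int remaining[] = {10, 5, 8};                 /* CPU time still needed by P1..P3 */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 4, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;      /* this task already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d\n", clock, i + 1, slice);
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                printf("t=%2d: P%d finished\n", clock, i + 1);
            }
        }
    }
    return 0;
}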

Features of the Multi-user Operating System

Multi-tasking - A multi-user operating system can run multiple programs simultaneously.

Resource sharing - A multi-user operating system can share multiple peripherals or resources, such as printers, hard drives, fax machines, and plotters, and allows files, documents, and data to be shared among users. This feature relies on time slicing, where a tiny slice of CPU time is allocated to each user in turn.
Background processing - A multi-user operating system can process tasks in the background while other work happens in the foreground. It also allows programs to run and interact with the system concurrently.

Example of Multi-user Operating System
● Mac OS X,

● Windows 10,

● Linux,

● Unix,

● Ubuntu

Advantages of the Multi-user Operating System

Avoids Disruption: A multi-user operating system has multiple computers and devices operating on the same network, so damage to one computer in the network does not affect the others. Avoiding such disruption is one of the most significant advantages of a multi-user operating system.

Distribution of Resources: One user can share the file they are working on to be visible to other users. Thus, any
user who requires it can access the file whenever they want. For example, if a user wants to view the ppt file of some
other user, the user working on it can share it so that other users can access it.

Used in Airlines, Railways, and Buses: The ticket reservation system uses a multi-user operating system wherein
multiple users can log in, book a ticket, cancel a ticket, and check the availability or the status of the booked ticket
simultaneously.

Backing up of Data: The multi-user operating system makes backing up data easier, as it is done on the machine used by the user.

Stability of Servers: The multi-user operating system provides remote access to servers from different countries and time zones. Upgrading the hardware and software with the latest technologies keeps the server systematic and stable.

Disadvantages of the Multi-user Operating System

Virus: In the multi-user operating system, if a virus gets into a single network of computers, it will pave the way for
the virus to affect all the computers in the network.

Visibility of data: Privacy of data and information becomes a concern as all the information in the computers gets
shared in public.

Multiple accounts: Multiple accounts on a single computer may not be suitable for all users. Thus, it is better to
have multiple PCs for each user.

Multiprocessing Operating System

A multiprocessor operating system uses more than one CPU within a single computer system to improve performance. The multiple CPUs are interconnected so that a job can be divided among them for faster execution. When a job finishes, results from all CPUs are collected and compiled to give the final output. The jobs share main memory and may also share other system resources. Multiple CPUs can also be used to run multiple jobs simultaneously.

For Example: UNIX Operating system is one of the most widely used multiprocessing systems.

To employ a multiprocessing operating system effectively, the computer system must have the following
things:

● A motherboard capable of handling multiple processors.

● Processors that are capable of operating in a multiprocessing configuration.
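As a quick, hedged check of that prerequisite, the sketch below (assuming Linux or another Unix-like system that provides the _SC_NPROCESSORS_* sysconf names) asks the OS how many processors are configured and currently online:

/* A quick sketch of checking how many processors the OS can actually use,
 * which is the hardware prerequisite described above. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long configured = sysconf(_SC_NPROCESSORS_CONF);  /* processors configured */
    long online     = sysconf(_SC_NPROCESSORS_ONLN);  /* processors currently usable */
    printf("configured: %ld, online: %ld\n", configured, online);
    return 0;
}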

Symmetrical Multiprocessing Operating System

In a symmetrical multiprocessing operating system, each processor runs an identical copy of the operating system. Each processor makes its own decisions and coordinates with the other processors to make sure the system works efficiently. With the help of CPU scheduling algorithms, each task is assigned to the CPU with the least load. A symmetrical multiprocessing operating system is also known as a 'shared everything' system, because all the processors share the memory and the input-output bus.

Advantages

● Failure of one processor does not affect the functioning of other processors.
● It divides all the workload equally to the available processors.
● Make use of available resources efficiently.

Disadvantages

● Symmetric multiprocessing OS are more complex.


● They are more costly.
● Synchronization between multiple processors is difficult.

Asymmetrical Multiprocessing Operating System

In an asymmetrical multiprocessing operating system, one processor acts as the master while all the remaining processors act as slaves. The master processor assigns ready-to-execute processes to the slave processors and maintains a ready queue for them; in effect, the master runs a scheduler that hands out processes for the slaves to execute.

Advantages

● Asymmetrical multiprocessing operating systems are cost-effective.


● They are easy to design and manage.
● They are more scalable.

Disadvantages

● There can be uneven distribution of workload among the processors.


● The processors do not share the same memory.
● The entire system goes down if one processor fails.

Symmetric Multiprocessing vs. Asymmetric Multiprocessing

Definition:
  Symmetric - Many processors work together to process programs using the same OS and memory.
  Asymmetric - Programs are processed by several processors in a master-slave arrangement.

Basic:
  Symmetric - Each CPU executes the OS operations.
  Asymmetric - Only the master processor carries out the OS functions.

Ease:
  Symmetric - More difficult to manage, since all of the processors must be synchronized to maintain load balance.
  Asymmetric - Simpler, since only the master processor has access to the system data structures.

Processor:
  Symmetric - All processors use a common ready queue, or each may have its own private ready queue.
  Asymmetric - The master processor assigns processes to the slave processors, or they have predefined tasks.

Communication:
  Symmetric - Shared memory allows all processors to communicate with one another.
  Asymmetric - Processors do not need to communicate, because the master processor controls them.

Architecture:
  Symmetric - All processors have the same architecture.
  Asymmetric - Processors can have the same or different architectures.

Failure:
  Symmetric - When a CPU fails, the system's computing capacity decreases.
  Asymmetric - If the master processor fails, control is passed to a slave processor; if a slave processor fails, its task is passed to a different processor.

Cost:
  Symmetric - Costly in comparison to asymmetric multiprocessing.
  Asymmetric - Cheaper than symmetric multiprocessing.

Lecture-3

3.1 Classification of Operating System

Real Time Operating System:

Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulations. With an RTOS, processing time is measured in tenths of seconds or less. This type of system is time-bound and has fixed deadlines; the processing must occur within the specified constraints, otherwise the system will fail.
Examples of the real-time operating systems:

● Airline traffic control systems,

● Command Control Systems,

● Airlines reservation system,

● Heart Pacemaker,

● Network Multimedia Systems,


● Robot etc.

The real-time operating systems can be of 3 types

Hard Real-Time Operating System: These operating systems guarantee that critical tasks are completed within a fixed range of time. For example, if a robot is hired to weld a car body and it welds too early or too late, the car cannot be sold, so the welding must be completed exactly on time; this is a hard real-time requirement. Other examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
Soft Real-Time Operating System: This operating system provides some relaxation in the time limit. Examples are multimedia systems, digital audio systems, etc. Real-time systems use explicit, programmer-defined and controlled processes: a separate process is charged with handling a single external event, and the process is activated upon occurrence of the related event, signaled by an interrupt. Multitasking is accomplished by scheduling processes for execution independently of each other. Each process is assigned a priority level corresponding to the relative importance of the event it services, and the processor is always allocated to the highest-priority ready process. This type of scheduling, called priority-based preemptive scheduling, is used by real-time systems (a small sketch follows the firm real-time description below).

Firm Real-Time Operating System: An RTOS of this type also has to meet deadlines, but missing a deadline, while it may have unintended consequences such as a reduction in the quality of the product, has a comparatively small impact. Example: multimedia applications.
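The sketch below is a hedged illustration of the priority-based preemptive scheduling mentioned under soft real-time systems, assuming a POSIX system with the realtime scheduling extensions (e.g. Linux; compile with -pthread, and note that SCHED_FIFO usually requires elevated privileges). The priority value 50 is an arbitrary example.

/* Creating a thread with an explicit realtime policy (SCHED_FIFO) and
 * priority; if that fails (e.g. insufficient privileges), the sketch falls
 * back to the default policy so it still runs. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *urgent_work(void *arg) {
    (void)arg;
    printf("high-priority task running\n");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 50 };  /* example priority */
    pthread_t t;
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);       /* realtime, preemptive */
    pthread_attr_setschedparam(&attr, &param);

    rc = pthread_create(&t, &attr, urgent_work, NULL);
    if (rc != 0) {
        /* without the needed privileges, fall back to default attributes */
        fprintf(stderr, "SCHED_FIFO create failed (%d); using default policy\n", rc);
        rc = pthread_create(&t, NULL, urgent_work, NULL);
    }
    if (rc == 0)
        pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}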
Advantages:

Maximum Consumption – Maximum utilization of devices and the system, and thus more output from all resources.
Task Shifting – The time needed to shift from one task to another is very small; for example, older systems took about 10 microseconds to switch tasks, while the latest systems take about 3 microseconds.

Focus on Application – The focus is on running applications, with less importance given to applications waiting in the queue.
Real-Time Operating System in Embedded Systems – Since the programs are small, an RTOS can also be used in embedded systems, such as those in transport and other fields.
Error Free – These types of systems are designed to be error-free.
Memory Allocation – Memory allocation is best managed in these types of systems.

Disadvantages:

Limited Tasks – Very few tasks run simultaneously, and the system concentrates on only a few applications in order to avoid errors.
Use of Heavy System Resources – The system resources required are sometimes substantial and expensive.

Complex Algorithms – The algorithms are very complex and difficult for the designer to write.

Device Drivers and Interrupt Signals – An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.

Thread Priority – Setting thread priorities is difficult, because these systems switch tasks only rarely.

Minimum Switching – An RTOS performs minimal task switching.

Interactive Operating System

An interactive operating system is one that accepts human input: users give commands or data to the computer by typing or by gestures. Mac OS and Windows are examples of interactive operating systems, while programs that allow users to enter data or commands, such as word processors and spreadsheet applications, are examples of interactive programs. An interactive operating system allows the execution of such interactive programs; all PC operating systems are interactive operating systems. An interactive operating system permits the user to interact directly with the computer: the user enters a command into the system, and the system's job is to execute it. By contrast, a non-interactive program is one that, once started, continues without the need for human interaction; a compiler is an example of a non-interactive program.

Properties of Interactive Operating System:

Batch Processing: Batch processing is the process of gathering programs and data together in a batch before processing them. The operating system defines each job as a single unit using an already defined sequence of commands, data, and so on. Before jobs are carried out, they are stored in the memory of the system, and they are processed on a first-in, first-out basis. When a job finishes, the operating system releases its memory and copies its output into an output spool for later printing. Batch processing improves system performance, because a new job begins only when the old one is completed, without any interference from the user. One disadvantage is that there is a small chance that a job will enter an infinite loop; debugging is also somewhat difficult with batch processing.

Multitasking: The CPU can execute many tasks simultaneously by switching between them. This is known as a time-sharing system, and it has a very fast response time: the switches happen so quickly that users can easily interact with each running program.

Multiprogramming: Multiprogramming happens when the system's memory holds multiple processes at once. The operating system's job is to run these processes concurrently on the same processor: multiple processes share the CPU, which increases CPU utilization. The CPU performs only one job at a particular time while the rest wait for the processor to be assigned to them. The operating system ensures that the CPU is never idle by using its memory management programs to monitor the state of all system resources and active programs. One advantage of this is that it gives the user the feeling that the CPU is working on multiple programs simultaneously.

Real-Time System: Dedicated embedded systems are real-time systems. The main job of the operating system here is to read and react to sensor data and then provide a response within a fixed time period, thereby ensuring good performance.

Distributive Environment: A distributive environment consists of many independent processors. The job of the operating system here is to distribute computation logic among the physical processors and, at the same time, manage communication between them. Each processor has its own local memory, so they do not share memory.

Interactivity: Interactivity is defined as the power of a user to interact with the system. The main job of the
operating system here is that it basically provides an interface for interacting with the system, manages I/O devices,
and also ensures a fast response time.

Spooling: Spooling is the process of pushing data from different I/O jobs into a buffer or an area of memory so that a device can access the data when it is ready. The operating system handles I/O device data spooling and maintains the spooling buffer, because devices access data at different rates.

Advantages of Interactive Operating System:

Usability: An operating system is designed to perform work for the user, and interactivity allows the user to manage tasks more or less in real time.

Security: Security policy enforcement is simpler. The user virtually always knows what their programs will do during their lifetime, which makes it easier to forecast and correct bugs.

Disadvantages of Interactive Operating System:

Tough to design: Depending on the target device, interactivity can prove challenging to design, because the designer must be prepared for every possible input. With many possible inputs, the state of a program can change at any particular time; every case must be handled in some way, and it does not always work out properly.

Example of an Interactive Operating System:

● Unix Operating System

● Disk Operating System.

Multithreading Operating System

A multithreaded operating system is an operating system that supports multiple threads of execution within a single
process. Threads are lightweight processes that share the same memory space, allowing for more efficient
concurrent execution compared to traditional heavyweight processes. In a multithreaded operating system, each
thread within a process can execute independently, performing different tasks simultaneously. This allows for better
utilization of system resources such as CPU time and memory, as well as improved responsiveness and throughput
for applications.

Multithreading can provide several advantages, including:

Concurrency: Multiple threads can execute concurrently within a single process, allowing for better responsiveness
and improved performance, especially on multi-core processors.

Resource Sharing: Threads within the same process share resources such as memory and file descriptors, reducing
overhead compared to separate processes.

Simplified Programming: Multithreading can simplify programming by allowing developers to write concurrent
code more easily than with processes, as threads within the same process can communicate more directly and
efficiently.

Efficient Communication: Threads within the same process can communicate through shared memory, message
passing, or other inter-thread communication mechanisms, allowing for efficient data exchange.
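The minimal sketch below (assuming POSIX threads; compile with -pthread) shows several threads of one process running concurrently and sharing the same global memory, which is the resource-sharing point made above. The counter array and thread count are illustrative.

/* Several threads of one process run concurrently and share global memory. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 3
static int shared_counter[NTHREADS];   /* memory shared by all threads */

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int i = 0; i < 1000; i++)
        shared_counter[id]++;          /* each thread updates its own slot */
    printf("thread %d finished\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {          /* create the threads */
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)            /* wait for them to finish */
        pthread_join(threads[i], NULL);

    for (int i = 0; i < NTHREADS; i++)
        printf("counter[%d] = %d\n", i, shared_counter[i]);
    return 0;
}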

Multithreading Model:

Multithreading allows an application to divide its work into individual threads: the same process or task is carried out by a number of threads, i.e., there is more than one thread to perform the task. With the use of multithreading, multitasking can be achieved. The main drawback of single-threaded systems is that only one task can be performed at a time; multithreading overcomes this drawback by allowing multiple tasks to be performed concurrently.

For example, in a multithreaded web server, client1, client2, and client3 can access the web server at the same time without waiting, because each request is handled by its own thread; in multithreading, several tasks run at the same time.

In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are handled independently, above the kernel, and are therefore managed without any kernel support. Kernel-level threads, on the other hand, are managed directly by the operating system. Nevertheless, there must be some form of relationship between user-level and kernel-level threads.

There are three established multithreading models classifying these relationships:

● Many to one multithreading model

● One to one multithreading model

● Many to Many multithreading models

Many to one multithreading model:

The many-to-one model maps many user-level threads to one kernel thread. This type of relationship provides an effective context-switching environment and is easily implemented even on a simple kernel with no thread support.

The disadvantage of this model is that, since only one kernel-level thread can be scheduled at any given time, it cannot take advantage of the hardware parallelism offered by multithreaded or multiprocessor systems. All thread management is done in user space, and if one thread makes a blocking call, the whole process blocks.

(Figure: the many-to-one model associates all user-level threads with a single kernel-level thread.)

One to one multithreading model

The one-to-one model maps each user-level thread to a single kernel-level thread. This type of relationship allows multiple threads to run in parallel. However, this benefit comes with a drawback: creating every new user thread requires creating a corresponding kernel thread, causing overhead that can hinder the performance of the parent process. The Windows series and Linux operating systems try to tackle this problem by limiting the growth of the thread count.

(Figure: the one-to-one model associates each user-level thread with a single kernel-level thread.)

Many to Many Model multithreading model

In this model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends on the particular application; the developer can create many threads at both levels, and the numbers need not be the same. The many to many model is a compromise between the other two models. In this model, if any thread makes a blocking system call, the kernel can schedule another thread for execution. Also, the complexity introduced with multiple threads in the previous models is not present here. Although this model allows the creation of multiple kernel threads, true concurrency cannot be achieved on a single processor, because the kernel can schedule only one thread at a time.

As the figure below shows, the many to many model associates several user-level threads with an equal or smaller number of kernel-level threads.

Head-to-head comparison between user-level threads and kernel-level threads

Implemented by: User-level threads are implemented by the user (through a thread library); kernel-level threads are implemented by the OS.

Context switch time: Context-switch time is less for user-level threads and more for kernel-level threads.

Multithreading: With user-level threads, a multithreaded application cannot employ multiprocessing; with kernel-level threads it may.

Implementation: User-level threads are easy to implement; kernel-level threads are complicated to implement.

Blocking operation: If one user-level thread performs a blocking operation, all other threads of the same process are blocked; if one kernel-level thread blocks, the other threads of the process are not blocked.

Recognition: The OS does not recognize user-level threads; kernel-level threads are recognized by the OS.

Thread management: For user-level threads, the thread library contains the code for thread creation, data transfer, thread destruction, message passing, and thread scheduling; for kernel-level threads, the application code contains no thread-management code and simply uses an API to kernel mode.

Hardware support: User-level threads do not need hardware support; kernel-level threads require it.

Creation and management: User-level threads can be created and managed much faster; kernel-level threads take much more time to create and handle.

Examples: Java threads and POSIX threads are instances of user-level threads; Windows and Solaris threads are instances of kernel-level threads.

Operating system: Any OS can support user-level threads; only specific OSes support kernel-level threads.

Lecture-4

Operating System Services

The operating system provides the programming environment in which a programmer works with the computer system. The user program requests various resources through the operating system.
The operating system offers several services to application programmers and users. Applications access these services through application programming interfaces (APIs) or system calls.
By invoking those interfaces, an application can request a service from the operating system, pass parameters, and obtain the results of the operation.

User interface- Almost all operating systems have a user interface (UI). This interface can take several forms. One
is a command-line interface (CLI), which uses text commands and a method for entering them (say, a keyboard for
typing in commands in a specific format with specific options). Another is a batch interface, in which commands
and directives to control those commands are entered into files, and those files are executed. Most commonly, a
graphical user interface (GUI) is used. Here, the interface is a window system with a pointing device to direct I/O,
choose from menus, and make selections and a keyboard to enter text. Some systems provide two or all three of
these variations.

Program execution- The system must be able to load a program into memory and to run that program. The program
must be able to end its execution, either normally or abnormally (indicating error).

I/O operations- A running program may require I/O, which may involve a file or an I/O device. For specific devices, special functions may be desired (such as recording to a CD or DVD drive or blanking a display screen). For efficiency and protection, users usually cannot control I/O devices directly. Therefore, the operating system must provide a means to do I/O.

File-system manipulation- The file system is of particular interest. Obviously, programs need to read and write
files and directories. They also need to create and delete them by name, search for a given file, and list file
information. Finally, some operating systems include permissions management to allow or deny access to files or
directories based on file ownership. Many operating systems provide a variety of file systems, sometimes to allow
personal choice and sometimes to provide specific features or performance characteristics.

Communications- There are many circumstances in which one process needs to exchange information with another
process. Such communication may occur between processes that are executing on the same computer or between
processes that are executing on different computer systems tied together by a computer network. Communications
may be implemented via shared memory, in which two or more processes read and write to a shared section of
memory, or message passing, in which packets of information in predefined formats are moved between processes
by the operating system.
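As a small, hedged illustration of the message-passing style, the C sketch below passes one message between a parent and a child process through a POSIX pipe; the message text and buffer size are invented for the example.

/* Sketch: message passing between two processes over a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1)                          /* fd[0] = read end, fd[1] = write end */
        return 1;

    pid_t pid = fork();
    if (pid < 0)
        return 1;                                /* fork failed */

    if (pid == 0) {                              /* child: sends a message */
        close(fd[0]);
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    } else {                                     /* parent: receives the message */
        char buf[64];
        close(fd[1]);
        read(fd[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);                              /* reap the child */
    }
    return 0;
}

Shared memory would instead map one region into both address spaces, trading the copying cost of message passing for the need to synchronize access explicitly.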

Error detection- The operating system needs to detect and correct errors constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow, an attempt to access an illegal memory location, or excessive use of CPU time). For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing. Sometimes it has no choice but to halt the system. At other times, it might terminate an error-causing process or return an error code to a process so that the process can detect and possibly correct the error.

Resource allocation- When there are multiple users or multiple jobs running at the same time, resources must be
allocated to each of them. The operating system manages many different types of resources. Some (such as CPU
cycles, main memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may
have much more general request and release code. For instance, in determining how best to use the CPU, operating
systems have CPU-scheduling routines that take into account the speed of the CPU, the jobs that must be executed,
the number of registers available, and other factors. There may also be routines to allocate printers, USB storage
drives, and other peripheral devices.

Protection and security- The owners of information stored in a multiuser or networked computer system may want
to control use of that information. When several separate processes execute concurrently, it should not be possible
for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all
access to system resources is controlled. Security of the system from outsiders is also important. Such security starts
with requiring each user to authenticate him or herself to the system, usually by means of a password, to gain access
to system resources. It extends to defending external I/O devices, including network adapters, from invalid access
attempts and to recording all such connections for detection of break-ins. If a system is to be protected and secure,
precautions must be instituted throughout it. A chain is only as strong as its weakest link.

Accounting- We want to keep track of which users use how much and what kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics. Usage statistics may be a valuable tool for researchers who wish to reconfigure the system to improve computing services.

A View of Operating System Services

Lecture-5

5.1 Operating System Structure

A system as large and complex as a modern operating system must be engineered carefully if it is to function
properly and be modified easily. A common approach is to partition the task into small components, or modules,
rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with
carefully defined inputs, outputs, and functions. In this section, we discuss how these components are interconnected
and melded into a kernel.

Simple Structure:

Many operating systems do not have well-defined structures. Frequently, such systems started as small, simple, and
limited systems and then grew beyond their original scope. MS-DOS is an example of such a system. It was
originally designed and implemented by a few people who had no idea that it would become so popular. It was
written to provide the most functionality in the least space, so it was not carefully divided into modules.
In MS-DOS, the interfaces and levels of functionality are not well separated. For instance, application programs are
able to access the basic I/O routines to write directly to the display and disk drives. Such freedom leaves MS-DOS
vulnerable to errant (or malicious) programs, causing entire system crashes when user programs fail. Of course, MS-
DOS was also limited by the hardware of its era.
Because the Intel 8088 for which it was written provides no dual mode and no hardware protection, the designers of
MS-DOS had no choice but to leave the base hardware accessible.

MS-DOS layer structure


Advantages of Simple structure:
● It delivers better application performance because of the few interfaces between the application program and the
hardware.
● Easy for kernel developers to develop such an operating system.

● It can perform fundamental operations

● It uses straightforward commands

Disadvantages of Simple structure:


● The structure is very complicated as no clear boundaries exist between modules.

● It does not enforce data hiding in the operating system.

● Limited ability

● Lack of Flexibility.

Layered Approach

In a layered approach, the OS consists of several layers where each layer has a well-defined functionality and each
layer is designed, coded and tested independently.

The layered structure approach breaks up the operating system into different layers and retains much more control over the system. The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface. The layers are designed so that each layer uses the functions of lower-level layers only. This simplifies debugging: if the lower-level layers have already been debugged and an error occurs while debugging a higher layer, the error must be in that layer, because the layers below it are already known to be correct.

This allows implementers to change the inner workings and increases modularity.
As long as the external interface of the routines doesn't change, developers have more freedom to change the inner
workings of the routines.

The main advantage is the simplicity of construction and debugging. The main difficulty is defining the various
layers.

The main disadvantage of this structure is that the data needs to be modified and passed on at each layer, which adds
overhead to the system. Moreover, careful planning of the layers is necessary as a layer can use only lower-level
layers. UNIX is an example of this structure.

Layering provides a distinct advantage in an operating system. All the layers can be defined separately and interact
with each other as required. Also, it is easier to create, maintain and update the system if it is done in the form of
layers. Change in one layer specification does not affect the rest of the layers.

Each of the layers in the operating system can only interact with the above and below layers. The lowest layer
handles the hardware, and the uppermost layer deals with the user applications.

Architecture of Layered Structure

This type of operating system was created as an improvement over the early monolithic systems. The operating
system is split into various layers in the layered operating system, and each of the layers has different functionalities.
There are some rules in the implementation of the layers as follows.
A particular layer can access all the layers below it, but it cannot access the layers above it. That is, layer n-1 can access all the layers from n-2 down to 0, but it cannot access layer n.
Layer 0 deals with allocating processes and switching between processes when an interrupt occurs or the timer expires. It also deals with the basic multiprogramming of the CPU. Thus, if the user layer wants to interact with the hardware layer, the request travels through all the layers from n-1 down to 1. Each layer must be designed and implemented so that it needs only the services provided by the layers below it.
There are six layers in the layered operating system. A diagram demonstrating these layers is as follows:

Hardware: This layer interacts with the system hardware and coordinates with all the peripheral devices used, such
as a printer, mouse, keyboard, scanner, etc. These types of hardware devices are managed in the hardware layer.
The hardware layer is the lowest and most authoritative layer in the layered operating system architecture. It is
attached directly to the core of the system.
CPU Scheduling: This layer deals with scheduling the processes for the CPU. Many scheduling queues are used to
handle processes. When the processes enter the system, they are put into the job queue.
The processes that are ready to execute in the main memory are kept in the ready queue. This layer is responsible for
managing how many processes will be allocated to the CPU and how many will stay out of the CPU.
Memory Management: Memory management deals with moving processes from disk to primary memory for execution and back again. This is handled by the third layer of the operating system, and all memory management is associated with it. A computer has various types of memory, such as RAM and ROM. For RAM, this layer is concerned with swapping processes in and out of memory: while the computer runs, some processes move into main memory (RAM) for execution, and when a program, such as a calculator, exits, it is removed from main memory.
Process Management: This layer is responsible for managing the processes, i.e., assigning the processor to a
process and deciding how many processes will stay in the waiting schedule. The priority of the processes is also
managed in this layer. The different algorithms used for process scheduling are FCFS (first come, first served), SJF
(shortest job first), priority scheduling, round-robin scheduling, etc.
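To make the idea of a scheduling policy concrete, here is a small hedged C sketch that computes the average waiting time under FCFS; the burst times are arbitrary sample values chosen only for illustration.

/* Sketch: average waiting time under FCFS (first come, first served).
 * The burst times are made-up sample values. */
#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};                    /* CPU burst of each arriving process */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;                      /* process i waits for all earlier bursts */
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n",
           (double)total_wait / n);              /* (0 + 24 + 27) / 3 = 17.00 */
    return 0;
}

Running the same burst times under SJF (serving the two short jobs first) would cut the average waiting time sharply, which is why the choice of algorithm in this layer matters.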

Advantages of Layered Structure

There are several advantages of the layered structure of operating system design, such as:
Modularity: This design promotes modularity as each layer performs only the tasks it is scheduled to perform.
Easy debugging: Because the layers are discrete, debugging is easy. Suppose an error occurs in the CPU scheduling layer; the developer needs to search only that particular layer, unlike in a monolithic system where all the services reside together.
Easy update: A modification made in a particular layer will not affect the other layers.
No direct access to hardware: The hardware layer is the innermost layer present in the design. So a user can use
the services of hardware but cannot directly modify or access it, unlike the Simple system in which the user had
direct access to the hardware.
Abstraction: Every layer is concerned with its functions. So the functions and implementations of the other layers
are abstract to it.

Disadvantages of Layered Structure

Though this system has several advantages over the Monolithic and Simple design, there are also some
disadvantages, such as:

Complex and careful implementation: As a layer can access the services of the layers below it, so the arrangement
of the layers must be done carefully. For example, the backing storage layer uses the services of the memory
management layer. So it must be kept below the memory management layer. Thus with great modularity comes
complex implementation.

Slower in execution: If a layer wants to interact with another layer, its request has to travel through all the layers between the two. This increases response time compared with a monolithic system, which is faster. Hence an increase in the number of layers may lead to a very inefficient design.

Functionality: It is not always possible to divide the functionalities. Many times, they are interrelated and can't be
separated.

Communication: No communication between non-adjacent layers.

Lecture-6

6.1 Kernel

The kernel is the central component of an operating system that manages the operations of the computer and its hardware; in particular, it manages memory and CPU time. It is the core component of an operating system.
The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.
The kernel is loaded into memory first when the operating system is loaded and remains in memory until the operating system is shut down. It is responsible for various tasks such as disk management, task management, and memory management.
The kernel maintains a process table that keeps track of all active processes. The process table contains a per-process region table whose entries point to entries in the region table.
The kernel loads an executable file into memory during the 'exec' system call. It decides which process should be allocated to the processor for execution and which processes should be kept in main memory.
The kernel thus acts as an interface between user applications and hardware. Its major aim is to manage communication between software (user-level applications) and hardware (CPU, disk, and memory).

Objectives of Kernel:

● To establish communication between user level application and hardware.

● To decide the state of incoming processes.

● To control disk management.

● To control memory management.

● To control task management.

Features of Kernel

● Inter-process communication

● Context switching

● Low-level scheduling of processes

● Process synchronization

The kernel handles the following:


● Resource management

● Device management

● Memory management

● CPU/GPU

● Input/output device

● System calls

● Memory

Microkernel

The microkernel is one classification of kernel. Being a kernel, it handles all system resources, but in a microkernel the user services and kernel services are implemented in distinct address spaces: user services are kept in user address space, while kernel services are kept in kernel address space. This helps reduce the size of both the kernel and the OS.
It provides only a minimal amount of process and memory management services. Interaction between a client application and services running in user address space takes place via message passing, which reduces the speed of microkernel execution. Because kernel and user services are isolated, the OS is unaffected if any of the user services fail; the kernel services remain unaffected. A microkernel is extendable because new services are added to user address space, requiring no changes to kernel space. It is also lightweight, secure, and reliable.
Microkernels and their user environments are typically implemented in C or C++ with a little assembly, although other implementation languages are possible for some high-level code.
Example: Mach OS, Eclipse IDE

Architecture of Microkernel

A microkernel is the minimum software required to implement an operating system correctly. It includes memory management, process scheduling mechanisms, and fundamental inter-process communication.
The microkernel is the only program that executes at the privileged level, i.e., in kernel mode. The OS's other functions are moved out of kernel mode and executed in user mode.
Because the services are split out into user space, the code is easier to control, and less code runs in kernel mode, resulting in improved security and stability.
The microkernel is entirely responsible for the operating system's most significant services, which are as follows:
o Inter-Process Communication
o Memory Management
o CPU Scheduling

Inter-Process Communication
Inter-process communication refers to how processes interact with one another. A process may have several threads, and in kernel space the threads of different processes interact with one another. Messages are sent and received between threads using ports. At the kernel level, there are several ports, such as the process port, exception port, bootstrap port, and registered port, all of which interact with user-space processes.
Memory Management
Memory management is the process of allocating space in main memory for processes. However, there is also the
creation of virtual memory for processes. Virtual memory means that if a process has a bigger size than the main
memory, it is partitioned into portions and stored. After that, one by one, every part of the process is stored in the
main memory until the CPU executes it.
CPU Scheduling
CPU scheduling refers to deciding which process the CPU will execute next. All processes are queued and executed one at a time. Every process has a priority level, and the process with the highest priority is executed first. CPU scheduling helps optimize CPU utilization and use resources more efficiently. It also minimizes waiting time, so a process spends less time in the queue and resources are allocated to it more quickly. CPU scheduling also reduces response and turnaround times.

Components of Microkernel

A microkernel contains only the system's basic functions. A component is only included in the microkernel if
putting it outside would disrupt the system's operation. The user mode should be used for all other non-essential components. The minimum functionalities needed in the microkernel are as follows:
● In the microkernel, processor scheduling algorithms are also required. Process and thread schedulers are included.

● Address spaces and other memory management mechanisms should be incorporated in the microkernel. Memory
protection features are also included.
● Inter-process communication (IPC) is used to manage servers that execute their own address spaces.

Advantages

● Microkernels are secure because only those parts that are essential to the system's functionality are added to the kernel.

● Micro kernels are modular, and the various modules may be swapped, reloaded, and modified without affecting the
kernel.
● Microkernel architecture is compact and isolated, so it may perform better.

● System expansion is easier, since new functionality can be added in user space without disrupting the kernel.
● When compared to monolithic systems, micro kernels have fewer system crashes. Furthermore, due to the modular
structure of the microkernel, any crashes that do occur are simply handled.
● The microkernel interface helps in enforcing a more modular system structure.

● Server failure is treated the same as any other user program failure.

● It adds new features without recompiling.

● Size is smaller

● Easy to extend

● Easy to port

● Less prone to errors and bugs


Disadvantages

● When the drivers are implemented as procedures, a context switch or a function call is needed.

● In a microkernel system, providing services is more costly than in a traditional monolithic system.

● The performance of a microkernel system may be poorer and cause issues.

● Execution is slower.

Monolithic kernel

The monolithic operating system is a very basic operating system in which file management, memory management,
device management, and process management are directly controlled within the kernel. The kernel can access all the
resources present in the system. In monolithic systems, each component of the operating system is contained within
the kernel. Operating systems that use monolithic architecture were first used in the 1970s.
The monolithic operating system is also known as the monolithic kernel. This is an old operating system used to
perform small tasks like batch processing and time-sharing tasks in banks. The monolithic kernel acts as a virtual
machine that controls all hardware parts.
It is different from a microkernel, which has limited tasks. A microkernel is divided into two
parts, kernel space, and user space. Both parts communicate with each other through IPC (Inter-process
communication). Microkernel's advantage is that if one server fails, then the other server takes control of it.
A monolithic kernel is an operating system architecture where the entire operating system is working in kernel
space. The monolithic model differs from other operating system architectures, such as the microkernel architecture,
in that it alone defines a high-level virtual interface over computer hardware. A set of primitives or system calls
implement all operating system services such as process management, concurrency, and memory management.
Device drivers can be added to the kernel as modules.

Monolithic Kernel Components

A monolithic design of the operating system architecture makes no special accommodation for the special nature of

the operating system. Although the design follows the separation of concerns, no attempt is made to restrict the
privileges granted to the individual parts of the operating system. The entire operating system executes with
maximum privileges. The communication overhead inside the monolithic operating system is the same as that of any
other software, considered relatively low.
CP/M and DOS are simple examples of monolithic operating systems. Both CP/M and DOS are operating systems
that share a single address space with the applications. In CP/M, the 16-bit address space starts with system
variables and the application area. It ends with three parts of the operating system, namely CCP (Console Command
Processor), BDOS (Basic Disk Operating System), and BIOS (Basic Input/output System).
In DOS, the 20-bit address space starts with the array of interrupt vectors and the system variables, followed by the
resident part of DOS and the application area and ending with memory block used by the video card and BIOS.

Advantages of Monolithic Architecture

Monolithic architecture has the following advantages:

• Simple and easy to implement structure.
• Faster execution due to direct access to all the services.
• The execution of the monolithic kernel is quite fast, as services such as memory management, file management, and process scheduling are implemented under the same address space.
• A process runs completely in a single address space in the monolithic kernel.
• The monolithic kernel is a static single binary file.

Disadvantages of Monolithic Architecture


• If any service fails in the monolithic kernel, it leads to the failure of the entire system.
• The entire operating system needs to be modified by the user to add any new service.
• The addition of new features or removal of obsolete features is very difficult.
• Security issues arise because there is no isolation among the various services present in the kernel.

Features of Monolithic System

Simple structure: This type of operating system has a simple structure. All the components needed for processing
are embedded into the kernel.

Works for smaller tasks: It works better for performing smaller tasks as it can handle limited resources.

Communication between components: All the components can directly communicate with each other and also
with the kernel.

Fast operating system: The code to make a monolithic kernel is very fast and robust

Difference between Monolithic Kernel and Microkernel


A kernel is the core part of an operating system, and it manages the system resources. A kernel is like a bridge
between the application and hardware of the computer. The kernel can be classified further into two categories,
Microkernel and Monolithic Kernel.
The microkernel is a type of kernel that allows customization of the operating system. It runs on privileged mode
and provides low-level address space management and Inter-Process Communication (IPC). Moreover, OS services
such as file system, virtual memory manager, and CPU scheduler are on top of the microkernel. Each service has its
own address space to make them secure. Besides, the applications also have their own address spaces. Therefore,
there is protection among applications, OS Services, and kernels.

• A monolithic kernel is another classification of the kernel. In monolithic kernel-based systems, each application has
its own address space. Like microkernel, this one also manages system resources between application and hardware,
but user services and kernel services are implemented under the same address space. It increases the size of the
kernel, thus increasing the size of the operating system as well.
• This kernel provides CPU scheduling, memory management, file management, and other system functions through
system calls. As both services are implemented under the same address space, this makes operating system
execution faster

Definition: A monolithic kernel is a type of kernel in which the entire operating system works in kernel space. A microkernel is a type of kernel that provides low-level address space management, thread management, and inter-process communication to implement an operating system.

Address space: In a monolithic kernel, both user services and kernel services are kept in the same address space. In a microkernel, user services and kernel services are kept in separate address spaces.

Size: The monolithic kernel is larger than the microkernel; the microkernel is smaller in size.

Execution: The monolithic kernel has fast execution; the microkernel has slow execution.

OS services: In a monolithic kernel system, the kernel contains the OS services. In a microkernel-based system, the OS services and the kernel are separated.

Extensibility: The monolithic kernel is quite complicated to extend; the microkernel is easily extensible.

Security: If a service crashes, the whole system crashes in a monolithic kernel. If a service crashes in a microkernel, the working of the microkernel is not affected.

Customization: It is difficult to add new functionalities to the monolithic kernel, so it is not customizable. It is easier to add new functionalities to the microkernel, so it is more customizable.

Code: Less coding is required to write a monolithic kernel; a microkernel requires more coding.

Examples: Monolithic kernels include Linux, FreeBSD, OpenBSD, NetBSD, Microsoft Windows (95, 98, Me), Solaris, HP-UX, DOS, OpenVMS, XTS-400, etc. Microkernels include QNX, Symbian, L4Linux, Singularity, K42, Mac OS X, Integrity, PikeOS, HURD, Minix, and Coyotos.

Reentrant Kernel

Reentrant Kernel: A reentrant kernel enables processes (or, more precisely, their corresponding kernel threads) to give up the CPU while in kernel mode, without hindering other processes from also entering kernel mode. Even on single-processor systems, several kernel control paths can therefore be interleaved. A disk read is a typical example: a user program issues a system call for a disk read, and the scheduler assigns the CPU to another process (kernel thread) until an interrupt from the disk controller indicates that the data is available and the original thread can be resumed. That other process can still use kernel functions, such as I/O for user input, so the system stays responsive and the CPU time wasted waiting for I/O is reduced. In a non-reentrant kernel, the original function (whichever one requested the data) would block the kernel until the disk read completed.
A computer program or routine is described as reentrant if it can be safely called again before its previous invocation
has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine:
● Must hold no static (or global) non-constant data.
● Must not return the address to static (or global) non-constant data.
● Must work only on the data provided to it by the caller.
● Must not rely on locks on singleton resources (such as a global variable that is referenced in only one place).
● Must not modify its own code (unless executing in its own unique thread storage)
● Must not call non-reentrant computer programs or routines.

Reentrant kernels are still able to execute non-reentrant functions by using locks to ensure that only one process can execute a given non-reentrant function at a time.
Hardware interrupts are able to suspend the current process even if it is running in kernel mode (this enables things like Ctrl + C to stop execution).
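To illustrate these rules, the hedged C sketch below contrasts a non-reentrant routine that returns static data with a reentrant one that works only on caller-supplied storage; both function names are invented for the example.

#include <stdio.h>

/* Non-reentrant: returns the address of static data, so concurrent or
 * interleaved callers would overwrite each other's result. */
char *format_id_bad(int id)
{
    static char buf[32];                         /* one buffer shared by every call */
    snprintf(buf, sizeof buf, "ID-%d", id);
    return buf;
}

/* Reentrant: works only on the data provided by the caller. */
void format_id_good(int id, char *out, size_t len)
{
    snprintf(out, len, "ID-%d", id);             /* no static or global state */
}

int main(void)
{
    char local[32];
    format_id_good(7, local, sizeof local);
    printf("%s and %s\n", local, format_id_bad(9));
    return 0;
}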

Kernel Control Path: A kernel control path is the sequence of instructions executed by the kernel to handle a system call. Normally the instructions execute sequentially, but certain actions cause the CPU to interleave control paths:
Process in user mode invokes a system call: the scheduler selects a new process to run and causes a process switch, so two control paths are executed on behalf of two different processes.
CPU detects an exception: for example, an access to a page not present in RAM. The process that caused the exception is suspended and a suitable procedure, such as page allocation, starts executing. Once the page is allocated, the original control path continues.

Hardware interrupt: hardware interrupt handlers run at a higher priority than other control paths.

Dual Mode of Operation (used to implement Protection)

The dual mode of operation in the operating system protects the operating system from untrusted users. We accomplish this protection by designating some of the machine instructions that can cause harm as privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. An example of a privileged instruction is the command to switch to user mode; other examples include I/O control, timer management, and interrupt handling. To ensure proper operating system execution, we must differentiate between the execution of operating system code and user-defined code, and most computer systems provide hardware support for distinguishing between the two execution modes. The operating system therefore has two modes, user mode and kernel mode, and a mode bit is required to identify in which mode the current instruction is executing: if the mode bit is 1 the system operates in user mode, and if the mode bit is 0 it operates in kernel mode. NOTE: at boot time, the system always starts in kernel mode.

Types of Dual Mode in Operating System


The operating system has two modes of operation to ensure it works correctly:

1. User Mode
2. Kernel Mode

1. User Mode (Non Privileged Mode):

When the computer system runs user applications, such as file creation or any other application program, it is in User Mode, which does not have direct access to the computer's hardware. For hardware-related tasks, for example when a user application requests a service from the operating system or an interrupt occurs, the system must switch to Kernel Mode. The mode bit for User Mode is 1: if the mode bit of the system's processor is 1, the system is in User Mode.

2. Kernel Mode (Privileged Mode):

All the low-level tasks of the operating system are performed in Kernel Mode. Because kernel space has direct access to the system's hardware, Kernel Mode handles all processes that require hardware support. Apart from this, the main purpose of Kernel Mode is to execute privileged instructions. These privileged instructions are not available to users, which is why they cannot be executed in User Mode. Thus, all the processes and instructions that the user is restricted from interfering with are executed in the Kernel Mode of the operating system. The mode bit for Kernel Mode is 0, so for the system to function in Kernel Mode the mode bit of the processor must be equal to 0.

Need for Dual Mode Operations

Certain types of processes must be hidden from the user, while certain other tasks do not require any hardware support; using the dual mode of the OS, these two kinds of tasks can be dealt with separately. The operating system also needs dual-mode operation because kernel-level programs perform all the low-level functions of the OS, such as process management and memory management, and if the user could alter these it could cause an entire system failure. So, to restrict users' access to only the tasks meant for them, dual mode is necessary for an operating system.
So, whenever the system is running user applications, it is in user mode. Whenever the user requests hardware services, a transition from user mode to kernel mode occurs, which is done by changing the mode bit from 1 to 0; to return to user mode, the mode bit is changed back to 1.
User Mode and Kernel Mode Switching: during its lifetime, a process executes in both user mode and kernel mode. User mode is the normal mode, where the process has limited access. Kernel mode is the privileged mode, where the process has unrestricted access to system resources such as hardware and memory.
A process can access services like hardware I/O by accessing kernel data while executing in kernel mode. Anything related to process management, I/O hardware management, and memory management requires a process to execute in kernel mode.
It is important to know that a process in kernel mode has the power to access any device and any memory, so a crash in kernel mode brings down the whole system, whereas a crash in user mode brings down only the faulty process. The kernel provides the System Call Interface (SCI), whose entry points allow user processes to enter kernel mode. System calls are the only way for a process to go from user mode into kernel mode. The diagram below explains user mode to kernel mode switching in detail.

Example of Dual Mode Operation

With the mode bit, we can distinguish between a task executed on behalf of the operating system and one executed on behalf of the user. When the computer system executes on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system via a system call, it must transition from user mode to kernel mode to fulfill the request. This architectural enhancement is useful for many other aspects of system operation as well. At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode, changing the state of the mode bit to 0. Thus, whenever the operating system gains control of the computer, it is in kernel mode. The system always switches back to user mode by setting the mode bit to 1 before passing control to a user program.
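As a hedged example of this transition, the short C program below runs in user mode and asks the kernel to perform output with the write() system call; the trap into kernel mode and the return to user mode happen inside that call. The message text is illustrative.

/* Sketch: a user-mode program requesting a kernel service. */
#include <unistd.h>

int main(void)
{
    const char msg[] = "written via a system call\n";
    ssize_t n = write(1, msg, sizeof msg - 1);   /* file descriptor 1 = standard output */
    return (n < 0) ? 1 : 0;                      /* the kernel's result comes back in user mode */
}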

System Calls in Operating System (OS)


A system call is a way for a user program to interface with the operating system. The program requests several
services, and the OS responds by invoking a series of system calls to satisfy the request. A system call can be written
in assembly language or a high-level language like C or Pascal. System calls are predefined functions that the
operating system may directly invoke if a high-level language is used. A system call is a method for a computer
program to request a service from the kernel of the operating system on which it is running. A system call is a
method of interacting with the operating system via programs. A system call is a request from computer software to
an operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to user programs. It acts as a
link between the operating system and a process, allowing user-level programs to request operating system services.
The kernel system can only be accessed using system calls. System calls are required for any programs that use
resources.

How is a system call made?

When computer software needs to access the operating system's kernel, it makes a system call. The system call uses
an API to expose the operating system's services to user programs. It is the only method to access the kernel system.
All programs or processes that require resources for execution must use system calls, as they serve as an interface
between the operating system and user programs.
Below are some examples of how a system call varies from a user function.

•A system call function may create and use kernel processes to execute the asynchronous processing.
•A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege executes in
the kernel protection domain.

•System calls are not permitted to use shared libraries or any symbols that are not present in the kernel protection
domain.
• The code and data for system calls are stored in global kernel memory.

Why do you need system calls in the Operating System?

There are various situations in which system calls are required in the operating system. Some of these situations are as follows:
• It is required when a file system wants to create or delete a file.
• Network connections require the system calls to send and receive data packets.
• If you want to read or write a file, you need to make system calls.
• If you want to access hardware devices, such as a printer or scanner, you need a system call. System calls are also used to create and manage new processes.

How System Calls Work

The Applications run in an area of memory known as user space. A system call connects to the operating system's
kernel, which executes in kernel space. When an application makes a system call, it must first obtain permission from the kernel. It achieves this using an interrupt request, which pauses the current process and transfers control to
the kernel. If the request is permitted, the kernel performs the requested action, like creating or deleting a file. As
input, the application receives the kernel's output. The application resumes the procedure after the input is received.
When the operation is finished, the kernel returns the results to the application and then moves data from kernel
space to user space in memory.
A simple system call may take a few nanoseconds to provide the result, like retrieving the system date and time. A
more complicated system call, such as connecting to a network device, may take a few seconds. Most operating
systems launch a distinct kernel thread for each system call to avoid bottlenecks. Modern operating systems are
multi-threaded, which means they can handle various system calls at the same time.

Types of System Calls

Process Control

Process control is the system call category used to direct processes. Some process control examples include create, load, execute, end, abort, and terminate process.
File Management
File management system calls are used to handle files. Some file management examples include create file, delete file, open, close, read, and write.
Device Management
Device management system calls are used to deal with devices. Some examples of device management include read device, write device, get device attributes, and release device.
Information Maintenance
Information maintenance system calls are used to maintain information. Some examples of information maintenance include get system data, set system data, get time or date, and set time or date.
Communication
Communication system calls are used for communication. Some examples of communication include create and delete communication connections, and send and receive messages.
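As a hedged illustration of the file-management category, the C sketch below copies a few bytes using the open(), read(), write(), and close() system calls; the file name notes.txt is made up for the example.

/* Sketch: file-management system calls in action. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[128];
    int fd = open("notes.txt", O_RDONLY);        /* system call: open an existing file */
    if (fd < 0)
        return 1;                                /* the kernel reports failure via the return value */

    ssize_t n = read(fd, buf, sizeof buf);       /* system call: read up to 128 bytes */
    if (n > 0)
        write(1, buf, (size_t)n);                /* system call: copy them to standard output */

    close(fd);                                   /* system call: release the descriptor */
    return 0;
}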

Examples of Windows and UNIX system calls.

Process control: The Windows calls are CreateProcess(), ExitProcess(), and WaitForSingleObject(); the UNIX calls are fork(), exit(), and wait().

File manipulation: The Windows calls are CreateFile(), ReadFile(), WriteFile(), and CloseHandle(); the UNIX calls are open(), read(), write(), and close().
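To show the UNIX process-control calls from the table in action, here is a hedged C sketch built around fork(), wait(), and exit(); the printed messages are illustrative only.

/* Sketch: process control with fork(), wait(), and exit(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                          /* create a child process */

    if (pid == 0) {                              /* child branch */
        printf("child %d running\n", (int)getpid());
        exit(0);                                 /* terminate the child */
    } else if (pid > 0) {                        /* parent branch */
        int status;
        wait(&status);                           /* wait for the child to finish */
        printf("parent: child finished\n");
    } else {
        perror("fork");                          /* fork failed */
        return 1;
    }
    return 0;
}

The Windows column of the table maps onto the same pattern, with CreateProcess() in place of fork() and WaitForSingleObject() in place of wait().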

Difference between User Mode and Kernel Mode

Definition: User Mode is the restricted mode in which application programs start and execute. Kernel Mode is the privileged mode, which the computer enters when accessing hardware resources.

Modes: User Mode is also called the slave mode or restricted mode. Kernel Mode is also called the system mode, master mode, or privileged mode.

Address space: In User Mode, each process gets its own address space. In Kernel Mode, processes share a single address space.

Interruptions: In User Mode, if an interrupt occurs, only the one process fails. In Kernel Mode, if an interrupt occurs, the whole operating system might fail.

Restrictions: In User Mode, there are restrictions on accessing kernel programs; they cannot be accessed directly. In Kernel Mode, both user programs and kernel programs can be accessed.

Important Questions

Q.No.  Questions (CO1)

1 What is the Operating System? Describe the Operating System Functions.


2 Enumerate various operating system components with their functions in brief.
3 Explain the Batch Operating System with an example.
4 Explain in detail about the Operating System Services.
5 Explain in detail about the Monolithic and Microkernel Systems.
6 Write down different types of Operating Systems.
7 Differentiate between (with one suitable example)
● Interactive and Batch Processing System

● Multiprogramming and Time-Sharing System


8 What is Kernel? Describe various operations performed by the kernel.
9 Comparatively, analyze the different operating structures.
10 What do you understand by system call? How is a system call made? How is a system call
handled by the System? Choose a suitable example for explanation.
