
COURSE PACK FOR

Subject Title: - OPERATING SYSTEM (CBCS 2018 course)

COURSE CODE : 301


COURSE : BCA
SEMESTER III
YEAR : 2020-21

Course Instructor: Dr. Daljeet Singh Bawa

Ms. Nisha Malhotra

Course Leader: Dr. Daljeet Singh Bawa

(Dr. A.K. Srivastava)


(Dr. Vikas Nath)

Forwarded by: HOD          Approved by: Director (I/C)

Bharati Vidyapeeth (Deemed to be University), Institute of Management & Research, New Delhi

An ISO 9001:2015 & 14001:2015 Certified Institute
A-4, Paschim Vihar, New Delhi-110063 (Ph: 011-25284396, 25285808; Fax No. 011-25286442)

Note: “Strictly for Internal academic use only”


TABLE OF CONTENTS

Sr. No   Contents                                                  Page No

1    Syllabus                                                      1

2    Course Overview, Objective and Learning Outcomes              3-4

3    Evaluation Criteria                                           5

4    Books Recommendation                                          6

5    Session Plan                                                  7

6    Brief Profile of Faculty                                      11-12

7    Unit 1: Introduction to Operating System                      14
      Definition and concept of OS
      History of OS
      Importance and function of operating system
      Types of OS
      Views – command language user's view, system call user's view, structure of OS
      Command line interface, GUI, system calls

8    Unit 2: Process Management                                    26
      Process concept
      Process Control Block
      Process states and its transitions
      Context switch
      OS services for process management
      Scheduling and types of schedulers
      Scheduling algorithms

9    Unit 3: Storage Management                                    32
      Basic concept of storage management
      Logical and physical address space
      Swapping
      Contiguous allocation, non-contiguous allocation, fragmentation
      Segmentation
      Paging, demand paging, virtual memory
      Page replacement algorithms
      Design issues of paging and thrashing

10   Unit 4: Inter-process communication and synchronization       43
      Mutual exclusion
      Semaphore
      Busy-wait implementation
      Characteristics of semaphores
      Queuing implementation of semaphores
      Producer-consumer problem
      Critical region and conditional critical region
      Deadlock

11   Unit 5: File Systems                                          46
      Files – basic concept
      File attributes, operations
      File types, file structure, access methods
      Directory structure – single-level directory system
      Directory system

12   Unit 6: Input/output System                                   50
      Principles of I/O hardware
      I/O devices, device controller
      DMA, principles of I/O software – goals, interrupt handler
      Device driver
      Mass storage structure – disk structure
      Disk scheduling

13   Practice Questions                                            60

14   Previous year University question papers                      64

15   Previous year Internal Question papers                        73
Course Number   Course Name        L-T-P-Credits   Year of Introduction
301             Operating System   3L-1T-0P = 4C   2018

Course Objectives
• To understand the concepts related to operating systems
• To understand different approaches of memory management
• To understand the concept of deadlock
• To understand the file system in an operating system
• To operate with CPU scheduling algorithms
• To understand the concept of process management
Expected Outcome:
This course focuses on the concepts of different types of operating systems, the concept of a process, CPU scheduling and deadlock. It also explains file concepts, access methods, directory structure, protection, file system structure, allocation methods and free space management.

Reference Books:
• Operating Systems Principles, Galvin, Wiley, 7th Edition
• Operating Systems Principles, Galvin, Wiley, 9th Edition
• Operating Systems Concepts and Design, Milan Milenkovic, TMH

Course Plan
UNIT Contents
1 Introduction to Operating System:
Definition and concept of OS,
history of OS,
importance and function of operating system,
types of OS, views – command language user's view,
system call user's view, structure of OS,
command line interface, GUI, system calls
2 Process Management:
Process concept, Process Control Block, process
states and its transitions, context switch, OS
services for Process management, scheduling and
types of schedulers and scheduling algorithm
3 Storage Management:
Basic concept of storage management, logical
and physical address space, swapping,
contiguous allocation, non-contiguous
allocation, fragmentation, segmentation,
paging, demand paging, virtual memory, page
replacement algorithms and design issues of
paging and thrashing
4 Inter-process communication and
synchronization:
Mutual exclusion, semaphore, busy-wait
implementation, characteristics of semaphores,
queuing implementation of semaphores, producer-
consumer problem, critical region and
conditional critical region, and deadlock
5 File Systems
Files-basic concept, file attributes, operations, file
types, file structure, access methods, Directory-
structure-single level directory system and
Directory system
6 Input/output System:
Principles of I/O hardware,
I/O devices, device controller,
DMA, principles of I/O software – goals, interrupt
handler, device driver, mass storage structure – disk
structure and disk scheduling
Course Overview:

This course gives you a general understanding of how a computer works. This includes concepts related to computer system architecture and the key functions an operating system performs to manage hardware resources. It focuses on the basic principles of operating systems: process management, memory management, input/output management and file management. The course also covers the concept of mutual exclusion and various attempts/algorithms to solve this problem.

Unit-wise, the course covers:

 Operating system, history of OS, OS types, operating system structures – command interpreter systems, operating system services, system calls, system programs.
 Process concept, Process Control Block (PCB), process scheduling, CPU scheduling basic concepts, scheduling algorithms – FIFO, RR, SJF, multilevel, multilevel feedback.
 Logical and physical address space, swapping, contiguous allocation, paging, segmentation, virtual memory – demand paging, page replacement, page replacement algorithms, allocation of frames, thrashing and demand segmentation.
 The need for inter-process communication, mutual exclusion, semaphore definition, busy-wait implementation, characteristics of semaphores, queuing implementation of semaphores, the producer-consumer problem, critical region and conditional critical region.
 Conditions for deadlock to occur, reusable and consumable resources, deadlock prevention, deadlock avoidance, resource request, resource release, detection and recovery.
 File concepts, access methods, directory structure, protection, file system structure, allocation methods, free space management.
 Overview of I/O systems, I/O interface, secondary storage structure – disk structure, disk scheduling. Case study: UNIX, LINUX and WINDOWS operating systems, and an overview of the ANDROID operating system.
Learning Outcomes:

After undergoing this course, the student will be able to:

CO1: Describe OS and different types of operating systems and their environments, OS services, system calls and operating system structures.

CO2: Understand the concept of process, PCB, process and CPU scheduling, and scheduling algorithms.

CO3: Explain logical and physical address space, swapping, contiguous allocation, paging, segmentation, virtual memory – demand paging, page replacement algorithms, allocation of frames, and thrashing.

CO4: Explain mutual exclusion, semaphore definition, busy-wait implementation, characteristics of semaphores, queuing implementation of semaphores, the producer-consumer problem, critical region and conditional critical region, and deadlock.

CO5: Explain file concepts, access methods, directory structure, protection, file system structure, allocation methods, and free space management.

CO6: Describe the overview of I/O systems, I/O interface, and secondary storage structure – disk structure and disk scheduling.
Evaluation Criteria:

Component                     Description                                              Weightage

First Internal Examination    First internal question paper will be based on the       10 marks
                              first 3 units of the syllabus.

Second Internal Examination   Second internal question paper will be based on the      10 marks
                              last 3 units of the syllabus.

CES Quiz                      Moodle MCQs based on the concepts of operating systems   5 marks

CES Quiz                      Moodle MCQs based on the concepts of operating systems   5 marks

CES Quiz                      Moodle MCQs based on the concepts of operating systems   5 marks

Attendance                    Above 75% – 10 marks; below 75% – 0 marks                10 marks

Note:
All three CES are mandatory. If a student misses any one CES, the weightage of each CES will be 3.33 marks; if a student attempts all three CES, his/her best two will be considered, with a weightage of 5 marks each.
Recommended / Reference Text Books and Resources:

Text Books:          Text Book 1A: Operating Systems Principles, Galvin, Wiley, 7th Edition
                     Text Book 1B: Operating Systems Principles, Galvin, Wiley, 9th Edition
Course Reading:      Operating Systems Concepts and Design, Milan Milenkovic, TMH
Internet Resources:  http://www.tutorialspoint.com/operating_system/operating_system_tutorial.pf
                     http://www.cs.utexas.edu/users/witchel/372/lectures/01.OSHistory.pdf
Session Plan:

Unit I: Introduction to Operating Systems
Lecture 1: What is an operating system? History of OS (Text Book 1, Chapter 1) – LO 1: Student will learn about OS and its history – CO1
Lecture 2: Simple batch systems; multiprogrammed batch systems (Text Book 1, Chapter 1) – LO 2: Student will learn about OS types – CO1
Lecture 3: Time-sharing systems; personal computer systems; distributed systems and real-time systems (Text Book 1, Chapter 1) – LO 3: Student will learn about OS types – CO1
Lecture 4: Operating system structures – command interpreter systems (Text Book 1, Chapter 2) – LO 4: Student will learn about OS structures – CO1
Lecture 5: Operating system services; system calls; system programs (Text Book 1, Chapter 2) – LO 5: Student will learn about system services and calls – CO1

Unit 2: Process Management
Lecture 6: Process concept; Process Control Block (PCB) (Text Book 1, Chapter 3) – LO 6: Student will learn about processes and the PCB – CO2
Lecture 7: Process scheduling (Text Book 1, Chapter 3) – LO 7: Student will learn about process scheduling – CO2
Lecture 8: CPU scheduling – basic concepts (Text Book 1, Chapter 5) – LO 7: Student will learn about process scheduling – CO2
Lecture 9: Scheduling algorithms – FIFO, RR, SJF, multilevel, multilevel feedback (Text Book 1, Chapter 5) – LO 8: Student will learn about various scheduling algorithms – CO2
Lecture 10: Scheduling algorithms – FIFO, RR, SJF, multilevel, multilevel feedback (continued) (Text Book 1, Chapter 5) – LO 8: Student will learn about various scheduling algorithms – CO2

Unit 3: Storage Management (Text Book 1, Chapters 8 and 9)
Lecture 11: Basic concepts; logical and physical address space – LO 9: Student will learn about logical and physical address space – CO3
Lecture 12: Swapping – LO 10: Student will learn about swapping techniques – CO3
Lecture 13: Contiguous allocation – LO 11: Student will learn about contiguous allocation – CO3
Lecture 14: Paging – LO 12: Student will learn about paging – CO3
Lecture 15: Segmentation – LO 13: Student will learn about segmentation – CO3
Lecture 16: Virtual memory – demand paging – LO 14: Student will learn about virtual memory and demand paging – CO3
Lecture 17: Page replacement – LO 15: Student will learn about page replacement – CO3
Lecture 18: Page replacement algorithms – LO 16: Student will learn about page replacement algorithms – CO3
Lecture 19: Allocation of frames – LO 17: Student will learn about frame allocation – CO3
Lecture 20: Thrashing and demand segmentation – LO 18: Student will learn about thrashing and demand segmentation – CO3

Unit 4: Inter-process Communication & Synchronization (Text Book 1, Chapters 6 and 7)
Lecture 21: Need for inter-process communication; mutual exclusion (Chapter 6) – LO 19: Student will learn about mutual exclusion and its need – CO4
Lecture 22: Semaphore definition (Chapter 6) – LO 19: Student will learn about semaphores – CO4
Lecture 23: Busy-wait implementation (Chapter 6) – LO 19: Student will learn about semaphores – CO4
Lecture 24: Characteristics of semaphores (Chapter 6) – LO 19: Student will learn about semaphore characteristics – CO4
Lecture 25: Queuing implementation of semaphores (Chapter 6) – LO 20: Student will learn about the queuing implementation of semaphores – CO4
Lecture 26: Producer-consumer problem (Chapter 6) – LO 21: Student will learn about traditional concurrency problems – CO4
Lecture 27: Critical region and conditional critical region (Chapter 6) – LO 22: Student will learn about critical regions and conditional critical regions – CO4
Lecture 28: Conditions for deadlock to occur (Chapter 7) – LO 22: Student will learn about deadlock basics – CO4
Lecture 29: Reusable and consumable resources (Chapter 7) – LO 23: Student will learn about the various resource types – CO4
Lecture 30: Deadlock prevention (Chapter 7) – LO 24: Student will learn about deadlock prevention – CO4
Lecture 31: Deadlock avoidance (Chapter 7) – LO 24: Student will learn about deadlock avoidance – CO4
Lecture 32: Resource request; resource release (Chapter 7) – LO 25: Student will learn about resource request and release – CO4
Lecture 33: Detection and recovery (Chapter 7) – LO 26: Student will learn about deadlock detection and recovery – CO4

Unit 5: File Systems (Text Book 1, Chapters 10 and 11)
Lecture 34: File concepts – LO 27: Student will learn about file concepts – CO5
Lecture 35: Access methods; directory structure; protection – LO 28: Student will learn about access methods, directory structure and protection – CO5
Lecture 36: File system structure; allocation methods; free space management – LO 29: Student will learn about file system structure and allocation methods – CO5

Unit 6: I/O Systems (Text Book 1, Chapter 13)
Lecture 37: Overview of I/O systems; I/O interface; secondary storage structure – disk structure – LO 30: Student will learn about I/O systems – CO6
Lecture 38: Disk scheduling – LO 30: Student will learn about disk structure and scheduling – CO6
Lecture 39: Case study: UNIX, LINUX and WINDOWS operating systems – LO 31: Discussion of various OS case studies – CO6
Lecture 40: Overview of the ANDROID operating system – LO 31: Discussion of various OS case studies – CO6
Dr. DALJEET SINGH BAWA
Mobile: 9582035733
E-Mail: [email protected]

Dr. Daljeet Singh Bawa holds a PhD in Computer Science and is presently working as Assistant Professor in the IT Department at Bharati Vidyapeeth University Institute of Management and Research, New Delhi. He has also completed an M.Phil (Computer Science) and loves experimenting with new software. His areas of specialization are Software Engineering, Operating Systems, Computer Organization and Architecture, e-learning and e-assessment, and he has rich experience of working with live software projects. His research work revolves around e-learning, blended learning and e-assessment, and he has 24 research papers to his credit. He can be contacted at [email protected].
NISHA MALHOTRA
Mobile: 9899995540
E-Mail: [email protected]

Academic Qualifications

 Completed M. Tech. from Netaji Subhas Institute of Technology (NSIT), Delhi University, and B. Tech (Computer Science Engineering) from N.C. College of Engineering, Kurukshetra University.
 Also completed the CISCO Certified Network Associate (CCNA) certification.

Teaching Subjects

 Java Programming
 C# programming
 Linux operating system
 Data Structures
 Database Management System
 Operating System
 Software Engineering
 C, C++
 Object oriented programming and analysis
 Compiler design

Work Experience

 Underwent training and developed the Joining Report Module of the Integrated Management Information Dissemination System (IMIS) at the Defence Research and Development Organization (DRDO), Ministry of Defence, Delhi.
 Underwent training in Virtual Network Computing at the Information Technology Department, AAI (Airports Authority of India), Safdarjung Airport, New Delhi.
 Worked as a Lecturer in the Computer Science Department at The Gate Academy Institute, Pitampura, New Delhi.
 Worked at IITM, affiliated with G.G.S. Indraprastha University, Delhi, as an Assistant Professor.
 Works at Bharati Vidyapeeth Institute of Management and Research, Paschim Vihar, Delhi, as a visiting faculty member.
 Also works at Bharati Vidyapeeth College of Engineering, Paschim Vihar, Delhi, as a visiting faculty member.
STUDY NOTES
UNIT 1

Introduction to Operating System:

 Definition and concept of OS
 History of OS
 Importance and function of operating system
 Types of OS
 Views – command language user's view, system call user's view, structure of OS
 Command line interface, GUI, system calls

 Operating system
An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system.
An operating system is an interface between a computer user and the computer hardware. It is software that performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating System, VMS, OS/400,
AIX, z/OS, etc.
Following are some of important functions of an operating System:

 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users
 Applications of Operating System
Following are some of the important activities that an Operating System performs −
 Security − By means of password and similar other techniques, it prevents unauthorized access to
programs and data.
 Control over system performance − Recording delays between request for a service and response from the
system.
 Job accounting − Keeping track of time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other debugging and error
detecting aids.
 Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.
An operating system is a program that acts as an interface between the user and the computer hardware and controls
the execution of all kinds of programs.
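The OS's role as service provider is visible even from a high-level language: every file and process operation below is ultimately carried out by the operating system on the program's behalf through system calls. This is an illustrative sketch in Python (an assumed choice, not part of the course material), using only standard-library calls:

```python
import os

# File management: creating and writing a file is an OS service.
with open("demo.txt", "w") as f:
    f.write("hello")

# Process management: the OS assigns every process an identifier.
pid = os.getpid()

# File metadata is maintained by the OS and queried via stat().
size = os.stat("demo.txt").st_size

print(pid, size)          # pid varies per run; size is 5 bytes
os.remove("demo.txt")     # deletion is another OS-mediated operation
```

Each of these Python calls is a thin wrapper over the corresponding operating-system facility (open/write, getpid, stat, unlink on UNIX-like systems).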

For explanation of types of Operating system – refer hand written notes.


Advantages and disadvantages of the types of operating systems

Advantages of Batch Operating System:
 Processors of batch systems know how long a job will take when it is in the queue
 Multiple users can share the batch systems
 The idle time for batch systems is very low
 It is easy to manage large work repeatedly in batch systems

Disadvantages of Batch Operating System:

 It is very difficult to guess or know the time required for any job to complete
 The computer operators must be well acquainted with batch systems
 Batch systems are hard to debug
 They are sometimes costly
 Other jobs will have to wait an unknown time if any job fails

Advantages of Time-Sharing OS:

 Each task gets an equal opportunity


 Fewer chances of duplication of software
 CPU idle time can be reduced

Disadvantages of Time-Sharing OS:

 Reliability problem
 One must have to take care of security and integrity of user programs and data
 Data communication problem
Advantages of Distributed Operating System:

 Failure of one will not affect the other network communication, as all systems are independent from each
other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces

Disadvantages of Distributed Operating System:

 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not yet well defined
 These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not yet well understood

Advantages of Network Operating System:

 Highly stable centralized servers


 Security concerns are handled through servers
 New technologies and hardware upgrades are easily integrated into the system
 Servers can be accessed remotely from different locations and types of systems

Disadvantages of Network Operating System:

 Servers are costly


 User has to depend on central location for most operations
 Maintenance and updates are required regularly

Advantages of RTOS (real time operating system):

 Maximum Consumption: Maximum utilization of devices and the system, thus more output from all the resources
 Task Shifting: The time assigned for shifting tasks in these systems is very short. For example, in older systems it takes about 10 microseconds to shift from one task to another, while in the latest systems it takes 3 microseconds.
 Focus on Application: Focus is on running applications, with less importance given to applications waiting in the queue.
 Real-time operating system in embedded systems: Since the size of programs is small, an RTOS can also be used in embedded systems, such as in transport and others.
 Error Free: These types of systems are error free.

 Memory Allocation: Memory allocation is best managed in these types of systems.

Disadvantages of RTOS:

 Limited Tasks: Very few tasks run at the same time, and concentration is kept on very few applications to avoid errors.
 Heavy use of system resources: Sometimes the system resources are not so good, and they are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
 Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as early as possible.
 Thread Priority: It is not good to set thread priority, as these systems are very less prone to switching tasks.

Examples of real-time operating systems: scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

 Operating system – System view and user view

The operating system can be observed from the point of view of the user or the system. This is known as the user
view and the system view respectively. More details about these are given as follows −

User View
The user view depends on the system interface that is used by the users. The different types of user view
experiences can be explained as follows −

 If the user is using a personal computer, the operating system is largely designed to
make the interaction easy. Some attention is also paid to the performance of the system,
but there is no need for the operating system to worry about resource utilization. This is
because the personal computer uses all the resources available and there is no sharing.

 If the user is using a system connected to a mainframe or a minicomputer, the


operating system is largely concerned with resource utilization. This is because there
may be multiple terminals connected to the mainframe and the operating system
makes sure that all the resources such as CPU,memory, I/O devices etc. are divided
uniformly between them.

 If the user is sitting at a workstation connected to other workstations through networks, then the operating system needs to focus on both individual usage of resources and sharing through the network. This is because the workstation exclusively uses its own resources but also needs to share files etc. with other workstations across the network.

 If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery
level of the device is also taken into account.

There are some devices that have little or no user view because there is no interaction with users. Examples are embedded computers in home devices, automobiles, etc.
System View
From the computer system's point of view, the operating system is the bridge between applications and hardware. It is the component most intimate with the hardware and is used to control it as required.
The different types of system view for an operating system can be explained as follows:

 The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc. that
are required by processes for execution. It is the duty of the operating system to
allocate these resources judiciously to the processes so that the computer system can
run as smoothly as possible.
 The operating system can also work as a control program. It manages all the processes
and I/O devices so that the computer system works smoothly and there are no errors. It
makes sure that the I/O devices work in a proper manner without creating problems.
 Operating systems can also be viewed as a way to make using hardware easier. Computers were built to solve user problems, but it is not easy to work directly with computer hardware, so operating systems were developed to communicate with the hardware more easily.
 An operating system can also be considered as a program running at all times in the background of a computer system (known as the kernel) and handling all the application programs. This is the definition of the operating system that is generally followed.

 History of Operating system

The 1940's - First Generation

The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plug boards). Programming languages were unknown (not even assembly languages). Operating systems were unheard of.

The 1950's - Second Generation

By the early 1950s, the routine had improved somewhat with the introduction of punch cards. The General Motors Research Laboratories implemented the first operating system in the early 1950s for their IBM 701. The systems of the 1950s generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups or batches.

The 1960's - Third Generation


The systems of the 1960's were also batch processing systems, but they were able to take better advantage of the
computer's resources by running several jobs at once. So operating systems designers developed the concept of
multiprogramming in which several jobs are in main memory at once; a processor is switched from job to job as
needed to keep several jobs advancing while keeping the peripheral devices in use.

For example, on the system with no multiprogramming, when the current job paused to wait for other I/O
operation to complete, the CPU simply sat idle until the I/O finished. The solution for this problem that evolved
was to partition memory into several pieces, with a different job in each partition. While one job was waiting for
I/O to complete, another job could be using the CPU.

Another major feature of third-generation operating systems was the technique called spooling (simultaneous peripheral operations on line). In spooling, a high-speed device like a disk is interposed between a running program and a low-speed device involved in the program's input/output. Instead of writing directly to a printer, for example, outputs are written to the disk. Programs can run to completion faster and other programs can be initiated sooner; when the printer becomes available, the outputs can be printed.

Note that the spooling technique is much like thread being spun onto a spool so that it may later be unwound as needed.
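The idea behind spooling can be sketched as a toy program. Here a queue stands in for the disk buffer and a background thread plays the slow printer; this is a hypothetical illustration in Python (names such as `printer_daemon` are our own), not how a real spooler is implemented:

```python
import queue
import threading
import time

# The spool: the program writes its output here (standing in for the
# disk) and finishes immediately; a separate spooler thread drains it
# to the slow device at the device's own pace.
spool = queue.Queue()

def program():
    for i in range(3):
        spool.put(f"page {i}")   # fast: write to the spool, not the printer
    spool.put(None)              # sentinel: job complete

printed = []

def printer_daemon():
    # Drain the spool until the job-complete sentinel arrives.
    while (item := spool.get()) is not None:
        time.sleep(0.01)         # the slow device working
        printed.append(item)

t = threading.Thread(target=printer_daemon)
t.start()
program()                        # returns quickly; printing continues behind it
t.join()
print(printed)                   # ['page 0', 'page 1', 'page 2']
```

The program is decoupled from the printer's speed: it "prints" three pages almost instantly, while the real output trickles out afterwards, which is exactly the benefit described above.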

Another feature present in this generation was the time-sharing technique, a variant of multiprogramming, in which each user has an on-line (i.e., directly connected) terminal. Because the user is present and interacting with the computer, the computer system must respond quickly to user requests; otherwise user productivity could suffer. Time-sharing systems were developed to multiprogram large numbers of simultaneous interactive users.

Fourth Generation

With the development of LSI (Large Scale Integration) circuits and chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point where it became possible to build desktop computers as powerful as the mainframes of the 1970s. Two operating systems have dominated the personal computer scene: MS-DOS, written by Microsoft, Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX, which is dominant on the large personal computers using the Motorola 68000 CPU family.

Early Evolution

 1945: ENIAC, Moore School of Engineering, University of Pennsylvania.


 1949: EDSAC and EDVAC
 1949: BINAC - a successor to the ENIAC
 1951: UNIVAC by Remington Rand
 1952: IBM 701
 1956: The interrupt
 1954-1957: FORTRAN was developed
Operating Systems - Late 1950s
By the late 1950s, operating systems had improved considerably and supported the following:

 Single-stream batch processing.
 Common, standardized input/output routines for device access.
 Program transition capabilities to reduce the overhead of starting a new job.
 Error recovery to clean up after a job terminated abnormally.
 Job control languages that allowed users to specify the job definition and resource requirements.

Operating Systems - In 1960s

 1961: The dawn of minicomputers


 1962: Compatible Time-Sharing System (CTSS) from MIT
 1963: Burroughs Master Control Program (MCP) for the B5000 system
 1964: IBM System/360
 1960s: Disks became mainstream
 1966: Minicomputers got cheaper, more powerful, and really useful.
 1967-1968: Mouse was invented.
 1964 and onward: Multics
 1969: The UNIX Time-Sharing System from Bell Telephone Laboratories.

Supported OS Features by 1970s

 Multi-user and multitasking were introduced.


 Dynamic address translation hardware and Virtual machines came into picture.
 Modular architectures came into existence.
 Personal, interactive systems came into existence.

Accomplishments after 1970

 1971: Intel announces the microprocessor


 1972: IBM comes out with VM: the Virtual Machine Operating System
 1973: UNIX 4th Edition is published
 1973: Ethernet
 1974 The Personal Computer Age begins
 1974: Gates and Allen wrote BASIC for the Altair
 1976: Apple II
 August 12, 1981: IBM introduces the IBM PC
 1983 Microsoft begins work on MS-Windows
 1984 Apple Macintosh comes out
 1990 Microsoft Windows 3.0 comes out
 1991 GNU/Linux
 1992 The first Windows virus comes out
 1993 Windows NT
 2007: iOS
 2008: Android OS

 Multiprogramming, multitasking, multithreading and multiprocessing

1. Multiprogramming – A computer running more than one program at a time (like running Excel and Firefox
simultaneously).

2. Multiprocessing – A computer using more than one CPU at a time.

3. Multitasking – Tasks sharing a common resource (like 1 CPU).

4. Multithreading is an extension of multitasking.

1. Multiprogramming –
In a modern computing system, there are usually several concurrent application processes that want to execute, and
it is the responsibility of the Operating System to manage all of them effectively and efficiently. Multiprogramming
is one of the most important capabilities of an Operating System.
In a computer system there are multiple processes waiting to be executed, i.e. waiting for the CPU to be allocated to
them so that they can begin execution. These processes are also known as jobs. The main memory is too small to
accommodate all of these processes or jobs at once, so they are initially kept in an area called the job pool. This job
pool consists of all those processes awaiting allocation of main memory and the CPU.
The operating system selects one job from the pool, brings it from the job pool into main memory, and starts
executing it. The processor executes that job until it is interrupted by some external factor or it goes for an I/O task.

2. Multiprocessing –
In a uni-processor system, only one process executes at a time.
Multiprocessing is the use of two or more CPUs (processors) within a single computer system. The term also
refers to the ability of a system to support more than one processor. Since multiple processors are available,
multiple processes can be executed at the same time. These processors share the computer bus, and sometimes
the clock, memory and peripheral devices as well.

3. Multitasking –
As the name itself suggests, multitasking refers to the execution of multiple tasks (processes, programs, threads,
etc.) at a time. In modern operating systems we can play MP3 music, edit documents in Microsoft Word and browse
the web in Google Chrome all simultaneously; this is accomplished by means of multitasking.
Multitasking is a logical extension of multiprogramming. The major way in which multitasking differs from
multiprogramming is that multiprogramming works solely on the concept of context switching, whereas
multitasking is based on time sharing alongside the concept of context switching.

 Context Switching
Context switching is the procedure of switching the CPU from one process or task to another. In this
phenomenon, the kernel suspends the execution of the process that is in the running state, and the CPU begins
executing another process that is in the ready state.

It is one of the essential features of a multitasking operating system. Processes are switched so quickly that the
user has the illusion that all the processes are being executed at the same time.

Context switching involves a number of steps. You cannot directly move a process from the running state to the
ready state: you must first save the context of that process. If the context of a process P is not saved, then when
P next comes to the CPU for execution, it will start executing from the beginning, whereas it should continue
from the point where it left the CPU in its previous execution. So, the context of a process must be saved before
any other process is put into the running state.

A context is the contents of a CPU's registers and program counter at any point in time. Context switching
can happen due to the following reasons:

 When a process of high priority comes in the ready state. In this case, the execution of the running process
should be stopped and the higher priority process should be given the CPU for execution.
 When an interrupt occurs, the process in the running state should be stopped and the CPU should
handle the interrupt before doing anything else.
 When a transition between user mode and kernel mode is required, a context switch is performed.
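The save-and-restore sequence described above can be pictured with a toy Python sketch (the register names and the dictionary standing in for the CPU are illustrative only; real context switches are performed by the kernel, typically in assembly):

```python
# Toy context-switch simulation: each process owns a saved context
# (program counter + registers). A switch saves the running process's
# context from the "CPU" and restores the next process's context into it.

class Process:
    def __init__(self, pid):
        self.pid = pid
        # Hypothetical context: program counter and two registers.
        self.context = {"pc": 0, "r0": 0, "r1": 0}

def context_switch(cpu, current, nxt):
    current.context = dict(cpu)   # 1. save the running process's context
    cpu.clear()                   # 2. restore the next process's context
    cpu.update(nxt.context)
    return nxt

cpu = {"pc": 0, "r0": 0, "r1": 0}
p1, p2 = Process(1), Process(2)

cpu.update({"pc": 42, "r0": 7})       # P1 runs for a while
running = context_switch(cpu, p1, p2)  # switch P1 -> P2
assert cpu == {"pc": 0, "r0": 0, "r1": 0}  # P2 starts from its own context

cpu.update({"pc": 5})                  # P2 runs for a while
running = context_switch(cpu, p2, p1)  # switch back to P1
assert cpu["pc"] == 42 and cpu["r0"] == 7  # P1 resumes where it left off
```

Without step 1 (saving), P1 would lose the `pc = 42` state and restart from the beginning, exactly the failure described above.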

 System Calls in OS

In computing, a system call is the programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on. A system call is thus a way for programs to interact with the
operating system: a program makes a system call when it requests a service from the operating system's kernel.
System calls provide the services of the operating system to user programs via the Application Program
Interface (API), forming an interface between a process and the operating system that allows user-level processes to
request services of the operating system. System calls are the only entry points into the kernel; all programs
needing resources must use system calls.

Services Provided by System Calls :


1. Process creation and management
2. Main memory management
3. File Access, Directory and File system management
4. Device handling(I/O)
5. Protection
6. Networking, etc.
Types of System Calls : There are 5 different categories of system calls –
1. Process control: end, abort, create, terminate, allocate and free memory.
2. File management: create, open, close, delete, read file etc.
3. Device management
4. Information maintenance
5. Communication
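Several of these categories can be exercised from a user program. In Python, the functions in the `os` module are thin wrappers over the kernel's system calls on a POSIX-like system; a small sketch (the file name is arbitrary):

```python
import os

# Information maintenance: getpid() wraps the "get process id" system call.
pid = os.getpid()
assert pid > 0

# File management: open/write/read/close wrap the corresponding system calls.
fd = os.open("demo.txt", os.O_CREAT | os.O_RDWR | os.O_TRUNC)
os.write(fd, b"hello")           # write() system call
os.lseek(fd, 0, os.SEEK_SET)     # reposition the file offset to the start
data = os.read(fd, 5)            # read() system call
os.close(fd)                     # close() system call
os.unlink("demo.txt")            # delete the file
assert data == b"hello"
```

Each call above crosses from user mode into kernel mode and back, which is exactly the entry point into the kernel described in the text.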

 Operating System Structure


The structure of an Operating System determines how it has been designed and how it functions. There are
numerous ways of designing the structure of an Operating System. In this section, we will learn about several
structures that have been tried and tested.
TYPES OF OPERATING SYSTEM STRUCTURE (for diagrams of these structures refer PPT)
• MONOLITHIC STRUCTURE
• SIMPLE STRUCTURE
• LAYERED STRUCTURE

Monolithic System structure in an Operating System


In this organizational structure, the entire operating system runs as a single program in kernel mode: the operating
system is a collection of procedures linked together into one binary file. In this system, any procedure can call any
other procedure, and since everything runs in kernel mode, a procedure has all the permissions needed to call
whatever it wants.
In terms of information hiding, there is none. All procedures run in kernel mode, so they have access
to all modules and data of the other procedures.
However, using this approach without any restrictions can lead to thousands of uncontrolled procedure calls and a
messy system. For this reason, the actual OS is usually constructed with some hierarchy. All the individual
procedures are compiled and then linked into a single executable file using the system linker.
Even a monolithic system can have some structure. The organization suggests a basic three-level structure:

1. The main procedure that invokes the requested service procedures.


2. A set of service procedures that carry out system calls.
3. A set of utility procedures that help out the system procedures.
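This three-level organization can be pictured as a small dispatch sketch (all procedure names and the call table are hypothetical):

```python
# Toy monolithic kernel: a main procedure dispatches to service procedures,
# which share a pool of utility procedures. There are no protection
# boundaries between them -- any procedure may call any other.

def util_copy(buf):               # utility procedure
    return bytes(buf)

def svc_read(args):               # service procedure carrying out "read"
    return util_copy(args["buffer"])

def svc_write(args):              # service procedure carrying out "write"
    return len(args["buffer"])

SERVICES = {0: svc_read, 1: svc_write}   # hypothetical system-call table

def main_procedure(call_number, args):
    # The main procedure invokes the requested service procedure.
    return SERVICES[call_number](args)

assert main_procedure(1, {"buffer": b"abc"}) == 3
```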

Layered Systems Structure in Operating Systems


As the name suggests, this system works in layers.

Working
There are six layers in the system, each with a different purpose.

Layer Function
5 The operator
4 User Programs
3 Input/Output Management
2 Operator-process communication
1 Memory and drum management
0 Processor allocation and multiprogramming
Layer 0 – Processor Allocation and Multiprogramming – This layer deals with the allocation of the processor,
switching between processes when interrupts occur or when timers expire.
Above layer 0, sequential processes can be programmed individually without having to worry about other processes
running on the processor; that is, layer 0 provides the basic multiprogramming of the CPU.
Layer 1 – Memory and Drum Management – This layer deals with allocating memory to the processes in the main
memory. The drum is used to hold parts of the processes (pages) for which space couldn’t be provided in the main
memory. The processes don’t have to worry if there is available memory or not as layer 1 software takes care of
adding pages wherever necessary.
Layer 2 – Operator-Process communication – In this layer, each process communicates with the operator (user)
through the console. Each process has its own operator console and can directly communicate with the operator.
Layer 3 – Input/Output Management – This layer handles and manages all the I/O devices, and it buffers the
information streams that are made available to it. Each process can communicate directly with the abstract I/O
devices with all of its properties.
Layer 4 – User Programs – The programs used by the user are operated in this layer, and they don’t have to worry
about I/O management, operator/processes communication, memory management, or the processor allocation.
Layer 5 – The Operator – The system operator process is located in the outermost layer.
Simple Structure
There are many operating systems that have a rather simple structure. These started as small systems and rapidly
expanded far beyond their original scope. A common example of this is MS-DOS: it was designed for a small group
of people, and there was no indication that it would become so popular.

 Client-Server Model in Operating Systems


The client-server model in an operating system is a variation of the microkernel system. The middle layer in the
microkernel system is the one with servers. These servers provide some kind of service to clients. This makes up the
client-server model.
Communication between clients and servers is achieved by message passing. To receive a service, a client
process constructs a message saying what it wants and sends it to the appropriate server. The server then does
its work and sends back the answer.
If the clients and servers are on the same machine, then some optimizations are possible. But generally
speaking, they are on different systems and are connected via a network link like LAN or WAN.
UNIT II
Process Management:

 Process concept
 Process Control Block
 process states and its transitions
 context switch
 OS services for Process management
 scheduling and types of schedulers
 scheduling algorithm

 Introduction of Process Management


Program vs Process
A process is a program in execution. For example, when we write a program in C or C++ and compile it, the
compiler creates binary code. The original code and binary code are both programs. When we actually run the
binary code, it becomes a process.

A single program can create many processes when run multiple times; for example, when we open a .exe or binary
file multiple times, multiple instances begin (multiple processes are created).
Attributes or Characteristics of a Process

A process has the following attributes.


1. Process Id: a unique identifier assigned by the operating system.
2. Process State: can be ready, running, etc.
3. CPU registers: like the Program Counter (CPU registers must be saved and restored when a process is
swapped in and out of the CPU).
4. Accounting information: amount of CPU used, time limits, etc.
5. I/O status information: for example, devices allocated to the process, open files, etc.
6. CPU scheduling information: for example, priority (different processes may have different priorities; for example,
a short process may be assigned a high priority under shortest-job-first scheduling).
 Context Switching
The process of saving the context of one process and loading the context of another process is known as
Context Switching. In simple terms, it is like loading and unloading the process from running state to
ready state.
 When does context switching happen?

1. When a high-priority process comes into the ready state (i.e. with a higher priority than the running process)
2. When an interrupt occurs
3. When a switch between user and kernel mode is required (it is not always necessary, though)
4. When preemptive CPU scheduling is used.

 Process Control Block


A Process Control Block is a data structure that contains information related to a process. The process control
block is also known as a task control block, an entry of the process table, etc.
Structure of the Process Control Block
The process control block stores many data items that are needed for efficient process management. Some of these
data items are explained below −

The following are the data items −


Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This shows the number of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the process.
Registers
This specifies the registers that are used by the process. They may include accumulators, index registers, stack
pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process
CPU Scheduling Information
The process priority, pointers to scheduling queues etc. is the CPU scheduling information that is contained in the
PCB. This may also include any other scheduling parameters.
Memory Management Information
The memory management information includes the page tables or the segment tables depending on the memory
system used. It also contains the value of the base registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list of files etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the PCB accounting
information.
Location of the Process Control Block
The process control block is kept in a memory area that is protected from normal user access, because it contains
important process information. Some operating systems place the PCB at the beginning of the kernel stack of the
process, as this is a safe location.
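The fields described above can be collected into a simplified PCB structure. A Python sketch (field names and types are illustrative, not any particular kernel's layout):

```python
from dataclasses import dataclass, field
from typing import Dict, List

# A simplified Process Control Block mirroring the fields described above.
@dataclass
class PCB:
    process_number: int
    process_state: str = "new"        # new / ready / running / waiting / terminated
    program_counter: int = 0          # address of the next instruction
    registers: Dict[str, int] = field(default_factory=dict)
    open_files: List[str] = field(default_factory=list)
    priority: int = 0                 # CPU scheduling information
    base_register: int = 0            # memory management information
    limit_register: int = 0
    cpu_time_used: int = 0            # accounting information

pcb = PCB(process_number=1, priority=5)
pcb.process_state = "ready"
pcb.open_files.append("/tmp/log")
assert pcb.process_state == "ready"
```

On a context switch, the scheduler would save the CPU's registers and program counter into fields like these, and later restore them when the process is resumed.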

 Note : For Process states ,Process state diagram, types of schedulers and cpu scheduling
algorithms - Refer handwritten notes

 Preemptive and Non-Preemptive Scheduling


1. Preemptive Scheduling:

Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting
state to the ready state. The resources (mainly CPU cycles) are allocated to a process for a limited amount of time
and are then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining,
and stays there until it gets its next chance to execute.

Algorithms based on preemptive scheduling are: Round Robin (RR),Shortest Remaining Time First (SRTF),
Priority (preemptive version), etc.

2. Non-Preemptive Scheduling:

Non-preemptive scheduling is used when a process terminates, or when a process switches from the running to the
waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the
CPU until it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running
on the CPU in the middle of its execution; instead, it waits until the process completes its CPU burst and only then
allocates the CPU to another process.

Algorithms based on non-preemptive scheduling are: Shortest Job First (SJF basically non
preemptive) and Priority (non preemptive version), etc.

 Note : shortest remaining time next scheduling algorithm( means preemptive version of shortest
job first) for this refer handwritten notes

 Multilevel Queue (MLQ) CPU Scheduling ( also refer handwritten notes)


It may happen that processes in the ready queue can be divided into different classes, where each class has its own
scheduling needs. For example, a common division is between foreground (interactive) processes
and background (batch) processes. These two classes have different scheduling needs, and for this kind of situation
Multilevel Queue Scheduling is used. Now, let us see how it works.
The ready queue is divided into separate queues for each class of processes. For example, take three different
types of processes: system processes, interactive processes and batch processes. Each class has its own queue,
and each queue has its own scheduling algorithm; for example, queue 1 and queue 2 may use Round Robin while
queue 3 uses FCFS to schedule their processes.

Scheduling among the queues: what happens if all the queues have some processes? Which process should get
the CPU? To determine this, scheduling among the queues is necessary. There are two ways to do so –

1. Fixed priority preemptive scheduling method – Each queue has absolute priority over the lower-priority queues.
Consider the priority order queue 1 > queue 2 > queue 3. According to this algorithm, no process
in the batch queue (queue 3) can run unless queues 1 and 2 are empty. If any batch process (queue 3) is running
when a system process (queue 1) or interactive process (queue 2) enters the ready queue, the batch process is
preempted.

2. Time slicing – In this method each queue gets a certain portion of the CPU time and can use it to schedule its own
processes. For instance, queue 1 may take 50 percent of the CPU time, queue 2 30 percent, and queue 3 the
remaining 20 percent.
 Multilevel Feedback Queue (MLFQ) CPU Scheduling (also refer handwritten
notes)

This scheduling is like Multilevel Queue (MLQ) Scheduling, but here processes can move between the queues.
Multilevel Feedback Queue Scheduling (MLFQ) keeps analyzing the behavior (execution time) of processes and
changes their priority accordingly.

Now let us suppose that queue 1 and queue 2 follow round robin with time quanta 4 and 8 respectively, and queue 3
follows FCFS. One implementation of MLFQ is given below –

1. When a process starts executing, it first enters queue 1.

2. In queue 1 the process executes for up to 4 units. If it completes within these 4 units, or gives up the CPU for
an I/O operation within them, its priority does not change, and when it comes back to the ready queue it
again starts execution in queue 1.

3. If a process in queue 1 does not complete in 4 units, its priority is reduced and it is shifted to queue 2.

4. Points 2 and 3 above are also true for queue 2 processes, but with a time quantum of 8 units. In general, if a
process does not complete within its time quantum, it is shifted to the lower-priority queue.

5. In the last queue, processes are scheduled in FCFS manner.

6. A process in a lower-priority queue can execute only when the higher-priority queues are empty.

7. A process running in a lower-priority queue is interrupted by a process arriving in a higher-priority queue.

Problem with the above implementation – A process in a lower-priority queue can suffer from starvation when a
stream of short processes takes all the CPU time.

Solution – A simple solution is to boost the priority of all processes at regular intervals and place
them all in the highest-priority queue.
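The rules above, with quanta 4 and 8 and an FCFS last queue, can be sketched as a small simulation (process names and burst times are made up for illustration; I/O and priority boosting are omitted):

```python
from collections import deque

# Minimal MLFQ sketch: quantum 4 in queue 0, quantum 8 in queue 1,
# FCFS (run to completion) in queue 2. A process that uses its whole
# quantum without finishing is demoted one level.
def mlfq(bursts):
    queues = [deque(), deque(), deque()]
    quanta = [4, 8, None]                     # None = FCFS, no limit
    remaining = dict(bursts)                  # pid -> remaining burst time
    for pid in bursts:
        queues[0].append(pid)                 # every process enters queue 0
    finish_level = {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        pid = queues[level].popleft()
        q = quanta[level]
        run = remaining[pid] if q is None else min(q, remaining[pid])
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish_level[pid] = level         # record where it finished
        else:                                 # used full quantum: demote
            queues[min(level + 1, 2)].append(pid)
    return finish_level

# A 3-unit job finishes in queue 0; a 10-unit job is demoted once;
# a 30-unit job ends up in the FCFS queue.
levels = mlfq({"short": 3, "medium": 10, "long": 30})
assert levels == {"short": 0, "medium": 1, "long": 2}
```

This shows the "learning" behavior described below: short jobs never leave the top queue, while long jobs sink toward FCFS.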

Advantages of using Multilevel feedback queue scheduling

 Firstly, it is more flexible than multilevel queue scheduling.

 To optimize turnaround time, algorithms like SJF need the running times of processes in order to
schedule them, but the running time of a process is not known in advance. MLFQ runs a process for a time
quantum and then changes its priority if it turns out to be a long process. It thus learns from the past behavior
of a process and predicts its future behavior. This way it tries to run shorter processes first, optimizing
turnaround time.

 MLFQ also reduces the response time.


UNIT III

Storage Management:

 Basic concept of storage management


 logical and physical address space
 swapping
 contiguous allocation, non-contiguous allocation, fragmentation
 segmentation
 paging, demand paging, virtual memory
 page replacement algorithms
 design issue of paging and thrashing

Memory management is the functionality of an operating system which handles or manages primary memory and
moves processes back and forth between main memory and disk during execution. Memory management keeps track
of each and every memory location, regardless of whether it is allocated to some process or free. It decides how
much memory is to be allocated to each process and which process will get memory at what time. It tracks
whenever memory gets freed or unallocated and updates the status correspondingly.
This unit covers the basic concepts related to memory management.

Process Address Space

The process address space is the set of logical addresses that a process references in its code. For example, when 32-
bit addressing is in use, addresses can range from 0 to 0x7fffffff; that is, 2^31 possible numbers, for a total
theoretical size of 2 gigabytes.
The operating system takes care of mapping the logical addresses to physical addresses at the time of memory
allocation to the program. There are three types of addresses used in a program before and after memory is allocated

1. Symbolic addresses – the addresses used in source code. Variable names, constants, and instruction labels are the
basic elements of the symbolic address space.

2. Relative addresses – at the time of compilation, a compiler converts symbolic addresses into relative addresses.

3. Physical addresses – the loader generates these addresses at the time a program is loaded into main memory.

Virtual and physical addresses are the same in compile-time and load-time address-binding schemes. Virtual and
physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address space. The set of all
physical addresses corresponding to these logical addresses is referred to as a physical address space.
The runtime mapping from virtual to physical address is done by the memory management unit (MMU) which is a
hardware device. MMU uses following mechanism to convert virtual address to physical address.
 The value in the base register is added to every address generated by a user process, which is treated as offset
at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user
to use address location 100 will be dynamically reallocated to location 10100.
 The user program deals with virtual addresses; it never sees the real physical addresses.
 The operating system uses the following memory allocation mechanisms.

1. Single-partition allocation – In this type of allocation, the relocation-register scheme is used to protect user
processes from each other, and from changes to operating-system code and data. The relocation register contains the
value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each
logical address must be less than the limit register.

2. Multiple-partition allocation – In this type of allocation, main memory is divided into a number of fixed-sized
partitions, where each partition should contain only one process. When a partition is free, a process is selected
from the input queue and is loaded into the free partition. When the process terminates, the partition becomes
available for another process.
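Multiple-partition allocation can be sketched as a first-fit search over fixed partitions (the partition sizes and process name below are hypothetical):

```python
# Sketch of multiple-partition allocation: fixed-size partitions, each
# holding at most one process. A process is loaded into the first free
# partition large enough for it (first-fit).
def allocate(partitions, occupied, size):
    for i, psize in enumerate(partitions):
        if occupied[i] is None and psize >= size:
            return i
    return None            # no free partition fits: the process must wait

partitions = [100, 500, 200]       # fixed partition sizes (KB)
occupied = [None, None, None]      # which process holds each partition

i = allocate(partitions, occupied, 150)   # a process needing 150 KB
occupied[i] = "P1"
assert i == 1                      # first partition of at least 150 KB
# The 350 KB left over inside partition 1 is internal fragmentation:
# it belongs to an allocated partition, so no other process can use it.
```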

 Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. Over
time it happens that processes cannot be allocated memory blocks because the available blocks are too small, and
these memory blocks remain unused. This problem is known as fragmentation.
Fragmentation is of two types −

1. External fragmentation – total memory space is enough to satisfy a request or to hold a process, but it is not
contiguous, so it cannot be used.

2. Internal fragmentation – the memory block assigned to a process is bigger than requested; some portion of
memory is left unused, as it cannot be used by another process.

Paging
A computer can address more memory than the amount physically installed on the system. This extra memory is
actually called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM. The
paging technique plays an important role in implementing virtual memory.
Paging is a memory management technique in which process address space is broken into blocks of the same size
called pages (size is power of 2, between 512 bytes and 8192 bytes). The size of the process is measured in the
number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called frames and the size of
a frame is kept the same as that of a page to have optimum utilization of the main memory and to avoid external
fragmentation.
Address Translation
A page address is called a logical address and is represented by a page number and an offset:
Logical Address = (page number, page offset)
A frame address is called a physical address and is represented by a frame number and an offset:
Physical Address = (frame number × page size) + page offset
A data structure called the page map table is used to keep track of the mapping between the pages of a process and
the frames in physical memory.
When the system allocates a frame to a page, it translates the logical address into a physical address and creates an
entry in the page table to be used throughout the execution of the program.
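The translation just described can be sketched in a few lines (the page size and page-table contents are hypothetical):

```python
PAGE_SIZE = 4096   # 4 KB pages (a power of two)

# Page table for one process: page number -> frame number (made-up values).
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE     # page number
    offset = logical_address % PAGE_SIZE    # page offset (carried through)
    frame = page_table[page]                # page-table lookup
    return frame * PAGE_SIZE + offset       # physical address

# Logical address 4100 = page 1, offset 4. Page 1 maps to frame 2,
# so the physical address is 2*4096 + 4 = 8196.
assert translate(4100) == 8196
```

Because the page size is a power of two, hardware performs the divide/modulo above by simply splitting the address bits.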
When a process is to be executed, its corresponding pages are loaded into any available memory frames. Suppose
you have a program of 8 KB but your memory can accommodate only 5 KB at a given point in time; then the
paging concept comes into the picture. When a computer runs out of RAM, the operating system (OS) moves
idle or unwanted pages of memory to secondary storage to free up RAM for other processes, and brings them back
when the program needs them.
This process continues during the whole execution of the program: the OS keeps removing idle pages from
main memory, writing them onto secondary storage, and bringing them back when required by the program.
Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging −
 Paging reduces external fragmentation, but still suffers from internal fragmentation.
 Paging is simple to implement and is regarded as an efficient memory management technique.
 Due to the equal size of pages and frames, swapping becomes very easy.
 The page table requires extra memory space, so paging may not be good for a system with a small RAM.
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of
different sizes, one for each module that contains pieces that perform related functions. Each segment is actually
a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into non-contiguous memory, though
every segment is loaded into a contiguous block of available memory.
Segmentation memory management works very similarly to paging, but here segments are of variable length,
whereas in paging the pages are of fixed size.
A program segment contains the program's main function, utility functions, data structures, and so on. The operating
system maintains a segment map table for every process and a list of free memory blocks along with segment
numbers, their size and corresponding memory locations in main memory. For each segment, the table stores the
starting address of the segment and the length of the segment. A reference to a memory location includes a value
that identifies a segment and an offset.
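The segment-table lookup, including its limit (protection) check, can be sketched as follows (the base and limit values are hypothetical):

```python
# Segment table: segment number -> (base address, limit). Made-up values.
segment_table = {0: (1400, 1000),   # e.g. code segment
                 1: (6300, 400),    # e.g. stack segment
                 2: (4300, 1100)}   # e.g. data segment

def translate_segment(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:             # protection check against the limit
        # Using MemoryError here as a stand-in for a segmentation fault.
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset            # physical address = base + offset

assert translate_segment(2, 53) == 4353   # segment 2, offset 53 -> 4300 + 53

fault = False
try:
    translate_segment(1, 500)       # offset 500 exceeds limit 400
except MemoryError:
    fault = True
assert fault
```

Unlike paging, the offset here is checked against a per-segment limit, because segments are of variable length.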

A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms. There are six popular process scheduling algorithms which we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that
once a process enters the running state, it cannot be preempted until it completes its CPU burst,
whereas preemptive scheduling is based on priority: the scheduler may preempt a low-priority running
process at any time when a high-priority process enters the ready state.
First Come First Serve (FCFS)

 Jobs are executed on first come, first serve basis.


 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 0-0=0

P1 5-1=4

P2 8-2=6

P3 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75
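The waiting times above can be reproduced with a short calculation (the CPU bursts 5, 3, 8 and 6 are implied by the service times shown):

```python
# FCFS wait times: each job starts when the previous one finishes
# (or when it arrives, if the CPU is idle).
def fcfs(procs):                     # procs: list of (arrival, burst), in arrival order
    clock, waits = 0, []
    for arrival, burst in procs:
        start = max(clock, arrival)  # CPU may be idle until the job arrives
        waits.append(start - arrival)
        clock = start + burst
    return waits

waits = fcfs([(0, 5), (1, 3), (2, 8), (3, 6)])
assert waits == [0, 4, 6, 13]
assert sum(waits) / len(waits) == 5.75
```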


Shortest Job Next (SJN)
 This is also known as shortest job first, or SJF
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The processer should know in advance how much time process will take.
Given: Table of processes, and their Arrival time, Execution time

Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5

P2 2 8 14

P3 3 6 8

Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
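The service and waiting times above can be reproduced by simulating non-preemptive SJF with the bursts from the table:

```python
# Non-preemptive SJF: at each decision point, run the shortest ready job.
def sjf(procs):                        # procs: list of (pid, arrival, burst)
    clock, waits, pending = 0, {}, list(procs)
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                  # CPU idle until the next arrival
            clock = min(p[1] for p in pending)
            continue
        pid, arrival, burst = min(ready, key=lambda p: p[2])
        waits[pid] = clock - arrival   # non-preemptive: waits only before start
        clock += burst
        pending.remove((pid, arrival, burst))
    return waits

waits = sjf([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
assert waits == {"P0": 0, "P1": 4, "P2": 12, "P3": 5}
assert sum(waits.values()) / 4 == 5.25
```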


Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in
batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource
requirement.
Given: table of processes, and their arrival time, execution time, and priority. Here we consider 1 to be the
lowest priority.

Process Arrival Time Execution Time Priority Service Time

P0 0 5 1 0

P1 1 3 2 11

P2 2 8 1 14

P3 3 6 3 5

Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 11 - 1 = 10

P2 14 - 2 = 12

P3 5-3=2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
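The figures above can be reproduced by simulating non-preemptive priority scheduling (1 is the lowest priority; ties are broken first-come, first-served):

```python
# Non-preemptive priority scheduling: at each decision point, run the
# highest-priority ready job; earlier arrival breaks ties.
def priority_sched(procs):             # procs: (pid, arrival, burst, priority)
    clock, waits, pending = 0, {}, list(procs)
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                  # CPU idle until the next arrival
            clock = min(p[1] for p in pending)
            continue
        # Highest priority first; among equals, the earliest arrival wins.
        pid, arrival, burst, _ = max(ready, key=lambda p: (p[3], -p[1]))
        waits[pid] = clock - arrival
        clock += burst
        pending = [p for p in pending if p[0] != pid]
    return waits

waits = priority_sched([("P0", 0, 5, 1), ("P1", 1, 3, 2),
                        ("P2", 2, 8, 1), ("P3", 3, 6, 3)])
assert waits == {"P0": 0, "P1": 10, "P2": 12, "P3": 2}
assert sum(waits.values()) / 4 == 6
```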


Shortest Remaining Time
 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a newer ready job with
shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not known.
 It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process is executed for a given time period, it is preempted and other process executes for a given
time period.
 Context switching is used to save states of preempted processes.

Wait time of each process is as follows (for the same four processes as above, with a time quantum of 3) −

Process Wait Time : Service Time - Arrival Time

P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
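A sketch of Round Robin that reproduces the wait times above. The quantum is not stated in the notes, but a quantum of 3 matches the table exactly; newly arrived processes are queued ahead of a job preempted at the same instant, which is a common (but not the only) convention:

```python
from collections import deque

# A sketch of Round Robin reproducing the wait times above. A quantum of 3
# is an assumption that matches the table (the notes do not state it).
processes = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

def round_robin(procs, quantum):
    pending = sorted(procs, key=lambda p: p[1])   # not yet arrived
    remaining = {pid: burst for pid, _, burst in procs}
    queue, waits, last_ready, time = deque(), {}, {}, 0

    def admit(now):
        # Move every process that has arrived by `now` into the ready queue.
        while pending and pending[0][1] <= now:
            pid, arrival, _ = pending.pop(0)
            queue.append(pid)
            last_ready[pid] = arrival

    admit(0)
    while queue or pending:
        if not queue:
            time = pending[0][1]                  # CPU idle until next arrival
            admit(time)
            continue
        pid = queue.popleft()
        waits[pid] = waits.get(pid, 0) + time - last_ready[pid]
        run = min(quantum, remaining[pid])
        time += run
        remaining[pid] -= run
        admit(time)          # new arrivals queue ahead of the preempted job
        if remaining[pid] > 0:
            queue.append(pid)
            last_ready[pid] = time
    return waits

waits = round_robin(processes, 3)
print(waits)                 # P0: 9, P1: 2, P2: 12, P3: 11 -> average 8.5
```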


Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The Process
Scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm
assigned to the queue.
UNIT IV

Inter-process communication and synchronization

 Mutual Exclusion
 Semaphore
 Busy-wait Implementation
 characteristics of semaphore
 queuing implementation of semaphore
 producer consumer problem
 Critical region and conditional critical region.
 Deadlock

A deadlock happens in an operating system when two or more processes each need a resource held by another process in order to complete their execution.

For example, suppose Process 1 holds Resource 1 and needs to acquire Resource 2, while Process 2 holds Resource 2 and needs to acquire Resource 1. The two processes are in deadlock: each needs the other's resource to complete its execution, but neither is willing to relinquish its own.
Coffman Conditions
A deadlock can occur only if all four Coffman conditions hold simultaneously; note that these conditions are not mutually exclusive.
The Coffman conditions are given as follows −

 Mutual Exclusion
There should be a resource that can only be held by one process at a time. For example, a single instance
of Resource 1 may be held by Process 1 only.

 Hold and Wait


A process can hold multiple resources and still request more resources from other processes which are
holding them. For example, Process 2 may hold Resource 2 and Resource 3 while requesting
Resource 1, which is held by Process 1.

 No Preemption
A resource cannot be preempted from a process by force; a process can only release a resource voluntarily.
For example, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when
Process 1 relinquishes it voluntarily after its execution is complete.

 Circular Wait
A process is waiting for the resource held by a second process, which is waiting for the resource held by a
third process, and so on, until the last process is waiting for a resource held by the first process. This forms a
circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while
Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.

Deadlock Detection
A deadlock can be detected by a resource scheduler as it keeps track of all the resources that are allocated to different
processes. After a deadlock is detected, it can be resolved using the following methods −

 All the processes that are involved in the deadlock are terminated. This is not a good approach as all the
progress made by the processes is destroyed.
 Resources can be preempted from some processes and given to others till the deadlock is resolved.
Deadlock Prevention
It is very important to prevent a deadlock before it can occur. The system therefore checks each resource request before granting it, to make sure it cannot lead to deadlock: if there is even a slight chance that granting a request may lead to deadlock in the future, the request is never allowed.
Deadlock Avoidance
It is better to avoid a deadlock than to take measures after the deadlock has occurred. The wait-for graph can be used for deadlock avoidance; however, this is only practical for smaller systems, as maintaining the graph can get quite complex when many processes and resources are involved.
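Deadlock detection on the wait-for graph amounts to a cycle search. A minimal sketch (an edge P → Q means "process P is waiting for a resource held by Q"; the process names mirror the circular-wait example above):

```python
# A minimal deadlock-detection sketch using the wait-for graph: an edge
# P -> Q means "process P is waiting for a resource held by Q". A deadlock
# corresponds to a cycle in this graph.
def has_cycle(graph):
    # 0 = unvisited, 1 = on the current DFS path, 2 = fully explored
    nodes = set(graph) | {q for succs in graph.values() for q in succs}
    state = {n: 0 for n in nodes}

    def dfs(n):
        state[n] = 1
        for q in graph.get(n, []):
            if state[q] == 1:            # back edge: a cycle (deadlock) exists
                return True
            if state[q] == 0 and dfs(q):
                return True
        state[n] = 2
        return False

    return any(state[n] == 0 and dfs(n) for n in nodes)

# The circular-wait example above: P1 waits for P2, and P2 waits for P1.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True  -> deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))       # False -> no deadlock
```

Once a cycle is found, the resolution options described under deadlock detection apply: terminate one or more processes on the cycle, or preempt resources from them.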
UNIT V

File Systems :
 Files-basic concept
 file attributes, operations
 file types, file structure, access methods
 Directory- structure-single level directory system

 Directory system

File
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks,
magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is
defined by the file's creator and user.
File Structure
A file structure should follow a defined format that the operating system can understand.
 A file has a certain defined structure according to its type.
 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that are understandable by the machine.
 When an operating system defines different file structures, it also contains the code to support these file
structures. UNIX and MS-DOS support a minimal number of file structures.
File Type
File type refers to the ability of the operating system to distinguish different types of file, such as text files,
source files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS
and UNIX have the following types of files −
Ordinary files

 These are the files that contain user information.


 These may have text, databases or executable program.
 The user can apply various operations on such files like add, modify, delete or even remove the entire
file.
Directory files

 These files contain list of file names and other information related to these files.
Special files

 These files are also known as device files.


 These files represent physical device like disks, terminals, printers, networks, tape drive etc. These
files are of two types −
 Character special files − data is handled character by character as in case of terminals or printers.
 Block special files − data is handled in blocks as in the case of disks and tapes.
File Access Mechanisms
File access mechanism refers to the manner in which the records of a file may be accessed. There are
several ways to access files −

 Sequential access
 Direct/Random access
 Indexed sequential access
Sequential access
Sequential access is that in which the records are accessed in some sequence, i.e., the information in the file is
processed in order, one record after the other. This access method is the most primitive one. Example: compilers
usually access files in this fashion.
Direct/Random access
 Random access file organization provides direct access to the records.
 Each record has its own address on the file, with the help of which it can be directly accessed for reading
or writing.
 The records need not be in any sequence within the file and they need not be in adjacent locations on the
storage medium.
Indexed sequential access

 This mechanism is built on top of sequential access.


 An index is created for each file which contains pointers to various blocks.
 Index is searched sequentially and its pointer is used to access the file directly.
Space Allocation
The operating system allocates disk space to files. Operating systems deploy the following three main ways to
allocate disk space to files.

 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
Contiguous Allocation

 Each file occupies a contiguous address space on disk.


 Assigned disk address is in linear order.
 Easy to implement.
 External fragmentation is a major issue with this type of allocation technique.
Linked Allocation

 Each file carries a list of links to disk blocks.


 Directory contains link / pointer to first block of a file.
 No external fragmentation
 Effectively used in sequential access file.
 Inefficient in case of direct access file.
Indexed Allocation

 Provides solutions to problems of contiguous and linked allocation.


 An index block is created containing all the pointers to a file's blocks.
 Each file has its own index block which stores the addresses of disk space occupied by the file.
 Directory contains the addresses of index blocks of files.
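The linked allocation scheme above can be sketched with a FAT-style table of next-block pointers. The block numbers and file name below are made up for illustration; -1 marks the end of a file's chain:

```python
# A sketch of linked allocation using a FAT-style table of next-block
# pointers. The block numbers and file name are made up for illustration;
# -1 marks the end of a file's chain.
fat = {4: 7, 7: 2, 2: 10, 10: -1}    # file occupies blocks 4 -> 7 -> 2 -> 10
directory = {"notes.txt": 4}         # directory stores only the first block

def file_blocks(name):
    block = directory[name]
    chain = []
    while block != -1:               # follow the links to the end marker
        chain.append(block)
        block = fat[block]
    return chain

print(file_blocks("notes.txt"))      # [4, 7, 2, 10]
```

Reaching the i-th block requires following i pointers, which is why linked allocation suits sequential access but is inefficient for direct access.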
Security
Security refers to providing a protection system for computer system resources such as the CPU, memory, disk, software
programs and, most importantly, the data/information stored in the computer system. If a computer program is run by an
unauthorized user, he/she may cause severe damage to the computer or the data stored in it. A computer system
must therefore be protected against unauthorized access, malicious access to system memory, viruses, worms, etc. We're going
to discuss the following topics in this chapter.

 Authentication
 One Time passwords
 Program Threats
 System Threats
 Computer Security Classifications
Authentication
Authentication refers to identifying each user of the system and associating the executing programs with those users.
It is the responsibility of the Operating System to create a protection system which ensures that a user who is
running a particular program is authentic. Operating Systems generally identifies/authenticates users using following
three ways −
 Username / Password − The user needs to enter a registered username and password with the operating system
to log into the system.
 User card/key − The user needs to punch a card into the card slot, or enter a key generated by a key generator, in the option
provided by the operating system to log into the system.
 User attribute - fingerprint/ eye retina pattern/ signature − The user needs to pass his/her attribute via a
designated input device used by the operating system to log into the system.
One Time passwords
One-time passwords provide additional security along with normal authentication. In a one-time password system, a
unique password is required every time a user tries to log into the system. Once a one-time password has been used, it
cannot be used again. One-time passwords are implemented in various ways.
 Random numbers − Users are provided cards with numbers printed alongside corresponding letters. The
system asks for the numbers corresponding to a few randomly chosen letters.
 Secret key − Users are provided a hardware device which can create a secret id mapped to the user id. The system
asks for this secret id, which is to be generated afresh every time prior to login.
 Network password − Some commercial applications send one-time passwords to the user on a registered mobile/
email, which must be entered prior to login.
Program Threats
An operating system's processes and kernel perform the designated tasks as instructed. If a user program makes these
processes do malicious tasks, it is known as a program threat. One common example of a program threat is
a program installed on a computer which can store and send user credentials via the network to some hacker. Following
is a list of some well-known program threats.
 Trojan Horse − Such a program traps user login credentials and stores them to send to a malicious user, who
can later log into the computer and access system resources.
 Trap Door − If a program which is designed to work as required has a security hole in its code and
performs illegal actions without the user's knowledge, it is said to have a trap door.
 Logic Bomb − A logic bomb is a program that misbehaves only when certain conditions are met;
otherwise it works as a genuine program. This makes it harder to detect.
 Virus − A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous and
can modify or delete user files and crash systems. A virus is generally a small piece of code embedded in a program. As
the user runs the program, the virus starts embedding itself in other files and programs and can make the system
unusable for the user.
System Threats
System threats refer to the misuse of system services and network connections to put the user in trouble. System threats
can be used to launch program threats on a complete network, which is called a program attack. System threats create
an environment in which operating system resources and user files are misused. Following is a list of some well-known
system threats.
 Worm − A worm is a process which can choke down system performance by using system resources to
extreme levels. A worm process generates multiple copies of itself, where each copy uses system resources and
prevents all other processes from getting the resources they require. Worm processes can even shut down an entire
network.
 Port Scanning − Port scanning is a mechanism by which a hacker can detect system
vulnerabilities in order to attack the system.
 Denial of Service − Denial of service attacks normally prevent users from making legitimate use of the system.
For example, a user may be unable to use the Internet while a denial of service attack floods the network connection.
UNIT VI

Input/output System:
 Principles of I/O hardware
 I/O devices, device controller
 DMA, Principles of I/O software - goals, interrupt handler
 Device driver.
 Mass storage structure-disk structure
 disk scheduling

An I/O system is required to take an application I/O request and send it to the physical device, then take whatever
response comes back from the device and send it to the application. I/O devices can be divided into two categories −
 Block devices − A block device is one with which the driver communicates by sending entire blocks
of data. For example, Hard disks, USB cameras, Disk-On-Key etc.
 Character devices − A character device is one with which the driver communicates by sending and
receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.
Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular device. Operating System
takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O units (keyboard, mouse,
printer, etc.) typically consist of a mechanical component and an electronic component, where the electronic component
is called the device controller.
There is always a device controller and a device driver for each device to communicate with the operating system.
A device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit
stream to a block of bytes and perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is connected to a device
controller. Following is a model for connecting the CPU, memory, controllers, and I/O devices where CPU and
device controllers all use a common bus for communication.
Synchronous vs asynchronous I/O
 Synchronous I/O − In this scheme CPU execution waits while I/O proceeds
 Asynchronous I/O − I/O proceeds concurrently with CPU execution
Communication to I/O Devices
The CPU must have a way to pass information to and from an I/O device. There are three approaches
available to communicate with the CPU and Device.

 Special Instruction I/O


 Memory-mapped I/O
 Direct memory access (DMA)
Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices. These instructions typically
allow data to be sent to an I/O device or read from an I/O device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The device is
connected directly to certain main memory locations so that I/O device can transfer block of data to/from memory
without going through CPU.
While using memory-mapped I/O, the OS allocates a buffer in memory and informs the I/O device to use that buffer to send
data to the CPU. The I/O device operates asynchronously with the CPU and interrupts the CPU when finished.
The advantage to this method is that every instruction which can access memory can be used to manipulate an I/O
device. Memory mapped IO is used for most high-speed I/O devices like disks, communication interfaces.
Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte is transferred. If a fast device
such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling
these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this overhead.
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without
CPU involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is
involved only at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.
Direct Memory Access needs a special hardware called DMA controller (DMAC) that manages the data transfers
and arbitrates access to the system bus. The controllers are programmed with source and destination pointers (where
to read/write the data), counters to track the number of transferred bytes, and settings, which includes I/O and
memory types, interrupts and states for the CPU cycles.
The operating system uses the DMA hardware as follows −

Step Description

1 Device driver is instructed to transfer disk data to a buffer address X.

2 Device driver then instructs disk controller to transfer data to buffer.

3 Disk controller starts DMA transfer.

4 Disk controller sends each byte to DMA controller.

5 DMA controller transfers bytes to buffer, increases the memory address, decreases the
counter C until C becomes zero.
6 When C becomes zero, DMA interrupts CPU to signal transfer completion.
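The six steps above can be walked through in a toy software simulation. Real DMA is done in hardware; the byte values and buffer size here are made up for illustration:

```python
# A toy software walk-through of the DMA steps above (real DMA is hardware;
# byte values and buffer size are made up for illustration).
def dma_transfer(disk_bytes, buffer, address):
    count = len(disk_bytes)                # counter C programmed into the DMAC
    for byte in disk_bytes:                # steps 4-5: copy each byte, advance
        buffer[address] = byte             #            the address, decrement C
        address += 1
        count -= 1
    assert count == 0                      # step 6: C reached zero ...
    return "interrupt: transfer complete"  # ... so interrupt the CPU once

buffer = [0] * 8
print(dma_transfer([10, 20, 30], buffer, 2))   # interrupt: transfer complete
print(buffer)                                  # [0, 0, 10, 20, 30, 0, 0, 0]
```

Note that the CPU sees a single interrupt at the end of the whole block, instead of one interrupt per byte as with a slow, interrupt-driven device.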

Polling vs Interrupts I/O


A computer must have a way of detecting the arrival of any type of input. There are two ways that this can happen,
known as polling and interrupts. Both of these techniques allow the processor to deal with events that can happen
at any time and that are not related to the process it is currently running.
Polling I/O
Polling is the simplest way for an I/O device to communicate with the processor. The process of periodically
checking status of the device to see if it is time for the next I/O operation, is called polling. The I/O device
simply puts the information in a Status register, and the processor must come and get the information.
Most of the time, devices will not require attention, and when one does, it will have to wait until it is next interrogated
by the polling program. This is an inefficient method, and much of the processor's time is wasted on unnecessary
polls.
Compare this method to a teacher continually asking every student in a class, one after another, if they need
help. Obviously the more efficient method would be for a student to inform the teacher whenever they require
assistance.
Interrupts I/O
An alternative scheme for dealing with I/O is the interrupt-driven method. An interrupt is a signal to the
microprocessor from a device that requires attention.
A device controller puts an interrupt signal on the bus when it needs the CPU's attention. When the CPU receives an
interrupt, it saves its current state and invokes the appropriate interrupt handler using the interrupt vector
(a table of addresses of OS routines that handle various events). When the interrupting device has been dealt with, the CPU
continues with its original task as if it had never been interrupted.
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk
scheduling is also known as I/O scheduling.
Disk scheduling is important because:
 Multiple I/O requests may arrive from different processes, but only one I/O request can be served at a time by
the disk controller. Thus the other I/O requests need to wait in the waiting queue and be scheduled.
 Two or more requests may be far from each other, which results in greater disk arm movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an
efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have a quick look at some of the
important terms:
 Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be read
or written. A disk scheduling algorithm that gives minimum average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into a position
where the read/write head can access it. A disk scheduling algorithm that gives minimum rotational
latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and
the number of bytes to be transferred.
 Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time

 Disk Response Time: Response time is the time a request spends waiting to perform its I/O
operation. Average response time is the mean response time over all requests. Variance of response time measures
how individual requests are serviced with respect to the average response time, so a disk scheduling algorithm
that gives minimum variance of response time is better.

Disk Scheduling Algorithms

1. FCFS: FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue. Let us understand this with the help of an
example.

Example:
Suppose the order of request is- (82,170,43,140,24,16,190) And
current position of Read/Write head is : 50

So, total seek time:


=(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16)
=642
Advantages:
 Every request gets a fair chance
 No indefinite postponement
Disadvantages:
 Does not try to optimize seek time
 May not provide the best possible service
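The FCFS total above (642) can be checked with a few lines:

```python
# A quick check of the FCFS seek-time calculation above: service requests
# in arrival order and sum the absolute head movements.
def fcfs_seek(requests, head):
    total = 0
    for track in requests:
        total += abs(track - head)   # head moves straight to each request
        head = track
    return total

print(fcfs_seek([82, 170, 43, 140, 24, 16, 190], 50))   # 642
```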
2. SSTF: In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first. So, the seek
time of every request is calculated in advance in the queue, and the requests are scheduled according to their
calculated seek time. As a result, the request nearest to the disk arm gets executed first. SSTF is certainly an
improvement over FCFS, as it decreases the average response time and increases the throughput of the
system. Let us understand this with the help of an example.

Example:
Suppose the order of request is- (82,170,43,140,24,16,190) And
current position of Read/Write head is : 50

So, total seek time:


=(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170)
=208
Advantages:
 Average Response Time decreases
 Throughput increases
Disadvantages:
 Overhead to calculate seek time in advance
 Can cause Starvation for a request if it has higher seek time as compared to incoming requests
 High variance of response time as SSTF favours only some requests
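The SSTF total above (208) can be checked similarly (a sketch):

```python
# A quick check of the SSTF seek-time calculation above: always service
# the pending request closest to the current head position.
def sstf_seek(requests, head):
    pending, total = list(requests), 0
    while pending:
        track = min(pending, key=lambda t: abs(t - head))
        total += abs(track - head)
        head = track
        pending.remove(track)
    return total

print(sstf_seek([82, 170, 43, 140, 24, 16, 190], 50))   # 208
```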
3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming in
its path; after reaching the end of the disk, it reverses its direction and again services the requests arriving in its
path. This algorithm works like an elevator and is hence also known as the elevator algorithm. As a result, the
requests at the mid-range are serviced more, and those arriving just behind the disk arm have to wait.

Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at 50, and it is
also given that the disk arm should move “towards the larger value”.

Therefore, assuming the disk has 200 tracks numbered 0 to 199, the seek time is calculated as:

=(199-50)+(199-16)
=332
Advantages:
 High throughput
 Low variance of response time
 Average response time
Disadvantages:
 Long waiting time for requests for locations just visited by disk arm
4. CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its
direction. So, it may be possible that too many requests are waiting at the other end or there may be zero or few
requests pending at the scanned area.
These situations are avoided in the CSCAN algorithm, in which the disk arm, instead of reversing its direction, goes to
the other end of the disk and starts servicing the requests from there. The disk arm thus moves in a circular fashion;
since the algorithm is otherwise similar to SCAN, it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at 50, and it is also
given that the disk arm should move “towards the larger value”.

Seek time is calculated as (again assuming tracks 0 to 199):


=(199-50)+(199-0)+(43-0)
=391
Advantages:
 Provides more uniform wait time compared to SCAN
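The SCAN and C-SCAN totals above (332 and 391) can be reproduced with a short sketch, assuming, as in the examples, a disk with tracks 0 to 199 and a head initially moving towards the larger values:

```python
# Sketches of SCAN and C-SCAN seek totals for the worked examples above
# (tracks 0-199, head moving towards the larger values).
def scan_seek(requests, head, last_track=199):
    below = [t for t in requests if t < head]
    total = last_track - head            # sweep up to the end of the disk
    if below:
        total += last_track - min(below) # reverse and sweep back down
    return total

def cscan_seek(requests, head, last_track=199):
    below = [t for t in requests if t < head]
    total = last_track - head            # sweep up to the end of the disk
    if below:
        total += last_track              # jump back to track 0 (circular)
        total += max(below)              # sweep up to the last pending request
    return total

reqs = [82, 170, 43, 140, 24, 16, 190]
print(scan_seek(reqs, 50))    # (199-50)+(199-16)   = 332
print(cscan_seek(reqs, 50))   # (199-50)+199+(43-0) = 391
```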
5. LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of
going to the end of the disk, goes only to the last request to be serviced in front of the head, and then
reverses its direction from there. Thus it prevents the extra delay caused by unnecessary
traversal to the end of the disk.

Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at 50, and it is
also given that the disk arm should move “towards the larger value”.

So, the seek time is calculated as:


=(190-50)+(190-16)
=314
6. CLOOK: Just as LOOK is similar to the SCAN algorithm, CLOOK is similar to the CSCAN disk
scheduling algorithm. In CLOOK, the disk arm, instead of going to the end of the disk, goes only to the last request to be
serviced in front of the head, and then jumps from there to the last request at the other end. Thus, it also prevents
the extra delay caused by unnecessary traversal to the end of the disk.

Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at 50, and it is
also given that the disk arm should move “towards the larger value”

So, the seek time is calculated as:


=(190-50)+(190-16)+(43-16)
=341
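The LOOK and C-LOOK totals above (314 and 341) can be reproduced with the same kind of sketch; here the arm goes only as far as the last pending request, never to the physical end of the disk:

```python
# Sketches of LOOK and C-LOOK seek totals for the worked examples above
# (head at 50, moving towards the larger values; the arm stops at the
# furthest pending request rather than the end of the disk).
def look_seek(requests, head):
    above = [t for t in requests if t >= head]
    below = [t for t in requests if t < head]
    total = max(above) - head            # sweep up to the last request
    if below:
        total += max(above) - min(below) # reverse down to the lowest request
    return total

def clook_seek(requests, head):
    above = [t for t in requests if t >= head]
    below = [t for t in requests if t < head]
    total = max(above) - head            # sweep up to the last request
    if below:
        total += max(above) - min(below) # jump to the other end's last request
        total += max(below) - min(below) # sweep up again on the low side
    return total

reqs = [82, 170, 43, 140, 24, 16, 190]
print(look_seek(reqs, 50))    # (190-50)+(190-16)         = 314
print(clook_seek(reqs, 50))   # (190-50)+(190-16)+(43-16) = 341
```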
PRACTICE QUESTIONS

1) Explain the main purpose of an operating system?

Operating systems exist for two main purposes. One is that it is designed to make sure a computer system performs
well by managing its computational activities. Another is that it provides an environment for the development and
execution of programs.

2) What is demand paging?

Demand paging is a scheme in which not all of a process's pages are kept in RAM; the OS brings a
missing (and required) page from the disk into RAM only when it is referenced.

3) What are the advantages of a multiprocessor system?

With an increased number of processors, there is a considerable increase in throughput. It can also save more
money because they can share resources. Finally, overall reliability is increased as well.

4) What is kernel?

A kernel is the core of every operating system. It connects applications to the actual processing of data. It also
manages all communications between software and hardware components to ensure usability and reliability.

5) What are real-time systems?

Real-time systems are used when rigid time requirements have been placed on the operation of a processor. It has
well defined and fixed time constraints.

6) What is a virtual memory?

Virtual memory is a memory management technique that allows processes to execute without being entirely in main
memory. This is very useful, especially when an executing program cannot fit in the physical memory.

7) Describe the objective of multiprogramming.

The main objective of multiprogramming is to have a process running at all times. With this design, CPU
utilization is said to be maximized.

8) What is a time-sharing system?

In a Time-sharing system, the CPU executes multiple jobs by switching among them, also known as
multitasking. This process happens so fast that users can interact with each program while it is running.

9) What is SMP?

SMP is a short form of Symmetric Multi-Processing. It is the most common type of multiple-processor systems.
In this system, each processor runs an identical copy of the operating system, and these copies communicate
with one another as needed.
10) How are server systems classified?

Server systems can be classified as either computer-server systems or file server systems. In the first case, an
interface is made available for clients to send requests to perform an action. In the second case, provisions are
available for clients to create, access and update files.

11) What is asymmetric clustering?

In asymmetric clustering, a machine is in a state known as hot standby mode, where it does nothing but monitor
the active server. That machine takes over the active server's role should the server fail.

12) What is a thread?

A thread is a basic unit of CPU utilization. In general, a thread is composed of a thread ID, program counter,
register set, and the stack.

13) Give some benefits of multithreaded programming.

– there is increased responsiveness to the user


– resource sharing within the process
– economy
– utilization of multiprocessing architecture
14) Briefly explain FCFS.

FCFS stands for First-come, first-served. It is one type of scheduling algorithm. In this scheme, the process that
requests the CPU first is allocated the CPU first. Implementation is managed by a FIFO queue.

15) What is RR scheduling algorithm?

RR (round-robin) scheduling algorithm is primarily aimed at time-sharing systems. A circular queue is set up in
such a way that the CPU scheduler goes around the queue, allocating the CPU to each process for a time interval of
up to around 10 to 100 milliseconds.

16) What are necessary conditions which can lead to a deadlock situation in a system?

Deadlock situations occur when four conditions occur simultaneously in a system: Mutual exclusion; Hold and
Wait; No preemption; and Circular wait.

17) What factors determine whether a detection-algorithm must be utilized in a deadlock avoidance system?

One is that it depends on how often a deadlock is likely to occur under the implementation of this algorithm. The
other has to do with how many processes will be affected by deadlock when this algorithm is applied.

18) State the main difference between logical from physical address space.

Logical address refers to the address that is generated by the CPU. On the other hand, physical address refers to
the address that is seen by the memory unit.

19) How does dynamic loading aid in better memory space utilization?
With dynamic loading, a routine is not loaded until it is called. This method is especially useful when large amounts
of code are needed in order to handle infrequently occurring cases such as error routines.

20) What is the basic function of paging?

Paging is a memory management scheme that permits the physical address space of a process to be
noncontiguous. It avoids the considerable problem of having to fit varied sized memory chunks onto the backing
store.
FORMAT OF INTERNAL QUESTION PAPER

1st Internal Examination (2020)

Course: Semester:
Subject: Course Code:
Max. Marks: 40 Max. Time: 2 Hours

Instructions (if any):- Use of calculator for subjects like Financial Mgt. Operation etc. allowed if required.
(Scientific calculator is not allowed).
Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two question from section 2.
Section 1
(Theoretical Concept and Practical/Application oriented)
Answer in 400 words. Each question carry 06 marks.
Q. 1
Q. 2
Q.3
Q. 4
Q.5 Write Short Note on any two. Answer in 300 words. Each carry 03 marks.
a)
b)
c)

Section 2
(Analytical Question / Case Study / Essay Type Question to test analytical and Comprehensive Skills)

Answer in 800 words. Attempt any 2 questions.Each question carry 11 marks


Q6.
Q7.
Q8.
PREVIOUS YEAR UNIVERSITY QUESTION PAPERS
Previous year Internal Question papers
1st Internal Examination (2019)

Course: BCA Semester: III


Subject: Operating Systems Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours

Instructions (if any): Use of unfair means will lead to cancellation of the paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from Section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain static and dynamic relocation.
Q. 2 What do you understand by input output interface?
Q.3 Explain various file access methods.
Q. 4 What do you understand by swapping?
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Internal Fragmentation
b) First fit and next fit
c) File types

Section 2

Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q. 6 What do you understand by deadlocks? Explain deadlock prevention, deadlock avoidance and
deadlock detection & recovery?
Q. 7 Explain interrupt driven I/O and DMA?
Q. 8 Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving toward the outer tracks. Calculate the total track movement
and the time required to service all these requests (consider seek time = 0.15 ms) in case of:
• Shortest seek time first
1st Internal Examination (2019)

Course: BCA Semester: III


Subject: Operating Systems Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours

Instructions (if any): Use of a calculator is allowed, if required, for subjects like Financial Mgt., Operations, etc.
(Scientific calculators are not allowed.)
Use of unfair means will lead to cancellation of the paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from Section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain operating system services for process management.
Q. 2 What are different states of a process?
Q.3 Explain SJF and Multilevel Scheduling algorithms?
Q. 4 Explain Different types of operating systems.
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Paging
b) ABORT System Call
c) Network OS

Section 2

Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q6. Explain different types of schedulers with the help of diagram.
Q7. Explain virtual memory, demand paging, page replacement and page replacement algorithms.
Q8. What is a PCB? Explain what type of information is stored in the PCB.
1st Internal Examination (February, 2020)
(2014 Course)
Course: BCA Semester: III
Subject: Operating Systems Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours
Instructions (if any): Use of a calculator is allowed, if required, for subjects like Financial Mgt., Operations, etc.
(Scientific calculators are not allowed.)
Use of unfair means will lead to cancellation of the paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from Section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain Multitasking and multiprocessing operating systems.
Q. 2 What are different states of a process?
Q.3 Explain SRTN and FCFS scheduling algorithms?
Q. 4 Explain Functions of operating systems.
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Suspend System Call
b) Resume System Call
c) Real time OS

Section 2

Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q6. Explain different types of schedulers with the help of diagrams.
Q7. Explain different views of operating systems.
Q8. Explain the concept of a process, process relationship and implicit and explicit tasking.
Bharati Vidyapeeth (Deemed to be University) Institute of
Management and Research (BVIMR), New Delhi
1st Internal Examination (September, 2018)
Course: BCA Semester: III
Subject: Operating System Concepts Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours

Q. 1 Attempt any five questions. Answer in 50 words (Recall) [5 x 2]

a) What do you understand by monitors?


b) What is process relationship?
c) What is inter process communication?
d) What is implicit and explicit tasking?
e) Explain in brief about batch operating system.
f) Explain inter-process signaling.
g) Explain in brief about SJF scheduling algorithm.
h) What is the difference between preemptive and non-preemptive scheduling algorithms?

Q. 2 Attempt any two questions. Answer in 200 words (Theoretical Concept) [2 x 5]

a) Explain different views of operating system.


b) Explain the process concept. What are various process management functions performed by operating system?
c) Explain process states with the help of diagram.
Q.3 Attempt any two questions. Answer in 200 words (Practical/Application oriented) [2 x 5]

a) Explain PCB in Detail


b) Write any 5 operating system services for process management?
c) Write an algorithm to achieve mutual exclusion with semaphores.
d) What are the various services provided by operating system?

Q.4 Attempt any one. Answer in 600 words (Analytical Question / Case Study / Essay Type Question to test
analytical and Comprehensive Skills) [1x10]
a) Explain different types of schedulers with the help of diagram? Also explain round robin, SRTN
and MLQ scheduling with the help of diagram.
b) Write short notes on any two of the following :
i) Multiprocessing
ii) Mutual Exclusion
iii) Scheduling and performance criteria
Bharati Vidyapeeth (Deemed to be University) Institute of
Management and Research (BVIMR), New Delhi
2nd Internal Examination (October 2018)

Course : BCA Semester : III


Subject: Operating System Concepts Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours

Instructions (if any): Use of a calculator is allowed for accounting/mathematics subjects, if required. Give examples
& diagrammatic representations wherever possible.

Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.


Attempt any two questions from Section 2.
Each Question in Section 1 carries 6 marks & Each Question in Section 2 carries 11 marks

Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1. What do you understand by contiguous and non-contiguous memory allocation? Explain with the help
of diagrams and tables.
Q. 2. What do you understand by I/O systems and the I/O interface?
Q.3. What are reusable and consumable resources? What are the various conditions for a deadlock to occur?
Q. 4. Explain any three methods of free space management in disk.
Q.5. Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) File system structure.
b) Deadlocks
c) Paging

Section 2

Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.


Q6. Explain different methods of disk scheduling with the help of diagrams. For example, consider a disk queue with
requests for I/O to blocks on cylinders 95, 181, 34, 119, 19, 128, 61, 73, in that order.
Q7. Explain various allocation methods in file systems. Also explain the concept of protection in file systems.
Q8. Explain FIFO and LRU replacement algorithms with the following reference string 125,
B3, 125, 202, 125, 162, 125, B3, 179, B3, B1, 202
2nd Internal Examination (March 2020)
2018 Course

Course: BCA Semester: III


Subject: Operating Systems Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours

Instructions (if any): Use of unfair means will lead to cancellation of the paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from Section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain static and dynamic relocation.
Q. 2 What do you understand by input output interface?
Q.3 Explain various file access methods.
Q. 4 What do you understand by swapping?
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Internal Fragmentation
b) First fit and next fit
c) File types

Section 2

Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q. 6 What do you understand by deadlocks? Explain deadlock prevention, deadlock avoidance and
deadlock detection & recovery?
Q. 7 Explain interrupt driven I/O and DMA?
Q. 8 Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving toward the outer tracks. Calculate the total track movement
and the time required to service all these requests (consider seek time = 0.15 ms) in case of:
• Shortest seek time first
2nd Internal Examination (March 2020)
2014 Course

Course: BCA Semester: III


Subject: Operating Systems Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours

Instructions (if any): Use of unfair means will lead to cancellation of the paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from Section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain static and dynamic relocation.
Q. 2 Write an algorithm to solve producer consumer problem.
Q.3 Explain various file access methods.
Q. 4 What do you understand by Semaphore?
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) External Fragmentation
b) Best fit and Worst fit
c) File attributes

Section 2

Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q. 6 What do you understand by directory? Explain different directory structures with the help of
diagram.
Q. 7 Explain interrupt driven I/O and DMA?
Q. 8 Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving toward the outer tracks. Calculate the total track movement
and the time required to service all these requests (consider seek time = 0.15 ms) in case of:
• First come first served
2nd Internal Examination (2019)

Course: BCA Semester: III


Subject: Operating Systems Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours

Instructions (if any): Use of unfair means will lead to cancellation of the paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from Section 2.
Section 1
(Theoretical Concept and Practical/Application oriented)
Answer in 400 words. Each question carries 06 marks.
Q. 1 What do you understand by static and dynamic memory allocation?
Q. 2 What are file attributes and file operations?
Q.3 What do you understand by input output interface?
Q. 4 What is segmentation? Explain with the help of diagram.
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) External Fragmentation
b) Single level and two level directory
c) Best Fit and Worst Fit

Section 2

Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q6. Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving toward the outer tracks. Calculate the total track movement
and the time required to service all these requests (consider seek time = 0.15 ms) in case of:
• First come first served
Q7. Explain Programmed I/O and interrupt driven I/O?
Q8. What do you understand by deadlocks? Explain deadlock prevention, deadlock avoidance and deadlock detection
& recovery?
Declaration by Faculty
We, Daljeet Singh Bawa and Nisha Malhotra, Visiting Faculty teaching the Operating System subject in the
BCA (Morning) course, IIIrd semester, have incorporated all the necessary pages, sections and question papers
mentioned in the checklist above.

Nisha Malhotra Daljeet Singh Bawa
