Course Pack OS PDF
1 Syllabus
3 Evaluation Criteria
4 Books Recommendation
5 Session Plan
Reference Books:
• Operating Systems Principles, Galvin, Wiley, 7th Edition
• Operating Systems Principles, Galvin, Wiley, 9th Edition
• Operating Systems Concepts and Design, Milan Milenkovic, TMH
Course Plan
UNIT Contents
1 Introduction to Operating System:
Definition and concept of OS, History of OS, importance and functions of an operating system. Types of OS, Views - command language user's view, system call user's view, structure of OS, command line interface, GUI, system calls
2 Process Management:
Process concept, Process Control Block, process
states and its transitions, context switch, OS
services for Process management, scheduling and
types of schedulers and scheduling algorithm
3 Storage Management:
Basic concepts of storage management: logical and physical address space, swapping, contiguous allocation, non-contiguous allocation, fragmentation, segmentation, paging, demand paging, virtual memory, page replacement algorithms, design issues of paging, and thrashing
4 Inter-process communication and
synchronization
Mutual exclusion, semaphore, busy-wait implementation, characteristics of semaphore, queuing implementation of semaphore, producer-consumer problem, critical region and conditional critical region, and deadlock
For Internal Circulation
5 File Systems
Files - basic concepts, file attributes, operations, file types, file structure, access methods; Directory structure - single-level directory system and directory systems
6 Input/output System:
Principles of I/O hardware, I/O devices, device controller, DMA, principles of I/O software - goals, interrupt handler, device driver. Mass storage structure - disk structure and disk scheduling
Course Overview:
This course gives you a general understanding of how a computer works. This includes concepts related to computer system architecture and the key functions of an operating system in managing hardware resources.
It focuses on the basic principles of operating systems: process management, memory management, input/output management, and file management. The course also covers the concept of mutual exclusion and the various algorithms that attempt to solve this problem.
Operating System, History of OS, OS Types, Operating System Structures - Command Interpreter Systems, Operating System Services, System Calls, System Programs
Process Concept, Process Control Block(PCB), Process Scheduling, CPU – Scheduling – Basic Concepts,
Scheduling Algorithms – FIFO, RR, SJF, Multi Level, Multi Level Feedback
concept of Logical and Physical Address Space, Swapping, Contiguous Allocation, Paging, Segmentation, Virtual
Memory- Demand Paging, Page Replacement, Page Replacement Algorithms, Allocation of Frames, Thrashing and
Demand Segmentation.
Concept and need of inter-process communication, Mutual Exclusion, Semaphore Definition, Busy Wait Implementation, Characteristics of Semaphore, Queuing Implementation of Semaphore, Producer Consumer Problem, Critical Region and Conditional Critical Region.
Conditions for deadlock to occur, Reusable and Consumable Resources, Deadlock Prevention, Deadlock Avoidance, Resource Request, Resource Release, Detection and Recovery.
File Concepts, Access Methods, Directory Structure, Protection, File System Structure, Allocation Methods,
Free Space Management.
Overview of I/O Systems, I/O Interface, Secondary Storage Structure- Disk Structure, Disk Scheduling, Case Study:-
UNIX, LINUX, WINDOWS Operating System and Overview of ANDROID Operating System
Learning Outcome:
CO2: Understand the concept of process, PCB, Process and CPU scheduling and Scheduling
Algorithms
CO4: Explain Mutual exclusion, Semaphore Definition, Busy wait implementation, Characteristics of
Semaphore, Queuing Implementation of Semaphore, Producer Consumer Problem, Critical region and
conditional critical region and deadlock.
CO5: Explain File Concepts, Access Methods, Directory Structure, Protection, File System Structure,
Allocation Methods, Free Space Management
CO6: Overview of I/O Systems, I/O Interface, Secondary Storage Structure- Disk Structure, Disk Scheduling
1. Evaluation Criteria:
Text Books: A: Operating Systems Principles, Galvin, Wiley, 7th Edition; B: Operating Systems Principles, Galvin, Wiley, 9th Edition
Course Reading: Operating Systems Concepts and Design, Milan Milenkovic, TMH
Internet Resources:
http://www.tutorialspoint.com/operating_system/operating_system_tutorial.pf
http://www.cs.utexas.edu/users/witchel/372/lectures/01.OSHistory.pdf
Session Plan:
Dr. Daljeet Singh Bawa holds a PhD in Computer Science and is presently working as an Assistant Professor in the IT Department at Bharati Vidyapeeth University Institute of Management and Research, New Delhi. He has also completed an M.Phil (Computer Science) and loves experimenting with new software. His areas of specialization are Software Engineering, Operating Systems, Computer Organization and Architecture, e-learning and e-assessment, and he has rich experience of working with live software projects. His research work revolves around e-learning, blended learning and e-assessment, and he has 24 research papers to his credit. He can be contacted at [email protected].
NISHA MALHOTRA
Mobile: 9899995540
E-Mail: [email protected]
Academic Qualifications
I have done an M.Tech. from Netaji Subhas Institute of Technology (NSIT), Delhi University, and a B.Tech. (Computer Science Engineering) from N.C. College of Engineering, Kurukshetra University.
I have also done a CISCO certification: CISCO Certified Network Associate (CCNA).
Teaching Subjects
Java Programming
C# programming
Linux operating system
Data Structures
Database Management System
Operating System
Software Engineering
C, C++
Object oriented programming and analysis
Compiler design
Work Experience
Did training and developed the Joining Report Module in the Integrated Management Information Dissemination System (IMIS) at the Defence Research and Development Organization (DRDO), Ministry of Defence, Delhi.
Did training in Virtual Network Computing at the Information Technology Department, AAI (Airports Authority of India), Safdarjung Airport, New Delhi.
Worked as a Lecturer in the Computer Science Department at The Gate Academy Institute, Pitampura, New Delhi.
Worked at IITM, affiliated with G.G.S. Indraprastha University, Delhi, as an Assistant Professor.
Works at Bharati Vidyapeeth Institute of Management and Research, Paschim Vihar, Delhi, as a visiting faculty.
Also works at Bharati Vidyapeeth College of Engineering, Paschim Vihar, Delhi, as a visiting faculty.
STUDY NOTES
UNIT 1
Definition and concept of OS
History of OS
Importance and functions of an operating system
Types of OS
Views - command language user's view, system call user's view, structure of OS
Command line interface, GUI, system calls
Operating system
An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system.
An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is software which performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating System, VMS, OS/400,
AIX, z/OS, etc.
Following are some of the important functions of an operating system:
Memory Management
Processor Management
Device Management
File Management
Security
Control over system performance
Job accounting
Error detecting aids
Coordination between other software and users
Applications of Operating System
Following are some of the important activities that an Operating System performs −
Security − By means of password and similar other techniques, it prevents unauthorized access to
programs and data.
Control over system performance − Recording delays between request for a service and response from the
system.
Job accounting − Keeping track of time and resources used by various jobs and users.
Error detecting aids − Production of dumps, traces, error messages, and other debugging and error
detecting aids.
Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.
An operating system is a program that acts as an interface between the user and the computer hardware and controls
the execution of all kinds of programs.
Disadvantages of Time-Sharing OS:
Reliability problem
One must take care of the security and integrity of user programs and data
Data communication problem
Advantages of Distributed Operating System:
Failure of one system will not affect communication among the others, as all systems are independent of each other
Electronic mail increases the data exchange speed
Since resources are being shared, computation is highly fast and durable
Load on host computer reduces
These systems are easily scalable as many systems can be easily added to the network
Delay in data processing reduces
Disadvantages of Distributed Operating System:
Failure of the main network will stop the entire communication
The languages used to establish distributed systems are not yet well defined
These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not yet well understood
Advantages of RTOS:
Maximum Consumption: Maximum utilization of devices and the system, and thus more output from all the resources
Task Shifting: The time assigned for shifting tasks in these systems is very small. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take 3 microseconds.
Focus on Application: Focus is on running applications, and less importance is given to applications waiting in the queue.
Real-time operating systems in embedded systems: Since program sizes are small, an RTOS can also be used in embedded systems, such as in transport and others.
Error Free: These types of systems are error-free.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
Limited Tasks: Very few tasks run at the same time, and concentration is kept on a few applications to avoid errors.
Use of heavy system resources: Sometimes the system resources required are not so good, and they are expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.
Thread Priority: It is not good to set thread priority, as these systems rarely switch tasks.
Examples of Real-Time Operating Systems are: Scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, air traffic control systems, etc.
The operating system can be observed from the point of view of the user or the system. This is known as the user
view and the system view respectively. More details about these are given as follows −
User View
The user view depends on the system interface that is used by the users. The different types of user view
experiences can be explained as follows −
If the user is using a personal computer, the operating system is largely designed to
make the interaction easy. Some attention is also paid to the performance of the system,
but there is no need for the operating system to worry about resource utilization. This is
because the personal computer uses all the resources available and there is no sharing.
If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery
level of the device is also taken into account.
Some devices have little or no user view because there is no interaction with the users. Examples are embedded computers in home devices, automobiles, etc.
System View
According to the computer system, the operating system is the bridge between applications and hardware. It is most
intimate with the hardware and is used to control it as required.
The different types of system view for operating system can be explained as follows:
The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc. that
are required by processes for execution. It is the duty of the operating system to
allocate these resources judiciously to the processes so that the computer system can
run as smoothly as possible.
The operating system can also work as a control program. It manages all the processes
and I/O devices so that the computer system works smoothly and there are no errors. It
makes sure that the I/O devices work in a proper manner without creating problems.
Operating systems can also be viewed as a way to make using hardware easier. Computers are needed to solve user problems easily. However, it is not easy to work directly with computer hardware, so operating systems were developed to communicate with the hardware easily.
An operating system can also be considered as a program running at all times in the
background of a computer system (known as the kernel) and handling all the application
programs. This is the definition of the operating system that is generally followed.
The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plug boards). Programming languages were unknown (not even assembly languages). Operating systems were unheard of.
By the early 1950s, the routine had improved somewhat with the introduction of punch cards. The General Motors Research Laboratories implemented the first operating system in the early 1950s for their IBM 701. The systems of the '50s generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups or batches.
For example, on the system with no multiprogramming, when the current job paused to wait for other I/O
operation to complete, the CPU simply sat idle until the I/O finished. The solution for this problem that evolved
was to partition memory into several pieces, with a different job in each partition. While one job was waiting for
I/O to complete, another job could be using the CPU.
Another major feature in third-generation operating systems was the technique called spooling (simultaneous peripheral operations on line). In spooling, a high-speed device like a disk is interposed between a running program and a low-speed device involved in the program's input/output. Instead of writing directly to a printer, for example, outputs are written to the disk. Programs can run to completion faster, and other programs can be initiated sooner; when the printer becomes available, the outputs can be printed.
Note that the spooling technique is much like thread being spun onto a spool so that it may later be unwound as needed.
Another feature present in this generation was time-sharing technique, a variant of multiprogramming technique, in
which each user has an on-line (i.e., directly connected) terminal. Because the user is present and interacting with
the computer, the computer system must respond quickly to user requests, otherwise user productivity could suffer.
Timesharing systems were developed to multiprogram a large number of simultaneous interactive users.
Fourth Generation
With the development of LSI (Large Scale Integration) circuits and chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point that it became possible to build desktop computers as powerful as the mainframes of the 1970s. Two operating systems have dominated the personal computer scene: MS-DOS, written by Microsoft, Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX, which is dominant on the large personal computers using the Motorola 68000 CPU family.
1. Multiprogramming – a computer running more than one program at a time (like running Excel and Firefox simultaneously).
In a modern computing system, there are usually several concurrent application processes which want to execute.
Now it is the responsibility of the Operating System to manage all the processes effectively and efficiently.
One of the most important aspects of an operating system is its ability to multiprogram.
In a computer system, there are multiple processes waiting to be executed, i.e. they are waiting for the CPU to be allocated to them so they can begin their execution. These processes are also known as jobs. The main memory is too small to accommodate all of these processes or jobs at once, so they are initially kept in an area called the job pool. The job pool consists of all those processes awaiting allocation of main memory and CPU.
The OS selects one job out of all these waiting jobs, brings it from the job pool into main memory, and starts executing it. The processor executes that job until it is interrupted by some external factor or the job goes for an I/O task.
2. Multiprocessing –
In a uni-processor system, only one process executes at a time.
Multiprocessing is the use of two or more CPUs (processors) within a single Computer system. The term also
refers to the ability of a system to support more than one processor within a single computer system.
Now, since there are multiple processors available, multiple processes can be executed at a time. These multiprocessors share the computer bus, and sometimes the clock, memory, and peripheral devices as well.
3. Multitasking –
As the name itself suggests, multitasking refers to the execution of multiple tasks (processes, programs, threads, etc.) at a time. In modern operating systems, we are able to play MP3 music, edit documents in Microsoft Word, and browse the web in Google Chrome all simultaneously; this is accomplished by means of multitasking.
Multitasking is a logical extension of multiprogramming. The major way in which multitasking differs from multiprogramming is that multiprogramming works solely on the concept of context switching, whereas multitasking is based on time sharing alongside the concept of context switching.
Context Switching
Context switching is a process that involves switching the CPU from one process or task to another. In this phenomenon, the execution of the process that is in the running state is suspended by the kernel, and another process that is in the ready state is executed by the CPU.
It is one of the essential features of a multitasking operating system. The processes are switched so quickly that it gives the user an illusion that all the processes are being executed at the same time.
The context switching process involves a number of steps that need to be followed. You can't directly switch a process from the running state to the ready state; you have to save the context of that process. If you do not save the context of a process P, then later, when process P comes to the CPU for execution again, it will start executing from the beginning. In reality, it should continue from the point where it left the CPU in its previous execution. So, the context of a process should be saved before putting any other process in the running state.
A context is the contents of a CPU's registers and program counter at any point in time. Context switching
can happen due to the following reasons:
When a high-priority process comes to the ready state. In this case, the execution of the running process should be stopped and the higher-priority process should be given the CPU for execution.
When an interrupt occurs, the process in the running state should be stopped and the CPU should handle the interrupt before doing anything else.
When a transition between user mode and kernel mode is required, you have to perform a context switch.
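The save-and-restore step described above can be sketched in miniature. This is an illustrative model only: the Process attributes and the cpu dictionary are invented for the example and do not correspond to any real kernel structure.

```python
class Process:
    """A toy PCB holding just the saved context (program counter + registers)."""
    def __init__(self, name, pc=0, registers=None):
        self.name = name
        self.pc = pc
        self.registers = dict(registers or {})

def context_switch(cpu, running, nxt):
    """Save the running process's context from the CPU, then restore the
    next process's previously saved context onto the CPU."""
    running.pc, running.registers = cpu["pc"], dict(cpu["registers"])  # save
    cpu["pc"], cpu["registers"] = nxt.pc, dict(nxt.registers)          # restore
    return nxt
```

Because process A's program counter and registers are saved before B's are loaded, A can later resume from exactly the point where it left the CPU, as the text requires.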
System Calls in OS
In computing, a system call is the programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on. A system call is a way for programs to interact with
the operating system. A computer program makes a system call when it makes a request to the operating system’s
kernel. System calls provide the services of the operating system to user programs via the Application Program Interface (API). They provide an interface between a process and the operating system, allowing user-level processes to request services of the operating system. System calls are the only entry points into the kernel. All programs needing resources must use system calls.
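In Python, several functions in the standard os module are thin wrappers around system calls, which makes the request-to-the-kernel path easy to observe. A small sketch (fd 1 is standard output on POSIX-style systems):

```python
import os

# os.getpid() and os.write() are thin wrappers over the getpid() and
# write() system calls: each call crosses into the kernel and back.
pid = os.getpid()                       # ask the kernel for this process's ID
msg = f"hello from pid {pid}\n".encode()
written = os.write(1, msg)              # fd 1: standard output
```

The program never touches the kernel's data structures directly; it only requests the service through the system-call interface.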
• LAYERED STRUCTURE
Working
There are six layers in the system, each with different purposes.
Layer Function
5 The operator
4 User Programs
3 Input/Output Management
2 Operator-process communication
1 Memory and drum management
0 Processor allocation and multiprogramming
Layer 0 – Processor Allocation and Multiprogramming – This layer deals with the allocation of processor,
switching between the processes when interrupts occur or when the timers expire.
The sequential processes can be programmed individually without having to worry about other processes running on the processor. That is, layer 0 provides the basic multiprogramming of the CPU.
Layer 1 – Memory and Drum Management – This layer deals with allocating memory to the processes in the main
memory. The drum is used to hold parts of the processes (pages) for which space couldn’t be provided in the main
memory. The processes don’t have to worry if there is available memory or not as layer 1 software takes care of
adding pages wherever necessary.
Layer 2 – Operator-Process communication – In this layer, each process communicates with the operator (user)
through the console. Each process has its own operator console and can directly communicate with the operator.
Layer 3 – Input/Output Management – This layer handles and manages all the I/O devices, and it buffers the
information streams that are made available to it. Each process can communicate directly with the abstract I/O
devices with all of its properties.
Layer 4 – User Programs – The programs used by the user are operated in this layer, and they don’t have to worry
about I/O management, operator/processes communication, memory management, or the processor allocation.
Layer 5 – The Operator – The system operator process is located in the outer most layer.
Simple Structure
There are many operating systems that have a rather simple structure. These started as small systems and rapidly expanded far beyond their original scope. A common example of this is MS-DOS. It was designed simply for a niche group of people, and there was no indication that it would become so popular.
Process concept
Process Control Block
process states and its transitions
context switch
OS services for Process management
scheduling and types of schedulers
scheduling algorithm
A single program can create many processes when run multiple times; for example, when we open a .exe or binary
file multiple times, multiple instances begin (multiple processes are created).
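This can be demonstrated by launching the same one-line program several times; each launch gets its own process and PID. The helper name spawn_instances is our own, and the sketch assumes a standard Python interpreter is available:

```python
import subprocess
import sys

def spawn_instances(n):
    """Run the same tiny program n times; the OS creates a fresh process
    (with its own PID and process control block) for every run."""
    pids = []
    for _ in range(n):
        result = subprocess.run(
            [sys.executable, "-c", "import os; print(os.getpid())"],
            capture_output=True, text=True, check=True)
        pids.append(int(result.stdout))
    return pids
```

Each element of the returned list is a different PID, confirming that one program produced several distinct processes.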
Attributes or Characteristics of a Process
1. When a high-priority process comes to ready state (i.e. with higher priority than the running process)
2. An Interrupt occurs
3. User and kernel mode switch (It is not necessary though)
4. Preemptive CPU scheduling used.
Note: For process states, the process state diagram, types of schedulers, and CPU scheduling algorithms, refer to the handwritten notes.
Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to a process for a limited amount of time and then taken away, and the process is placed back in the ready queue if it still has CPU burst time remaining. That process stays in the ready queue until it gets its next chance to execute.
Algorithms based on preemptive scheduling are: Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (preemptive version), etc.
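Round Robin, the classic preemptive algorithm listed above, can be simulated in a few lines. This is a teaching sketch that assumes all processes arrive at time 0 and ignores context-switch overhead; the function name and return format are our own:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin over CPU burst times (all arriving at t=0).
    Returns {pid: completion_time}."""
    ready = deque(enumerate(bursts))        # FIFO ready queue of (pid, remaining)
    time, completion = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)       # run for at most one quantum
        time += run
        if remaining > run:
            ready.append((pid, remaining - run))   # preempted: back of the queue
        else:
            completion[pid] = time          # burst finished
    return completion
```

For bursts [5, 3, 1] and quantum 2, the shortest job finishes first at t=5 even though it was submitted last, which is exactly the fairness effect preemption buys.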
2. Non-Preemptive Scheduling:
Non-preemptive scheduling is used when a process terminates, or when a process switches from the running to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution. Instead, it waits until the process completes its CPU burst time, and then it can allocate the CPU to another process.
Algorithms based on non-preemptive scheduling are: Shortest Job First (SJF, basically non-preemptive) and Priority (non-preemptive version), etc.
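Non-preemptive SJF with all jobs available at time 0 reduces to "run the shortest burst first", which makes the average waiting time easy to compute. A minimal sketch (the function name and the arrival-at-zero assumption are ours):

```python
def sjf_average_wait(bursts):
    """Non-preemptive Shortest Job First, all jobs arriving at t=0:
    execute bursts in ascending order and average the waiting times."""
    wait_total, elapsed = 0, 0
    for burst in sorted(bursts):    # shortest burst scheduled first
        wait_total += elapsed       # this job waited for everything before it
        elapsed += burst            # it then holds the CPU for its whole burst
    return wait_total / len(bursts)
```

For bursts [6, 8, 7, 3] the schedule is 3, 6, 7, 8 with waiting times 0, 3, 9, 16, giving an average wait of 7.0.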
Note: For the shortest remaining time next scheduling algorithm (the preemptive version of shortest job first), refer to the handwritten notes.
All three different types of processes have their own queue, and each queue has its own scheduling algorithm. For example, queues 1 and 2 may use Round Robin while queue 3 uses FCFS to schedule their processes.
Scheduling among the queues: What will happen if all the queues have some processes? Which process should get the CPU? To determine this, scheduling among the queues is necessary. There are two ways to do so –
1. Fixed priority preemptive scheduling method – Each queue has absolute priority over lower priority queues. Let us consider the following priority order: queue 1 > queue 2 > queue 3. According to this algorithm, no process in the batch queue (queue 3) can run unless queues 1 and 2 are empty. If any batch process (queue 3) is running and any system process (queue 1) or interactive process (queue 2) enters the ready queue, the batch process is preempted.
2. Time slicing – In this method each queue gets a certain portion of CPU time and can use it to schedule its own processes. For instance, queue 1 takes 50 percent of CPU time and queue 2 takes 30 percent.
Multilevel Feedback Queue (MLFQ) CPU Scheduling (also refer to handwritten notes)
This scheduling is like Multilevel Queue (MLQ) scheduling, but here processes can move between the queues. Multilevel Feedback Queue (MLFQ) scheduling keeps analyzing the behavior (time of execution) of processes and changes their priority accordingly.
Now let us suppose that queues 1 and 2 follow round robin with time quanta 4 and 8 respectively, and queue 3 follows FCFS. One implementation of MLFQ is given below –
2. In queue 1, a process executes for 4 units. If it completes within these 4 units, or gives up the CPU for an I/O operation within these 4 units, its priority does not change, and when it comes back to the ready queue it again starts execution in queue 1.
3. If a process in queue 1 does not complete in 4 units, its priority gets reduced and it is shifted to queue 2.
4. Points 2 and 3 above are also true for queue 2 processes, but the time quantum is 8 units. In general, if a process does not complete within its time quantum, it is shifted to the lower priority queue.
6. A process in a lower priority queue can execute only when the higher priority queues are empty.
7. A process running in a lower priority queue is interrupted by a process arriving in a higher priority queue.
Problems in the above implementation – A process in a lower priority queue can suffer from starvation if a stream of short processes takes all the CPU time.
Solution – A simple solution is to boost the priority of all processes at regular intervals and place them all in the highest priority queue.
To optimize turnaround time, algorithms like SJF are needed, which require the running time of processes in order to schedule them. But the running time of a process is not known in advance. MLFQ runs a process for a time quantum and then can change its priority (if it is a long process). Thus it learns from the past behavior of a process and predicts its future behavior. In this way it tries to run shorter processes first, thus optimizing turnaround time.
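The demotion rule described in points 2-4 can be simulated directly. A sketch under the same assumptions as the text (queues 1 and 2 are round robin with quanta 4 and 8, the last queue is FCFS, all processes arrive at t=0; the naming is ours):

```python
from collections import deque

def mlfq(bursts, quanta=(4, 8)):
    """Three-level feedback queue: level 0 (RR, q=4), level 1 (RR, q=8),
    level 2 (FCFS). A process that exhausts its quantum is demoted.
    Returns {pid: completion_time}."""
    queues = [deque(enumerate(bursts)), deque(), deque()]
    time, done = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty
        pid, remaining = queues[level].popleft()
        run = remaining if level == 2 else min(quanta[level], remaining)
        time += run
        remaining -= run
        if remaining == 0:
            done[pid] = time
        else:
            queues[level + 1].append((pid, remaining))       # demote
    return done
```

With bursts [3, 20], the short job completes at t=3 in the top queue, while the long job is demoted twice and finishes at t=23, mirroring how MLFQ infers job length from observed behavior.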
Storage Management:
Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes, decides which process will get memory at what time, and tracks whenever some memory gets freed or unallocated, updating the status correspondingly.
The process address space is the set of logical addresses that a process references in its code. For example, when 32-
bit addressing is in use, addresses can range from 0 to 0x7fffffff; that is, 2^31 possible numbers, for a total
theoretical size of 2 gigabytes.
The operating system takes care of mapping the logical addresses to physical addresses at the time of memory
allocation to the program. There are three types of addresses used in a program before and after memory is allocated
−
1. Symbolic addresses – The addresses used in a source code. The variable names, constants, and instruction labels are the basic elements of the symbolic address space.
2. Relative addresses – At the time of compilation, a compiler converts symbolic addresses into relative addresses.
3. Physical addresses – The loader generates these addresses at the time when a program is loaded into main memory.
Virtual and physical addresses are the same in compile-time and load-time address-binding schemes. Virtual and
physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address space. The set of all
physical addresses corresponding to these logical addresses is referred to as a physical address space.
The runtime mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a physical address:
The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.
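The base-register relocation just described is one line of arithmetic plus a limit check. A sketch using the document's own numbers (the limit value of 14000 is an arbitrary assumption for the example):

```python
def relocate(logical_address, base=10000, limit=14000):
    """MMU-style dynamic relocation: every logical address is treated as
    an offset from the base register. Addresses at or beyond the limit
    register would trap to the operating system."""
    if not 0 <= logical_address < limit:
        raise ValueError("logical address outside limit register: trap to OS")
    return base + logical_address
```

With a base register of 10000, logical address 100 maps to physical address 10100, matching the example above.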
The user program deals with virtual addresses; it never sees the real physical addresses.
The operating system uses the following memory allocation mechanisms:
1. Single-partition allocation – In this type of allocation, the relocation-register scheme is used to protect user processes from each other, and from changes to operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.
2. Multiple-partition allocation – In this type of allocation, main memory is divided into a number of fixed-sized partitions where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. It sometimes happens that processes cannot be allocated to memory blocks because the blocks are too small, and these memory blocks remain unused. This problem is known as fragmentation.
Fragmentation is of two types −
1. External fragmentation – Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation – The memory block assigned to a process is bigger than requested. Some portion of the memory is left unused, as it cannot be used by another process.
Paging
A computer can address more memory than the amount physically installed on the system. This extra memory is
called virtual memory, and it is a section of the hard disk that is set up to emulate the computer's RAM. The
paging technique plays an important role in implementing virtual memory.
Paging is a memory management technique in which the process address space is broken into blocks of the same size,
called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of a process is measured in the
number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called frames. The size of
a frame is kept the same as that of a page, to get optimum utilization of main memory and to avoid external
fragmentation.
Address Translation
A page address is called a logical address and is represented by a page number and an offset:
Logical Address = (Page number, Page offset)
A frame address is called a physical address and is represented by a frame number and an offset:
Physical Address = (Frame number, Page offset)
A data structure called the page map table is used to keep track of the relation between a page of a process and a frame
in physical memory.
When the system allocates a frame to a page, it translates the logical address into a physical address and creates an
entry in the page table, to be used throughout the execution of the program.
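The translation can be sketched in a few lines. The 4 KB page size and the page-table contents below are hypothetical values chosen for the example:

```python
# Minimal sketch of the page-table translation described above.
# Hypothetical values: 4 KB pages and an illustrative page map table.
PAGE_SIZE = 4096

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)  # split into (page, offset)
    frame = page_table[page]                        # look up the frame
    return frame * PAGE_SIZE + offset               # physical address

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 8196
```

The offset is carried over unchanged; only the page number is mapped to a frame number, which is why pages and frames must be the same size.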
When a process is to be executed, its corresponding pages are loaded into any available memory frames. Suppose
you have a program of 8 KB but your memory can accommodate only 5 KB at a given point in time; then the
paging concept comes into the picture. When a computer runs out of RAM, the operating system (OS) moves
idle or unwanted pages of memory to secondary storage to free up RAM for other processes, and brings them back
when they are needed by the program.
This process continues throughout the execution of the program: the OS keeps removing idle pages from
main memory, writing them to secondary storage, and bringing them back when required by the program.
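The bring-in/evict cycle described above is what a page replacement policy decides. A minimal sketch, assuming a FIFO replacement policy (the simplest one; others appear later in the syllabus) and a classic reference string:

```python
# Minimal sketch of demand paging with FIFO page replacement: a page
# fault brings the page in from disk, and when all frames are full the
# oldest resident page is evicted.
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames, faults = deque(), 0
    for page in reference_string:
        if page not in frames:             # page fault: page not in RAM
            faults += 1
            if len(frames) == num_frames:  # all frames occupied
                frames.popleft()           # evict the oldest page
            frames.append(page)            # bring the page in from disk
    return faults

# Classic reference string with 3 frames.
print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
```

Running the same reference string with 4 frames gives 10 faults, i.e. more faults with more frames, which is the well-known Belady's anomaly of FIFO replacement.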
Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging:
Paging reduces external fragmentation, but still suffers from internal fragmentation.
Paging is simple to implement and is regarded as an efficient memory management technique.
Because pages and frames are of equal size, swapping becomes very easy.
The page table requires extra memory space, so paging may not be suitable for a system with a small RAM.
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of
different sizes, one for each module, containing the pieces that perform related functions. Each segment is actually
a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into non-contiguous memory, though
every segment is loaded into a contiguous block of available memory.
Segmentation memory management works very much like paging, but the segments are of variable length, whereas
in paging the pages are of fixed size.
A program segment might contain the program's main function, utility functions, data structures, and so on. The operating
system maintains a segment map table for every process, and a list of free memory blocks along with segment
numbers, their sizes and their corresponding memory locations in main memory. For each segment, the table stores the
starting address of the segment and the length of the segment. A reference to a memory location includes a value
that identifies a segment and an offset.
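A segment-table lookup can be sketched as follows. The base/limit values below are illustrative, not taken from the text:

```python
# Sketch of the segment map table lookup described above.
# Hypothetical table: segment number -> (base address, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """A memory reference is (segment, offset); map it to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:            # offset must fall within the segment length
        raise MemoryError("segment overflow: trap to OS")
    return base + offset

print(translate(2, 53))   # base 4300 + offset 53 -> 4353
```

Unlike paging, the limit check is essential here, because segments have different lengths.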
A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms. There are six popular process scheduling algorithms, discussed in this chapter with a common worked
example of four processes P0-P3.

FCFS scheduling: wait time of each process (Wait = Service time - Arrival time):
Process   Wait Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

SJF scheduling: arrival, execution and service times:
Process   Arrival   Execution   Service
P0        0         5           0
P1        1         3           5
P2        2         8           14
P3        3         6           8

Wait time of each process:
Process   Wait Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Priority-based scheduling: arrival, execution, priority and service times:
Process   Arrival   Execution   Priority   Service
P0        0         5           1          0
P1        1         3           2          11
P2        2         8           1          14
P3        3         6           3          5

Wait time of each process:
Process   Wait Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

Round-robin scheduling (quantum = 3): wait time of each process:
Process   Wait Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P3        (9 - 3) + (17 - 12) = 11
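The FCFS wait times in the example above can be computed mechanically: each process waits from its arrival until the CPU finishes all earlier arrivals. A short sketch using the same arrival and execution times:

```python
# Sketch of the FCFS wait-time calculation from the worked example:
# (name, arrival time, execution/burst time) for processes P0-P3.
processes = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

def fcfs_wait_times(procs):
    """Wait time = service time - arrival time, serving in arrival order."""
    clock, waits = 0, {}
    for name, arrival, burst in procs:
        clock = max(clock, arrival)    # CPU may sit idle until the arrival
        waits[name] = clock - arrival  # time spent waiting in the ready queue
        clock += burst                 # run the process to completion
    return waits

print(fcfs_wait_times(processes))  # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
```

The output reproduces the FCFS wait-time column: 0, 4, 6 and 13.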
UNIT IV
Inter-process communication and synchronization:
Mutual Exclusion
Semaphore
Busy-wait Implementation
characteristics of semaphore
queuing implementation of semaphore
producer consumer problem
Critical region and conditional critical area.
Deadlock
A deadlock happens in an operating system when two or more processes each need a resource, held by another
process, to complete their execution.
In the diagram above, Process 1 holds Resource 1 and needs to acquire Resource 2. Similarly, Process 2 holds Resource 2
and needs to acquire Resource 1. Process 1 and Process 2 are in deadlock, as each needs the other's resource to
complete its execution, but neither is willing to relinquish its resource.
Coffman Conditions
A deadlock occurs if the four Coffman conditions hold true simultaneously. These conditions are not mutually exclusive.
The Coffman conditions are as follows:
Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the diagram below, there is a
single instance of Resource 1 and it is held by Process 1 only.
Hold and Wait
A process holds at least one resource while simultaneously waiting to acquire additional resources that are
held by other processes.
No Preemption
A resource cannot be forcibly preempted from a process; a process can only release a resource voluntarily.
In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when
Process 1 relinquishes it voluntarily after its execution is complete.
Circular Wait
A process is waiting for the resource held by a second process, which is waiting for the resource held by a
third process, and so on, until the last process is waiting for a resource held by the first process. This forms a
circular chain. For example, Process 1 is allocated Resource 2 and requests Resource 1; similarly, Process 2 is
allocated Resource 1 and requests Resource 2. This forms a circular wait loop.
Deadlock Detection
A deadlock can be detected by a resource scheduler, since it keeps track of all the resources allocated to the different
processes. After a deadlock is detected, it can be resolved using the following methods:
All the processes involved in the deadlock are terminated. This is not a good approach, as all the
progress made by those processes is destroyed.
Resources are preempted from some processes and given to others until the deadlock is resolved.
Deadlock Prevention
It is very important to prevent a deadlock before it can occur. The system checks each resource request before it is
granted, to make sure it cannot lead to deadlock: if there is even a slight chance that granting the request may lead to
deadlock in the future, the request is not allowed.
Deadlock Avoidance
It is better to avoid a deadlock than to take measures after the deadlock has occurred. The wait-for graph can be
used for this purpose. This is, however, only practical for smaller systems, as the graph can get quite complex in
larger systems.
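The wait-for graph approach mentioned above amounts to a cycle search: each edge says "process X is waiting for a resource held by process Y", and a cycle is exactly the circular-wait condition. A minimal sketch:

```python
# Sketch of deadlock detection via a cycle search in a wait-for graph.
# Edges mean "this process waits for a resource held by that process".
def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle (a deadlock)."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:                    # back edge -> cycle found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# The two-process example from the text: P1 waits for P2, P2 waits for P1.
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False
```

The depth-first search runs in time linear in the number of processes and wait edges, which is why it is practical for small systems.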
UNIT V
File Systems:
Files - basic concept
File attributes, operations
File types, file structure, access methods
Directory structure - single-level directory system
Directory system
File
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks,
magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is
defined by the file's creator and user.
File Structure
A file structure should be in a format that the operating system can understand.
A file has a certain defined structure according to its type.
A text file is a sequence of characters organized into lines.
A source file is a sequence of procedures and functions.
An object file is a sequence of bytes organized into blocks that are understandable by the machine.
When an operating system defines different file structures, it must also contain the code to support those file
structures. UNIX and MS-DOS support a minimal number of file structures.
File Type
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source
files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS
and UNIX have the following types of files:
Ordinary files
These files contain user information: text, databases or executable programs.
Directory files
These files contain the list of file names and other information related to those files.
Special files
These files, also known as device files, represent physical devices such as disks, terminals and printers.
Sequential access
Direct/Random access
Indexed sequential access
Sequential access
Sequential access is access in which the records are accessed in sequence, i.e. the information in the file is
processed in order, one record after the other. This is the most primitive access method. Example: compilers
usually access files in this fashion.
Direct/Random access
Random access file organization provides direct access to the records.
Each record has its own address in the file, with the help of which it can be directly accessed for reading
or writing.
The records need not be in any sequence within the file, and they need not be in adjacent locations on the
storage medium.
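Direct access by record address can be sketched with fixed-size records and a file seek. The file name and 16-byte record size are hypothetical values for the example:

```python
# Sketch of direct/random access: jump to a record by its address (offset)
# without reading the earlier records. Hypothetical 16-byte records.
RECORD_SIZE = 16

with open("records.bin", "wb") as f:          # create 4 sample records
    for i in range(4):
        f.write(f"record-{i}".encode().ljust(RECORD_SIZE))

def read_record(path, n):
    """Read record n directly, using its address n * RECORD_SIZE."""
    with open(path, "rb") as f:
        f.seek(n * RECORD_SIZE)               # direct access by address
        return f.read(RECORD_SIZE).rstrip()

print(read_record("records.bin", 2))   # b'record-2'
```

Because every record has the same size, the address of record n is simply n times the record size; this is what makes the access "direct".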
Indexed sequential access
This method is built on top of sequential access: an index is created for the file, containing pointers to its
blocks. The index is searched first, and its pointer is then used to access the file record directly.
Contiguous Allocation
Linked Allocation
Indexed Allocation
Contiguous Allocation
In contiguous allocation, each file occupies a contiguous set of blocks on the disk, so only the starting block
address and the length of the file need to be stored.
Authentication
One Time passwords
Program Threats
System Threats
Computer Security Classifications
Authentication
Authentication refers to identifying each user of the system and associating the executing programs with those users.
It is the responsibility of the operating system to create a protection system which ensures that a user who is
running a particular program is authentic. Operating systems generally identify/authenticate users in the following
three ways:
Username / Password - The user needs to enter a registered username and password with the operating system
to log in to the system.
User card/key - The user needs to punch a card into a card slot, or enter a key generated by a key generator, in an
option provided by the operating system, to log in to the system.
User attribute (fingerprint / eye retina pattern / signature) - The user needs to pass his/her attribute via a
designated input device used by the operating system to log in to the system.
One-Time Passwords
One-time passwords provide additional security on top of normal authentication. In a one-time password system, a
unique password is required every time a user tries to log in to the system. Once a one-time password has been used, it
cannot be used again. One-time passwords are implemented in various ways:
Random numbers - Users are provided cards with numbers printed alongside corresponding alphabets. The
system asks for the numbers corresponding to a few randomly chosen alphabets.
Secret key - Users are provided a hardware device which can create a secret id mapped to the user id. The system
asks for this secret id, which is to be generated anew every time prior to login.
Network password - Some commercial applications send a one-time password to the user's registered mobile/
email, which must be entered prior to login.
Program Threats
An operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these
processes perform malicious tasks, this is known as a program threat. One common example of a program threat is
a program installed on a computer which can store and send user credentials over the network to some hacker. Following
is a list of some well-known program threats.
Trojan Horse - Such a program traps user login credentials and stores them, to send to a malicious user who
can later log in to the computer and access system resources.
Trap Door - If a program which is designed to work as required has a security hole in its code and
performs illegal actions without the knowledge of the user, it is said to have a trap door.
Logic Bomb - A logic bomb is a program that misbehaves only when certain conditions are met;
otherwise it works as a genuine program. This makes it harder to detect.
Virus - A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous and
can modify or delete user files and crash systems. A virus is generally a small piece of code embedded in a program. As
the user accesses the program, the virus gets embedded in other files and programs and can make the system
unusable for the user.
System Threats
System threats refer to the misuse of system services and network connections to put the user in trouble. System threats
can be used to launch program threats across a complete network; this is called a program attack. System threats create
an environment in which operating system resources and user files are misused. Following is a list of some well-known
system threats.
Worm - A worm is a process which can choke a system's performance by using system resources to
extreme levels. A worm process generates multiple copies of itself, where each copy uses system resources,
preventing all other processes from getting the resources they require. Worm processes can even shut down an entire
network.
Port Scanning - Port scanning is a mechanism by which a hacker can detect system
vulnerabilities in order to attack the system.
Denial of Service - Denial of service attacks normally prevent users from making legitimate use of the system.
For example, a user may not be able to use the internet if a denial of service attack targets the browser's content settings.
UNIT VI
Input/output System:
Principles of I/O hardware
I/O devices, device controller
DMA, principles of I/O software - goals, interrupt handler
Device driver
Mass storage structure - disk structure
Disk scheduling
An I/O system is required to take an application I/O request and send it to the physical device, then take whatever
response comes back from the device and send it to the application. I/O devices can be divided into two categories −
Block devices - A block device is one with which the driver communicates by sending entire blocks
of data. Examples: hard disks, USB cameras, Disk-On-Key, etc.
Character devices - A character device is one with which the driver communicates by sending and
receiving single characters (bytes, octets). Examples: serial ports, parallel ports, sound cards, etc.
Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular device. Operating System
takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O units (Keyboard, mouse,
printer, etc.) typically consist of a mechanical component and an electronic component where electronic component
is called the device controller.
There is always a device controller and a device driver for each device to communicate with the operating system.
A device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit
stream to a block of bytes and perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is connected to a device
controller. Following is a model for connecting the CPU, memory, controllers, and I/O devices where CPU and
device controllers all use a common bus for communication.
Synchronous vs asynchronous I/O
Synchronous I/O − In this scheme CPU execution waits while I/O proceeds
Asynchronous I/O − I/O proceeds concurrently with CPU execution
Communication to I/O Devices
The CPU must have a way to pass information to and from an I/O device. There are three approaches
available for communication between the CPU and a device: special-instruction I/O, memory-mapped I/O, and
direct memory access (DMA).
Step   Description
5      The DMA controller transfers bytes to the buffer, increases the memory address and decreases the
       counter C until C becomes zero.
6      When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.
Disk Response Time: response time is the time a request spends waiting to perform its I/O
operation. Average response time is the mean response time of all requests. Variance of response time is a measure
of how individual requests are serviced with respect to the average response time. So the disk scheduling algorithm
that gives the minimum variance of response time is better.
1. FCFS: FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue. Let us understand this with the help of an
example.
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and
the current position of the read/write head is 50.
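For FCFS the total head movement is just the sum of the absolute distances between consecutive positions. A sketch for the example above:

```python
# Sketch of the FCFS disk-scheduling example: the head services requests
# in arrival order, so total movement is the sum of absolute distances.
def fcfs_seek(requests, head):
    total = 0
    for track in requests:
        total += abs(track - head)   # move the head to the next request
        head = track
    return total

print(fcfs_seek([82, 170, 43, 140, 24, 16, 190], 50))   # 642
```

For this request sequence the total movement is 32 + 88 + 127 + 97 + 116 + 8 + 174 = 642 tracks.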
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the read/write arm is at 50, and it is
also given that the disk arm should move "towards the larger value".
Total seek movements = (199 - 50) + (199 - 16)
= 332
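The calculation above can be generalized: with the head moving towards larger values on a disk with tracks 0 to 199, the arm sweeps to the last track and then reverses down to the lowest pending request. A sketch (the 0-199 track range is an assumption implied by the figure of 199):

```python
# Sketch of the seek calculation above: sweep from the head up to the
# last track (max_track), then back down to the smallest pending request.
def scan_seek(requests, head, max_track=199):
    lower = [r for r in requests if r < head]
    total = max_track - head                # sweep up to the end of the disk
    if lower:
        total += max_track - min(lower)     # reverse down to the lowest request
    return total

print(scan_seek([82, 170, 43, 140, 24, 16, 190], 50))  # (199-50)+(199-16) = 332
```

This reproduces the (199 - 50) + (199 - 16) = 332 figure given in the text.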
Advantages:
High throughput
Low variance of response time
Average response time
Disadvantages:
Long waiting time for requests for locations just visited by disk arm
4. C-SCAN: In the SCAN algorithm, the disk arm re-scans the path it has already scanned after reversing its
direction. So it may happen that too many requests are waiting at the other end, or that there are zero or few
requests pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing its direction, goes to
the other end of the disk and starts servicing the requests from there. The disk arm therefore moves in a circular fashion;
the algorithm is otherwise similar to SCAN, and hence it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at 50, and it is also
given that the disk arm should move “towards the larger value”.
Operating systems exist for two main purposes. One is to make sure a computer system performs
well by managing its computational activities. The other is to provide an environment for the development and
execution of programs.
Demand paging refers to the situation where not all of a process's pages are in RAM; the OS brings the
missing (and required) pages from the disk into RAM when they are referenced.
With an increased number of processors, there is a considerable increase in throughput. It can also save more
money because they can share resources. Finally, overall reliability is increased as well.
4) What is kernel?
A kernel is the core of every operating system. It connects applications to the actual processing of data. It also
manages all communications between software and hardware components to ensure usability and reliability.
Real-time systems are used when rigid time requirements have been placed on the operation of a processor. It has
well defined and fixed time constraints.
Virtual memory is a memory management technique for letting processes execute outside of main memory. This is very
useful especially when an executing program cannot fit in physical memory.
The main objective of multiprogramming is to have a process running at all times. With this design, CPU
utilization is said to be maximized.
In a Time-sharing system, the CPU executes multiple jobs by switching among them, also known as
multitasking. This process happens so fast that users can interact with each program while it is running.
9) What is SMP?
SMP is a short form of Symmetric Multi-Processing. It is the most common type of multiple-processor systems.
In this system, each processor runs an identical copy of the operating system, and these copies communicate
with one another as needed.
10) How are server systems classified?
Server systems can be classified as either computer-server systems or file server systems. In the first case, an
interface is made available for clients to send requests to perform an action. In the second case, provisions are
available for clients to create, access and update files.
In asymmetric clustering, a machine is in a state known as hot standby mode, where it does nothing but monitor
the active server. That machine takes the active server's role should the server fail.
A thread is a basic unit of CPU utilization. In general, a thread is composed of a thread ID, program counter,
register set, and the stack.
FCFS stands for First-come, first-served. It is one type of scheduling algorithm. In this scheme, the process that
requests the CPU first is allocated the CPU first. Implementation is managed by a FIFO queue.
The RR (round-robin) scheduling algorithm is aimed primarily at time-sharing systems. A circular queue is set up in
such a way that the CPU scheduler goes around the queue, allocating the CPU to each process for a time interval of
up to around 10 to 100 milliseconds.
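Round-robin's circular queue can be sketched with a FIFO deque: each process runs for at most one quantum and, if unfinished, rejoins the back of the queue. The four burst times and the quantum of 3 below are illustrative values (all processes assumed to arrive at time 0):

```python
# Sketch of round-robin scheduling: a circular (FIFO) queue of processes,
# each run for at most one time quantum before rejoining the queue.
# Hypothetical workload: four processes, all arriving at time 0.
from collections import deque

def rr_wait_times(bursts, quantum):
    queue = deque((name, burst, burst) for name, burst in bursts.items())
    clock, waits = 0, {}
    while queue:
        name, burst, remaining = queue.popleft()
        run = min(quantum, remaining)     # run for one quantum at most
        clock += run
        if remaining > run:
            queue.append((name, burst, remaining - run))  # back of the queue
        else:
            waits[name] = clock - burst   # wait = completion - burst (arrival 0)
    return waits

print(rr_wait_times({"P0": 5, "P1": 3, "P2": 8, "P3": 6}, quantum=3))
# {'P1': 3, 'P0': 9, 'P3': 14, 'P2': 14}
```

Note how the shortest process (P1) finishes within one quantum, while the longest (P2) cycles through the queue several times.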
16) What are necessary conditions which can lead to a deadlock situation in a system?
Deadlock situations occur when four conditions occur simultaneously in a system: Mutual exclusion; Hold and
Wait; No preemption; and Circular wait.
17) What factors determine whether a detection-algorithm must be utilized in a deadlock avoidance system?
One is that it depends on how often a deadlock is likely to occur under the implementation of this algorithm. The
other has to do with how many processes will be affected by deadlock when this algorithm is applied.
18) State the main difference between logical and physical address space.
Logical address refers to the address that is generated by the CPU. On the other hand, physical address refers to
the address that is seen by the memory unit.
19) How does dynamic loading aid in better memory space utilization?
With dynamic loading, a routine is not loaded until it is called. This method is especially useful when large amounts
of code are needed in order to handle infrequently occurring cases such as error routines.
Paging is a memory management scheme that permits the physical address space of a process to be
noncontiguous. It avoids the considerable problem of having to fit variable-sized memory chunks onto the backing
store.
FORMAT OFINTERNAL QUESTION PAPER
Course: Semester:
Subject: Course Code:
Max. Marks: 40 Max. Time: 2 Hours
Instructions (if any):- Use of calculator for subjects like Financial Mgt. Operation etc. allowed if required.
(Scientific calculator is not allowed).
Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from section 2.
Section 1
(Theoretical Concept and Practical/Application oriented)
Answer in 400 words. Each question carries 06 marks.
Q. 1
Q. 2
Q.3
Q. 4
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a)
b)
c)
Section 2
(Analytical Question / Case Study / Essay Type Question to test analytical and Comprehensive Skills)
Instructions (if any):- Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain static and dynamic relocation.
Q. 2 What do you understand by input output interface?
Q.3 Explain various file access methods.
Q. 4 What do you understand by swapping?
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Internal Fragmentation
b) First fit and next fit
c) File types
Section 2
Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q. 6 What do you understand by deadlocks? Explain deadlock prevention, deadlock avoidance and
deadlock detection & recovery?
Q. 7 Explain interrupt driven I/O and DMA?
Q. 8 Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving outward. Calculate the total track movements and the time
required to service all these tracks (consider seek time = 0.15 ms) in the case of:
Shortest seek time first
1st Internal Examination (2019)
Instructions (if any):- Use of calculator for subjects like Financial Mgt. Operation etc. allowed if required. (Scientific
calculator is not allowed).
Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain operating system services for process management.
Q. 2 What are different states of a process?
Q.3 Explain SJF and Multilevel Scheduling algorithms?
Q. 4 Explain Different types of operating systems.
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Paging
b) ABORT System Call
c) Network OS
Section 2
Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q6. Explain different types of schedulers with the help of a diagram.
Q7. Explain virtual memory, demand paging, page replacement and page replacement algorithms.
Q8. What is a PCB? Explain what type of information is stored in the PCB.
1st Internal Examination (February, 2020)
(2014 Course)
Course: BCA Semester: III
Subject: Operating Systems Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours
Instructions (if any):- Use of calculator for subjects like Financial Mgt. Operation etc. allowed if required.
(Scientific calculator is not allowed).
Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain Multitasking and multiprocessing operating systems.
Q. 2 What are different states of a process?
Q.3 Explain SRTN and FCFS scheduling algorithms?
Q. 4 Explain Functions of operating systems.
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Suspend System Call
b) Resume System Call
c) Real time OS
Section 2
Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q6. Explain different types of schedulers with the help of a diagram.
Q7. Explain different views of operating systems.
Q8. Explain the concept of a process, process relationship and implicit and explicit tasking.
Bharati Vidyapeeth (Deemed to be University) Institute of
Management and Research (BVIMR), New Delhi
1st Internal Examination (September, 2018)
Course: BCA Semester: III
Subject: Operating System Concepts Course Code: 301
Max. Marks: 40 Max. Time: 2 Hours
Q.4 Attempt any one. Answer in 600 words (Analytical Question / Case Study / Essay Type Question to test
analytical and Comprehensive Skills) [1x10]
a) Explain different types of schedulers with the help of diagram? Also explain round robin, SRTN
and MLQ scheduling with the help of diagram.
b) Write short notes on any two of the following :
i) Multiprocessing
ii) Mutual Exclusion
iii) Scheduling and performance criteria
Bharati Vidyapeeth (Deemed to be University) Institute of
Management and Research (BVIMR), New Delhi
2nd Internal Examination (October 2018)
Instructions (if any):- Use of calculator for accounting and mathematics, if required. Give examples and
diagrammatic representations wherever possible.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1. What do you understand by contiguous and non-contiguous memory allocation? Explain with the help
of diagrams and tables.
Q. 2. What do you understand by I/O systems and the I/O interface?
Q.3. What are reusable and consumable resources? What are the various conditions for deadlocks to occur?
Q. 4. Explain any three methods of free space management on disk.
Q.5. Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) File system structure.
b) Deadlocks
c) Paging
Section 2
Instructions (if any):- Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain static and dynamic relocation.
Q. 2 What do you understand by input output interface?
Q.3 Explain various file access methods.
Q. 4 What do you understand by swapping?
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) Internal Fragmentation
b) First fit and next fit
c) File types
Section 2
Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q. 6 What do you understand by deadlocks? Explain deadlock prevention, deadlock avoidance and
deadlock detection & recovery?
Q. 7 Explain interrupt driven I/O and DMA?
Q. 8 Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving outward. Calculate the total track movements and the time
required to service all these tracks (consider seek time = 0.15 ms) in the case of:
Shortest seek time first
2nd Internal Examination (March 2020)
2014 Course
Instructions (if any):- Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from section 2.
Section 1
Answer in 400 words. Each question carries 06 marks.
Q. 1 Explain static and dynamic relocation.
Q. 2 Write an algorithm to solve producer consumer problem.
Q.3 Explain various file access methods.
Q. 4 What do you understand by Semaphore?
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) External Fragmentation
b) Best fit and Worst fit
c) File attributes
Section 2
Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q. 6 What do you understand by directory? Explain different directory structures with the help of
diagram.
Q. 7 Explain interrupt driven I/O and DMA?
Q. 8 Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving outward. Calculate the total track movements and the time
required to service all these tracks (consider seek time = 0.15 ms) in the case of:
First come, first served
2nd Internal Examination (2019)
Instructions (if any):- Use of unfair means will lead to cancellation of paper followed by disciplinary action.
Question No. 1 is compulsory. Attempt any two questions from Q2 to Q5.
Attempt any two questions from section 2.
Section 1
(Theoretical Concept and Practical/Application oriented)
Answer in 400 words. Each question carries 06 marks.
Q. 1 What do you understand by static and dynamic memory allocation?
Q. 2 What are file attributes and file operations?
Q.3 What do you understand by input output interface?
Q. 4 What is segmentation? Explain with the help of diagram.
Q.5 Write a short note on any two. Answer in 300 words. Each carries 03 marks.
a) External Fragmentation
b) Single level and two level directory
c) Best Fit and Worst Fit
Section 2
Answer in 800 words. Attempt any 2 questions. Each question carries 11 marks.
Q6. Suppose that a disk drive has 50 tracks. The system references the tracks in the following sequence:
25, 37, 15, 9, 24, 37, 39, 47, 13, 25, 15
Currently the head is on track number 20 and moving outward. Calculate the total track movements and the time
required to service all these tracks (consider seek time = 0.15 ms) in the case of:
First come first served
Q7. Explain Programmed I/O and interrupt driven I/O?
Q8. What do you understand by deadlocks? Explain deadlock prevention, deadlock avoidance and deadlock detection
& recovery?
Declaration by Faculty
We, Daljeet Singh Bawa and Nisha Malhotra, Designation: Visiting Faculty, teaching the Operating System subject in
BCA (Morning), III semester, have incorporated all the necessary pages, sections, quotations and papers mentioned in
the checklist above.