CS3451-Introduction To Operating System (IV SEM)

The document provides an introduction to operating systems, covering their objectives, functions, and structures. It discusses key concepts such as system calls, interrupts, and various types of operating systems including batch, time-sharing, and multiprocessor systems. Additionally, it outlines the differences between system calls and system programs, as well as the importance of dual mode operation for system protection.

CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

UNIT I INTRODUCTION
Computer System - Elements and organization; Operating System Overview - Objectives and
Functions - Evolution of Operating System; Operating System Structures – Operating System
Services - User Operating System Interface - System Calls – System Programs - Design and
Implementation - Structuring methods.
PART A
1. What is an Operating System? (April/May-2023)(Nov/Dec-2024)
 An operating system is a program that manages the computer hardware. It also
provides a basis for application programs and acts as an intermediary between a
user of a computer and the computer hardware.
 It controls and coordinates the use of the hardware among the various application
programs for the various users.
2. Mention the purpose of system calls.(April/May-2024)
 System calls allow user-level processes to request services of the operating system.
3. Can traps be generated intentionally by a user program? If so, for what purpose?
 A trap is a software‐generated interrupt. An interrupt can be used to signal the
completion of an I/O to obviate the need for device polling.
 A trap can be used to call operating system routines or to catch arithmetic errors.
4. Mention the objectives of an operating system.
 Convenience – makes computer user friendly.
 Efficiency - allows computer to use resources efficiently.
 Ability to evolve - constructed in a way to permit effective development, testing and
introduction of new functions without interfering with service.
5. What is SYSGEN and system boot?
 The SysGen process runs as a series of jobs under the control of the operating
system.
 The process of bringing up the operating system is called booting.
6. Consider a memory system with a cache access time of 100 ns and a main memory
access time of 500 ns (the memory access time does not include the time to check the
cache). If the effective access time is 10% greater than the cache access time, what is
the hit ratio H?
Cache access time Tc = 100 ns
Memory access time Tm = 500 ns

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 1


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

Effective access time = hit ratio × cache access time
                      + miss ratio × (cache access time + main memory access time)

Effective access time = 110% of the cache access time = 110 ns

Let the cache hit ratio be h:

110 = 100h + (1 − h)(100 + 500)
110 = 100h + 600 − 600h
500h = 490
h = 490/500 = 0.98 = 98%
7. How does an interrupt differ from a trap? (April/May-2024)
 An interrupt is a hardware‐generated change‐of‐flow within the system. An interrupt
handler is summoned to deal with the cause of the interrupt; control is then
returned to the interrupted context and instruction.
 A trap is a software‐generated interrupt. An interrupt can be used to signal the
completion of an I/O to obviate the need for device polling. A trap can be used to
call operating system routines or to catch arithmetic errors.
8. What are the advantages and disadvantages of multiprocessor systems?

Advantages of multiprocessor systems

 Increased Throughput

 Economy of Scale

 Increased Reliability

Disadvantage of multiprocessor systems

 If one processor fails, the overall processing speed of the system is reduced.

9. What are the advantages of peer-peer system and client server system?
 A peer-to-peer network is easy to install.
 All resources and contents are shared directly by the peers without a server's help,
whereas in a client-server network the server holds and shares all the contents and
resources.
 A peer-to-peer network is more reliable, as central dependency is eliminated; failure
of one peer does not affect the functioning of the other peers. In a client-server
network, if the server goes down, the whole network is affected.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 2


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

10. What is the purpose of system programs?


 System programs can be thought of as bundles of useful system calls. They provide
basic functionality to users so that users do not need to write their own programs to
solve common problems.
 The system program serves as a part of the operating system.
 It lies between the user interface and the system calls.

11. Does time sharing differ from multiprogramming? If so, how?


Time sharing: Here, the OS assigns a time slot to each job, and each job is executed
according to its allotted time slot; the switches occur so frequently that users can
interact with each program while it runs.

Job1: 0 to 5

Job2: 5 to 10

Job3: 10 to 15

Multiprogramming: Several jobs are kept in main memory at the same time, and the
CPU switches to another job whenever the current job must wait (for example, for
I/O). The goal is to keep the CPU busy rather than to give each user fast response.

12. Why API’s need to be used rather than system calls?


 API is easier to migrate your software to different platforms.

 API usually provides more useful functionality than the system call.

 API can support multiple versions of the operating system.

13. Write the difference between batch systems and time-sharing systems.

Batch operating systems:

 A batch is a sequence of jobs.

 The batch is submitted to the batch-processing operating system, and the output
appears some time later, in the form of program results or program errors.

 To speed up processing, similar jobs are batched together.

 The major task of a batch operating system is to transfer control automatically from
one job to the next.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 3


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

Time sharing:

 Time sharing (or multitasking) is a logical extension of multiprogramming.


Multiple jobs are executed by the CPU switching between them. A time-
shared operating system allows many users to share the computer
simultaneously.

14. Define operating system and list its objectives.

 An operating system is a program that manages a computer’s hardware.

 An operating system is a program that controls the execution of application


programs and act as an interface between applications and the computer hardware.

Objectives:

 Convenience

 Efficiency

 Ability to evolve

15. What is the main purpose of an operating system?

 Ease of use -It provides the environment for executing the programs.

 Efficient use of computer - The primary goal of an OS is the efficient use of computer
systems which is otherwise called as resource utilization.

16. Why is the Operating System viewed as a resource allocator & control program?
(Nov/Dec-2023)

 A computer system has many resources that may be required to solve a problem.

 The operating system acts as the manager of these resources.

 The OS is viewed as a control program because it manages the execution of user


programs to prevent errors and improper use of the computer.

17. List the basic elements of a computer system.

 Processor

 Main memory

 I/O modules
 System bus

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 4


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

18. Define SMP.

 SMP (symmetric multiprocessing) is the processing of programs by multiple


processors that share a common operating system and memory.

 There are two or more similar processors of comparable capability.

 All processors can perform the same functions.

19. List the advantages of SMP.

 Performance

 Availability

 Incremental growth

 Scaling

20. Define Multicore computers.

 A multicore computer, also known as a chip multiprocessor, combines two or more


processors on a single piece of silicon.

 Each processor consists of all of the components of an independent processor, such as


registers, ALU, pipeline hardware and control unit, and an instruction and data
caches.

21. What is serial processing and its problems?

 In serial processing, the processor executes only one process at a time (it requests,
gets, and fetches it); therefore this kind of processing is slow.
 In parallel processing, the CPU executes more than one process at a time, so it is
faster and consumes less time.

Problems in serial processing:

 Scheduling

 Setup time

22. Write a note on the working of simple batch systems.

• Users submit jobs to operator

• Operator batches jobs

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 5


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

• Monitor controls sequence of events to process batch

• When one job is finished, control returns to Monitor which reads next job

• Monitor handles scheduling.

23. Define Time sharing systems.

 The processor time is shared among multiple users.

 In a time-sharing system, multiple users simultaneously access the system through


terminals, with the OS interleaving the execution of each user program in a quantum
time.

24. Difference between the Batch system and time-sharing system.

25. Define Single Processor system.

 On a single-processor system, there is one main CPU capable of executing a general-
purpose instruction set, including instructions from user processes.

 Such systems may also have special-purpose processors, in the form of device-specific
processors such as disk, keyboard, and graphics controllers.

26. What are Multiprocessor Systems?

 Multiprocessor systems (also known as parallel systems or multicore systems) have two
or more processors in close communication, sharing the computer bus and sometimes
the clock, memory, and peripheral devices.

 Recently, multiple processors have appeared on mobile devices such as smart phones
and tablet computers.

27. What is graceful degradation and fault tolerance?

 The ability to continue providing service proportional to the level of surviving hardware
is called graceful degradation.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 6


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

 Fault tolerance requires mechanism to allow the failure to be detected, diagnosed and if
possible, corrected.

28. Distinguish between symmetric and asymmetric multiprocessor systems.

Symmetric multiprocessor systems:
 Each processor runs an identical copy of the operating system, and these copies
communicate with one another as needed.
 All processors are peers; no master-slave relationship exists between processors.
 Example: Encore's version of UNIX for the Multimax computer.

Asymmetric multiprocessor systems:
 Each processor is assigned a specific task; slave processors look to a master
processor for predefined instructions.
 It defines a master-slave relationship.
 The master processor schedules and allocates work to the slave processors.
 Example: SunOS Version 4.

29. What is trap?

 A trap (or an exception) is a software-generated interrupt caused either by an error


or by a specific request from a user program that an operating-system service be
performed.

30. What is dual-mode operation, and what are its two modes?

 Dual-mode operation provides the means for protecting the operating system from
errant users, and errant users from one another.

Transition from user to kernel mode

 Two separate modes of operation:

1. User mode

2. Kernel mode (also called supervisor mode, system mode, or privileged mode)

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 7


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

31. Define System call. Give any two system calls with their purpose.( Nov/Dec-2023)

 System calls provide an interface to the services made available by an operating


system.

1. Process control :

 These system calls are used to create, manage, and control processes. Examples
include fork(), exec(), wait(), kill(), and getpid().

2. Device manipulation :

 These system calls are used to manage and manipulate I/O devices such as
printers, keyboards, and disk drives.

32. What are the various types of system calls?

 Process control

 File manipulation

 Device manipulation

 Information maintenance

 Communications

 Protection.

33. What are the various types of communication models?

 In the message-passing model, the communicating processes exchange messages


with one another to transfer information.

 In the shared-memory model, processes use shared-memory create and shared-memory
attach system calls to create and gain access to regions of memory owned by other
processes.

34. Differentiate tightly coupled systems and loosely coupled systems.

Loosely coupled systems:
 Each processor has its own memory.
 Each processor communicates with all the others through communication lines.

Tightly coupled systems:
 Common memory is shared by many processors.
 No special communication lines are needed.

35. What is system calls and system programs?

 System calls are the functions invoked at the interface between a user program and
the kernel.

 A system program, by contrast, is a program that performs system-level work, such as
changing system settings.

36. Write about Multicore and Multiprocessor.

 The main difference between multicore and multiprocessor is that the multicore refers
to a single CPU with multiple execution units while the multiprocessor refers to a
system that has two or more CPUs.

 Multicores have multiple cores or processing units in a single CPU. A multiprocessor


contains multiple CPUs.

37. What is dual-mode operation and why is it needed?

 The dual mode of operation provides us with the means for protecting the OS from
errant users-and errant users from one another.

 If an attempt is made to execute a privileged instruction in user mode, the hardware


does not execute the instruction but rather treats it as illegal and traps it to the OS.

38. What is the Kernel?

 A more common definition is that the OS is the one program running at all times on
the computer, usually called the kernel, with all else being application programs.

39. What is meant by Mainframe Systems?

 Mainframe systems were the first computers developed to tackle many commercial and
scientific applications.

 These systems evolved from batch systems to multiprogramming systems and finally
to time-sharing systems.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 9


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

40. What is meant by Batch Systems?

 Operators batched together jobs with similar needs and ran them through the computer
as a group.

 The operators would sort programs into batches with similar requirements and, as the
system became available, run each batch.

41. List the various services provided by the OS. (April/May-2023)(Nov/Dec-2024)
 Program development and Execution
 Access to I/O Devices
 Controlled access to files
 Resource allocation
 Communications.
 Error detection and response, Accounting, Protection and security.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 10


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

PART B

1. Explain in detail about the Computer System Overview.

Key Points

1. Basic elements of a computer

2. Computer System Organization

3. Storage Structure

4. I/O Structure

1. Basic elements of a computer

 Processor: Controls the operation of the computer and performs its data processing
functions, as shown in Fig 1.1.

 Main memory: Stores data and programs. This memory is volatile. Main memory is also
called as real or primary memory.

 I/O modules: Move data between the computer and its external environment including
secondary memory devices (e.g. disks), communications equipment and terminals.

 System bus: Provides for communication among processors, main memory, and I/O
modules.

 Program Counter (PC): It holds the address of the next instruction to be executed.

 Instruction Register (IR): It specifies the recently fetched instruction or current


instruction.

 Memory Address Register (MAR): It specifies the address in memory for the next read or
write.

 Memory Buffer Register (MBR): It contains the data to be written into memory or which
receives the data from memory.

 I/O Address Register (I/O AR): It specifies a particular I/O device.

 I/O Buffer Register (I/O BR): It is used for the exchange of data between an I/O module
and the processor.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 11


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

 A memory module consists of a set of locations, each containing a bit pattern that can
be interpreted as either an instruction or data.

 An I/O module transfers data between external devices and the processor and memory.
It contains internal buffers for temporarily holding data until they can be sent on.

Fig.1.1 Computer Components

2. Computer System Organization

Computer System Operation

 As shown in Fig. 1.2, a modern general-purpose computer system consists of one or
more CPUs and a number of device controllers connected through a common bus that
provides access to shared memory.

 Each device controller is in charge of a specific type of device (for example, disk drives,
audio devices, or video displays).

 The CPU and the device controllers can execute in parallel, competing for memory
cycles. To ensure orderly access to the shared memory, a memory controller
synchronizes access to the memory.

 For a computer to start running—for instance, when it is powered up or rebooted—it


needs to have an initial program to run.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 12


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

 This initial program, or bootstrap program, tends to be simple. Typically, it is stored in


read-only memory (ROM) or electrically erasable programmable read-only memory
(EEPROM), known by the general term firmware, within the computer hardware.

 It initializes all aspects of the system, from CPU registers to device controllers to
memory contents.

 The occurrence of an event is usually signaled by an interrupt from either the hardware
or the software.

 Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually
by way of the system bus. Software may trigger an interrupt by executing a special
operation called a system call (also called a monitor call).

 When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location.

3. Storage Structure

 Basically, we want the programs and data to reside in main memory permanently.

 This arrangement is usually not possible for the following two reasons:

o Main memory is usually too small to store all needed programs and data
permanently.

o Main memory is a volatile storage device that loses its contents when power is
turned off or otherwise lost.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 13


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

There are two types of storage devices:

Volatile storage device – loses its contents when the power is removed.

Non-volatile storage device – retains its contents when the power is removed.

 Secondary Storage is used as an extension of main memory. Secondary storage devices


can hold the data permanently.

 Storage devices consist of registers, cache, main memory, electronic disk, magnetic
disk, optical disk, and magnetic tapes.

 Each storage system provides the basic system of storing a datum and of holding the
datum until it is retrieved at a later time.

 All the storage devices differ in speed, cost, size and volatility.

 The most common Secondary-storage device is a Magnetic-disk, which provides storage


for both programs and data.

Fig.1.3 Storage Device Hierarchy

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 14


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

Fig.1.3 shows that all the storage devices are arranged according to speed and cost.

 The higher levels are expensive, but they are fast. As we move down the hierarchy, the
cost per bit generally decreases, whereas the access time generally increases.

 The storage systems above the electronic disk are Volatile, where as those below are
Non-Volatile.

 An electronic disk can be designed to be either volatile or non-volatile.

 During normal operation, the electronic disk stores data in a large DRAM array, which
is Volatile.

 But many electronic disk devices contain a hidden magnetic hard disk and a battery for
backup power.

 If external power is interrupted, the electronic disk controller copies the data from RAM
to the magnetic disk. When external power is restored, the controller copies the data
back into the RAM.

 Caches can be installed to improve performance where a large access-time or transfer-


rate disparity exists between two components.

4. I/O Structure

 The I/O structure consists of programmed I/O, interrupt-driven I/O, and DMA; the
CPU, memory, and external devices are connected with the help of peripheral I/O
buses and general I/O buses.

 I/O Types

Fig.1.4. I/O Types

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 15


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

Programmed I/O

 Fig 1.4 shows the different types of I/O. In programmed I/O, when input is written,
the device must be ready to accept the data; otherwise, the program must wait until
the device or buffer is free to take the input.

 Once the input is taken, the program checks whether the output device or output
buffer is free, and only then is the data printed. This process is repeated for every
data transfer.

I/O Interrupts

 To initiate any I / O operation, the CPU first loads the registers to the device controller.
Then the device controller checks the contents of the registers to determine what
operation to perform.

 Once an I/O operation is started, there are two possible courses of action:

o Synchronous I/O − Control is returned to the user process after the
I/O operation is completed.

o Asynchronous I/O − Control is returned to the user process without
waiting for the I/O operation to finish. Here, the I/O operation and the
user process run simultaneously.

DMA Structure

 Direct Memory Access (DMA) is a method of handling I/O in which the device
controller communicates directly with memory, without CPU involvement.

 After setting up the resources of the I/O devices (buffers, pointers, and counters), the
device controller transfers blocks of data directly to storage without CPU intervention.

 DMA is generally used for high-speed I/O devices.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 16


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

2. Explain in detail about operating system objectives and functions. (April/May-2023


& Nov/Dec-2023)

 An OS is a program that controls the execution of application programs and acts as an


interface between applications and the computer hardware.

Objectives

 Convenience – makes a computer more convenient to use.


 Efficiency - allows computer to use resources efficiently.
 Ability to evolve - constructed in a way to permit effective development, testing
and introduction of new functions without interfering with service.

Functions

a) Operating System as a User/Computer Interface

b) Operating System as Resource Manager

c) Ease of Evolution of an Operating System

a) Operating System as a User/Computer Interface

 The hardware and software in a computer system can be viewed in a layered fashion as
shown in the figure.1.5.

Fig.1.5 Computer Software and Hardware Structure

• The hardware and software used in providing applications to a user can be viewed in a
layered or hierarchical fashion.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 17


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

• The user of those applications, the end user, generally is not concerned with the details
of computer hardware.

 Thus, the end-user views the computer system as a set of applications.

 Application users are not concerned with the details of computer hardware.

 The operating system masks the details of the hardware from the programmer and acts
as a mediator, making it easier to access the services.

b) Operating System as Resource Manager

• A computer is a set of resources for the movement, storage, and processing of data and
for the control of these functions. The OS is responsible for managing these resources.

• The OS functions in the same way as ordinary computer software. That is a program or
suite of programs executed by the processor.

• The OS frequently relinquishes control and must depend on the processor to allow it to
regain control.

• The OS directs the processor in the use of the other system resources and in the timing
of its execution of other programs.

• A portion of the OS is in main memory. This includes the kernel, or nucleus, which
contains the most frequently used functions in the OS.

• The remainder of main memory contains user programs and data, as shown in fig 1.6.
The memory management hardware in the processor and the OS jointly control the
allocation of main memory.

Fig.1.6. Operating System as a Resource Manager

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 18


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

3. Explain in detail about the evolution of the computer system.(April/May-2024)

Key Points

1. Serial Processing
2. Simple batch systems
3. Multiprogrammed Batch Systems
4. Time-Sharing Systems
5. Multiprocessor Systems
6. Distributed Systems
7. Client Server System
8. Clustered Systems
9. Real Time Systems
10. Hand Held Systems

1. Serial Processing:

• In serial processing, processor does only one process at a time (Requests, gets it, and
fetches it).

• Therefore, this kind of processing is slow; while in parallel processing, CPU does more
than one process at a time, so it's faster & consumes less time.

• These computers were run from a console consisting of display lights, toggle switches,
some form of input device, and a printer.

• If an error halted the program, the error condition was indicated by the lights.

• If the program proceeded to a normal completion, the output appeared on the printer.

• These early systems presented two main problems

o Scheduling

o Setup time

2. Simple batch systems:

• Early computers were expensive, and therefore it was important to maximize processor
utilization.

• The wasted time due to scheduling and setup time was unacceptable. To improve
utilization, the concept of a batch OS was developed.

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 19


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

• The central idea behind the batch OS is the use of a piece of software known as the
monitor.

• Monitor point of view: The monitor controls the sequence of events. For this to be so,
much of the monitor must always be in main memory and available for execution. That
portion is referred to as the resident monitor.

• Processor point of view: At a certain point, the processor executes instructions


from the portion of main memory containing the monitor. These instructions cause the
next job to be read into another portion of main memory.

• The user submits the job cards or tape to a computer operator, who batches the jobs
together sequentially and places the entire batch on an input device, for use by the
monitor.

3. Multiprogrammed Batch Systems:

• When one job needs to wait for I/O, the processor can switch to the other job, which is
likely not waiting for I/O.

• Expand memory to hold three, four, or more programs and switch among all of them.
The approach is known as multiprogramming, or multitasking is shown in fig 1.7.

Fig.1.7 Multiprogramming Examples

PREPARED BY: Mr.D.Srinivasan,AP/CSE, Mr.R. Arunkumar,AP/CSE & Mrs.A.Thilagavathi,AP/CSE Page 20


CS3451 – Introduction to Operating Systems Unit 1 Mailam Engineering College

4. Time-Sharing Systems:

• With the use of multiprogramming, batch processing can be quite efficient.

• Time sharing or multitasking is a logical extension of multiprogramming.

• The CPU executes multiple jobs by switching among them but switches occur so
frequently that the user can interact with each program while it is running.

• However, for some jobs, such as transaction processing, an interactive mode is


essential.

• A time-shared OS allows many users to share computer simultaneously.

• Processor time is shared among multiple users.

• In a time-sharing system, multiple users simultaneously access the system through


terminals.

• Both batch processing and time-sharing use multiprogramming.

5. Multiprocessor Systems

 A multiprocessor system (also known as a parallel system or tightly coupled system)
has more than one processor in close communication, sharing the computer bus, the
clock, and sometimes memory and peripheral devices.

Advantages of Multiprocessor systems:

 Increased throughput. By increasing the number of processors, we expect to get


more work done in less time.

 Economy of scale. Multiprocessor systems can cost less than equivalent multiple
single-processor systems, because they can share peripherals, mass storage, and
power supplies.

 Increased reliability. If functions can be distributed properly among several
processors, then the failure of one processor will not halt the system, only slow it
down.


Types of Multiprocessor Systems:

Asymmetric multiprocessing:

 In which each processor is assigned a specific task.

 A master processor controls the system; the other processors either look to the
master for instructions or have predefined tasks.

 This scheme defines a master-slave relationship.

 The master processor schedules and allocates work to the slave processors.

Symmetric multiprocessing:

• In which each processor performs all tasks within the operating system.

• SMP means that all processors are peers; no master-slave relationship exists
between processors.
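
The peer relationship in SMP can be sketched with a small worked example. The sketch below uses Python's multiprocessing module; the worker count and workload are illustrative assumptions, but the key point is that the operating system is free to schedule each worker process on any available processor, with no master-slave relationship between them.

```python
# Sketch: under symmetric multiprocessing, the OS may place each worker
# process on any available processor (all CPUs are peers). The pool size
# and the squaring workload are illustrative choices only.
import multiprocessing as mp

def square(n):
    # CPU-bound work that the OS may schedule on any processor
    return n * n

def parallel_squares(values, workers=2):
    # Each pool worker is a separate OS process, scheduled independently.
    with mp.Pool(processes=workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares([1, 2, 3, 4]))  # [1, 4, 9, 16]
```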

6. Distributed Operating Systems

 Each processor has its own local memory and clock

 The processor communicates with one another through various communication links
such as high-speed buses or telephone lines.

 These systems are referred to as loosely coupled systems.

 Advantages

o Resource sharing

o Computation speeds up

o Reliability

7. Client-Server Systems

 A computing system composed of two logical parts, a server that provides
information or services and a client that requests them, is called a client-server
system.

 The general structure of a client-server system is depicted in the figure below:

 Server Systems can be broadly categorized as compute servers and file servers.


o Compute-server systems provide an interface to which clients can send
requests to perform an action, in response to which they execute the action and
send back results to the client.

o File-server systems provide a file-system interface where clients can create,
update, read, and delete files.
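
A compute-server exchange of the kind described above can be sketched in a few lines. This is a minimal illustration, not a production server: the loopback address, the single-request server, and the upper-casing "action" are all assumptions made for the example.

```python
# Sketch of a compute-server: the client sends a request, the server
# performs the action (here, upper-casing text) and sends back the result.
import socket
import threading

def serve_once(sock):
    # Accept one client, perform the requested action, return the result.
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())

def compute_request(payload):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # OS assigns a free port
    server.listen(1)
    t = threading.Thread(target=serve_once, args=(server,))
    t.start()
    client = socket.create_connection(server.getsockname())
    client.sendall(payload)
    client.shutdown(socket.SHUT_WR)      # signal end of request
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

if __name__ == "__main__":
    print(compute_request(b"hello"))  # b'HELLO'
```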

8. Clustered Systems

 Clustered systems gather together multiple CPUs to accomplish computational work.

 Clustered systems differ from parallel systems, however, in that they are composed of
two or more individual systems coupled together.

 There are two types of clustering

 Asymmetric Clustering

o In this, one machine is in hot standby mode while the other is running the
applications.

o The hot standby host (machine) does nothing but monitor the active server.

o If that server fails, the hot standby host becomes the active server.

 Symmetric Clustering

o In this, two or more hosts are running applications, and they are monitoring
each other.

o This mode is obviously more efficient, as it uses all of the available hardware.

9. Real-Time Operating System

 A real time system has well-defined fixed time constraints.

 Processing must be done within the defined constraints or the system will fail.

 Real time systems are of two types

o Hard Real Time Systems

o Soft Real Time Systems

 The Real-Time Operating system which guarantees the maximum time for critical
operations and complete them on time are referred to as Hard Real-Time Operating
Systems.


 Real-time operating systems that guarantee only that critical tasks receive
priority over other tasks, rather than a strict maximum time, are referred to as
Soft Real-Time Operating Systems.

10. Handheld Systems

 Handheld systems include Personal Digital Assistants (PDAs), such as Palm Pilots,
or cellular telephones with connectivity to a network such as the Internet.

 Because they are usually of limited size, most handheld devices have a small
amount of memory, slow processors, and small display screens.

 Many handheld devices have between 512 KB and 8 MB of memory. As a result, the
operating system and applications must manage memory efficiently. This includes
returning all allocated memory back to the memory manager once the memory is no
longer being used.

 Currently, many handheld devices do not use virtual memory techniques, thus forcing
program developers to work within the confines of limited physical memory.

 Processors for most handheld devices often run at a fraction of the speed of a processor
in a PC. Faster processors require more power. To include a faster processor in a
handheld device would require a larger battery that would have to be replaced more
frequently.

 The last issue confronting program designers for handheld devices is the small display
screens typically available.

 One approach for displaying the content in web pages is web clipping, where only a
small subset of a web page is delivered and displayed on the handheld device.

 Some handheld devices may use wireless technology such as BlueTooth, allowing
remote access to e-mail and web browsing.

 Cellular telephones with connectivity to the Internet fall into this category.

 Their use continues to expand as network connections become more available and
other options such as cameras and MP3 players, expand their utility.


4. Explain in detail about operating system structure. (or) Explain Various structures of
Operating System.(Nov/Dec-2023)

System Structure

 An operating system is a construct that allows the user application programs to
interact with the system hardware.

 Since the operating system is such a complex structure, it should be created with
utmost care so it can be used and modified easily.

 An easy way to do this is to create the operating system in parts.

 Each of these parts should be well defined with clear inputs, outputs and functions.

It is divided into 3 categories

1. Simple structure

2. Layered approach

3. Microkernel approach

1. Simple Structure

• Many commercial systems do not have a well-defined structure. Frequently, such
operating systems started as small, simple, and limited systems, and then grew beyond
their original scope.

o MS-DOS structure

o Unix structure

Example 1: MS-DOS structure

• It provides the most functionality in the least space

• In MS-DOS the interfaces and levels of functionality are not well separated.

 For instance, application programs are able to access the basic I/O routines to write
directly to the display and disk drives, as shown in Fig. 1.8.


Fig.1.8. MS-DOS Layer Structure

Example 2: UNIX:

 UNIX is another system that was initially limited by hardware functionality.

 It consists of two separable parts:

o Kernel program

o System programs

 The kernel is further separated into a series of interfaces and device drivers,
which were added and expanded over the years as UNIX evolved.

 The kernel provides the file system, CPU scheduling, memory management, and
other operating system functions through system calls is shown in Fig 1.9.

Fig.1.9. Unix System Structure


2. Layered approach

 In which the operating system is broken into a number of layers (levels). The bottom
layer (layer 0) is the hardware; the highest (layer N) is the user interface.

 A typical operating-system layer — say, layer M — consists of data structures and a set
of routines that can be invoked by higher-level layers.

 Layer M, in turn, can invoke operations on lower-level layers.

 Given proper hardware support, OS may be broken into smaller, more appropriate
pieces.

 One method is the layered approach, in which the operating system is broken up into a
number of layers (or levels), each built on top of lower layers. The bottom layer
(layer 0) is the hardware; the highest (layer N) is the user interface, as shown in
Fig. 1.10.

Advantage:

 The main advantage of the layered approach is modularity

 Modularity makes the debugging and system verification easy

Disadvantage:

 The major difficulty with the layered approach involves appropriately defining the
various layers. Because a layer can use only lower-level layers, careful planning is
necessary.

 A problem with layered implementations is that they tend to be less efficient than other
types.

Fig.1.10 Layered Structure


o Layer 0 -Hardware

o Layer 1 - CPU Scheduling

o Layer 2 - Memory Management

o Layer 3 - Process Management

o Layer 4 - buffering for input and output

o Layer 5 - user programs

3. Microkernel approach

 Remove the non-essential components from the kernel into the user space.

 Moves as much from the kernel into user space

 Communication between user modules takes place using message passing, as shown in
Fig. 1.11.

 Benefits

o Easier to extend a microkernel

o Easier to port the operating system to new architectures

o More reliable (less code is running in kernel mode)

o More secure

Fig.1.11 Microkernel

Disadvantage: The performance of a microkernel can suffer due to increased
system-function overhead.


5. Discuss about various services provided by operating system. (April/May-2023)


(April/May-2024)

Services of OS:

User interface

• Almost all operating systems have a user interface (UI). This interface can take several
forms.

• One is a command-line interface (CLI), which uses text commands and a method for
entering them (say, a keyboard for typing in commands in a specific format with
specific options).

• Another is a batch interface, in which commands and directives to control those
commands are entered into files, and those files are executed. Most commonly, a
graphical user interface (GUI) is used.

• Here, the interface is a window system with a pointing device to direct I/O, choose from
menus, and make selections, and a keyboard to enter text. Some systems provide two or
all three of these variations. The OS services are shown in Fig. 1.12.

Program execution:

 The system must be able to load a program into memory and to run that program.

 The program must be able to end its execution, either normally or abnormally
(indicating error).

I/O Operations:

• A running program may require I/O, which may involve a file or an I/O device.

• For specific devices, special functions may be desired (such as recording to a CD or


DVD drive or blanking a display screen).

• For efficiency and protection, users usually cannot control I/O devices directly.
Therefore, the operating system must provide a means to do I/O.

File-system manipulation

 The file system is of particular interest. Obviously, programs need to read and write
files and directories.


 They also need to create and delete them by name, search for a given file, and list file
information.

 Finally, some operating systems include permissions management to allow or deny


access to files or directories based on file ownership.

 Many operating systems provide a variety of file systems, sometimes to allow personal
choice and sometimes to provide specific features or performance characteristics.

Communications

• There are many circumstances in which one process needs to exchange information
with another process.

• Such communication may occur between processes that are executing on the same
computer or between processes that are executing on different computer systems tied
together by a computer network.

• Communications may be implemented via shared memory, in which two or more


processes read and write to a shared section of memory, or message passing, in which
packets of information in predefined formats are moved between processes by the
operating system.

Error detection

• The operating system needs to detect and correct errors constantly.

• Errors may occur in the CPU and memory hardware (such as a memory error or a
power failure), in I/O devices (such as a parity error on disk, a connection failure on a
network, or lack of paper in the printer), and in the user program (such as an
arithmetic overflow, an attempt to access an illegal memory location, or a too-great use
of CPU time).

• For each type of error, the operating system should take the appropriate action to
ensure correct and consistent computing.

• Sometimes, it has no choice but to halt the system. At other times, it might terminate
an error-causing process or return an error code to a process for the process to detect
and possibly correct.


Resource allocation

• When there are multiple users or multiple jobs running at the same time, resources
must be allocated to each of them.

• The operating system manages many different types of resources. Some (such as CPU
cycles, main memory, and file storage) may have special allocation code, whereas
others (such as I/O devices) may have much more general request and release code.

• For instance, in determining how best to use the CPU, operating systems have CPU-
scheduling routines that take into account the speed of the CPU, the jobs that must be
executed, the number of registers available, and other factors.

• There may also be routines to allocate printers, USB storage drives, and other
peripheral devices.

Accounting

• We want to keep track of which users use how much and what kinds of computer
resources.

• This record keeping may be used for accounting (so that users can be billed) or simply
for accumulating usage statistics.

• Usage statistics may be a valuable tool for researchers who wish to reconfigure the
system to improve computing services.

Protection and security

• The owners of information stored in a multiuser or networked computer system may
want to control use of that information.

• When several separate processes execute concurrently, it should not be possible for
one process to interfere with the others or with the operating system itself.

• Protection involves ensuring that all access to system resources is controlled. Security
of the system from outsiders is also important.

• Such security starts with requiring each user to authenticate himself or herself to the
system, usually by means of a password, to gain access to system resources.

• If a system is to be protected and secure, precautions must be instituted throughout it.
A chain is only as strong as its weakest link.


Fig.1.12 A view of Operating System services

6. Explain in detail about User and Operating System Interface. (Nov/Dec-2023)

 There are two fundamental approaches for users to interact with the operating system.
One provides a command-line interface, or command interpreter, that allows users to
directly enter commands to be performed by the operating system.

 The other allows users to interface with the operating system via a graphical user
interface or GUI

Command Interpreters

1. Some operating systems include the command interpreter in the kernel.

2. Others, such as Windows and UNIX, treat the command interpreter as a special
program that is running when a job is initiated or when a user first logs on (on
interactive systems).

3. On systems with multiple command interpreters to choose from, the interpreters are
known as shells.

4. The main function of the command interpreter is to get and execute the next user-
specified command. Many of the commands given at this level manipulate files: create,
delete, list, print, copy, execute, and so on.

The MS-DOS and UNIX shells operate in this way. These commands can be implemented
in two general ways.


1. In one approach, the command interpreter itself contains the code to execute the
command.

 For example, a command to delete a file may cause the command interpreter to
jump to a section of its code that sets up the parameters and makes the
appropriate system call.

2. An alternative approach, used by UNIX among other operating systems, implements
most commands through system programs.

 In this case, the command interpreter does not understand the command in any
way; it merely uses the command to identify a file to be loaded into memory and
executed.
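
The two implementation approaches above can be sketched in a toy interpreter. This is a minimal sketch, not a real shell: `greet` is a made-up built-in handled by the interpreter's own code, while any other command is treated as a system program to load and execute, with Python's subprocess module standing in for the OS loader.

```python
# Toy command interpreter illustrating the two approaches described above.
import subprocess

BUILTINS = {
    # Approach 1: the interpreter itself contains the code for the command.
    "greet": lambda args: "hello " + " ".join(args),
}

def interpret(line):
    cmd, *args = line.split()
    if cmd in BUILTINS:
        return BUILTINS[cmd](args)
    # Approach 2: the interpreter merely identifies a program to be
    # loaded into memory and executed, then collects its output.
    result = subprocess.run([cmd] + args, capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(interpret("greet world"))    # hello world
    print(interpret("echo external"))  # external
```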

Graphical User Interface

 A second strategy for interfacing with the operating system is through a user friendly
graphical user interface, or GUI.

 Here, rather than entering commands directly via a command-line interface, users employ
a mouse-based window and- menu system characterized by a desktop metaphor.

 The user moves the mouse to position its pointer on images, or icons, on the screen (the
desktop) that represent programs, files, directories, and system functions.

 Depending on the mouse pointer’s location, clicking a button on the mouse can invoke a
program, select a file or directory—known as a folder—or pull down a menu that contains
commands.

7. What is a System Call? Explain the various types of system calls with an example for
each. Or Explain the purpose and importance of system calls in detail with examples.
(April/May-2023 & Nov/Dec-2023)(April/May-2024)(Nov/Dec-2024)

Definition

 System calls provide an interface to the services made available by an operating


system.

 These calls are generally available as assembly-language instructions; the handling
of a user application invoking the open() system call is shown in Fig. 1.13.


Fig.1.13 The handling of a user application invoking the open system call

 Three general methods are used to pass parameters to the operating system. The
simplest approach is to pass the parameters in registers.

 In some cases, however, there may be more parameters than registers. In these cases,
the parameters are generally stored in a block, or table, in memory, and the address of
the block is passed as a parameter in a register is shown in fig 1.14.

Fig.1.14 Passing Parameters as a Table

Types of System Calls:

System calls can be grouped roughly into six major categories:


 Process control

 File manipulation

 Device manipulation

 Information maintenance

 Communications

 Protection.

Process Control:

 A running program needs to be able to halt its execution either normally (end()) or
abnormally (abort()).

 If a system call is made to terminate the currently running program abnormally, or if
the program runs into a problem and causes an error trap, a dump of memory is
sometimes taken and an error message generated.

 The dump is written to disk and may be examined by a debugger.

 A process is a program in execution; it is the active state of a program. A running
program is called a process, and it needs several system calls, such as:

Activities

 end, abort

 load, execute

 create process, terminate process

 get process attributes, set process attributes

 wait for time

 wait event, signal event

 allocate and free memory
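
A minimal sketch of these process-control calls, using the POSIX fork/wait interface that Python's os module exposes. This assumes a Unix-like system (os.fork is not available on Windows), and the exit status 7 is an arbitrary illustrative value.

```python
# Sketch of process control: fork() creates a process, _exit() terminates
# it, and waitpid() lets the parent wait for it and recover its status.
import os

def run_child():
    pid = os.fork()
    if pid == 0:
        # Child process: terminate normally with a known exit status.
        os._exit(7)
    # Parent process: wait for the child and extract its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(run_child())  # 7
```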


File Management:

• Once the file is created, we need to open() it and use it. We may also read(), write(),
or reposition() (rewind or skip to the end of the file, for example).

• Finally, we need to close() the file, indicating that we are no longer using it.

• File attributes include the file name, file type, protection codes, accounting information,
and so on.

• At least two system calls, get file attributes() and set file attributes(), are required for
this function. Some operating systems provide many more calls, such as calls for file
move() and copy().

Activities

 create file, delete file

 open, close

 read, write, reposition

 get file attributes, set file attributes
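
These file-management calls can be exercised directly through the POSIX-style wrappers in Python's os module. The file name and contents below are illustrative.

```python
# Sketch of the file-management calls above: create/open, write,
# reposition (lseek), read, close, and delete.
import os
import tempfile

def file_roundtrip():
    path = os.path.join(tempfile.mkdtemp(), "demo.txt")
    fd = os.open(path, os.O_CREAT | os.O_RDWR)  # create file + open
    os.write(fd, b"operating systems")           # write
    os.lseek(fd, 10, os.SEEK_SET)                # reposition
    data = os.read(fd, 7)                        # read from the new offset
    os.close(fd)                                 # close
    os.unlink(path)                              # delete file
    return data

if __name__ == "__main__":
    print(file_roundtrip())  # b'systems'
```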

Device Management:

• A process may need several resources to execute—main memory, disk drives, access to
files, and so on. If the resources are available, they can be granted, and control can be
returned to the user process. Otherwise, the process will have to wait until sufficient
resources are available.

• The various resources controlled by the operating system can be thought of as devices.
Some of these devices are physical devices (for example, disk drives), while others can
be thought of as abstract or virtual devices (for example, files)

• A system with multiple users may require us to first request() a device, to ensure
exclusive use of it. After we are finished with the device, we release() it. These functions
are similar to the open() and close() system calls for files.



Activities

 request device, release device

 read, write, reposition

 get device attributes, set device attributes

 logically attach or detach devices

Information Maintenance:

• Many system calls exist simply for the purpose of transferring information between the
user program and the operating system.

• For example, most systems have a system call to return the current time() and date().
Other system calls may return information about the system, such as the number of
current users, the version number of the operating system, the amount of free memory
or disk space, and so on.

• In addition, the operating system keeps information about all its processes, and system
calls are used to access this information.

Activities

 get time or date, set time or date

 get system data, set system data

 get process, file, or device attributes

 set process, file, or device attributes
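
A short sketch of information-maintenance calls: querying the current time, the calling process's identity, and a file's attributes. The dictionary layout is just a convenience for the example.

```python
# Sketch of information maintenance: time/date, process identity, and
# file attributes retrieved via stat().
import os
import time

def system_info(path):
    return {
        "time": time.time(),            # current time (seconds since epoch)
        "pid": os.getpid(),             # this process's identifier
        "size": os.stat(path).st_size,  # one file attribute from stat()
    }

if __name__ == "__main__":
    info = system_info(__file__)
    print(sorted(info))  # ['pid', 'size', 'time']
```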

Communication:

• There are two common models of interprocess communication: the message-passing
model and the shared-memory model.

 In the message-passing model, the communicating processes exchange messages
with one another to transfer information. Messages can be exchanged between the
processes either directly or indirectly through a common mailbox.

 In the shared-memory model, processes use shared memory create() and shared
memory attach() system calls to create and gain access to regions of memory owned
by other processes. Recall that, normally, the operating system tries to prevent one
process from accessing another process’s memory.


Activities

 create, delete communication connection

 send, receive messages

 transfer status information

 attach or detach remote devices
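
The message-passing model can be sketched with two processes and an OS-provided channel. Here multiprocessing.Pipe stands in for the create-connection and send/receive calls listed above; the message contents are illustrative.

```python
# Sketch of message passing: two processes exchange messages over a
# channel created by the operating system.
import multiprocessing as mp

def responder(conn):
    msg = conn.recv()          # receive message
    conn.send("ack: " + msg)   # send reply
    conn.close()

def exchange(message):
    parent, child = mp.Pipe()                    # create communication connection
    proc = mp.Process(target=responder, args=(child,))
    proc.start()
    parent.send(message)                         # send message
    reply = parent.recv()                        # receive message
    proc.join()                                  # clean up the child process
    return reply

if __name__ == "__main__":
    print(exchange("hello"))  # ack: hello
```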

Protection:

 Protection provides a mechanism for controlling access to the resources provided by
a computer system.

 System calls providing protection include set permission() and get permission(),
which manipulate the permission settings of resources such as files and disks.

 The allow user() and deny user() system calls specify whether particular users can
— or cannot — be allowed access to certain resources.
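
A sketch of permission manipulation, with os.chmod() playing the role of a "set permission" call and stat() that of "get permission". These use standard POSIX permission bits; the 0o600 (owner read/write only) mode is an illustrative choice.

```python
# Sketch of protection calls: set a file's permissions, then read them back.
import os
import stat
import tempfile

def restrict_to_owner(path):
    os.chmod(path, 0o600)                        # set permission: owner rw only
    mode = stat.S_IMODE(os.stat(path).st_mode)   # get permission bits
    return oct(mode)

if __name__ == "__main__":
    f = tempfile.NamedTemporaryFile(delete=False)
    f.close()
    print(restrict_to_owner(f.name))  # 0o600
```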

8. Describe system programs in detail with a neat sketch.

System Program:

• System programs provide a convenient environment for program development and
execution.

They can be divided into these categories:

• File management. These programs create, delete, copy, rename, print, dump, list, and
generally manipulate files and directories.

• Status information. Some programs simply ask the system for the date, time, amount
of available memory or disk space, number of users, or similar status information.
Others are more complex, providing detailed performance, logging, and debugging
information, registry-which is used to store and retrieve configuration information.

• File modification. Several text editors may be available to create and modify the
content of files stored on disk or other storage devices.

• Programming-language support. Compilers, assemblers, debuggers, and interpreters
for common programming languages (such as C, C++, Java, and PERL) are often
provided with the operating system.


• Program loading and execution. Once a program is assembled or compiled, it must
be loaded into memory to be executed. The system may provide absolute loaders,
relocatable loaders, linkage editors, and overlay loaders.

• Communications. These programs provide the mechanism for creating virtual
connections among processes, users, and computer systems. They allow users to send
messages to one another’s screens, to browse Web pages, to send e-mail messages, to
log in remotely, or to transfer files from one machine to another.

• Background services. All general-purpose systems have methods for launching certain
system-program processes at boot time. Some of these processes terminate after
completing their tasks, while others continue to run until the system is halted.

9.Discuss in detail about Operating-System Design and Implementation.

Design Goals

 The first problem in designing a system is to define goals and specifications.

 At the highest level, the design of the system will be affected by the choice of hardware
and the type of system: batch, time sharing, single user, multiuser, distributed, real
time, or general purpose.

 Beyond this highest design level, the requirements may be much harder to specify.

 The requirements can, however, be divided into two basic groups: user goals and
system goals.

 User Goals: Operating system should be convenient to use, easy to learn, reliable, safe,
and fast

 System Goals: operating system should be easy to design, implement, and maintain,
as well as flexible, reliable, error-free, and efficient.

Mechanisms and Policies

 One important principle is the separation of policy from mechanism. Mechanisms
determine how to do something; policies determine what will be done.

 The separation of policy and mechanism is important for flexibility

 If the mechanism is properly separated from policy, it can be used either to support
a policy decision that I/O-intensive programs should have priority over CPU-
intensive ones or to support the opposite policy.


 Policy decisions are important for all resource allocation. Whenever it is necessary
to decide whether or not to allocate a resource, a policy decision must be made.
Whenever the question is how rather than what, it is a mechanism that must be
determined.
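
The mechanism/policy split can be made concrete with a toy dispatcher: the loop that picks and "runs" jobs is the mechanism and never changes, while the function deciding what runs next is the policy and can be swapped freely. Both policies below are illustrative.

```python
# Sketch of mechanism vs. policy: the dispatch loop is the mechanism;
# the function choosing the next job is the policy.
def run_all(jobs, policy):
    """Mechanism: repeatedly pick a job with `policy` and 'run' it."""
    order = []
    pending = list(jobs)
    while pending:
        job = policy(pending)   # policy decides WHAT runs next
        pending.remove(job)     # mechanism handles HOW it is dispatched
        order.append(job)
    return order

fifo = lambda pending: pending[0]        # first-come, first-served policy
shortest = lambda pending: min(pending)  # shortest-job-first policy

if __name__ == "__main__":
    print(run_all([3, 1, 2], fifo))      # [3, 1, 2]
    print(run_all([3, 1, 2], shortest))  # [1, 2, 3]
```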

Implementation

 Once an operating system is designed, it must be implemented. Because operating
systems are collections of many programs, written by many people over a long
period of time, it is difficult to make general statements about how they are
implemented.

 Much variation

o Early operating systems were written in assembly language

o Then system programming languages like Algol, PL/I

o Now C, C++

 Actually, usually a mix of languages

o The lowest levels of the kernel might be assembly language.

o Higher level routines might be in C

o System programs in C, C++, scripting languages like PERL, Python, shell
scripts

 Code written in a higher-level language is easier to port to other hardware

o But slower

 Emulation can allow an OS to run on non-native hardware.

o Emulators are programs that duplicate the functionality of one system on
another system.

 The only possible disadvantages of implementing an operating system in a
higher-level language are reduced speed and increased storage requirements.

 The major performance improvements in operating systems are more likely to be the
result of better data structures and algorithms than of excellent assembly-language
code.


o Although operating systems are large, only a small amount of the code is
critical to high performance; the interrupt handler, I/O manager, memory
manager, and CPU scheduler are probably the most critical routines.

10. What is the main difficulty that a programmer must overcome in writing an
operating system for a real-time Environment? (April/May-2024)

The main difficulty a programmer must overcome in writing an operating system for a real-
time environment is ensuring deterministic behavior. In real-time systems, tasks must be
executed within strict timing constraints, meaning that the system must provide predictable
and reliable responses to events, often within microseconds or milliseconds. The challenges
involved include:

1. Meeting timing constraints: The operating system must guarantee that critical tasks
or processes meet their deadlines, which requires precise scheduling, resource
management, and prioritization.
2. Interrupt handling: The system needs to handle interrupts promptly and predictably,
ensuring that time-sensitive tasks are executed without unnecessary delays.
3. Resource contention: Managing limited resources (like CPU time, memory, and I/O
devices) in a way that guarantees the real-time tasks are prioritized and are not starved
by lower-priority tasks.
4. Concurrency and synchronization: Multiple tasks might be running simultaneously,
and their interactions need to be carefully synchronized to avoid issues like race
conditions or deadlocks that could interfere with time-sensitive operations.
5. Avoiding non-deterministic behaviors: Real-time systems should avoid behaviors like
jitter (unpredictable fluctuations in task execution time), which could arise from non-
deterministic scheduling or inefficient resource management.

The operating system must be optimized for these requirements, often at the cost of features
or flexibility that a general-purpose operating system might support, to ensure reliable and
consistent performance under real-time constraints.


11. Explain the following terms with necessary illustrations. (April/May-2024)


(i) Buffering
(ii) Spooling
(iii) Time sharing systems
(iv) Distributed systems
(v) Real-time systems.

(i) Buffering :

It refers to the process of temporarily storing data in a memory area, called a buffer,
while it is being transferred between two locations or processed. The data can be
coming from or going to an input/output (I/O) device, such as a disk, network, or
keyboard, and the buffer acts as a temporary holding area that helps manage
differences in data processing speeds between components.
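The mechanism can be sketched in a few lines of Python (a hypothetical in-memory buffer, not any specific OS interface): writes accumulate in the buffer and are passed on to the slower destination only when a full block has been collected.

```python
class Buffer:
    """Temporary holding area between a fast producer and a slow destination."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = []      # data currently held in the buffer
        self.written = []      # data already transferred onward

    def write(self, item):
        self.pending.append(item)
        if len(self.pending) == self.capacity:
            self.flush()       # transfer one full block at a time

    def flush(self):
        self.written.extend(self.pending)   # one bulk transfer
        self.pending.clear()


buf = Buffer(capacity=3)
for byte in [1, 2, 3, 4, 5]:
    buf.write(byte)            # the first three are flushed as a block
```

Batching the transfer this way is why buffering smooths out the speed mismatch: the slow side sees one bulk operation instead of many small ones.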

(ii) Spooling :

Spooling (Simultaneous Peripheral Operations On-Line) is a process in which data or


tasks are temporarily stored in a buffer (usually on disk or in memory) to be processed
later, typically by a peripheral device like a printer or a disk. The idea is to manage the
flow of data to and from devices that cannot handle multiple operations
simultaneously, thereby allowing the system to perform other tasks while the devices
are busy with their work.
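A minimal sketch of the idea in Python (the job names and the `PrintSpooler` class are illustrative, not a real spooler API): submitting a job only appends it to the spool and returns immediately, while the slow device drains the queue in FIFO order.

```python
from collections import deque

class PrintSpooler:
    """Jobs queue up in the spool area and are drained one at a time."""

    def __init__(self):
        self.spool = deque()    # the spool area (conceptually, on disk)
        self.printed = []

    def submit(self, job):
        self.spool.append(job)  # returns immediately: the submitter does not wait

    def run_printer(self):
        while self.spool:       # the slow device drains the spool in FIFO order
            self.printed.append(self.spool.popleft())


s = PrintSpooler()
for job in ["report.pdf", "notes.txt", "slides.ppt"]:
    s.submit(job)               # all three submitted without blocking
s.run_printer()
```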

(iii) Time Sharing systems:

Time-sharing systems (also known as multitasking systems) are operating systems


designed to allow multiple users or processes to share system resources
simultaneously. This is achieved by rapidly switching between different tasks, giving
the illusion that each task is running concurrently, even if there is only one processor
or a limited number of processors available.

In a time-sharing system, the CPU is allocated to various processes for short periods,
typically milliseconds, in a round-robin or priority-based scheduling manner. This
approach maximizes the system's efficiency by keeping all users and processes actively
engaged with the system, providing the appearance of simultaneous execution.
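The round-robin switching described above can be simulated in Python; the process names, burst times, and quantum below are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes finish under round-robin.

    bursts: list of (name, remaining CPU time) pairs in arrival order.
    """
    ready = deque(bursts)
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:
            finished.append(name)                      # burst fits: process completes
        else:
            ready.append((name, remaining - quantum))  # preempted, back of the queue
    return finished


order = round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2)
```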


(iv) Distributed system

It is a network of independent computers that work together to provide a cohesive


service to users or other systems. These computers, often located in different physical
locations, communicate and coordinate with each other to achieve a common goal,
despite being physically separated. The key characteristic of a distributed system is
that it appears to users as a single coherent system, even though it consists of multiple
interconnected nodes (computers or devices).

Examples:

 Cloud Computing
 Distributed Databases
 Distributed File Systems, such as:
 Google File System (GFS)
 Hadoop Distributed File System (HDFS)
 Peer-to-Peer Networks

(v) Real-time systems

It is a type of computing system that is designed to process data and respond to inputs
within a specific time constraint, often referred to as a deadline. In real-time systems,
the correctness of the system's operation not only depends on the logical correctness of
the results but also on the timing of the responses. These systems are used in
environments where timing is critical, such as in embedded systems, industrial control,
robotics, telecommunications, and many others.
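One concrete scheduling result used in such systems can be sketched in Python: under Earliest-Deadline-First (EDF), a set of periodic tasks whose deadlines equal their periods is schedulable on a single processor exactly when total CPU utilization is at most 1. The function below is a sketch of that utilization test (the task parameters in the test are illustrative).

```python
def edf_schedulable(tasks):
    """Utilization test for Earliest-Deadline-First on one processor.

    tasks: list of (execution_time, period) pairs, with deadlines equal
    to periods. The task set is schedulable iff total utilization <= 1.
    """
    return sum(c / t for c, t in tasks) <= 1.0
```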


ANNA UNIVERSITY QUESTIONS

APRIL/MAY-2023
PART A

1. Define operating system. (Q.No: 1 )


2. List the services of OS. (Q.No: 41)

PART B

11. a) Explain the functions performed by an operating system. (Q.No: 2,5)


b) What is a System Call? Elaborate on the types of system calls. (Q.No:7 )

*****

ANNA UNIVERSITY QUESTIONS

NOV/DEC-2023
PART A

1. “OS is a control program”. Justify the statement with an example scenario. (Q.NO:16)


2. Define System call. Give any two system calls with their purpose.(Q.NO:31)

PART B

1. List down the objectives and functions of Operating Systems. (Q.NO:2)

2. Detail the various types of user interface supported by Operating Systems.(Q.NO:6)

3. Explain Various structures of Operating System.(Q.NO:4)

4. Explain the purpose and importance of system calls in details with examples.(Q.NO:7)

*****


ANNA UNIVERSITY QUESTIONS

APRIL/MAY-2024
PART A

1. How does an interrupt differ from a trap? (Q.No:7)


2. What is the purpose of system calls?(Q.No:2)

PART B

1. What is the main difficulty that a programmer must overcome in writing an operating system for a
real-time Environment?(Q.No:10)
2. Describe three general methods for passing parameters to the operating system.(Q.No:7)
3. Consider a computing cluster consisting of two nodes running a database. Describe two ways in
which the cluster software can manage access to the data on the disk. Discuss the benefits and
disadvantages of each.(Q.No:3)
4. List five services provided by an operating system, and explain how each creates convenience for
users. In which cases would it be impossible for user-level programs to provide these services?
Explain your answer.(Q.No:5)

*****
ANNA UNIVERSITY QUESTIONS

NOV/DEC-2024
PART A

1. Define Operating Systems.(Q.No:1)


2. List out different services of operating systems.(Q.No:41)
PART B

11. Explain the following terms with necessary illustrations. (Q.No:11)


(i) Buffering
(ii) Spooling
(iii) Time sharing systems
(iv) Distributed systems
(v) Real-time systems.
12. What are system calls? Explain different categories of system calls with examples.
(Q.No:7)

*****



MAILAM ENGINEERING COLLEGE CS3451 – Introduction to Operating System Univ-II

UNIT II
PROCESS MANAGEMENT
Processes - Process Concept - Process Scheduling - Operations on Processes - Inter-
process Communication; CPU Scheduling - Scheduling criteria - Scheduling
algorithms: Threads - Multithread Models – Threading issues; Process
Synchronization - The Critical-Section problem - Synchronization hardware –
Semaphores – Mutex - Classical problems of synchronization - Monitors; Deadlock -
Methods for handling deadlocks, Deadlock prevention, Deadlock avoidance, Deadlock
detection, Recovery from deadlock.
2 MARKS
1. Define a process.
A process is a program in execution. It is the unit of work in a modern operating
system. A process is an active entity with a program counter specifying the next
instructions to execute and a set of associated resources. It also includes the
process stack, containing temporary data and a data section containing global
variables.
2. What is process control block?
Each process is represented in the operating system by a process control block
also called a task control block. It contains many pieces of information associated
with a specific process. It simply acts as a repository for any information that may
vary from process to process. It contains the following information:
 Process state
 Program counter
 CPU registers
 CPU- scheduling information
 Memory-management information
 Accounting information
 I/O status information
3. What are the states of a Process?(April/May-2024)

 New: The process is being created.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.
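The legal moves between these states can be captured as a small table; the Python sketch below encodes the five-state model shown above (real kernels use richer state sets):

```python
# Legal transitions in the five-state process model.
TRANSITIONS = {
    "new":        {"ready"},                           # admitted by the OS
    "ready":      {"running"},                         # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"},  # preempted / blocked / exited
    "waiting":    {"ready"},                           # awaited event occurred
    "terminated": set(),                               # no way out
}

def can_move(src, dst):
    """True if a process may go directly from state src to state dst."""
    return dst in TRANSITIONS[src]
```

Note, for example, that a waiting process cannot be dispatched directly: it must first return to the ready queue.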

PREPARED BY : Mr. D. Srinivasan, AP/CSE, Mr. R. Arunkumar,AP/CSE & Mrs. A.Thilagavathi,AP/CSE 1



4. What are the objectives of multiprogramming and time sharing system?


 The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization.
 The objective of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is running.

5. What are the use of job queues, ready queues and device queues?
 As a process enters a system they are put in to a job queue. This queue
consists of all jobs in the system.
 The processes that are residing in main memory and are ready and waiting
to execute are kept on a list called ready queue.
 The list of processes waiting for a particular I/O device is kept in the device
queue.

6. What is meant by context switch?


Switching the CPU to another process requires saving the state of the old
process and loading the saved state for the new process. This task is known as
context switch.

7. What is independent process?


A process is independent if it cannot affect or be affected by the other processes
executing in the system. Any process that does not share data with any other
process is an independent process.

8. What is co-operative process?


A process is co-operating if it can affect or be affected by the other processes
executing in the system. Any process that shares data with another process is a
co-operating process.

9. What are the benefits of co-operating processes?


 Information sharing.
 Computation speed up.
 Modularity, Convenience.

10.What do you mean by degree of multiprogramming?

The number of processes in memory is known as the degree of multiprogramming.
If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the
system.


11.What do you mean by Inter Process Communication?(Nov/Dec-2024)


 A process is independent if it cannot affect or be affected by the other processes
executing in the system. Any process that does not share data with any other
process is independent.
 A process is cooperating if it can affect or be affected by the other processes
executing in the system. Any process that shares data with other processes is a
cooperating process.

12. What is the use of inter process communication?


Inter process communication provides a mechanism to allow the co-operating
process to communicate with each other and synchronizes their actions without
sharing the same address space. It is provided a message passing system.

13.What do you mean by cascading termination?


 If a process terminates (either normally or abnormally), then all its children
must also be terminated.
 This phenomenon, referred to as cascading termination, is normally initiated
by the operating system.

14.What are the two models of IPC?


There are two fundamental models of Inter Process Communication:
 Shared Memory
 Message Passing
 In the shared-memory model, a region of memory that is shared by cooperating
processes is established.
 Processes can then exchange information by reading and writing data to the
shared region.
In the message-passing model, communication takes place by means of messages
exchanged between the cooperating processes.
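The message-passing model can be sketched in Python, with two threads standing in for two processes and a thread-safe queue playing the role of the kernel's message channel (the `None` end-of-stream marker is an illustrative convention, not part of any real IPC API):

```python
import queue
import threading

# Two threads stand in for two processes; the queue is the message channel.
mailbox = queue.Queue()
received = []

def producer():
    for msg in ["hello", "world", None]:   # None marks the end of the stream
        mailbox.put(msg)                   # send(message)

def consumer():
    while True:
        msg = mailbox.get()                # receive(): blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

Note that the two sides never touch each other's address space; all coordination goes through the channel, which is the defining property of message passing.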

15.What are the two operations on a process?


Process Creation:
During the course of execution, a process may create several new processes.
The creating process is called a parent process, and the new processes are
called the children of that process.
Process Termination:
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit() system call.
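On a POSIX system, the parent/child relationship can be sketched with Python's `os.fork()`; the exit code 7 is arbitrary, chosen only to show the parent observing the child's termination status:

```python
import os

pid = os.fork()                    # create a child process
if pid == 0:
    # Child: does its work, then terminates via the exit system call.
    os._exit(7)
else:
    # Parent: waits for the child it created (avoiding a zombie).
    _, status = os.waitpid(pid, 0)
    child_code = os.WEXITSTATUS(status)
```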

16.What do you mean by a message passing system?


Message passing provides a mechanism to allow processes to communicate and
to synchronize their actions without sharing the same address space.

17.What do you mean by a thread?


 A thread is a basic unit of CPU utilization.
 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
 A traditional (or heavyweight) process has a single thread of control.
 If a process has multiple threads of control, it can perform more than one task
at a time.

18.What are the benefits of multithreads?


 Responsiveness - One thread may provide rapid response while other threads
are blocked or slowed down doing intensive calculations.
 Resource sharing - By default threads share common code, data, and other
resources, which allows multiple tasks to be performed simultaneously in a
single address space.
 Economy - Creating and managing threads ( and context switches between
them ) is much faster than performing the same tasks for processes.
 Scalability, i.e. Utilization of multiprocessor architectures.

19.What do you mean by a multicore or multiprocessor system?


 System design is to place multiple computing cores on a single chip.
 Each core appears as a separate processor to the operating system.
 Whether the cores appear across CPU chips or within CPU chips, we call these
systems multicore or multiprocessor systems.

20.What are the benefits of multithreaded programming?


The benefits of multithreaded programming can be broken down into
four major categories:
 Responsiveness
 Resource sharing
 Economy
 Utilization of multiprocessor architectures


21.Compare user threads and kernel threads.


S.No  User Threads                                 Kernel Threads
1     User threads are supported above the         Kernel threads are supported directly by
      kernel and are implemented by a thread       the operating system.
      library at the user level.
2     Thread creation, scheduling and              Thread creation, scheduling and
      management are done in user space            management are done by the operating
      without kernel intervention.                 system.
3     They are fast to create and manage, but      Slower to create and manage compared to
      a blocking system call will cause the        user threads. If the thread performs a
      entire process to block.                     blocking system call, the kernel can
                                                   schedule another thread in the
                                                   application for execution.

22.What is critical section problem?


 Consider a system consisting of 'n' processes. Each process has a segment of code
called a critical section, in which the process may be changing common
variables, updating a table, or writing a file.
 When one process is executing in its critical section, no other process can be
allowed to execute in its critical section.
23.What are the requirements that a solution to the critical section problem
must satisfy?
The three requirements are
 Mutual exclusion
 Progress
 Bounded waiting

24.What do you mean by a mutex lock? (Nov/Dec-17)


 Operating-system designers build software tools to solve the critical-section
problem. The simplest of these tools is the mutex lock (short for mutual exclusion).
 We use the mutex lock to protect critical regions and thus prevent race
conditions.
 That is, a process must acquire the lock before entering a critical section; it
releases the lock when it exits the critical section.
The acquire() function acquires the lock, and the release() function releases the
lock.

25.Define busy waiting and spinlock.


When a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code. This is called busy
waiting, and this type of semaphore is also called a spinlock, because the
process "spins" while waiting for the lock.

26.What do you mean by a monitor?


Monitors:
A high-level abstraction that provides a convenient and effective
mechanism for process synchronization.
Only one process may be active within the monitor at a time.
Syntax:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …. }
    initialization code (…) { …. }
}
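Python has no monitor construct, but the same discipline — one lock guarding all shared state, with condition variables for waiting inside it — can be sketched with `threading.Condition`; the bounded buffer below is an illustrative example, not a standard API:

```python
import threading

class BoundedBuffer:
    """Monitor-style class: one lock guards all state; threads wait inside."""

    def __init__(self, size):
        self.items = []
        self.size = size
        self.cond = threading.Condition()   # monitor lock plus wait queue

    def put(self, item):
        with self.cond:                     # only one thread active inside
            while len(self.items) == self.size:
                self.cond.wait()            # buffer full: wait for room
            self.items.append(item)
            self.cond.notify_all()

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()            # buffer empty: wait for an item
            item = self.items.pop(0)
            self.cond.notify_all()
            return item


buf = BoundedBuffer(size=2)
out = []

def consume():
    for _ in range(5):
        out.append(buf.get())

t = threading.Thread(target=consume)
t.start()
for i in range(5):
    buf.put(i)                              # blocks whenever the buffer is full
t.join()
```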
27.Define CPU scheduling.
 CPU scheduling is the process of switching the CPU among various processes.
CPU scheduling is the basis of multi programmed operating systems.
 By switching the CPU among processes, the operating system can make the
computer more productive.

28.What is a Dispatcher?
The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler. This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program.

29.What is dispatch latency?


The time taken by the dispatcher to stop one process and start another running
is known as dispatch latency.

30.What are the various scheduling criteria for CPU scheduling?


The various scheduling criteria are
 CPU utilization
 Throughput
 Turnaround time
 Waiting time

 Response time

31.Define throughput.
Throughput in CPU scheduling is the number of processes that are completed
per unit time. For long processes, this rate may be one process per hour; for
short transactions, throughput might be ten processes per second.

32.What is turnaround time?


 Turnaround time is the interval from the time of submission to the time of
completion of a process.
 It is the sum of the periods spent waiting to get into memory, waiting in the
ready queue, executing on the CPU, and doing I/O.
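For jobs that all arrive at time 0 and run under first-come, first-served scheduling, waiting and turnaround times follow directly from the definitions above; a Python sketch (the burst values in the test are illustrative):

```python
def fcfs_metrics(bursts):
    """Waiting and turnaround times under FCFS for jobs arriving at t = 0.

    bursts: CPU burst lengths in arrival order.
    """
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent in the ready queue
        clock += burst
        turnaround.append(clock)     # submission (t = 0) to completion
    return waiting, turnaround
```

Each job's turnaround time is its waiting time plus its own burst, which is exactly the definition given above.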
33.What is the problem of priority scheduling and its solution?
 A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be
considered blocked.
 A priority scheduling algorithm can leave some low priority processes waiting
indefinitely.
 In a heavily loaded computer system, a steady stream of higher-priority
processes can prevent a low-priority process from ever getting the CPU.
 Aging: A solution to the problem of indefinite blockage of low-priority processes
is aging. Aging involves gradually increasing the priority of processes that wait
in the system for a long time.

34.Define race condition.


 When several processes access and manipulate the same data concurrently, the
outcome of the execution depends on the particular order in which the accesses
take place; this is called a race condition.
 To avoid race condition, only one process at a time can manipulate the shared
variable.

35.Define deadlock.
 A process requests resources; if the resources are not available at that time, the
process enters a wait state.
 Waiting processes may never again change state, because the resources they
have requested are held by other waiting processes. This situation is called a
deadlock.

36.What is the sequence in which resources may be utilized?


Under the normal mode of operation, a process may utilize a resource in the
following sequence:


 Request: If the request cannot be granted immediately, then the requesting


process must wait until it can acquire the resource.
 Use: The process can operate on the resource.
 Release: The process releases the resource.

37.What is a resource-allocation graph?


 Deadlocks can be described more precisely in terms of a directed graph called a
system resource allocation graph. This graph consists of a set of vertices V and
a set of edges E.
 The set of vertices V is partitioned into two different types of nodes; P the set
consisting of all active processes in the system and R the set consisting of all
resource types in the system.

38.Define request edge and assignment edge.


 A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it
signifies that process Pi has requested an instance of resource type Rj and is
currently waiting for that resource.
 A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it
signifies that an instance of resource type Rj has been allocated to process Pi. A
directed edge Pi → Rj is called a request edge. A directed edge Rj → Pi is called an
assignment edge.
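When every resource type has a single instance, a cycle in the wait-for form of this graph implies deadlock; the Python sketch below detects such a cycle with a depth-first search (the dictionary encoding of the graph is illustrative):

```python
def has_cycle(graph):
    """Detect a cycle in a wait-for graph: process -> set of processes it waits on."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / finished
    color = {}

    def visit(node):
        color[node] = GREY
        for nxt in graph.get(node, ()):
            state = color.get(nxt, WHITE)
            if state == GREY:             # back edge: the wait chain loops, deadlock
                return True
            if state == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in graph)
```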

39.What are the methods for handling deadlocks?


 The deadlock problem can be dealt with in one of the three ways:
 Use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlock state.
 Allow the system to enter the deadlock state, detect it and then recover.
 Ignore the problem all together, and pretend that deadlocks never occur in the
system.

40.Define deadlock prevention.


 Deadlock prevention is a set of methods for ensuring that at least one of the
four necessary conditions like mutual exclusion, hold and wait, no preemption
and circular wait cannot hold.
 By ensuring that that at least one of these conditions cannot hold, the
occurrence of a deadlock can be prevented.

41.Define deadlock avoidance.


An alternative method for avoiding deadlocks is to require additional
information about how resources are to be requested. Each request requires the
system consider the resources currently available, the resources currently


allocated to each process, and the future requests and releases of each process,
to decide whether they could be satisfied or must wait to avoid a possible future
deadlock.

42.What are a safe state and an unsafe state?


 A state is safe if the system can allocate resources to each process in some
order and still avoid a deadlock. A system is in a safe state only if there exists a
safe sequence.
 A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current
allocation state if, for each Pi, the resources that Pi can still request can be
satisfied by the currently available resources plus the resources held by all Pj,
with j < i. If no such sequence exists, then the system state is said to be unsafe.

43.What is banker's algorithm?


 Banker's algorithm is a deadlock avoidance algorithm that is applicable to a
resource-allocation system with multiple instances of each resource type. The
two algorithms used for its implementation are:
 Safety algorithm: The algorithm for finding out whether or not a system is in a
safe state.
 Resource-request algorithm: If the resulting resource-allocation state is safe, the
transaction is completed and process Pi is allocated its resources. If the new
state is unsafe, Pi must wait and the old resource-allocation state is restored.
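The safety algorithm can be sketched in Python; the matrices in the call below are the classic five-process, three-resource textbook example, included only for illustration:

```python
def is_safe(available, allocation, need):
    """Safety algorithm of the Banker's algorithm.

    Returns a safe sequence of process indices, or None if the state is unsafe.
    """
    work = list(available)                 # Work := Available
    finish = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            # Find an unfinished Pi whose Need fits in Work.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend Pi runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finish) else None


# Classic example: 5 processes, 3 resource types (A, B, C).
safe_seq = is_safe(
    available=[3, 3, 2],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
    need=[[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]],
)
```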

44.What is a deadlock?
 A process requests resources; if the resources are not available at that time, the
process enters a wait state.
 Waiting processes may never again change state, because the resources they
have requested are held by other waiting processes.
 This situation is called a deadlock.

45.What is Semaphore?
 A semaphore 'S' is a synchronization tool which is an integer value that, apart
from initialization, is accessed only through two standard atomic operations;
 wait() - operation was originally termed P (from the Dutch proberen, “to test”);
 signal() - was originally called V (from verhogen, “to increment”).
 Semaphores can be used to deal with the n-process critical-section problem. They
can also be used to solve various synchronization problems.
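A Python sketch of wait()/signal() used for ordering: initializing the semaphore to 0 forces the second thread to block until the first one signals (`threading.Semaphore` supplies `acquire()` for wait and `release()` for signal):

```python
import threading

sem = threading.Semaphore(0)       # value 0: the first wait() blocks
events = []

def first():
    events.append("first")
    sem.release()                  # signal(S)

def second():
    sem.acquire()                  # wait(S): blocks until first() signals
    events.append("second")

b = threading.Thread(target=second)
a = threading.Thread(target=first)
b.start()
a.start()
a.join()
b.join()
```

Whatever order the threads are scheduled in, "second" can only be recorded after "first" has been, because the semaphore enforces the ordering.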

46.What is starvation? How can it be solved?


 Starvation: Starvation is a resource management problem where a process


does not get the resources it needs for a long time because the resources are
being allocated to other processes.
 Aging: Aging is a technique to avoid starvation in a scheduling system. It works
by adding an aging factor to the priority of each request. The aging factor
must increase the request's priority as time passes and must ensure that a
request will eventually become the highest-priority request (after it has waited
long enough).

47.Distinguish between preemptive and non preemptive scheduling.


 Under non preemptive scheduling once the CPU has been allocated to a
process, the process keeps the CPU until it releases the CPU either by
terminating or switching to the waiting state.
 Preemptive scheduling can preempt a process which is utilizing the CPU in
between its execution and give the CPU to another process.

48.Differentiate a thread from a process.


 A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU
utilization; it comprises a thread ID, a program counter, a register set and a
stack.
 It shares with other threads belonging to the same process its code section, data
section, and operating-system resources such as open files and signals.
 A process is a program in execution. It is an active entity, and it includes the
process stack, containing temporary data, and the data section, containing global
variables.

49.Give the necessary conditions for deadlock to occur.(April/May-2023)


 Mutual exclusion
 Hold and wait
 No preemption
 Circular wait

50.Can multiple user level threads achieve better performance on a


multiprocessor system than a single processor system? Justify your answer.
 Yes, provided the user-level threads are mapped onto multiple kernel threads; on a
multiprocessor they can then run truly in parallel, whereas on a single processor
they can only interleave.
 Threads are very useful in modern programming whenever a process has
multiple tasks to perform independently of the others.
 This is particularly true when one of the tasks may block, and it is desired to
allow the other tasks to proceed without blocking.
 For example in a word processor, a background thread may check spelling and
grammar while a foreground thread processes user input ( keystrokes ), while


yet a third thread loads images from the hard drive, and a fourth does periodic
automatic backups of the file being edited.

51.What do you mean by Short term scheduler?


The selection process is carried out by the short-term scheduler (or CPU
scheduler). The scheduler selects from among the processes in memory that are
ready to execute, and allocates the CPU to one of them.

52.What is Gantt chart?


A Gantt chart is a type of bar chart that illustrates a process schedule. It can be
used to show the current schedule status of a process.

53.Differentiate single threaded and multi threaded processes.


S.No  Multithreaded Process                       Single Threaded Process
1     In this type of programming, multiple      In this type of programming, a single
      threads run at the same time.              thread runs at a time.
2     The multithreaded model does not use an    The single-threaded model uses a process
      event loop with polling.                   event loop with polling.
3     CPU time is never wasted.                  CPU time is wasted.
4     Idle time is minimum.                      Idle time is more.
5     It results in more efficient programs.     It results in less efficient programs.
6     When one thread is paused for some         When one thread is paused, the system
      reason, other threads run as normal.       waits until this thread is resumed.

54.Give the queuing diagram representation of process scheduling. (April/May-2019)

55.List out the benefits and challenges of thread handling. (April/May-2019)
Benefits:
Responsiveness, Resource Sharing, Economy, Scalability


Challenges:
 Shared Access to Data

 Locks Can Cause Performance Issues

 Exceptions in Threads Can Cause Problems

 Background Threads Need Care When Updating a GUI

56.What is the difference between thread and process? (April/May-2021)
      PROCESS                                     THREAD
1.    Process means any program is in            Thread means a segment of a process.
      execution.
2.    A process takes more time to terminate.    A thread takes less time to terminate.
3.    It takes more time for creation.           It takes less time for creation.
4.    It also takes more time for context        It takes less time for context switching.
      switching.
5.    A process is less efficient in terms of    A thread is more efficient in terms of
      communication.                             communication.

57.Define Mutex.(Nov/Dec-2021)
Mutex is a binary variable whose purpose is to provide locking mechanism. It is
used to provide mutual exclusion to a section of code, means only one process
can work on a particular code section at a time.

58.State the critical section problem. (April/May-2023)


The critical-section problem in operating systems arises when shared resources
are accessed by concurrent processes. The role of the operating system here is
to ensure that when two or more processes need to access a shared resource
concurrently, only one process gets access at a time.

59.Draw the Life cycle of a Process.(Nov/Dec-2023)


60.Compare process creation and thread creation in terms of economy. (Nov/Dec-2023)

Comparison Basis       Process                                  Thread

Definition             A process is a program under             A thread is a lightweight process that
                       execution, i.e. an active program.       can be managed independently by a
                                                                scheduler.
Context switching      Processes require more time for          Threads require less time for context
time                   context switching as they are            switching as they are lighter than
                       heavier.                                 processes.
Memory sharing         Processes are totally independent        A thread may share some memory with
                       and don't share memory.                  its peer threads.
Communication          Communication between processes          Communication between threads
                       requires more time than between          requires less time than between
                       threads.                                 processes.
Blocked                If a process gets blocked, remaining     If a user-level thread gets blocked, all
                       processes can continue execution.        of its peer threads also get blocked.
Resource               Processes require more resources         Threads generally need less resources
consumption            than threads.                            than processes.
Dependency             Individual processes are independent     Threads are parts of a process and so
                       of each other.                           are dependent.
Data and code          Processes have independent data and      A thread shares the data segment, code
sharing                code segments.                           segment, files etc. with its peer
                                                                threads.
Treatment by OS        All the different processes are          All user-level peer threads are treated
                       treated separately by the operating      as a single task by the operating
                       system.                                  system.
Time for creation      Processes require more time for          Threads require less time for creation.
                       creation.
Time for termination   Processes require more time for          Threads require less time for
                       termination.                             termination.

61.What are the Threading issues? (April/May-2024)


List of issues that are considered during multithreaded programs:
 The fork() and exec() system calls
 Cancellation
 Signal handling
 Thread pools, thread-specific data
62. What do you mean by cooperating process? (Nov/Dec-2024)
A cooperating process refers to a process in a computer system that works in
collaboration with other processes to achieve a common goal. These processes
interact by sharing resources, exchanging information, or synchronizing their
activities. Unlike independent processes, which run in isolation, cooperating
processes rely on communication mechanisms to coordinate their operations.
Some key aspects of cooperating processes include:
o Inter-process Communication (IPC)
o Synchronization
o Resource Sharing
o Mutual Exclusion

63. What is zombie process?


A process that has terminated, but whose parent has not yet called wait( ), is
known as a zombie process.


PART B

1. What is Process? Explain about the states of process with neat sketch and
discuss the process state transition with a neat diagram. (April/May-2023)

Definition:
A process is a program in execution. A process is the unit of work in a modern time-sharing system.
Process Concept:
A program is a passive entity, such as the contents of a file stored on disk,
whereas a process is an active entity, with a program counter specifying the
next instruction to execute and a set of associated resources. The structure of a
process in memory is shown in Figure 2.1.
 A process is more than the program code, which is sometimes known as the text
section.
 It also includes program counter and the contents of the processor’s registers,
 The process stack, which contains temporary data (such as function parameters,
return addresses, and local variables),
 And a data section, which contains global variables.
 And heap, which is memory that is dynamically allocated during process run
time.
 A program becomes a process when an executable file is loaded into memory.

Fig.2.1 process in memory


Process State:
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. Each process may be in one of the following states.
Fig.2.2 shows the states of the process.


The states are,

• New. The process is being created.


• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion
or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.

Fig.2.2 process state Diagram

Process Control Block:


Each process is represented in the operating system by a process control block
(PCB)—also called a task control block. It contains many pieces of information
associated with a specific process.

Fig.2.3 shows the elements of the Process Control Block. The elements are,

 State: The state may be new, ready, running, waiting, halted, and so on.
 Program counter: The counter indicates the address of the next instruction to be executed for this process.
 CPU registers: The registers vary in number and type, depending on the
computer architecture.
 CPU-scheduling information: This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
 Memory-management information: This information may include such
information as the value of the base and limit registers, the page tables, or the
segment tables, depending on the memory system used by the operating
system.
 I/O Status information: The information includes the list of I/O devices
allocated to this process, a list of open files, and so on.


Fig.2.3 Process Control Block

2.Explain in detail about process scheduling with neat diagram.


Process Scheduling:
 The objective of multiprogramming is to have some process running at all times,
to maximize CPU utilization.
 The objective of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is running.
Scheduling Queues:
 As processes enter the system, they are put into a job queue, which consists of
all processes in the system. The processes that are residing in main memory
and are ready and waiting to execute are kept on a list called the ready queue.
 This queue is generally stored as a linked list. A ready-queue header contains
pointers to the first and final PCBs in the list. Each PCB includes a pointer field
that points to the next PCB in the ready queue.
 When a process is allocated the CPU, it executes for a while and eventually
quits, is interrupted, or waits for the occurrence of a particular event, such as
the completion of an I/O request. Suppose the process makes an I/O request to
a shared device, such as a disk.
 Since there are many processes in the system, the disk may be busy with the
I/O request of some other process.
 The process therefore may have to wait for the disk. The list of processes
waiting for a particular I/O device is called a device queue. Each device has its
own device queue. Figure 2.4 shows the ready queue and various I/O device
queues.


Fig.2.4 The Ready Queue and various I/O device Queue


 A common representation of process scheduling is a queueing diagram, as
shown in Figure 2.5.
 In this diagram each rectangular box represents a queue.
Two types of queues are present:
 The ready queue and a set of device queues. The circles represent the resources
that serve the queues, and the arrows indicate the flow of processes in the
system.
 A new process is initially put in the ready queue. It waits there until it is
selected for execution, or dispatched.

Fig.2.5 Queuing diagram representation of process scheduling


Schedulers:
 A process migrates among the various scheduling queues throughout its lifetime. The
selection process is carried out by the appropriate scheduler.
 The long-term scheduler, or job scheduler, selects processes from the job pool and
loads them into memory for execution.


 The short-term scheduler, or CPU scheduler, selects from among the processes that
are ready to execute and allocates the CPU to one of them.
 The long-term scheduler executes much less frequently; minutes may separate the
creation of one new process and the next. The long-term scheduler controls the degree
of multiprogramming (the number of processes in memory).
 Some operating systems, such as time-sharing systems, may introduce an additional,
intermediate level of scheduling. This medium-term scheduler is shown in following
figure 2.6.

Fig.2.6 Addition of medium term scheduling to the queuing diagram

Context Switch:
• When an interrupt occurs, the system needs to save the current context of the process
running on the CPU so that it can restore that context when its processing is done,
essentially suspending the process and then resuming it.
• Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context
switch.

3.Explain in detail about operations of process. (April/May-2024)

The processes in most systems can execute concurrently, and they may be created
and deleted dynamically.
Thus, these systems must provide a mechanism for process creation and termination.
Process Creation:
During the course of execution, a process may create several new processes. The
creating process is called a parent process, and the new processes are called the
children of that process.
 Each of these new processes may in turn create other processes, forming a tree
of processes.


 Most operating systems (including UNIX, Linux, and Windows) identify


processes according to a unique process identifier (or pid), which is typically an
integer number.
 Figure 2.6 illustrates a typical process tree for the Linux operating system,
showing the name of each process and its pid.

Fig.2.6 A tree of processes on a typical Linux System

 The init process (which always has a pid of 1) serves as the root parent process
for all user processes.
 Once the system has booted, the init process can also create various user
processes, such as a web or print server.

When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

There are also two address-space possibilities for the new process:

 The child process is a duplicate of the parent process (it has the same program
and data as the parent).
 The child process has a new program loaded into it.

To illustrate these differences, let’s first consider the UNIX operating system.

 A new process is created by the fork() system call. The new process consists of a
copy of the address space of the original process. This mechanism allows the
parent process to communicate easily with its child process.
 Both processes (the parent and the child) continue execution at the instruction
after the fork(), with one difference: the return code for the fork() is zero for the
new (child) process, whereas the (nonzero) process identifier of the child is

returned to the parent. Fig.2.7 shows Process creation using the fork() system
call.
 After a fork() system call, one of the two processes typically uses the exec()
system call to replace the process’s memory space with a new program.

Fig.2.7 Process creation using the fork() system call

 The exec() system call loads a binary file and starts its execution.
Process Termination:
 A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit() system call. At that point, the
process may return a status value (typically an integer) to its parent process (via
the wait() system call).
 All the resources of the process—including physical and virtual memory, open
files, and I/O buffers—are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of
reasons, such as these:

 The child has exceeded its usage of some of the resources that it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

If a process terminates (either normally or abnormally), then all its children
must also be terminated. This phenomenon, referred to as cascading
termination, is normally initiated by the operating system.

4.Explain in detail about Inter Process Communication.

Processes executing concurrently in the operating system may be either independent


processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other processes
executing in the system.
 Any process that does not share data with any other process is independent.
 A process is cooperating if it can affect or be affected by the other processes
executing in the system.
 Any process that shares data with other processes is a cooperating process.

There are two fundamental models of Inter Process Communication:


1. Shared Memory
2. Message Passing
Figure 2.8 shows the two communication models: (a) message passing and (b)
shared memory.
a. In the shared-memory model, a region of memory that is shared by cooperating
processes is established.
Processes can then exchange information by reading and writing data to the shared
region.

b. In the message-passing model, communication takes place by means of messages
exchanged between the cooperating processes.

Figure 2.8 Communications models. (a) Message passing. (b) Shared memory.

 Both of the models just mentioned are common in operating systems, and many
systems implement both.
 Message passing is useful for exchanging smaller amounts of data, because no
conflicts need be avoided.
 Message passing is also easier to implement in a distributed system than shared
memory.
 Shared memory can be faster than message passing, since message-passing systems
are typically implemented using system calls.

Shared-Memory Systems:

 Inter Process Communication using shared memory requires communicating


processes to establish a region of shared memory.
 Typically, a shared-memory region resides in the address space of the process
creating the shared-memory segment.
 Other processes that wish to communicate using this shared-memory segment must
attach it to their address space.
 Shared memory requires that two or more processes agree to remove this restriction.
 They can then exchange information by reading and writing data in the shared areas.


One solution to the producer–consumer problem uses shared memory:


 To allow producer and consumer processes to run concurrently, we must have
available a buffer of items that can be filled by the producer and emptied by the
consumer.
 This buffer will reside in a region of memory that is shared by the producer and
consumer processes.
 A producer can produce one item while the consumer is consuming another item.
 The producer and consumer must be synchronized, so that the consumer does not try
to consume an item that has not yet been produced.

Two types of buffers can be used.

 The unbounded buffer places no practical limit on the size of the buffer. The
consumer may have to wait for new items, but the producer can always produce new
items.
 The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait
if the buffer is empty, and the producer must wait if the buffer is full.

The following variables reside in a region of memory shared by the producer and
consumer processes:

#define BUFFER_SIZE 10
item buffer[BUFFER_SIZE];
int in = 0;   /* next free position */
int out = 0;  /* first full position */

The producer process using shared memory:

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

The consumer process using shared memory:

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}


Message-Passing Systems:
 Message passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space.
 It is particularly useful in a distributed environment, where the communicating
processes may reside on different computers connected by a network.
 For example, an Internet chat program could be designed so that chat participants
communicate with one another by exchanging messages.

A message-passing facility provides at least two operations:


o send(message)
o receive(message)
 Messages sent by a process can be either fixed or variable in size. If only fixed-sized
messages can be sent, the system-level implementation is straightforward.
 If processes P and Q want to communicate, they must send messages to and receive
messages from each other: a communication link must exist between them.

Here are several methods for logically implementing a link and the send()/receive()
operations:

• Direct or indirect communication


• Synchronous or asynchronous communication
• Automatic or explicit buffering

Direct communication:

In direct communication, each process that wants to communicate must explicitly name
the recipient or sender of the communication. In this scheme, the send() and
receive() primitives are defined as:

• send(P, message)—Send a message to process P.


• receive(Q, message)—Receive a message from process Q.

Indirect communication:
In indirect communication, the messages are sent to and received from mailboxes, or
ports.
• A mailbox can be viewed abstractly as an object into which messages can be placed by
processes and from which messages can be removed.
• Each mailbox has a unique identification.
• For example, POSIX message queues use an integer value to identify a mailbox.
• A process can communicate with another process via a number of different mailboxes,
but two processes can communicate only if they have a shared mailbox.


The send() and receive() primitives are defined as follows:


 send(A, message)—Send a message to mailbox A.
 receive(A, message)—Receive a message from mailbox A.

In this scheme, a communication link has the following properties:


• A link is established between a pair of processes only if both members of the pair have
a shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, a number of different links may exist,
with each link corresponding to one mailbox.

5.Explain in detail about threads and its types. (Nov/Dec 2015)(Nov/Dec-2024)

Thread Overview:
A thread is a basic unit of CPU utilization. It comprises a thread ID, a program
counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals. Figure
2.9 shows Single threaded and multithreaded process model.
• A traditional (or heavyweight) process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task at a
time.

Figure 2.9 Single threaded and Multithreaded process

For example.

 A word processor may have a thread for displaying graphics, another thread for
responding to keystrokes from the user, and a third thread for performing spelling
and grammar checking in the background.
 Such applications can perform several CPU-intensive tasks in parallel across the
multiple computing cores. The Multithreaded server Architecture is shown in Fig 2.10.


 If the web server ran as a traditional single-threaded process, it would be able to


service only one client at a time, and a client might have to wait a very long time for
its request to be serviced.

Figure 2.10 Multithreaded server Architecture

Benefits:
Responsiveness:
 Multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation,
thereby increasing responsiveness to the user.
Resource sharing:
 Processes can only share resources through techniques such as shared memory
and message passing.
Economy:
 Allocating memory and resources for process creation is costly. Because threads
share the resources of the process to which they belong, it is more economical
to create and context-switch threads.
Scalability:
 The benefits of multithreading can be even greater in a multiprocessor
architecture, where threads may be running in parallel on different processing
cores.
Similarity between Threads and Processes:
 Only one thread or process is active at a time
 Within process both execute sequentially
 Both can create children
Differences between Threads and Processes:
 Threads are not independent, processes are.
 Threads are designed to assist each other, processes may or may not do it

Types of Threads:
User Level thread (ULT):

User-level threads are implemented in a user-level thread library; they are not created
using system calls. Thread switching does not need to call the OS or cause an
interrupt to the kernel. The kernel


does not know about user-level threads and manages them as if they were single-
threaded processes.

Advantages of ULT :

 Can be implemented on an OS that doesn't support multithreading.


 Simple representation since thread has only program counter, register set,
stack space.
 Simple to create since no intervention of kernel.
 Thread switching is fast since no OS calls need to be made.

Disadvantages of ULT :
 No or less co-ordination among the threads and Kernel.
 If one thread causes a page fault, the entire process blocks.
Kernel Level Thread (KLT):
The kernel knows about and manages the threads. Instead of a thread table in each
process, the kernel itself has a thread table (a master one) that keeps track of all the
threads in the system. In addition, the kernel also maintains the traditional process
table to keep track of the processes. The OS kernel provides system calls to create and
manage threads.

Advantages of KLT:
 Since kernel has full knowledge about the threads in the system, scheduler may
decide to give more time to processes having large number of threads.
 Good for applications that frequently block.

Disadvantages of KLT :
 Slow and inefficient.
 It requires thread control block so it is an overhead.

6.Explain in detail about multicore programming with its types.

Multicore Programming:

On a system with a single computing core, concurrency merely means that the
execution of the threads will be interleaved over time, because the processing core is
capable of executing only one thread at a time. Figure 2.11 shows concurrent
execution on a single-core system.

Figure 2.11 Concurrent execution on a single core system



On a system with multiple cores, however, concurrency means that the threads can
run in parallel, because the system can assign a separate thread to each core .

Figure 2.12 Parallel execution on a multicore system


A system is parallel if it can perform more than one task simultaneously, as shown in
Fig 2.12. Before the advent of SMP and multicore architectures, most computer
systems had only a single processor.

Programming Challenges

 The trend towards multicore systems continues to place pressure on system designers
and application programmers to make better use of the multiple computing cores.
 Designers of operating systems must write scheduling algorithms that use multiple
processing cores to allow the parallel execution shown in Figure 2.13

Figure 2.13 multiple processing cores

In general, five areas present challenges in programming for multicore systems:

1. Identifying tasks: This involves examining applications to find areas that can be
divided into separate, concurrent tasks.

2. Balance: While identifying tasks that can run in parallel, programmers must also
ensure that the tasks perform equal work of equal value.

3. Data splitting: Just as applications are divided into separate tasks, the data
accessed and manipulated by the tasks must be divided to run on separate cores.

4. Data dependency: The data accessed by the tasks must be examined for
dependencies between two or more tasks. When one task depends on data from
another, programmers must ensure that the execution of the tasks is synchronized to
accommodate the data dependency.


5. Testing and debugging. Testing and debugging such concurrent programs is


inherently more difficult than testing and debugging single-threaded applications.

Types of Parallelism:
In general, there are two types of parallelism:
 Data parallelism and Task parallelism.
Data parallelism - focuses on distributing subsets of the same data across multiple
computing cores and performing the same operation on each core.

Task parallelism - involves distributing not data but tasks (threads) across
multiple computing cores.
Each thread is performing a unique operation. Different threads may be operating on
the same data, or they may be operating on different data.

7.Explain in detail about multithreading models with neat diagram.


Threads may be provided either at the user level, for user threads, or by the
kernel, for kernel threads.

 User threads are supported above the kernel and are managed without kernel
support, whereas kernel threads are supported and managed directly by the
operating system.
 Virtually all contemporary operating systems—including Windows, Linux, Mac
OS X, and Solaris— support kernel threads.
Three common ways of establishing such a relationship:
1. many-to-one model,
2. one-to-one model,
3. many-to many model.
Many-to-One Model
 The many-to-one model maps many user-level threads to one kernel thread, as
shown in Fig 2.14.
 Thread management is done by the thread library in user space, so it is
efficient.


Figure 2.14 Many to one model


One-to-One Model
 The one-to-one model maps each user thread to a kernel thread, as shown in Figure
2.15.
 It provides more concurrency than the many-to-one model by allowing another thread
to run when a thread makes a blocking system call. It also allows multiple threads to
run in parallel on multiprocessors.
 The only drawback to this model is that creating a user thread requires creating the
corresponding kernel thread.
 The one-to-one model allows greater concurrency, but the developer has to be careful
not to create too many threads within an application.
 Linux, along with the family of Windows operating systems, implement the one-to-one
model.

Figure 2.15 One to one model


Many-to-Many Model
 The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads.
 The number of kernel threads may be specific to either a particular application or a
particular machine (an application may be allocated more kernel threads on a
multiprocessor than on a single processor).

Figure 2.16 Many to many model

 One variation on the many-to-many model still multiplexes many user level threads to
a smaller or equal number of kernel threads but also allows a user-level thread to be


bound to a kernel thread. This variation, sometimes referred to as the two-level
model, is shown in Fig 2.17.
 The Solaris operating system supported the two-level model in versions older than
Solaris 9. However, beginning with Solaris 9, this system uses the one-to-one model.

Figure 2.17 Two level model

8. What is a race condition? Explain how a critical section avoids this condition.
Race condition:
 When several processes access and manipulate the same data concurrently,
and the outcome of the execution depends on the particular order in which the
access takes place, this situation is called a race condition.
 To avoid race condition, only one process at a time can manipulate the shared
variable.
Synchronization:
 Process Synchronization means sharing system resources by processes in such
a way that, Concurrent access to shared data is handled thereby minimizing the
chance of inconsistent data. Maintaining data consistency demands
mechanisms to ensure synchronized execution of cooperating processes.
 Process Synchronization was introduced to handle problems that arose while
multiple process executions. Some of the problems are discussed below.
Critical-Section Problem:
 Consider a system consisting of n processes {P0, P1, ..., Pn−1}.
 Each process has a segment of code, called a critical section, in which the
process may be changing common variables, updating a table, writing a file, and
so on.
 The important feature of the system is that, when one process is executing in its
critical section, no other process is allowed to execute in its critical section.
 That is, no two processes are executing in their critical sections at the same
time.
 The critical-section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section.
 The section of code implementing this request is the entry section.
 The critical section may be followed by an exit section.

 The remaining code is the remainder section.

The general structure of a typical process Pi.


do {
    entry section
        critical section
    exit section
        remainder section
} while (true);
The entry section and exit section are enclosed in boxes to highlight these
important segments of code.
A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual exclusion.
 If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress.
 If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in
their remainder sections can participate in deciding which will enter its critical
section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting.
 There exists a bound, or limit, on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to
enter its critical section and before that request is granted.
Two general approaches are used to handle critical sections in operating systems:

1. Preemptive kernels and


2. Non preemptive kernels.
 A preemptive kernel allows a process to be preempted while it is running in
kernel mode.
 A non preemptive kernel does not allow a process running in kernel mode to be
preempted;
9.Explain in detail about mutex locks. (April/May 2017)

Mutex Locks:
 Operating-systems designers build software tools to solve the critical-section
problem. The simplest of these tools is the mutex lock. (mutual exclusion.)
 We use the mutex lock to protect critical regions and thus prevent race
conditions.

 That is, a process must acquire the lock before entering a critical section; it
releases the lock when it exits the critical section.
 The acquire()function acquires the lock, and the release() function releases the
lock.
The definition of acquire() is as follows:

acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

The structure of a process using a mutex lock:

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);
 A mutex lock has a boolean variable available whose value indicates if the lock
is available or not.
 If the lock is available, a call to acquire() succeeds, and the lock is then
considered unavailable.
 A process that attempts to acquire an unavailable lock is blocked until the lock
is released.
The definition of release() is as follows:
release()
{
available = true;
}
 Calls to either acquire() or release() must be performed atomically.
 The main disadvantage of the implementation given here is that it requires busy
waiting.
 While a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the call to acquire().
 In fact, this type of mutex lock is also called a spinlock because the process
“spins” while waiting for the lock to become available.
 This continual looping is clearly a problem in a real multiprogramming system,
where a single CPU is shared among many processes.
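The acquire()/release() protocol above can be illustrated with a short Python sketch (not part of the original notes) using threading.Lock, which offers the same acquire/release interface without the busy waiting of a spinlock:

```python
import threading

counter = 0
lock = threading.Lock()  # the mutex guarding the critical section

def worker(iterations):
    global counter
    for _ in range(iterations):
        lock.acquire()   # acquire(): blocks until the lock is available
        counter += 1     # critical section: update shared data
        lock.release()   # release(): makes the lock available again

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no update to the shared counter is lost
```

Because every update to counter happens while the lock is held, the four threads cannot interleave inside the critical section and no increments are lost.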


10. Explain in detail about semaphores and show how wait() and signal()
operations could be implemented in multiprocessor environments, using the
Test and Set() instruction. (or) How is process synchronization achieved using
semaphores? Give an example. (April/May-2023 & Nov/Dec-2023)

A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations:

1. wait() - originally termed P (from the Dutch proberen, “to test”);
2. signal() - originally called V (from verhogen, “to increment”).

The definition of wait() is as follows:

wait(S)
{
   while (S <= 0)
      ; // busy wait
   S--;
}
The definition of signal() is as follows:
signal(S)
{
S++;
}

 All modifications to the integer value of the semaphore in the wait() and signal()
operations must be executed indivisibly.
 That is, when one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.

Semaphore Usage:
 Operating systems often distinguish between counting and binary semaphores.
 The value of a counting semaphore can range over an unrestricted domain.
 The value of a binary semaphore can range only between 0 and 1.
 Thus, binary semaphores behave similarly to mutex locks.
 Counting semaphores can be used to control access to a given resource
consisting of a finite number of instances.
 The semaphore is initialized to the number of resources available.
 Each process that wishes to use a resource performs a wait() operation on the
semaphore (thereby decrementing the count).
 When a process releases a resource, it performs a signal() operation
(incrementing the count).


 When the count for the semaphore goes to 0, all resources are being used.
 After that, processes that wish to use a resource will block until the count
becomes greater than 0.
To implement semaphores under this definition, we define a semaphore as
follows:
typedef struct
{
int value;
struct process *list;
} semaphore;
 Each semaphore has an integer value and a list of processes list. When a
process must wait on a semaphore, it is added to the list of processes. A signal()
operation removes one process from the list of waiting processes and awakens
that process.
The wait() semaphore operation can be defined as
wait(semaphore *S)
{
   S->value--;
   if (S->value < 0) {
      add this process to S->list;
      block();
   }
}
The signal() semaphore operation can be defined as
signal(semaphore *S)
{
   S->value++;
   if (S->value <= 0) {
      remove a process P from S->list;
      wakeup(P);
   }
}
 The block() operation suspends the process that invokes it.
 The wakeup(P) operation resumes the execution of a blocked process P.
 If a semaphore value is negative, its magnitude is the number of processes
waiting on that semaphore.
 This fact results from switching the order of the decrement and the test in the
implementation of the wait() operation.
 The list of waiting processes can be easily implemented by a link field in each
process control block (PCB). Each semaphore contains an integer value and a
pointer to a list of PCBs.


11. Explain in detail about monitors. (or) Explain the dining-philosophers
critical-section problem solution using monitors. (April/May-2019)

Monitors: A high-level abstraction that provides a convenient and effective mechanism
for process synchronization.

Only one process may be active within the monitor at a time.

monitor monitor-name
{
   // shared variable declarations
   procedure body P1 (…) { …. }
   …
   procedure body Pn (…) { …… }
   initialization code (…) { … }
}
 To allow a process to wait within the monitor, a condition variable must be declared
as condition x, y;
The Schematic view of a monitor is shown in fig 2.18.


Fig 2.18 Schematic view of a monitor

A monitor with condition variables x, y is shown in Fig 2.19. Two operations are
defined on a condition variable:

o x.wait() – a process that invokes the operation is suspended.
o x.signal() – resumes one of the suspended processes (if any).

Fig 2.19 Monitor with condition variable
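A monitor with condition variables can be approximated in Python with one Lock plus Condition objects; the sketch below (an illustration, not from the notes) is a bounded buffer in which not_full and not_empty play the roles of the condition variables x and y:

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock gives mutual exclusion,
    and two Condition objects play the roles of condition variables."""
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.lock = threading.Lock()       # only one thread active "inside"
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            while len(self.buf) == self.capacity:
                self.not_full.wait()       # like x.wait(): suspend the caller
            self.buf.append(item)
            self.not_empty.notify()        # like y.signal(): resume a waiter

    def get(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()
            item = self.buf.popleft()
            self.not_full.notify()
            return item

# Single producer / single consumer demo.
buf = BoundedBuffer(capacity=2)
results = []
consumer = threading.Thread(target=lambda: [results.append(buf.get()) for _ in range(5)])
producer = threading.Thread(target=lambda: [buf.put(i) for i in range(5)])
consumer.start(); producer.start()
consumer.join(); producer.join()
print(results)  # [0, 1, 2, 3, 4]
```

Holding self.lock while inside put() or get() mirrors the monitor rule that only one process may be active within the monitor at a time.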

12.Explain in detail about any two CPU scheduling algorithms with suitable
examples.

There are many different CPU-scheduling algorithms.

First-Come, First-Served Scheduling:

 With this scheme, the process that requests the CPU first is allocated the CPU
first.
 When a process enters the ready queue, its PCB is linked onto the tail of the
queue.
 When the CPU is free, it is allocated to the process at the head of the queue.
 The running process is then removed from the queue.
Consider the following set of processes that arrive at time 0, with the length of the
CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get
the result shown in the following Gantt chart:

| P1 | P2 | P3 |
0    24   27   30

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process
P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 +
24 + 27)/3 = 17 milliseconds.
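The FCFS waiting-time arithmetic above can be checked with a small Python sketch (illustrative only):

```python
def fcfs_waiting_times(bursts):
    """Waiting times under FCFS when all processes arrive at time 0:
    each process waits for the bursts of all processes ahead of it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])      # bursts of P1, P2, P3
print(waits, sum(waits) / len(waits))       # [0, 24, 27] 17.0
```

Reordering the bursts shows why FCFS suffers when long processes arrive first: fcfs_waiting_times([3, 3, 24]) gives an average wait of only 3 ms.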
Shortest-Job-First Scheduling:
 This algorithm associates with each process the length of the process’s next
CPU burst.
 When the CPU is available, it is assigned to the process that has the smallest
next CPU burst.
 If the next CPU bursts of two processes are the same, FCFS scheduling is used
to break the tie.
 It is called the shortest-next-CPU-burst algorithm, because scheduling depends
on the length of the process’s next CPU burst, rather than on its total length.
Consider the following set of processes, with the length of the CPU burst given in
milliseconds:

Process Burst Time


P1 6
P2 8
P3 7
P4 3
Gantt chart:

| P4 | P1 | P3 | P2 |
0    3    9    16   24
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4.
Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
The SJF algorithm can also be preemptive: a preemptive SJF algorithm (shortest-
remaining-time-first scheduling) preempts the running process when a newly arrived
process has a shorter remaining burst. Consider the following four processes, with
the length of the CPU burst given in milliseconds:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

Process P1 is preempted at time 1, since P2's remaining burst (4 ms) is shorter than
P1's remaining burst (7 ms). The average waiting time is
[(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 milliseconds.

Priority Scheduling:

 A priority is associated with each process, and the CPU is allocated to the process
with the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of
the (predicted) next CPU burst.

Consider the following set of processes, assumed to have arrived at time 0 in the
order P1, P2, · · ·, P5, with the length of the CPU burst given in milliseconds:

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 milliseconds.


 Priority scheduling can be either preemptive or non preemptive.
 When a process arrives at the ready queue, its priority is compared with the priority of
the currently running process.
 A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be considered
blocked.
 A priority scheduling algorithm can leave some low priority processes waiting
indefinitely.
 In a heavily loaded computer system, a steady stream of higher-priority processes can
prevent a low-priority process from ever getting the CPU.
Round-Robin Scheduling:
 The round-robin (RR) scheduling algorithm is designed especially for timesharing
systems.
 It is similar to FCFS scheduling, but preemption is added to enable the system to
switch between processes. A small unit of time, called a time quantum or time slice, is
defined.


 A time quantum is generally from 10 to 100 milliseconds in length.


 The ready queue is treated as a circular queue.
 The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.
Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
 If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds.
 Since it requires another 20 milliseconds, it is preempted after the first time quantum,
and the CPU is given to the next process in the queue, process P2.
 Process P2 does not need 4 milliseconds, so it quits before its time quantum expires.
 The CPU is then given to the next process, process P3.
 Once each process has received 1 time quantum, the CPU is returned to process P1
for an additional time quantum

The resulting RR schedule is as follows:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Let’s calculate the average waiting time for this schedule. P1 waits for 6 milliseconds
(10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds.

Thus, the average waiting time is 17/3 = 5.66 milliseconds.
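The round-robin schedule and its average waiting time can be reproduced with a short Python simulation (an illustrative sketch, not part of the notes; all processes are assumed to arrive at time 0):

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Simulate round-robin scheduling for processes that all arrive
    at time 0; return the waiting time of each process."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))   # circular ready queue
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)             # preempted: back to the tail
        else:
            finish[i] = clock
    # waiting time = completion time - burst time (arrival is 0)
    return [f - b for f, b in zip(finish, bursts)]

waits = rr_waiting_times([24, 3, 3], quantum=4)
print(waits)  # [6, 4, 7] -> average 17/3, about 5.67 ms
```

The simulation confirms the hand computation: P1 waits 6 ms, P2 waits 4 ms, and P3 waits 7 ms.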

13.What is a deadlock? What are the necessary conditions for a deadlock to


occur?
Deadlock Definition:
 A process requests resources.
 If the resources are not available at that time, the process enters a wait state.
 Waiting processes may never change state again because the resources they
have requested are held by other waiting processes.
 This situation is called a deadlock.
A process must request a resource before using it and must release the resource
after using it.

1. Request: If the request cannot be granted immediately then the requesting


process must wait until it can acquire the resource.

2. Use: The process can operate on the resource


3. Release: The process releases the resource.
Deadlock Characterization: The four necessary conditions for a deadlock are:
Mutual exclusion:
 At least one resource must be held in a non sharable mode.
 That is only one process at a time can use the resource. If another process
requests that resource, the requesting process must be delayed until the
resource has been released.
Hold and wait:
 A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
No preemption:
 Resources cannot be preempted.
Circular wait:
 A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph:
 It is a directed graph with a set of vertices V and a set of edges E.
 V is partitioned into two types:
1. Processes P = {P1, P2, ..., Pn}
2. Resource types R = {R1, R2, ..., Rm}
 Pi --> Rj : a request edge
 Rj --> Pi : an assignment edge
 Pi is denoted as a circle and Rj as a square.
 Rj may have more than one instance, each represented as a dot within the square.
Example:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1->R1, P2->R3, R1->P2, R2->P1, R3->P3}
 Resource instances:
One instance of resource type R1, two instances of resource type R2, one instance of
resource type R3, and three instances of resource type R4, as shown in fig 2.20.

Fig 2.20 Resource instances


Process states:
Process P1 is holding an instance of resource type R2 and is waiting for an instance
of resource type R1. Process P2 is holding an instance of R1 and R2 and is waiting
for an instance of resource type R3. Process P3 is holding an instance of R3. The
resource-allocation graph with a deadlock is shown in fig 2.21; it contains two cycles:
P1->R1->P2->R3->P3->R2->P1
P2->R3->P3->R2->P2

Fig 2.21 Resource Allocation Graph with a deadlock

14.Explain about the methods used to prevent deadlocks. (Nov/Dec-2023)


Deadlock Definition:
 A process requests resources. If the resources are not available at that time, the
process enters a wait state.
Waiting processes may never change state again because the resources they have
requested are held by other waiting processes. This situation is called a deadlock.

Deadlock Prevention:
 This ensures that the system never enters the deadlock state.
 Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold.
 By ensuring that at least one of these conditions cannot hold, we can prevent
the occurrence of a deadlock.

1. Denying Mutual exclusion:


 Mutual exclusion condition must hold for non-sharable resources.
Example:
 A printer cannot be simultaneously shared by several processes.
 Sharable resources, in contrast, do not require mutually exclusive access; an
example is a read-only file.
 If several processes attempt to open a read-only file at the same time, they can
be granted simultaneous access to the file.
 A process never needs to wait for a sharable resource.

2. Denying Hold and wait


 Whenever a process requests a resource, it does not hold any other resource.
 One technique that can be used requires each process to request and be
allocated all its resources before it begins execution.
 Another technique is before it can request any additional resources, it must
release all the resources that it is currently allocated.
These techniques have two main disadvantages:
 First, resource utilization may be low, since many of the resources may be
allocated but unused for a long time.
 Second, starvation is possible: a process that needs several popular resources
may have to wait indefinitely, because at least one resource that it needs is
always allocated to some other process.

3. Denying No preemption
 If a Process is holding some resources and requests another resource that
cannot be immediately allocated to it. (i.e. the process must wait), then all
resources currently being held are preempted.
 These resources are implicitly released.
 The process will be restarted only when it can regain its old resources.

4. Denying Circular wait


 Impose a total ordering of all resource types and allow each process to
request for resources in an increasing order of enumeration.
 Let R = {R1,R2,...Rm} be the set of resource types.
 Assign to each resource type a unique integer number.
 If the set of resource types R includes tape drives, disk drives and printers,
we might define:
F(tape drive) = 1,
F(disk drive) = 5,
F(printer) = 12.
Each process can request resources only in an increasing order of enumeration.

15.Explain Bankers deadlock avoidance algorithm with an


illustration.(April/May-2023)
Deadlock Avoidance:
 Deadlock avoidance requires that the OS be given in advance additional
information concerning which resources a process will request and use during
its lifetime.
 With this information it can be decided for each request whether or not the
process should wait.


 To decide whether the current request can be satisfied or must be delayed, a


system must consider the resources currently available, the resources currently
allocated to each process and future requests and releases of each process.

Safe State
A state is safe if the system can allocate resources to each process in some order and
still avoid a deadlock. Fig. 2.22 shows the state space of the system. A deadlocked
state is an unsafe state, but not all unsafe states are deadlocks; an unsafe state may
lead to a deadlock.

Fig.2.22 state of the system

Two algorithms are used for deadlock avoidance, namely:
1. Resource-Allocation-Graph Algorithm - single instance of a resource type.
2. Banker’s Algorithm - several instances of a resource type.
Resource allocation graph algorithm:
Claim edge - Claim edge Pi---> Rj indicates that process Pi may request
resource Rj at some time, represented by a dashed directed edge.
 When process Pi request resource Rj, the claim edge Pi -> Rj is converted to a
request edge.
 Similarly, when a resource Rj is released by Pi the assignment edge Rj -> Pi is
reconverted to a claim edge Pi -> Rj
The request can be granted only if converting the request edge Pi -> Rj to an
assignment edge Rj -> Pi does not form a cycle.

Fig.2.23 –Request edge and Assigned edge


 If no cycle exists, then the allocation of the resource will leave the system in a
safe state.
 If a cycle is found, then the allocation will put the system in an unsafe state.

Banker's algorithm Data structures used:

 Available: indicates the number of available resources of each type.


 Max: Max[i, j]=k then process Pi may request at most k instances of resource type
Rj
 Allocation : Allocation[i. j]=k, then process Pi is currently allocated K instances of
resource type Rj
 Need : if Need[i, j]=k then process Pi may need K more instances of resource type Rj
Need [i, j]=Max[i, j]-Allocation[i, j]

Safety algorithm
1. Initialize Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both
   a. Finish[i] = false
   b. Needi <= Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi;
   Finish[i] := true;
   go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm
Let Requesti be the request vector from process Pi.
1. If Requesti <= Needi, go to step 2; otherwise, raise an error condition, since
   the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3; otherwise, Pi must wait, since the
   resources are not available.
3. Available := Available - Requesti;
   Allocationi := Allocationi + Requesti;
   Needi := Needi - Requesti;
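The safety algorithm can be sketched in Python as follows (the data is the classic five-process, three-resource-type textbook instance, used here purely for illustration):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)            # Work := Available
    finish = [False] * n              # Finish[i] := false
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion; reclaim its resources.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None               # no runnable process: unsafe state
    return sequence

# Classic instance: 5 processes, 3 resource types (A, B, C).
available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe(available, allocation, need))  # [1, 3, 0, 2, 4]: one safe sequence
```

Scanning processes in index order yields the safe sequence P1, P3, P0, P2, P4; any safe sequence (not just this one) proves the state is safe.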
16.Explain the two solutions of recovery from deadlock. (or)How can a system
recover from deadlock? [Nov/Dec 2021]

Deadlock Recovery

1. Process Termination:


1. Abort all deadlocked processes.


2. Abort one deadlocked process at a time until the deadlock cycle is eliminated.
After each process is aborted, a deadlock-detection algorithm must be invoked to
determine whether any processes are still deadlocked.

2. Resource Preemption:
Preempt some resources from processes and give these resources to other processes
until the deadlock cycle is broken.

o Selecting a victim: decide which resources and which processes are to be preempted.

o Rollback: if we preempt a resource from a process, it cannot continue its normal
execution, since it is missing some needed resource. We must roll the process back to
some safe state and restart it from that state.
o Starvation: we must guarantee that resources will not always be preempted from
the same process.
17. How can deadlock be detected? Explain. (or) Discuss how deadlocks could be
detected in detail. (APR/MAY 2015 & 2021)
If deadlocks are not avoided, then another approach is to detect when they have
occurred and recover somehow.
Single Instance of Each Resource Type:

 If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph called a wait-for
graph, as shown in fig 2.23.

Fig.2.23 a) Resource Allocation graph b) Corresponding wait for graph

Several Instances of a Resource Type:


Available : Number of available resources of each type
Allocation : number of resources of each type currently allocated to each process
Request : Current request of each process


If Request[i, j] = k, then process Pi is requesting k more instances of resource type Rj.

1. Initialize Work := Available. For i = 1, 2, ..., n, if Allocationi != 0 then
   Finish[i] := false; otherwise, Finish[i] := true.
2. Find an index i such that both
   a. Finish[i] = false
   b. Requesti <= Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi;
   Finish[i] := true;
   go to step 2.
4. If Finish[i] = false for some i, then the system is in a deadlocked state;
   moreover, process Pi is deadlocked.
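For the single-instance case, deadlock detection reduces to finding a cycle in the wait-for graph. The sketch below (illustrative, with hypothetical process names, not from the notes) detects such a cycle with a depth-first search:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph (single instance of each
    resource type) via depth-first search. Every process must appear
    as a key; its value lists the processes it is waiting for."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited, on the DFS stack, done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for[p]:
            if color[q] == GRAY:      # back edge: cycle, hence deadlock
                return True
            if color[q] == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 -> P2 -> P3 -> P1 forms a cycle: the system is deadlocked.
assert has_deadlock({'P1': ['P2'], 'P2': ['P3'], 'P3': ['P1']})
# Breaking the cycle removes the deadlock.
assert not has_deadlock({'P1': ['P2'], 'P2': [], 'P3': ['P2']})
```

In a wait-for graph a cycle is both necessary and sufficient for deadlock, which is why this check (unlike the multi-instance algorithm above) needs no resource counts.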

18. Write short notes on Threading Issues. (or) Discuss the issues to be
considered with multithreaded programs.

Threading Issues

List of issues that are considered during multithreaded programs are


 The Fork and Exec system call
 Cancellation
 Signal Handling
 Thread Pools, Thread Specific data
The Fork and Exec system call:
The fork system call is used to create a separate, duplicate process. In a
multithreaded program, the semantics of the fork and exec system calls change:
some UNIX systems provide two versions of fork, one that duplicates all threads and
one that duplicates only the thread that invoked fork. If a thread invokes the exec
system call, the program specified in the parameter to exec will replace the entire
process, including all threads and LWPs.

Which version of fork to use depends on the application. If exec is called
immediately after forking, then duplicating all threads is unnecessary, as the
program specified in the parameters to exec will replace the process.

Cancellation
Thread cancellation is the task of terminating a thread before it has completed.
A thread that is to be cancelled is often referred to as the target thread. Cancellation
of a target thread may occur in two different scenarios:
 Asynchronous cancellation: One thread immediately terminates the target
thread.


 Deferred cancellation: The target thread can periodically check if it should


terminate, allowing the target thread an opportunity to terminate itself in an
orderly fashion.
Signal Handling
A signal is used in UNIX systems to notify a process that a particular event has
occurred. A signal may be received either synchronously or asynchronously,
depending upon the source and the reason for the event being signalled.

Every signal may be handled by one of two possible handlers:

 A default signal handler


 A user-defined signal handler

Every signal has a default signal handler that is run by the kernel when handling the
signal.
 Deliver the signal to the thread to which the signal applies.
 Deliver the signal to every thread in the process.
 Deliver the signal to certain threads in the process.
 Assign a specific thread to receive all signals for the process.

Thread Pools

The general idea behind a thread pool is to create a number of threads at


process startup and place them into a pool, where they sit and wait for work. When a
server receives a request, it awakens a thread from this pool-if one is available-
passing it the request to service.

Once the thread completes its service, it returns to the pool awaiting more
work. If the pool contains no available thread, the server waits until one becomes
free.
Benefits of thread pools are:
 It is usually faster to service a request with an existing thread than waiting to
create a thread.
 A thread pool limits the number of threads that exist at any one point. This is
particularly important on systems that cannot support a large number of
concurrent threads.


Thread Specific data

Threads belonging to a process share the data of the process. Indeed, this
sharing of data provides one of the benefits of multithreaded programming. However,
each thread might need its own copy of certain data in some circumstances.

19.What are the classical problems of synchronization?

These problems are used for testing nearly every newly proposed synchronization
scheme. The following problems of synchronization are considered as classical
problems:

1.Bounded-buffer (or Producer-Consumer) Problem,

2. Dining-Philosphers Problem,

3. Readers and Writers Problem,

4. Sleeping Barber Problem


i) Bounded-Buffer (Producer-Consumer) Problem:

An inadequate solution could result in a deadlock where both processes are
waiting to be awakened.

The below solution consists of four classes:

1. Q : the queue that you’re trying to synchronize


2. Producer : the threaded object that is producing queue entries
3. Consumer : the threaded object that is consuming queue entries
4. PC : the driver class that creates the single Q, Producer, and Consumer.

// Java implementation of a producer and consumer that use semaphores to control


synchronization.

import java.util.concurrent.Semaphore;

class Q
{
// an item
int item;

// semCon initialized with 0 permits


// to ensure put() executes first
static Semaphore semCon = new Semaphore(0);

static Semaphore semProd = new Semaphore(1);

// to get an item from buffer


void get()
{
try {
// Before consumer can consume an item,
// it must acquire a permit from semCon
semCon.acquire();
}
catch(InterruptedException e) {
System.out.println("InterruptedException caught");
}
// consumer consuming an item
System.out.println("Consumer consumed item : " + item);

// After consumer consumes the item, it releases semProd to notify producer


semProd.release();
}
// to put an item in buffer
void put(int item)
{
try {
// Before producer can produce an item,
// it must acquire a permit from semProd
semProd.acquire();
} catch(InterruptedException e) {
System.out.println("InterruptedException caught");
}

// producer producing an item


this.item = item;
System.out.println("Producer produced item : " + item);
// After producer produces the item,
// it releases semCon to notify consumer
semCon.release();
}
}

// Producer class
class Producer implements Runnable
{

Q q;
Producer(Q q) {
this.q = q;
new Thread(this, "Producer").start();
}

public void run() {


for(int i=0; i < 5; i++)
// producer put items
q.put(i);
}
}

// Consumer class
class Consumer implements Runnable
{
Q q;
Consumer(Q q){
this.q = q;
new Thread(this, "Consumer").start(); }

public void run()


{
for(int i=0; i < 5; i++)
// consumer get items
q.get();
}
}

// Driver class
class PC
{
public static void main(String args[])
{
// creating buffer queue
Q q = new Q();

// starting consumer thread


new Consumer(q);

// starting producer thread


new Producer(q);

}
}
Output:

Producer produced item : 0

Consumer consumed item : 0

Producer produced item : 1

Consumer consumed item : 1

Producer produced item : 2

Consumer consumed item : 2

Producer produced item : 3

Consumer consumed item : 3

Producer produced item : 4

Consumer consumed item : 4

ii) Reader Writer Problem:

Readers writer problem is another example of a classic synchronization problem.


There are many variants of this problem, one of which is examined below.

The Problem Statement

There is a shared resource which should be accessed by multiple processes. There are
two types of processes in this context. They are reader and writer. Any number
of readers can read from the shared resource simultaneously, but only one writer can
write to the shared resource. When a writer is writing data to the resource, no other
process can access the resource. A writer cannot write to the resource if a nonzero
number of readers are accessing the resource at that time.


The Solution

From the above problem statement, it is evident that readers have higher priority than
writer. If a writer wants to write to the resource, it must wait until there are no
readers currently accessing that resource.

 Here, we use one mutex m and a semaphore w. An integer variable read_count is


used to maintain the number of readers currently accessing the resource. The
variable read_count is initialized to 0. A value of 1 is given initially to m and w.
 Instead of having the process to acquire lock on the shared resource, we use the
mutex m to make the process to acquire and release lock whenever it is updating
the read_count variable.

The code for the writer process looks like this

while(TRUE)
{
wait(w);
/* perform the write operation */
signal(w);
}

The code for the reader process looks like this:

while(TRUE)
{
//acquire lock
wait(m);
read_count++;
if(read_count == 1)
wait(w);
//release lock
signal(m);
/* perform the reading operation */
// acquire lock
wait(m);
read_count--;
if(read_count == 0)
signal(w);
// release lock
signal(m);
}
As seen above in the code for the writer, the writer just waits on the w semaphore
until it gets a chance to write to the resource.

 After performing the write operation, it increments w so that the next writer can
access the resource.

 On the other hand, in the code for the reader, the lock is acquired whenever
the read_count is updated by a process.

 When a reader wants to access the resource, first it increments


the read_count value, then accesses the resource and then decrements
the read_count value.

 The semaphore w is used by the first reader which enters the critical section
and the last reader which exits the critical section.

 The reason for this is that when the first reader enters the critical section, the
writer is blocked from the resource. Only new readers can access the resource
now.

 Similarly, when the last reader exits the critical section, it signals the writer
using the w semaphore because there are zero readers now and a writer can
have the chance to access the resource.
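The pseudocode above maps almost line-for-line onto Python's threading primitives; the sketch below (illustrative, not from the notes) uses a Lock for m and a Semaphore for w:

```python
import threading

m = threading.Lock()          # protects read_count
w = threading.Semaphore(1)    # held by the writer, or by the group of readers
read_count = 0
shared = []                   # the shared resource

def writer(item):
    with w:                   # wait(w) ... signal(w)
        shared.append(item)   # perform the write operation

def reader(out):
    global read_count
    with m:                   # acquire lock to update read_count
        read_count += 1
        if read_count == 1:
            w.acquire()       # first reader locks out writers
    out.append(list(shared))  # perform the reading operation
    with m:
        read_count -= 1
        if read_count == 0:
            w.release()       # last reader lets writers back in

writer(1)
writer(2)
seen = []
reader(seen)
print(seen)  # [[1, 2]]
```

Only read_count updates are serialized through m, so any number of readers can be inside the reading section concurrently while w keeps writers out.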

20)Compare and contrast preemptive and non-preemptive


scheduling.(April/May-2021)

Parameter: PREEMPTIVE SCHEDULING vs NON-PREEMPTIVE SCHEDULING

Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process
for a limited time. In non-preemptive scheduling, once resources are allocated to a
process, the process holds them till it completes its burst time or switches to the
waiting state.

Interrupt: A preemptive process can be interrupted in between; a non-preemptive
process cannot be interrupted until it terminates itself or its time is up.

Starvation: In preemptive scheduling, if a high-priority process frequently arrives in
the ready queue, a low-priority process may starve. In non-preemptive scheduling, if
a process with a long burst time is running on the CPU, a later-arriving process with
a shorter CPU burst may starve.

Overhead: Preemptive scheduling has the overhead of scheduling the processes;
non-preemptive scheduling does not.

PREPARED BY : Mr. D. Srinivasan, AP/CSE, Mr. R. Arunkumar,AP/CSE & Mrs. A.Thilagavathi,AP/CSE 54


MAILAM ENGINEERING COLLEGE CS3451 – Introduction to Operating System Univ-II

Parameter PREEMPTIVE SCHEDULING NON-PREEMPTIVE SCHEDULING

Flexibility flexible rigid

Cost cost associated no cost associated

CPU In preemptive scheduling, CPU It is low in non preemptive


Utilization utilization is high. scheduling.

Examples of preemptive
scheduling are Round Robin Examples of non-preemptive
and Shortest Remaining Time scheduling are First Come First
Examples First. Serve and Shortest Job First

21) Give an example of a situation in which ordinary pipes are more suitable than
named pipes, and an example of a situation in which named pipes are more suitable
than ordinary pipes. (April/May-2024)

Example 1: Ordinary Pipes

Situation: You are working on a simple script that takes the output from one
program and pipes it directly into another program in a shell (e.g., in Linux or Unix).
For instance, you might want to take the output of ls and pipe it into grep to filter files
by name:

ls | grep "pattern"

In this case, ordinary pipes are more suitable because:

 The pipe creates a straightforward, temporary communication channel between


two processes in a single, linear flow.
 No need for long-term communication or named identification of processes.
 The data is streamed from one process to another in real-time, without needing
to manage or track multiple processes across the system.
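Inside a program, the same one-way producer-to-consumer stream can be built with an ordinary (anonymous) pipe. A minimal sketch on a Unix-like system using Python's os.pipe and os.fork (the message text is an illustrative choice):

```python
import os

r, w = os.pipe()                  # create an ordinary (anonymous) pipe
pid = os.fork()
if pid == 0:                      # child process: the producer end
    os.close(r)                   # close the unused read end
    os.write(w, b"hello from the pipe\n")
    os.close(w)
    os._exit(0)
else:                             # parent process: the consumer end
    os.close(w)                   # close the unused write end
    data = os.read(r, 1024)       # blocks until the child writes
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode().strip())  # → hello from the pipe
```

The pipe has no name: only the parent and its forked child hold its file descriptors, which is exactly why ordinary pipes suit this short-lived, linear flow.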

Example 2: Named Pipes

Situation: You are developing a system where multiple independent programs need to
communicate with each other asynchronously. For example, you have a producer

program that generates data and a consumer program that processes the data, and
you need the two to communicate across a network or over time. You can use a
named pipe (FIFO), which provides a named communication endpoint that can be
opened by both processes.

The producer writes to a named pipe:

echo "data" > /tmp/my_pipe

And the consumer reads from the same named pipe:

cat /tmp/my_pipe

In this case, named pipes are more suitable because:

 The communication between processes can happen over time, allowing the
producer and consumer to run independently.
 Named pipes allow multiple processes (both reading and writing) to access the
pipe through a shared name, making it easier to manage complex workflows.
 The pipe persists beyond the scope of the process that created it, allowing for
more flexible and asynchronous communication between different processes or
systems.
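The shell commands above can also be reproduced programmatically. A minimal Python sketch of FIFO-based communication (the pipe path and message are illustrative; a thread stands in for the independent producer process):

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "my_pipe")
os.mkfifo(fifo)                  # create the named pipe (FIFO)

def producer():
    # opening a FIFO for writing blocks until some reader opens it
    with open(fifo, "w") as f:
        f.write("data\n")

t = threading.Thread(target=producer)
t.start()
with open(fifo) as f:            # consumer opens the pipe by name
    message = f.readline().strip()
t.join()
os.remove(fifo)                  # the FIFO persists until explicitly removed
print(message)                   # → data
```

Unlike an anonymous pipe, the FIFO exists as a filesystem entry, so completely unrelated processes can rendezvous on its name.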

22) Describe how deadlock is possible with the dining-philosophers problem.
(April/May-2024)

Deadlock in the Dining Philosophers Problem occurs when all philosophers (or
processes) are in a state where they are each waiting for a resource that is held by
another philosopher. Here's how this can happen:

Scenario Leading to Deadlock:

1. The Setup:
o There are five philosophers sitting around a circular table, each needing
two forks (one for the left and one for the right) to eat.
o Each philosopher picks up the fork on their left, and then the fork on
their right, to eat.
2. The Circular Wait Condition:
o Suppose each philosopher picks up the left fork at the same time.
o Now, every philosopher is holding one fork (their left one) and waiting for
the right fork to be available.
o The right fork for each philosopher is held by their neighbor.


o This creates a circular wait where each philosopher is waiting for a fork
that is being held by another philosopher.

Deadlock Conditions:

The four necessary conditions for deadlock are:

1. Mutual Exclusion: Each fork can only be held by one philosopher at a time.
2. Hold and Wait: Philosophers are holding a fork and waiting for the other fork.
3. No Preemption: A fork cannot be forcibly taken away from a philosopher once
they've picked it up.
4. Circular Wait: A cycle of philosophers exists where each philosopher is waiting
for a fork that is being held by the next philosopher in the cycle.

When all philosophers simultaneously pick up their left fork, they are all stuck in a
circular wait for the right fork, creating a deadlock situation where none of them can
proceed to eat. In this situation, no philosopher can ever finish eating because they
will never be able to acquire both forks they need.

How to Prevent Deadlock:

Several strategies can be used to prevent deadlock in the Dining Philosophers


Problem:

 Resource Ordering: Enforce a rule where philosophers always pick up the


lower-numbered fork first and the higher-numbered one second. This breaks
the circular wait condition.
 Timeouts: Philosophers could put down the forks after waiting for a certain
period and retry, preventing them from indefinitely holding one fork.
 Concurrency Control: Use semaphores or mutexes to manage the acquisition
of forks to ensure that philosophers do not enter the waiting state if it would
lead to deadlock.
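The resource-ordering strategy can be sketched directly. In the following Python sketch, each philosopher always locks the lower-numbered fork first, which breaks the circular-wait condition; the number of rounds is an arbitrary illustrative choice:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]  # one fork between each pair
meals = [0] * N

def philosopher(i, rounds=50):
    # resource ordering: always pick up the lower-numbered fork first
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                 # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()                                  # all joins return: no deadlock
print(meals)                                  # → [50, 50, 50, 50, 50]
```

If every philosopher instead grabbed the left fork first, this program could hang forever with each thread holding one lock; the sorted() call is the entire fix.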

23. Explain the difference between long-term, short-term, and medium-term
schedulers. (Nov/Dec-2024)

Basis: Short-Term Scheduler vs Medium-Term Scheduler vs Long-Term Scheduler

1. Alternate name: the short-term scheduler is also called the CPU scheduler; the medium-term scheduler is also called the process-swapping scheduler; the long-term scheduler is also called the job scheduler.

2. Degree of multiprogramming: the short-term scheduler provides lesser control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming; the long-term scheduler controls the degree of multiprogramming.

3. Speed: the short-term scheduler is the fastest of the three; the speed of the medium-term scheduler lies between those of the short-term and long-term schedulers; the long-term scheduler is the slowest.

4. Usage in a time-sharing system: the short-term scheduler plays a minimal role; the medium-term scheduler is a part of the time-sharing system; the long-term scheduler is almost absent or minimal in a time-sharing system.

5. Purpose: the short-term scheduler selects a process from among those that are ready to execute; the medium-term scheduler can reintroduce a swapped-out process into memory so that its execution can be continued; the long-term scheduler selects processes from the job pool and loads them into memory for execution.

6. Process state transition: the short-term scheduler moves a process from ready to running; the medium-term scheduler involves no such state transition; the long-term scheduler moves a process from new to ready.

7. Selection of process: the short-term scheduler selects a new process for the CPU quite frequently; the medium-term scheduler selects a process that does not currently need to be fully in RAM and swaps it out to the swap partition; the long-term scheduler selects a good mix of I/O-bound and CPU-bound processes.


PART-C-CASE STUDY

1. Consider the following snapshot of a system. Execute the Banker's algorithm and
answer the following.
Process Allocation Maximum Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3

Answer the following:


a. What is the content of the need matrix?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (1,0,2), can it be granted immediately?

SOLUTION:
a. The content of the matrix Need = Max-Allocation and is

A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1

b. Check whether the system is in a safe state:

P0: Need <=Available 7 4 3<= 3 3 2 Condition gets false so


So request P0 is not granted.

P1: Need <=Available 1 2 2<= 3 3 2 Condition gets true so

Available=Available+Allocation
Available=3 3 2+ 2 0 0
Available = 5 3 2
So request P1 is granted.

P2: Need <=Available 6 0 0<= 5 3 2 Condition gets false so


So request P2 is not granted.


P3: Need <=Available 0 1 1<= 5 3 2 Condition gets true so


Available=Available+Allocation
Available=5 3 2+ 2 1 1
Available = 7 4 3
So request P3 is granted.

P4: Need <=Available 4 3 1<= 7 4 3 Condition gets true so


Available=Available+Allocation
Available=7 4 3+ 0 0 2
Available = 7 4 5
So request P4 is granted.

P2: Need <=Available 6 0 0<= 7 4 5 Condition gets true so


Available=Available+Allocation
Available=7 4 5+ 3 0 2
Available = 10 4 7
So request P2 is granted.

P0: Need <=Available 7 4 3<= 10 4 7 Condition gets true so


Available=Available+Allocation
Available=10 4 7+ 0 1 0
Available = 10 5 7
So request P0 is granted.

The system is currently in a safe state. The sequence <P1,P3,P4,P2,P0> satisfies the
safety state.

For the request (1,0,2): to decide whether this request can be granted immediately, we
first check that Request <= Available, i.e. (1,0,2) <= (3,3,2), which is true. So the new
state will be

Process   Allocation   Need      Available
          A B C        A B C     A B C
P0        0 1 0        7 4 3     2 3 0
P1        3 0 2        0 2 0
P2        3 0 2        6 0 0
P3        2 1 1        0 1 1
P4        0 0 2        4 3 1

P0: Need <=Available 7 4 3<= 2 3 0 Condition gets false so


So request P0 is not granted.

P1: Need <=Available 0 2 0<= 2 3 0 Condition gets true so


Available=Available+Allocation
Available=2 3 0+ 3 0 2
Available = 5 3 2
So request P1 is granted.

P2: Need <=Available 6 0 0<= 5 3 2 Condition gets false so


So request P2 is not granted.

P3: Need <=Available 0 1 1<= 5 3 2 Condition gets true so


Available=Available+Allocation
Available=5 3 2+ 2 1 1
Available = 7 4 3
So request P3 is granted.

P4: Need <=Available 4 3 1<= 7 4 3 Condition gets true so


Available=Available+Allocation
Available=7 4 3+ 0 0 2
Available = 7 4 5
So request P4 is granted.

P0: Need <=Available 7 4 3<= 7 4 5 Condition gets true so


Available=Available+Allocation
Available=7 4 5+ 0 1 0
Available = 7 5 5
So request P0 is granted.

P2: Need <=Available 6 0 0<= 7 5 5 Condition gets true so


Available=Available+Allocation
Available=7 5 5+ 3 0 2
Available = 10 5 7 So request P2 is granted.

The system is currently in a safe state. The sequence <P1,P3,P4,P0,P2> satisfies the
safety state.
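The safety and request checks above can be automated. A minimal Python sketch of the Banker's algorithm, using the matrices of this exercise (the function name and data layout are illustrative choices):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence, or None if unsafe."""
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # reclaim
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

print(is_safe(available, allocation, need))  # → [1, 3, 4, 0, 2], i.e. <P1,P3,P4,P0,P2>

# request from P1 for (1, 0, 2): grant tentatively, then re-test safety
req, p = [1, 0, 2], 1
if all(r <= n for r, n in zip(req, need[p])) and \
   all(r <= a for r, a in zip(req, available)):
    available     = [a - r for a, r in zip(available, req)]
    allocation[p] = [a + r for a, r in zip(allocation[p], req)]
    need[p]       = [n - r for n, r in zip(need[p], req)]
    print(is_safe(available, allocation, need) is not None)  # → True
```

Note that several safe sequences can exist; the scan order of the loop determines which one is reported.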

2.Consider the following five processes, with the length of CPU burst time given
in milliseconds.

Process Burst Time Arrival Time Priority


P1 10 0 5
P2 6 0 2
P3 7 1 4
P4 4 1 1
P5 5 2 3

Consider the FCFS, SJF, RR (time quantum = 5) and Priority scheduling


algorithms. Illustrate the scheduling using Gantt chart. Which algorithm will
give the minimum average waiting time? Discuss. (APR/MAY 2015 & Nov/Dec-
2023)

Sol:
First-Come, First-Served Scheduling
Gantt chart

P1 P2 P3 P4 P5
0 10 16 23 27 32

Waiting Time
Process Waiting Time
P1 0-0 = 0
P2 10-0 = 10
P3 16-1 = 15
P4 23-1 = 22
P5 27-2 = 25

Average Waiting Time= (0+10+15+22+25)/5



= 72/5
= 14.4 ms

Turnaround Time
Process Turnaround
Time
P1 10+0 = 10
P2 6+10 = 16
P3 7+15 = 22
P4 4+22 = 26
P5 5+25 = 30

Average Turnaround Time= (10+16+22+26+30)/5


= 104/5
= 20.8 ms

Non Preemption Shortest-Job-First Scheduling:


Gantt chart
P2 P4 P5 P3 P1
0 6 10 15 22 32
Waiting Time
Process Waiting Time
P1 22-0 = 22
P2 0-0 = 0
P3 15-1 = 14
P4 6-1 = 5
P5 10-2 = 8

Average Waiting Time= (22+0+14+5+8)/5


= 49/5 = 9.8 ms
Turnaround Time
Process Turnaround
Time
P1 10+22 = 32
P2 6+0 = 6
P3 7+14 = 21
P4 4+5 = 9
P5 5+8 = 13

Average Turnaround Time= (32+6+21+9+13)/5


= 81/5


= 16.2 ms

Preemption Shortest-Job-First Scheduling :


Gantt chart
P2 P4 P2 P5 P3 P1
0 1 5 10 15 22 32
Waiting Time
Process Waiting Time
P1 22-0-0 = 22
P2 5-0-1 = 4
P3 15-1-0 = 14
P4 1-1-0 = 0
P5 10-2-0 = 8

Average Waiting Time= (22+4+14+0+8)/5


= 48/5
= 9.6 ms
Turnaround Time
Process Turnaround
Time
P1 10+22 = 32
P2 6+4 = 10
P3 7+14 = 21
P4 4+0 = 4
P5 5+8 = 13

Average Turnaround Time= (32+10+21+4+13)/5


= 80/5
= 16 ms
Round-Robin Scheduling:
Gantt chart
P1 P2 P3 P4 P5 P1 P2 P3
0 5 10 15 19 24 29 30 32
Waiting Time
Process Waiting Time
P1 24-0-5 = 19
P2 29-0-5 = 24
P3 30-1-5 = 24
P4 15-1-0 = 14
P5 19-2-0 = 17


Average Waiting Time= (19+24+24+14+17)/5


= 98/5
= 19.6 ms
Turnaround Time
Process Turnaround
Time
P1 10+19 = 29
P2 6+24 = 30
P3 7+24 = 31
P4 4+14 = 18
P5 5+17 = 22

Average Turnaround Time= (29+30+31+18+22)/5


= 130/5
= 26 ms
Non Preemption Priority Scheduling:
Gantt chart
P2 P4 P5 P3 P1
0 6 10 15 22 32

Waiting Time
Process Waiting Time
P1 22-0 = 22
P2 0-0 = 0
P3 15-1 = 14
P4 6-1 = 5
P5 10-2 = 8

Average Waiting Time= (22+0+14+5+8)/5


= 49/5
= 9.8 ms
Turnaround Time
Process Turnaround
Time
P1 10+22 = 32
P2 6+0 = 6
P3 7+14 = 21
P4 4+5 = 9
P5 5+8 = 13


Average Turnaround Time= (32+6+21+9+13)/5


= 81/5
= 16.2 ms
Preemption Priority Scheduling

Gantt chart
P2 P4 P2 P5 P3 P1
0 1 5 10 15 22 32
Waiting Time
Process Waiting Time
P1 22-0-0 = 22
P2 5-0-1 = 4
P3 15-1-0 = 14
P4 1-1-0 = 0
P5 10-2-0 = 8

Average Waiting Time= (22+4+14+0+8)/5


= 48/5
= 9.6 ms

Turnaround Time
Process Turnaround
Time
P1 10+22 = 32
P2 6+4 = 10
P3 7+14 = 21
P4 4+0 = 4
P5 5+8 = 13

Average Turnaround Time= (32+10+21+4+13)/5


= 80/5
= 16 ms

Preemptive SJF and preemptive priority scheduling give the minimum average waiting
time (9.6 ms), compared with 9.8 ms for their non-preemptive versions, 14.4 ms for
FCFS and 19.6 ms for Round Robin.
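These averages can be verified mechanically. The following is a minimal Python sketch with an FCFS calculator and a unit-step SRTF (preemptive SJF) simulator; the tuple encoding of the process table is an illustrative choice:

```python
def fcfs_avg_wait(procs):
    """procs: list of (arrival, burst), listed in order of service."""
    t, total_wait = 0, 0
    for arrival, burst in procs:
        t = max(t, arrival)          # CPU may have to wait for the arrival
        total_wait += t - arrival
        t += burst
    return total_wait / len(procs)

def srtf_avg_wait(procs):
    """Preemptive SJF: simulate one time unit at a time."""
    remaining = [b for _, b in procs]
    completion = [0] * len(procs)
    t = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (a, _) in enumerate(procs)
                 if a <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        cur = min(ready, key=lambda i: remaining[i])  # shortest remaining time
        remaining[cur] -= 1
        t += 1
        if remaining[cur] == 0:
            completion[cur] = t
    waits = [completion[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(procs)

procs = [(0, 10), (0, 6), (1, 7), (1, 4), (2, 5)]   # P1..P5 from the table
print(fcfs_avg_wait(procs))   # → 14.4
print(srtf_avg_wait(procs))   # → 9.6
```

Running it confirms the FCFS average of 14.4 ms and the preemptive-SJF average of 9.6 ms computed above.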


3)Consider the following five processes that arrive at time 0, with the length of the
CPU burst time given in milliseconds.
Process CPU BURST TIME
P1 10
P2 29
P3 3
P4 7
P5 12
Consider the FCFS, non preemptive Shortest job First (SJF). Round Rabin(RR)
(quantum=10 milliseconds) scheduling algorithms. Illustrate the scheduling using
Gantt chart. Which algorithm will give the minimum average waiting time?
(APRIL/MAY-2023)
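The worked solution is not reproduced here, so as a sketch the three averages can be computed directly (all processes arrive at time 0; for SJF, processes run in order of increasing burst, and the RR queue is serviced in submission order):

```python
from collections import deque

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}   # all arrive at t = 0

def avg_wait(order):
    """Average waiting time when processes run to completion in the given order."""
    t, total = 0, 0
    for p in order:
        total += t                   # waiting time = start time (arrival is 0)
        t += bursts[p]
    return total / len(bursts)

def rr_avg_wait(quantum):
    remaining = dict(bursts)
    queue = deque(bursts)            # FIFO, in submission order P1..P5
    t, completion = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = t
        else:
            queue.append(p)          # back to the tail of the ready queue
    return sum(completion[p] - bursts[p] for p in bursts) / len(bursts)

print(avg_wait(["P1", "P2", "P3", "P4", "P5"]))   # FCFS → 28.0
print(avg_wait(sorted(bursts, key=bursts.get)))   # SJF  → 13.0
print(rr_avg_wait(10))                            # RR   → 23.0
```

This gives average waiting times of 28 ms for FCFS, 13 ms for SJF and 23 ms for RR (quantum = 10), so SJF yields the minimum average waiting time.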


4. Consider the following set of processes, with the length of the CPU burst time
given in milliseconds.(Nov/Dec-2023)

Process Burst
Time
P1 10
P2 1
P3 2
P4 1
P5 5
a)Draw Gantt’s Chart illustrating the execution of these processes using FCFS, SJF
and Round Robin (with quantum = 1) scheduling techniques.
b) Find the Turnaround time and waiting time of each process using the above
techniques.
To solve this problem, let's start by drawing the Gantt Chart for each scheduling
technique: FCFS, SJF, and Round Robin (with a quantum of 1). Then, we'll calculate
the turnaround time and waiting time for each process using each scheduling
technique.

Given Processes:

P1: Burst Time = 10


P2: Burst Time = 1
P3: Burst Time = 2
P4: Burst Time = 1
P5: Burst Time = 5

PREPARED BY : Mr. D. Srinivasan, AP/CSE, Mr. R. Arunkumar,AP/CSE & Mrs. A.Thilagavathi,AP/CSE 70


MAILAM ENGINEERING COLLEGE CS3451 – Introduction to Operating System Univ-II

a) Gantt Charts:

FCFS (First-Come, First-Served):
| P1 | P2 | P3 | P4 | P5 |
0    10   11   13   14   19

SJF (Shortest Job First):
| P2 | P4 | P3 | P5 | P1 |
0    1    2    4    9    19

Round Robin (Quantum = 1):
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 |
0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   19

(After t = 14 only P1 remains, so its final slice runs from 14 to 19.)

b) Turnaround Time and Waiting Time:

Since all processes arrive at time 0, turnaround time equals completion time, and
waiting time = turnaround time − burst time.

For FCFS:
Turnaround Time:
 P1: 10
 P2: 11
 P3: 13
 P4: 14
 P5: 19
Waiting Time:
 P1: 0
 P2: 10
 P3: 11
 P4: 13
 P5: 14

For SJF:
Turnaround Time:
 P1: 19
 P2: 1
 P3: 4
 P4: 2
 P5: 9
Waiting Time:
 P1: 9
 P2: 0
 P3: 2
 P4: 1
 P5: 4

For Round Robin (Quantum = 1):
Turnaround Time:
 P1: 19
 P2: 2
 P3: 7
 P4: 4
 P5: 14
Waiting Time:
 P1: 9
 P2: 1
 P3: 5
 P4: 3
 P5: 9

5. The processes are assumed to have arrived in the order P1,P2,P3,P4,P5 all at
time 0. Calculate the average turnaround time and maximum waiting time for
per- emptive scheduling algorithm.(Nov/Dec-2023)


6. Consider three processes, all arriving at time zero, with total execution time
of 10, 20 and 30 units respectively. Each process spends the first 20% of
execution time doing I/O, the next 70% of time doing computation and the last
10% of time doing I/O again. The operating system uses a shortest remaining
compute time first scheduling algorithm and schedules a new process either
when the running process gets blocked on I/O or when the running process
finishes its compute burst. Assume that all I/O operations can be overlapped as
much as possible. (Nov/Dec-2023)

(i) Calculate average waiting time and average turnaround time


(ii) Draw Gantt chart of CPU burst
(iii) Calculate CPU idle time.

To calculate the average waiting time and average turnaround time, let's follow the
Shortest Remaining Time First (SRTF) scheduling algorithm for the given processes.

Process P1: Total Execution Time = 10 units

Burst 1: 2 units (20% of 10)


Burst 2: 7 units (70% of 10)
Burst 3: 1 unit (10% of 10)

Process P2: Total Execution Time = 20 units

Burst 1: 4 units (20% of 20)


Burst 2: 14 units (70% of 20)
Burst 3: 2 units (10% of 20)

Process P3: Total Execution Time = 30 units

Burst 1: 6 units (20% of 30)


Burst 2: 21 units (70% of 30)
Burst 3: 3 units (10% of 30)

Now let's simulate the execution. Since each process spends the first 20% of its time
on I/O, the CPU is idle from t = 0 to t = 2. P1 becomes ready for the CPU at t = 2,
P2 at t = 4 and P3 at t = 6. The scheduler always picks the ready process with the
shortest remaining compute time, and reschedules only when the running process
blocks on I/O or finishes its compute burst. The final I/O of each process overlaps
with the computation of the next.

 t = 0–2: all three processes perform their initial I/O; the CPU is idle.
 t = 2–9: P1 computes (7 units). Its final I/O runs from t = 9 to t = 10, so P1
completes at t = 10.
 t = 9: the ready processes are P2 (14 compute units, ready since t = 4) and P3
(21 units, ready since t = 6); P2 has the shorter compute time.
 t = 9–23: P2 computes. Its final I/O runs from t = 23 to t = 25 (overlapped with
P3's computation), so P2 completes at t = 25.
 t = 23–44: P3 computes. Its final I/O runs from t = 44 to t = 47, so P3
completes at t = 47.

(ii) Gantt chart of CPU bursts:

| idle | P1 | P2 | P3 | idle |
0      2    9    23   44     47

(i) Average waiting time and average turnaround time:

Turnaround time = completion time − arrival time (all processes arrive at t = 0):
 P1 = 10, P2 = 25, P3 = 47
 Average turnaround time = (10 + 25 + 47) / 3 = 82 / 3 ≈ 27.33 units

Waiting time = time spent ready but neither computing nor doing I/O:
 P1 = 0 (it is scheduled as soon as its first I/O finishes)
 P2 = 9 − 4 = 5 (ready at t = 4, scheduled at t = 9)
 P3 = 23 − 6 = 17 (ready at t = 6, scheduled at t = 23)
 Average waiting time = (0 + 5 + 17) / 3 = 22 / 3 ≈ 7.33 units

(iii) CPU idle time: the CPU is idle during the initial I/O phase (t = 0 to 2) and during
P3's final I/O (t = 44 to 47), so the total CPU idle time is 2 + 3 = 5 units.

7. Consider the following scenario. There are 4 segments in a program of sizes,


A0=400B, A1=100B, A2=21B and A3=365B. Assume that the main memory
address ranges from 0 to 1999, among which the following are the available free
slots : 50-350, 450-500, 670-1060 and 1200-1850. Answer the followings.
(i) Provide diagrammatic representation of logical memory to physical
memory
(ii) Provide segment map table and draw a suitable memory management unit.
(iii) Find out internal, external and total fragmentation.
(iv) List the segments of following physical address: 1050, 560, 78, 2000
(Nov/Dec-2023)

(i) Diagrammatic representation of logical memory to physical memory (assuming
first-fit allocation of segments to the free slots):

Free slots: 50–350 (300 B), 450–500 (50 B), 670–1060 (390 B) and 1200–1850 (650 B).

 A0 (400 B) does not fit in the first three slots, so it is placed at 1200–1599.
 A1 (100 B) fits in the first slot and is placed at 50–149.
 A2 (21 B) fits in the remainder of the first slot and is placed at 150–170.
 A3 (365 B) fits in the slot starting at 670 and is placed at 670–1034.

Logical memory          Physical memory (0–1999)
A0 (400 B)   ──►  1200–1599
A1 (100 B)   ──►  50–149
A2 (21 B)    ──►  150–170
A3 (365 B)   ──►  670–1034

(ii) Segment map table:

Segment   Base   Limit
A0        1200   400
A1        50     100
A2        150    21
A3        670    365

The memory management unit translates a logical address (segment number s,
offset d) by first checking that d < Limit[s]; if the check passes, the physical address
is Base[s] + d, otherwise the MMU raises an addressing-error trap.

(iii) Fragmentation: segmentation uses variable-sized partitions, so there is no
internal fragmentation (0 B). The total free space is 300 + 50 + 390 + 650 = 1390 B
and the total allocated space is 400 + 100 + 21 + 365 = 886 B, so the external
fragmentation (free memory scattered in holes outside the segments) is
1390 − 886 = 504 B. Total fragmentation = 0 + 504 = 504 B.

(iv) Segments of the given physical addresses:

 1050: lies in the unused tail of the slot 670–1060 (A3 ends at 1034), so it
belongs to no segment.
 560: lies in free memory between 500 and 670, so it belongs to no segment.
 78: lies in segment A1 (base 50), at offset 78 − 50 = 28.
 2000: outside the physical address range 0–1999, so it is an invalid address.

8). Consider the following snapshot of a system. (April/May-2024)

Thread   Allocation    Max         Available
         A B C D       A B C D     A B C D
T0       0 0 1 2       0 0 1 2     1 5 2 0
T1       1 0 0 0       1 7 5 0
T2       1 3 5 4       2 3 5 6
T3       0 6 3 2       0 6 5 2
T4       0 0 1 4       0 6 5 6

Answer the following questions using the banker's algorithm:


(1) What is the content of the matrix Need?
(2) Is the system in a safe state?
(3) If a request from thread T1 arrives for (0,4,2,0) can the
request be granted immediately

(1) What is the content of the matrix Need?

To calculate the Need matrix, we subtract the Allocation matrix from the Max matrix
for each process:

Need = Max − Allocation

For each thread:

 T0: Need = (0, 0, 1, 2) − (0, 0, 1, 2) = (0, 0, 0, 0)
 T1: Need = (1, 7, 5, 0) − (1, 0, 0, 0) = (0, 7, 5, 0)
 T2: Need = (2, 3, 5, 6) − (1, 3, 5, 4) = (1, 0, 0, 2)
 T3: Need = (0, 6, 5, 2) − (0, 6, 3, 2) = (0, 0, 2, 0)
 T4: Need = (0, 6, 5, 6) − (0, 0, 1, 4) = (0, 6, 4, 2)

Thus, the Need matrix is:

Thread   Need (A B C D)
T0       0 0 0 0
T1       0 7 5 0
T2       1 0 0 2
T3       0 0 2 0
T4       0 6 4 2

(2) Is the system in a safe state?

To check if the system is in a safe state, we can use the Banker's Algorithm and
check whether there exists a sequence of processes that can finish (i.e., a safe
sequence). The key steps are:

1. Start with the Available vector: Available = (1, 5, 2, 0).
2. Repeatedly find a thread whose Need can be satisfied by the Available resources;
when it finishes, reclaim its allocation.

 T0's Need = (0, 0, 0, 0) ≤ Available (1, 5, 2, 0), so T0 can finish. Available
becomes (1, 5, 2, 0) + (0, 0, 1, 2) = (1, 5, 3, 2).
 T1's Need = (0, 7, 5, 0) is not ≤ (1, 5, 3, 2) — T1 needs 7 units of B but only 5
are available — so T1 must wait.
 T2's Need = (1, 0, 0, 2) ≤ (1, 5, 3, 2), so T2 can finish. Available becomes
(1, 5, 3, 2) + (1, 3, 5, 4) = (2, 8, 8, 6).
 T1's Need = (0, 7, 5, 0) ≤ (2, 8, 8, 6), so T1 can now finish. Available becomes
(2, 8, 8, 6) + (1, 0, 0, 0) = (3, 8, 8, 6).
 T3's Need = (0, 0, 2, 0) ≤ (3, 8, 8, 6), so T3 can finish. Available becomes
(3, 8, 8, 6) + (0, 6, 3, 2) = (3, 14, 11, 8).
 T4's Need = (0, 6, 4, 2) ≤ (3, 14, 11, 8), so T4 can finish. Available becomes
(3, 14, 11, 8) + (0, 0, 1, 4) = (3, 14, 12, 12).

Since the sequence <T0, T2, T1, T3, T4> lets every thread finish, the system is in a
safe state.

(3) If a request from thread T1 arrives for (0, 4, 2, 0), can the request be granted
immediately?

To determine if the request can be granted, we check the following:

1. The request must be less than or equal to the Need of T1:
   Request(T1) = (0, 4, 2, 0) and Need(T1) = (0, 7, 5, 0).
   Since (0, 4, 2, 0) ≤ (0, 7, 5, 0), this condition is satisfied.

2. The request must be less than or equal to the Available resources:
   Request(T1) = (0, 4, 2, 0) and Available = (1, 5, 2, 0).
   Since (0, 4, 2, 0) ≤ (1, 5, 2, 0), this condition is also satisfied.

3. Finally, we pretend to grant the request and re-run the safety algorithm. The
tentative state is Available = (1, 1, 0, 0), Allocation(T1) = (1, 4, 2, 0) and
Need(T1) = (0, 3, 3, 0). The sequence <T0, T2, T1, T3, T4> is still safe: T0 can
finish with (1, 1, 0, 0) and releases (0, 0, 1, 2), giving (1, 1, 1, 2); then T2's need
(1, 0, 0, 2) ≤ (1, 1, 1, 2), and the remaining threads follow as before.

Since all conditions are satisfied, the request can be granted immediately.


4. Consider the 5 processes, A, B, C, D and E, as shown in the table. The highest
number has the lowest priority. Find the completion order of the 5 processes under
the following policies. (Nov/Dec-2024)

Process Arrival Time Burst Time Priority


A 0 6 3
B 2 4 4
C 4 2 3
D 7 4 2
E 11 2 1

(i) Draw four Gantt charts illustrating the execution of these processes using FCFS,
pre-emptive SJF, non-pre-emptive Priority and RR (Quantum= 2) scheduling.
(ii) Calculate the average waiting and turnaround times for the above
scheduling algorithms.


 Lower priority number means higher priority.


 Higher priority number means lower priority.

(i) Gantt Charts for Different Scheduling Policies

1. First Come First Serve (FCFS) Scheduling

 Processes execute in the order of arrival.

Process Arrival Time Burst Time Start Time Completion Time


A 0 6 0 6
B 2 4 6 10
C 4 2 10 12
D 7 4 12 16
E 11 2 16 18

Gantt Chart:

|A |A |A |A |A |A |B |B |B |B |C |C |D |D |D |D |E |E |
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

2. Shortest Job First (Preemptive) (SJF)

 Always selects the process with the shortest remaining time.

Execution Order:

|A |A |A |A |A |A |C |C |B |B |B |B |E |E |D |D |D |D |
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

(At t = 12, E's remaining time (2) is shorter than D's (4), so E runs before D.)

3. Non-Preemptive Priority Scheduling

 Lower priority number means higher priority.


 If two processes have the same priority, FCFS is used.

Sorted Order:

1. E (Priority 1)
2. D (Priority 2)
3. A, C (Priority 3 → Arrival order decides)
4. B (Priority 4)

Execution Order:

|A |A |A |A |A |A |C |C |D |D |D |D |E |E |B |B |B |B |
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

4. Round Robin (Quantum = 2)

 Execution Order:

|A |A |B |B |A |A |C |C |B |B |A |A |D |D |E |E |D |D |
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

(After C finishes at t = 8 the ready queue is B, A, D: B was preempted at t = 4, A at t = 6, and D arrived at t = 7, so B runs before A.)
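The Round Robin order depends on the queue convention. Under the common assumption that a process arriving during a time slice enters the ready queue ahead of the preempted process, a small simulator (a sketch, not part of the original notes) reproduces the schedule:

```python
from collections import deque

procs = {'A': (0, 6), 'B': (2, 4), 'C': (4, 2), 'D': (7, 4), 'E': (11, 2)}
QUANTUM = 2

def round_robin(procs, q):
    arrivals = sorted(procs.items(), key=lambda kv: kv[1][0])
    remaining = {name: burst for name, (arr, burst) in procs.items()}
    ready, timeline, t, i = deque(), [], 0, 0
    while any(remaining.values()):
        # enqueue everything that has arrived by time t
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            ready.append(arrivals[i][0]); i += 1
        if not ready:
            t = arrivals[i][1][0]; continue
        name = ready.popleft()
        run = min(q, remaining[name])
        timeline.append((name, t, t + run))
        t += run
        remaining[name] -= run
        # arrivals during this slice go in before the preempted process
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            ready.append(arrivals[i][0]); i += 1
        if remaining[name]:
            ready.append(name)
    return timeline

timeline = round_robin(procs, QUANTUM)
print(timeline)
```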

(ii) Calculations of Waiting Time and Turnaround Time

 Turnaround Time (TAT) = Completion Time - Arrival Time

 Waiting Time (WT) = Turnaround Time - Burst Time

Worked out for FCFS (completion times 6, 10, 12, 16, 18):

Process TAT WT
A       6   0
B       8   4
C       8   6
D       9   5
E       7   5

Average TAT = 38 / 5 = 7.6 and Average WT = 20 / 5 = 4.0. The averages for the other three policies are obtained in the same way from their Gantt charts.
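As a cross-check, the FCFS turnaround and waiting times can be computed directly from the arrival, burst, and completion figures in the FCFS table above:

```python
# Figures from the FCFS table: (arrival, burst, completion)
procs = {'A': (0, 6, 6), 'B': (2, 4, 10), 'C': (4, 2, 12),
         'D': (7, 4, 16), 'E': (11, 2, 18)}

tat = {p: c - a for p, (a, b, c) in procs.items()}       # turnaround = completion - arrival
wt  = {p: tat[p] - b for p, (a, b, c) in procs.items()}  # waiting = turnaround - burst

avg_tat = sum(tat.values()) / len(procs)
avg_wt  = sum(wt.values()) / len(procs)
print(avg_tat, avg_wt)
```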


5. Represent and explain the drawback of the typical 'Semaphore" solution to Dining
Philosophers' problem with pseudo code, and also provide a solution to remedy the
drawback.(Nov/Dec-2024)

In the Dining Philosophers Problem, a group of philosophers sits around a circular table with
one fork (chopstick) between each pair. To eat, a philosopher must acquire both the left and
right forks. The problem arises in ensuring no deadlocks or starvation occur while multiple
philosophers try to eat concurrently.

A typical semaphore-based solution involves using a binary semaphore (mutex) for each fork,
ensuring that philosophers pick up forks in a synchronized manner. However, this approach
can lead to deadlock if each philosopher picks up their left fork and waits indefinitely for the
right fork.

Pseudo Code for Semaphore-Based Solution with Deadlock Risk

semaphore forks[N];   // One semaphore per fork

void philosopher(int i) {
    while (true) {
        think();
        wait(forks[i]);               // Pick up left fork
        wait(forks[(i + 1) % N]);     // Pick up right fork
        eat();
        signal(forks[i]);             // Release left fork
        signal(forks[(i + 1) % N]);   // Release right fork
    }
}

Drawback: Deadlock

 If all philosophers pick up their left fork at the same time, no one will be able to acquire
their right fork.
 This leads to circular waiting, where each philosopher is waiting for a resource held by
another, causing deadlock.

Solution: Use an Arbitrator (Waiter) Approach

To prevent deadlock, we introduce a central arbitrator (waiter) who controls access to the
forks. A philosopher must request permission before picking up both forks.


Improved Pseudo Code Using a Waiter

semaphore mutex = 1;   // To ensure mutual exclusion in picking up forks
semaphore forks[N];    // One semaphore per fork

void philosopher(int i) {
    while (true) {
        think();
        wait(mutex);                  // Request permission from the waiter
        wait(forks[i]);               // Pick up left fork
        wait(forks[(i + 1) % N]);     // Pick up right fork
        signal(mutex);                // Release the waiter
        eat();
        signal(forks[i]);             // Release left fork
        signal(forks[(i + 1) % N]);   // Release right fork
    }
}

Advantages:

1. Avoids Deadlock: A philosopher must acquire permission before attempting to pick up


forks. This prevents circular waiting.
2. Prevents Starvation: The mutex ensures that philosophers are served fairly.
3. Ensures Concurrency: Multiple philosophers can still eat simultaneously as long as they
don’t cause a deadlock.

This waiter approach is one of the simplest and most effective solutions to prevent deadlocks in
the Dining Philosophers problem.
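The waiter scheme above can be exercised as a runnable Python sketch: `threading.Semaphore` stands in for the pseudocode semaphores, and the fixed meal count is an assumption added only so the demo terminates:

```python
import threading

N = 5       # philosophers (and forks)
MEALS = 5   # assumed meal count so the demo finishes
forks = [threading.Semaphore(1) for _ in range(N)]
waiter = threading.Semaphore(1)   # the "waiter" mutex from the pseudocode
eaten = [0] * N

def philosopher(i):
    for _ in range(MEALS):
        # think()
        waiter.acquire()              # ask the waiter before touching forks
        forks[i].acquire()            # pick up left fork
        forks[(i + 1) % N].acquire()  # pick up right fork
        waiter.release()              # others may now request forks
        eaten[i] += 1                 # eat()
        forks[i].release()
        forks[(i + 1) % N].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(eaten)
```

Because forks are only acquired while holding the waiter, and a philosopher who is eating holds no waiter, every run completes without deadlock.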


ANNA UNIVERSITY QUESTIONS


APRIL/MAY- 2023
PART A
1. State the critical section problem. (Q.No:59 )
2.Name the four conditions for deadlock. (Q.No: 49)
PART B
12)a)I) With a neat sketch, explain the different states of a process.(5) (Q.No:1)
ii) How process synchronization is achieved using semaphore? Give an example.(8)
(Q.No: 10)
b) Write Bankers algorithm for deadlock avoidance. Explain with an example. (13)
(Q.No: 15)

PART C
16)a) Consider the following five processes that arrive at time 0, with the length of the
CPU burst time given in milliseconds.
Process CPU BURST TIME
P1 10
P2 29
P3 3
P4 7
P5 12
Consider the FCFS, non-preemptive Shortest Job First (SJF), Round Robin (RR)
(quantum=10 milliseconds) scheduling algorithms. Illustrate the scheduling using
Gantt chart. Which algorithm will give the minimum average waiting time?(Q.No:
PART- C- 3)
ANNA UNIVERSITY QUESTIONS
NOV/DEC- 2023
PART A
1.Draw the Life cycle of a Process.(Q.NO:59)
2.Compare Process creation and thread creation in terms of economy.(Q.NO:60)
PART B
1. Consider the following set of processes, with the length of the CPU burst time given
in milliseconds.(Q.NO:4)

Process Burst
Time
P1 10
P2 1
P3 2
P4 1


P5 5
a)Draw Gantt’s Chart illustrating the execution of these processes using FCFS, SJF
and Round Robin (with quantum = 1) scheduling techniques.
b) Find the Turnaround time and waiting time of each process using the above
techniques.
2. What are semaphores? How do they implement mutual exclusion?(Q.NO:10)
3. Explain the techniques used to prevent deadlocks.(Q.NO:14)
PART – C
1. Consider three processes, all arriving at time zero, with total execution time of 10,
20 and 30 units respectively. Each process spends the first 20% of execution time
doing I/O, the next 70% of time doing computation and the last 10% of time doing I/O
again. The operating system uses a shortest remaining compute time first scheduling
algorithm and schedules a new process either when the running process gets blocked
on I/O or when the running process finishes its compute burst. Assume that all I/O
operations can be overlapped as much as possible. (Q.NO:6)

a. Calculate average waiting time and average


turnaround time
b. Draw Gantt chart of CPU burst
c. Calculate CPU idle time.

2. Consider the following scenario. There are 4 segments in a program of sizes, A0=400B,
A1=100B, A2=21B and A3=365B. Assume that the main memory address ranges
from 0 to 1999, among which the following are the available free slots : 50-350, 450-
500, 670-1060 and 1200-1850. Answer the followings.(Q.NO:7)
 Provide diagrammatic representation of logical memory to physical
memory
 Provide segment map table and draw a suitable memory management
unit.
 Find out internal, external and total fragmentation.
 List the segments of following physical address: 1050, 560, 78, 2000

*****


ANNA UNIVERSITY QUESTIONS


APRIL/MAY- 2024

PART A
1. Define the process states.(Q.No:3)
2. What are the threading issues?(Q.No:61)

PART B
1. Describe how processes are created and terminated in an operating system.(Q.No:3)
2. Give an example of a situation in which ordinary pipes are more suitable than named pipes and an
example of a situation in which named pipes are more suitable than ordinary pipes.(Q.No:21)
3. Describe how deadlock is possible with the dining-philosopher's problem.(Q.No:22)
4. Consider the following snapshot of a system.(CASE STUDY:Q.No:8)

        Allocation   Max       Available
        A B C D      A B C D   A B C D
T0      0 0 1 2      0 0 1 2   1 5 2 0
T1      1 0 0 0      1 7 5 0
T2      1 3 5 4      2 3 5 6
T3      0 6 3 2      0 6 5 2
T4      0 0 1 4      0 6 5 6

Answer the following questions using the banker's algorithm:


(1) What is the content of the matrix Need?
(2) Is the system in a safe state?
(3) If a request from thread T1 arrives for (0,4,2,0) can the request be granted
immediately

ANNA UNIVERSITY QUESTIONS


NOV/DEC- 2024

PART A
1. What do you mean by cooperating process?(Q.No:62)
2. Define IPC.(Q.No:11)
PART B
1. Explain the difference between long-term, short-term, and medium- term schedulers.(Q.No:23)
2. Discuss about threads.(Q.No:5)
3. Explain deadlock prevention and avoidance.
Given 3 processes, A, B and C, three resources, x, y, and z and following events,

(i) A requests x
(ii) A requests y
(iii) B requests y
(iv) B requests z
(v) C requests z
(vi) C requests x
(vii) C requests y
Assume that requested resources should always be allocated to the request process if available.
Draw the resource allocation graph for the sequences. Also, mention whether it is a deadlock.
If it is, how to recover from the deadlock.

PART C
1. Consider the 5 processes, A, B, C, D and E, as shown in the table. The highest number has low
Priority. Find The completion order of the 5 processes under the policies.(Q.No.4)

Process Arrival Time Burst Time Priority


A 0 6 3
B 2 4 4
C 4 2 3
D 7 4 2
E 11 2 1

(iii) Draw four Gantt charts illustrating the execution of these processes using FCFS, pre-
emptive SJF, non-pre-emptive Priority and RR (Quantum= 2) scheduling.
(iv) Calculate the average waiting and turnaround times for the above
scheduling algorithms.

2. Represent and explain the drawback of the typical 'Semaphore" solution to Dining Philosophers'
problem with pseudo code, and also provide a solution to remedy the drawback.(Q.No.5)
*****



MAILAM ENGINEERING COLLEGE CS3451- Introduction to Operating System – UNIT III

UNIT III MEMORY MANAGEMENT

Main Memory - Swapping - Contiguous Memory Allocation – Paging - Structure of the


Page Table - Segmentation, Segmentation with paging; Virtual Memory - Demand Paging
– Copy on Write - Page Replacement - Allocation of Frames –Thrashing.

2 MARKS

1. Define logical address and physical address.


An address generated by the CPU is referred as logical address. An address seen by the
memory unit that is the one loaded into the memory address register of the memory is
commonly referred to as physical address.

2. What is logical address space and physical address space?


The set of all logical addresses generated by a program is called a logical address space;
the set of all physical addresses corresponding to these logical addresses is a physical
address space.

3. What is the main function of the memory-management unit?


The runtime mapping from virtual to physical addresses is done by a hardware device
called a memory management unit (MMU).

4. Define dynamic loading.


To obtain better memory-space utilization dynamic loading is used. With dynamic
loading, a routine is not loaded until it is called. All routines are kept on disk in a
relocatable load format. The main program is loaded into memory and executed. If the
routine needs another routine, the calling routine checks whether the routine has been
loaded. If not, the relocatable linking loader is called to load the desired program into
memory.

5. What are overlays? [Nov/Dec2012]


To enable a process to be larger than the amount of memory allocated to it, overlays are
used. The idea of overlays is to keep in memory only those instructions and data that
are needed at a given time. When other instructions are needed, they are loaded into
space occupied previously by instructions that are no longer needed.

6. Define swapping. [April/May-2023]


A process needs to be in memory to be executed. However, a process can be swapped
temporarily out of memory to a backing store and then brought back into memory for
continued execution. This process is called swapping.


7. How is memory protected in a paged environment?


Protection bits that are associated with each frame accomplish memory
protection in a paged environment. The protection bits can be checked to verify that no
writes are being made to a read-only page.

8. What do you mean by compaction?


Compaction is a solution to external fragmentation. The memory contents are
shuffled to place all free memory together in one large block. It is possible only if
relocation is dynamic, and is done at execution time.

9. What are pages and frames?


Paging is a memory management scheme that permits the physical-address
space of a process to be non contiguous. In the case of paging, physical memory is
broken into fixed-sized blocks called frames and logical memory is broken into blocks of
the same size called pages.

10. What is the use of valid-invalid bits in paging?


When the bit is set to valid, the associated page is in the process's logical address space and is thus a legal page. If the bit is set to invalid, the page is not in the process's logical address space. Illegal addresses are trapped using the valid-invalid bit.

11. What is the basic method of segmentation?


Segmentation is a memory management scheme that supports the user view of memory. A logical address space is a collection of segments. The logical address consists of a segment number and an offset. If the offset is legal, it is added to the segment base to produce the physical memory address of the desired byte. The process of mapping the logical address space to the physical address space using a segment table is known as segmentation.

12. What is virtual memory?


Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. It is the separation of user logical memory from
physical memory. This separation provides an extremely large virtual memory, when
only a smaller physical memory is available.

13. What is Demand paging? Write its advantages. [Nov/Dec-2021 & Nov/Dec-2023]
Virtual memory is commonly implemented by demand paging. In demand paging, the
pager brings only those necessary pages into memory instead of swapping in a whole


process. Thus, it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.
Advantages of demand paging.
 Only loads pages that are demanded by the executing process.
 As there is more space in main memory, more processes can be loaded reducing
context switching time which utilizes large amounts of resources.
 Less loading latency occurs at program startup, as less information is accessed from
secondary storage and less information is brought into main memory.

14. Define lazy swapper.


Rather than swapping the entire process into main memory, a lazy swapper is used.
A lazy swapper never swaps a page into memory unless that page will be needed.

15. What is a pure demand paging?


When the operating system starts executing a process with no pages in memory, it sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page; the process immediately faults for that page. After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory. At that point, it can execute with no more faults. This scheme is pure demand paging: never bring a page into memory until it is required.

16. Define effective access time.


Let p be the probability of a page fault (0<=p<=1). The value of p is expected to be
close to 0; that is, there will be only a few page faults. The effective access time is
Effective access time = (1-p) * ma + p * page fault time. ( ma: memory-access time).
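As a worked example (the 200 ns access time and 8 ms fault-service time are assumed figures), even a fault rate of 1 in 1,000 accesses slows memory down by roughly a factor of 40:

```python
ma = 200                    # memory-access time in ns (assumed figure)
fault_service = 8_000_000   # page-fault service time: 8 ms = 8,000,000 ns (assumed)
p = 0.001                   # one fault per 1,000 accesses

eat = (1 - p) * ma + p * fault_service
print(eat)  # approximately 8199.8 ns -- the fault term dominates
```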

17. Define secondary memory.


This memory holds those pages that are not present in main memory. The
secondary memory is usually a high-speed disk. It is known as the swap device, and the
section of the disk used for this purpose is known as swap space.

18. What are the various page replacement algorithms used for page replacement?
 FIFO page replacement
 Optimal page replacement
 LRU page replacement
 LRU approximation page replacement
 Counting based page replacement
 Page buffering algorithm
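The first two algorithms differ only in the eviction rule. A minimal simulation of FIFO and LRU on a classic reference string with 3 frames illustrates the difference in fault counts:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for r in refs:
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict the oldest-loaded page
            frames.add(r)
            queue.append(r)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0   # insertion order tracks recency
    for r in refs:
        if r in frames:
            frames.move_to_end(r)       # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used
            frames[r] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]  # classic reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))
```

On this string FIFO incurs 15 page faults while LRU incurs 12, showing why recency information helps.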


19. What are the major problems to implement demand paging? (Nov/Dec 2015)
The two major problems in implementing demand paging are developing
 a frame-allocation algorithm and a page-replacement algorithm

20. Define Thrashing. How to limit the effect of thrashing? (Apr/May 2015)
(April/May-2019)
A process is thrashing if it spends more time paging (pages repeatedly brought into and taken out of memory) than executing. We can limit the effects of thrashing by using a local replacement algorithm.

To prevent thrashing, we must provide a process as many frames as it needs. The


working-set strategy starts by looking at how many frames a process is actually using.
This approach defines the locality model of process execution.

21. Define roll out and roll in.


If a higher-priority process arrived and wants service, the memory manager can swap
out the lower-priority process so that it can load and execute the higher-priority process.
When the higher-priority process finishes, the lower-priority process can be swapped
back in and continued. This variant of swapping is sometimes called roll out, Rollin.

22. Write notes on contiguous storage allocation.


The memory is usually divided into two partitions: one for resident operating
system, and one for the user processes. The operating system may be placed in either
low or high memory.
The major factor affecting this decision is the location of interrupt vector. Since the
interrupt vector is often in low memory, programmers usually place the operating
system in low memory as well. In this contiguous memory allocation, each process is
contained in a single contiguous section of memory.


23. What is the difference between Internal and External Fragmentation?


(Apr/May-2024)
S.No Internal Fragmentation External fragmentation
1 Internal Fragmentation occurs when a External fragmentation occurs
fixed size memory allocation technique when a dynamic memory
is used allocation technique is used
2 Internal fragmentation occurs when a External fragmentation is due to
fixed size partition is assigned to a the lack of enough adjacent space
program/file with less size than the after loading and unloading of
partition making the rest of the space programs or files for some time
in that partition unusable because then all free space is
distributed here and there
3 Internal fragmentation can be minimized External fragmentation can be
by having partitions of several sizes prevented by mechanisms such as
and assigning a program based on the segmentation and paging.
best fit. However, still internal
fragmentation is not fully eliminated
4 When the allocated memory may be It exists when enough total
slightly memory space exists to satisfy a
larger than the requested memory, the request, but it is not contiguous;
difference between these two numbers storage is fragmented into a large
is internal fragmentation. number of small holes.

24. Define: Translation look-aside buffer.


The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts:
a key (or tag) and a value. When the associative memory presented with an item, it is
compared with all keys simultaneously. If the item is found, the corresponding value
field is returned. The search is fast; the hardware is expensive. Typically, the number of
entries in a TLB is small, often numbering between 64, and 1024.

25. What is a hit ratio?


The percentage of times that a particular page number is found in the TLB is called the
hit ratio.
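The hit ratio feeds directly into the effective memory-access time: a TLB hit needs one memory access, a miss needs two (one for the page table, one for the data). With assumed timings of a 20 ns TLB lookup and a 100 ns memory access:

```python
tlb = 20          # TLB lookup time in ns (assumed figure)
mem = 100         # one memory access in ns (assumed figure)
hit_ratio = 0.80

# hit: TLB lookup + one memory access; miss: TLB lookup + two memory accesses
eat = hit_ratio * (tlb + mem) + (1 - hit_ratio) * (tlb + 2 * mem)
print(eat)  # approximately 140 ns
```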

26. Define page replacement approach.


Page replacement uses the following approach:
a. Find the location of the desired page on the disk.
b. Find a free frame:
i. If there is a free frame, use it.
ii. If there is no free frame, use a page replacement algorithm to Select a victim
frame.

iii. Write the victim page to the disk; change the page and frame tables
accordingly.
c. Read the desired page into the (newly) free frame; change the page and frame tables.
d. Restart the user process.

27. Define Slab Allocation and its states.


A second strategy for allocating kernel memory is known as slab allocation. A slab is
made up of one or more physically contiguous pages.
In Linux, a slab may be in one of three possible states:
1. Full. All objects in the slab are marked as used.
2. Empty. All objects in the slab are marked as free.
3. Partial. The slab consists of both used and free objects.

28. Name two differences between logical and physical addresses. (May/June
2016) (Nov/Dec-2019)
S.N Logical Address Physical Address
1 Logical address does not refer to an Physical address that refers to an
actual existing address; rather, it refers actual physical address in memory
to an abstract address in an abstract
address space.
2 A logical address is generated by the CPU Physical addresses are generated
and is translated in to a physical address by the MMU.
by the memory management unit (MMU)
(when address binding occurs at
execution time)

29. How does the system detect thrashing? (May/June 2016)


Thrashing is caused by under allocation of the minimum number of pages required by a
process, forcing it to continuously page fault. The system can detect thrashing by
evaluating the level of CPU utilization as compared to the level of multiprogramming. It
can be eliminated by reducing the level of multiprogramming.

30. What is thrashing? How to resolve it? (April/May-2021, 2023 & Nov/Dec-2023)
With a computer, thrashing or disk thrashing describes when a hard drive is being
overworked by moving information between the system memory and virtual
memory excessively. Thrashing occurs when the system does not have enough memory,
the system swap file is not properly configured, too much is running at the same time,
or has low system resources. When thrashing occurs, you will notice the computer hard
drive always working, and a decrease in system performance. Thrashing is serious
because of the amount of work the hard drive has to do, and if left unfixed can cause an
early hard drive failure.

Ways to eliminate thrashing:


To resolve hard drive thrashing, you can do any of the
suggestions below.
1. Increase the amount of RAM in the computer.
2. Decrease the number of programs being run on the computer.
3. Adjust the size of the swap file.

31.Under what circumstances do page faults occur? State the actions taken
by the operating system when a page fault occurs. (Nov/Dec-2019)
(Nov/Dec-2024)
A page fault occurs when an access to a page that has not been brought into main
memory takes place. The operating system verifies the memory access, aborting the
program if it is invalid. If it is valid, a free frame is located and I/O is requested to read
the needed page into the free frame. Upon completion of I/O, the process table and page
table are updated and the instruction is restarted.
32. When does thrashing occur? (Nov/Dec-2021)
Thrashing is a condition or a situation when the system is spending a major portion of
its time in servicing the page faults, but the actual processing done is very negligible.
The basic concept involved is that if a process is allocated too few frames, then there will
be too many and too frequent page faults.
33. What is the purpose of paging the page tables? (Apr/May-2024)
A page table stores the mappings between virtual addresses and physical addresses. For large address spaces the page table itself becomes too large to keep in one contiguous block of memory, so the page table is itself paged (hierarchical paging), allowing it to be stored in scattered frames.
34. Define the benefits of virtual memory.(Apr/May-2024)
 It can handle twice as many addresses as main memory.
 It enables more applications to be used at once.
 It frees applications from managing shared memory and saves users from having
to add memory modules when RAM space runs out.

35. What is address binding? (Nov/Dec-2024)

The mapping of data and computer instructions to actual memory locations is known as
address binding. In computer memory, logical and physical addresses are employed.
Additionally, the OS handles this aspect of computer memory management on behalf of
programs that need memory access.


PART-B
1)Explain about Swapping in detail. (Nov/Dec-19)

A process must be in memory to be executed. A process, however, can be swapped


temporarily out of memory to a backing store and then brought back into memory for
continued execution.

Standard Swapping:

 Standard swapping involves moving processes between main memory and a


backing store.

Backing store:

 The backing store is commonly a fast disk.


 It must be large enough to accommodate copies of all memory images for all users,
and it must provide direct access to these memory images.
 The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory and that are ready to run, as shown in fig 3.1.

Fig 3.1 Swapping of two processes using a disk as a backing store

Dispatcher:

Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The
dispatcher checks to see whether the next process in the queue is in memory. If it is not,
and if there is no free memory region, the dispatcher swaps out a process currently in
memory and swaps in the desired process.

 It then reloads registers and transfers control to the selected process


 The context-switch time in such a swapping system is fairly high.


2)Briefly Explain about Contiguous memory allocation with neat diagram.
[April/May-2021]

The main memory must accommodate both the operating system and the various user
processes. Contiguous memory allocation is used to allocate different parts of the main
memory in the most efficient way possible.

The memory is divided into two partitions:


 Resident operating system,
 User processes.
 The operating system is placed in either low memory or high memory.
 The major factor affecting this decision is the location of the interrupt vector.

Interrupt vector:
 The interrupt vector is often in low memory; programmers usually place the
operating system in low memory as well.
 Several user processes to reside in memory at the same time.
 In this contiguous memory allocation, each process is contained in a single
contiguous section of memory.

Fig 3.2 Hardware Support for relocation and limit register


Memory Protection:
Relocation Register:

The relocation register contains the value of the smallest physical address. (Example,
relocation = 100040)

Limit Register:

 The limit register contains the range of logical addresses. (Example, limit = 74600).

 With relocation and limit registers, each logical address must be less than the limit
register is shown in fig 3.2.

 The MMU maps the logical address dynamically by adding the value in the relocation
register. This mapped address is sent to memory is shown in fig 3.3.

Fig 3.3 Dynamic relocation using a relocation register

Memory Allocation:

Fixed-partition scheme (called MFT):

 The fixed-partition scheme (called MFT) was used primarily in early batch environments.


 Many of the ideas presented here are also applicable to a time-sharing environment
in which pure segmentation is used for memory management.

Variable-partition scheme:
 The operating system keeps a table indicating which parts of memory are available
and which are occupied.
 When a process is allocated space, it is loaded into memory.
 When a process terminates, it releases its memory, which the operating system
may then fill with another process from the input queue.

Dynamic storage allocation problem:

Dynamic storage allocation problem, concerns how to satisfy a request of size n from a
list of free holes. Solutions for Dynamic storage allocation problem:
 First fit: Allocate the first hole that is big enough.
 Best fit: Allocate the smallest hole that is big enough.
 Worst fit: Allocate the largest hole.
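The three strategies can be sketched as simple selections over a free-hole list (the hole sizes and the 212-unit request below are illustrative figures, not from the notes):

```python
def first_fit(holes, req):
    # First hole that is big enough
    for i, h in enumerate(holes):
        if h >= req:
            return i
    return None

def best_fit(holes, req):
    # Smallest hole that is big enough
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return min(fits)[1] if fits else None

def worst_fit(holes, req):
    # Largest hole
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
```

For a 212-unit request, first fit picks the 500 hole, best fit the 300 hole, and worst fit the 600 hole.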
Fragmentation

External fragmentation:

 Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation.
 External fragmentation exists when there is enough total memory space to satisfy
a request but the available spaces are not contiguous: storage is fragmented into a
large number of small holes.
Internal fragmentation: The memory allocated to a process may be slightly larger than
the requested memory. The difference between these two numbers is internal
fragmentation—unused memory that is internal to a partition.
Compaction:
 One solution to the problem of external fragmentation is compaction.
 The goal is to shuffle the Memory contents so as to place all free memory together
in one large block.
 Compaction is not always possible, if relocation is static and is done at assembly
or load time, compaction cannot be done.

3. What is paging? Explain the concept of Paging and Translation Look-aside Buffer with example. (or) Explain the need and concept of the paging technique in memory management. (or) Explain the paging scheme of memory management. [April/May 2021, 2023] [Nov/Dec-2021 & Nov/Dec-2023] [Nov/Dec-2024]

Paging:
 Segmentation permits the physical address space of a process to be
noncontiguous. Paging is another memory-management scheme that offers this
advantage.


 The key difference is that paging avoids external fragmentation and the need for compaction, whereas segmentation does not.
 Paging also solves the problem of fitting memory chunks of varying sizes onto the backing store.
Advantages:
 Paging in its various forms is used in most operating systems.
 Paging is implemented through cooperation between the operating system and the
computer hardware.

Fig 3.4 Paging Hardware


Basic method:
 Paging involves breaking physical memory into fixed-sized blocks called frames and
breaking logical memory into blocks of the same size called pages.
 When a process is to be executed, its pages are loaded into any available memory
frames from their source.
 The backing store is divided into fixed-sized blocks that are the same size as the
memory frames or clusters of multiple frames.
The diagram in Fig 3.4 shows the hardware support for paging. Every address
generated by the CPU is divided into two parts:
Page number (p):
 The page number is used as an index into a page table.
 The page table contains the base address of each page in physical memory.
Page offset (d):
 The base address is combined with the page offset to define the physical memory
address that is sent to the memory unit.

Paging model of Logical and physical memory:

Fig 3.5 Paging model of logical and physical memory

The paging model of memory is shown in Figure 3.5. The page size (like the frame size)
is defined by the hardware. For example, page 0 is stored in frame 1, as shown in the
page table and physical memory.
The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per
page, depending on the computer architecture.

 The Figure 3.6. Shows that the page size is 4 bytes and the physical memory contains
32 bytes (8 pages).
 Logical address 0 is page 0, offset 0. Indexing into the page table, find that page 0 is
in frame 5.
 Thus, logical address 0 maps to physical address 20 (= (5 x 4) + 0).
 Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3).
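The mapping in the example above can be checked with a few lines (a minimal sketch, assuming the page-table contents shown in Fig 3.6: page 0 in frame 5, page 1 in frame 6, page 2 in frame 1, page 3 in frame 2):

```python
PAGE_SIZE = 4                    # bytes per page, as in the Fig 3.6 example
page_table = [5, 6, 1, 2]        # frame numbers assumed from the figure

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)        # page number, page offset
    return page_table[p] * PAGE_SIZE + d     # frame base + offset

print(translate(0))   # page 0, offset 0 -> frame 5 -> 20
print(translate(3))   # page 0, offset 3 -> 23
print(translate(4))   # page 1, offset 0 -> frame 6 -> 24
```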
Some CPUs and kernels even support multiple page sizes. Solaris uses 8 KB and 4 MB
page sizes, depending on the data stored by the pages. Researchers are now developing
support for variable, on-the-fly page sizes.


Fig. 3.6 Paging example for a 32 –byte memory with 4-byte page

 The operating system must keep track of frame allocation: which frames are allocated,
which frames are available, how many total frames there are, and so on. This information
is generally kept in a data structure called a frame table; the frame table and free-frame
list are shown in fig 3.7.

Fig 3.7 Free frames (a) before allocation and (b) after allocation


Page-table base register (PTBR):


The page table is kept in main memory, and a page-table base register (PTBR) points
to the page table.
Translation look-aside buffer (TLB):
The standard solution to the problem of slow page-table access is to use a special,
small, fast-lookup hardware cache called the translation look-aside buffer (TLB).
The TLB is associative, high-speed memory.
Each entry in the TLB consists of two parts:
 Key (or tag)
 Value.

TLB miss:
If the page number is not in the TLB (known as a TLB miss), a memory reference to the
page table must be made.
The below diagram 3.8 shows,
 The page number and frame number is presented in the TLB.
 If the TLB is already full of entries, the operating system must select one for
replacement.
 Replacement policies range from least recently used (LRU) to random.

Fig 3.8 Paging Hardware with TLB

Wired down:
 Some TLBs allow entries to be wired down, meaning that they cannot be removed
from the TLB. Typically, TLB entries for kernel code are often wired down.


Address-space identifiers (ASIDs):

 The TLBs store address-space identifiers (ASIDs) in each entry of the TLB.
 An ASID uniquely identifies each process and is used to provide address space
protection for that process.
 Every time a new page table is selected (for instance, each context switch), the
TLB must be flushed (or erased) to ensure that the next executing process does not
use the wrong translation information.

Hit ratio: The percentage of times that a particular page number is found in the TLB is
called the hit ratio.
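The hit ratio determines the effective memory-access time. Assuming, for illustration, a 20-nanosecond TLB search and a 100-nanosecond memory access (values chosen for this sketch, not given above), a hit costs one memory access and a miss costs two (one for the page table, one for the byte itself):

```python
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    hit  = tlb_ns + mem_ns          # TLB hit: one memory access
    miss = tlb_ns + 2 * mem_ns      # TLB miss: page-table access + the byte itself
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(effective_access_time(0.80), 1))  # 140.0 ns at an 80% hit ratio
print(round(effective_access_time(0.98), 1))  # 122.0 ns at a 98% hit ratio
```

A higher hit ratio brings the effective access time closer to the ideal 120 ns of a pure TLB hit.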

Protection:
 Memory protection in a paged environment is accomplished by protection bits
that are associated with each frame. Normally, these bits are kept in the page table.

 One bit can define a page to be read-write or read-only.

Valid bit: When this bit is set to "valid," the associated page is in the process's logical
address space and is thus a legal (or valid) page.

Invalid bit: If the bit is set to "invalid," the page is not in the process's logical-address
space. Illegal addresses are trapped by using the valid-invalid bit.

Page-table length register (PTLR): Some systems provide hardware, in the form of a
page-table length register (PTLR), to indicate the size of the page table.
 This value is checked against every logical address to verify that the address is in the
valid range for the process.

Reentrant code (or pure code):


 Reentrant code (or pure code) is non-self-modifying code: if the code is reentrant,
it never changes during execution.
 Because it never changes, reentrant code can be shared, as shown in Figure 3.9.
Here a three-page editor (each page of size 50 KB; the large page size is used to
simplify the figure) is being shared among three processes.
 Each process has its own data page.


Fig 3.9 Sharing of code in a paging environment

 Only one copy of the editor needs to be kept in physical memory. Each user's page
table maps onto the same physical copy of the editor, but data pages are mapped onto
different frames.

3) Write about the techniques for structuring the page table. (NOV/DEC 2023 &
Nov/Dec-2024)

Need for Paging

Let's consider a process P1 of size 2 MB and a main memory that is divided
into three partitions. Out of the three partitions, two are holes of size 1
MB each.

P1 needs 2 MB space in the main memory to be loaded. We have two holes of 1


MB each but they are not contiguous.

Although 2 MB of space is available in the main memory in the form of those
holes, it remains useless until it becomes contiguous. This is a serious
problem to address.

We need to have some kind of mechanism which can store one process at different
locations of the memory.

The Idea behind paging is to divide the process in pages so that, we can store
them in the memory at different holes. We will discuss paging with the examples
in the next sections.


Fig 3.9(a) Need for Paging

The most common techniques used for the structuring the page table.

Hierarchical Paging:

Most modern computer systems support a large logical-address space (2^32 to 2^64). In
such an environment, the page table itself becomes excessively large.

Example:

 Consider a system with a 32-bit logical-address space. If the page size in such a
system is 4 KB (2^12), then a page table may consist of up to 1 million entries (2^32/2^12).

 Assuming that each entry consists of 4 bytes, each process may need up to 4 MB
of physical-address space for the page table alone.
 Clearly, we would not want to allocate the page table contiguously in main
memory.
 One simple solution to this problem is to divide the page table into smaller
pieces. There are several ways to accomplish this division.
 One way is to use a two-level paging algorithm, in which the page table itself is
also paged is shown in below fig 3.10.


Fig 3.10 A two level page table scheme

Recall our example of a 32-bit machine with a page size of 4 KB.

A logical address is divided into a page number consisting of 20 bits and a page offset
consisting of 12 bits.
Because we page the page table, the page number is further divided into a 10-bit page
number and a 10-bit page offset.

Fig 3.11 Address translation for a two level 32 bit paging Architecture

where p1 is an index into the outer page table and p2 is the displacement within the
page of the outer page table. The address-translation method for this architecture is
shown in Figure 3.11.
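The 10 + 10 + 12 split can be expressed with simple bit operations (a minimal sketch; the sample address value is arbitrary):

```python
def split_two_level(addr):
    """Split a 32-bit logical address into (p1, p2, d): 10 + 10 + 12 bits."""
    d  = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into an inner page table
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_two_level(0x00403004))  # (1, 3, 4)
print(split_two_level(0xFFFFFFFF))  # (1023, 1023, 4095), the largest address
```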
 Forward-mapped page table:
Address translation works from the outer page table inwards; this scheme is also
known as a forward-mapped page table. The Pentium-II uses this architecture.


VAX Architecture:

 Consider the memory management of one of the classic systems, the VAX
minicomputer from Digital Equipment Corporation (DEC).
 The VAX architecture also supports a variation of two-level paging.
 The VAX is a 32-bit machine with a page size of 512 bytes, as shown in fig 3.12.
 The logical-address space of a process is divided into four equal sections, each of
which consists of 2^30 bytes.

Fig 3.12 -32-bit VAX architecture

The above diagram shows that, where s designates the section number, p is an index
into the page table, and d is the displacement within the page.
Section:
Each section represents a different part of the logical-address space of a process. The
first 2 high-order bits of the logical address designate the appropriate section.
Page: The next 21 bits represent the logical page number of that section.
Offset:The final 9 bits represent an offset in the desired page.
The diagram in fig 3.13 shows that the size of a one-level page table for a VAX
process using one section is still 2^21 entries * 4 bytes per entry = 8 MB. For a
64-bit logical-address space, a two-level paging scheme is no longer appropriate:
Inner page table: The inner page tables could conveniently be one page long, or contain
2^10 4-byte entries.
Outer page table: The outer page table would then consist of 2^42 entries, or 2^44 bytes.
 The addresses would look like:

Fig 3.13 -64 bit page address


 The obvious method to avoid such a large table is to divide the outer page table
into smaller pieces.
 This approach is also used on some 32-bit processors for added flexibility and
efficiency.
The diagram in fig 3.14 shows that the outer page table can itself be divided in
various ways, giving a three-level paging scheme.
 Suppose that the outer page table is made up of standard-size pages (2^10 entries,
or 2^12 bytes); a 64-bit address space is still daunting: the outer page table is still
2^34 bytes large.

Fig 3.14 -64 bit page address


 SPARC architecture-The SPARC architecture (with 32-bit addressing) supports a
three-level paging scheme,
 Motorola 68030 architecture-The 32-bit Motorola 68030 architecture supports a
four-level paging scheme.
 UltraSPARC architecture -64-bit UltraSPARC would require seven levels of paging
scheme.
Hashed Page Tables:
A common approach for handling address spaces larger than 32 bits is to use a hashed
page table, with the hash value being the virtual-page number.
 Each entry in the hash table contains a linked list of elements that hash to the same
location (to handle collisions).
 Each element consists of three fields:
(a) The virtual page number,
(b) The value of the mapped page frame, and
(c) Pointer to the next element in the linked list.
If there is no match, subsequent entries in the linked list are searched for a matching
virtual page number. This scheme is shown in Figure 3.15
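A minimal sketch of a hashed page table with chained collision handling (the bucket count, virtual-page numbers, and frame numbers below are arbitrary illustrative values):

```python
class HashedPageTable:
    """Each bucket holds a chain of (vpn, frame) entries, as in Fig 3.15."""
    def __init__(self, buckets=16):
        self.table = [[] for _ in range(buckets)]

    def insert(self, vpn, frame):
        self.table[hash(vpn) % len(self.table)].append((vpn, frame))

    def lookup(self, vpn):
        for entry_vpn, frame in self.table[hash(vpn) % len(self.table)]:
            if entry_vpn == vpn:     # walk the chain to resolve collisions
                return frame
        return None                  # no match anywhere in the chain: page fault

hpt = HashedPageTable()
hpt.insert(0x1234, 7)
hpt.insert(0x1244, 9)     # may hash to the same bucket; the chain handles it
print(hpt.lookup(0x1234))  # 7
print(hpt.lookup(0x9999))  # None
```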


Fig 3.15 Hashed page table


Clustered page tables:
Clustered page tables are similar to hashed page tables except that each entry in the
hash table refers to several pages (such as 16) rather than a single page.
Sparse:
 Clustered page tables are particularly useful for sparse address spaces where
memory references are noncontiguous and scattered throughout the address
space.
Inverted page table:
 Inverted page table is used to overcome the problem of Page table.
 An inverted page table has one entry for each real page (or frame) of memory.
Each entry consists of the virtual address of the page stored in that real memory
location; with information about the process that owns that page.
The below diagram 3.16 shows,
 The operation of an inverted page table.
 Inverted page tables often require an address-space identifier stored in each
entry of the page table.
 Storing the address-space identifier ensures the mapping of a logical page
for a particular process to the corresponding physical page frame


Fig 3.16 Inverted page table

Each virtual address in the system consists of a triple <process-id, page-number,
offset>.
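The triple-based lookup can be sketched as a linear search over one entry per physical frame (the process IDs, page numbers, and 4 KB page size below are illustrative assumptions):

```python
# One entry per physical frame: (process_id, page_number); index = frame number.
inverted = [("P1", 0), ("P2", 0), ("P1", 1), (None, None)]

def translate(pid, page, offset, page_size=4096):
    for frame, entry in enumerate(inverted):   # linear search of the whole table
        if entry == (pid, page):
            return frame * page_size + offset
    raise LookupError("page fault")            # no entry owns this virtual page

print(translate("P1", 1, 100))   # frame 2 -> 2*4096 + 100 = 8292
print(translate("P2", 0, 5))     # frame 1 -> 1*4096 + 5 = 4101
```

The linear search is what makes inverted page tables slow to search; real systems pair them with a hash table to limit the search, as noted above.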

4)Draw the diagram of segmentation memory management scheme and explain its
principle. [April/May2010] [Nov/Dec 2017]

 Segmentation is a memory-management scheme that supports the user (programmer)
view of memory. A logical address space is a collection of segments. Each segment has a
name and a length. The addresses specify both the segment name and the offset within
the segment.
Basic Method:
 When writing a program, a programmer thinks of it as a main program with a set
of methods, procedures, or functions.
 It may also include various data structures: objects, arrays, stacks, variables, and
so on. Each of these modules or data elements is referred to by name.
 The programmer talks about “the stack,” “the math library,” and “the main
program” without caring what addresses in memory these elements occupy.
 Segments vary in length, and the length of each is intrinsically defined by its
purpose in the program. Elements within a segment are identified by their offset
from the beginning of the segment: the first statement of the program, the seventh
stack-frame entry in the stack, the fifth instruction of Sqrt(), and so on, as shown
in fig 3.17.

Fig 3.17 Programmer’s view of a program

The user therefore specifies each address by two quantities:

 Segment name
 Offset.

 Thus, a logical address consists of a two tuple:

<segment – number, offset>

 Normally, the user program is compiled, and the compiler automatically constructs
segments reflecting the input program. A C compiler might create separate
segments for the following:

1. The code
2. Global variables
3. The heap, from which memory is allocated
4. The stacks used by each thread
5. The standard C library

Segmentation Hardware:
Segment table:

 Each entry of the segment table has a segment base and a segment limit.
 The segment base contains the starting physical address where the segment
resides in memory, whereas the segment limit specifies the length of the segment.

 The hardware must map two-dimensional user-defined addresses into
one-dimensional physical addresses. This mapping is effected by a segment table.

Fig.3.18 Segmentation Hardware

The diagram in Fig 3.18 shows:

 A logical address consists of two parts:


 Segment number, s,
 Offset into that segment, d.
 The segment number is used as an index into the segment table.
 The offset d of the logical address must be between 0 and the segment limit.
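The base/limit check can be sketched as follows (the segment-table values below are illustrative, not taken from a figure in these notes):

```python
# Segment table: list of (base, limit) pairs, indexed by segment number s.
segment_table = [(1400, 1000), (6300, 400), (4300, 400)]

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                       # offset past the end of the segment
        raise MemoryError("trap: addressing error")
    return base + d                      # legal: physical address = base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(0, 999))   # 1400 + 999 = 2399
```

A reference such as `translate(1, 400)` would trap, since offset 400 is not strictly less than segment 1's limit of 400.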

7) Explain in detail about Virtual Memory.

Virtual memory is a technique that allows the execution of processes that may not
be completely in memory.

Advantages:

 Programs can be larger than physical memory. Further, virtual memory


abstracts main memory into an extremely large, uniform array of storage,
separating logical memory as viewed by the user from physical memory.
 This technique frees programmers from the concerns of memory-storage
limitations.
 It allows processes to easily share files and address spaces, and it provides an
efficient mechanism for process creation.


Disadvantages:

 It is not easy to implement.


 It can decrease performance if used carelessly.
Basic requirement for the memory-management algorithms:

 The instructions being executed must be in physical memory.


 The first approach to meeting this requirement is to place the entire logical
address space in physical memory.
 Overlays and dynamic loading can help to ease this restriction, but they
generally require special precautions and extra work by the programmer.
 This restriction seems both necessary and reasonable, but it is also
unfortunate, since it limits the size of a program to the size of physical
memory.

Virtual memory involves the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available.

Fig 3.19. Diagram showing virtual memory that is larger than physical memory

The diagram in Fig 3.19 shows:

 The virtual address space of a process refers to the logical (or virtual) view of
how a process is stored in memory.
 Typically, this view is that a process begins at a certain logical address (say,
address 0) and exists in contiguous memory. Physical memory, in contrast, may be
organized in page frames, and the physical page frames assigned to a
process may not be contiguous.
 It is up to the memory management unit (MMU) to map logical pages to
physical page frames in memory.
 Allow heap to grow upward in memory as it is used for dynamic memory
allocation.
 Similarly, allow for the stack to grow downward in memory through successive
function calls.
 The large blank space (or hole) between the heap and the stack is part of the
virtual address space but will require actual physical pages only if the heap or
stack grows is shown in fig 3.20.
Sparse:
 Virtual address spaces that include holes are known as sparse address spaces.
Using a sparse address space is beneficial because the holes can be filled as
the stack or heap segments grow or if we wish to dynamically link libraries (or
possibly other shared objects) during program execution.

Fig 3.20 Virtual address space


Fig 3.21 Shared library using virtual memory

 System libraries can be shared by several processes through mapping of the


shared object into a virtual address space. Although each process considers the
libraries to be part of its virtual address space, the actual pages where the
libraries reside in physical memory are shared by all the processes is shown in fig
3.21. Typically, a library is mapped read-only into the space of each process that
is linked with it.
 Similarly, processes can share memory. Virtual memory allows one process to
create a region of memory that it can share with another process. Processes
sharing this region consider it part of their virtual address space, yet the actual
physical pages of memory are shared.

8) Write about the concepts of Demand Paging (or) Swapping. (Nov/Dec-2019)
[April/May-2021] (Apr/May-2024)

Demand Paging (or) Swapping:

 A demand-paging system is similar to a paging system with swapping. Processes


reside on secondary memory (which is usually a disk).

 When we want to execute a process, we swap it into memory. Rather than swapping
the entire process into memory, however, we use a lazy swapper.

Lazy swapper:
 A lazy swapper never swaps a page into memory unless that page will be needed.
 Since we are now viewing a process as a sequence of pages, rather than as one
large contiguous address space, use of swap is technically incorrect.


Swapper and Pager:


 A swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process. So we use the term pager, rather than swapper, in
connection with demand paging, as shown in fig 3.22.
Basic Concepts:
 Rather than swapping in a whole process, the pager brings only the necessary
pages into memory.
 It avoids reading into memory pages that will not be used anyway.
 It decreases the swap time and the amount of physical memory needed.

Fig 3.22. Transfer of a paged memory to contiguous disk space

The valid-invalid bit scheme can be used for this purpose.

 Valid-If the bit is set to "valid," this value indicates that the associated page is
both legal and in memory.
 Invalid-If the bit is set to "invalid," this value indicates that the page either is
not valid (that is, not in the logical address space of the process), or is valid but
is currently on the disk.
 While the process executes and accesses pages that are memory resident, execution
proceeds normally.


Page-fault trap:
 Access to a page marked invalid causes a page-fault trap.
 The paging hardware, in translating the address through the page table, notices that
the invalid bit is set, causing a trap to the operating system, as shown in fig 3.23.
 This trap is the result of the operating system's failure to bring the desired page into
memory, rather than an invalid-address error resulting from an attempt to use an illegal
memory address.

Fig 3.23.Page table when some pages are not in memory

The procedure for handling this page fault is straightforward

 We check an internal table (usually kept with the process control block) for this
process, to determine whether the reference was a valid or invalid memory access.
 If the reference was invalid, we terminate the process. If it was valid, but we have
not yet brought in that page, we now page it in.
 We find a free frame (by taking one from the free-frame list, for example).
Pure demand paging:

After this page is brought into memory, the process continues to execute, faulting as
necessary until every page that it needs is in memory. At that point, it can execute with
no more faults. This scheme is pure demand paging: Never bring a page into memory
until it is required.

Locality of reference:

 In theory, a process could touch several new pages with each instruction executed,
causing multiple page faults per instruction; analysis of running processes shows that
this behavior is exceedingly unlikely.
 Programs tend to have locality of reference, which results in reasonable performance
from demand paging.
The hardware to support demand paging is the same as the hardware for
paging and swapping:

Page table: This table has the ability to mark an entry invalid through a valid-invalid
bit or special value of protection bits.
Secondary memory: This memory holds those pages that are not present in main
memory. The secondary memory is usually a high-speed disk. It is known as the swap
device, and the section of disk used for this purpose is known as swap space.

Performance of Demand Paging:

Effective access time:

Demand paging can have a significant effect on the performance of a computer system.
To compute the effective access time for a demand-paged memory, let p be the
probability of a page fault (0 <= p <= 1) and ma the memory-access time. Then:

effective access time = (1 - p) x ma + p x page-fault time
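For example, assuming a 200-nanosecond memory-access time and an 8-millisecond page-fault service time (illustrative values for this sketch), the effective access time can be evaluated as:

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """(1 - p) * ordinary memory access + p * page-fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

print(round(effective_access_time(0.0), 1))     # 200.0 ns: no faults at all
print(round(effective_access_time(0.001), 1))   # 8199.8 ns: 1 fault per 1000 accesses
```

Even one fault per thousand accesses slows memory by a factor of about 40, which is why the page-fault rate must be kept very low.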

9)Explain in detail about Page Replacement.

Page replacement:

If no frame is free, we find one that is not currently being used and free it. We can free a
frame by writing its contents to swap space, and changing the page table (and all other
tables) to indicate that the page is no longer in memory. Use the freed frame to hold the
page for which the process faulted is shown in fig 3.24.


Fig 3.24 Need for page replacement

Basic Scheme:

Modify the page-fault service routine to include page replacement:

1. Find the location of the desired page on the disk.


2. Find a free frame:

a. If there is a free frame, use it


b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.

3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.

 If no frames are free, two-page transfers (one out and one in) are required. This
situation effectively doubles the page-fault service time and increases the effective
access time accordingly.

Modify bit (or dirty bit):

Each page or frame may have a modify bit associated with it in the hardware. The
modify bit for a page is set by the hardware whenever any word or byte in the page is
written into, indicating that the page has been modified.
 If the modify bit is set, we know that the page has been modified since it was read
in from the disk.
 If the modify bit is not set, however, the page has not been modified since it was
read into memory. Therefore, if the copy of the page on the disk has not been overwritten
(by some other page, for example), then we can avoid writing the memory page to the
disk: it is already there.
 This technique also applies to read-only pages. Such pages cannot be modified;
thus, they may be discarded when desired is shown in fig 3.25.

.
Fig 3.25 Page replacement

We must solve two major problems to implement demand paging:

 Frame-allocation algorithm
 Page-replacement algorithm.

Reference string:

 We evaluate an algorithm by running it on a particular string of memory references
and computing the number of page faults. The string of memory references is
called a reference string.
 We can generate reference strings artificially (by a random-number generator, for
example), or we can trace a given system and record the address of each
memory reference.
 The latter choice produces a large amount of data (on the order of 1 million
addresses per second). To reduce the amount of data, we use two facts.
 First, for a given page size (and the page size is generally fixed by the
hardware or system), we need to consider only the page number, rather
than the entire address.
 Second, if we have a reference to a page p, then any immediately following
references to page p will never cause a page fault.
 Page p will be in memory after the first reference; the immediately following
references will not fault is shown in fig 3.26


Fig 3.26 Graph of page fault vs number of frames

12) With neat diagram explain the different page replacement algorithms.(or)
Explain first in first out page replacement algorithm and optimal page
replacement algorithm with an example and diagrams [April/May-
2021,23][Nov/Dec-2021]

(i) FIFO Page Replacement:

 A simple and obvious page replacement strategy is FIFO, i.e., first-in-first-out.


 As new pages are brought in, they are added to the tail of a queue, and the page at the
head of the queue is the next victim. In the following example, 20-page requests result in
15-page faults is shown in fig 3.27.

Fig 3.27 FIFO page replacement Algorithms
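The FIFO behavior, including Belady's anomaly, can be reproduced with a short simulation (a Python sketch; the two reference strings are the ones used in the figures):

```python
from collections import deque

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # memory full: evict the oldest page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))      # 15 faults, matching Fig 3.27

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))    # 9 faults
print(fifo_faults(belady, 4))    # 10 faults: more frames, MORE faults
```

The last two lines demonstrate Belady's anomaly directly: going from 3 to 4 frames increases the fault count from 9 to 10.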

 Although FIFO is simple and easy, it is not always optimal, or even efficient.
 An interesting effect that can occur with FIFO is Belady's anomaly, in which increasing
the number of frames available can actually increase the number of page faults that
occur is shown in fig 3.28.
 Consider, for example, the following chart based on the page sequence (1, 2, 3, 4, 1, 2,
5, 1, 2, 3, 4, 5 ) and a varying number of available frames. Obviously, the maximum

number of faults is 12 (every request generates a fault), and the minimum number is 5
(each page loaded only once), but in between there are some interesting results:

Fig 3.28 Page fault curve for FIFO replacement on a reference string

(ii) Optimal Page Replacement

 The discovery of Belady's anomaly led to the search for an optimal page-replacement
algorithm: one that yields the lowest possible page-fault rate and does not suffer from
Belady's anomaly.
 Such an algorithm does exist, and is called OPT or MIN. This algorithm is simply
"Replace the page that will not be used for the longest time in the future."
 For example, Figure 3.29 shows that by applying OPT to the same reference string used
for the FIFO example, the minimum number of possible page faults is 9. Since 6 of the
page-faults are unavoidable ( the first reference to each new page ), FIFO can be shown
to require 3 times as many ( extra ) page faults as the optimal algorithm.

Fig 3.29 Optimal page replacement algorithm


(iii) LRU Page Replacement

 The prediction behind LRU, the Least Recently Used algorithm, is that the page that
has not been used for the longest time is the one that will not be used again in the near
future.
 Figure 3.30 illustrates LRU for our sample string, yielding 12 page faults (as compared
to 15 for FIFO and 9 for OPT).

Fig 3.30 LRU page replacement algorithm
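Both algorithms can be simulated in a few lines to confirm the fault counts quoted above (a Python sketch, not production code; OPT needs the whole future reference string, which is why it cannot be implemented in a real kernel):

```python
def opt_faults(refs, frames):
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # Evict the resident page whose next use lies farthest in the future.
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float('inf')
            memory.discard(max(memory, key=next_use))
        memory.add(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = [], 0          # list ordered from least to most recent
    for page in refs:
        if page in memory:
            memory.remove(page)     # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)       # evict the least recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9, as in Fig 3.29
print(lru_faults(refs, 3))   # 12, as in Fig 3.30
```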

13) Write notes about the Thrashing and its effects.(Nov/Dec 2015)

Thrashing:
 Thrashing is the coincidence of high page traffic and low CPU efficiency.
 This high paging activity is called thrashing.
 If a process does not have the number of frames it needs to support the pages in
active use, it will quickly page-fault. A process is thrashing if it is spending more
time paging than executing.
Cause of Thrashing:
 Thrashing results in severe performance problems.
 Consider the following scenario, which is based on the actual behavior of early paging
systems.
 The operating system monitors CPU utilization. If CPU utilization is too low, we
increase the degree of multiprogramming by introducing a new process to the system
is shown in fig 3.31.
 A global page-replacement algorithm is used; it replaces pages with no regard to the
process to which they belong.


 The CPU scheduler sees the decreasing CPU utilization, and increases the degree of
multiprogramming as a result.
 If the degree of multiprogramming is increased further, thrashing sets in and CPU
utilization drops sharply

Fig 3.31 - Thrashing


Locality model:
The working-set strategy starts by looking at how many frames a process is actually
using. This approach defines the locality model of process execution.
Working-Set Model:
 The working-set model is based on the assumption of locality.
 This model uses a parameter, Δ, to define the working-set window.
 The idea is to examine the most recent Δ page references.
 The set of pages in the most recent Δ page references is the working set.
 If a page is in active use, it will be in the working set.
 If it is no longer being used, it will drop from the working set Δ time units after its
last reference.
 Thus, the working set is an approximation of the program's locality.
For example, given the sequence of memory references shown in Figure 3.32, if Δ = 10
memory references, then the working set at time t1 is {1, 2, 5, 6, 7}.
 By time t2, the working set has changed to {3, 4}. The accuracy of the working set
depends on the selection of Δ.

Fig 3.32 Working set model



 If Δ is too small, it will not encompass the entire locality; if Δ is too large, it may
overlap several localities.
 In the extreme, if Δ is infinite, the working set is the set of pages touched during the
process execution. The most important property of the working set is its size.
 If we compute the working-set size, WSSi, for each process in the system, we can
then consider

D = Σ WSSi

 where D is the total demand for frames. Each process is actively using the pages in
its working set. Thus, process i needs WSSi frames.
 If the total demand is greater than the total number of available frames (D > m),
thrashing will occur, because some processes will not have enough frames.
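The working-set computation can be sketched in a few lines. The reference strings below are invented for illustration (the text's own sequence lives in Figure 3.32, which is not reproduced here):

```python
def working_set(refs, t, window):
    """Pages referenced in the last `window` references ending at time t."""
    start = max(0, t - window + 1)
    return set(refs[start:t + 1])

def total_demand(processes, t, window):
    # D = sum of WSS_i over all processes; thrashing looms once D > m frames
    return sum(len(working_set(r, t, window)) for r in processes)
```

For example, with `window = 5`, the string `[1, 2, 1, 3, 4, 4, 4, 2, 2, 2]` has working set `{2, 4}` at time 9: only pages 2 and 4 appear in the last five references.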
Page-Fault Frequency :
 A strategy that uses the page-fault frequency (PFF) takes a more direct approach.

Fig 3.33 -Page fault frequency


 A thrashing process has a high page-fault rate, so the PFF strategy controls the page-fault rate directly.
 When it is too high, we know that the process needs more frames.
 Similarly, if the page-fault rate is too low, then the process may have too many
frames.
 Establish upper and lower bounds on the desired page-fault rate is shown in above
fig 3.33.
 If the actual page-fault rate exceeds the upper limit, we allocate that process another
frame; if the page-fault rate falls below the lower limit, we remove a frame from that
process.
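The PFF control loop amounts to comparing the measured fault rate against the two bounds. A minimal sketch follows; the 2% and 10% thresholds are illustrative assumptions, not values from the text:

```python
def pff_adjust(nframes, faults, references, low=0.02, high=0.10):
    """Grow or shrink a process's frame allocation from its recent fault rate.
    `low` and `high` are the illustrative lower/upper page-fault-rate bounds."""
    rate = faults / references
    if rate > high:
        return nframes + 1          # too many faults: allocate another frame
    if rate < low and nframes > 1:
        return nframes - 1          # very few faults: reclaim a frame
    return nframes                  # rate within bounds: leave allocation alone
```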


14) Explain paging with Segmentation.[ [Nov/Dec-2021]

Both paging and segmentation have advantages and disadvantages. In fact, of the two
most popular microprocessor families, the Motorola 68000 line is designed around a
flat address space, whereas the Intel 80x86 and Pentium family are based on
segmentation.

 Both are moving toward a mixture of paging and segmentation. We can combine
these two methods to improve on each.
 This combination is best illustrated by the architecture of the Intel 386. The IBM
OS/2 32-bit version is an operating system running on top of the Intel 386 (and later)
architecture.
 The Intel 386 uses segmentation with paging for memory management.
 The maximum number of segments per process is 16 K, and each segment can be
as large as 4 gigabytes. The page size is 4 KB.
The logical-address space of a process is divided into two partitions:
o The first partition consists of up to 8 K segments that are private to that
process.
o The second partition consists of up to 8 K segments that are shared
among all the processes, as shown in fig 3.34.
 Local descriptor table (LDT):
o Information about the first partition is kept in the local descriptor table
(LDT).
 Global descriptor table (GDT):
o Information about the second partition is kept in the global descriptor
table (GDT).
Each entry in the LDT and GDT consists of 8 bytes, with detailed information about a
particular segment including the base location and length of that segment.

Fig 3.34 Logical address


 The logical address is a pair (selector, offset), where the selector is a 16-bit number:
in which s designates the segment number, g indicates whether the segment is in the
GDT or LDT, and p deals with protection.


Linear address:
The base and limit information about the segment in question are used to generate a
linear address.

 First, the limit is used to check for address validity. If the address is not valid, a
memory fault is generated, resulting in a trap to the operating system.
 If it is valid, then the value of the offset is added to the value of the base, resulting in
a 32-bit linear address. This address is then translated into a physical address.
 The linear address is divided into a page number consisting of 20 bits, and a page
offset consisting of 12 bits is shown in fig 3.35.
To index the page tables, the 20-bit page number is further divided into a 10-bit page
directory pointer and a 10-bit page table pointer.

Fig 3.35 Linear Address


 The Intel address translation is shown in Figure 3.36. To improve the efficiency of
physical-memory use, Intel 386 page tables can be swapped to disk.
 In this case, an invalid bit is used in the page directory entry to indicate whether
the table to which the entry is pointing is in memory or on disk.

Fig 3.36 . Intel 80386 The address-translation scheme
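The 10-10-12 split of the 32-bit linear address can be checked with a few shifts. This is a sketch; the sample address is arbitrary:

```python
def split_linear(addr):
    """Decompose a 32-bit linear address as the Intel 386 paging unit does."""
    directory = (addr >> 22) & 0x3FF   # top 10 bits: page-directory index
    table     = (addr >> 12) & 0x3FF   # next 10 bits: page-table index
    offset    = addr & 0xFFF           # low 12 bits: offset within the 4 KB page
    return directory, table, offset
```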


15) Write short notes Copy on write.

1.Copy on Write:

This works by allowing the parent and child processes to initially share the same pages.
These shared pages are marked as Copy on Write pages.

 Traditionally, fork() worked by creating a copy of the parent’s address space for the
child, duplicating the pages belonging to the parent.
 However, considering that many child processes invoke the exec() system call
immediately after creation, the copying of the parent’s address space may be
unnecessary. Instead, we can use a technique known as copy-on-write, which
works by allowing the parent and child processes initially to share the same
pages. These shared pages are marked as copy-on-write pages, meaning that if
either process writes to a shared page, a copy of the shared page is created.

Copy-on-write is illustrated in Figures 3.37 and 3.38, which show the contents of the
physical memory before and after process 1 modifies page C.

For example, assume that the child process attempts to modify a page containing
portions of the stack, with the pages set to be copy-on-write.

Fig 3.37 Before process 1 modifies page C

The operating system will obtain a frame from the free frame list and create a copy of
this page, mapping it to the address space of the child process.
 The child process will then modify its copied page and not the page belonging to
the parent process. Obviously, when the copy-on-write technique is used, only the
pages that are modified by either process are copied; all unmodified pages can be
shared by the parent and child processes.


 Several versions of UNIX (including Linux, macOS, and BSD UNIX) provide a
variation of the fork() system call, vfork() (for virtual memory fork), that operates
differently from fork() with copy-on-write. With vfork(), the parent process is
suspended, and the child process uses the address space of the parent.
 Because vfork() does not use copy-on-write, if the child process changes any pages
of the parent’s address space, the altered pages will be visible to the parent once it
resumes.
 Therefore, vfork() must be used with caution to ensure that the child process does
not modify the address space of the parent.
 vfork() is intended to be used when the child process calls exec() immediately after
creation. Because no copying of pages takes place, vfork() is an extremely efficient
method of process creation.

Fig 3.38 After process 1 modifies page C
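The copy-on-write mechanism can be sketched as a small simulation. Everything here (the `Page` class, the function names) is illustrative, not a real kernel API: shared pages carry a reference count, and a write to a shared page copies it first.

```python
class Page:
    def __init__(self, data):
        self.data = data
        self.refcount = 1        # how many address spaces map this frame

def cow_fork(parent_space):
    # fork(): the child initially shares every page; nothing is copied yet
    for page in parent_space:
        page.refcount += 1
    return list(parent_space)

def cow_write(space, index, value):
    page = space[index]
    if page.refcount > 1:        # shared page: copy it before writing
        page.refcount -= 1
        page = Page(page.data)
        space[index] = page
    page.data = value            # private page: write in place
```

A write by the child copies only the touched page; all unmodified pages stay shared with the parent, exactly as Figs. 3.37 and 3.38 depict.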

16)Explain About Allocation of Frames.

Consider a simple case of a system with 128 frames. The operating system may take 35,
leaving 93 frames for the user process. Under pure demand paging, all 93 frames would
initially be put on the free-frame list. When a user process started execution, it would
generate a sequence of page faults.

 The first 93-page faults would all get free frames from the free-frame list. When
the free-frame list was exhausted, a page-replacement algorithm would be used to
select one of the 93 in-memory pages to be replaced with the 94th, and so on.
When the process terminated, the 93 frames would once again be placed on the
free-frame list.

 There are many variations on this simple strategy. We can require that the
operating system allocate all its buffer and table space from the free-frame list.
When this space is not in use by the operating system, it can be used to support
user paging.

We can try to keep three free frames reserved on the free-frame list at all times. Thus,
when a page fault occurs, there is a free frame available to page into. While the page
swap is taking place, a replacement can be selected, which is then written to the storage
device as the user process continues to execute. Other variants are also possible, but
the basic strategy is clear: the user process is allocated any free frame.

Minimum Number of Frames:

Our strategies for the allocation of frames are constrained in various ways. We cannot,
for example, allocate more than the total number of available frames (unless there is
page sharing).

 We must also allocate at least a minimum number of frames. Here, we look more
closely at the latter requirement.

 One reason for allocating at least a minimum number of frames involves
performance. Obviously, as the number of frames allocated to each process
decreases, the page-fault rate increases, slowing process execution.

 In addition, remember that, when a page fault occurs before an executing
instruction is complete, the instruction must be restarted. Consequently, we must
have enough frames to hold all the different pages that any single instruction can
reference.

For example, consider a machine in which all memory-reference instructions may
reference only one memory address. In this case, we need at least one frame for the
instruction and one frame for the memory reference.

In addition, if one-level indirect addressing is allowed (for example, a load instruction on
frame 16 can refer to an address on frame 0, which is an indirect reference to frame 23),
then paging requires at least three frames per process. The minimum number of frames
is defined by the computer architecture.

20 )Consider the following segment table:[May/Jun ‘12](April/May-2019)

Segment Base Length


0 219 600
1 2300 14
2 90 100
3 1327 580
4 1952 96

What are the physical addresses for the following logical addresses?
a. 0, 430 b. 1, 10 c. 2, 500 d. 3, 400 e. 4, 112
What are the physical addresses for the logical addresses 3400 and 0110?


Memory layout (each segment occupies [base, base + length)):

Segment 2: 90 - 190
Segment 0: 219 - 819
Segment 3: 1327 - 1907
Segment 4: 1952 - 2048
Segment 1: 2300 - 2314

Logical address Physical address

a. 0,430 219 + 430 = 649 valid
b. 1,10 2300 + 10 = 2310 valid
c. 2,500 invalid: offset 500 exceeds segment 2's length of 100 (addressing trap)
d. 3,400 1327 + 400 = 1727 valid
e. 4,112 invalid: offset 112 exceeds segment 4's length of 96 (addressing trap)
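The translation rule (trap if offset ≥ length, else base + offset) can be written directly against the given segment table. This is a sketch; `None` stands in for the trap to the operating system:

```python
# segment number -> (base, length), from the table in the question
SEGMENT_TABLE = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
                 3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    base, length = SEGMENT_TABLE[segment]
    if offset >= length:
        return None              # offset past segment end: addressing trap
    return base + offset
```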

21) Given memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in
order), how would each of the first fit, best fit and worst fit algorithms place
processes of 212KB, 417KB, 112Kband 426KB(in order)? Which algorithm makes
the most efficient use of memory? (Nov/Dec 2015)

Partitions (in order): 100KB, 500KB, 200KB, 300KB, 600KB


Solution:

FIRST FIT:
212KB is put in 500KB partition
417KB is put in 600KB partition
112KB is put in 288KB partition (new partition 288KB=500KB-212KB)
426KB must wait

BEST FIT:
212KB is put in 300KB partition
417KB is put in 500KB partition
112KB is put in 200KB partition
426KB is put in 600KB partition

WORST FIT:
212KB is put in 600KB partition
417KB is put in 500KB partition
112KB is put in 388KB partition (new partition 388KB = 600KB - 212KB)
426KB must wait

In this example, Best fit makes the most efficient use of memory: it is the only
algorithm that places all four processes.
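The three strategies can be simulated by treating each partition as a hole that shrinks when a process is carved out of it. A sketch (function and variable names are our own); the returned list gives, for each request, the index of the hole used, or `None` if the request must wait:

```python
def place(holes, requests, strategy):
    """Simulate first/best/worst fit over a list of hole sizes."""
    free = list(holes)                           # remaining size of each hole
    result = []
    for size in requests:
        fits = [i for i, h in enumerate(free) if h >= size]
        if not fits:
            result.append(None)                  # no hole big enough: wait
            continue
        if strategy == "first":
            i = fits[0]                          # first adequate hole
        elif strategy == "best":
            i = min(fits, key=lambda i: free[i]) # smallest adequate hole
        else:                                    # "worst"
            i = max(fits, key=lambda i: free[i]) # largest hole
        free[i] -= size                          # carve the process out
        result.append(i)
    return result
```

Run on the partitions and processes above, it reproduces the placements worked out by hand: best fit is the only strategy that satisfies all four requests.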
22) On a system with paging, a process cannot access memory that it does not
own. Why? How could the operating system allow access to additional memory?
Why should it or should it not?

A process cannot access memory it does not own in a system with paging because of
memory protection. The operating system uses paging to divide memory into small,
fixed-size blocks (pages). Each process is assigned its own set of pages, and the memory
management unit (MMU) ensures that processes can only access the pages allocated to
them. This protects the memory space of each process, preventing one process from
reading or modifying the memory of another, which helps maintain stability and
security.

Allowing Access to Additional Memory:

The operating system could allow a process to access additional memory through
techniques such as:


1. Memory Sharing: The OS could allow processes to share certain pages of


memory, such as shared libraries or inter-process communication buffers, where
multiple processes are given access to the same pages.
2. Page Swapping: The OS can swap data between RAM and secondary storage (e.g.,
hard disk) if additional memory is needed, though this typically involves moving
memory in and out of physical memory rather than allowing access to entirely new
areas of memory.
3. Increasing Address Space: In some cases, the OS can allocate additional pages or
increase the process’s virtual memory space if resources are available.

Should the OS Allow Access?

 Yes, it may be appropriate in some cases. For example, shared memory between
processes is crucial for certain types of communication or when allocating memory
for kernel and user-space interactions.
 No, unrestricted access to memory should generally not be allowed. Allowing one
process to access memory it does not own can introduce significant security risks
(e.g., allowing a malicious process to read or write to another process’s memory),
potentially causing system crashes, data corruption, or breaches in privacy.

Thus, the OS must carefully control memory access to ensure processes only interact
with their own allocated memory (and shared areas when necessary) to maintain system
stability, security, and integrity.

23) Under what circumstances do page faults occur? Describe the actions taken by
the operating system when a page fault occurs. (Apr/May-2024 & Nov/Dec-2024)

A page fault occurs when a process tries to access a page in virtual memory that is not
currently loaded into physical memory (RAM). This can happen under the following
circumstances:

1. Page Not in RAM: The page the process is trying to access is not in the main
memory but stored in secondary storage (e.g., the hard drive or SSD).
2. Page Swap: The page might have been swapped out to disk due to memory
pressure (i.e., other pages were loaded into RAM).
3. Invalid Memory Access: If a process tries to access a page it is not allowed to
access (e.g., accessing an address outside its allocated range), this can also trigger
a page fault, but it could indicate an error like a bug or a malicious attempt to
access restricted memory.

Actions Taken by the Operating System When a Page Fault Occurs:

When a page fault happens, the operating system takes the following actions:


 Check the Validity of the Access


 Locate the Page on Disk
 Free Up Space in RAM (if needed)
 Load the Page into RAM
 Update Page Table
 Resume Process Execution
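The steps above can be sketched as a toy handler. Every name and data structure here is illustrative (a real kernel manipulates hardware page tables); the dictionary entry per page records validity, residency, and the frame assigned:

```python
def handle_page_fault(page_table, free_frames, backing_store, page):
    # 1. Check the validity of the access
    entry = page_table.get(page)
    if entry is None or not entry["valid"]:
        raise MemoryError("invalid reference: terminate the process")
    # 2./3. Free up space: here we just demand a free frame; a real handler
    # would run page replacement when the free list is empty
    if not free_frames:
        raise MemoryError("no free frame: run page replacement first")
    frame = free_frames.pop()
    # 4. Load the page into RAM from the backing store
    entry["data"] = backing_store[page]
    # 5. Update the page table
    entry["frame"] = frame
    entry["present"] = True
    # 6. The faulting instruction is then restarted
    return frame
```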
24)Discuss about the different memory allocation techniques.(Nov/Dec-2024)

There are different types of memory management techniques in an operating
system (Figure 3.39).

Figure 3.39 Different Memory allocation techniques


Contiguous Memory Allocation :
 Contiguous memory allocation is a memory management technique.(Figure.3.40)
 This technique is used to assign contiguous blocks of memory to each task.
(contiguous block - adjacent block)
 Thus, whenever a process asks to access the main memory, It allocates a
continuous segment from the empty region to the process based on its size.


Figure 3.40 Contiguous Memory allocation

Single Contiguous Memory Management Scheme :


 In this scheme, the main memory is divided into two contiguous areas or
partitions.(Figure 3.41)
 The operating systems reside permanently in one partition(lower memory)
 The user process is loaded into the other partition.

Figure 3.41 Single Contiguous Memory Management

Multiple Partitioning
 This technique allows more than one processes to be loaded into main memory.
 In this type of allocation, main memory is divided into two types of partitions.
1. fixed-sized partitions.
2. Variable-sized partitions.
Fixed - sized Partitioning : (Oldest Technique)
 It is also called Static Partitioning.
 In this technique, main memory is pre-divided into fixed size
partitions.(Figure 3.42)
 In this partitioning, the number of partitions in Main memory is fixed but the
size of each partition may or may not be the same.
 The operating system always resides in the first partition while the other
partitions can be used to store user processes.
 Only one process can be placed in a partition.
 The partitions cannot overlap. (no join)
 A process must be contiguously present in a partition for the execution.


Figure 3.42 Fixed sized partitioning

Variable - sized Partitioning :
 It is also called Dynamic Partitioning.
 In this technique, main memory is not divided into fixed size
partitions.(Figure 3.43)
 In this technique, the partition size is not declared initially. It is declared at the
time of process loading.
 The first partition is reserved for the operating system. The remaining space is
divided into processes.
 The size of each partition will be equal to the size of the process.
 The partition size varies according to the need of the process so that the internal
fragmentation can be avoided.


Figure 3.43 Dynamic Partitioning

NON-CONTIGUOUS MEMORY ALLOCATION


 Non-contiguous memory allocation is a method of memory allocation in operating
systems (Figure 3.44)
 The process gets its total required space in memory, but not in one contiguous block.
 That is, the parts of a process are placed at various locations according to the needs
of the process.

Figure 3.44 Non-Contiguous Memory Management


Paging :

 Paging is a storage mechanism.

 It is used to retrieve processes from secondary storage into main memory in the
form of pages (Figure 3.45).
 The main idea behind paging is to divide each process into pages.
 The main memory is likewise divided into frames.
 Each page of the process is stored in one of the memory frames.

 The size of each frame must be uniform. The page size must match the frame
size since the frames in Paging are mapped to the pages.

Figure 3.45 Paging
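The page-to-frame mapping translates a virtual address by splitting it into a page number and an offset, then substituting the frame number from the page table. A minimal sketch (the page table and sizes below are invented for illustration):

```python
def physical_address(vaddr, page_size, page_table):
    page, offset = divmod(vaddr, page_size)   # split into page number + offset
    frame = page_table[page]                  # frame that holds this page
    return frame * page_size + offset
```

For instance, with 1 KB pages and page table {0: 2, 1: 0}, virtual address 1030 is page 1, offset 6, which maps to frame 0, so the physical address is 6.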


SEGMENTATION :

 Segmentation is a memory management technique.


 Here, memory is divided into variable-sized parts. Each part is known as a
segment (Figure 3.46).
 A segment can be allocated to a process.
 Segmentation gives the user's view of the process, which paging does not
provide.


CASE STUDY
1.Consider the following page-reference string:[Apr/May- 15]
[Nov/Dec2015]{April/May-2019)(Nov/Dec-19)(Nov/Dec-2023)

1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6.

How many page faults would occur for the following replacement algorithms,
assuming one, two, three, four, five, six, or seven frames? Remember that all
frames are initially empty, so your first unique pages will all cost one fault each.

 LRU replacement
 FIFO replacement
 Optimal replacement
Sol:

a) One frame:
Since no page is referenced twice in succession, every reference replaces the single
frame, so LRU, FIFO and Optimal all result in 20 page faults.

b) Two frames:
LRU:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 3 3 2 2 5 5 2 2 2 7 7 3 3 1 3 3
2 2 4 4 1 1 6 6 1 3 3 6 6 2 2 2 6
Total page fault: 18
FIFO:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 3 3 2 2 5 5 2 2 3 3 6 6 2 2 3 3
2 2 4 4 1 1 6 6 1 1 7 7 3 3 1 1 6
Total page fault: 18

Optimal:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 3 4 1 5 6 1 3 3 3 3 1 1 6
2 2 2 2 2 2 2 2 7 6 2 2 3 3
Total page fault: 15

c) Three frames:
LRU:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 4 4 5 5 5 1 1 7 7 2 2 2
2 2 2 2 2 6 6 6 3 3 3 3 3 3
3 3 1 1 1 2 2 2 2 6 6 1 6
Total page fault: 15


FIFO:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 4 4 4 6 6 6 3 3 3 2 2 2 6
2 2 2 1 1 1 2 2 2 7 7 7 1 1 1
3 3 3 5 5 5 1 1 1 6 6 6 3 3
Total page fault: 16
Optimal:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 3 3 3 3 6
2 2 2 2 2 2 7 2 2 2
3 4 5 6 6 6 6 1 1
Total page fault: 11

d) Four frames:
LRU:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 1 6 6
2 2 2 2 2 2 2 2 2
3 3 5 5 3 3 3 3
4 4 6 6 7 7 1
Total page fault: 10
FIFO:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 5 5 5 5 3 3 3 3 1 1
2 2 2 2 6 6 6 6 7 7 7 7 3
3 3 3 3 2 2 2 2 6 6 6 6
4 4 4 4 1 1 1 1 2 2 2
Total page fault: 14
Optimal:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 7 1
2 2 2 2 2 2 2
3 3 3 3 3 3
4 5 6 6 6
Total page fault: 8

e) Five frames:
LRU:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 1
2 2 2 2 2 2 2
3 3 3 6 6 6
4 4 4 3 3
5 5 5 7
Total page fault: 8


FIFO:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 6 6 6 6 6
2 2 2 2 2 1 1 1 1
3 3 3 3 3 2 2 2
4 4 4 4 4 3 3
5 5 5 5 5 7
Total page fault: 10

Optimal:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 3 3
4 4 4 7
5 6 6
Total page fault: 7
f) Six frames:
LRU:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 6 3
4 4 4 7
5 5 5
6 6
Total page fault: 7
FIFO:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 6 7 7 7 7
2 2 2 2 2 2 1 1 1
3 3 3 3 3 3 2 2
4 4 4 4 4 4 3
5 5 5 5 5 5
6 6 6 6 6
Total page fault: 10

Optimal:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 3 3
4 4 4 7

5 5 5
6 6
Total page fault: 7

g) Seven frames:

LRU:
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 6 3
4 4 4 7
5 5 5
6 6
7
Total page faults: 7. FIFO and Optimal also give 7 faults, the same as above.

2. Assume that a program has just referenced an address in virtual memory.


Describe a scenario in which each of the following can occur.(Apr/May-2024)
(If no such scenario can occur, explain why)
(i) TLB miss with no page fault
(ii) TLB miss with page fault
(iii) TLB hit with no page fault
(iv) TLB hit with page fault

(i) TLB miss with no page fault

A TLB miss occurs when the address being referenced is not found in the TLB
(Translation Lookaside Buffer), and a page fault occurs when the requested page is not
in physical memory. However, a TLB miss with no page fault can happen when the
virtual page exists in memory but is not present in the TLB.

 Assume the virtual memory page exists in physical memory (it is mapped to a
valid physical page), but the entry for this page is not in the TLB. This could occur
because the TLB is a limited cache and may have evicted older entries, so it has to
go through the page table to translate the address to a physical address.
 The page table lookup is successful (no page fault), but the TLB does not have the
entry, so a TLB miss occurs.
 Once the TLB miss is handled, the program can continue execution with the page
already loaded in physical memory.


(ii) TLB miss with page fault

A TLB miss with page fault occurs when a virtual address is not found in the TLB, and
in addition, the page is not in physical memory (i.e., the page is not mapped to a valid
physical address, and a page fault is triggered).

 The program accesses a virtual address that is not in the TLB.


 The page table lookup indicates that the page is not in physical memory (for
example, it may be on disk or it might not have been allocated yet).
 This causes a page fault.
 The operating system will load the page into physical memory from secondary
storage (e.g., disk), and after the page is loaded, the TLB entry can be updated
accordingly.

(iii) TLB hit with no page fault

A TLB hit with no page fault occurs when the TLB contains the entry for the virtual
address being accessed, and the corresponding page is already in physical memory (no
page fault occurs).

 The program accesses a virtual address, and the address translation is found in
the TLB.
 The page table entry points to a valid physical page already present in memory
(i.e., the page is already in physical memory).
 No page fault occurs, as the page is already resident in memory, and the
translation can be completed using the TLB hit.

(iv) TLB hit with page fault

A TLB hit with page fault seems like a contradiction at first because a TLB hit implies
the virtual address is already mapped, while a page fault indicates the page isn't in
memory. However, this can occur in a scenario where the TLB entry is valid, but the
page table entry points to a valid address in memory that is not currently resident (for
example, the page could have been swapped out).

 The program accesses a virtual address, and the translation is found in the TLB (a
TLB hit).
 However, when the page table is checked to ensure the page is in memory, the
page is not found in physical memory (for example, it may have been swapped out
to disk).
 This causes a page fault, even though the TLB hit occurred because the page is
not currently in memory.


 The page is then swapped into memory from secondary storage, and once the page
is in memory, the program can resume execution.
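The four cases can be summarized by a toy lookup routine. The set-based TLB and dictionary-based page table are purely illustrative; the point is that the TLB outcome (hit or miss) and the residency of the page (fault or not) are independent dimensions:

```python
def classify(page, tlb, page_table):
    """Return (tlb_hit, page_fault) for a single reference."""
    hit = page in tlb                           # translation cached in the TLB?
    entry = page_table.get(page, {"present": False})
    fault = not entry["present"]                # page not resident: page fault
    return hit, fault
```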

3. Apply the (i) FIFO, (ii) LRU, and (iii) optimal (OPT) replacement algorithms for
the page-reference strings: 4, 2, 1, 7, 9, 8, 3, 5, 2, 6, 8, 1, 0, 7, 2, 4, 1, 3, 5, 8
Indicate the number of page faults for each algorithm assuming demand paging
with three frames.(Apr/May-2024)

Page Reference String: 4, 2, 1, 7, 9, 8, 3, 5, 2, 6, 8, 1, 0, 7, 2, 4, 1, 3, 5, 8

1. FIFO (First-In-First-Out) Algorithm:

 We replace the oldest page in memory when we have a page fault.

Step Reference Memory Frames Page Fault? Explanation


1 4 4 Yes 4 is not in memory, so we load it.
2 2 4, 2 Yes 2 is not in memory, so we load it.
3 1 4, 2, 1 Yes 1 is not in memory, so we load it.
4 7 2, 1, 7 Yes 4 is replaced with 7.
5 9 1, 7, 9 Yes 2 is replaced with 9.
6 8 7, 9, 8 Yes 1 is replaced with 8.
7 3 9, 8, 3 Yes 7 is replaced with 3.
8 5 8, 3, 5 Yes 9 is replaced with 5.
9 2 3, 5, 2 Yes 8 is replaced with 2.
10 6 5, 2, 6 Yes 3 is replaced with 6.
11 8 2, 6, 8 Yes 5 is replaced with 8.
12 1 6, 8, 1 Yes 2 is replaced with 1.
13 0 8, 1, 0 Yes 6 is replaced with 0.
14 7 1, 0, 7 Yes 8 is replaced with 7.
15 2 0, 7, 2 Yes 1 is replaced with 2.
16 4 7, 2, 4 Yes 0 is replaced with 4.
17 1 2, 4, 1 Yes 7 is replaced with 1.
18 3 4, 1, 3 Yes 2 is replaced with 3.
19 5 1, 3, 5 Yes 4 is replaced with 5.
20 8 3, 5, 8 Yes 1 is replaced with 8.

Number of page faults for FIFO: 20


2. LRU (Least Recently Used) Algorithm:

 We replace the least recently used page when a page fault occurs.

Step Reference Memory Frames Page Fault? Explanation


1 4 4 Yes 4 is not in memory, so we load it.
2 2 4, 2 Yes 2 is not in memory, so we load it.
3 1 4, 2, 1 Yes 1 is not in memory, so we load it.
4 7 2, 1, 7 Yes 4 is replaced with 7.
5 9 1, 7, 9 Yes 2 is replaced with 9.
6 8 7, 9, 8 Yes 1 is replaced with 8.
7 3 9, 8, 3 Yes 7 is replaced with 3.
8 5 8, 3, 5 Yes 9 is replaced with 5.
9 2 3, 5, 2 Yes 8 is replaced with 2.
10 6 5, 2, 6 Yes 3 is replaced with 6.
11 8 2, 6, 8 Yes 5 is replaced with 8.
12 1 6, 8, 1 Yes 2 is replaced with 1.
13 0 8, 1, 0 Yes 6 is replaced with 0.
14 7 1, 0, 7 Yes 8 is replaced with 7.
15 2 0, 7, 2 Yes 1 is replaced with 2.
16 4 7, 2, 4 Yes 0 is replaced with 4.
17 1 2, 4, 1 Yes 7 is replaced with 1.
18 3 4, 1, 3 Yes 2 is replaced with 3.
19 5 1, 3, 5 Yes 4 is replaced with 5.
20 8 3, 5, 8 Yes 1 is replaced with 8.

Number of page faults for LRU: 20

3. Optimal (OPT) Algorithm:

 We replace the page that will not be used for the longest time in the future.

Step Reference Memory Frames Page Fault? Explanation


1 4 4 Yes 4 is not in memory, so we load it.
2 2 4, 2 Yes 2 is not in memory, so we load it.
3 1 4, 2, 1 Yes 1 is not in memory, so we load it.
4 7 7, 2, 1 Yes 4 is replaced with 7 (4 is not needed until step 16).
5 9 9, 2, 1 Yes 7 is replaced with 9 (7 is not needed until step 14).
6 8 8, 2, 1 Yes 9 is replaced with 8 (9 is never used again).
7 3 8, 2, 3 Yes 1 is replaced with 3 (1 is not needed until step 12).
8 5 8, 2, 5 Yes 3 is replaced with 5 (3 is not needed until step 18).
9 2 8, 2, 5 No 2 is already in memory (page hit).
10 6 8, 2, 6 Yes 5 is replaced with 6 (5 is not needed until step 19).
11 8 8, 2, 6 No 8 is already in memory (page hit).
12 1 8, 2, 1 Yes 6 is replaced with 1 (6 is never used again).
13 0 0, 2, 1 Yes 8 is replaced with 0 (8 is not needed until step 20).
14 7 7, 2, 1 Yes 0 is replaced with 7 (0 is never used again).
15 2 7, 2, 1 No 2 is already in memory (page hit).
16 4 4, 2, 1 Yes 7 is replaced with 4 (neither 7 nor 2 is used again; either may be evicted).
17 1 4, 2, 1 No 1 is already in memory (page hit).
18 3 3, 2, 1 Yes 4 is replaced with 3 (4 is never used again).
19 5 5, 2, 1 Yes 3 is replaced with 5 (3 is never used again).
20 8 8, 2, 1 Yes 5 is replaced with 8 (5 is never used again).

Number of page faults for OPT: 16 (4 page hits, at steps 9, 11, 15 and 17). Unlike FIFO and LRU, OPT is able to keep pages 2, 8 and 1 resident long enough to hit on them.
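The optimal policy can also be simulated directly by always evicting the resident page whose next use lies farthest in the future. For this string with three frames, OPT incurs 16 faults (hits on pages 2, 8, 2 and 1 at steps 9, 11, 15 and 17), fewer than FIFO and LRU. The sketch below is illustrative (Python and the helper names are assumptions):

```python
def opt_faults(refs, nframes):
    """Count page faults under optimal (Belady) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                              # page hit, nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        def next_use(p):
            # Position of the page's next reference, or past-the-end if it
            # is never referenced again.
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return len(refs)
        victim = max(frames, key=next_use)        # farthest future use
        frames[frames.index(victim)] = page
    return faults

refs = [4, 2, 1, 7, 9, 8, 3, 5, 2, 6, 8, 1, 0, 7, 2, 4, 1, 3, 5, 8]
print(opt_faults(refs, 3))   # 16
```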

4. Consider the following page-reference string:
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 2 3 4
(i) How many page hits would occur under each of the following page-replacement algorithms, assuming four page frames? Remember that all frames are initially empty, so your first unique pages will all cost one fault each.
 Least-Recently-Used
 First-In-First-Out replacement
 Optimal replacement
(ii) Calculate the hit ratio for each of the above algorithms.
(iii) Which algorithm is the best for the above case and why? (Nov/Dec-2024)

Page Reference String:

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 2 3 4

Number of frames: 4

(i) Calculating the number of page hits:

1. Least-Recently-Used (LRU):

LRU replaces the page that hasn't been used for the longest period of time.

 Initial State: All frames are empty.


 Page references are processed one by one, and if the page is not already in the
frame, it results in a page fault (and the page is placed in a frame).
 LRU strategy: If the page is already in the frame, it counts as a hit, otherwise, we
replace the least recently used page.

Steps for LRU:



1. 7 → [7] → Fault
2. 0 → [7, 0] → Fault
3. 1 → [7, 0, 1] → Fault
4. 2 → [7, 0, 1, 2] → Fault
5. 0 → [7, 0, 1, 2] → Hit
6. 3 → [0, 1, 2, 3] → Fault (7 replaced; it is the least recently used)
7. 0 → [0, 1, 2, 3] → Hit
8. 4 → [0, 2, 3, 4] → Fault (1 replaced)
9. 2 → [0, 2, 3, 4] → Hit
10. 3 → [0, 2, 3, 4] → Hit
11. 0 → [0, 2, 3, 4] → Hit
12. 3 → [0, 2, 3, 4] → Hit
13. 2 → [0, 2, 3, 4] → Hit
14. 1 → [0, 3, 2, 1] → Fault (4 replaced)
15. 2 → [0, 3, 2, 1] → Hit
16. 0 → [0, 3, 2, 1] → Hit
17. 1 → [0, 3, 2, 1] → Hit
18. 7 → [2, 0, 1, 7] → Fault (3 replaced)
19. 0 → [2, 0, 1, 7] → Hit
20. 1 → [2, 0, 1, 7] → Hit
21. 2 → [2, 0, 1, 7] → Hit
22. 3 → [0, 1, 2, 3] → Fault (7 replaced)
23. 4 → [1, 2, 3, 4] → Fault (0 replaced)


LRU Hits: 13

2. First-In-First-Out (FIFO):

FIFO replaces the oldest page (the first one that was loaded) when a new page needs to
be placed in a full frame.

Steps for FIFO:

1. 7 → [7] → Fault
2. 0 → [7, 0] → Fault
3. 1 → [7, 0, 1] → Fault
4. 2 → [7, 0, 1, 2] → Fault
5. 0 → [7, 0, 1, 2] → Hit
6. 3 → [0, 1, 2, 3] → Fault (7 replaced; it entered the frames first)
7. 0 → [0, 1, 2, 3] → Hit
8. 4 → [1, 2, 3, 4] → Fault (0 replaced)
9. 2 → [1, 2, 3, 4] → Hit
10. 3 → [1, 2, 3, 4] → Hit
11. 0 → [2, 3, 4, 0] → Fault (1 replaced)
12. 3 → [2, 3, 4, 0] → Hit
13. 2 → [2, 3, 4, 0] → Hit
14. 1 → [3, 4, 0, 1] → Fault (2 replaced)
15. 2 → [4, 0, 1, 2] → Fault (3 replaced)
16. 0 → [4, 0, 1, 2] → Hit
17. 1 → [4, 0, 1, 2] → Hit
18. 7 → [0, 1, 2, 7] → Fault (4 replaced)
19. 0 → [0, 1, 2, 7] → Hit
20. 1 → [0, 1, 2, 7] → Hit
21. 2 → [0, 1, 2, 7] → Hit
22. 3 → [1, 2, 7, 3] → Fault (0 replaced)
23. 4 → [2, 7, 3, 4] → Fault (1 replaced)

FIFO Hits: 11

3. Optimal Replacement:

Optimal replacement replaces the page that will not be needed for the longest period of
time in the future.

Steps for Optimal Replacement:

1. 7 → [7] → Fault
2. 0 → [7, 0] → Fault
3. 1 → [7, 0, 1] → Fault
4. 2 → [7, 0, 1, 2] → Fault
5. 0 → [7, 0, 1, 2] → Hit
6. 3 → [3, 0, 1, 2] → Fault (7 replaced; 7 is not needed until step 18)
7. 0 → [3, 0, 1, 2] → Hit
8. 4 → [3, 0, 4, 2] → Fault (1 replaced; 1 is not needed until step 14)
9. 2 → [3, 0, 4, 2] → Hit
10. 3 → [3, 0, 4, 2] → Hit
11. 0 → [3, 0, 4, 2] → Hit
12. 3 → [3, 0, 4, 2] → Hit
13. 2 → [3, 0, 4, 2] → Hit
14. 1 → [3, 0, 1, 2] → Fault (4 replaced; 4 is not needed until step 23)
15. 2 → [3, 0, 1, 2] → Hit
16. 0 → [3, 0, 1, 2] → Hit
17. 1 → [3, 0, 1, 2] → Hit
18. 7 → [7, 0, 1, 2] → Fault (3 replaced; 3 is not needed until step 22)
19. 0 → [7, 0, 1, 2] → Hit
20. 1 → [7, 0, 1, 2] → Hit
21. 2 → [7, 0, 1, 2] → Hit
22. 3 → [3, 0, 1, 2] → Fault (7 replaced; none of the resident pages is needed before step 23)
23. 4 → [3, 0, 1, 4] → Fault (2 replaced; at the end of the string any victim is optimal)

Optimal Hits: 13
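All three hit counts can be verified with one small simulator. The sketch below is illustrative (Python, the function name, and the policy strings are assumptions, not part of the question):

```python
def count_hits(refs, nframes, policy):
    """Count page hits under 'fifo', 'lru', or 'opt' replacement."""
    frames = []        # resident pages; for fifo/lru the front is the next victim
    hits = 0
    for i, page in enumerate(refs):
        if page in frames:
            hits += 1
            if policy == "lru":            # a hit refreshes recency
                frames.remove(page)
                frames.append(page)
            continue
        if len(frames) == nframes:         # page fault with all frames full
            if policy == "opt":            # evict the page used farthest ahead
                def next_use(p):
                    return refs.index(p, i + 1) if p in refs[i + 1:] else len(refs)
                frames.remove(max(frames, key=next_use))
            else:                          # fifo and lru both evict the front
                frames.pop(0)
        frames.append(page)
    return hits

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1, 2, 3, 4]
for policy in ("fifo", "lru", "opt"):
    print(policy, count_hits(refs, 4, policy))   # fifo 11, lru 13, opt 13
```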

(ii) Calculating the hit ratio for each algorithm:

 LRU Hit Ratio: 13 hits / 23 references = 13/23 ≈ 0.565

 FIFO Hit Ratio: 11 hits / 23 references = 11/23 ≈ 0.478

 Optimal Hit Ratio: 13 hits / 23 references = 13/23 ≈ 0.565

(iii) Which algorithm is the best?

 Best Algorithm: LRU and Optimal are tied here, each with a hit ratio of approximately 0.565. Optimal is the theoretical best, since no algorithm can produce fewer page faults, but it requires knowledge of future page accesses, which makes it impractical in real systems.
 Why: Both perform equally well on this specific reference string, but LRU is preferred for real-world applications because it approximates Optimal using only past information and is feasible to implement.


ANNA UNIVERSITY QUESTIONS


APRIL/MAY- 2023
PART A

1. What is swapping? (Q.No: 6)


2. Define thrashing.( Q.No:30 )

PART B
1. 13) a) What is paging? Elaborate paging with an example and a diagram. (Q.No:3)
2. 13) b) Explain first in first out page replacement algorithm and optimal page replacement algorithm with an
example and diagrams. (Q.No:12)

ANNA UNIVERSITY QUESTIONS


NOV/DEC- 2023
PART A

1. What is thrashing? (Q. NO : 30)


2. List the advantages of demand paging. (Q.NO: 13)

PART B

1. Explain the need and concept of paging technique in memory management. (Q.NO:3)

2. Consider the page reference string : 1 2 3 4 1 3 0 1 2 4 1 and 3 page frames. Find the page faults, hit
ratio and miss ratio using FIFO, optimal page replacement and LRU schemes.(Q.NO:1[case study])


ANNA UNIVERSITY QUESTIONS


APRIL/MAY- 2024
PART A
1. What is the purpose of paging the page tables?(Q.No.33)
2. Define the benefits of virtual memory.(Q.No:34)

PART B
1. Explain the difference between internal and external fragmentation.(part-A-Q.No.23)
2. On a system with paging, a process cannot access memory that it does not own. Why? How could the
Operating system allows access to additional memory? Why should it or should it not?(Q.No.22)
3. Illustrate how pages are loaded into memory using demand paging.(Q.No.8)
4. Under what circumstances do page faults occur? Describe the actions taken by the operating system when
a page fault occurs.(Q.No.23)
PART - C
1. Assume that a program has just referenced an address in virtual memory. Describe a scenario in
which each of the following can occur.(Q.No.2)[Case Study]
(If no such scenario can occur, explain why)
(i) TLB miss with no page fault
(ii) TLB miss with page fault
(iii) TLB hit with no page fault
(iv) TLB hit with page fault
2. Apply the (i) FIFO, (ii) LRU, and (iii) optimal (OPT) replacement algorithms for the page-reference
strings:(Q.No.3)[Case Study]
4, 2, 1, 7, 9, 8, 3, 5, 2, 6, 8, 1, 0, 7, 2, 4, 1, 3, 5, 8
Indicate the number of page faults for each algorithm assuming demand paging with three frames.


ANNA UNIVERSITY QUESTIONS


NOV/DEC- 2024
PART A

1. What is address binding?(Q.No.35)


2. What is page fault, and how is it handled?(Q.No.31)

PART B

1. Explain paging scheme of memory management.(Q.No.3)


2. Discuss about the different memory allocation techniques.(Q.No.24)
3. Consider the following page-reference string(Q.No;4)[case study]
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 2 3 4.
How many Page hits would occur in the following page replacement algorithms, assuming four-page
frames? Remember that all frames are initially empty, so your first unique pages will all cost one
fault each.
 Least-Recently-Used
 First-In-First-Out replacement
 Optimal replacement.
(ii) Calculate the hit ratio for each of the above algorithms.
(iii) Which algorithm is the best for the above case and why?



MAILAM ENGINEERING COLLEGE CS3451-Introduction to Operating System UNIT-IV

UNIT IV

I/O SYSTEMS
Mass Storage system – Disk Structure - Disk Scheduling and Management; File-System Interface - File concept - Access
methods - Directory Structure - Directory organization - File system mounting - File Sharing and Protection; File System
Implementation - File System Structure - Directory implementation - Allocation Methods - Free Space Management; I/O
Systems – I/O Hardware, Application I/O interface, Kernel I/O subsystem
PART A

1. Define bus.
A bus is a set of wires and a rigidly defined protocol that specifies a set of messages that can be
sent on the wires. When device A has a cable that plugs into device B, and device B has a cable that plugs
into device C, and device C plugs into a port on the computer, this arrangement is called a daisy chain. A
daisy chain usually operates as a bus.

2. Define controller.
A controller is a collection of electronics that can operate a port, a bus, or a device. A serial port
controller is a simple device controller. It is a single chip in the computer that controls the signals on the wires
of a serial port.

3. How can the processor give commands and data to a controller to accomplish an I/O transfer?
The short answer is that the controller has one or more registers for data and control signals. The processor
communicates with the controller by reading and writing bit patterns in these registers.

4. What are the registers of I/O Port?


 The status register contains bits that can be read by the host.
 The control register can be written by the host to start a command or to change the mode of a device.
 The data-in register is read by the host to get input.
 The data-out register is written by the host to send output.

5. What is polling?
A communications technique that determines when a terminal is ready to send data. The computer
continually interrogates its connected terminals in a round robin sequence. If a terminal has data to send, it
sends back an acknowledgment and the transmission begins. Contrast with an interrupt-driven system, in
which the terminal generates a signal when it has data to send.

PREPARED BY: Mr.D.Srinivasan,AP/CSE , Mrs.A.Thilagavathi,AP/CSE & Mr.R.Arunkumar, AP/CSE 1



6. What are the advantages of DMA?


 DMA allows a peripheral device to read from/write to memory without going through the CPU.
 DMA allows for faster processing since the processor can be working on something else while the
peripheral can be populating memory.

7. What are the responsibilities of DMA Controller?


In DMA, as the name suggests, memory can be accessed directly by the I/O module. The DMA controller manages the transfer of data between memory and the device, overcoming the drawback of programmed I/O and interrupt-driven I/O, where the CPU is responsible for extracting data from memory for output and storing data into memory for input.

8. Define maskable and nonmaskable interrupt.

Nonmaskable interrupt: It is reserved for events such as unrecoverable memory errors and cannot be disabled by the CPU.

Maskable interrupt: It can be turned off (masked) by the CPU before the execution of critical instruction sequences that must not be interrupted. The maskable interrupt is used by device controllers to request service.

9. Define trap.[ Nov/Dec 2012]


Trap instruction has an operand that identifies the desired kernel service. When the system call executes
the trap instruction, the interrupt hardware saves the state of the user code, switches to supervisor mode, and
dispatches to the kernel routine that implements the requested service. The trap is given a relatively low
interrupt priority compared to those assigned to device interrupts; executing a system call on behalf of an
application is less urgent than servicing a device controller before its FIFO overflows and loses data.

10. What are the characteristics of I/O device?


aspect             variation                                  example
data-transfer mode character, block                           terminal, disk
access method      sequential, random                         modem, CD-ROM
transfer schedule  synchronous, asynchronous                  tape, keyboard
sharing            dedicated, sharable                        tape, keyboard
device speed       latency, seek time, transfer rate,
                   delay between operations

11. Write about blocking I/O and non-blocking I/O.


A simple approach to I/O would be to start the access and then wait for it to complete. But such an approach
(called synchronous I/O or blocking I/O) would block the progress of a program while the communication is

in progress, leaving system resources idle. When a program makes many I/O operations, this means that the
processor can spend almost all of its time idle waiting for I/O operations to complete.
It is possible instead to start the communication and then perform processing that does not require the I/O to have completed; this approach is called non-blocking (asynchronous) I/O.

12. What are the I/O services provided by Kernel?


Kernels provide many services related to I/O. Several services are
 Scheduling
 Caching
 Spooling
 Device reservation
 Error handling

13. What is double buffering? [Nov/Dec 13]


An improvement over single buffering can be done by assigning two system buffers to operation. A
process now transfers data to one buffer while the operating system empties the other. This technique is
known as double buffering or buffer swapping.
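The buffer-swapping idea can be sketched with exactly two buffers passed between a producer (standing in for the process) and a consumer (standing in for the operating system). This is an illustrative simulation; Python threads and all names here are assumptions:

```python
import threading
import queue

free = queue.Queue()      # buffers available to the producer
full = queue.Queue()      # filled buffers waiting for the consumer
for _ in range(2):        # exactly two buffers: "double" buffering
    free.put(bytearray(8))

def producer(data_blocks):
    for block in data_blocks:
        buf = free.get()              # may overlap with the consumer's work
        buf[:len(block)] = block      # fill one buffer while the other drains
        full.put((buf, len(block)))
    full.put(None)                    # end-of-stream marker

def consumer(out):
    while True:
        item = full.get()
        if item is None:
            break
        buf, n = item
        out.append(bytes(buf[:n]))    # "process" the data
        free.put(buf)                 # hand the emptied buffer back

received = []
t = threading.Thread(target=producer, args=([b"alpha", b"beta", b"gamma"],))
t.start()
consumer(received)
t.join()
print(received)
```

Because two buffers circulate, the producer can fill one while the consumer is still emptying the other, which is exactly the overlap double buffering provides.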

14. Define spooling.


A spool is a buffer that holds output for a device, such as printer, that cannot accept interleaved data
streams. When an application finishes printing, the spooling system queues the corresponding spool file for
output to the printer. The spooling system copies the queued spool files to the printer one at a time.

15. What is rotational latency and Seek time? [Nov/Dec 2010] [Apr/May 2013]
Once the head is at right track, it must wait until the desired block rotates under the read- write head.
This delay is latency time or rotational latency. The time taken by the head to move to the appropriate
cylinder or track is called seek time.

16. Define buffering.


A buffer is a memory area that stores data while they are transferred between two devices or between a
device and an application. Buffering is done for three reasons
 To cope with a speed mismatch between the producer and consumer of a data stream
 To adapt between devices that have different data-transfer sizes.

17. What is the various disk-scheduling algorithms? Which disk scheduling algorithm would be best
to optimize the performance of a RAM disk?
The various disk-scheduling algorithms are
 First Come First Served Scheduling
 Shortest Seek Time First Scheduling
 SCAN Scheduling
 C-SCAN Scheduling
 LOOK scheduling
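Two of these policies can be contrasted numerically. The sketch below computes total head movement for FCFS and SSTF on the classic request queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53; Python and the helper name are assumptions:

```python
def total_seek(start, requests, policy="fcfs"):
    """Total head movement (in cylinders) under FCFS or SSTF scheduling."""
    pending = list(requests)
    head, moved = start, 0
    while pending:
        if policy == "fcfs":
            nxt = pending.pop(0)                       # serve in arrival order
        else:  # sstf: serve the closest pending request first
            nxt = min(pending, key=lambda c: abs(c - head))
            pending.remove(nxt)
        moved += abs(nxt - head)
        head = nxt
    return moved

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(total_seek(53, queue, "fcfs"))   # 640 cylinders
print(total_seek(53, queue, "sstf"))   # 236 cylinders
```

SSTF cuts the head movement from 640 to 236 cylinders on this queue, which is why minimizing seek time matters so much for throughput.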


18. Define disk bandwidth.


The disk bandwidth is the total number of bytes transferred, divided by the time between the first
request for service and the completion of the last transfer.
Bandwidth= total no of data transferred / total amount of time from the first request being
made to the last transfer being completed
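The formula translates directly into code. A tiny illustrative helper (the function name and the example figures are assumptions):

```python
def disk_bandwidth(bytes_transferred, first_request_time, last_completion_time):
    """Bandwidth = total bytes transferred / elapsed time, in bytes per second."""
    return bytes_transferred / (last_completion_time - first_request_time)

# e.g. 8 MB moved between t = 2.0 s and t = 6.0 s gives 2 MB/s
print(disk_bandwidth(8 * 1024 * 1024, 2.0, 6.0))   # 2097152.0
```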

19. What is the use of boot block?


For a computer to start running when powered up or rebooted, it needs an initial program to run. This bootstrap program tends to be simple. It finds the operating system on the disk, loads that kernel into memory, and jumps to an initial address to begin operating-system execution.
The full bootstrap program is stored in a partition called the boot blocks, at fixed location on the disk. A disk
that has boot partition is called boot disk or system disk.

20. What is low-level formatting?


Before a disk can store data, it must be divided into sectors that the disk controller can read and write.
This process is called low-level formatting or physical formatting. Low-level formatting fills the disk with a
special data structure for each sector. The data structure for a sector consists of a header, a data area, and a
trailer.

21. What is sector sparing?


Low-level formatting also sets aside spare sectors not visible to the operating system. The controller can
be told to replace each bad sector logically with one of the spare sectors. This scheme is known as sector
sparing.

22. What is a stream?


A stream is a full-duplex connection between a device driver and a user-level process. It consists of a
stream head that interfaces with the user process, a driver end that controls the device, and zero or more
stream modules between them. The stream head, the driver end, and each module contain a pair of queues- a
read queue and a write queue. Message passing is used to transfer data between queues.

23. What are the techniques used for performing I/O.


 Programmed I/O.
 Interrupt driven I/O.
 Direct Memory Access(DMA)

24. What is constant angular velocity?


The disk rotation speed can stay constant, and the density of bits decreases from inner tracks to outer
tracks to keep the data rate constant. This method is used in hard disks and is known as constant angular
velocity.

25. Why rotational latency is usually not considered in disk scheduling? (May/June 2016)
Most disks do not export their rotational position information to the host. Even if they did, the time for
this information to reach the scheduler would be subject to imprecision and the time consumed by the
scheduler is variable, so the rotational position information would become incorrect. Further, the disk requests
are usually given in terms of logical block numbers, and the mapping between logical blocks and physical
locations is very complex.

26. What is the need of Disk scheduling?[Apr/May 2010]


For a multiprogramming system with many processes, the disk queue may often have several pending
requests. When one request is completed, the operating system chooses which pending request to service next.
To reduce seek time and increase disk bandwidth disk scheduling is required.

27. Writable CD-ROM media are available in both 650 MB and 700 MB versions. What is the
principal disadvantage, other than cost, of the 700 Mb versions? [Nov/Dec 2011]
 The 700 MB version packs more data onto the disc by devoting less space per sector to error-detection and error-correction codes.
 The principal disadvantage is therefore a greater likelihood of read errors and data loss than with the 650 MB version.

28. Write the three basic functions which are provided by the hardware clocks and timers. [Apr/May
2011]
 Give the current time
 Give the elapsed time
 Set a timer to trigger operation X at time T

29. What characteristics determine the disk access speed? [Apr/May 2012]
Disk Bandwidth: Bandwidth= total no of data transferred / total amount of time from the first request being
made to the last transfer being completed.
Access Time: The access time or response time of a rotating drive is a measure of the time it takes before the
drive can actually transfer data.
The access time has two major components:
 Seek time
 Rotational latency


30. Draw a diagram for interrupt driven I/O cycle. [Nov/Dec 13]

35. What is a file? [Nov/Dec 2010]


A file is a named collection of related information that is recorded on secondary storage. A file contains either
programs or data. A file has certain "structure" based on its type.

36. List the various file attributes. [Apr/May 2014, 15 & 2021]
A file has certain other attributes, which vary from one operating system to another, but typically consist of
these: Name, identifier, type, location, size, protection, time, date and user identification

37.What are the various file operations? [Nov/Dec 2010] [Apr/May 2015]
The six basic file operations are
 Creating a file
 Writing a file
 Reading a file
 Repositioning within a file
 Deleting a file
 Truncating a file

38.What are the information associated with an open file?


Several pieces of information are associated with an open file which may be:
 File pointer
 File open count
 Disk location of the file
 Access rights

39. What are the different accessing methods of a file? [Nov/Dec 2010] [Apr/May 2010]
The different types of accessing a file are:
 Sequential access: Information in the file is accessed sequentially
 Direct access: Information in the file can be accessed without any particular order.
 Other access methods: Creating index for the file, indexed sequential access method (ISAM) etc.
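Sequential and direct access can be contrasted on a file of fixed-length records: sequential access reads records in order, while direct access seeks straight to record n at byte offset n × record-length. The record length, file name, and layout below are assumptions for illustration:

```python
import os
import tempfile

RECLEN = 16   # fixed record length in bytes (an assumption)

# Build a small file of five fixed-length records.
path = os.path.join(tempfile.mkdtemp(), "records.dat")
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"record-{i}".ljust(RECLEN).encode())

with open(path, "rb") as f:
    first = f.read(RECLEN)     # sequential access: read from the current position
    f.seek(3 * RECLEN)         # direct access: jump straight to record 3
    third = f.read(RECLEN)

print(first.decode().strip())   # record-0
print(third.decode().strip())   # record-3
```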

40. What is Directory?


The device directory or simply known as directory records information such as name, location, size, and type
for all files on that particular partition. The directory can be viewed as a symbol table that translates file
names into their directory entries.

41. What are the operations that can be performed on a directory?


The operations that can be performed on a directory are
 Search for a file
 Create a file
 Delete a file
 Rename a file
 List directory
 Traverse the file system

42. What are the most common schemes for defining the logical structure of a directory?
The most common schemes for defining the logical structure of a directory
 Single-Level Directory
 Two-level Directory
 Tree-Structured Directories
 Acyclic-Graph Directories
 General Graph Directory

43. Define UFD and MFD.


In the two-level directory structure, each user has own user file directory (UFD). Each UFD has a
similar structure, but lists only the files of a single user. When a job starts the system's master file
directory (MFD) is searched. The MFD is indexed by the user name or account number, and each entry
points to the UFD for that user.

44. What is a path name?


A pathname is the path from the root through all subdirectories to a specified file. In a two-level directory
structure, a user name and a file name define a path name.

45. What are the various layers of a file system?


The file system is composed of many different levels. Each level in the design uses the feature of the lower
levels to create new features for use by higher levels.
 Application programs

 Logical file system
 File-organization module
 Basic file system
 I/O control
 Devices

46. What are the structures used in file-system implementation?


Several on-disk and in-memory structures are used to implement a file system
a.On-disk structure include
 Boot control block
 Partition block
 Directory structure used to organize the files
 File control block (FCB)
b. In-memory structure include
 In-memory partition table
 In-memory directory structure
 System-wide open file table
 Per-process open table

47. What are the functions of virtual file system (VFS)? (or) Identify the two important functions
of VFS layer in the concept of file system implementation. (Nov/Dec 2015)
a. It separates file-system-generic operations from their implementation defining a clean VFS interface.
It allows transparent access to different types of file systems mounted locally.
b. VFS is based on a file representation structure, called a vnode. It contains a numerical value for a
network-wide unique file. The kernel maintains one vnode structure for each active file or directory.

48. Mention the objectives of file management systems. [Apr/May 2010]


 To describe the details of implementing local file systems and directory structures
 To describe the implementation of remote file systems
 To discuss block allocation and free-block algorithms and trade-offs

49. What are the allocation methods of a disk space?


Methods of allocating disk space which are widely in use are
 Contiguous allocation
 Linked allocation
 Indexed allocation

50. What are the advantages of Contiguous allocation?


The advantages are
 Supports direct access
 Supports sequential access
 Number of disk seeks is minimal.

51. What are the drawbacks of contiguous allocation of disk space?


The disadvantages are
 Suffers from external fragmentation
 Suffers from internal fragmentation
 Difficulty in finding space for a new file
 File cannot be extended
 Size of the file is to be declared in advance

52. What are the advantages of Linked allocation?


The advantages are
 No external fragmentation
 Size of the file does not need to be declared

53. What are the disadvantages of linked allocation?


The disadvantages are
 Used only for sequential access of files.
 Direct access is not supported
 Memory space required for the pointers.
 Reliability is compromised if the pointers are lost or damaged

54. What are the advantages of Indexed allocation?


The advantages are
 No external-fragmentation problem
 Solves the size-declaration problems.
 Supports direct access

55. How can the index blocks be implemented in the indexed allocation scheme?
The index block can be implemented as follows
 Linked scheme
 Multilevel scheme
 Combined scheme

56. What does the FCB (file control block) consists of? [Apr/May 2011 &14]
A FCB contains the file's information: ownership, permissions, and the location of the file contents.

57. What does the partition control block consist of? [Apr/May 2014]
It contains the partition details, such as the number of blocks in the partition, the block size, the free-block count and free-block pointers, and the free-FCB count and FCB pointers.

58. Give the advantages of bit vector.

1. It is relatively simple and efficient to find the first free block.
2. It is likewise efficient to find 'n' consecutive free blocks on the disk.
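The first-free-block search over a bitmap can be sketched as follows, assuming the common convention that bit value 1 means "free" and that the most significant bit of each word corresponds to the lowest-numbered block (all names and values here are illustrative):

```python
def first_free_block(bitmap_words, bits_per_word=8):
    """Free-space bitmap scan: return the first free block number, or -1.

    Whole words equal to 0 (no free blocks) are skipped; the first set bit
    inside the first non-zero word identifies the free block.
    """
    for w, word in enumerate(bitmap_words):
        if word == 0:
            continue                        # every block in this word is allocated
        for b in range(bits_per_word):      # MSB first = lowest block number
            if word & (1 << (bits_per_word - 1 - b)):
                return w * bits_per_word + b
    return -1

# Blocks 0-9 allocated, block 10 is the first free one (second word = 0010 0000).
print(first_free_block([0b00000000, 0b00100000]))   # 10
```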
59. Define recovery and Failure.
Recovery is the process of restoring a system to a consistent state. When the system's observed behavior differs from its specified behavior, that situation is called a failure.

60. What is NFS? [Nov/Dec 2012]


NFS views a set of interconnected workstations as a set of independent machines with
independent file systems. The goal is to allow some degree of sharing among these file systems (on
explicit request) in a transparent manner. Sharing is based on a client-server relationship. A machine
may be, and often is, both a client and a server. Sharing is allowed between any pair of machines, rather
than with only dedicated server machines.

61. Define Thrashing.


If a process does not have enough pages, the page-fault rate is very high. This leads to:
 Low CPU utilization.
 A process is busy swapping pages in and out.
 In other words, a process is spending more time paging than executing.

62. What is Garbage Collection? [Apr/May 2012]


Garbage collection involves traversing the entire file system, marking everything that can be
accessed. Then, a second pass collects everything that is not marked onto a list of free space. (A similar
marking procedure can be used to ensure that a traversal or search will cover everything in the file
system once and only once.) Garbage collection for a disk based file system, however, is extremely
time-consuming and is thus seldom attempted.

63. What is Relative block number? [Nov/Dec 2013]


Relative block numbers are four bytes (full word) binary numbers indicating the block number in the
file. The first block is block 0. This form of addressing can only be used with fixed length block.

64. Name any four common file types. [Nov/Dec 2012]
Common file types include: executable (.exe, .com, .bin), object (.obj, .o), source code (.c, .java, .asm), and text (.txt, .doc).


65. What are the responsibilities of file manager? [Apr/May 2013]

 Track where each file is stored.


 Determine where and how files will be stored.
 Allocate each file when a user has been cleared for access to it, then record its use.
 Deallocate file when it is returned to storage.

66. Define FAT. (or) Do FAT file system is advantageous? Why? [Nov/Dec 2013, 15 &2021]

The File Allocation Table, FAT, used by DOS is a variation of linked allocation, where all the links are
stored in a separate table at the beginning of the disk. The benefit of this approach is that the FAT table can be
cached in memory, greatly improving random access speeds.
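A FAT chain can be sketched as a table of "next block" links: entry i of the table gives the block that follows block i in the file, with a sentinel marking end-of-file. The block numbers below are illustrative, not from any real volume:

```python
EOF = -1   # end-of-file marker (an assumption; real FATs use a reserved value)

# Toy FAT: fat[i] holds the number of the block that follows block i.
fat = {217: 618, 618: 339, 339: EOF}

def file_blocks(start_block, fat):
    """Follow the FAT chain from a file's starting block."""
    chain, block = [], start_block
    while block != EOF:
        chain.append(block)
        block = fat[block]   # the next link lives in the table, not the block
    return chain

print(file_blocks(217, fat))   # [217, 618, 339]
```

Because every link lives in the table rather than in the data blocks themselves, caching the whole table in memory lets random access proceed without extra disk reads.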

67. Mention the two approaches to identify and reuse free memory area in a heap. [Apr/May 2013]

 First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the
set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole
that is large enough.
 Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is
kept ordered by size. This strategy produces the smallest leftover hole.
 Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size.
This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from
a best-fit approach.
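The three strategies can be contrasted on the classic hole list 100K, 500K, 200K, 300K, 600K with a 212K request. The helper below is an illustrative sketch (Python, the function name, and the figures are assumptions):

```python
def choose_hole(holes, request, strategy):
    """Return the index of the hole to allocate from, or -1 if none fits."""
    candidates = [i for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return -1
    if strategy == "first":
        return candidates[0]                            # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda i: holes[i])  # smallest adequate hole
    return max(candidates, key=lambda i: holes[i])      # worst: largest hole

holes = [100, 500, 200, 300, 600]          # hole sizes in KB
print(choose_hole(holes, 212, "first"))    # 1  (the 500 KB hole)
print(choose_hole(holes, 212, "best"))     # 3  (the 300 KB hole)
print(choose_hole(holes, 212, "worst"))    # 4  (the 600 KB hole)
```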

68. Define ZFS.


 Oracle's ZFS file system (found in Solaris and other operating systems) was designed to encompass huge numbers of files, directories, and even file systems (in ZFS, we can create file-system hierarchies). In its management of free space, ZFS uses a combination of techniques to control the size of its data structures and to minimize the I/O needed to manage them.
 First, ZFS creates metaslabs to divide the space on the device into chunks of manageable size. A given volume may contain hundreds of metaslabs.
 Each metaslab has an associated space map. ZFS uses the counting algorithm to store information about free blocks.

69. How does DMA increase system concurrency? (May/June 2016)


DMA increases system concurrency by allowing the CPU to perform tasks while the DMA system
transfers data via the system and memory buses. Hardware design is complicated because the DMA
controller must be integrated into the system, and the system must allow the DMA controller to be a bus
master. Cycle stealing may also be necessary to allow the CPU and DMA controller to share use of the
memory bus.

70.Write about file and directory. (May/June 17)

Directory: A directory is a collection of files stored as a group under a single name. Directories can be classified into two types.

Root directory: The root is the parent of all other directories; the main file system (/) is the root directory.

Subdirectory: These are the directories that come under the root directory in the hierarchy; in general, a directory within a directory is called a subdirectory.

File: A file is a collection of data items stored on a disk. It can hold information such as data, music (mp3, ogg), photographs, movies, sounds, or books; essentially, everything stored in a system is a file. Files are always associated with devices such as hard disks or floppy disks. A file is the leaf object in the file-system tree.

71. Write short notes on file system mounting. (April/May-2019)


Mounting is the process by which the operating system makes the files and directories on a storage device (such as a hard drive, CD-ROM, or network share) available for users to access through the computer's file system.

72.What is SSD? (April/May-2019)


SSD stands for Solid State Drive. The HDD or SSD is the hardware component in a computer that stores data. The operating system (usually Windows on PCs and macOS on Apple computers) is installed on the drive, allowing the computer to boot into an interface that the user can navigate.

73.What is Sequential access? (April/May-2023)


The simplest access method is sequential access. Information in the file is processed in order, one record
after the other. This mode of access is by far the most common; for example, editors and compilers usually
access files in this fashion.

74. Define an immutable shared file. (April/May-2023)


 Once a file is declared as shared by its creator, it cannot be modified.
 The contents of an immutable file cannot be altered.
 The files are read-only.
75. Give the role of operating system in free space management. (Nov/Dec-2023)
The operating system manages the free space in the hard disk. This is known as free space management in
operating systems.
The OS maintains a free space list to keep track of the free disk space. The free space list consists of all free
disk blocks that are not allocated to any file or directory.

76. List the various file access method. (Nov/Dec-2023)


There are three file access methods in an OS:
1. Sequential access
2. Direct access
3. Indexed sequential access
77. Write short notes on free space management? (April/May-2024)

Free Space Management in operating systems involves keeping track of unused space on storage
devices to efficiently allocate and deallocate space for files. The key methods are:

1. Bitmap: A binary representation where each bit corresponds to a block; 0 indicates free space, and 1
indicates used space. It provides efficient tracking of free blocks.
2. Linked List: Free blocks are linked together, where each free block points to the next free block.
This method is flexible but less efficient than a bitmap in terms of access speed.
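Following the bitmap convention above (0 = free, 1 = used), a minimal allocator sketch looks like this:

```python
def first_free_block(bitmap):
    """Index of the first free block (bit 0), or None if the disk is full."""
    for i, bit in enumerate(bitmap):
        if bit == 0:
            return i
    return None

bitmap = [1, 1, 0, 1, 0, 0]       # blocks 2, 4, and 5 are free
block = first_free_block(bitmap)  # finds block 2
bitmap[block] = 1                 # allocate it: mark the bit as used
```

Real implementations scan a machine word at a time rather than a bit at a time, which is what makes the bitmap fast in practice.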

78. State the functions of file system? (April/May-2024)

The functions of a file system are:

1. File Creation and Deletion: It manages the creation, naming, and deletion of files on storage devices.
2. File Access and Manipulation: It provides mechanisms for reading, writing, and updating files, as
well as organizing them for easy access.

79. Name the three methods of allocating disk space for file systems?( NOV / DEC-2024)
The three methods of allocating disk space for file systems are:
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation

80. List the operations that can be performed on the directory? ( NOV / DEC-2024)

Operations that can be performed on a directory include:

1. Creation - Adding a new directory to the file system.


2. Deletion - Removing an existing directory and its contents.
3. Renaming - Changing the name of a directory.
4. Listing - Displaying the contents of the directory.
5. Traversal - Navigating through the directory hierarchy to access files or subdirectories.
6. Searching - Locating a specific file or subdirectory within the directory.
7. Opening/Closing - Accessing the directory to perform operations and then closing it.
8. Modifying Permissions - Changing access control for the directory.


PART B

1. Explain about FCFS, SSTF, CSCAN and CLOCK disk scheduling algorithms with an example for
each. (or) Discuss disk scheduling Algorithms in detail. (Nov/Dec-2019)
Consider a disk queue with requests for i/o to blocks on cylinders in this following order.
98,183,37,122,14,124,65,67
The disk head pointer is initially at cylinder 53. Outline first come first serve disk scheduling algorithm,
SCAN disk scheduling algorithms and Shortest seek time first disk scheduling algorithm with a
diagram.(April/May-2023)

Disk transfer speeds are limited primarily by seek time and rotational latency. For a series of disk requests, bandwidth is measured as the amount of data transferred divided by the total time between the first request being made and the last transfer being completed.

FCFS Scheduling:

• First-Come, First-Served scheduling is simple and intrinsically fair, but not very efficient, as shown in Fig 4.1.
• Consider, in the following sequence, the wild swing from cylinder 122 to 14 and then back to 124:

Fig 4.1: FCFS Scheduling

SSTF Scheduling

• Shortest Seek Time First scheduling is more efficient, but may lead to starvation if a constant stream of requests arrives for the same general area of the disk, as shown in Fig 4.2.
• SSTF reduces the total head movement to 236 cylinders, down from the 640 required for the same set of requests under FCFS. The distance could be reduced still further, to 208, by servicing 37 and then 14 first, before processing the rest of the requests.

Fig 4.2: SSTF Scheduling
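Both totals can be checked with a short simulation of the request queue from the worked example (head at cylinder 53; queue 98, 183, 37, 122, 14, 124, 65, 67):

```python
def fcfs_movement(head, requests):
    """Total head movement when requests are serviced in arrival order."""
    total = 0
    for cyl in requests:
        total += abs(head - cyl)
        head = cyl
    return total

def sstf_movement(head, requests):
    """Total head movement when the closest pending request is serviced next."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda cyl: abs(head - cyl))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
# Starting at cylinder 53: FCFS moves 640 cylinders in total, SSTF only 236.
```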

SCAN Scheduling

• In the SCAN algorithm (the elevator algorithm), the disk arm moves back and forth from one end of the disk to the other, similarly to an elevator servicing requests in a tall building, as shown in Fig 4.3.

Fig 4.3: SCAN Scheduling

 Under the SCAN algorithm, if a request arrives just ahead of the moving head, then it will be processed
right away, but if it arrives just after the head has passed, then it will have to wait for the head to pass
going the other way on the return trip.

C-SCAN Scheduling:

• The Circular SCAN (C-SCAN) algorithm improves upon SCAN by treating all requests in a circular-queue fashion: once the head reaches the end of the disk, it returns to the other end without processing any requests, and then starts again from the beginning of the disk, as shown in Fig 4.4.

Fig 4.4 : C-SCAN Scheduling

LOOK Scheduling:

• LOOK scheduling improves upon SCAN by looking ahead at the queue of pending requests and not moving the head any farther toward the end of the disk than is necessary, as shown in Fig 4.5.

Fig 4.5: LOOK Scheduling
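For the same worked example (head at 53, moving toward cylinder 0 first), the total travel of SCAN and LOOK can be sketched as follows; the choice of initial direction is an assumption:

```python
def scan_down(head, requests):
    """SCAN heading toward cylinder 0: sweep to the disk edge, then reverse."""
    upper = [cyl for cyl in requests if cyl > head]
    return head + (max(upper) if upper else 0)

def look_down(head, requests):
    """LOOK heading toward cylinder 0: reverse at the last pending request."""
    lower = [cyl for cyl in requests if cyl < head]
    upper = [cyl for cyl in requests if cyl > head]
    turn = min(lower) if lower else head
    return (head - turn) + ((max(upper) - turn) if upper else 0)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
# SCAN from 53 travels 53 + 183 = 236 cylinders;
# LOOK reverses at cylinder 14 instead of 0 and travels only 208.
```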


2. Brief the various procedure need to be followed in disk management.(April/May-2019)

Disk Formatting:

Before a disk can be used, it has to be low-level formatted, which means laying down all of the headers and trailers demarcating the beginning and end of each sector.

• Included in the header and trailer are the linear sector number and error-correcting codes (ECC), which allow damaged sectors not only to be detected but, in many cases, the damaged data to be recovered (depending on the extent of the damage). Sector sizes are traditionally 512 bytes but may be larger, particularly in larger drives.
• ECC calculation is performed with every disk read or write; if damage is detected but the data is recoverable, a soft error has occurred. Soft errors are generally handled by the on-board disk controller and never seen by the OS.
• The disk must then be partitioned into one or more groups of cylinders, each of which the operating system can treat as a separate logical disk.
• After partitioning, the file systems must be logically formatted, which involves laying down the master directory information.

Boot Block:

• Computer ROM contains a bootstrap program (OS-independent) with just enough code to find the first sector on the first hard drive on the first controller, load that sector into memory, and transfer control to it.
• The first sector on the hard drive is known as the Master Boot Record (MBR) and contains a very small amount of code in addition to the partition table.
• The partition table documents how the disk is partitioned into logical disks and indicates specifically which partition is the active, or boot, partition.
• The boot program then looks to the active partition to find an operating system, possibly loading a slightly larger, more advanced boot program along the way, as shown in Fig 4.6.
• In a dual-boot (or larger multi-boot) system, the user may be given a choice of which operating system to boot, with a default action taken in the event of no response within some time frame.


Fig 4.6: Boot Block

Bad Blocks:

 No disk can be manufactured to 100% perfection, and all physical objects wear out over time. For these
reasons all disks are shipped with a few bad blocks, and additional blocks can be expected to go bad
slowly over time. If a large number of blocks go bad then the entire disk will need to be replaced, but a
few here and there can be handled through other means.
 In the old days, bad blocks had to be checked for manually. Formatting of the disk or running certain
disk-analysis tools would identify bad blocks, and attempt to read the data off of them one last time
through repeated tries.
 Then the bad blocks would be mapped out and taken out of future service. Sometimes the data could be
recovered, and sometimes it was lost forever.

3. Explain about File Concepts. (or) Discuss the functions of file and file implementation. (Apr/May
2012] [Nov/Dec 2012 & 15]

File:
A file is a named collection of related information that is recorded on secondary storage. Data files may be numeric, alphabetic, alphanumeric, or binary. Files may be free-form, such as text files, or may be rigidly formatted.

A file has a certain defined structure according to its type.

 A text file is a sequence of characters organized into lines (and possibly pages).
 A source file is a sequence of subroutines and functions, each of which is further organized as
declarations followed by executable statements.
 An object file is a sequence of bytes organized into blocks understandable by the system's linker.
 An executable file is a series of code sections that the loader can bring into memory and execute.

File Attributes:

A file has certain other attributes, which vary from one operating system to another but typically consist of these:
• Name: The symbolic file name is the only information kept in human-readable form.
• Identifier: This unique tag, usually a number, identifies the file within the file system; it is the non-human-readable name for the file.
 Type: This information is needed for those systems that support different types.
 Location: This information is a pointer to a device and to the location of the file on that device.
 Size: The current size of the file (in bytes, words, or blocks), and possibly the maximum allowed size are
included in this attribute.
 Protection: Access-control information determines who can do reading, writing, executing, and so on.
 Time, date, and user identification: This information may be kept for creation, last modification, and last
use. These data can be useful for protection, security, and usage monitoring. The information about all files
is kept in the directory structure that also resides on secondary storage.

File Operations:

The operating system can provide system calls to create, write, read, reposition, delete, and truncate files.
 Creating a file: Two steps are necessary to create a file. First, space in the file system must be found for the
file. Second, an entry for the new file must be made in the directory.
 Writing a file: To write a file, make a system call specifying both the name of the file and the information to
be written to the file. Given the name of the file, the system searches the directory to find the location of the
file.
 Reading a file: To read from a file, use a system call that specifies the name of the file and where (in
memory) the next block of the file should be put. Again, the directory is searched for the associated directory
entry, and the system needs to keep a read pointer to the location in the file where the next read is to take
place.

• Repositioning within a file: The directory is searched for the appropriate entry, and the current-file-position is set to a given value. Repositioning within a file does not need to involve any actual I/O. This file operation is also known as a file seek.
• Deleting a file: The directory is searched for the named file. Having found the associated directory entry, the file's space is released and the directory entry erased.
• Truncating a file: The contents of the file are erased, but its attributes remain. The file length is reset to zero, and its file space is released.
These six basic operations certainly comprise the minimal set of required file operations.
The operating system keeps a small table containing information about all open files (the open-file table).

• File pointer: On systems that do not include a file offset as part of the read and write system calls, the system must track the last read-write location as a current-file-position pointer. This pointer is unique to each process operating on the file and therefore must be kept separate from the on-disk file attributes.
 File open count: As files are closed, the operating system must reuse its open-file table entries, or it could
run out of space in the table.
 Disk location of the file: Most file operations require the system to modify data within the file. The
information needed to locate the file on disk is kept in memory to avoid having to read it from disk for each
operation.
 Access rights: Each process opens a file in an access mode. This information is stored on the per-process
table so the operating system can allow or deny subsequent I/O requests.

File Types:

 Windows (and some other systems) use special file extensions to indicate the type of each file
shown in Table 4.1:

Table 4.1 :Common File Types

File Structure:

 Some files contain an internal structure, which may or may not be known to the OS.
 For the OS to support particular file formats increases the size and complexity of the OS.
 UNIX treats all files as sequences of bytes, with no further consideration of the internal structure.
(With the exception of executable binary programs, which it must know how to load and find the
first executable statement, etc. )
 Macintosh files have two forks - a resource fork, and a data fork. The resource fork contains
information relating to the UI, such as icons and button images, and can be modified
independently of the data fork, which contains the code or data as appropriate.

Internal File Structure:

 Internally, locating an offset within a file can be complicated for the operating system. It is unlikely
that the physical record size will exactly match the length of the desired logical record.
 Logical records may even vary in length. Packing a number of logical records into physical blocks is a
common solution to this problem.
 The logical record size, physical block size, and packing technique determine how many logical records
are in each physical block.
 The packing can be done either by the user's application program or by the operating system.
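For fixed-length records the packing arithmetic is simple; the 512-byte block and 100-byte record below are illustrative values, not anything mandated by the OS:

```python
def records_per_block(block_size, record_size):
    """How many fixed-length logical records pack into one physical block."""
    return block_size // record_size

def wasted_bytes(block_size, record_size):
    """Internal fragmentation left at the end of each packed block."""
    return block_size % record_size

# 512-byte blocks holding 100-byte records: 5 records fit, 12 bytes are wasted.
```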

4.What is a directory? Outline a tree structured directory structure and an acycle graph directory
structure with appropriate example.(April/May-2023)

A storage device can be used in its entirety for a file system.

 It can also be subdivided for finer-grained control.


 For example, a disk can be partitioned into quarters, and each quarter can hold a separate file system.

Fig 4.7: Index file and Relative File


 A file system can be created on each of these parts of the disk. Any entity containing a file system is
generally known as a volume.
 Each volume that contains a file system must also contain information about the files in the system.
This information is kept in entries in a device directory or volume table of contents. The device
directory (more commonly known simply as the directory) records information—such as name,
location, size, and type—for all files on that volume shown in Fig 4.7.
 The directory can be viewed as a symbol table that translates file names into their directory entries
shown in Fig 4.8.


Fig 4.8: Partition of files


Storage Structure:

 A general-purpose computer system has multiple storage devices, and those devices can be sliced up
into volumes that hold file systems.
 Computer systems may have zero or more file systems, and the file systems maybe of varying types.
For example, a typical Solaris system may have dozens of file systems of a dozen different types, as
shown in the file system list in Figure 4.9.

Fig 4.9: Solaris File System

 tmpfs—a “temporary” file system that is created in volatile main memory and has its contents erased if
the system reboots or crashes
 objfs—a “virtual” file system (essentially an interface to the kernel that looks like a file system) that
gives debuggers access to kernel symbols
 ctfs—a virtual file system that maintains “contract” information to manage which processes start when
the system boots and must continue to run during operation
 lofs—a “loop back” file system that allows one file system to be accessed in place of another one
 procfs—a virtual file system that presents information on all processes as a file system
 ufs, zfs—general-purpose file systems

Directory:

A directory is a file-system cataloging structure that contains references to other computer files, and possibly other directories. On many computers, directories are known as folders, or drawers, analogous to a workbench or the traditional office filing cabinet.

Directory Overview
 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system

(i) Single-Level Directory:

• The simplest directory structure is the single-level directory. All files are contained in the same directory, which is easy to support and understand.
• A single-level directory has significant limitations, however, when the number of files increases or when the system has more than one user.
• Since all files are in the same directory, they must have unique names. If two users call their data file test, then the unique-name rule is violated.
• Even a single user on a single-level directory may find it difficult to remember the names of all the files as the number of files increases, as shown in Fig 4.10.

Fig 4.10: Single-Level Directory

(ii) Two-Level Directory:

• A single-level directory often leads to confusion of file names between different users. The standard solution is to create a separate directory for each user.
• In the two-level directory structure, each user has a separate user file directory (UFD). Each UFD has a similar structure but lists only the files of a single user.
• When a user job starts or a user logs in, the system's master file directory (MFD) is searched. The MFD is indexed by user name or account number, and each entry points to the UFD for that user.
• When a user refers to a particular file, only his own UFD is searched. Thus, different users may have files with the same name, as long as all the file names within each UFD are unique.
• To create a file for a user, the operating system searches only that user's UFD to ascertain whether another file of that name exists.
• To delete a file, the operating system confines its search to the local UFD; thus, it cannot accidentally delete another user's file that has the same name, as shown in Fig 4.11a.

Fig 4.11a: Two-Level Directory

(iii) Tree-Structured Directories

 An obvious extension to the two-tiered directory structure, and the one with which we are all most
familiar.
 Each user / process has the concept of a current directory from which all (relative) searches take
place.
 Files may be accessed using either absolute pathnames (relative to the root of the tree) or relative
pathnames (relative to the current directory.)

Path names can be of two types:

• An absolute path name begins at the root and follows a path down to the specified file, giving the directory names on the path.
• A relative path name defines a path from the current directory, as shown in Fig 4.11b.
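A sketch of how the two kinds of path name resolve to the same directory entry (the path components here are illustrative):

```python
def resolve(path, cwd="/"):
    """Resolve an absolute or relative path name to a canonical absolute path."""
    # Absolute names start at the root; relative names start at the current directory.
    parts = [] if path.startswith("/") else [p for p in cwd.split("/") if p]
    for component in path.split("/"):
        if component == "..":
            if parts:
                parts.pop()             # ".." climbs one level toward the root
        elif component not in ("", "."):
            parts.append(component)
    return "/" + "/".join(parts)

# An absolute name and a relative name can denote the same file:
# resolve("/spell/mail/prt") == resolve("prt", cwd="/spell/mail")
```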


Fig 4.11b: Tree-Structured Directories

(iv) Acyclic-Graph Directories:

A tree structure prohibits the sharing of files or directories. An acyclic graph allows directories to have
shared subdirectories and files.

 The same file or subdirectory may be in two different directories.


 An acyclic graph, that is, a graph with no cycles, is a natural generalization of the tree structured directory
scheme.
 A shared file (or directory) is not the same as two copies of the file. With two copies, each programmer can
view the copy rather than the original, but if one programmer changes the file, the changes will not appear
in the other's copy.

UNIX provides two types of links for implementing the acyclic-graph structure.

A hard link (usually just called a link) involves multiple directory entries that refer to the same file. Hard links are valid only for ordinary files in the same file system.

A symbolic link, that involves a special file, containing information about where to find the linked file.
Symbolic links may be used to link directories and/or files in other filesystems, as well as ordinary files in the
current filesystem shown in Fig 4.12.


Fig 4.12: Acyclic-Graph Directories

 Hard links require a reference count, or link count for each file, keeping track of how many directory
entries are currently referring to this file. Whenever one of the references is removed the link count is
reduced, and when it reaches zero, the disk space can be reclaimed.
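The link-count bookkeeping can be sketched like this (the in-memory structures stand in for real on-disk inodes; the names are made up for illustration):

```python
class Inode:
    """Stand-in for a file's on-disk record, carrying its hard-link count."""
    def __init__(self):
        self.link_count = 0

def link(directory, name, inode):
    """Add a directory entry referring to an existing file."""
    directory[name] = inode
    inode.link_count += 1

def unlink(directory, name):
    """Remove one entry; return True when the file's space can be reclaimed."""
    inode = directory.pop(name)
    inode.link_count -= 1
    return inode.link_count == 0

d = {}
shared = Inode()
link(d, "list", shared)
link(d, "count", shared)     # a second hard link to the same file
reclaim = unlink(d, "list")  # False: still reachable via "count"
```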

(v) General Graph Directory:

 One serious problem with using an acyclic-graph structure is ensuring that there are no cycles.
 If we start with a two-level directory and allow users to create subdirectories, a tree-structured
directory results shown in Fig 4.13.
 It should be fairly easy to see that simply adding new files and subdirectories to an existing tree
structured directory preserves the tree-structured nature.

Fig 4.13: General Graph Directory


5.Explain in detail about File-Sharing. (May/June 2017)

File sharing is very desirable for users who want to collaborate and to reduce the effort required to
achieve a computing goal.

Multiple Users:

 To implement sharing and protection, the system must maintain more file and directory attributes than
are needed on a single-user system.
 Although many approaches have been taken to meet this requirement, most systems have evolved to
use the concepts of file (or directory) owner (or user) and group.
 The owner is the user who can change attributes and grant access and who has the most control over the
file. The group attribute defines a subset of users who can share access to the file.

Remote File Systems:

• The first implemented method involves manually transferring files between machines via programs like ftp.
• The second major method uses a distributed file system (DFS), in which remote directories are visible from a local machine.
• In some ways, the third method, the World Wide Web, is a reversion to the first: a browser is needed to gain access to the remote files, and separate operations (essentially a wrapper for ftp) are used to transfer files.
• Anonymous access allows a user to transfer files without having an account on the remote system.

(a)The Client–Server Model:

 Remote file systems allow a computer to mount one or more file systems from one or more
remote machines.
 In this case, the machine containing the files is the server, and the machine seeking access to the
files is the client.
 The client–server relationship is common with networked machines.
 Client identification is more difficult.
 A client can be specified by a network name or other identifier, such as an IP address, but these
can be spoofed, or imitated.

(b).Distributed Information Systems:


 To make client–server systems easier to manage, distributed information systems, also known
as distributed naming services, provide unified access to the information needed for remote
computing.

 The domain name system (DNS) provides host-name-to-network-address translations for the
entire Internet.
 Sun Microsystems (now part of Oracle Corporation) introduced yellow pages (since renamed
network information service, or NIS), and most of the industry adopted its use.
 Microsoft’s common Internet file system (CIFS), network information is used in conjunction
with user authentication (user name and password) to create a network login that the server uses
to decide whether to allow or deny access to a requested file system.
 Microsoft uses active directory as a distributed naming structure to provide a single name space
for users.
 The industry is moving toward use of the lightweight directory-access protocol (LDAP) as a
secure distributed naming mechanism. In fact, active directory is based on LDAP.

(c ).Failure Modes:
 Local file systems can fail for a variety of reasons, including failure of the disk containing the file
system, corruption of the directory structure or other disk-management information (collectively
called metadata), disk-controller failure, cable failure, and host-adapter failure.
 Recovery from failure, some kind of state information may be maintained on both the client and
the server.
 In the situation where the server crashes but must recognize that it has remotely mounted exported
file systems and opened files, NFS takes a simple approach, implementing a stateless DFS.

Consistency Semantics:

Consistency semantics represent an important criterion for evaluating any file system that supports file
sharing. These semantics specify how multiple users of a system are to access a shared file simultaneously.

(a).UNIX Semantics
 Writes to an open file by a user are visible immediately to other users who have this file open.
 One mode of sharing allows users to share the pointer of current location into the file. Thus, the
advancing of the pointer by one user affects all sharing users. Here, a file has a single image that
interleaves all accesses, regardless of their origin.
(b).Session Semantics
The Andrew file system (OpenAFS) uses the following consistency semantics:
 Writes to an open file by a user are not visible immediately to other users that have the same file
open.
 Once a file is closed, the changes made to it are visible only in sessions starting later. Already open
instances of the file do not reflect these changes.
(c ).Immutable-Shared-Files Semantics
 A unique approach is that of immutable shared files.
 Once a file is declared as shared by its creator, it cannot be modified.


6. Write notes about the protection strategies provided for files. (or) Explain about File access
methods. [Nov/Dec 13] [May/June 17] [April/May-2021]

Files must be kept safe for reliability (against accidental damage), and protection (against
deliberate malicious access.) The former is usually managed with backup copies. This section discusses
the latter.

Types of Access:

 The following low-level operations are often controlled:


 Read - Read from the file.
 Write - Write or rewrite the file.
 Execute - Load the file into memory and execute it.
 Append - Write new information at the end of the file.
 Delete - Delete the file and free its space for possible reuse.
 List - List the name and attributes of the file.

Access Control:
One approach is to have complicated Access Control Lists, ACL, which specify exactly what access is allowed
or denied for specific users or groups.

To condense the length of the access-control list, many systems recognize three classifications of users in
connection with each file:

 Owner. The user who created the file is the owner.


 Group. A set of users who are sharing the file and need similar access is a group, or work group.
 Universe. All other users in the system constitute the universe.
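The three-class scheme above can be pictured with a UNIX-style 9-bit mode word. The sketch below is illustrative only; the `allowed()` helper and its bit layout are our assumptions, not the API of any particular system.

```c
#include <stdbool.h>

/* Hypothetical 9-bit protection mask, UNIX-style: three rwx bit
 * triples for owner, group, and universe (others), from the
 * most-significant triple down. */
enum { PERM_R = 4, PERM_W = 2, PERM_X = 1 };
enum who { UNIVERSE = 0, GROUP = 1, OWNER = 2 };

/* Return true if the requester's class is granted every bit in
 * 'want' (e.g. PERM_R | PERM_W). */
bool allowed(unsigned mode, enum who w, unsigned want)
{
    unsigned bits = (mode >> (w * 3)) & 7; /* pick the class's triple */
    return (bits & want) == want;
}
```

With mode 0754, for example, the owner gets rwx, the group r-x, and the universe read-only, mirroring the owner/group/universe classification described above.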
Access Methods:
Files store information. When it is used, this information must be accessed and read into computer memory.

The information in the file can be accessed in several ways. They are

 Sequential access
 Direct Access
 Other access methods

Sequential access:

The simplest access method is sequential access. Information in the file is processed in
order, one record after the other. This mode of access is by far the most common; for example, editors and
compilers usually access files in this fashion.
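Sequential access can be sketched as a read pointer that only moves forward. The in-memory "file" and `read_next()` below are illustrative stand-ins, not real file-system calls.

```c
/* A tape-like sequential file: records are returned strictly in
 * order, and each read advances the current-position pointer. */
struct seq_file {
    const int *records; /* the file's contents */
    int nrecs;          /* number of records */
    int pos;            /* current-position pointer */
};

/* Read the next record; returns 1 on success, 0 at end of file. */
int read_next(struct seq_file *f, int *out)
{
    if (f->pos >= f->nrecs)
        return 0;                /* end of file reached */
    *out = f->records[f->pos++]; /* fetch, then advance the pointer */
    return 1;
}
```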

Direct Access:

Another method is direct access (or relative access). Here, a file is made up of fixed-length
logical records that allow programs to read and write records rapidly in no particular order. The direct-
access method is based on a disk model of a file, since disks allow random access to any file block. For direct
access, the file is viewed as a numbered sequence of blocks or records.
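Because the records are fixed-length and numbered, the byte offset of any record is a single multiplication, which is exactly the property that makes direct access fast. A minimal sketch (the function name is ours):

```c
/* Direct (relative) access: record k of a file of fixed-length
 * records starts at a computable offset, so the system can seek
 * straight to it -- no scan through records 0..k-1 is needed. */
long record_offset(long k, long record_size)
{
    return k * record_size;
}
```

For example, with 512-byte records, record 7 begins at byte 3584.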

Other Access Methods:

Other access methods can be built on top of a direct-access method. These methods
generally involve the construction of an index for the file. The index, like an index in the back of a book,
contains pointers to the various blocks. To find a record in the file, we first search the index and then use the
pointer to access the file directly and to find the desired record.

Other Protection Approaches:

 Another approach to the protection problem is to associate a password with each file. Just as access to
the computer system is often controlled by a password, access to each file can be controlled in the same
way.
 If the passwords are chosen randomly and changed often, this scheme may be effective in limiting
access to a file.
 The use of passwords has a few disadvantages, however. First, the number of passwords that a user
needs to remember may become large, making the scheme impractical.

7. Briefly Explain File System Structure and File System Interface.(Nov/Dec-2023)

Disks provide most of the secondary storage on which file systems are maintained. Two characteristics make them
convenient for this purpose:
1. A disk can be rewritten in place; it is possible to read a block from the disk, modify the block, and write it back into the same place.
2. A disk can access directly any block of information it contains.

File systems:

 File systems provide efficient and convenient access to the disk by allowing data to be stored,
located, and retrieved easily.
 I/O Control consists of device drivers, special software programs ( often written in assembly )
which communicate with the devices by reading and writing special codes directly to and from
memory addresses corresponding to the controller card's registers shown in Fig 4.14.
 The basic file system level works directly with the device drivers in terms of retrieving and
storing raw blocks of data, without any consideration for what is in each block.
 The file organization module knows about files and their logical blocks, and how they map to
physical blocks on the disk.

 The logical file system deals with all of the meta data associated with a file everything about the
file except the data itself.

Fig 4.14: File systems

 The file-organization module knows about files and their logical blocks, as well as physical blocks. By
knowing the type of file allocation used and the location of the file, the file-organization module can
translate logical block addresses to physical block addresses for the basic file system to transfer. Each file’s
logical blocks are numbered from 0 (or 1) through N.

The logical file system manages metadata information. Metadata includes all of the file-system structure
except the actual data (or contents of the files). A file control block (FCB) (an inode in UNIX file systems)
contains information about the file, including ownership, permissions, and location of the file contents.

 Each operating system has one or more disk-based file systems. UNIX uses the UNIX file system (UFS),
which is based on the Berkeley Fast File System (FFS). Windows supports the disk file-system formats
FAT, FAT32, and NTFS (the Windows NT File System), as well as CD-ROM and DVD file-system
formats.

 Linux supports over forty different file systems, the standard Linux file system is known as the extended
file system, with the most common versions being ext3 and ext4. There are also distributed file systems in
which a file system on a server is mounted by one or more client computers across a network.

FILE SYSTEM INTERFACE :

A file's attributes vary from one operating system to another but typically consist of these:

 Name: Name is the symbolic file name and is the only information kept in human-readable form.

 Identifier: This unique tag is a number that identifies the file within the file system; it is in the non-
human-readable form of the file.
 Type: This information is needed for systems that support different types of files or their formats.
 Location: This information is a pointer to a device pointing to the file's location on the device where it
is stored.
 Size: The current file size (which is in bytes, words, etc.), possibly the maximum allowed size, gets
included in this attribute.
 Protection: Access-control information establishes who can do the reading, writing, executing, etc.
 Date, Time, and user identification: This information might be kept for creating the file, its last
modification, and its previous use. These data might be helpful in the field of protection, security, and
monitoring its usage.

8. Explain File System Implementation. (or) In a variable partition scheme, the operating system has to
keep track of allocated and free space. Suggest a means of achieving this. Describe the effects of new
allocations and process terminations in your suggested scheme. (Nov/Dec-2021)

Several on-disk and in-memory structures are used to implement a file system. These vary depending on
the operating system and the file system, but some general principles apply.

 A boot control block can contain information needed by the system to boot an operating system from that partition.
If the disk does not contain an operating system, this block can be empty. It is typically the first block of a
partition. In UFS, this is called the boot block; in NTFS, it is the partition boot sector.

 A partition control block contains partition details, such as the number of blocks in the partition, size of the
blocks, free-block count and free-block pointers, and free FCB count and FCB pointers. In UFS this is
called a superblock; in NTFS, it is the Master File Table.

 A directory structure is used to organize the files. An FCB contains many of the file's details, including file
permissions, ownership, size, and location of the data blocks. In UFS this is called the inode. In NTFS, this
information is actually stored within the Master File Table, which uses a relational database structure, with
a row per file.

The in-memory information is used for both file-system management and performance improvement via
caching. The structures can include:

 An in-memory partition table containing information about each mounted partition.


 The system-wide open-file table contains a copy of the FCB of each open file, as well as other
information.
 The per-process open-file table contains a pointer to the appropriate entry in the system-wide open-
file table, as well as other information, as shown in Fig 4.15.


Fig 4.15: File Types

There are some of the interactions of file system components when files are created and/or used:

 When a new file is created, a new FCB is allocated and filled out with important information
regarding the new file. The appropriate directory is modified with the new file name and FCB
information.
 When a file is accessed during a program, the open( ) system call reads in the FCB information
from disk and stores it in the system-wide open-file table, as shown in Fig 4.16.
 An entry is added to the per-process open file table referencing the system-wide table, and an
index into the per-process table is returned by the open ( ) system call. UNIX refers to this index
as a file descriptor, and Windows refers to it as a file handle.

Fig 4.16: In Memory File System

Partitions and Mounting:

 Physical disks are commonly divided into smaller units called partitions.

Partitions can either be used as raw devices: Raw partitions are generally used for swap space, and may
also be used for certain programs such as databases that choose to manage their own disk storage system.

 Partitions containing file systems can generally only be accessed using the file system structure by
ordinary users, but can often be accessed as a raw device also by root.
 The boot block is accessed as part of a raw partition, by the boot program prior to any operating
system being loaded.

The root partition contains the OS kernel and at least the key portions of the OS needed to complete the boot
process. At boot time the root partition is mounted, and control is transferred from the boot program to the
kernel found there.

Virtual File Systems:

Virtual File Systems, VFS, provide a common interface to multiple different filesystem types. In
addition, VFS provides a unique identifier (vnode) for files across the entire space, including across all
filesystems of different types, as shown in Fig 4.17.

(UNIX inodes are unique only across a single filesystem, and certainly do not carry across networked file
systems.)

The VFS in Linux is based upon four key object types:

 The inode object, representing an individual file


 The file object, representing an open file.
 The superblock object, representing a filesystem.
 The dentry object, representing a directory entry.

An abbreviated API for some of the operations for the file object includes:
a. int open(. . .)—Open a file.
b. int close(. . .)—Close an already-open file.
c. ssize_t read(. . .)—Read from a file.
d. ssize_t write(. . .)—Write to a file.
e. int mmap(. . .)—Memory-map a file.

Fig 4.17: Virtual File Systems


Linux VFS provides a set of common functionalities for each filesystem, using function pointers accessed
through a table. The same functionality is accessed through the same table position for all filesystem
types, though the actual functions pointed to by the pointers may be filesystem-specific. See
/usr/include/linux/fs.h for full details. Common operations provided include open ( ), read( ), write( ), and
mmap( ).

9. Explain the Various File Allocation Methods. (or) Explain about contiguous allocation and
linked allocation with example and diagrams. [April/May-2021,23] [Nov/Dec-2021]

There are three major methods of storing files on disks: contiguous, linked, and indexed.

(i) Contiguous Allocation

 Contiguous Allocation requires that all blocks of a file be kept together contiguously.
 Performance is very fast, because reading successive blocks of the same file generally requires no
movement of the disk heads, or at most one small step to the next adjacent cylinder.
 Storage allocation involves the same issues discussed earlier for the allocation of contiguous
blocks of memory, as shown in Fig 4.18.
 Problems can arise when files grow, or if the exact size of a file is unknown at creation time:
o Over-estimation of the file's final size increases external fragmentation and wastes disk
space.
o If a file grows slowly over a long time period and the total final space must be allocated
initially, then a lot of space becomes unusable before the file fills the space.

Fig 4.18: Contiguous Allocation

 A variation is to allocate file space in large contiguous chunks, called extents. When a file outgrows its
original extent, then an additional one is allocated.


Advantages:

 Both the Sequential and Direct Accesses are supported by this. For direct access, the address of the kth
block of the file which starts at block b can easily be obtained as (b+k).
 This is extremely fast since the number of seeks is minimal because of the contiguous allocation of file
blocks.
Disadvantages:
 This method suffers from both internal and external fragmentation. This makes it inefficient in terms of
memory utilization.
 Increasing file size is difficult because it depends on the availability of contiguous memory at a
particular instance.
(ii) Linked Allocation

 Disk files can be stored as linked lists, at the expense of the storage space consumed by each pointer
(e.g., a block may hold 508 bytes of data instead of 512).
 Linked allocation involves no external fragmentation, does not require pre-known file sizes, and allows
files to grow dynamically at any time.
 Allocating clusters of blocks reduces the space wasted by pointers, at the cost of internal fragmentation,
as shown in Fig 4.19.

Fig 4.19: Linked Allocation

 The File Allocation Table, FAT, used by DOS is a variation of linked allocation, where all the links
are stored in a separate table at the beginning of the disk. The benefit of this approach is that the FAT
table can be cached in memory, greatly improving random access speeds shown in Fig 4.20.


Fig 4.20: File Allocation Table

Advantages:
 This is very flexible in terms of file size. File size can be increased easily since the system does not
have to look for a contiguous chunk of memory.
 This method does not suffer from external fragmentation. This makes it relatively better in terms of
memory utilization.
Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large number of seeks are needed to
access every block individually. This makes linked allocation slower.
 It does not support random or direct access
 Pointers required in the linked allocation incur some extra overhead
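The FAT variant above can be sketched as a table where entry b names the block that follows b in the file. Walking the chain shows why direct access to logical block i costs i lookups (or, without a cached FAT, i disk reads). The names here are ours:

```c
#define FAT_EOF (-1) /* chain terminator in this sketch */

/* Map logical block i of a file that starts at block 'start' to
 * its physical block by following the FAT chain: one table
 * lookup per hop. */
int logical_to_physical(const int *fat, int start, int i)
{
    int b = start;
    while (i-- > 0 && b != FAT_EOF)
        b = fat[b];
    return b;
}
```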

(iii) Indexed Allocation

 Indexed Allocation combines all of the indexes for accessing each file into a common block, as
opposed to spreading them all over the disk or storing them in a FAT table.
 Some disk space is wasted ( relative to linked lists or FAT tables ) because an entire index block must
be allocated for each file, regardless of how many data blocks the file contains.
 This leads to questions of how big the index block should be, and how it should be implemented, as shown
in Fig 4.21.

Fig 4.21: Indexed Allocation


There are several approaches:

Linked Scheme - An index block is one disk block, which can be read and written in a single disk operation.
The first index block contains some header information, the first N block addresses, and if necessary a pointer
to additional linked index blocks.

Multi-Level Index - The first index block contains a set of pointers to secondary index blocks, which in turn
contain pointers to the actual data blocks.

Advantages:
 This supports direct access to the blocks occupied by the file and therefore provides fast access to the
file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than that of linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation would keep one entire
block (the index block) for the pointers, which is inefficient in terms of memory utilization.

Combined Scheme :

Another alternative, used in UNIX-based file systems, is to keep the first, say, 15 pointers of the index block in
the file’s inode.
 The first 12 of these pointers point to direct blocks; that is, they contain addresses of blocks that contain
data of the file.


 Thus, the data for small files (of no more than 12 blocks) do not need a separate index block. If the block
size is 4 KB, then up to 48 KB of data can be accessed directly.
 The next three pointers point to indirect blocks. The first points to a single indirect block, which is an
index block containing not data but the addresses of blocks that do contain data shown in Fig 4.22.
 The second points to a double indirect block, which contains the address of a block that contains the
addresses of blocks that contain pointers to the actual data blocks.
 The last pointer contains the address of a triple indirect block.
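The capacity of this combined scheme is easy to total up. The sketch below assumes 4 KB blocks and 4-byte block pointers (so one index block holds 1024 pointers); the function name is ours.

```c
/* Maximum file size under the combined scheme: 12 direct pointers
 * plus single, double, and triple indirect blocks. Returns bytes. */
unsigned long long max_file_size(unsigned long long block_size,
                                 unsigned long long ptr_size)
{
    unsigned long long per = block_size / ptr_size; /* pointers per index block */
    return 12 * block_size                 /* direct blocks   */
         + per * block_size                /* single indirect */
         + per * per * block_size          /* double indirect */
         + per * per * per * block_size;   /* triple indirect */
}
```

With 4 KB blocks, the direct portion alone is 12 × 4 KB = 48 KB, matching the figure quoted above; the triple-indirect term pushes the total into the terabytes.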


Fig 4.22: Combined Scheme

Performance:

A system with mostly sequential access should not use the same method as a system with mostly random
access. For any type of access, contiguous allocation requires only one access to get a disk block.

 Since we can easily keep the initial address of the file in memory, we can calculate immediately the
disk address of the ith block (or the next block) and read it directly.
 For linked allocation, we can also keep the address of the next block in memory and read it directly.
 This method is fine for sequential access; for direct access, however, an access to the ith block might
require i disk reads.

10. Write about the Free Space Management. [Apr/May 2010,12, 13& 15] [Nov/Dec 2015 &17]

Another important aspect of disk management is keeping track of and allocating free space.

(i) Bit Vector:

 One simple approach is to use a bit map or bit vector, in which each bit represents a disk block,
set to 1 if free or 0 if allocated.

For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are free, and
the rest of the blocks are allocated. The free-space bit map would be

0011110011111100011000000111………
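The bit vector can be sketched directly; bit i is 1 when block i is free, matching the convention above. Building the map from the example's free list reproduces the layout shown (the helper names are ours):

```c
/* Free-space bit vector: one bit per block, 1 = free, 0 = allocated. */
void set_free(unsigned char *map, int i) { map[i / 8] |=  (1u << (i % 8)); }
void set_used(unsigned char *map, int i) { map[i / 8] &= ~(1u << (i % 8)); }
int  is_free(const unsigned char *map, int i)
{
    return (map[i / 8] >> (i % 8)) & 1;
}

/* Scan for the first free block, as an allocator would. */
int find_free(const unsigned char *map, int nblocks)
{
    for (int i = 0; i < nblocks; i++)
        if (is_free(map, i))
            return i;
    return -1; /* no free blocks */
}
```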


(ii) Linked List:

 A linked list can also be used to keep track of all free blocks.
 Traversing the list and/or finding a contiguous block of a given size are not easy, but fortunately are not
frequently needed operations. Generally, the system just adds and removes single blocks from the
beginning of the list shown in Fig 4.23.
 The FAT table keeps track of the free list as just one more linked list on the table.

Fig 4.23: Indexed Free Space

(iii) Grouping:
 A modification of the free-list approach is to store the addresses of n free blocks in the first free block. The
first n-1 of these blocks are actually free.
 The last block contains the addresses of another n free block, and so on.

(iv) Counting:

 Another approach is to take advantage of the fact that, generally, several contiguous blocks may be
allocated or freed simultaneously, particularly when space is allocated with the contiguous-allocation
algorithm or through clustering.

(v) Space Maps:

 Oracle’s ZFS file system (found in Solaris and other operating systems) was designed to encompass huge
numbers of files, directories, and even file systems (in ZFS, we can create file-system hierarchies).
 In its management of free space, ZFS uses a combination of techniques to control the size of data structures
and minimize the I/O needed to manage those structures.
 First, ZFS creates metaslabs to divide the space on the device into chunks of manageable size. A given
volume may contain hundreds of metaslabs.


11. Explain the Directory Implementation in detail.

Directory Implementation:
The selection of directory-allocation and directory-management algorithms has a large effect on the
efficiency, performance, and reliability of the file system.

Linear List:

 To create a new file, we must first search the directory to be sure that no existing file has the same
name. Then, we add a new entry at the end of the directory.
 To delete a file, we search the directory for the named file, then release the space allocated to it. To
reuse the directory entry, we can do one of several things.
Disadvantage
 A linear list of directory entries is the linear search to find a file is the major disadvantage.
 Directory information is used frequently, and users would notice a slow implementation of
access to it.
Advantage:
An advantage of a sorted list is that a sorted directory listing can be produced without a separate sort step.

Hash Table:
 Another data structure that has been used for a file directory is a hash table. In this method, a linear list
stores the directory entries, but a hash data structure is also used.
 The hash table takes a value computed from the file name and returns a pointer to the file name
in the linear list.
Disadvantage:
The major difficulties with a hash table are its generally fixed size and the dependence of the hash function
on that size.
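The hash-table directory can be sketched as a fixed-size array of bucket chains keyed by a string hash of the file name. The djb2-style hash and the structure names here are our illustrative choices, not a specific system's format:

```c
#include <string.h>

#define NBUCKETS 64 /* the fixed table size the text warns about */

/* djb2-style string hash, reduced to a bucket index. */
unsigned dir_hash(const char *name)
{
    unsigned h = 5381;
    while (*name)
        h = h * 33 + (unsigned char)*name++;
    return h % NBUCKETS;
}

/* A directory entry chained within its hash bucket; 'fcb' stands
 * in for the pointer back into the linear list of entries. */
struct dir_entry {
    const char *name;
    int fcb;
    struct dir_entry *next;
};

/* Hash the name, then scan only that bucket's chain. */
struct dir_entry *dir_lookup(struct dir_entry **table, const char *name)
{
    for (struct dir_entry *e = table[dir_hash(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;
}
```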

12. Explain in detail about I/O Hardware. [Nov/Dec 2010] [Apr/May 2013]

A device communicates with a computer system by sending signals over a cable or even through the air. The
device communicates with the machine via a connection point, or port—for example, a serial port. If devices
share a common set of wires, the connection is called a bus. A typical PC bus structure appears in Figure 4.24.


Fig 4.24: Bus Structure

 A PCI bus (the common PC system bus) connects the processor–memory subsystem to fast devices,
and an expansion bus connects relatively slow devices, such as the keyboard and serial and USB ports.

 In the upper-right portion of the figure, four disks are connected together on a Small Computer
System Interface (SCSI) bus plugged into a SCSI controller.
 Other common buses used to interconnect the main parts of a computer include PCI Express (PCIe), with
throughput of up to 16 GB per second, and HyperTransport, with throughput of up to 25 GB per
second.

A controller is a collection of electronics that can operate a port, a bus, or a device.

 A serial-port controller is a simple device controller. It is a single chip (or portion of a chip) in the
computer that controls the signals on the wires of a serial port.
 A SCSI bus controller is not simple. Because the SCSI protocol is complex, the SCSI bus controller is
often implemented as a separate circuit board (or a host adapter) that plugs into the computer.

Registers may be one to four bytes in size, and may typically include (a subset of) the following four:

 The data-in register is read by the host to get input from the device.
 The data-out register is written by the host to send output.
 The status register has bits read by the host to ascertain the status of the device, such as idle,
ready for input, busy, error, transaction complete, etc.
 The control register has bits written by the host to issue commands or to change settings of the
device, such as parity checking, word length, or full- versus half-duplex operation, as shown in
Table 4.2.


Table 4.2:Address Range

Polling:

 One simple means of device handshaking involves polling:

1. The host repeatedly checks the busy bit on the device until it becomes clear.
2. The host writes a byte of data into the data-out register, and sets the write bit in the command
register (in either order).
3. The host sets the command ready bit in the command register to notify the device of the
pending command.
4. When the device controller sees the command-ready bit set, it first sets the busy bit.
5. Then the device controller reads the command register, sees the write bit set, reads the byte of
data from the data-out register, and outputs the byte of data.
6. The device controller then clears the error bit in the status register, the command-ready bit, and
finally clears the busy bit, signaling the completion of the operation.
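The six steps above can be simulated in plain C, with the controller's registers modeled as struct fields. Everything here is an illustrative mock: real driver code would read and write memory-mapped hardware registers instead.

```c
enum { BUSY = 1, ERROR = 2 };            /* status-register bits  */
enum { CMD_READY = 1, WRITE_BIT = 2 };   /* command-register bits */

struct controller {
    unsigned status;         /* read by the host (step 1)       */
    unsigned command;        /* written by the host (steps 2-3) */
    unsigned char data_out;  /* host -> device data register    */
    unsigned char last_byte; /* what the mock device output     */
};

/* Steps 1-3: poll busy, load data-out, raise command-ready. */
void host_write(struct controller *c, unsigned char b)
{
    while (c->status & BUSY)
        ;                        /* step 1: busy-wait */
    c->data_out = b;             /* step 2 */
    c->command |= WRITE_BIT;
    c->command |= CMD_READY;     /* step 3 */
}

/* Steps 4-6: the controller sees command-ready and performs the I/O. */
void device_service(struct controller *c)
{
    if (!(c->command & CMD_READY))
        return;
    c->status |= BUSY;                 /* step 4 */
    if (c->command & WRITE_BIT)
        c->last_byte = c->data_out;    /* step 5: output the byte */
    c->status &= ~ERROR;               /* step 6: clear error,    */
    c->command &= ~CMD_READY;          /*   command-ready,        */
    c->status &= ~BUSY;                /*   and busy              */
}
```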

Interrupts:

 Interrupts allow devices to notify the CPU when they have data to transfer or when an operation is
complete, allowing the CPU to perform other duties when no I/O transfers need its immediate attention.
 The CPU has an interrupt-request line that is sensed after every instruction.
o A device's controller raises an interrupt by asserting a signal on the interrupt request line.

o The CPU then performs a state save, and transfers control to the interrupt handler routine at a
fixed address in memory. (The CPU catches the interrupt and dispatches the interrupt handler.)
o The interrupt handler determines the cause of the interrupt, performs the necessary processing,
performs a state restore, and executes a return from interrupt instruction to return control to the

CPU is shown in Figure 4.25

Fig 4.25: Interrupt-driven I/O procedure

The above description is adequate for simple interrupt-driven I/O, but there are three needs in modern
computing which complicate the picture:

1. The need to defer interrupt handling during critical processing,


2. The need to determine which interrupt handler to invoke, without having to poll all devices to
see which one needs attention, and
3. The need for multi-level interrupts, so the system can differentiate between high- and low-
priority interrupts for proper response.

These issues are handled in modern computer architectures with interrupt-controller hardware.

o Most CPUs now have two interrupt-request lines: One that is non-maskable for critical error
conditions and one that is maskable, that the CPU can temporarily ignore during critical
processing.
o The interrupt mechanism accepts an address, which is usually one of a small set of numbers for
an offset into a table called the interrupt vector. This table (usually located at physical address
zero?) holds the addresses of routines prepared to process specific interrupts.
o The number of possible interrupt handlers still exceeds the range of defined interrupt numbers,
so multiple handlers can be interrupt chained. Effectively the addresses held in the interrupt
vectors are the head pointers for linked-lists of interrupt handlers.

Table 4.3 shows the Intel Pentium interrupt vector. Interrupts 0 to 31 are non-maskable and reserved for
serious hardware and other errors. Maskable interrupts, including normal device I/O interrupts, begin at
interrupt 32.


Table 4.3: Intel Pentium interrupt vector

 At boot time the system determines which devices are present, and loads the appropriate handler
addresses into the interrupt table.
 During operation, devices signal errors or the completion of commands via interrupts.
 Exceptions, such as dividing by zero, invalid memory accesses, or attempts to access kernel mode
instructions can be signaled via interrupts.

Direct Memory Access

Rather than having the main CPU perform programmed I/O one byte at a time, this work can be off-loaded to
a special processor known as the Direct Memory Access (DMA) controller.

 The host issues a command to the DMA controller, indicating the location where the data is located, the
location where the data is to be transferred to, and the number of bytes of data to transfer. The DMA
controller handles the data transfer, and then interrupts the CPU when the transfer is complete.
 A simple DMA controller is a standard component in modern PCs, and many bus-mastering I/O cards
contain their own DMA hardware.
 Handshaking between DMA controllers and their devices is accomplished through two wires, called the
DMA-request and DMA-acknowledge wires, as shown in Fig 4.26.


Fig 4.26: Direct Memory Access

13. Describe the important concepts of Application I/O Interface. [Nov/Dec 2011]

 Each general kind is accessed through a standardized set of functions—an interface.


 User application access to a wide variety of different devices is accomplished through layering and
through encapsulating all of the device-specific code into device drivers, while application layers are
presented with a common interface for all (or at least large general categories of) devices, as shown in Fig
4.27.

Fig 4.27: Kernel

 Devices differ on many different dimensions, as outlined in Table 4.4:

 Character-stream or block. A character-stream device transfers bytes one by one, whereas a block
device transfers a block of bytes as a unit.
 Sequential or random access. A sequential device transfers data in a fixed order determined by the
device, whereas the user of a random-access device can instruct the device to seek to any of the
available data storage locations.
 Synchronous or asynchronous. A synchronous device performs data transfers with predictable
response times, in coordination with other aspects of the system. An asynchronous device exhibits
irregular or unpredictable response times not coordinated with other computer events.
 Sharable or dedicated. A sharable device can be used concurrently by several processes or threads; a
dedicated device cannot.

Table 4.4: Different dimensions

 Speed of operation. Device speeds range from a few bytes per second to a few gigabytes per second.
 Read–write, read only, or write only. Some devices perform both input and output, but others support
only one data transfer direction.
 Most operating systems also have an escape (or back door) that transparently passes arbitrary
commands from an application to a device driver.
 In UNIX this is the ioctl( ) system call (I/O Control). ioctl( ) takes three arguments: the file
descriptor for the device driver being accessed, an integer indicating the desired function to be
performed, and an address used for communicating or transferring additional information.

Block and Character Devices:

Block devices are accessed a block at a time, and are indicated by a "b" as the first character in a long listing on
UNIX systems. Operations supported include read ( ), write ( ), and seek ( ).

o Accessing blocks on a hard drive directly (without going through the file system structure) is
called raw I/O, and can speed up certain operations by bypassing the buffering and locking
normally conducted by the OS. (It then becomes the application's responsibility to manage those
issues.)
o A new alternative is direct I/O, which uses the normal file system access, but which disables
buffering and locking operations.

Memory-mapped file I/O can be layered on top of block-device drivers.

o Rather than reading in the entire file, it is mapped to a range of memory addresses, and then
paged into memory as needed using the virtual memory system.
o Access to the file is then accomplished through normal memory accesses, rather than through
read ( ) and write ( ) system calls. This approach is commonly used for executable program
code.
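As a user-level illustration of this idea (a minimal sketch, not tied to any particular OS), Python's mmap module maps a file into memory so that reads and writes become ordinary memory accesses rather than read()/write() calls. The file name demo.bin is an illustrative assumption:

```python
import mmap
import os

# Create a small file to map; "demo.bin" is just an illustrative name.
with open("demo.bin", "wb") as f:
    f.write(b"hello, mapped world")

with open("demo.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the entire file
        first = mm[:5]                     # ordinary memory access, no read()
        mm[0:5] = b"HELLO"                 # writes propagate back to the file

with open("demo.bin", "rb") as f:
    data = f.read()
os.remove("demo.bin")

print(first, data)
```

After the mapping is closed, rereading the file shows that the slice assignment reached the underlying storage.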

Character-stream devices are accessed one byte at a time, and are indicated by a "c" in UNIX long listings. Supported operations include get() and put(), with more advanced functionality, such as reading an entire line, supported by higher-level library routines.

Network Devices:

 Because network access is inherently different from local disk access, most systems provide a
separate interface for network devices.
 One common and popular interface is the socket interface, which acts like a cable or pipeline
connecting two networked entities. Data can be put into the socket at one end, and read out sequentially
at the other end. Sockets are normally full-duplex, allowing for bi-directional data transfer.
 The select ( ) system call allows servers (or other applications) to identify sockets which have data
waiting, without having to poll all available sockets.
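A minimal sketch of the idea, using Python's select module with a socketpair standing in for two networked endpoints (the 1-second timeout is an arbitrary choice):

```python
import select
import socket

# A socketpair stands in for two connected network endpoints (full-duplex).
a, b = socket.socketpair()
b.sendall(b"ping")

# select() reports which sockets have data waiting, without polling each one.
readable, _, _ = select.select([a, b], [], [], 1.0)
msg = readable[0].recv(4)
a.close()
b.close()
print(msg)
```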

Clocks and Timers:

 Three types of time services are commonly needed in modern systems:


o Get the current time of day.
o Get the elapsed time (system or wall clock) since a previous event.
o Set a timer to trigger event X at time T.

A programmable interrupt timer, PIT can be used to trigger operations and to measure elapsed time. It can be
set to trigger an interrupt at a specific future time, or to trigger interrupts periodically on a regular basis.

o The scheduler uses a PIT to trigger interrupts for ending time slices.
o The disk system may use a PIT to schedule periodic maintenance cleanup, such as flushing
buffers to disk.
o Networks use a PIT to abort or repeat operations that are taking too long to complete, e.g., resending packets if an acknowledgement is not received before the timer goes off.
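These time services have rough user-level analogues. The sketch below is a hedged illustration rather than a kernel mechanism: a one-shot timer callback stands in for "trigger event X at time T", and a monotonic clock measures elapsed time (the 0.05 s interval is arbitrary):

```python
import threading
import time

fired = threading.Event()
t = threading.Timer(0.05, fired.set)   # "trigger event X at time T"

start = time.monotonic()               # reference point for elapsed time
t.start()
fired.wait(timeout=2.0)                # wait for the timer event
elapsed = time.monotonic() - start     # elapsed time since the reference point

print(fired.is_set(), elapsed >= 0.04)
```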

Nonblocking and Asynchronous I/O:

 When an application issues a blocking system call, the execution of the application is suspended. The
application is moved from the operating system’s run queue to a wait queue. After the system call
completes, the application is moved back to the run queue, where it is eligible to resume execution.

 Some user-level processes need nonblocking I/O. One example is a user interface that receives
keyboard and mouse input while processing and displaying data on the screen.
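A small sketch of the difference, using Python's os.set_blocking() on a pipe: with blocking disabled, reading an empty pipe returns immediately with BlockingIOError instead of suspending the caller:

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)          # disable blocking on the read end

try:
    os.read(r, 1)                  # nothing has been written yet...
    outcome = "data"
except BlockingIOError:
    outcome = "would block"        # ...so the call returns at once

os.write(w, b"x")
data = os.read(r, 1)               # now data is available, so read succeeds
os.close(r)
os.close(w)
print(outcome, data)
```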

Vectored I/O:

Some operating systems provide another major variation of I/O via their application interfaces.

 Vectored I/O allows one system call to perform multiple I/O operations involving multiple locations.
 For example, the UNIX readv system call accepts a vector of multiple buffers and either reads from a
source to that vector or writes from that vector to a destination.
 The same transfer could be caused by several individual invocations of system calls, but this scatter–
gather method is useful for a variety of reasons.
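A minimal sketch of scatter-gather I/O, using Python's os.writev()/os.readv() wrappers over the corresponding UNIX calls (the buffer sizes are chosen to split the message):

```python
import os

r, w = os.pipe()

# Gather: one writev() call transfers three separate buffers to the pipe.
sent = os.writev(w, [b"one ", b"two ", b"three"])

# Scatter: one readv() call fills two separate buffers from the pipe.
buf1, buf2 = bytearray(4), bytearray(9)
got = os.readv(r, [buf1, buf2])
os.close(r)
os.close(w)
print(sent, got, bytes(buf1), bytes(buf2))
```

A single call moves all 13 bytes that would otherwise take several read()/write() invocations.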

14. Explain the components of kernel I/O structure with a diagram.[April/May-2021]

I/O Scheduling:
To schedule a set of I/O requests means to determine a good order in which to execute them.
 Scheduling can improve overall system performance, can share device access fairly among processes,
and can reduce the average waiting time for I/O to complete.

Suppose that a disk arm is near the beginning of a disk and that three applications issue blocking read calls to that disk.
 Application 1 requests a block near the end of the disk,
 Application 2 requests one near the beginning,
 Application 3 requests one in the middle of the disk. The operating system can reduce
the distance that the disk arm travels by serving the applications in the order 2, 3, 1.

Rearranging the order of service in this way is the essence of I/O scheduling.
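The example above can be checked with a short sketch. The cylinder numbers (head at 5; requests at 190, 10, 100 on a 200-cylinder disk) are illustrative assumptions standing in for "near the beginning", "near the end", "near the beginning", and "in the middle":

```python
def travel(start, requests):
    """Total arm travel when requests are served in the given order."""
    pos, total = start, 0
    for cyl in requests:
        total += abs(cyl - pos)    # distance moved for this request
        pos = cyl
    return total

arrival_order = travel(5, [190, 10, 100])   # applications 1, 2, 3 as they arrived
scheduled     = travel(5, [10, 100, 190])   # served in the order 2, 3, 1
print(arrival_order, scheduled)
```

In this made-up instance, serving in the order 2, 3, 1 cuts arm travel from 455 to 185 cylinders.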
 When an application issues a blocking I/O system call, the request is placed on the queue for that device.
 The I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the
average response time experienced by applications. The operating system might attach the wait queue
to a device-status table.
 The kernel manages this table, which contains an entry for each I/O device. Each table entry indicates the device’s type, address, and state (not functioning, idle, or busy).
 If the device is busy with a request, the type of request and other parameters will be stored in the table
entry for that device.
 Scheduling I/O operations is one way in which the I/O subsystem improves the efficiency of the
computer shown in Fig 4.28.
 Another way is by using storage space in main memory or on disk via buffering, caching, and spooling.


Fig 4.28: Kernel I/O Subsystem

Buffering

 Buffering of I/O is performed for (at least) three major reasons:


1. Speed differences between two devices. A slow device may write data into a buffer, and when the buffer is full, the entire buffer is sent to the fast device all at once, shown in Fig 4.29. So that the slow device still has somewhere to write while this is going on, a second buffer is used, and the two buffers alternate as each becomes full. This is known as double buffering.
2. Data-transfer size differences. Buffers are used, particularly in networking systems, to break messages up into smaller packets for transfer, and then for reassembly at the receiving side.
3. To support copy semantics. For example, when an application makes a request for a disk write, the data is copied from the user's memory area into a kernel buffer.

Fig 4.29: Sun Devices Data
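The alternating-buffer scheme can be sketched in a few lines. This toy model (capacity 4, ten items) only illustrates the hand-off pattern, not real device timing:

```python
def double_buffered(items, capacity=4):
    """Producer fills one buffer while full buffers are handed to the consumer."""
    buffers = [[], []]
    active = 0                     # buffer the slow producer writes into
    handed_off = []
    for item in items:
        buffers[active].append(item)
        if len(buffers[active]) == capacity:
            handed_off.append(list(buffers[active]))   # fast device drains it
            buffers[active].clear()
            active = 1 - active    # producer switches to the other buffer
    if buffers[active]:
        handed_off.append(list(buffers[active]))       # final partial buffer
    return handed_off

batches = double_buffered(range(10))
print(batches)
```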

Caching:
 A cache is a region of fast memory that holds copies of data. Access to the cached copy is more
efficient than access to the original.
 Caching involves keeping a copy of data in a faster-access location than where the data is normally
stored.
 Buffering and caching are very similar, except that a buffer may hold the only copy of a given data item, whereas a cache is just a duplicate copy of some other data stored elsewhere.

Spooling and Device Reservation:

 A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved
data streams.
 Although a printer can serve only one job at a time, several applications may wish to print their output
concurrently, without having their output mixed together. The operating system solves this problem by
intercepting all output to the printer.
 Each application’s output is spooled to a separate disk file. When an application finishes printing, the
spooling system queues the corresponding spool file for output to the printer.
 The spooling system copies the queued spool files to the printer one at a time.

Error Handling
 An operating system that uses protected memory can guard against many kinds of hardware and
application errors, so that a complete system failure is not the usual result of each minor mechanical
malfunction.
 A device failure is reported by the SCSI protocol in three levels of detail: a sense key that identifies the general nature of the failure, such as a hardware error or an illegal request;
 An additional sense code that states the category of failure, such as a bad command parameter or a
self-test failure;
 An additional sense-code qualifier that gives even more detail, such as which command parameter
was in error or which hardware subsystem failed its self-test.

I/O Protection:

 The I/O system must protect against either accidental or deliberate erroneous I/O.
 User applications are not allowed to perform I/O in user mode; all I/O requests are handled through system calls that must be performed in kernel mode, as shown in Fig 4.30.
 Memory mapped areas and I/O ports must be protected by the memory management system, but access
to these areas cannot be totally denied to user programs.


Fig 4.30: System Call


Kernel Data Structures:

The kernel needs to keep state information about the use of I/O components. The kernel uses many similar
structures to track network connections, character-device communications, and other I/O activities.

 UNIX provides file-system access to a variety of entities, such as user files, raw devices, and the
address spaces of processes. Although each of these entities supports a read () operation, the semantics
differ.
 To read a user file, the kernel needs to probe the buffer cache before deciding whether to perform a disk
I/O.
 To read a raw disk, the kernel needs to ensure that the request size is a multiple of the disk sector size
and is aligned on a sector boundary.

The open-file record contains a dispatch table that holds pointers to the appropriate routines, depending on the type of file.

 Windows uses a message-passing implementation for I/O. An I/O request is converted into a message
that is sent through the kernel to the I/O manager and then to the device driver, each of which may
change the message contents.
 For output, the message contains the data to be written, shown in Fig 4.31. For input, the message contains a buffer to receive the data.


Fig 4.31:Kernel I/O System

27. State and explain the swap space management.(Nov/Dec-2019)

Swap-space management is another low-level task of the operating system. Virtual memory uses disk space
as an extension of main memory. Since disk access is much slower than memory access, using swap space
significantly decreases system performance.

Swap-Space Use:
 The systems that implement swapping may use swap space to hold the entire process image, including
the code and data segments.
 Paging systems may simply store pages that have been pushed out of main memory.

Swap-Space Location:
A swap space can reside in two places: Swap space can be carved out of the normal file system, or it can be in
a separate disk partition.

If the swap space is simply a large file within the file system, normal file-system routines can be used to create
it, name it, and allocate its space. This approach, though easy to implement, is also inefficient.

A swap-space storage manager is used to allocate and deallocate the blocks. This manager uses algorithms optimized for speed, rather than for storage efficiency.
Swap-Space Management: An Example

To illustrate the methods used to manage swap space, we now follow the evolution of swapping and paging in
UNIX.


In 4.3 BSD (Fig 4.32), swap space is allocated to a process when the process is started. Enough space is set aside to hold the program, known as the text pages or the text segment, and the data segment of the process.

o Preallocating all the needed space in this way generally prevents a process from running out of swap space while it executes. When a process starts, its text is paged in from the file system.

o These pages are written out to swap when necessary, and are read back in from there, so the file system
is consulted only once for each text page.
o Pages from the data segment are read in from the file system, or are created (if they are uninitialized),
and are written to swap space and paged back in as needed.

o One optimization (for instance, when two users run the same editor) is that processes with identical text
pages share these pages, both in physical memory and in swap space. Two per-process swap maps are
used by the kernel to track swap-space use.

o The text segment is a fixed size, so its swap space is allocated in 512 KB chunks, except for the final
chunk, which holds the remainder of the pages, in 1 KB increments

Fig 4.32: BSD text-segment swap map.

The data-segment swap map (Fig 4.33) is more complicated, because the data segment can grow over time. The map is of fixed size, but contains swap addresses for blocks of varying size. Given index i, a block pointed to by swap-map entry i is of size 2^i x 16 KB, to a maximum of 2 MB.

o When a process tries to grow its data segment beyond the final allocated block in its swap area, the
operating system allocates another block, twice as large as the previous one.
o This scheme results in small processes using only small blocks. It also minimizes fragmentation. The
blocks of large processes can be found quickly, and the swap map remains small.
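The growth rule above can be tabulated directly; this small sketch just evaluates the stated formula (block of 2^i x 16 KB per entry, capped at 2 MB, i.e. 2048 KB):

```python
KB = 1024

def block_size(i, cap=2 * 1024 * KB):
    """Size in bytes of the block behind swap-map entry i, capped at 2 MB."""
    return min((2 ** i) * 16 * KB, cap)

# Block sizes (in KB) for the first nine entries: each doubles until the cap.
sizes_kb = [block_size(i) // KB for i in range(9)]
print(sizes_kb)
```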

o In Solaris 1 (SunOS 4), the designers made changes to standard UNIX methods to improve efficiency and reflect technological changes.


Fig 4.33: BSD data-segment swap map.

28.Explain file system mounting and protection in detail.(Nov/Dec-2019)

File-System Mounting:

Just as a file must be opened before it is used, a file system must be mounted before it can be available to processes on the system. More specifically, the directory structure can be built out of multiple partitions, which must be mounted to make them available within the file-system name space.

The mount procedure is straightforward. The operating system is given the name of the device and the mount point, the location within the file structure at which to attach the file system, shown in Fig 4.34.

Fig 4.34: File system. (a) Existing. (b) Unmounted partition

Typically, a mount point is an empty directory at which the mounted file system will be attached.

 For instance, on a UNIX system, a file system containing users' home directories might be mounted as /home; then, to access the directory structure within that file system, one could precede the directory names with /home, as in /home/jane. Mounting that file system under /users would instead result in the path name /users/jane to reach the same directory.

 Next, the operating system verifies that the device contains a valid file system. It does so by asking the
device driver to read the device directory and verifying that the directory has the expected format.

 Finally, the operating system notes in its directory structure that a file system is mounted at the
specified mount point. This scheme enables the operating system to traverse its directory structure,
switching among file systems as appropriate.

To illustrate file mounting, consider the file system depicted in Fig 4.35, where the triangles represent subtrees of directories that are of interest. Fig 4.34(a) shows an existing file system, while Fig 4.34(b) shows an unmounted partition residing on /device/dsk.

At this point, only the files on the existing file system can be accessed. Fig 4.35 shows the effect of mounting the partition residing on /device/dsk over /users. If the partition is unmounted, the file system is restored to the situation depicted in Fig 4.34(a).

Fig 4.35: File Mount

Systems impose semantics to clarify functionality.

For example, a system may disallow a mount over a directory that contains files, or make the mounted file
system available at that directory and obscure the directory's existing files until the file system is unmounted,
terminating the use of the file system and allowing access to the original files in that directory.

As another example, a system may allow the same file system to be mounted repeatedly, at different mount
points, or it may only allow one mount per file system.

Consider the actions of the Macintosh operating system. Whenever the system encounters a disk for the first
time (hard disks are found at boot time, floppy disks are seen when they are inserted into the drive), the
Macintosh operating system searches for a file system on the device.

If it finds one, it automatically mounts the file system at the root level, adding a folder icon on the screen
labeled with the name of the file system (as stored in the device directory). The user then is able to click on the
icon and thus to display the newly mounted file system.

PRACTICE PROBLEMS BASED ON DISK SCHEDULING ALGORITHMS

Problem-01:

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The C-
LOOK scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.

Solution:

FCFS DISK SCHEDULING

Total head movements incurred while servicing these requests

= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 – 65) + (67 – 65)

= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2


= 632

C-LOOK DISK SCHEDULING


Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 – 14) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 169 + 27

= 326

LOOK SCHEDULING:

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 – 41) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 142 + 27

= 299

Alternatively,

Total head movements incurred while servicing these requests

= (183 – 53) + (183 – 14)

= 130 + 169

= 299

SSTF:

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122) + (183 – 124)

= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59

= 236

SCAN DISK SCHEDULING


Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 41) + (41
– 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 158 + 27

= 331

Alternatively,

Total head movements incurred while servicing these requests

= (199 – 53) + (199 – 14)

= 146 + 185

= 331

C-SCAN DISK SCHEDULING ALGORITHM-


Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 0) + (14 –
0) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 27

= 386

Alternatively,

Total head movements incurred while servicing these requests

= (199 – 53) + (199 – 0) + (41 – 0)

= 146 + 199 + 41

= 386
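The worked totals above can be reproduced with a small simulator. It assumes the same setup (head at 53 moving toward larger cylinders, cylinders 0 to 199) and breaks SSTF distance ties toward larger cylinders, matching the service order used in the worked answer:

```python
def travel(start, order):
    """Total head movement for servicing cylinders in the given order."""
    pos, total = start, 0
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf(start, pending):
    """Greedy shortest-seek-time-first; ties go to the larger cylinder."""
    pos, total, pending = start, 0, list(pending)
    while pending:
        nxt = min(pending, key=lambda c: (abs(c - pos), -c))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

reqs, head = [98, 183, 41, 122, 14, 124, 65, 67], 53
up   = sorted(r for r in reqs if r >= head)                  # above the head
down = sorted((r for r in reqs if r < head), reverse=True)   # below, descending
low  = sorted(r for r in reqs if r < head)                   # below, ascending

results = {
    "FCFS":   travel(head, reqs),
    "SSTF":   sstf(head, reqs),
    "LOOK":   travel(head, up + down),
    "C-LOOK": travel(head, up + low),
    "SCAN":   travel(head, up + [199] + down),
    "C-SCAN": travel(head, up + [199, 0] + low),
}
print(results)
```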

29. Following are the references attempted to hard disks : 67,22,78,34,21,78,99. Recommend a suitable
disk scheduling algorithm among FIFO, SSTF, SCAN AND LOOK after applying all. Provide
statements that support your recommendation.(Note : Initial head position is at 20)(Nov/Dec-2023)

1. FIFO (First In First Out):


 In FIFO, requests are served in the order they arrive.
 In this scenario, the order of requests is 67, 22, 78, 34, 21, 78, 99.
 The total head movement using FIFO can be calculated.
2. SSTF (Shortest Seek Time First):
 SSTF selects the request closest to the current head position.
 It minimizes the seek time but may lead to starvation of some requests if there are always
requests closer to the current position.
 The total head movement using SSTF can be calculated.
3. SCAN:
 SCAN moves the disk arm in one direction servicing requests until it reaches the end of the
disk, then it reverses direction.
 SCAN might be suitable if requests are distributed across the disk and there are requests both
closer and farther away from the current head position.
 The total head movement using SCAN can be calculated.
4. LOOK:
 LOOK is similar to SCAN but doesn't go all the way to the end of the disk, it reverses direction
when there are no more requests in the current direction.
 LOOK prevents the head from unnecessarily traversing the entire disk.
 The total head movement using LOOK can be calculated.

Given the initial head position is at 20, and the list of references attempted to the hard disks is 67, 22, 78, 34,
21, 78, 99, we can simulate each algorithm and calculate the total head movement for each. After analyzing the
total head movement for each algorithm, we can recommend the most suitable one.

Let's calculate the total head movement for each algorithm (initial head position 20; requests 67, 22, 78, 34, 21, 78, 99):

1. FIFO:
Total head movement = |67 – 20| + |22 – 67| + |78 – 22| + |34 – 78| + |21 – 34| + |78 – 21| + |99 – 78| = 47 + 45 + 56 + 44 + 13 + 57 + 21 = 283 cylinders.
2. SSTF:
Service order 21, 22, 34, 67, 78, 78, 99 gives 1 + 1 + 12 + 33 + 11 + 0 + 21 = 79 cylinders.
3. SCAN:
Every request lies above the head, so the arm sweeps from 20 to the end of the disk (assuming cylinders 0 to 199): 199 – 20 = 179 cylinders, with no return pass needed.
4. LOOK:
The arm sweeps from 20 only as far as the last request, 99: 99 – 20 = 79 cylinders.

Recommendation: LOOK, with a total head movement of 79 cylinders. SSTF matches this total here only because every request lies on one side of the head, and SSTF can starve distant requests in general; FIFO (283) and SCAN (179) travel much farther. LOOK therefore gives the minimum head movement while remaining starvation-free.

30.Is disk scheduling, other than FCFS scheduling, useful in a single- user environment? Explain your
answer. ( APRIL / MAY-2024)

Yes, disk scheduling other than First-Come, First-Served (FCFS) is useful in a single-user environment.
Here's an explanation of why alternative disk scheduling algorithms can be beneficial:

1. Overview of Disk Scheduling

Disk scheduling refers to the method used by an operating system to determine the order in which disk I/O
requests are processed. Disk I/O is typically a time-consuming operation, and optimizing how requests are
handled can greatly improve system performance.

In single-user environments, where there may still be multiple disk I/O requests queued up, using more
advanced scheduling algorithms can lead to reduced wait times and better overall system efficiency.

2. Issues with FCFS Scheduling

 Fairness: While FCFS is simple, it does not prioritize efficiency. It simply handles requests in the order
they arrive.
 Long Waiting Times: FCFS can result in long waiting times, especially if a disk request far away from
the current head position arrives before others, leading to inefficient disk arm movements.
 Uneven Disk Head Movement: FCFS may cause the disk arm to move unnecessarily back and forth,
causing delays in servicing I/O requests. This could increase the overall seek time, especially in
systems with a high volume of I/O requests.

3. Advantages of Alternative Scheduling Algorithms

a) Shortest Seek Time First (SSTF)

 How it Works: SSTF selects the disk request that is closest to the current disk head position.
 Benefits in a Single-User Environment:
o Reduced Seek Time: It minimizes the time spent by the disk arm moving back and forth.
o Improved Performance: It can greatly speed up I/O operations by reducing head movements
compared to FCFS.
o Fairness: While not as fair as FCFS, SSTF is still effective in single-user environments where
performance optimization is critical.

b) SCAN and C-SCAN (Circular SCAN)

 How they Work: The disk arm moves in one direction, serving requests along the way until it reaches
the end (or beginning), then reverses direction (SCAN) or wraps around (C-SCAN).
 Benefits in a Single-User Environment:
o Better Disk Arm Utilization: These algorithms avoid the inefficiency of moving the disk arm
back and forth and ensure a more predictable and systematic approach to servicing requests.
o Improved Seek Time: SCAN and C-SCAN tend to minimize the average seek time compared
to FCFS.
o Less Starvation: While SCAN avoids some of the issues of SSTF (where far requests might

starve), the disk head movement remains efficient.

c) LOOK and C-LOOK (Circular LOOK)

 How they Work: Similar to SCAN and C-SCAN, but instead of going to the end of the disk, the arm
stops at the last request in the direction of movement.
 Benefits in a Single-User Environment:
o Reduced Seek Time: LOOK and C-LOOK provide a more efficient approach than SCAN,
especially when requests are clustered in specific parts of the disk.
o Optimal for Unpredictable Request Patterns: These algorithms adjust to request locations
dynamically, reducing unnecessary movements.

4. Why Disk Scheduling Other Than FCFS is Useful in a Single-User Environment

 Improved Performance: Even in a single-user environment, there may be several disk operations
queued at any given time. Algorithms like SSTF, SCAN, and LOOK can optimize the order in which
these operations are executed, reducing seek times and improving overall throughput.
 Efficient Resource Utilization: The disk arm’s movement can be optimized, leading to better use of
the mechanical resources of the disk. This is especially important for systems running applications that
involve frequent disk access (e.g., databases or multimedia editing).
 Time Sensitivity: In scenarios where disk I/O performance is critical (e.g., real-time processing),
advanced scheduling algorithms can help meet time constraints by minimizing delays and reducing the
overall time spent servicing requests.
 Adaptability: Some algorithms (like LOOK and C-LOOK) adapt better to the nature of the request
queue, providing more efficient handling of disk requests that might otherwise cause excessive seek
times under FCFS.

31. Describe three circumstances under which blocking I/O should be used. Describe three
circumstances under which non-blocking I/O should be used? ( APRIL / MAY-2024)

Blocking I/O

Blocking I/O operations halt the execution of a program until the operation is complete. This approach is
simpler and often preferred in scenarios where waiting for the result is acceptable or beneficial:

1. Single-threaded, Simple Applications


o Circumstance: When the application is simple and performs only one task at a time, such as a
script that reads user input or a file and processes it sequentially.
o Example: A command-line calculator waiting for user input before proceeding.
2. Predictable and Fast Operations
o Circumstance: When the I/O operation is short and predictable, ensuring minimal delay.
o Example: Reading from a small local file where latency is negligible.
3. Resource-Constrained Systems

o Circumstance: When the system has limited resources and managing multiple threads or
processes for non-blocking I/O would introduce unnecessary complexity.
o Example: Embedded systems handling one sensor input at a time.

Non-blocking I/O

Non-blocking I/O allows the program to continue executing while the I/O operation is performed. This
approach is advantageous in scenarios requiring responsiveness and concurrency:

1. High-Performance, Concurrent Applications


o Circumstance: When the application needs to handle multiple I/O operations simultaneously
without waiting for each to complete.
o Example: A web server handling requests from many clients concurrently.
2. User-Interactive Systems
o Circumstance: When responsiveness is critical, such as in applications with graphical user
interfaces (GUIs).
o Example: A video player fetching data in the background while playing video seamlessly.
3. Handling Slow or Unpredictable I/O
o Circumstance: When dealing with I/O that might take an unknown amount of time, such as
network communication or accessing external APIs.
o Example: A chat application receiving and sending messages without freezing the interface.

32. Consider a file system in which a file can be deleted and its disk space reclaimed while links to that
file still exist. What problems may occur if a new file is created in the same storage area or with the same
absolute path name? How can these problems be avoided? (APRIL / MAY-2024)
In a file system where files can be deleted and their disk space reclaimed while links to the file still exist,
creating a new file in the same storage area or with the same absolute path name can cause the following
problems:

Problems

1. Dangling Links
o Description: Links pointing to the old file (now deleted) may now point to the newly created
file with the same name or in the same storage area. This can lead to unintended access to
incorrect data or corruption of data integrity.
o Example: If a symbolic link (symlink) points to the deleted file's absolute path, and a new file
is created with the same path, the link will resolve to the new file.
2. Data Corruption
o Description: If the storage area of the deleted file is reused for a new file, but some processes
still reference the old file's metadata, it can result in data corruption or unexpected behavior.
o Example: A program attempting to write to the old file's blocks may inadvertently modify the
new file's data.
3. Security Risks
o Description: Sensitive data might be inadvertently exposed if the new file is accessible through

links or paths intended for the deleted file.
o Example: A privileged file gets recreated at the same path as a previously deleted file, and
unauthorized links provide access to sensitive information

Solutions

1. Reference Counting and Delayed Deletion


o Explanation: Use a reference count in the file system metadata to track the number of links to a
file. Only delete and reclaim disk space when all links to the file are removed.
o Benefit: Prevents accidental reuse of storage blocks while links exist.
2. Unique Identifiers (Inodes)
o Explanation: Use unique inode numbers to reference files instead of relying solely on absolute
path names. Even if a new file is created at the same path, the links to the old file will not
resolve to the new one because they reference different inodes.
o Benefit: Ensures that links point to the correct file data or show an error if the file is deleted.
3. Path Name Locking or Recycling Delay
o Explanation: Implement a system where a deleted file's path name or storage blocks are locked
or not immediately reused for a certain period.
o Benefit: Reduces the likelihood of accidental conflicts.
4. Garbage Collection Mechanism
o Explanation: Introduce a garbage collection process to handle file deletion, ensuring that no
links or active processes are referencing the file before reclaiming its storage.
o Benefit: Prevents premature reuse of resources.
5. Error Handling for Stale Links
o Explanation: Design the system to detect and report attempts to access stale links or metadata,
guiding users or programs to handle such cases gracefully.
o Benefit: Minimizes confusion and unintended access.
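Solutions 1 and 2 above (reference counting plus unique inodes) can be illustrated with a small in-memory model. The `Inode` and `FileSystem` classes below are a hypothetical sketch, not a real file-system API: storage (here, the `data` field) is reclaimed only when the link count reaches zero, so a file recreated at the same path never aliases a still-linked old file.

```python
class Inode:
    """One file's metadata: its data blocks and a hard-link count."""
    def __init__(self, data):
        self.data = data
        self.links = 0

class FileSystem:
    def __init__(self):
        self.paths = {}  # path name -> Inode

    def create(self, path, data):
        inode = Inode(data)
        inode.links = 1
        self.paths[path] = inode

    def link(self, existing, new_path):
        inode = self.paths[existing]
        inode.links += 1
        self.paths[new_path] = inode

    def unlink(self, path):
        inode = self.paths.pop(path)
        inode.links -= 1
        if inode.links == 0:
            inode.data = None  # space reclaimed only when no links remain

fs = FileSystem()
fs.create("/a", "old contents")
fs.link("/a", "/b")              # /b is a hard link to the same inode
fs.unlink("/a")                  # /a removed, but /b still holds a reference
fs.create("/a", "new contents")  # new file, new inode: /b is unaffected
print(fs.paths["/b"].data)  # old contents
```

Because links reference inodes rather than path names, the dangling-link problem from the question cannot arise in this model.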

33. Contrast the performance of the three techniques for allocating disk blocks (contiguous, linked, and
indexed) for both sequential and random file access.(APRIL / MAY-2024)

Performance of Disk Block Allocation Techniques

The three common techniques for allocating disk blocks are contiguous, linked, and indexed. Their
performance varies significantly depending on whether file access is sequential or random.

1. Contiguous Allocation

 Technique Description:
Allocates consecutive disk blocks for a file. The entire file occupies a single continuous block of
storage.

Sequential Access:

 Performance: Excellent
o Data is stored contiguously, so reading sequentially requires minimal seek time and no
additional disk operations.
o High performance, as all required data can often be read in a single I/O operation.
 Advantages: Optimal for workloads involving large, sequential reads or writes (e.g., multimedia files).

Random Access:

 Performance: Very good


o Random access is efficient since the block locations can be calculated directly without
traversing metadata or links.
 Disadvantages: May lead to fragmentation over time, reducing available contiguous space for large
files.

2. Linked Allocation

 Technique Description:
Each disk block contains a pointer to the next block of the file. Blocks can be scattered across the disk.

Sequential Access:

 Performance: Good
o Reading blocks sequentially involves following the pointers, which requires additional disk
operations for pointer traversal.
o Still relatively efficient for small to medium-sized files but slower than contiguous allocation
for large files.

Random Access:

 Performance: Poor
o Random access requires traversing pointers from the start of the file or an index, resulting in
significant overhead and higher seek times.
 Disadvantages: Particularly inefficient for frequent random access due to high pointer traversal
overhead.

3. Indexed Allocation

 Technique Description:
Uses an index block that contains pointers to all the blocks of the file. The file's data blocks can be
located anywhere on the disk.

Sequential Access:

 Performance: Moderate
o Requires reading the index block initially, then accessing data blocks sequentially.
o Slower than contiguous allocation but avoids fragmentation issues.

Random Access:

 Performance: Good
o Random access is efficient because the index block provides direct access to any data block
without traversal.
o Index lookup time is minimal compared to traversing linked blocks.
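The contrast above can be made concrete with a toy cost model (an illustrative sketch, not a benchmark): count the number of block reads needed to reach block *i* of a file under each allocation scheme.

```python
def contiguous_cost(i):
    # start address + i gives the block's location directly
    return 1

def linked_cost(i):
    # must read and follow i pointers from the first block
    return i + 1

def indexed_cost(i):
    # one read of the index block, then the data block itself
    return 2

# Random access to block 99 of a 100-block file:
print(contiguous_cost(99), linked_cost(99), indexed_cost(99))  # 1 100 2
```

The model shows why linked allocation degrades badly under random access while contiguous and indexed allocation stay cheap.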

34. Write short notes on :


(i) Directory organization.
(ii) File system mounting.
(iii) Kernel (I/O) system.( NOV / DEC-2024)

(i) Directory Organization

A directory is a data structure used by file systems to organize and manage files. Its organization determines
how files are stored, accessed, and manipulated.

 Single-Level Directory:
A simple structure where all files are stored in a single directory. Easy to implement but becomes
cumbersome as the number of files grows.
Example: Early personal computers.
 Two-Level Directory:
Provides separate directories for each user. This improves organization and prevents filename conflicts
but can complicate file sharing.
 Hierarchical (Tree-Structured) Directory:
Organizes directories in a tree-like structure, allowing for nested directories (subdirectories). It provides
flexibility and scalability.
Example: Most modern file systems (e.g., Windows, Linux).
 Acyclic-Graph Directory:
Allows sharing of files or directories by using links, forming an acyclic graph. This supports better
collaboration but requires mechanisms to handle dangling references.
 General Graph Directory:
Permits cyclic links, requiring additional mechanisms to prevent infinite loops during directory
traversal.
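A hierarchical (tree-structured) directory can be modeled as nested dictionaries. The sketch below is hypothetical, for illustration only: it resolves an absolute path by walking the tree one component at a time, exactly as a file system walks directories.

```python
# Each directory is a dict mapping names to files (strings) or subdirectories.
root = {"home": {"user": {"notes.txt": "file contents"}}, "tmp": {}}

def resolve(path):
    """Walk an absolute path like /home/user/notes.txt from the root."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]  # raises KeyError if a component is missing
    return node

print(resolve("/home/user/notes.txt"))  # file contents
```

An acyclic-graph directory would differ only in that two names may map to the same shared object.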

(ii) File System Mounting

File system mounting is the process of making a file system accessible to the operating system and its users. It

integrates a secondary storage device into the system's directory hierarchy.

 Steps in Mounting:
1. Verify File System: The OS ensures the integrity of the file system on the device.
2. Attach to Hierarchy: The file system is attached to a mount point, typically an empty
directory.
3. Access: Once mounted, the files and directories on the device become accessible through the
mount point.
 Types of Mounting:
o Automatic Mounting: File systems are mounted automatically during boot.
o Manual Mounting: Users or administrators explicitly mount file systems as needed.
o Unmounting: A file system must be unmounted before the device can be safely removed or
ejected.
 Purpose: File system mounting allows seamless integration of additional storage devices or remote file
systems into a single directory tree.
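The mounting steps above can be sketched as a mount table: a mapping from mount points to file systems, where path resolution picks the longest mount point that prefixes the path. The names (`rootfs`, `usbfs`) and functions below are illustrative assumptions, not a real OS interface.

```python
mounts = {"/": "rootfs"}  # mount point -> file system attached there

def mount(point, fs):
    mounts[point] = fs

def resolve_fs(path):
    """Longest mount point that is a prefix of the path wins."""
    best = "/"
    for point in mounts:
        if path == point or path.startswith(point.rstrip("/") + "/"):
            if len(point) > len(best):
                best = point
    return mounts[best]

mount("/mnt/usb", "usbfs")       # step 2: attach device to the hierarchy
print(resolve_fs("/mnt/usb/photo.jpg"))  # usbfs
print(resolve_fs("/home/user"))          # rootfs
```

Unmounting would simply delete the entry, after which paths under `/mnt/usb` fall back to the root file system.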

(iii) Kernel (I/O) System

The kernel's I/O system is responsible for managing all input and output operations between the hardware
devices and the user-level processes. It abstracts device-specific details and provides a uniform interface.

 Components:
1. Device Drivers: Software modules that enable communication with specific hardware devices.
2. Buffering: Temporarily stores data to handle speed mismatches between devices or between
devices and the CPU.
3. Caching: Maintains a copy of frequently accessed data in memory to improve performance.
4. Spooling: Allows devices like printers to handle one job at a time while queuing others.
 Responsibilities:
o Device independence: Provides a standard interface regardless of the underlying hardware.
o Error handling: Detects and handles errors in I/O operations.
o Resource management: Allocates and manages I/O resources like disk blocks and buffers.
 I/O Scheduling: The kernel prioritizes and schedules I/O requests to optimize performance and reduce
latency, particularly in multi-tasking environments.
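Buffering, one of the kernel I/O components listed above, can be sketched as a bounded queue that absorbs the speed mismatch between a fast producer (the CPU) and a slow consumer (the device). This is a simplified single-threaded model, not actual kernel code.

```python
from collections import deque

class Buffer:
    """Bounded FIFO buffer between a producer and a slow device."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()

    def write(self, block):
        if len(self.items) >= self.capacity:
            return False  # buffer full: a real caller would block or retry
        self.items.append(block)
        return True

    def read(self):
        # the device drains blocks in arrival order
        return self.items.popleft() if self.items else None

buf = Buffer(capacity=2)
print(buf.write("b1"), buf.write("b2"), buf.write("b3"))  # True True False
print(buf.read())  # b1
```

Spooling works the same way at a larger granularity: whole jobs, rather than blocks, are queued for a device that handles one at a time.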

35. Suppose that a disk drive has 5000 cylinders numbered 0 to 4999 the drive currently services a
request at cylinder 143, and the previous request was at cylinder 125. the queue of pending requests in
the FIFO order is 86, 1470, 913, 1774, 948, 1509, 1022, 1750 and 130. Starting from the current position,
what is the total distance (in cylinders) that the disk arm moves to satisfy all pending requests for each
of the following algorithms:

(i) FCFS
(ii) SSTF
(iii) SCAN
(iv) LOOK
(v) C-SCAN. (NOV / DEC-2024)

To calculate the total distance the disk arm moves for each algorithm, we simulate the request servicing order.
Below are detailed calculations for each algorithm:

(i) First-Come, First-Served (FCFS)

Requests are serviced in the order they arrive.

Sequence:

143 → 86 → 1470 → 913 → 1774 → 948 → 1509 → 1022 → 1750 → 130

Calculation:

 Distance:
|143−86| + |86−1470| + |1470−913| + |913−1774| + |1774−948| + |948−1509| + |1509−1022| + |1022−1750| + |1750−130|
= 57 + 1384 + 557 + 861 + 826 + 561 + 487 + 728 + 1620 = 7081

Total Distance: 7081 cylinders

(ii) Shortest Seek Time First (SSTF)

The request closest to the current head position is serviced next.

Sequence:

143 → 130 → 86 → 913 → 948 → 1022 → 1470 → 1509 → 1750 → 1774

Calculation:

 Distance:
|143−130| + |130−86| + |86−913| + |913−948| + |948−1022| + |1022−1470| + |1470−1509| + |1509−1750| + |1750−1774|
= 13 + 44 + 827 + 35 + 74 + 448 + 39 + 241 + 24 = 1745

Total Distance: 1745 cylinders

(iii) SCAN (Elevator Algorithm)

The disk arm moves in one direction, servicing all requests until it reaches the end of the disk, then reverses
direction.

Direction: Initially moving up (towards higher cylinder numbers).

Sequence:

143 → 913 → 948 → 1022 → 1470 → 1509 → 1750 → 1774 → 4999 → 130 → 86

Calculation:

 Distance (upwards):
|143−913| + |913−948| + |948−1022| + |1022−1470| + |1470−1509| + |1509−1750| + |1750−1774| + |1774−4999|
= 770 + 35 + 74 + 448 + 39 + 241 + 24 + 3225 = 4856
 Distance (downwards):
|4999−130| + |130−86| = 4869 + 44 = 4913

Total Distance: 4856 + 4913 = 9769 cylinders

(iv) LOOK

LOOK is similar to SCAN but does not move to the ends of the disk unless there are requests there. The disk
arm changes direction once all requests in the current direction are serviced.

Direction: Initially moving up (towards higher cylinder numbers).

Sequence:

143 → 913 → 948 → 1022 → 1470 → 1509 → 1750 → 1774 → 130 → 86

Calculation:

 Distance (upwards):
|143−913| + |913−948| + |948−1022| + |1022−1470| + |1470−1509| + |1509−1750| + |1750−1774|
= 770 + 35 + 74 + 448 + 39 + 241 + 24 = 1631
 Distance (downwards):
|1774−130| + |130−86| = 1644 + 44 = 1688

Total Distance: 1631 + 1688 = 3319 cylinders

(v) C-SCAN (Circular SCAN)

The disk arm moves in one direction (upwards), servicing requests until it reaches the end of the disk, then
jumps back to the beginning and continues.

Direction: Only upwards.

Sequence:

143 → 913 → 948 → 1022 → 1470 → 1509 → 1750 → 1774 → 4999 → 86 → 130

Calculation:

 Distance (upwards):
|143−913| + |913−948| + |948−1022| + |1022−1470| + |1470−1509| + |1509−1750| + |1750−1774| + |1774−4999|
= 770 + 35 + 74 + 448 + 39 + 241 + 24 + 3225 = 4856
 Jump to start:
|4999−0| = 4999
 Distance (upwards from start):
|0−86| + |86−130| = 86 + 44 = 130

Total Distance: 4856 + 4999 + 130 = 9985 cylinders
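The five totals can be re-derived with a short simulation. The head position (143), the request queue, and the disk edge (cylinder 4999) come from the question; each algorithm's service order is built explicitly and the absolute cylinder differences are summed.

```python
def total_movement(start, order):
    """Sum of |pos - next| over a service order."""
    pos, total = start, 0
    for cyl in order:
        total += abs(pos - cyl)
        pos = cyl
    return total

head = 143
queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]

fcfs = total_movement(head, queue)  # service in arrival order

# SSTF: repeatedly pick the pending request closest to the current head
pending, pos, sstf_order = list(queue), head, []
while pending:
    nxt = min(pending, key=lambda c: abs(c - pos))
    pending.remove(nxt)
    sstf_order.append(nxt)
    pos = nxt
sstf = total_movement(head, sstf_order)

up = sorted(c for c in queue if c >= head)                   # above the head
down = sorted((c for c in queue if c < head), reverse=True)  # below the head

scan = total_movement(head, up + [4999] + down)   # sweep to the edge, reverse
look = total_movement(head, up + down)            # reverse at the last request
cscan = (total_movement(head, up + [4999])        # sweep up to the edge,
         + 4999                                   # jump back to cylinder 0,
         + total_movement(0, sorted(down)))       # continue upward

print(fcfs, sstf, scan, look, cscan)  # 7081 1745 9769 3319 9985
```

Whether the C-SCAN jump from the edge back to cylinder 0 is counted varies between textbooks; it is counted here, matching the working shown above.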


ANNA UNIVERSITY QUESTIONS


MAY /JUNE-2023
PART – A
1. What is a sequential access file? (Q.NO:73 )
2. Define an immutable shared file. (Q.NO:74 )

PART – B
1) What is a directory? Outline a tree-structured directory structure and an acyclic-graph directory structure with
appropriate examples. (Q.NO: 4)
2) Explain contiguous allocation and linked allocation of disk space with an examples. (Q.NO: 9)
PART – C

1) Consider a disk queue with requests for i/o to blocks on cylinders in this following order.
98,183,37,122,14,124,65,67.
The disk head pointer is initially at cylinder 53. Outline first come first serve disk scheduling algorithm,
SCAN disk scheduling algorithms and Shortest seek time first disk scheduling algorithm with a
diagram.(Q.No:1)

ANNA UNIVERSITY QUESTIONS


NOV / DEC-2023
PART – A

1. Give the role of operating system in free space management. (Q.NO:75)


2. List the various file access method.(Q. NO:76)

PART – B
1. Write detailed notes on file system interface and file system structure.(Q.NO:7)

2. Following are the references attempted to hard disks : 67,22,78,34,21,78,99. Recommend a suitable
disk scheduling algorithm among FIFO, SSTF, SCAN AND LOOK after applying all. Provide
statements that support your recommendation.(Note : Initial head position is at 20)(Q.NO:29)


ANNA UNIVERSITY QUESTIONS


APRIL / MAY-2024
PART – A
1. Write short notes on free space management. (Q.No.77)
2. State the functions of file system. (Q.No.78)

PART – B
1. Is disk scheduling, other than FCFS scheduling, useful in a single- user environment? Explain your
answer. (Q.No.30)
2. Describe three circumstances under which blocking I/O should be used. Describe three circumstances
under which non-blocking I/O should be used. ( Q.No.31)
3. Consider a file system in which a file can be deleted and its disk space reclaimed while links to that file
still exist. What problems may occur if a new file is created in the same storage area or with the same absolute
path name? How can these problems be avoided? (Q.No.32)
4. Contrast the performance of the three techniques for allocating disk blocks (contiguous, linked, and
indexed) for both sequential and random file access? (Q.No.33)

ANNA UNIVERSITY QUESTIONS


NOV / DEC-2024
PART – A
1. Name the three methods of allocating disk space for file systems. (Q.No.79)
2. List the operations that can be performed on the directory. (Q.No.80)

PART – B

1. Write short notes on :


(i) Directory organization.
(ii) File system mounting.
(iii) Kernel (I/O) system. (Q.No.34)

2. Suppose that a disk drive has 5000 cylinders numbered 0 to 4999 the drive currently services a request
at cylinder 143, and the previous request was at cylinder 125. the queue of pending requests in the FIFO order
is 86, 1470, 913, 1774, 948, 1509, 1022, 1750 and 130. Starting from the current position, what is the total
distance (in cylinders) that the disk arm moves to satisfy all pending requests for each of the following
algorithms:
(i) FCFS
(ii) SSTF
(iii) SCAN
(iv) LOOK
(v) C-SCAN (Q.No.35)



MAILAM ENGINEERING COLLEGE CS3451 – Introduction to Operating System UNIT -V
UNIT V VIRTUAL MACHINES AND MOBILE OS
Virtual Machines – History, Benefits and Features, Building Blocks, Types of Virtual Machines and
their Implementations, Virtualization and Operating-System Components; Mobile OS - iOS and Android.
PART-A

1) What is virtual machine? (April/May-2023)


A Virtual Machine (VM) is a compute resource that uses software instead of a physical computer
to run programs and deploy apps. One or more virtual “guest” machines run on a physical “host”
machine. Each virtual machine runs its own operating system and functions separately from the other
VMs, even when they are all running on the same host. This means that, for example, a virtual MacOS
virtual machine can run on a physical PC.

2) Define virtualization. (Nov/Dec-2024)


● Virtualization is a process that allows for more efficient utilization of physical computer hardware
and is the foundation of cloud computing.

● Virtualization uses software to create an abstraction layer over computer hardware that allows the
hardware elements of a single computer—processors, memory, storage and more—to be divided
into multiple virtual computers, commonly called virtual machines (VMs).
● Each VM runs its own operating system (OS) and behaves like an independent computer, even
though it is running on just a portion of the actual underlying computer hardware.
3) What are the applications of virtual machines?
● Building and deploying apps to the cloud.
● Trying out a new operating system (OS), including beta releases.
● Spinning up a new environment to make it simpler and quicker for developers to run dev-test
scenarios.
● Backing up your existing OS.
● Accessing virus-infected data or running an old application by installing an older OS.

4) What are the benefits of virtual machine? (Nov/Dec-2023)


● Security benefits
● Scalability
● Lowered downtime
● Agility and speed
● Cost savings
5) What are the building blocks of a virtual machine?
(i) Trap-and-Emulate
(ii) Binary Translation and
(iii) Hardware Assistance

6) What is trap-and-emulate in virtual machine?
Trap-and-emulate is a technique used by the virtual machine to emulate privileged instructions
and registers and pretend to the OS that it's still in kernel mode. An operation system is designed to have
full control of the system.

7) Define binary translation.


The VMM scans the instruction stream and identifies the privileged, control- and behavior-
sensitive instructions. When these instructions are identified, they are trapped into the VMM, which
emulates the behavior of these instructions. The method used in this emulation is called binary
translation.

8) What is hardware assistance in VM?


● When using this assistance, the guest can use a separate mode of execution called guest mode.
● The guest code, whether application code or privileged code, runs in the guest mode.
● On certain events, the processor exits out of guest mode and enters root mode. The hypervisor
executes in the root mode, determines the reason for the exit, takes any required actions, and restarts
the guest in guest mode.
● When you use hardware assistance for virtualization, there is no need to translate the code.
● As a result, system calls or trap-intensive workloads run very close to native speed.
● Some workloads, such as those involving updates to page tables, lead to a large number of exits
from guest mode to root mode. Depending on the number of such exits and total time spent in exits,
hardware-assisted CPU virtualization can speed up execution significantly.

9) What are the types of virtual machines?

● Process virtual machines


● System virtual machines

10) Define Paravirtualization. (April/May-2024)

Paravirtualization is a category of CPU virtualization that uses hypercalls to handle privileged
operations at compile time. In paravirtualization, the guest OS is not completely isolated; it is partially
isolated from the virtualization layer and hardware by the virtual machine. VMware and Xen are
examples of paravirtualization.

11) What is Programming-Environment Virtualization?

● The virtualization of programming environments is a separate type of virtualization that is based


on a different execution paradigm.
● A programming language is meant to run in a custom-built virtualized environment in this case.
● Oracle's Java, for example, includes several capabilities that rely on it running in the Java virtual
machine (JVM), such as specialized methods for security and memory management.

12) Define emulation.

Emulation, as name suggests, is a technique in which Virtual machines simulates complete


hardware in software. There are many virtualization techniques that were developed in or inherited from
emulation technique. It is very useful when designing software for various systems. It simply allows us to
use current platform to access an older application, data, or operating system.

13) Define CPU scheduling in VM.

A system with virtualization, even a single-CPU system, frequently acts like a multiprocessor
system. The virtualization software presents one or more virtual CPUs to each of the virtual machines
running on the system and then schedules the use of the physical CPUs among the virtual machines.

14) What is virtual to physical procedure?

Virtual to physical (V2P) is the process of converting or porting a virtual machine (VM) onto
and/or as a standard physical machine. V2P allows a VM to transform into a physical machine without
losing its state, data and overall operations.

15) What are the types of virtualizations?

 Network Virtualization
 Storage Virtualization
 Server Virtualization
 Application Virtualization
 Desktop Virtualization

16) Write some of the key features of UIKit.


 Application lifecycle management
 Application event handling (e.g. touch screen user interaction)
 Multitasking
 Wireless Printing
 Data protection via encryption
 Cut, copy, and paste functionality
 Web and text content presentation and management
 Data handling
 Inter-application integration

17) Write some SDK framework.

 UIKit Framework (UIKit.framework)


 Map Kit Framework (MapKit.framework)
 Message UI Framework (MessageUI.framework)
 Game Kit Framework (GameKit.framework)

18) What are the iOS Media Layer?

 Core Video Framework (CoreVideo.framework)


 Core Text Framework (CoreText.framework)
 Image I/O Framework (ImageIO.framework)
 Assets Library Framework (AssetsLibrary.framework)
 Core Graphics Framework (CoreGraphics.framework)

19) List the iOS Audio support files.

 Foundation framework (AVFoundation.framework)


 Core Audio Frameworks
 Open Audio Library (OpenAL)
 Media Player Framework (MediaPlayer.framework)
 Core Midi Framework (CoreMIDI.framework)

20) What is iOS Core OS Layer?

The Core OS Layer occupies the bottom position of the iOS stack and, as such, sits directly on top
of the device hardware.
The layer provides a variety of services including low level networking, access to external
accessories and the usual fundamental operating system services such as memory management, file
system handling and threads.

21) List the iOS Core OS Layer.

 Accelerate Framework (Accelerate.framework)


 External Accessory Framework (ExternalAccessory.framework)
 Security Framework (Security.framework)

22) Draw an iOS 6 Architecture.

23) What are the advantages and disadvantages of writing an operating system in a high-level
language, such as C? (Nov/Dec-2019)

Advantages: The code can be written faster, is more compact, and is easier to understand and debug. In
addition, improvements in compiler technology will improve the generated code for the entire operating
system by simple recompilation.
An operating system is also far easier to port (to move to different hardware) if it is written in a
higher-level language.
Disadvantages: Using a high-level language to implement an operating system leads to some loss in
speed and an increase in storage requirements. However, in modern systems only a small amount of code
needs to be highly optimized for performance, such as the CPU scheduler and memory manager.

24) What are the Basic features of Linux?


1.Portable
2.Open Source
3.Multiuser
4.Multiprogramming
5.Hierarchical File system.

25) Write a note an android. (April/May-2023)


Android OS is a Linux-based mobile operating system that primarily runs on smartphones and
tablets. The Android platform includes an operating system based upon the Linux kernel, a GUI, a web
browser and end-user applications that can be downloaded.

26) List any two components that are unique for mobile OS. (Nov/Dec-2023)
Its design lets users manipulate the mobile devices intuitively, with finger movements that mirror
common motions, such as pinching, swiping, and tapping.

27) What is the major design goal for the android platform? (April/May-2024)

The major design goal for the Android platform is to provide an open-source, flexible, and
customizable environment that enables developers to create applications that can run on a wide variety
of devices. Android's design emphasizes:

1. Portability
2. User Experience
3. Interoperability
4. Security
5. Developer Flexibility.

28) State the merits of Android OS. (Nov/Dec -2024)


Android OS offers several key advantages, making it a popular choice for smartphones, tablets,

and other devices. Here are some of its main merits:

1. Open Source
2. Wide Device Compatibility
3. Customization Options
4. Google Integration
5. Large App Ecosystem
6. Multi-tasking
7. Hardware Variety
8. Google Assistant
9. Frequent Software Updates.


PART-B

1) Explain in Detail about History of Virtual machines.

Virtual machines first appeared commercially on IBM mainframes in 1972. Virtualization was
provided by the IBM VM operating system. This system has evolved and is still available. In addition, many
of its original concepts are found in other systems, making it worth exploring.

IBM VM370 divided a mainframe into multiple virtual machines, each running its own operating
system. A major difficulty with the VM approach involved disk systems. Suppose that the physical machine
had three disk drives but wanted to support seven virtual machines.

It could not allocate a disk drive to each virtual machine. The solution was to provide virtual disks—
termed minidisks in IBM’s VM operating system.

● The minidisks were identical to the system’s hard disks in all respects except size. The
system implemented each minidisk by allocating as many tracks on the physical disks as the
minidisk needed.

● Once the virtual machines were created, users could run any of the operating systems or software
packages that were available on the underlying machine. For the IBM VM system, a user
normally ran CMS—a single-user interactive operating system.

● For many years after IBM introduced this technology, virtualization remained in its domain.
Most systems could not support virtualization. However, a formal definition of virtualization
helped to establish system requirements and a target for functionality.

The virtualization requirements stated that:

1. A VMM provides an environment for programs that is essentially identical to the original machine.

2. Programs running within that environment show only minor performance decreases.

3. The VMM is in complete control of system resources. These requirements of fidelity, performance,
and safety still guide virtualization efforts today. By the late 1990s, Intel 80x86 CPUs had become
common, fast, and rich in features. Accordingly, developers launched multiple efforts to implement
virtualization on that platform.

● Both Xen and VMware created technologies, still used today, to allow guest operating systems
to run on the 80x86. Since that time, virtualization has expanded to include all common CPUs,
many commercial and open-source tools, and many operating systems.
● For example, the open-source VirtualBox project (http://www.virtualbox.org) provides a
program than runs on Intel x86 and AMD64 CPUs and on Windows, Linux, Mac OS X, and

Solaris host operating systems. Possible guest operating systems include many versions of
Windows, Linux, Solaris, and BSD, including even MS-DOS and IBM OS/2.

2) Discuss in Detail about the Benefits and Features of Virtual machines.

Several advantages make virtualization attractive. Most of them are fundamentally related
to the ability to share the same hardware yet run several different execution environments (that is,
different operating systems) concurrently. One important advantage of virtualization is that the host
system is protected from the virtual machines, just as the virtual machines are protected from each other.

● A virus inside a guest operating system might damage that operating system but is unlikely to
affect the host or the other guests. Because each virtual machine is almost completely isolated
from all other virtual machines, there are almost no protection problems.

● A potential disadvantage of isolation is that it can prevent sharing of resources. Two approaches
to provide sharing have been implemented.

● First, it is possible to share a file-system volume and thus to share files. Second, it is possible to
define a network of virtual machines, each of which can send information over the virtual
communications network.

● The network is modeled after physical communication networks but is implemented in software.
Of course, the VMM is free to allow any number of its guests to use physical resources, such as a
physical network connection (with sharing provided by the VMM), in which case the allowed
guests could communicate with each other via the physical network.

● One feature common to most virtualization implementations is the ability to freeze, or
suspend, a running virtual machine. Many operating systems provide that basic feature for
processes, but VMMs go one step further and allow copies and snapshots to be made of the guest.

● The copy can be used to create a new VM or to move a VM from one machine to another with
its current state intact. The guest can then resume where it was, as if on its original machine,
creating a clone. The snapshot records a point in time, and the guest can be reset to that point if
necessary (for example, if a change was made but is no longer wanted).

● Often, VMMs allow many snapshots to be taken. For example, snapshots might record a guest’s
state every day for a month, making restoration to any of those snapshot states possible. These
abilities are used to good advantage in virtual environments. A virtual machine system is a perfect
vehicle for operating-system research and development.
● Normally, changing an operating system is a difficult task. Operating systems are large and
complex programs, and a change in one part may cause obscure bugs to appear in some other part.

PREPARED BY: Mr.D.SRINIVASAN AP/CSE , Mrs.A.THILAGAVATHI AP/CSE & Mr.R.ARUNKUMAR AP/CSE 8


MAILAM ENGINEERING COLLEGE CS3451 – Introduction to Operating System UNIT -V
The power of the operating system makes changing it particularly dangerous. Because the
operating system executes in kernel mode, a wrong change in a pointer could cause an error that
would destroy the entire file system. Thus, it is necessary to test all changes to the operating
system carefully.

● Furthermore, the operating system runs on and controls the entire machine, meaning that the
system must be stopped and taken out of use while changes are made and tested. This period is
commonly called system-development time. Since it makes the system unavailable to users,
system-development time on shared systems is often scheduled late at night or on weekends,
when system load is low.

● A virtual-machine system can eliminate much of this latter problem. System programmers are
given their own virtual machine, and system development is done on the virtual machine instead
of on a physical machine. Normal system operation is disrupted only when a completed and tested
change is ready to be put into production.

● Another advantage of virtual machines for developers is that multiple operating systems can
run concurrently on the developer’s workstation. This virtualized workstation allows for rapid
porting and testing of programs in varying environments.

● In addition, multiple versions of a program can run, each in its own isolated operating system,
within one system. Similarly, quality assurance engineers can test their applications in multiple
environments without buying, powering, and maintaining a computer for each environment.

● A major advantage of virtual machines in production data-center use is system consolidation, which involves taking two or more separate systems and running them in virtual machines on one system. Such physical-to-virtual conversions result in resource optimization, since many lightly used systems can be combined to create one more heavily used system.

● Consider, too, that management tools that are part of the VMM allow system administrators to manage many more systems than they otherwise could. A virtual environment might include 100 physical servers, each running 20 virtual servers.

● Without virtualization, 2,000 servers would require several system administrators. With
virtualization and its tools, the same work can be managed by one or two administrators. One of
the tools that make this possible is templating, in which one standard virtual machine image,

including an installed and configured guest operating system and applications, is saved and used
as a source for multiple running VMs.

● Other features include managing the patching of all guests, backing up and restoring the guests,
and monitoring their resource use. Virtualization can improve not only resource utilization but
also resource management. Some VMMs include a live migration feature that moves a running
guest from one physical server to another without interrupting its operation or active network
connections.

● If a server is overloaded, live migration can thus free resources on the source host while not
disrupting the guest. Similarly, when host hardware must be repaired or upgraded, guests can be
migrated to other servers, the evacuated host can be maintained, and then the guests can be
migrated back.

● This operation occurs without downtime and without interruption to users. Think about the
possible effects of virtualization on how applications are deployed. If a system can easily add,
remove, and move a virtual machine, then why install applications on that system directly?
Instead, the application could be preinstalled on a tuned and customized operating system in a
virtual machine.

● Since the application then runs wherever its virtual machine is placed, the machines that host it need not be expensive, high-performance components. Other uses of virtualization are sure to follow as it becomes more prevalent and hardware support continues to improve.

3) Explain in detail about Building Blocks of virtual machine.

Although the virtual machine concept is useful, it is difficult to implement. Much work is required to provide an exact duplicate of the underlying machine. This is especially challenging on dual-mode systems, where the underlying machine has only user mode and kernel mode.

● The ability to virtualize depends on the features provided by the CPU, and these features are the building blocks needed for efficient virtualization. If the features are sufficient, then it is possible to write a VMM that provides a guest environment. Otherwise, virtualization is impossible.
● VMMs use several techniques to implement virtualization, including trap-and-emulate and
binary translation. We discuss each of these techniques in this section, along with the hardware
support needed to support virtualization.

● One important concept found in most virtualization options is the implementation of a virtual
CPU (VCPU). The VCPU does not execute code. Rather, it represents the state of the CPU as the
guest machine believes it to be.

● For each guest, the VMM maintains a VCPU representing that guest’s current CPU state.
When the guest is context-switched onto a CPU by the VMM, information from the VCPU is
used to load the right context, much as a general-purpose operating system would use the PCB.
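The VCPU described above can be pictured as a plain record that the VMM saves and restores on each guest context switch, much like a PCB. The following is a hypothetical Python sketch; the field names are invented for illustration and do not come from any real VMM:

```python
from dataclasses import dataclass, field

@dataclass
class VCPU:
    """Software-only record of the CPU state a guest believes it has."""
    mode: str = "user"            # virtual user or virtual kernel mode
    pc: int = 0                   # guest program counter
    regs: dict = field(default_factory=lambda: {"r0": 0, "r1": 0})

class VMM:
    def __init__(self):
        self.vcpus = {}           # one VCPU per guest, like a PCB table

    def context_switch(self, physical_cpu, guest_id):
        # Load the saved VCPU state onto the physical CPU, much as a
        # general-purpose OS loads a PCB when dispatching a process.
        vcpu = self.vcpus.setdefault(guest_id, VCPU())
        physical_cpu.update(pc=vcpu.pc, regs=dict(vcpu.regs))
        return vcpu
```

Here `physical_cpu` is modeled as a simple dictionary; the point is only that the VCPU holds state and never executes code itself.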


(i) Trap-and-Emulate

On a typical dual-mode system, the virtual machine guest can execute only in user mode
(unless extra hardware support is provided). The kernel, of course, runs in kernel mode, and it is not safe
to allow user-level code to run in kernel mode.

● Just as the physical machine has two modes, however, so must the virtual machine. Consequently,
we must have a virtual user mode and a virtual kernel mode, both of which run in physical user
mode.

● Those actions that cause a transfer from user mode to kernel mode on a real machine (such as a
system call, an interrupt, or an attempt to execute a privileged instruction) must also cause a
transfer from virtual user mode to virtual kernel mode in the virtual machine.

● How can such a transfer be accomplished? The procedure is as follows: When the kernel in the
guest attempts to execute a privileged instruction, that is an error (because the system is in user
mode) and causes a trap to the VMM in the real machine.
● The VMM gains control and executes (or “emulates”) the action that was attempted by the guest
kernel on the part of the guest. It then returns control to the virtual machine. This is called the
trap-and-emulate method and is shown in Fig 5.1.
Most virtualization products use this method to one extent or other. With privileged instructions, time becomes an issue. All nonprivileged instructions run natively on the hardware, providing the same performance as native execution.

Fig 5.1: Trap-and-Emulate

● Privileged instructions create extra overhead, however, causing the guest to run more slowly than
it would natively. In addition, the CPU is being multiprogrammed among many virtual machines,
which can further slow down the virtual machines in unpredictable ways.

● This problem has been approached in various ways. IBM VM, for example, allows normal
instructions for the virtual machines to execute directly on the hardware. Only the privileged
instructions (needed mainly for I/O) must be emulated and hence execute more slowly.

● In general, with the evolution of hardware, the performance of trap-and-emulate functionality has
been improved, and cases in which it is needed have been reduced. For example, many CPUs now
have extra modes added to their standard dual-mode operation.

The VCPU need not keep track of what mode the guest operating system is in, because the physical
CPU performs that function. In fact, some CPUs provide guest CPU state management in hardware, so
the VMM need not supply that functionality, removing the extra overhead.
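The trap-and-emulate flow above can be illustrated with a toy simulation (a hypothetical Python sketch; the instruction names are invented for illustration):

```python
PRIVILEGED = {"load_cr3", "halt"}   # invented privileged instruction names

class TrapToVMM(Exception):
    """Raised when user-mode code attempts a privileged instruction."""

def physical_cpu_execute(instr, mode):
    # The guest kernel always runs in *physical* user mode, so any
    # privileged instruction traps to the VMM instead of executing.
    if instr in PRIVILEGED and mode == "user":
        raise TrapToVMM(instr)
    return f"ran {instr} natively"

def vmm_run_guest(instrs):
    log = []
    for instr in instrs:
        try:
            log.append(physical_cpu_execute(instr, mode="user"))
        except TrapToVMM as trap:
            # The VMM gains control, emulates the instruction on the
            # guest's behalf, then returns control to the guest.
            log.append(f"VMM emulated {trap.args[0]}")
    return log
```

Nonprivileged instructions pass straight through, which is why only the privileged ones incur the extra overhead discussed above.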

(ii) Binary Translation

Some CPUs do not have a clean separation of privileged and nonprivileged instructions. Unfortunately for virtualization implementers, the Intel x86 CPU line is one of them.

● No thought was given to running virtualization on the x86 when it was designed. (In fact, the first
CPU in the family—the Intel 4004, released in 1971—was designed to be the core of a
calculator.) The chip has maintained backward compatibility throughout its lifetime, preventing
changes that would have made virtualization easier through many generations.

● Let’s consider an example of the problem. The command popf loads the flag register from the
contents of the stack. If the CPU is in privileged mode, all of the flags are replaced from the stack.
If the CPU is in user mode, then only some flags are replaced, and others are ignored.

Binary translation is fairly simple in concept but complex in implementation. The basic steps are as
follows:

1. If the guest VCPU is in user mode, the guest can run its instructions natively on a physical CPU.

2. If the guest VCPU is in kernel mode, then the guest believes that it is running in kernel mode. The
VMM examines every instruction the guest executes in virtual kernel mode by reading the next few
instructions that the guest is going to execute, based on the guest’s program counter. Instructions other
than special instructions are run natively. Special instructions are translated into a new set of instructions
that perform the equivalent task—for example, changing the flags in the VCPU. Binary translation is
shown in Fig 5.2.

3. It is implemented by translation code within the VMM. The code reads native binary instructions
dynamically from the guest, on demand, and generates native binary code that executes in place of the
original code.

The basic method of binary translation just described would execute correctly but perform poorly.
Fortunately, the vast majority of instructions would execute natively.

● VMware tested the performance impact of binary translation by booting one such system,
Windows XP, and immediately shutting it down while monitoring the elapsed time and the
number of translations produced by the binary translation method.

● The result was 950,000 translations, taking 3 microseconds each, for a total increase of 3 seconds
(about 5%) over native execution of Windows XP. To achieve that result, developers used many
performance improvements that we do not discuss here. For more information, consult the
bibliographical notes at the end of this chapter.

Fig 5.2: Binary Translation
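The numbered steps above can be sketched as a translation loop (a hypothetical Python model; real binary translators work on machine code, operate on basic blocks, and cache the translated code):

```python
# Invented "special" instructions that behave differently in user versus
# kernel mode and therefore cannot run natively for a virtual kernel.
SPECIAL = {
    "popf": ["load_virtual_flags", "update_vcpu_flags"],
}

def translate_block(instrs, vcpu_mode):
    """Examine guest instructions; rewrite only the special ones."""
    if vcpu_mode == "user":
        return list(instrs)            # step 1: run natively, unchanged
    out = []
    for instr in instrs:               # step 2: inspect kernel-mode code
        # Ordinary instructions pass through; special instructions are
        # replaced by a sequence that performs the equivalent task on
        # the VCPU rather than the physical CPU.
        out.extend(SPECIAL.get(instr, [instr]))
    return out
```

For instance, `translate_block(["add", "popf"], "kernel")` rewrites only `popf`, leaving `add` to run natively.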

(iii) Hardware Assistance

Without some level of hardware support, virtualization would be impossible. The more hardware
support available within a system, the more feature-rich and stable the virtual machines can be and the
better they can perform. In the Intel x86 CPU family, Intel added new virtualization support in successive
generations (the VT-x instructions) beginning in 2005.

● Now, binary translation is no longer needed. In fact, all major general-purpose CPUs are
providing extended amounts of hardware support for virtualization. For example, AMD
virtualization technology (AMD-V) has appeared in several AMD processors starting in 2006.

● It defines two new modes of operation—host and guest—thus moving from a dual-mode to a
multimode processor. The VMM can enable host mode, define the characteristics of each guest
virtual machine, and then switch the system to guest mode, passing control of the system to a
guest operating system that is running in the virtual machine.

● In guest mode, the virtualized operating system thinks it is running on native hardware and
sees whatever devices are included in the host’s definition of the guest.

● The functionality in Intel VT-x is similar, providing root and nonroot modes, equivalent to host and guest modes. Both provide guest VCPU state data structures to load and save guest CPU state automatically during guest context switches.

● In addition, virtual machine control structures (VMCSs) are provided to manage guest and
host state, as well as the various guest execution controls, exit controls, and information about
why guests exit back to the host.

● In the latter case, for example, a nested page-table violation caused by an attempt to access
unavailable memory can result in the guest’s exit. AMD and Intel have also addressed memory
management in the virtual environment. With AMD’s RVI and Intel’s EPT memory management
enhancements, VMMs no longer need to implement software NPTs.

● In essence, these CPUs implement nested page tables in hardware to allow the VMM to fully
control paging while the CPUs accelerate the translation from virtual to physical addresses. The
NPTs add a new layer, one representing the guest’s view of logical-to-physical address
translation.

● The CPU page-table walking function includes this new layer as necessary, walking through
the guest table to the VMM table to find the physical address desired. A TLB miss results in a
performance penalty, because more tables must be traversed (the guest and host page tables) to
complete the lookup.

● This extra translation work, performed by the hardware to map a guest virtual address to a final physical address, is the cost of the added layer. I/O is another area improved by hardware assistance. Consider that the standard direct-memory-access (DMA) controller accepts a target memory address and a source I/O device and transfers data between the two without operating-system action.

● Without hardware assistance, a guest might try to set up a DMA transfer that affects the memory
of the VMM or other guests. In CPUs that provide hardware-assisted DMA (such as Intel CPUs
with VT-d), even DMA has a level of indirection.

● First, the VMM sets up protection domains to tell the CPU which physical memory belongs to
each guest. Next, it assigns the I/O devices to the protection domains, allowing them direct access
to those memory regions and only those regions.

● The hardware then transforms the address in a DMA request issued by an I/O device to the host
physical memory address associated with the I/O. In this manner DMA transfers are passed
through between a guest and a device without VMM interference. Similarly, interrupts must be
delivered to the appropriate guest and must not be visible to other guests.
● By providing an interrupt remapping feature, CPUs with virtualization hardware assistance
automatically deliver an interrupt destined for a guest to a core that is currently running a thread
of that guest.

● That way, the guest receives interrupts without the VMM’s needing to intercede in their
delivery. Without interrupt remapping, malicious guests can generate interrupts that can be used
to gain control of the host system. (See the bibliographical notes at the end of this chapter for
more details.)
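The nested page-table walk described above, from guest virtual address through the guest's table and then the VMM's table, can be sketched with dictionaries standing in for page tables (a hypothetical Python sketch; real walks traverse multi-level hardware tables):

```python
def translate(guest_virtual, guest_pt, host_pt):
    """Guest virtual -> guest physical -> host physical, or fault."""
    guest_physical = guest_pt.get(guest_virtual)     # guest's own mapping
    if guest_physical is None:
        raise LookupError("guest page fault")
    host_physical = host_pt.get(guest_physical)      # VMM's nested mapping
    if host_physical is None:
        # An unavailable mapping at this layer is what causes the
        # nested page-table violation and guest exit described above.
        raise LookupError("nested page-table violation: guest exit")
    return host_physical
```

Both lookups must succeed, which is why a TLB miss is more expensive in a virtualized environment: two tables, not one, must be traversed.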

4) Explain in detail about Types of Virtual Machines and Their Implementations. (April/May-2023
& Nov/Dec-2023)


Types of Virtual Machines and Their Implementations

We’ve now looked at some of the techniques used to implement virtualization. Next, we consider
the major types of virtual machines, their implementation, their functionality, and how they use the
building blocks just described to create a virtual environment.

Of course, the hardware on which the virtual machines are running can cause great variation in implementation methods. Here, we discuss the implementations in general, with the understanding that VMMs take advantage of hardware assistance where it is available, as shown in Fig 5.3.


Fig 5.3: VMM Machine

(i) The Virtual Machine Life Cycle

Whatever the hypervisor type, at the time a virtual machine is created, its creator gives the
VMM certain parameters. These parameters usually include the number of CPUs, amount of memory,
networking details, and storage details that the VMM will take into account when creating the guest.

For example, a user might want to create a new guest with two virtual CPUs, 4 GB of memory,
10 GB of disk space, one network interface that gets its IP address via DHCP, and access to the DVD
drive.

● The VMM then creates the virtual machine with those parameters. In the case of a type 0
hypervisor, the resources are usually dedicated. In this situation, if there are not two virtual CPUs
available and unallocated, the creation request in our example will fail. For other hypervisor
types, the resources are dedicated or virtualized, depending on the type.

● Certainly, an IP address cannot be shared, but the virtual CPUs are usually multiplexed on the
physical CPUs. Similarly, memory management usually involves allocating more memory to
guests than actually exists in physical memory.

● Finally, when the virtual machine is no longer needed, it can be deleted. When this happens, the
VMM first frees up any used disk space and then removes the configuration associated with the
virtual machine, essentially forgetting the virtual machine.

● These steps are quite simple compared with building, configuring, running, and removing
physical machines. Creating a virtual machine from an existing one can be as easy as clicking the
“clone” button and providing a new name and IP address.

● This ease of creation can lead to virtual machine sprawl, which occurs when there are so many
virtual machines on a system that their use, history, and state become confusing and difficult to
track.
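The creation and deletion steps above, where a type 0 hypervisor dedicates resources and fails the request outright if they are unavailable, might look like this (a hypothetical sketch; the parameter names are invented for illustration):

```python
class Type0Hypervisor:
    """Toy model of a hypervisor that dedicates, not multiplexes, resources."""

    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.guests = {}

    def create_vm(self, name, vcpus, memory_gb):
        # Resources are dedicated: reject the request if the partition
        # cannot be carved out of what remains unallocated.
        if vcpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("insufficient dedicated resources")
        self.free_cpus -= vcpus
        self.free_memory_gb -= memory_gb
        self.guests[name] = {"vcpus": vcpus, "memory_gb": memory_gb}

    def delete_vm(self, name):
        # Deleting a guest frees its resources and forgets its configuration.
        g = self.guests.pop(name)
        self.free_cpus += g["vcpus"]
        self.free_memory_gb += g["memory_gb"]
```

A type 1 or type 2 hypervisor would instead multiplex virtual CPUs on physical ones, so the `create_vm` check would not apply in the same way.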

(ii) Type 0 Hypervisor

Type 0 hypervisors have existed for many years under many names, including “partitions” and
“domains”. They are a hardware feature, and that brings its own positives and negatives. Operating
systems need do nothing special to take advantage of their features. The VMM itself is encoded in the
firmware and loaded at boot time. In turn, it loads the guest images to run in each partition.

The feature set of a type 0 hypervisor tends to be smaller than those of the other types because it
is implemented in hardware. For example, a system might be split into four virtual systems, each with
dedicated CPUs, memory, and I/O devices.

● Each guest believes that it has dedicated hardware because it does, simplifying many
implementation details. I/O presents some difficulty, because it is not easy to dedicate I/O devices
to guests if there are not enough.

● What if a system has two Ethernet ports and more than two guests, for example? Either all guests must get their own I/O devices, or the system must provide I/O device sharing. In these cases, the hypervisor manages shared access or grants all devices to a control partition.

● In the control partition, a guest operating system provides services (such as networking) via
daemons to other guests, and the hypervisor routes I/O requests appropriately.

● Some type 0 hypervisors are even more sophisticated and can move physical CPUs and memory between running guests. In these cases, the guests are paravirtualized, aware of the virtualization and assisting in its execution, as shown in Fig 5.4.


Fig 5.4: Type 0 Hypervisor

(iii)Type 1 Hypervisor

Type 1 hypervisors are commonly found in company data centers and are in a sense becoming
“the data-center operating system.” They are special-purpose operating systems that run natively on the
hardware, but rather than providing system calls and other interfaces for running programs, they create,
run, and manage guest operating systems.

● In addition to running on standard hardware, they can run on type 0 hypervisors, but not on other
type 1 hypervisors. Whatever the platform, guests generally do not know they are running on
anything but the native hardware.

● Type 1 hypervisors run in kernel mode, taking advantage of hardware protection. Where the host
CPU allows, they use multiple modes to give guest operating systems their own control and
improved performance. They implement device drivers for the hardware they run on, because no
other component could do so.

● Because they are operating systems, they must also provide CPU scheduling, memory
management, I/O management, protection, and even security. Frequently, they provide APIs, but
those APIs support applications in guests or external applications that supply features like
backups, monitoring, and security.

● The price of this increased manageability is the cost of the VMM (if it is a commercial product),
the need to learn new management tools and methods, and the increased complexity. Another
type of type 1 hypervisor includes various general-purpose operating systems with VMM
functionality.

(iv) Type 2 Hypervisor

Type 2 hypervisors are less interesting to us as operating-system explorers, because there is very little operating-system involvement in these application-level virtual machine managers.

● This type of VMM is simply another process run and managed by the host, and even the host does
not know virtualization is happening within the VMM. Type 2 hypervisors have limits not
associated with some of the other types.

● For example, a user needs administrative privileges to access many of the hardware assistance
features of modern CPUs. If the VMM is being run by a standard user without additional
privileges, the VMM cannot take advantage of these features.

● Due to this limitation, as well as the extra overhead of running a general-purpose operating
system as well as guest operating systems, type 2 hypervisors tend to have poorer overall
performance than type 0 or 1.

● As is often the case, the limitations of type 2 hypervisors also provide some benefits. They run on
a variety of general-purpose operating systems, and running them requires no changes to the host
operating system. A student can use a type 2 hypervisor, for example, to test a non-native
operating system without replacing the native operating system.
● In fact, on an Apple laptop, a student could have versions of Windows, Linux, Unix, and less
common operating systems all available for learning and experimentation.

(v) Paravirtualization

As we've seen, paravirtualization takes a different tack than the other types of virtualization. Rather than try to trick a guest operating system into believing it has a system to itself, paravirtualization presents the guest with a system that is similar but not identical to the guest's preferred system.

● The guest must be modified to run on the paravirtualized virtual hardware. The gain for this extra work is more efficient use of resources and a smaller virtualization layer. The Xen VMM, which is the leader in paravirtualization, has implemented several techniques to optimize the performance of guests as well as of the host system.
● For example, as we have seen, some VMMs present virtual devices to guests that appear to be real
devices. Instead of taking that approach, the Xen VMM presents clean and simple device
abstractions that allow efficient I/O, as well as good communication between the guest and the
VMM about device I/O. For each device used by each guest, there is a circular buffer shared by
the guest and the VMM via shared memory. Read and write data are placed in this buffer.

● For memory management, Xen does not implement nested page tables. Rather, each guest has its own set of page tables, set to read-only. Xen requires the guest to use a specific mechanism, a hypercall from the guest to the hypervisor VMM, when a page-table change is needed.


● This means that the guest operating system's kernel code must be changed from the default code to these Xen-specific methods. To optimize performance, Xen allows the guest to queue up multiple page-table changes asynchronously via hypercalls and then check to ensure that the changes are complete before continuing operation.

● Xen allowed virtualization of x86 CPUs without the use of binary translation, instead requiring
modifications in the guest operating systems like the one described above. Over time, Xen has
taken advantage of hardware features supporting virtualization.

● As a result, it no longer requires modified guests and essentially does not need the paravirtualization method. Paravirtualization is still used in other solutions, however, such as type 0 hypervisors, as shown in Fig 5.5.

Fig 5.5: Paravirtualization
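The shared circular buffer that Xen uses for device I/O, mentioned above, can be modeled as a simple producer/consumer ring (a hypothetical Python sketch; the real Xen rings live in shared memory pages and carry request/response descriptors rather than arbitrary items):

```python
class Ring:
    """Fixed-size circular buffer shared (conceptually) by guest and VMM."""

    def __init__(self, size):
        self.slots = [None] * size
        self.head = 0      # next slot the producer (guest) writes
        self.tail = 0      # next slot the consumer (VMM) reads

    def put(self, item):
        # One slot is always left empty so that full and empty
        # states can be told apart without extra bookkeeping.
        if (self.head + 1) % len(self.slots) == self.tail:
            return False   # ring full: producer must wait
        self.slots[self.head] = item
        self.head = (self.head + 1) % len(self.slots)
        return True

    def get(self):
        if self.tail == self.head:
            return None    # ring empty
        item = self.slots[self.tail]
        self.tail = (self.tail + 1) % len(self.slots)
        return item
```

Because both sides only advance their own index, guest and VMM can exchange I/O requests through the ring without locking each other out.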

(vi) Programming-Environment Virtualization

Another kind of virtualization, based on a different execution model, is the virtualization of programming environments. Here, a programming language is designed to run within a custom-built virtualized environment.

● For example, Oracle’s Java has many features that depend on its running in the Java virtual
machine (JVM), including specific methods for security and memory management. If we define
virtualization as including only duplication of hardware, this is not really virtualization at all.

● But we need not limit ourselves to that definition. Instead, we can define a virtual environment,
based on APIs, that provides a set of features that we want to have available for a particular
language and programs written in that language. Java programs run within the JVM environment,
and the JVM is compiled to be a native program on systems on which it runs.

● This arrangement means that Java programs are written once and then can run on any system
(including all of the major operating systems) on which a JVM is available. The same can be said
for interpreted languages, which run inside programs that read each instruction and interpret it
into native operations.
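
To make the "read each instruction and interpret it into native operations" idea concrete, here is a toy stack-machine interpreter. It is purely illustrative: the opcodes are invented and bear no relation to real JVM bytecode, but the execution loop is the same in spirit.

```python
# Illustrative stack-machine interpreter: a program that reads each
# instruction of a portable format and maps it to native operations.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# (2 + 3) * 4 expressed in the toy bytecode
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))  # 20
```

Any host that can run this loop can run any program written for the toy machine, which is exactly the "write once, run anywhere" property the JVM provides at much larger scale.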

(vii) Emulation

Virtualization is probably the most common method for running applications designed for one
operating system on a different operating system, but on the same CPU. This method works relatively
efficiently because the applications were compiled for the same instruction set as the target system uses.

● But what if an application or operating system needs to run on a different CPU? Here, it is
necessary to translate the entire source CPU’s instructions so that they are turned into the
equivalent instructions of the target CPU.

● Such an environment is no longer virtualized but rather is fully emulated. Emulation is useful
when the host system has one system architecture and the guest system was compiled for a
different architecture.

● For example, suppose a company has replaced its outdated computer system with a new system
but would like to continue to run certain important programs that were compiled for the old
system. The programs could be run in an emulator that translates each of the outdated system’s
instructions into the native instruction set of the new system.

● Emulation can increase the life of programs and allow us to explore old architectures without
having an actual old machine. As may be expected, the major challenge of emulation is
performance. Instruction-set emulation can run an order of magnitude slower than native
instructions.
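
As a sketch of the technique, the loop below decodes each instruction of an invented "old" instruction set and performs the equivalent operation on an emulated register file. The mnemonics and register names are made up for illustration; real emulators do this for every machine instruction, which is why emulation runs so much slower than native execution.

```python
# Toy instruction-set emulator: each source-CPU instruction is translated
# into equivalent host operations on an emulated register file.
def emulate(instructions):
    regs = {"r0": 0, "r1": 0}
    for op, dst, src in instructions:
        if op == "LOADI":          # load immediate value
            regs[dst] = src
        elif op == "ADD":          # dst := dst + src register
            regs[dst] += regs[src]
        elif op == "SUB":          # dst := dst - src register
            regs[dst] -= regs[src]
        else:
            raise ValueError(f"cannot emulate {op}")
    return regs

old_program = [("LOADI", "r0", 10), ("LOADI", "r1", 4), ("SUB", "r0", "r1")]
print(emulate(old_program)["r0"])   # 6
```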

(viii) Application Containment

The goal of virtualization in some instances is to provide a method to segregate applications, manage their performance and resource use, and create an easy way to start, stop, move, and manage them.

● In such cases, perhaps full-fledged virtualization is not needed. If the applications are all compiled
for the same operating system, then we do not need complete virtualization to provide these
features. We can instead use application containment, as shown in Fig 5.6.

Fig 5.6: Application Containment

● Consider one example of application containment. Starting with version 10, Oracle Solaris has
included containers, or zones, that create a virtual layer between the operating system and the
applications.

● In this system, only one kernel is installed, and the hardware is not virtualized. Rather, the
operating system and its devices are virtualized, providing processes within a zone with the
impression that they are the only processes on the system.

● One or more containers can be created, and each can have its own applications, network stacks,
network address and ports, user accounts, and so on. CPU and memory resources can be divided
among the zones and the system-wide processes.

● Each zone in fact can run its own scheduler to optimize the performance of its applications on the
allotted resources.

5) Explain in detail about Virtualization and Operating-System Components. (April/May-2023)

Thus far, we have explored the building blocks of virtualization and the various types of virtualization. In this section, we take a deeper dive into the operating-system aspects of virtualization, including how the VMM provides core operating-system functions like scheduling, I/O, and memory management.

Here, we answer questions such as these: How do VMMs schedule CPU use when guest operating
systems believe they have dedicated CPUs? How can memory management work when many guests
require large amounts of memory?

(i) CPU Scheduling

A system with virtualization, even a single-CPU system, frequently acts like a multiprocessor
system. The virtualization software presents one or more virtual CPUs to each of the virtual machines
running on the system and then schedules the use of the physical CPUs among the virtual machines.

● The significant variations among virtualization technologies make it difficult to summarize the
effect of virtualization on scheduling. First, let’s consider the general case of VMM scheduling.
The VMM has a number of physical CPUs available and a number of threads to run on those
CPUs.

● In this situation, the guests act much like native operating systems running on native CPUs. Of
course, in other situations, there may not be enough CPUs to go around. The VMM itself needs
some CPU cycles for guest management and I/O management and can steal cycles from the
guests by scheduling its threads across all of the system CPUs, but the impact of this action is
relatively minor.
More difficult is the case of overcommitment, in which the guests are configured for more CPUs
than exist in the system. Here, a VMM can use standard scheduling algorithms to make progress
on each thread but can also add a fairness aspect to those algorithms.

● For example, if there are six hardware CPUs and 12 guest-allocated CPUs, the VMM could
allocate CPU resources proportionally, giving each guest half of the CPU resources it believes it
has.
● The VMM can still present all 12 virtual CPUs to the guests, but in mapping them onto physical
CPUs, the VMM can use its scheduler to share them appropriately. Even given a scheduler that
provides fairness, any guest operating-system scheduling algorithm that assumes a certain amount
of progress in a given amount of time will be negatively affected by virtualization.
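
The example above (six hardware CPUs, twelve guest-allocated vCPUs) can be expressed as a small proportional-share calculation. This is a sketch of the fairness idea only, not any real VMM's scheduler, and the function name is invented.

```python
# Proportional-share sketch: when guests' vCPUs exceed physical CPUs,
# scale every guest's share so the physical capacity is divided fairly.
def cpu_shares(physical_cpus, guest_vcpus):
    """Map each guest's vCPU count to a fraction of real CPU capacity."""
    total_vcpus = sum(guest_vcpus.values())
    scale = min(1.0, physical_cpus / total_vcpus)   # no bonus when undercommitted
    return {guest: vcpus * scale for guest, vcpus in guest_vcpus.items()}

# Six hardware CPUs, twelve guest-allocated vCPUs: each guest receives
# half of the CPU power it believes it has.
shares = cpu_shares(6, {"guest_a": 4, "guest_b": 8})
print(shares)   # {'guest_a': 2.0, 'guest_b': 4.0}
```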

● Consider a timesharing operating system that tries to allot 100 milliseconds to each time slice to
give users a reasonable response time. Within a virtual machine, this operating system is at the
mercy of the virtualization system as to what CPU resources it actually receives.

(ii) Memory Management

Efficient memory use in general-purpose operating systems is one of the major keys to
performance. In virtualized environments, there are more users of memory (the guests and their
applications, as well as the VMM), leading to more pressure on memory use.

● Further adding to this pressure is that VMMs typically overcommit memory, so that the total
memory with which guests are configured exceeds the amount of memory that physically exists in
the system. The extra need for efficient memory use is not lost on the implementers of VMMs,
who take great measures to ensure the optimal use of memory.

● For example, VMware ESX uses at least three methods of memory management. Before memory
optimization can occur, the VMM must establish how much real memory each guest should use.
To do that, the VMM first evaluates the maximum memory size of each guest as dictated when it
is configured.

● General-purpose operating systems do not expect the amount of memory in the system to change,
so VMMs must maintain the illusion that the guest has that amount of memory. Next, the VMM
computes a target real memory allocation for each guest based on the configured memory for that
guest and other factors, such as overcommitment and system load.

● It then uses the three low-level mechanisms below to reclaim memory from the guests. The
overall effect is to enable guests to behave and perform as if they had the full amount of memory
requested although in reality they have less.

● Recall that a guest believes it controls memory allocation via its page table management, whereas
in reality the VMM maintains a nested page table that re-translates the guest page table to the real
page table. The VMM can use this extra level of indirection to optimize the guest’s use of
memory without the guest’s knowledge or help.
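
Under stated assumptions (made-up frame numbers, single-level tables rather than the multi-level tables real hardware walks), the extra level of indirection can be pictured as two lookups, with the second level visible only to the VMM:

```python
# Simplified two-level translation standing in for nested paging.
guest_page_table = {0: 5, 1: 9}      # guest-virtual page -> guest-physical page
nested_page_table = {5: 42, 9: 17}   # guest-physical page -> machine frame

def translate(vpage):
    gphys = guest_page_table[vpage]  # level the guest controls
    return nested_page_table[gphys]  # level only the VMM sees and can remap

print(translate(0), translate(1))   # 42 17

# The VMM can move guest-physical page 5 to a new machine frame without
# touching the guest's own table -- the guest never notices:
nested_page_table[5] = 99
print(translate(0))   # 99
```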

● One approach is to provide double paging, in which the VMM has its own page-replacement
algorithms and pages to backing-store pages that the guest believes are in physical memory. Of
course, the VMM knows less about the guest’s memory access patterns than the guest does,
so its paging is less efficient, creating performance problems.

● VMMs do use this method when other methods are not available or are not providing enough free
memory. However, it is not the preferred approach.

● A second method is memory ballooning: the VMM loads a balloon pseudo-device driver into each guest. When the VMM needs to reclaim memory, it directs the balloon driver to allocate and pin guest pages, which the VMM can then use elsewhere. Meanwhile, the guest uses its own memory management and paging algorithms to decide which pages to give up, which is the most efficient option. If memory pressure within the entire system decreases, the VMM will tell the balloon process within the guest to unpin and free some or all of the memory, allowing the guest more pages for its use.

Another common method for reducing memory pressure is for the VMM to determine if the same
page has been loaded more than once. If this is the case, the VMM reduces the number of copies of the
page to one and maps the other users of the page to that one copy.

● VMware ESX, for example, randomly samples guest memory and creates a hash for each page sampled. That
hash value is a “thumbprint” of the page. The hash of every page examined is compared with
other hashes already stored in a hash table. If there is a match, the pages are compared byte by
byte to see if they really are identical.

● If they are, one page is freed, and its logical address is mapped to the other’s physical address.
This technique might seem at first to be ineffective, but consider that guests run operating
systems. If multiple guests run the same operating system, then only one copy of the active
operating-system pages need be in memory.
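
The hash-then-verify sharing scheme can be sketched as follows; the function name and page layout are illustrative, not VMware's implementation. The hash serves only as a cheap "thumbprint" to find candidates, and the byte-by-byte comparison is what confirms a true duplicate.

```python
# Page deduplication sketch: hash sampled pages, then confirm matches
# byte by byte before mapping duplicates onto a single copy.
import hashlib

def deduplicate(pages):
    """pages: dict of page_id -> bytes. Returns page_id -> canonical page_id."""
    by_hash = {}
    mapping = {}
    for pid, data in pages.items():
        h = hashlib.sha256(data).hexdigest()   # "thumbprint" of the page
        for other in by_hash.get(h, []):
            if pages[other] == data:           # byte-by-byte confirmation
                mapping[pid] = other           # free this copy, remap to other
                break
        else:
            by_hash.setdefault(h, []).append(pid)
            mapping[pid] = pid                 # first copy stays resident
    return mapping

pages = {"a": b"kernel", "b": b"kernel", "c": b"data"}
print(deduplicate(pages))   # {'a': 'a', 'b': 'a', 'c': 'c'}
```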

(iii) Input / Output:

In the area of I/O, hypervisors have some leeway and can be less concerned with exactly
representing the underlying hardware to their guests. Because of all the variation in I/O devices,
operating systems are used to dealing with varying and flexible I/O mechanisms.

● For example, operating systems have a device-driver mechanism that provides a uniform interface
to the operating system whatever the I/O device. Device-driver interfaces are designed to allow
third-party hardware manufacturers to provide device drivers connecting their devices to the
operating system.

● In the area of networking, VMMs also have work to do. General-purpose operating systems
typically have one Internet protocol (IP) address, although they sometimes have more than one—
for example, to connect to a management network, backup network, and production network.
● With virtualization, each guest needs at least one IP address, because that is the guest’s main
mode of communication. Therefore, a server running a VMM may have dozens of addresses, and
the VMM acts as a virtual switch to route the network packets to the addressed guest. The guests
can be “directly” connected to the network by an IP address that is seen by the broader network
(this is known as bridging).

● Alternatively, the VMM can provide a network address translation (NAT) address. The NAT
address is local to the server on which the guest is running, and the VMM provides routing
between the broader network and the guest. The VMM also provides firewalling, moderating
connections between guests within the system and between guests and external systems.

(iv) Storage Management

An important question in determining how virtualization works is this: if multiple operating systems have been installed, what and where is the boot disk? Clearly, virtualized environments need to approach the area of storage management differently from native operating systems.

● Even the standard multiboot method of slicing the root disk into partitions, installing a boot manager in one partition, and installing each operating system in another partition is not sufficient, because partitioning has limits that would prevent it from working for tens or hundreds of virtual machines.

● Once again, the solution to this problem depends on the type of hypervisor. Type 0 hypervisors do
tend to allow root disk partitioning, partly because these systems tend to run fewer guests than
other systems.
● Alternatively, they may have a disk manager as part of the control partition, and that disk
manager provides disk space (including boot disks) to the other partitions.

● The guest then executes as usual, with the VMM translating the disk I/O requests coming from
the guest into file I/O commands to the correct files. Frequently, VMMs provide a mechanism to
capture a physical system as it is currently configured and convert it to a guest that the VMM can
manage and run.
● Based on the discussion above, it should be clear that this physical-to-virtual (P-to-V) conversion
reads the disk blocks of the physical system’s disks and stores them within files on the VMM’s
system or on shared storage that the VMM can access.

(v) Live Migration:

One feature not found in general-purpose operating systems but found in type 0 and type 1
hypervisors is the live migration of a running guest from one system to another. We mentioned this
capability earlier.

1. The source VMM establishes a connection with the target VMM and confirms that it is allowed to
send a guest.

2. The target creates a new guest by creating a new VCPU, new nested page table, and other state
storage.

3. The source sends all read-only memory pages to the target.

4. The source sends all read-write pages to the target, marking them as clean.

5. The source repeats step 4, as during that step some pages were probably modified by the guest and
are now dirty. These pages need to be sent again and marked again as clean.

6. When the cycle of steps 4 and 5 becomes very short, the source VMM freezes the guest, sends the
VCPU’s final state, sends other state details, sends the final dirty pages, and tells the target to start
running the guest. Once the target acknowledges that the guest is running, the source terminates the guest. The migration is shown in Fig 5.7.
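
Steps 3–6 amount to iterative pre-copy. The toy model below uses invented numbers and a simplistic shrinking "hot set" to show how each round resends only the pages dirtied during the previous round, until a brief stop-and-copy completes the move:

```python
# Toy model of iterative pre-copy live migration. All numbers are invented;
# hot_fraction is a crude stand-in for the guest dirtying a shrinking
# working set of pages during each copy round.
def migrate(total_pages, hot_fraction=0.25, threshold=4):
    sent = 0
    to_send = total_pages              # first round: every page
    rounds = 0
    while to_send > threshold:
        sent += to_send                # copy while the guest keeps running
        rounds += 1
        # pages the guest dirtied during this round must be sent again
        to_send = int(to_send * hot_fraction)
    sent += to_send                    # freeze guest: final dirty pages + state
    return rounds, sent

rounds, sent = migrate(256)
print(rounds, sent)   # 3 340  -- 256 + 64 + 16 pre-copied, final 4 sent frozen
```

Note that the guest is paused only for the last, small transfer, which is what keeps the observable downtime short.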


Fig 5.7: VMM Source

● Before virtualization, this did not happen, as the MAC address was tied to physical hardware.
With virtualization, the MAC must be movable for existing networking connections to continue
without resetting. Modern network switches understand this and route traffic wherever the MAC
address is, even accommodating a move. A limitation of live migration is that no disk state is
transferred.

6) Write a note on Mobile OS – iOS and Android of Architecture and SDK Framework.
(April/May-2021)(Nov/Dec-2023)
iPhone OS becomes iOS

 Prior to the release of the iPad in 2010, the operating system running on the iPhone was generally
referred to as iPhone OS. Unfortunately, iOS is also the name used by Cisco for the operating
system on its routers. When performing an internet search for iOS, therefore, be prepared to see
large numbers of results for Cisco iOS that have absolutely nothing to do with Apple iOS.

An Overview of the iOS 6 Architecture

 iOS consists of a number of different software layers, each of which provides programming
frameworks for the development of applications that run on top of the underlying hardware.
 Some diagrams designed to graphically depict the iOS software stack show an additional box
positioned above the Cocoa Touch layer to indicate the applications running on the device.
 In Fig 5.8 we have not done so, since this would suggest that the only interface available to the app is Cocoa Touch. In practice, an app can directly call down to any of the layers of the stack to perform tasks on the physical device.
 That said, however, each operating system layer provides an increasing level of abstraction away
from the complexity of working with the hardware.

 As an iOS developer you should, therefore, always look for solutions to your programming goals
in the frameworks located in the higher-level iOS layers before resorting to writing code that
reaches down to the lower-level layers.
 In general, the higher the layer you program to, the less effort and fewer lines of code you will have to write to achieve your objective. The layers of iOS are shown below in Fig 5.8.

Fig 5.8: iOS Architecture

The Cocoa Touch Layer

The Cocoa Touch layer sits at the top of the iOS stack and contains the frameworks that are most commonly used by iPhone application developers. Cocoa Touch is primarily written in Objective-C, is based on the standard Mac OS X Cocoa API (as found on Apple desktop and laptop computers), and has been extended and modified to meet the needs of the iPhone hardware.

The Cocoa Touch layer provides the following frameworks for iPhone app development:

UIKit Framework (UIKit.framework)

 The UIKit framework is a vast and feature rich Objective-C based programming interface.
It is, without question, the framework with which you will spend most of your time
working. Entire books could, and probably will, be written about the UIKit framework
alone.

Some of the key features of UIKit are as follows:

 User interface creation and management (text fields, buttons, labels, colors, fonts etc)
 Application lifecycle management
 Application event handling (e.g. touch screen user interaction)
 Multitasking
 Wireless Printing

 Data protection via encryption
 Cut, copy, and paste functionality
 Web and text content presentation and management
 Data handling
 Inter-application integration
 Push notification in conjunction with Push Notification Service
 Local notifications (a mechanism whereby an application running in the background can
gain the user’s attention)
 Accessibility
 Accelerometer, battery, proximity sensor, camera and photo library interaction
 Touch screen gesture recognition
 File sharing (the ability to make application files stored on the device available via
iTunes)
 Blue tooth-based peer to peer connectivity between devices
 Connection to external displays

Map Kit Framework (MapKit.framework)

The Map Kit framework provides a programming interface which enables you to build map-based
capabilities into your own applications. This allows you to, amongst other things, display
scrollable maps for any location, display the map corresponding to the current geographical
location of the device and annotate the map in a variety of ways.

Push Notification Service

 The Push Notification Service allows applications to notify users of an event even when the
application is not currently running on the device. Since the introduction of this service it has
most commonly been used by news based applications.

 Typically, when there is breaking news, the service will generate a message on the device with the
news headline and provide the user the option to load the corresponding news app to read more
details. This alert is typically accompanied by an audio alert and vibration of the device. This
feature should be used sparingly to avoid annoying the user with frequent interruptions.

Message UI Framework (MessageUI.framework)

 The Message UI framework provides everything you need to allow users to compose and send
email messages from within your application. In fact, the framework even provides the user
interface elements through which the user enters the email addressing information and message
content.
 Alternatively, this information may be pre-defined within your application and then displayed for the user to edit and approve prior to sending.

Game Kit Framework (GameKit.framework)

 The Game Kit framework provides peer-to-peer connectivity and voice communication between
multiple devices and users allowing those running the same app to interact.
 When this feature was first introduced it was anticipated by Apple that it would primarily be used
in multi-player games (hence the choice of name) but the possible applications for this feature
clearly extend far beyond games development.

iAd Framework (iAd.framework)

 The purpose of the iAd Framework is to allow developers to include banner advertising within their applications. All advertisements are served by Apple's own ad service.

Event Kit UI Framework (EventKitUI.framework)

 The Event Kit UI framework was introduced in iOS 4 and is provided to allow calendar and reminder events to be accessed and edited from within an application.

Accounts Framework (Accounts.framework)

 iOS 5 introduced the concept of system accounts. These essentially allow the account information for other services to be stored on the iOS device and accessed from within application code.

Social Framework (Social.framework)

 The Social Framework allows Twitter, Facebook and Sina Weibo integration to be added to applications. The framework operates in conjunction with the Accounts Framework to gain access to the user's social network account information.

7) Explain in detail about the iOS Media Layer.

The iOS Media Layer

 The role of the Media layer is to provide iOS with audio, video, animation and graphics
capabilities. As with the other layers comprising the iOS stack, the Media layer comprises a
number of frameworks which may be utilized when developing iPhone apps. In this section we
will look at each one in turn.

Core Video Framework (CoreVideo.framework)


 The Core Video Framework provides buffering support for the Core Media framework. Whilst
this may be utilized by application developers it is typically not necessary to use this framework.

Core Text Framework (CoreText.framework)


 The iOS Core Text framework is a C-based API designed to ease the handling of advanced text
layout and font rendering requirements.
Image I/O Framework (ImageIO.framework)

 The Image I/O framework, the purpose of which is to facilitate the importing and exporting of
image data and image metadata, was introduced in iOS 4. The framework supports a wide range
of image formats including PNG, JPEG, TIFF and GIF.

Assets Library Framework (AssetsLibrary.framework)

 The Assets Library provides a mechanism for locating and retrieving video and photo files
located on the iPhone device. In addition to accessing existing images and videos, this framework
also allows new photos and videos to be saved to the standard device photo album.

Core Graphics Framework (CoreGraphics.framework)

 The iOS Core Graphics Framework (otherwise known as the Quartz 2D API) provides a lightweight two-dimensional rendering engine. Features of this framework include PDF document creation and presentation, vector-based drawing, transparent layers, path-based drawing, anti-aliased rendering, color manipulation and management, image rendering, and gradients. Those familiar with the Quartz 2D API running on Mac OS X will be pleased to learn that the implementation of this API is the same on iOS.

Core Image Framework (CoreImage.framework)

 A new framework introduced with iOS 5 providing a set of video and image filtering and
manipulation capabilities for application developers.

Quartz Core Framework (QuartzCore.framework)

 The purpose of the Quartz Core framework is to provide animation capabilities on the iPhone. It
provides the foundation for the majority of the visual effects and animation used by the UIKit
framework and provides an Objective-C based programming interface for creation of specialized
animation within iPhone apps.

OpenGL ES framework (OpenGLES.framework)

 For many years the industry standard for high-performance 2D and 3D graphics drawing has been OpenGL. Originally developed by the now defunct Silicon Graphics, Inc. (SGI) during the 1990s in the form of GL, the open version of this technology (OpenGL) is now under the care of a non-profit consortium comprising a number of major companies including Apple, Inc., Intel, Motorola and ARM Holdings.

GLKit Framework (GLKit.framework)

The GLKit framework is an Objective-C based API designed to ease the task of creating OpenGL
ES based applications.

NewsstandKit Framework (NewsstandKit.framework)

The Newsstand application is a new feature of iOS 5 and is intended as a central location for users to
gain access to newspapers and magazines. The NewsstandKit framework allows for the development of
applications that utilize this new service.

iOS Audio Support


 iOS is capable of supporting audio in AAC, Apple Lossless (ALAC), A-law, IMA/ADPCM,
Linear PCM, µ-law, DVI/Intel IMA ADPCM, Microsoft GSM 6.10 and AES3-2003 formats
through the support provided by the following frameworks.

AV Foundation framework (AVFoundation.framework)

 An Objective-C based framework designed to allow the playback, recording and management of
audio content.

Core Audio Frameworks (CoreAudio.framework, AudioToolbox.framework and AudioUnit.framework)

 The frameworks that comprise Core Audio for iOS define supported audio types, playback and
recording of audio files and streams and also provide access to the device's built-in audio
processing units.

Open Audio Library (OpenAL)

 OpenAL is a cross platform technology used to provide high-quality, 3D audio effects (also
referred to as positional audio). Positional audio may be used in a variety of applications though is
typically used to provide sound effects in games.

Media Player Framework (MediaPlayer.framework)

 The iOS Media Player framework is able to play video in .mov, .mp4, .m4v, and .3gp formats at a
variety of compression standards, resolutions and frame rates.

Core Midi Framework (CoreMIDI.framework)

 Introduced in iOS 4, the Core MIDI framework provides an API for applications to interact with
MIDI-compliant devices such as synthesizers and keyboards via the iPhone's dock connector.

8) Explain in detail about the iOS Core Services Layer.

The iOS Core Services Layer

 The iOS Core Services layer provides much of the foundation on which the previously referenced
layers are built and consists of the following frameworks.

Address Book Framework (AddressBook.framework)

 The Address Book framework provides programmatic access to the iPhone Address Book contact
database allowing applications to retrieve and modify contact entries.

CFNetwork Framework (CFNetwork.framework)

 The CFNetwork framework provides a C-based interface to the TCP/IP networking protocol stack
and low level access to BSD sockets. This enables application code to be written that works with
HTTP, FTP and Domain Name servers and to establish secure and encrypted connections using
Secure Sockets Layer (SSL) or Transport Layer Security (TLS).

Core Data Framework (CoreData.framework)

 This framework is provided to ease the creation of data modeling and storage in Model-View-Controller (MVC) based applications. Use of the Core Data framework significantly reduces the
amount of code that needs to be written to perform common tasks when working with structured
data within an application.

Core Foundation Framework (CoreFoundation.framework)

 The Core Foundation framework is a C-based framework which provides basic functionality such as data types, string manipulation, raw block data management, URL manipulation, threads and run loops, dates and times, basic XML manipulation, and port and socket communication.
 Additional XML capabilities beyond those included with this framework are provided via the
libXML2 library. Though this is a C-based interface, most of the capabilities of the Core
Foundation framework are also available with Objective-C wrappers via the Foundation
Framework.

Core Media Framework (CoreMedia.framework)

 The Core Media framework is the lower level foundation upon which the AV Foundation layer is
built. Whilst most audio and video tasks can, and indeed should, be performed using the higher
level AV Foundation framework, access is also provided for situations where lower level control
is required by the iOS application developer.

Core Telephony Framework (CoreTelephony.framework)

 The iOS Core Telephony framework is provided to allow applications to interrogate the device
for information about the current cell phone service provider and to receive notification of
telephony related events.

EventKit Framework (EventKit.framework)

 An API designed to provide applications with access to the calendar, reminders and alarms on the
device.


Foundation Framework (Foundation.framework)

 The Foundation framework is the standard Objective-C framework that will be familiar to those
who have programmed in Objective-C on other platforms (most likely Mac OS X). Essentially,
this consists of Objective-C wrappers around much of the C-based Core Foundation Framework.

Core Location Framework (CoreLocation.framework)


 The Core Location framework allows you to obtain the current geographical location of the
device (latitude, longitude and altitude) and compass readings from with your own applications.
 The method used by the device to provide coordinates will depend on the data available at the
time the information is requested and the hardware support provided by the particular iPhone
model on which the app is running (GPS and compass are only featured on recent models). This
will either be based on GPS readings, Wi-Fi network data or cell tower triangulation (or some
combination of the three).

Mobile Core Services Framework (MobileCoreServices.framework)

 The iOS Mobile Core Services framework provides the foundation for Apple's Uniform Type
Identifiers (UTI) mechanism, a system for specifying and identifying data types.

 A vast range of predefined identifiers have been defined by Apple including such diverse data
types as text, RTF, HTML, JavaScript, PowerPoint .ppt files, PhotoShop images and MP3 files.

Store Kit Framework (StoreKit.framework)

 The purpose of the Store Kit framework is to facilitate commerce transactions between your
application and the Apple App Store. Prior to version 3.0 of iOS, it was only possible to charge a
customer for an app at the point that they purchased it from the App Store. iOS 3.0 introduced the
concept of the “in app purchase” whereby the user can be given the option to make additional
payments from within the application.

SQLite library

 Allows for a lightweight, SQL based database to be created and manipulated from within your
iPhone application.
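The library is normally driven from an app through its C API or a wrapper, but the SQL itself is the same everywhere. A minimal sketch of the kind of lightweight, in-process database work SQLite supports, written here with Python's built-in sqlite3 module; the table and rows are invented for illustration:

```python
import sqlite3

# SQLite is an in-process library: the "database" is just a file
# (or, as here, an in-memory store) -- no separate server is needed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO notes (body) VALUES (?)",
                 [("buy milk",), ("call home",)])
conn.commit()

rows = conn.execute("SELECT id, body FROM notes ORDER BY id").fetchall()
print(rows)  # [(1, 'buy milk'), (2, 'call home')]
conn.close()
```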

System Configuration Framework (SystemConfiguration.framework)

 The System Configuration framework allows applications to access the network configuration
settings of the device to establish information about the “reachability” of the device (for example
whether Wi-Fi or cell connectivity is active and whether and how traffic can be routed to a
server).


Quick Look Framework (QuickLook.framework)

 The Quick Look framework provides a useful mechanism for displaying previews of the contents
of file types loaded onto the device (typically via an internet or network connection) for which the
application does not already provide support. File format types supported by this framework
include iWork, Microsoft Office document, Rich Text Format, Adobe PDF, Image files,
public.text files and comma separated (CSV).

9) Explain about the iOS Core OS Layer.

The iOS Core OS Layer

 The Core OS Layer occupies the bottom position of the iOS stack and, as such, sits directly on top
of the device hardware. The layer provides a variety of services including low level networking,
access to external accessories and the usual fundamental operating system services such as
memory management, file system handling and threads.

Accelerate Framework (Accelerate.framework)

 The Accelerate Framework provides a hardware optimized C-based API for performing complex
and large number math, vector, digital signal processing (DSP) and image processing tasks and
calculations.

External Accessory Framework (ExternalAccessory.framework)

 Provides the ability to interrogate and communicate with external accessories connected
physically to the iPhone via the 30-pin dock connector or wirelessly via Bluetooth.

Security Framework (Security.framework)

 The iOS Security framework provides all the security interfaces you would expect to find on a
device that can connect to external networks including certificates, public and private keys, trust
policies, keychains, encryption, digests and Hash-based Message Authentication Code (HMAC).
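One of the primitives listed above, HMAC, combines a hash function with a secret key so that the receiver of a message can verify it was not altered in transit. The construction is a standard one; a sketch using Python's hmac module rather than the iOS Security framework API (the key and message are made-up values):

```python
import hmac
import hashlib

key = b"shared-secret"           # agreed between sender and receiver
message = b"transfer 100 units"  # data whose integrity we want to protect

# Sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time;
# a tampered message (or wrong key) yields a mismatch.
ok = hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).hexdigest())
bad = hmac.compare_digest(
    tag, hmac.new(key, b"transfer 999 units", hashlib.sha256).hexdigest())
print(ok, bad)  # True False
```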

System (LibSystem)

 As we have previously mentioned, iOS is built upon a UNIX-like foundation. The System
component of the Core OS Layer provides much the same functionality as any other UNIX like
operating system. This layer includes the operating system kernel (based on the Mach kernel
developed by Carnegie Mellon University) and device drivers.

 The kernel is the foundation on which the entire iOS platform is built and provides the low-level
interface to the underlying hardware. Amongst other things, the kernel is responsible for memory
allocation, process lifecycle management, input/output, inter-process communication, thread
management, low level networking and file system access.

 As an app developer your access to the System interfaces is restricted for security and stability
reasons. Those interfaces that are available to you are contained in a C-based library called
LibSystem. As with all other layers of the iOS stack, these interfaces should be used only when
you are absolutely certain there is no way to achieve the same objective using a framework
located in a higher iOS layer.

10) Write the features of mobile OS. (April/May-2019)

Mobile devices are becoming less about the hardware and more about the operating system (OS)
running atop it. Here are four of the most important aspects of a mobile operating system:

1. Speed:

Menus and buttons are about as vital to a mobile experience as what the user can do with them. It
is impossible to download apps and reach the very things a device is supposed to let us do if the settings
and options are as complicated as quantum physics. It's the difference between 'Open App' and 'Begin
Using Software Application'. It's also the difference between button shapes that are easily noticeable and
understandable and ones that are not.

2. Power to the User:

When it comes to our gadgets, there are few things we enjoy more than a breadth of options.
Whether it is changing colour schemes, setting the background image on the device, or deciding how the
device greets us when switched on, usually the more options the better.

The developer of an operating system may want a completely different design from what the user
wants, so while a simple, fixed interface may be fine for some, letting users bolt their own nuts, screws
and brackets onto the operating system (beyond just apps) can be a great thing.

3. Apps

Smartphones and tablets made such a big splash when they were first introduced to the market
partly because of how wondrous the devices' touchscreens were; the other reason for their success is
likely down to apps.

 Rather than having to load up the device's built-in web browser to fire up a website and access it
that way, apps make that far easier, often providing even more features than their browser
counterparts.

 In fact, apps are so advanced that you can even download a brand-new browser to access the
Internet on. The only problem is that not every app is available on every OS.

 As it stands, Apple's iOS has far more apps listed on its store, and some even have iOS exclusivity,
so if you want to get your hands on plenty of applications, that's where you should
look. So, while ‘quality over quantity’ is a good mantra to go by, the more apps available on an
operating system, the better.

4. Multi-Tasking

The hardware of a device determines how well and how fast each app or process on the device runs.
It is responsible for how rarely an app crashes and for keeping everything running as smoothly as
possible; and with your hardware up to the challenge of running all of these apps as they should, why
not take advantage and ask for more of it?

 Multi-tasking is mostly a new feature in terms of operating systems, with Apple's updated iOS
allowing for multiple apps at one time and with Android's latest Jelly Bean update also letting
users multi-task too.

 With devices more and more being used as extensions of offices and workspaces, it makes sense
for them to allow us to run as many things as we’d want from our laptops and with devices that do
that, it’s a wonder why we’d find much use from computers at all.

11) Explain about iOS SDK( Software Development Kit):

The iOS SDK (Software Development Kit) (formerly iPhone SDK) is a software development
kit developed by Apple Inc. The kit allows for the development of mobile apps on Apple's iOS operating
system.

 While originally developing iPhone prior to its unveiling in 2007, Apple's then-CEO Steve
Jobs did not intend to let third-party developers build native apps for iOS, instead directing them
to make web applications for the Safari web browser.

 However, backlash from developers prompted the company to reconsider, with Jobs announcing
in October 2007 that Apple would have a software development kit available for developers by
February 2008. The SDK was released on March 6, 2008.

 The SDK is a free download for users of Mac personal computers. It is not available for Microsoft
Windows PCs. The SDK contains sets giving developers access to various functions and services
of iOS devices, such as hardware and software attributes.

 It also contains an iPhone simulator to mimic the look and feel of the device on the computer
while developing. New versions of the SDK accompany new versions of iOS. In order to test
applications, get technical support, and distribute apps through App Store, developers are required
to subscribe to the Apple Developer Program.

 Combined with Xcode, the iOS SDK helps developers write iOS apps using officially supported
programming languages, including Swift and Objective-C. Other companies have also created
tools that allow for the development of native iOS apps using their respective programming
languages.

12) Explain about Mobile Operating Systems.

1. Android OS (Google Inc.)

The Android mobile operating system is Google's open and free software stack that includes an
operating system, middleware and also key applications for use on mobile devices, including
smartphones.
Updates for the open source Android mobile operating system have been developed under "dessert-
inspired" version names (Cupcake, Donut, Eclair, Gingerbread, Honeycomb, Ice Cream Sandwich) with
each new version arriving in alphabetical order with new enhancements and improvements.

2. Bada (Samsung Electronics)

Bada is a proprietary Samsung mobile OS that was first launched in 2010. The Samsung Wave
was the first smartphone to use this mobile OS. Bada provides mobile features such as multipoint-touch,
3D graphics and of course, application downloads and installation.

3. BlackBerry OS (Research In Motion)

The BlackBerry OS is a proprietary mobile operating system developed by Research In Motion for
use on the company’s popular BlackBerry handheld devices.

The BlackBerry platform is popular with corporate users as it offers synchronization with Microsoft
Exchange, Lotus Domino, Novell GroupWise email and other business software, when used with the
BlackBerry Enterprise Server.

4. iPhone OS / iOS (Apple)

Apple's iPhone OS was originally developed for use on its iPhone devices. Now, the mobile
operating system is referred to as iOS and is supported on a number of Apple devices including the
iPhone, iPad, iPad 2 and iPod Touch.

The iOS mobile operating system is available only on Apple's own manufactured devices as the
company does not license the OS for third-party hardware. Apple iOS is derived from Apple's Mac OS X
operating system.


5. MeeGo OS (Nokia and Intel)

A joint open source mobile operating system which is the result of merging two products based on
open source technologies: Maemo (Nokia) and Moblin (Intel). MeeGo is a mobile OS designed to work
on a number of devices including smartphones, netbooks, tablets, in-vehicle information systems and
various devices using Intel Atom and ARMv7 architectures.

6. Palm OS (Garnet OS)

The Palm OS is a proprietary mobile operating system (PDA operating system) that was originally
released in 1996 on the Pilot 1000 handheld.

Newer versions of the Palm OS have added support for expansion ports, new processors, external
memory cards, improved security and support for ARM processors and smartphones.

Palm OS 5 was extended to provide support for a broad range of screen resolutions, wireless connections
and enhanced multimedia capabilities and is called Garnet OS.

7. Symbian OS (Nokia)

Symbian is a mobile operating system (OS) targeted at mobile phones that offers a high-level of
integration with communication and personal information management (PIM) functionality. Symbian OS
combines middleware with wireless communications through an integrated mailbox and the integration
of Java and PIM functionality (agenda and contacts).

Nokia has made the Symbian platform available under an alternative, open and direct model, to
work with some OEMs and the small community of platform development collaborators. Nokia does not
maintain Symbian as an open source development project.

8. webOS (Palm/HP)

WebOS is a mobile operating system that runs on the Linux kernel. WebOS was initially
developed by Palm as the successor to its Palm OS mobile operating system. It is a proprietary Mobile
OS which was eventually acquired by HP and now referred to as webOS (lower-case w) in HP literature.
HP uses webOS in a number of devices including several smartphones and HP TouchPads. HP has
pushed its webOS into the enterprise mobile market by focusing on improving security features and
management with the release of webOS 3.x. HP has also announced plans for a version of webOS to run
within the Microsoft Windows operating system and to be installed on all HP desktop and notebook
computers in 2012.

9. Windows Mobile (Windows Phone)

Windows Mobile is Microsoft's mobile operating system used in smartphones and mobile devices –
with or without touchscreens. The Mobile OS is based on the Windows CE 5.2 kernel.

13) Compare the features of iOS and Android. (Nov/Dec-2019 & Nov/Dec-2023)

Comparing iOS with Android OS involves examining various aspects such as user experience,
design philosophy, customization, ecosystem, security, and availability across devices. Here's a
comparison:

User Experience and Design Philosophy:

iOS: Known for its simplicity, consistency, and intuitive design. Apple emphasizes a
controlled and uniform user experience across all iOS devices.
Android: Offers more flexibility and customization options. Different manufacturers
often implement their own user interfaces (UI), leading to a more diverse user experience
across devices.
Customization:
iOS: Limited customization options compared to Android. Users have control over
wallpapers, app arrangement, and some widget placement, but customization beyond that
is restricted.
Android: Highly customizable. Users can change themes, install custom launchers,
widgets, and even modify system-level settings with greater freedom.
Ecosystem:
iOS: Tightly integrated with Apple's ecosystem, including services like iCloud, iMessage,
FaceTime, and seamless integration with other Apple devices like Macs, iPads, and Apple
Watch.
Android: Offers a more open ecosystem with integration across various Google services
such as Gmail, Google Drive, and Google Photos. It also supports integration with other
platforms and devices.
Security:

iOS: Generally considered more secure due to Apple's strict control over the App Store
and the closed nature of the operating system. Regular security updates are pushed directly
by Apple.
Android: While Google has implemented various security measures over the years, the
open nature of Android can make it more susceptible to malware and security
vulnerabilities, particularly on devices not receiving regular updates.
Fragmentation:
iOS: Limited fragmentation due to Apple's control over both hardware and software. Most
iOS devices receive updates promptly, ensuring a consistent user experience across
devices.
Android: Fragmentation is a significant issue due to the vast array of devices running
different versions of Android and customized UIs by manufacturers. This can lead to
delays in software updates and inconsistent user experiences.
Device Availability:
iOS: Exclusive to Apple devices such as iPhone, iPad, and iPod Touch.
Android: Available on a wide range of devices from various manufacturers, including
smartphones, tablets, smartwatches, TVs, and even cars.

14) Describe four virtualization-like execution environments, and explain how they differ from
"true" virtualization. (April/May-2024)

Virtualization-like execution environments provide isolated environments for running
applications or systems, but they differ from "true" virtualization in various ways, particularly in how
they manage hardware resources and isolate execution environments. Here are four such environments:

Containers (e.g., Docker)

Description: Containers are lightweight execution environments that package applications and
their dependencies together.

Difference from True Virtualization: Unlike true virtualization (where each virtual machine
runs its own full OS), containers share the host OS kernel and do not require a hypervisor.

Paravirtualization (e.g., Xen)

Description: Paravirtualization is a type of virtualization where the guest OS is modified to be
aware of the hypervisor, allowing it to interact more efficiently with the underlying hardware. The
guest OS communicates directly with the hypervisor for some operations, reducing overhead.

Difference from True Virtualization: In "true" virtualization (hardware virtualization), the guest
OS doesn't need any modifications and is unaware of the hypervisor, relying on the hypervisor to
emulate hardware. Paravirtualization requires changes to the guest OS, so it's generally more
efficient but less flexible in terms of compatibility.

Emulation (e.g., QEMU)

Description: Emulation involves simulating an entire hardware environment in software,
enabling the execution of software written for one architecture on a different architecture (e.g.,
running ARM software on an x86 machine). Emulators can mimic hardware features like CPU,
memory, and I/O devices.

Difference from True Virtualization: Emulation emulates hardware at a low level, which
typically incurs higher performance overhead compared to true virtualization. In contrast, true
virtualization involves direct interaction with physical hardware via a hypervisor, often offering
more efficient execution.

User-mode Virtualization (e.g., Linux User Mode Linux)

Description: User-mode virtualization involves running an entire operating system within a
process in user space, without direct access to the hardware. The operating system runs as a
normal user process, and user-mode software simulates system calls and interactions with the
hardware.

Difference from True Virtualization: In user-mode virtualization, there is no direct hardware
access or kernel-based isolation. The guest OS operates within a single user-space process of the
host, making it less performant and more limited in terms of what it can do compared to true
virtualization, where each VM has full control over a virtualized hardware environment.

15) Why are VMMs unable to implement trap-and-emulate-based virtualization on some CPUs?
Lacking the ability to trap and emulate, what method can a VMM use to implement virtualization?
(April/May-2024)

Virtual Machine Monitors (VMMs) implement trap-and-emulate-based virtualization by trapping
certain instructions that the guest operating system attempts to execute and then emulating those
instructions in the hypervisor. Here's why this is not possible on some CPUs, and the alternative method
VMMs can use in those cases:

Reasons VMMs Can't Implement Trap-and-Emulate on Some CPUs:

Lack of Hardware Support for Privileged Instructions: Some CPUs don't have the ability to
trap certain privileged instructions (such as those that modify memory management, interrupt
handling, or hardware control registers) that are necessary for virtualization.

CPU Privilege Level Limitations: On some older or less advanced CPUs, the privilege levels
required for efficient trapping and emulating of system calls are not well-defined or insufficiently
supported.

Complexity of Emulation: Even if trapping works for certain privileged instructions, the process
of emulating the entire instruction set and managing the state changes during these traps can be
computationally expensive.

Non-Virtualizable Hardware: Some CPU architectures (e.g., older x86 processors before Intel
VT-x and AMD-V were introduced) weren't designed with virtualization in mind.

Alternative Method: Binary Translation

When trap-and-emulate is not feasible or efficient, VMMs can use binary translation (BT) as an
alternative method to implement virtualization. Here's how it works:

Dynamic Code Translation: The VMM translates guest instructions that would normally cause
traps into equivalent instructions that can be safely executed on the host hardware.

Instruction Replacement: Instead of trapping the instruction and emulating it, the VMM
replaces it with instructions that achieve the same outcome without causing conflicts with the
hypervisor.

Efficiency: Binary translation allows for more efficient execution in cases where trap-and-
emulate would incur high overhead, although it can still introduce performance costs, especially if
the translation process is frequent or the code is complex.
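The rewrite step can be pictured with a toy model (this is a sketch of the idea, not a real VMM): privileged guest "instructions" are translated into safe equivalents that update hypervisor-maintained shadow state instead of touching the hardware. Every instruction name below is invented for illustration:

```python
# Toy binary translation: rewrite privileged guest "instructions" into
# safe host operations before running them, instead of trapping each one.
PRIVILEGED = {
    "CLI": "vm_disable_interrupts",  # would mask hardware interrupts
    "STI": "vm_enable_interrupts",   # would unmask them
}

def translate(guest_code):
    """Replace privileged opcodes with hypervisor-safe equivalents."""
    return [PRIVILEGED.get(op, op) for op in guest_code]

def run(code, state):
    """Execute translated code against shadow state, never real hardware."""
    for op in code:
        if op == "vm_disable_interrupts":
            state["interrupts"] = False
        elif op == "vm_enable_interrupts":
            state["interrupts"] = True
        elif op == "NOP":
            pass
    return state

guest = ["CLI", "NOP", "STI"]
translated = translate(guest)
print(translated)
print(run(translated, {}))
```

A real VMM does this at the machine-code level on basic blocks of guest binary, caching translated blocks to amortize the translation cost.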

16) Describe the three types of traditional hypervisors. (April/May-2024)

Traditional hypervisors are software solutions that enable virtualization by creating and
managing virtual machines (VMs) on a host system. There are three main types of hypervisors, each with
its own method of interacting with the host system and virtualizing hardware. These types are:

Type 1 Hypervisor (Bare-Metal Hypervisor)

Description: A Type 1 hypervisor runs directly on the physical hardware of the host system,
without the need for an underlying operating system. It is the "bare-metal" hypervisor because it
interacts directly with the machine’s hardware, managing VMs that operate in isolation from one
another.

Key Characteristics:

No Host OS: The hypervisor is installed directly on the physical machine and is
responsible for managing the hardware, including CPU, memory, storage, and I/O devices.

High Performance: Because it runs directly on hardware and does not rely on a host
operating system, a Type 1 hypervisor often provides better performance and lower
overhead.

Greater Security: As there is no host OS, the attack surface is smaller, potentially
offering better security. VMs are more isolated from the host system.

Examples:

o VMware ESXi
o Microsoft Hyper-V (in some configurations)
o Xen
o KVM (Kernel-based Virtual Machine, when used with Linux kernel)

Type 2 Hypervisor (Hosted Hypervisor)

Description: A Type 2 hypervisor runs as an application or software on top of an existing host
operating system. It relies on the host OS to access hardware resources, making it less efficient
compared to a Type 1 hypervisor, but easier to install and use for general-purpose virtualization.

Key Characteristics:

Host OS Dependency: The hypervisor is a software application that operates within a
host operating system, meaning that it must go through the host OS to access system resources
like CPU, memory, and I/O.

Lower Performance: Since it depends on the host OS for access to hardware, a Type 2
hypervisor introduces additional overhead, which may reduce performance compared to
Type 1.

Ease of Use: Type 2 hypervisors are typically easier to install and configure, making them
more suitable for personal use, testing, or development environments.

Examples:

o VMware Workstation
o Oracle VirtualBox
o Parallels Desktop
o QEMU (when used in a hosted configuration)

Type 3 Hypervisor (Hybrid Hypervisor)

Description: A Type 3 hypervisor is less common and is typically a hybrid approach that
combines elements of both Type 1 and Type 2 hypervisors. It might run directly on the hardware
like a Type 1 hypervisor but still leverage some components of a host OS for managing certain
tasks or virtual machine management.

Key Characteristics:

Combination of Type 1 and Type 2: A Type 3 hypervisor may run some components directly
on the hardware, but still rely on certain host OS elements for specific functions (like device
drivers or user management). It often involves a modular architecture.

Use in Specialized or Proprietary Systems: Type 3 hypervisors are more specialized and
are often used in proprietary or niche systems where some elements of a host OS are
needed for compatibility or functionality.

Flexibility: Type 3 hypervisors are typically more flexible in terms of system integration
and can potentially combine the strengths of both Type 1 and Type 2 approaches.

Examples:

o Some implementations of Xen (when used with a hosted operating system)


o Certain proprietary hypervisor implementations used in specialized hardware

17) Discuss about the mobile operating system with suitable example. (April/May-2024)

A mobile operating system (OS) is software that manages a mobile device's hardware and
provides the necessary platform for running applications and services on that device. Mobile OSes are
designed to optimize performance, battery life, and connectivity in mobile environments, while also
supporting a wide range of sensors, touch interfaces, and wireless communication options.

Key Features of Mobile Operating Systems:

Touch Interface Support: Mobile OSes are typically optimized for touchscreens, supporting
multi-touch gestures, swipes, taps, and other touch-based interactions.

Battery Efficiency: Mobile OSes are designed to minimize power consumption, managing
resources like CPU, memory, and network usage to maximize battery life.

App Management: Mobile OSes include app stores or marketplaces for downloading and
updating apps, along with app management features like permissions and multitasking.

Connectivity: They provide robust support for wireless communication technologies, including
Wi-Fi, Bluetooth, cellular networks, NFC, and GPS.

Security: Mobile OSes implement features like encryption, app sandboxing, and biometric
authentication to ensure user data and privacy are protected.

Examples of Mobile Operating Systems:

Here are some of the most widely used mobile operating systems:

I. Android

Developer: Google

Market Share: Android is the most widely used mobile OS globally, with a market share that
exceeds 70%.

Key Features:

o Open-Source: Android is based on the Linux kernel and is open-source, meaning that
manufacturers and developers can modify and customize the OS for various devices.
o Google Play Store: The primary app marketplace, which provides a wide variety of apps
for users.
o Customizability: Android allows deep customization, such as changing the look and feel
of the OS, using third-party launchers, and modifying system settings.
o Multitasking: Android supports running multiple apps simultaneously with advanced
task-switching features.

Examples of Devices: Samsung Galaxy series, Google Pixel, Xiaomi, OnePlus, and many
more.

II. iOS

Developer: Apple

Market Share: iOS holds a significant share of the global mobile market, especially in high-end
smartphones, and is particularly popular in the United States and Europe.

Key Features:

o Closed Ecosystem: iOS is a proprietary OS developed by Apple, and it's designed
specifically for Apple devices like iPhones, iPads, and iPods.
o App Store: iOS apps are distributed through the Apple App Store, which has strict
guidelines for app submission and ensures high-quality standards.
o Smooth Integration with Apple Devices: iOS offers seamless integration with other
Apple products (e.g., macOS, Apple Watch, and iPads) through features like iCloud,
Continuity, and Handoff.
o Security and Privacy: iOS is known for its strong security model, including features like
App Sandboxing, biometric authentication (Face ID, Touch ID), and strong encryption.

Examples of Devices: iPhone, iPad, iPod Touch, and Apple Watch.

III. Windows Mobile (Legacy)

Developer: Microsoft

Market Share: Windows Mobile has largely been replaced by Windows Phone and is no longer
a significant player in the mobile OS market, as it was officially discontinued in 2017.

Key Features:

o Live Tiles: The Windows Mobile operating system was known for its unique "Live Tiles"
interface, where apps displayed real-time information on the home screen.
o Integration with Microsoft Services: It offered seamless integration with Microsoft
Office, OneDrive, and other Microsoft services, which were particularly attractive to
enterprise users.
o Continuum: In later versions, Windows Phone featured Continuum, allowing the OS to
be used as a desktop OS when connected to a monitor and keyboard.

Examples of Devices: Microsoft Lumia series, HTC Windows Phones.

IV. HarmonyOS

Developer: Huawei

Market Share: HarmonyOS is a relatively new mobile OS primarily designed for Huawei
smartphones and other IoT devices.

Key Features:

o Cross-Platform: HarmonyOS is designed to work across a wide range of devices,
including smartphones, tablets, wearables, smart TVs, and IoT gadgets, offering a unified
experience across devices.
o Open-Source: Similar to Android, HarmonyOS is based on a microkernel architecture
and is open-source, allowing customization and development by third parties.
o Integration with Huawei Ecosystem: HarmonyOS aims to create a cohesive ecosystem
of Huawei devices, offering features like Huawei Share for seamless connectivity between
Huawei devices.

Examples of Devices: Huawei Mate 40, Huawei P40, Honor devices, and more.

V. Ubuntu Touch

Developer: UBports (community-driven)

Market Share: Ubuntu Touch is a niche mobile OS aimed at providing a Linux-based
experience on mobile devices.

Key Features:

o Linux-Based: Ubuntu Touch is built on the Ubuntu Linux distribution and offers a
familiar experience to Linux desktop users.

o Open-Source: It is an open-source operating system, with a strong focus on privacy and
user control.
o Convergence: Ubuntu Touch includes the concept of convergence, meaning that it can
adapt to work both as a phone OS and as a desktop OS when connected to an external
monitor.

Examples of Devices: PinePhone, Fairphone, and some older Nexus devices (via community
ports).

VI. KaiOS

Developer: KaiOS Technologies

Market Share: KaiOS is a mobile OS used primarily on feature phones and is growing in
popularity, especially in emerging markets.

Key Features:

o Lightweight: Designed for low-powered feature phones, KaiOS is lightweight, providing
essential smartphone-like features without consuming significant resources.
o App Support: While not as extensive as Android or iOS, KaiOS supports apps like
WhatsApp, Facebook, YouTube, and Google Assistant, making it suitable for users in
emerging markets.
o Web-Based: KaiOS integrates a web-based platform, allowing for HTML5 apps and
lightweight applications to run on low-end devices.

Examples of Devices: JioPhone, Nokia 8110 4G, and others.

18.) Distinguish the various functional behaviors of IOS and Android with suitable examples.
(Nov/Dec-2024)
iOS and Android are the two dominant mobile operating systems, each with distinct
functional behaviors and design philosophies. Below is a comparison of the functional behaviors of iOS
and Android, along with suitable examples:

I. User Interface (UI) and Design

iOS:

o Consistency and Simplicity: iOS is known for its clean, minimalistic design with a focus
on consistency.
o Example: The Home Screen has a uniform grid of icons and a predictable, consistent
layout across devices.

Android:

o Customization and Flexibility: Android allows extensive customization of the UI. Users
can change the look and feel using widgets, themes, launchers, and icons. Android's
flexibility appeals to users who want to personalize their devices.
o Example: Users can install third-party launchers like Nova Launcher to completely
change the layout, animation, and icon set of their home screen.

II. App Stores and App Distribution

iOS:

o App Store Exclusivity: iOS apps are distributed through the Apple App Store. Apple has
a strict app review process to ensure that all apps meet quality and security standards.
Apps also must adhere to Apple's privacy policies.
o Example: To install an app, users must visit the App Store, search for the app, and
download it. Apps like WhatsApp, Instagram, and TikTok are distributed exclusively
through the App Store.

Android:

o Multiple App Stores and Sideloading: Android provides access to multiple app stores,
including the Google Play Store and third-party stores (e.g., the Amazon Appstore).
Android also allows sideloading of apps, meaning users can install APK files from
external sources.
o Example: Besides the Google Play Store, users can install the Amazon Appstore or
sideload apps via APKs, enabling more flexibility in app acquisition.

III. Customization and Control

iOS:

o Limited Customization: iOS is more restrictive when it comes to customization. Users
can change the wallpaper, rearrange apps, and enable/disable widgets on the home screen.
However, customization beyond these options is limited.
o Example: For years iOS did not allow users to change default apps for tasks like web
browsing (Safari) or messaging (iMessage); only iOS 14 and later let users choose a
different default browser or mail app.

Android:

o Extensive Customization: Android provides a high level of control over the system and
user interface. Users can change default apps, use third-party widgets, customize the home
screen, and modify system settings via developer options.
o Example: Android users can set Google Chrome as the default browser, WhatsApp as
the default messaging app, and install third-party apps like Tasker to automate tasks
based on conditions (e.g., location, time).

IV. System Updates and Software Lifecycle

iOS:

o Frequent and Unified Updates: iOS offers regular, unified system updates across all
compatible devices. Apple releases major updates (e.g., iOS 14, iOS 15) and security
patches to all supported devices at the same time.
o Example: iOS 15 introduced new features such as Focus mode, improved FaceTime, and
better privacy controls, and was rolled out to all compatible devices in one go.

Android:

o Fragmented Updates: Android updates are more fragmented because multiple device
manufacturers (e.g., Samsung, Google, OnePlus) provide their own software versions.
While Google Pixel devices receive quick updates, other manufacturers often have delays.
o Example: Samsung devices may receive updates later than Google Pixel devices, and
some older models may not receive major Android version upgrades (e.g., Android 12).

V. App Permissions and Privacy

iOS:

o Granular Privacy Controls: iOS emphasizes user privacy, offering granular permission
controls. Users are notified when an app accesses sensitive data (e.g., camera, microphone,
location), and they can control access on an app-by-app basis.
o Example: iOS 14 introduced App Tracking Transparency, where apps must ask for user
permission before tracking them across other apps or websites.

Android:

o Flexible Permission Model: Android allows users to manage permissions, but it's
generally less restrictive than iOS. Starting from Android 6.0 (Marshmallow), Android
introduced runtime permissions, giving users control over app-specific permissions.
o Example: In Android, users can go into Settings to control permissions such as location,
camera, and contacts for individual apps. Android 12 enhanced privacy features with
indicators showing when the camera or microphone is in use.
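The runtime-permission model described above can be sketched as a tiny simulation. This is illustrative Python only — the class and method names below are invented for teaching and are not part of any real Android or iOS API:

```python
# Toy model of a runtime-permission system (Android 6.0+ style).
# Illustrative only: names here are invented, not a real mobile API.

class PermissionManager:
    def __init__(self):
        # granted[app] is the set of permissions the user has approved
        self.granted = {}

    def request(self, app, permission, user_approves):
        """Simulate a runtime permission prompt shown to the user."""
        if user_approves:
            self.granted.setdefault(app, set()).add(permission)
        return user_approves

    def check(self, app, permission):
        # Apps must check before using a protected resource
        return permission in self.granted.get(app, set())

    def revoke(self, app, permission):
        # Users can withdraw a grant at any time from Settings
        self.granted.get(app, set()).discard(permission)

pm = PermissionManager()
pm.request("maps", "LOCATION", user_approves=True)
print(pm.check("maps", "LOCATION"))   # True
pm.revoke("maps", "LOCATION")
print(pm.check("maps", "LOCATION"))   # False
```

The key point the sketch captures is that permissions are granted per app, at run time, and can be revoked later — unlike the old install-time, all-or-nothing model.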

VI. Multitasking and App Management

iOS:

o App Switching and Limited Multitasking: iOS has a straightforward app-switching
model where users can quickly switch between apps using the home gesture or app
switcher. However, iOS does not support true "multi-window" multitasking (except on
iPad).

o Example: On iOS, users can quickly swipe up from the bottom of the screen to view
recently used apps and switch between them. iOS 14 introduced App Clips, allowing
users to use parts of an app without installing it fully.

Android:

o True Multitasking and Split-Screen: Android supports true multitasking, allowing users
to run two apps simultaneously in split-screen mode or use picture-in-picture (PiP) for
video playback.
o Example: On Android, users can open apps like Google Chrome and YouTube in split-
screen mode, allowing them to browse and watch videos simultaneously.

VII. Voice Assistants

iOS:

o Siri: Apple's voice assistant, Siri, is deeply integrated into the iOS ecosystem. It can
perform tasks like setting reminders, sending messages, playing music, and controlling
smart home devices.
o Example: Users can activate Siri by saying "Hey Siri" or holding the home button. Siri
can also be used for in-car integration (CarPlay).

Android:

o Google Assistant: Google Assistant is Android's voice assistant, providing more powerful
AI-driven features. It integrates with Google's vast search and cloud services and offers
multi-lingual support.
o Example: Google Assistant is available on most Android devices and can perform tasks
like searching the web, controlling smart devices, setting reminders, and even making
phone calls. It can also be used on Android TV and other devices in the Google
ecosystem.

VIII. App Ecosystem

iOS:

o App Store: iOS apps are available through the Apple App Store, and Apple maintains
tight control over app approval and content. This results in a curated, high-quality app
ecosystem, but it can limit app availability (especially apps that don’t meet Apple’s strict
guidelines).
o Example: Popular apps like Apple Music, Safari, and Pages are pre-installed, and all
third-party apps must pass Apple’s review process before being published.

Android:

o Google Play Store and Other Stores: Android allows multiple app stores and
sideloading, resulting in a more diverse ecosystem. While Google Play is the primary store,

users can also download apps from alternative sources, giving more freedom but less
control over app quality.
o Example: Apps like Google Maps, YouTube, and Spotify are available through the
Google Play Store, but users can also download apps from third-party stores like the
Amazon Appstore.

19.) Explain the concept of virtual machines with a suitable sketch. Also, bring out its benefits and
features. (Nov/Dec-2024)

Concept of Virtual Machines (VMs)

A Virtual Machine (VM) is a software-based emulation of a physical computer. It runs an
operating system (OS) and applications just like a physical machine, but it operates within an isolated
environment on top of an existing physical system.

How Virtual Machines Work

Hypervisor: The hypervisor is the layer between the physical hardware and the virtual machines.
It is responsible for allocating physical resources to each VM and ensuring they operate
independently.

o Type 1 Hypervisor (Bare-Metal): Runs directly on the host hardware (e.g., VMware
ESXi, Microsoft Hyper-V).
o Type 2 Hypervisor (Hosted): Runs on top of a host OS (e.g., VMware Workstation,
Oracle VirtualBox).

Guest OS: The operating system that runs inside the virtual machine. Each VM can have a
different guest OS (e.g., Windows, Linux, macOS) on the same host system.

VM: A virtual machine runs its own OS and applications, independently of other VMs, even
though they share the same physical resources.

Sketch of Virtual Machine Architecture:


+-----------------------------------------------+
| Host Physical Machine |
| +--------------------+--------------------+ |
| | Hypervisor (VMM) | Physical Hardware | |
| +--------------------+--------------------+ |
| |
| +-------------------------------------+ |
| | Virtual Machine 1 | |
| | +-----------------------------+ | |
| | | Guest OS (Windows/Linux) | | |
| | | Applications | | |
| | +-----------------------------+ | |
| +-------------------------------------+ |

| |
| +-------------------------------------+ |
| | Virtual Machine 2 | |
| | +-----------------------------+ | |
| | | Guest OS (Linux) | | |
| | | Applications | | |
| | +-----------------------------+ | |
| +-------------------------------------+ |
| |
| +-------------------------------------+ |
| | Virtual Machine 3 | |
| | +-----------------------------+ | |
| | | Guest OS (macOS) | | |
| | | Applications | | |
| | +-----------------------------+ | |
| +-------------------------------------+ |
+-----------------------------------------------+
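The layering in the sketch above can be mimicked by a toy "hypervisor" that partitions the host's physical RAM among guest VMs. This is an illustrative Python sketch, not real virtualization — real hypervisors virtualize CPU, memory, and I/O, and all names here are invented:

```python
# Toy "hypervisor" that partitions host RAM among guest VMs.
# Purely illustrative: a teaching model, not an actual VMM.

class Hypervisor:
    def __init__(self, total_ram_gb):
        self.total_ram_gb = total_ram_gb
        self.vms = {}          # VM name -> allocated RAM (GB)

    def free_ram(self):
        # RAM not yet handed out to any guest
        return self.total_ram_gb - sum(self.vms.values())

    def create_vm(self, name, ram_gb):
        # A VM can only be created from the remaining free pool
        if ram_gb > self.free_ram():
            raise MemoryError(f"not enough free RAM for {name}")
        self.vms[name] = ram_gb

    def destroy_vm(self, name):
        # Freed RAM returns to the pool for other guests
        self.vms.pop(name, None)

hv = Hypervisor(total_ram_gb=16)
hv.create_vm("web-server", 4)     # e.g. a Linux guest
hv.create_vm("db-server", 8)      # e.g. a Windows guest
print(hv.free_ram())              # 4
```

The model shows the hypervisor's core job from the diagram: sitting between the physical resources and the guests, granting each VM its own isolated slice of the hardware.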

Benefits of Virtual Machines:

Resource Isolation:

o Each VM operates independently of the others, even though they share the same physical
hardware. This isolation ensures that one VM’s failure or resource usage does not affect
other VMs.
o Example: Running a Linux VM and a Windows VM on the same machine without
conflicts.

Hardware Utilization:

o VMs allow better utilization of physical hardware by running multiple guest operating
systems simultaneously on a single machine. This maximizes the use of CPU, memory,
and storage resources.
o Example: On a high-performance server, you can run several VMs for different purposes
(web server, database server, etc.) instead of using separate physical machines.

Disaster Recovery and Backup:

o Virtual machines can be easily backed up, cloned, or restored. A VM can be snapshotted
at any point in time, creating a restore point for disaster recovery.
o Example: You can take a snapshot of a critical application VM before performing an
upgrade, allowing you to revert to the previous state if something goes wrong.

Cross-Platform Compatibility:

o VMs allow running different operating systems on the same physical machine. For
instance, you can run Windows on a host machine that uses Linux, or vice versa.

o Example: A user running macOS on their laptop can run a Windows VM for testing
software or using Windows-specific tools.

Security and Sandboxing:

o VMs offer a secure environment for running untrusted applications. If a VM is
compromised, it does not affect the host system or other VMs.
o Example: A developer can test potentially malicious software in a VM to analyze its
behavior without risking the integrity of their main system.

Testing and Development:

o VMs are ideal for software testing and development because they provide isolated
environments where developers can test applications on different OSes or configurations
without the need for multiple physical machines.
o Example: A developer can test their application on multiple versions of Windows or
different Linux distributions by running separate VMs.

Cost Savings:

o Virtualization allows businesses to reduce hardware costs by consolidating servers and
maximizing resource usage. Instead of purchasing multiple physical machines, a single
machine can host many VMs.
o Example: A data center can consolidate hundreds of servers into a smaller number of
physical machines, reducing power and cooling requirements.

Portability:

o VMs are portable because their state is encapsulated in a set of files. VMs can be easily
migrated from one host machine to another, allowing for flexible workload distribution.
o Example: Moving a VM from an old server to a new server is as simple as copying the
VM files and configuring the new host.

Features of Virtual Machines:

Independent Virtualized Hardware:

o VMs emulate physical hardware for the guest OS, providing virtualized CPU, memory,
storage, network interfaces, etc.
o Example: A VM might be allocated 2GB of RAM, 2 virtual CPUs, and 100GB of virtual
disk space, even if the physical host machine has more resources.

Snapshot and Cloning:

o VMs support snapshots (point-in-time copies) and cloning, which allow for easy recovery,
testing, and deployment.

o Example: A snapshot can capture the state of a VM before a system update, allowing the
user to revert to the previous state if needed.
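The snapshot idea can be illustrated with deep copies of a state object. This is illustrative Python only — a real snapshot captures disk, memory, and device state at the hypervisor level:

```python
import copy

# Illustrative only: a VM's "state" reduced to a dict so the
# snapshot/revert idea can be shown in a few lines.

class ToyVM:
    def __init__(self):
        self.state = {"os_version": "1.0", "files": ["app.conf"]}
        self.snapshots = []

    def take_snapshot(self):
        # Deep copy so later changes don't alter the saved state
        self.snapshots.append(copy.deepcopy(self.state))

    def revert(self):
        # Roll back to the most recent snapshot, if one exists
        if self.snapshots:
            self.state = self.snapshots.pop()

vm = ToyVM()
vm.take_snapshot()                 # checkpoint before an upgrade
vm.state["os_version"] = "2.0"     # risky upgrade
vm.revert()                        # roll back after a failure
print(vm.state["os_version"])      # 1.0
```

The deep copy is the essential detail: the snapshot must be independent of the live state, just as a real snapshot is unaffected by changes the VM makes afterwards.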

Resource Allocation:

o The hypervisor allocates resources such as CPU cores, memory, and disk space to each
VM, making efficient use of the physical hardware.
o Example: The hypervisor can allocate 4 GB of RAM to one VM and 8 GB to another
based on their needs.

Live Migration:

o Many virtualization platforms support live migration, where VMs can be moved between
physical machines without shutting them down. This is useful for load balancing and
maintenance without downtime.
o Example: A VM can be moved from one server to another in a cloud data center without
any interruption to the services it provides.

ANNA UNIVERSITY QUESTIONS

APRIL / MAY - 2023

PART – A

1. What is a Virtual Machine? (Q.NO:1)


2. Write a note on Android. (Q.NO: 25)

PART – B

1) Present an outline of the types of virtual machines and explain them in detail. (Q.NO:4)

2) Outline the operating system aspects of virtualization in the context of operating system
functions: scheduling, I/O, and memory management. (Q.NO: 6)

ANNA UNIVERSITY QUESTIONS

NOV /DEC-2023

PART – A

1. What are the benefits of virtual machines? (Q.NO:4)


2. List any two components that are unique for mobile OS. (Q.NO:26)

PART – B

1. Explain various types of virtual machines and their implementations in detail. (Q. No:4)

2. Explain the architecture of Android OS. (Q. No: 7)

3. Compare iOS with Android OS. (Q.NO:13)

ANNA UNIVERSITY QUESTIONS

APRIL / MAY-2024

PART – A

1. What is paravirtualization? (Q.NO:10)

2. What is the major design goal for the android platform? (Q.NO:27)

PART – B

1. Describe four virtualization-like execution environments, and explain how they differ from "true"
virtualization. (Q.NO:14)

2. Why are VMMs unable to implement trap-and-emulate-based virtualization on some CPUs?


Lacking the ability to trap and emulate, what method can a VMM use to implement
virtualization? (Q.NO:15)

3. Describe the three types of traditional hypervisors. (Q.NO:16)

4. Discuss about the mobile operating system with suitable example. (Q.NO:17)

ANNA UNIVERSITY QUESTIONS

NOV /DEC-2024

PART – A

1. Define virtualization. (Q.NO:2)


2. State the merits of Android OS. (Q.NO:28)

PART – B

1. Distinguish the various functional behaviors of IOS and Android with suitable examples.
(Q.NO:18)

2. Explain the concept of virtual machines with a suitable sketch. Also, bring out its benefits and
features. (Q.NO:19)
