
Operating System Zahid

This document contains notes on operating system concepts including processes, threads, process scheduling, interprocess communication, and client-server systems. It defines key terms like process, context switching, and scheduling queues. Process states and operations are described, as well as advantages of cooperating processes through information sharing, speed, and modularity. Methods of interprocess communication include message passing, naming techniques, synchronization, and buffering approaches. The document also briefly discusses threads, multithreading benefits, and communication in client-server systems using sockets and remote procedure calls.

2013

Operating System
Key Notes
Prepared by Zahid Javed
Operating System notes and all important questions, unit wise

Zahid Javed
[email protected]
8/14/2013

KEY NOTES
UNIT I - PROCESSES AND THREADS
1. Introduction to Operating System
 A program that acts as an intermediary between a user of a computer and the
computer hardware
 Operating system goals:
 Execute user programs and make solving user problems easier
 Make the computer system convenient to use
 Use the computer hardware in an efficient manner
2. Review of computer organization
 Computer System Components
 Hardware – provides basic computing resources (CPU, memory, I/O
devices)
 Operating system – controls and coordinates the use of the hardware
among the various application programs for the various users
 Applications programs – define the ways in which the system resources
are used to solve the computing problems of the users (compilers, database
systems, video games, business programs)
 Users (people, machines, other computers).
3. Operating system structures
 System components
 Process management
A process is a program in execution: (A program is passive, a process
active.)
A process has resources (CPU time, files) and attributes that must be
managed.
Management of processes includes:
 Process Scheduling (priority, time management, . . . )
 Creation/termination
 Block/Unblock (suspension/resumption )

 Synchronization
 Communication
 Deadlock handling
 Main-memory management
 Allocation/de-allocation for processes, files, I/O.
 Maintenance of several processes at a time
 Keep track of who's using what memory
 Movement of process memory to/from secondary storage
 File management Keep track of what's on secondary storage
 file == logical entity,
 disk == physical entity
 Map logical file locations onto physical disk locations.
 May involve management of a file structure ( directory hierarchy )
 Does file/directory creation/deletion
 File manipulation ( rename, move, append )
 File backup
 I/O system management
 Buffer caching system
 Generic device driver code
 Drivers for each device - translate read/write requests into disk
position commands.
 Secondary storage management
 Disks, tapes, optical, ...
 Free space management ( paging/swapping )
 Storage allocation ( what data goes where on disk )
 Disk scheduling
 Networking
 Protection system
 Command-interpreter system

 Operating system services
 Program execution – system capability to load a program into memory and
to run it
 I/O operations – since user programs cannot execute I/O operations
directly, the operating system must provide some means to perform I/O
 File-system manipulation – program capability to read, write, create, and
delete files
 Communications – exchange of information between processes executing
either on the same computer or on different systems tied together by a
network. Implemented via shared memory or message passing
 Error detection – ensure correct computing by detecting errors in the CPU
and memory hardware, in I/O devices, or in user programs
 Resource allocation - When multiple users or multiple jobs are running
concurrently, resources must be allocated to each of them
 Many types of resources - Some (such as CPU cycles, main
memory and file storage) may have special allocation code,
others (such as I/O devices) may have general request and
release code
 Accounting - To keep track of which users use how much and what kinds
of computer resources
 Protection and security - The owners of information stored in a multiuser
or networked computer system may want to control use of that
information; concurrent processes should not interfere with each other
4. System calls
 System calls provide the interface between a process and the operating system
 Categories
 Process control
 File management
 Device management
 Information maintenance

 Communications

5. System programs
 System programs provide a convenient environment for program development
and execution.
 They can be divided into:
 File manipulation
 Status information
 File modification
 Programming language support
 Program loading and execution
 Communications
 Application programs
6. System structure
 Simple structure
Ex: MS-DOS
 Layered structure
Ex: UNIX, OS/2
 Microkernels
Ex: QNX
7. Virtual Machines
 Implementation
 Benefits
8. Process Concept
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 The terms job and process are used almost interchangeably.
 Process – a program in execution; process execution must progress in sequential
fashion.

 A process includes:
 program counter
 stack
 data section
 Process State
 As a process executes, it changes state
 new: The process is being created.
 running: Instructions are being executed.
 waiting: The process is waiting for some event to occur.
 ready: The process is waiting to be assigned to a processor.
 terminated: The process has finished execution.
9. Process Scheduling
 Scheduling queues
 Job queue – set of all processes in the system.
 Ready queue – set of all processes residing in main memory, ready and
waiting to execute.
 Device queue – set of processes waiting for an I/O device.
 Schedulers
 Long-term scheduler (or job scheduler) – selects which processes should be
brought into the ready queue.
 Short-term scheduler (or CPU scheduler) – selects which process should be
executed next and allocates CPU.
 Context Switch
 When CPU switches to another process, the system must save the state of the
old process and load the saved state for the new process.
 Context-switch time is overhead; the system does no useful work while
switching.

 Time dependent on hardware support.


10. Operations on processes
 Process Creation
 Process termination
11. Cooperating Processes
 Independent process cannot affect or be affected by the execution of another
process.
 Cooperating process can affect or be affected by the execution of another process
 Advantages of process cooperation
 Information sharing
 Computation speed-up
 Modularity
 Convenience
12. Interprocess Communication
 Message passing system
 Naming
 Direct communication
 Indirect communication
 Synchronization
 Blocking send
 Blocking receive
 Nonblocking send
 Nonblocking receive
 Buffering
 Zero capacity – queue holds 0 messages; the sender must wait for the receiver (rendezvous)
 Bounded capacity – finite length of n messages; the sender must wait if the link is full
 Unbounded capacity – infinite length; the sender never waits
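The buffering schemes above can be sketched with Python's standard queue module (an illustration, not part of the notes; the item values are assumptions). A bounded queue blocks the sender when the link is full, exactly as in the bounded-capacity case:

```python
import queue
import threading

def producer(q, items):
    for item in items:
        q.put(item)              # blocking send: waits while the link is full

def consume_all(q, n):
    # blocking receive: get() waits until a message arrives
    return [q.get() for _ in range(n)]

# Bounded capacity: a link holding at most 2 messages at a time.
link = queue.Queue(maxsize=2)
t = threading.Thread(target=producer, args=(link, [1, 2, 3, 4]))
t.start()
received = consume_all(link, 4)
t.join()
```

With maxsize=2 the producer blocks after two sends until the consumer drains a message; an unbounded link (maxsize=0 in this module) would let the sender run ahead without ever waiting.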
13. Communication in client-server systems
 Communication using sockets

 Communication using remote procedure calls


 Communication using remote method invocation
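Communication over sockets can be sketched in Python (a hypothetical echo example; the loopback address, port choice, and message contents are assumptions). A server thread accepts one connection and echoes the client's message back with a prefix:

```python
import socket
import threading

def server(listener, prefix):
    conn, _ = listener.accept()
    data = conn.recv(1024)          # blocking receive over the socket
    conn.sendall(prefix + data)     # reply to the client
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener, b"echo:"))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
response = client.recv(1024)
client.close()
t.join()
listener.close()
```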

14. Threads
 Benefits of multithreading
 Responsiveness
 Resource sharing
 Economy
 Utilization of multiprocessor architectures
 Multithreading
 Many-to-one
 One-to-one
 Many-to-many
 Threading issues
 Semantics of fork() and exec() system calls.
 Thread cancellation.
 Signal handling
 Thread pools
 Thread specific data
 Pthreads - a POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
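Pthreads itself is a C API; the same create/start/join pattern can be sketched with Python's threading module (the worker function and the shared results dict are illustrative assumptions):

```python
import threading

results = {}

def worker(tid):
    # each thread computes independently while sharing the results
    # dict with its siblings (resource sharing)
    results[tid] = tid * tid

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()    # analogous to pthread_join(): wait for the thread to finish
```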

UNIT II - PROCESS SCHEDULING AND SYNCHRONIZATION


1. CPU scheduling
 Selects from among the processes in memory that are ready to execute, and
allocates the CPU to one of them
 CPU scheduling decisions may take place when a process:
 Switches from running to waiting state

 Switches from running to ready state


 Switches from waiting to ready
 Terminates

2. Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted until
the first response is produced, not output (for time-sharing environment)
3. Scheduling algorithms
 FCFS scheduling
 SJF scheduling
 Priority scheduling
 Round robin scheduling
 Multilevel queue scheduling
 Multilevel feedback queue scheduling
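FCFS waiting times follow directly from the definition above, and running the same calculation on the bursts sorted by length gives SJF; a sketch (the burst values are an assumed example, not from these notes):

```python
def waiting_times(burst_times):
    """FCFS: each process waits for the bursts of all processes ahead of it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return waits

def avg_wait(burst_times):
    waits = waiting_times(burst_times)
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                  # arrival order P1, P2, P3
fcfs_avg = avg_wait(bursts)          # 17.0
sjf_avg = avg_wait(sorted(bursts))   # 3.0 -- shortest job first minimizes waiting
```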
4. Multiple processor scheduling
5. Real-time scheduling
 Hard real-time systems
 Soft real time systems
6. Algorithm Evaluation
7. Process synchronization
8. The Critical-Section Problem
 n processes all competing to use some shared data
 Each process has a code segment, called critical section, in which the shared data
is accessed.

 Problem – ensure that when one process is executing in its critical section, no
other process is allowed to execute in its critical section.
1. Synchronization hardware

10. Semaphores
 Synchronization tool that does not require busy waiting.
 Semaphore S – integer variable
 can only be accessed via two indivisible (atomic) operations
wait (S):
    while S ≤ 0 do no-op;
    S--;
signal (S):
    S++;
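The wait/signal pair maps directly onto a counting semaphore in practice; a sketch using Python's threading.Semaphore (the shared list and thread count are assumptions for illustration):

```python
import threading

sem = threading.Semaphore(1)   # counting semaphore initialized to 1 (binary use)
shared = []

def critical(value):
    sem.acquire()              # wait(S)
    shared.append(value)       # critical section: touch the shared data
    sem.release()              # signal(S)

threads = [threading.Thread(target=critical, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```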
11. Classical Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
12. Critical regions
 High-level synchronization construct
 A shared variable v of type T, is declared as:
v: shared T
 Variable v accessed only inside statement
region v when B do S
where B is a boolean expression.
 While statement S is being executed, no other process can access variable v.
13. Monitors
 High-level synchronization construct that allows the safe sharing of an abstract data
type among concurrent processes.

monitor monitor-name
{
shared variable declarations
procedure body P1 (…) {
...
}
procedure body P2 (…) {
...
}
procedure body Pn (…) {
...
}
{
initialization code
}
}
14. Deadlocks
 A set of blocked processes each holding a resource and waiting to acquire a resource
held by another process in the set.
 Example
 System has 2 tape drives.
 P1 and P2 each hold one tape drive and each needs another one.
15. System model
 Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
 Each resource type Ri has Wi instances.
 Each process utilizes a resource as follows:
 request
 use
 release

16. Deadlock characterization


 Mutual exclusion: only one process at a time can use a resource.
 Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes.
 No preemption: a resource can be released only voluntarily by the process holding it,
after that process has completed its task.
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by
P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
17. Methods for handling deadlocks
 Ensure that the system will never enter a deadlock state.
 Allow the system to enter a deadlock state and then recover.
 Ignore the problem and pretend that deadlocks never occur in the system; used by most
operating systems, including UNIX.
18. Deadlock prevention
 Mutual Exclusion – not required for sharable resources; must hold for nonsharable
resources.
 Hold and Wait – must guarantee that whenever a process requests a resource, it does not
hold any other resources.
 Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none.
 Low resource utilization; starvation possible
 No Preemption –
 If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are released.
 Preempted resources are added to the list of resources for which the process is
waiting.
 Process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting.

 Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration.
19. Deadlock avoidance
 Simplest and most useful model requires that each process declare the maximum number
of resources of each type that it may need.
 The deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that there can never be a circular-wait condition.
 Resource-allocation state is defined by the number of available and allocated resources,
and the maximum demands of the processes
 Safe state
 Resource allocation graph algorithm
 Banker’s algorithm
 Safety algorithm
 Resource-request algorithm
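The safety algorithm at the heart of the Banker's algorithm can be sketched as follows (a simplified illustration; the matrices used to exercise it are a textbook-style example, not data from these notes):

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: try to find an order in which every
    process can finish with the resources currently available."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # process i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)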
20. Deadlock detection
 Allow system to enter deadlock state
 Detection algorithm
 Recovery scheme
 Single Instance of Each Resource Type
 Several Instances of a Resource Type
21. Recovery from deadlock
 Process termination
 Abort all deadlocked processes
 Abort one process at a time until the deadlock cycle is eliminated
 Resource preemption
 Selecting a victim
 Rollback
 Starvation

UNIT III - STORAGE MANAGEMENT



1. Memory Management: Background

 Program must be brought into memory and placed within a process for it to be run.
 Input queue – collection of processes on the disk that are waiting to be brought into
memory to run the program.
 Address binding- Binding of Instructions and Data to Memory
 Compile time
 Load time
 Execution time
 Logical versus physical address space – The concept of a logical address space that is
bound to a separate physical address space is central to proper memory management.
 Logical address – generated by the CPU; also referred to as virtual address.
 Physical address – address seen by the memory unit.
 Dynamic loading - Routine is not loaded until it is called
 Better memory-space utilization; unused routine is never loaded
 Useful when large amounts of code are needed to handle infrequently occurring
cases.
 No special support from the operating system is required; it is implemented
through program design.
 Dynamic linking and shared libraries - Linking postponed until execution time.
 Small piece of code, stub, used to locate the appropriate memory-resident library
routine
 Stub replaces itself with the address of the routine, and executes the routine.
 Operating system support is needed to check whether the routine is in the
process's memory address space.
 Dynamic linking is particularly useful for libraries.
 Overlays - Keep in memory only those instructions and data that are needed at any given
time.
 Needed when process is larger than amount of memory allocated to it.
 Implemented by user, no special support needed from operating system,
programming design of overlay structure is complex

2. Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back
into memory for continued execution.
3. Contiguous Allocation
 Main memory divided into two partitions:
 Resident operating system, usually held in low memory with interrupt vector.
 User processes then held in high memory.
 Single partition allocation
 Multiple partition allocation
 Memory protection
 Memory allocation
 First fit
 Best fit
 Worst fit
 Fragmentation
 External Fragmentation – total memory space exists to satisfy a request, but it is
not contiguous.
 Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition, but not
being used.
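The first-fit, best-fit, and worst-fit strategies above can be sketched over a list of hole sizes (the hole sizes and the 212 KB request are an assumed example):

```python
def first_fit(holes, size):
    # allocate from the first hole large enough
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # allocate from the smallest hole that is still large enough
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    # allocate from the largest hole
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB
```

For a 212 KB request, first fit takes the 500 KB hole, best fit the 300 KB hole, and worst fit the 600 KB hole.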
4. Paging
 Logical address space of a process can be noncontiguous; process is allocated physical
memory whenever the latter is available.
 Divide physical memory into fixed-sized blocks called frames (size is power of 2,
between 512 bytes and 8192 bytes).
 Divide logical memory into blocks of same size called pages.
 Address Translation Scheme
Address generated by CPU is divided into:
 Page number (p) – used as an index into a page table which contains the base
address of each page in physical memory.
 Page offset (d) – combined with the base address to define the physical memory
address that is sent to the memory unit.
 Implementation of Page Table
 Page table is kept in main memory.
 Page-table base register (PTBR) points to the page table.
 Page-table length register (PTLR) indicates size of the page table.
 Page Table Structure
 Hierarchical Page tables
 Hashed Page Tables
 Inverted Page Tables
 Shared Pages
Shared code
 One copy of read-only (reentrant) code shared among processes (i.e., text editors,
compilers, window systems).
 Shared code must appear in same location in the logical address space of all
processes.
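The address-translation scheme above (page number p indexing the page table, offset d added to the frame base) is just integer division by the page size; a sketch (the 4 KB page size and the toy page table are assumptions):

```python
PAGE_SIZE = 4096          # assume 4 KB pages (a power of 2)

def translate(logical_address, page_table):
    p = logical_address // PAGE_SIZE   # page number: index into the page table
    d = logical_address % PAGE_SIZE    # page offset within the page
    frame = page_table[p]              # frame base found via the page table
    return frame * PAGE_SIZE + d       # physical address sent to the memory unit

page_table = {0: 5, 1: 2}              # page -> frame, a toy mapping
```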
5. Segmentation
 Memory-management scheme that supports user view of memory.
 A program is a collection of segments.
 A segment is a logical unit such as:
 main program,
 procedure,
 function,
 method,
 object,
 local variables, global variables,
 common block,
 stack,
 symbol table, arrays

 Logical address consists of a two-tuple:
<segment-number, offset>
 Segment table – maps two-dimensional user addresses into one-dimensional
physical addresses; each table entry has:
 base – contains the starting physical address where the segments reside in
memory.
 limit – specifies the length of the segment.
 Segment-table base register (STBR) points to the segment table’s location in memory.
 Segment-table length register (STLR) indicates number of segments used by a program.
6. Segmentation with paging
 Local descriptor table (LDT)
 Global descriptor table (GDT)
 Example: Intel 80386
7. Virtual Memory Background – separation of user logical memory from physical memory.
 Only part of the program needs to be in memory for execution.
 Logical address space can therefore be much larger than physical address space.
 Allows address spaces to be shared by several processes.
 Allows for more efficient process creation.
 Virtual memory can be implemented via:
 Demand paging
 Demand segmentation
8. Demand Paging - Bring a page into memory only when it is needed.
 Less I/O needed
 Less memory needed
 Faster response
 More users
9. Process creation
 Copy-on-Write (COW) allows both parent and child processes to initially share the same
pages in memory
 Memory-mapped file I/O allows file I/O to be treated as routine memory access by
mapping a disk block to a page in memory

10. Page Replacement


 Prevent over-allocation of memory by modifying page-fault service routine to include
page replacement
 Basic Page Replacement
 Find the location of the desired page on disk
 Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to select a victim
frame
 Read the desired page into the (newly) free frame. Update the page and frame
tables
 Restart the process

 Page Replacement Algorithms
 FIFO
 Optimal
 LRU
 LRU approximation
 Counting based
 Page buffering
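FIFO and LRU can be compared by counting faults over a reference string (the string and frame count follow a common textbook example; this is a sketch, not the notes' own code):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)               # evict the oldest-loaded page
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used page
            memory[page] = True
    return faults
```

On the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, FIFO incurs 15 faults and LRU 12.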
11. Allocation of frames
 Each process needs minimum number of pages.
 Example: IBM 370 – 6 pages to handle SS MOVE instruction:
 Instruction is 6 bytes, might span 2 pages.
 2 pages to handle from.
 2 pages to handle to.
 Two major allocation schemes.
 Fixed allocation
 Priority allocation
 Fixed Allocation

 Equal allocation – e.g., if 100 frames and 5 processes, give each process 20
frames.
 Proportional allocation – Allocate according to the size of process.
 Priority Allocation
 Use a proportional allocation scheme using priorities rather than size.
 If process Pi generates a page fault,
1. Select for replacement one of its frames.
2. Select for replacement a frame from a process with lower priority number.
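Proportional allocation above gives process pi roughly si/S × m frames; a sketch (the rounding rule, leftover frames going to the largest processes, is an assumption for illustration):

```python
def proportional_allocation(total_frames, process_sizes):
    # a_i = floor((s_i / S) * m); hand any leftover frames to the largest processes
    total = sum(process_sizes)
    alloc = [size * total_frames // total for size in process_sizes]
    leftover = total_frames - sum(alloc)
    by_size = sorted(range(len(alloc)), key=lambda i: process_sizes[i], reverse=True)
    for i in by_size[:leftover]:
        alloc[i] += 1
    return alloc
```

With 64 frames and sizes 10 and 127, the small process gets 4 frames and the large one 60.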
12. Thrashing – A process is thrashing if it is spending more time paging than executing.
 Cause of thrashing
 Low CPU utilization.
 Operating system thinks that it needs to increase the degree of multiprogramming.
 Working set model
 Working-set window Δ ≡ a fixed number of page references.
Example: 10,000 instructions
 WSSi (working set of process Pi) = total number of pages referenced in the most
recent Δ (varies in time)
 Page fault frequency
 Establish “acceptable” page-fault rate.
 If actual rate too low, process loses frame.
 If actual rate too high, process gains frame.

UNIT IV - FILE SYSTEMS


1. File-System Interface
File concept– A file is a named collection of related information that is recorded on secondary
storage
 File attributes
 Name
 Identifier
 Type
 Location

 Size
 Protection
 Time, date and user identification
 File Operations
 Create
 Write
 Read
 Reposition within file
 Delete
 Truncate
 Open(Fi)
 Close (Fi)
 File types
 executable
 object
 source code
 batch
 text
 word processor
 library
 print or view
 archive
 multimedia
 File Structure
 None - sequence of words, bytes
 Simple record structure
 Lines
 Fixed length
 Variable length
 Complex Structures

 Formatted document
 Relocatable load file
 Access Methods
 Sequential Access-Information in the file is processed in order, one record after
the other.
 Direct Access-A file is made up of fixed length logical records that allow
programs to read and write records rapidly in no particular order.
2. Directory structure
Directory - A collection of nodes containing information about all files.
 Operations Performed on Directory
 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system
3. Logical structures of directory
 Single-Level Directory -A single directory for all users.
 Two-Level Directory - Separate directory for each user.
 Tree-structured directories – absolute and relative path
 Acyclic-Graph Directories - Have shared subdirectories and files.
 General Graph Directory

4. File system mounting

 A file system must be mounted before it can be accessed.


 An unmounted file system is mounted at a mount point.
5. Protection
 File owner/creator should be able to control.
 Types of access

 Read
 Write
 Execute
 Append
 Delete
 List
 Access control
 Owner
 Group
 Universe
6. File system implementation
 File structure
 Logical storage unit
 Collection of related information
 File system resides on secondary storage (disks).
 File system organized into layers.
 File control block – storage structure consisting of information about a file.
 Overview
 On-disk structures
 Boot control block
 Partition control block
 A directory structure
 An FCB
 In-memory structures
 An in-memory partition table
 An in-memory directory structure
 System-wide open-file table
 Per-process open-file table
 Partitions and mounting

 Virtual file systems - Virtual File Systems (VFS) provide an object-oriented way of
implementing file systems.
7. Directory Implementation
 Linear list of file names with pointer to the data blocks.
 simple to program
 time-consuming to execute
 Hash Table – linear list with hash data structure.
 decreases directory search time
 collisions – situations where two file names hash to the same location
 fixed size
8. Allocation Methods
 An allocation method refers to how disk blocks are allocated for files:
 Contiguous allocation - Each file occupies a set of contiguous blocks on the disk.
 Linked allocation - Each file is a linked list of disk blocks: blocks may be
scattered anywhere on the disk.
 Indexed allocation - Brings all pointers together into the index block.

9. Free-space management
 Bit vector
 Linked list
 Grouping
 Counting
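The bit-vector scheme keeps one bit per block; a sketch (the block numbering and the sample bitmap are assumptions):

```python
def free_blocks(bitmap):
    """Bit-vector free-space management: bit i is 1 if block i is free."""
    return [i for i, bit in enumerate(bitmap) if bit == 1]

def allocate_first_free(bitmap):
    # find the first free block, mark it allocated, and return its number
    for i, bit in enumerate(bitmap):
        if bit == 1:
            bitmap[i] = 0
            return i
    return None

bitmap = [0, 0, 1, 1, 0, 1]   # blocks 2, 3 and 5 are free
```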
10. Efficiency and performance
11. Recovery
 Consistency checking
 Backup and restore
12. Log-structured file system

UNIT V - I/O SYSTEMS


1. I/O systems – the control of devices connected to the computer
2. I/O hardware
 A typical PC bus structure
 Bus
 Port
 PCI bus
 Expansion bus
 Controller
 I/O registers – status, control, data-in and data-out
 Polling
 Interrupts
 Nonmaskable interrupts
 Maskable interrupts
 Direct memory access
3. Application I/O interface
 Block and character devices
 read()
 write()

 Network devices
 Clocks and timers
 Give the current time
 Give the elapsed time
 Set a timer to trigger operation X at time T
 Blocking and non-blocking I/O
4. Kernel I/O subsystem
 I/O scheduling
 Buffering
 Caching

 Spooling and device reservation


 Error handling
 Kernel data structures
5. Streams – A stream is a full-duplex connection between a device driver and a user-level
process.
It consists of:
 Stream head
 Driver end
 Stream modules
6. Performance
7. Mass-Storage Structure
 Disk structure – Disk drives are addressed as large one-dimensional arrays of logical
blocks.
 Disk scheduling
 FCFS scheduling
 SSTF scheduling
 SCAN scheduling
 C-SCAN scheduling
 LOOK scheduling
 Selecting a Disk-Scheduling Algorithm
 SSTF is common and has a natural appeal
 SCAN and C-SCAN perform better for systems that place a heavy load on the
disk.
 Performance depends on the number and types of requests.
 Requests for disk service can be influenced by the file-allocation method.
 The disk-scheduling algorithm should be written as a separate module of the
operating system, allowing it to be replaced with a different algorithm if
necessary.
 Either SSTF or LOOK is a reasonable choice for the default algorithm.
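Total head movement makes the comparison concrete; a sketch of FCFS versus SSTF (the request queue and starting cylinder follow a common textbook example, not these notes):

```python
def fcfs_seek(start, requests):
    # total head movement when servicing requests in arrival order
    total, head = 0, start
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_seek(start, requests):
    # always service the pending request closest to the current head position
    pending, total, head = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total
```

For the queue 98, 183, 37, 122, 14, 124, 65, 67 with the head at cylinder 53, FCFS moves 640 cylinders while SSTF moves only 236.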
 Disk management

 Low-level formatting, or physical formatting — dividing a disk into sectors that
the disk controller can read and write.
 To use a disk to hold files, the operating system still needs to record its own data
structures on the disk.
 Partition the disk into one or more groups of cylinders.
 Logical formatting or “making a file system”.
 Boot block initializes system.
 The bootstrap is stored in ROM.
 Bootstrap loader program.
 Methods such as sector sparing used to handle bad blocks.
 Swap-Space Management
 Swap-space — Virtual memory uses disk space as an extension of main memory.
 Swap-space can be carved out of the normal file system, or, more commonly, it
can be in a separate disk partition.
 Swap-space management
 4.3BSD allocates swap space when process starts; holds text segment (the
program) and data segment.
 Kernel uses swap maps to track swap-space use.
 Solaris 2 allocates swap space only when a page is forced out of physical
memory, not when the virtual memory page is first created.

 RAID
 Improvement of reliability via redundancy
 Improvement in performance via parallelism
 RAID levels
 RAID 0: non-redundant striping
 RAID 1: mirrored disks
 RAID 2: memory-style error-correcting codes
 RAID 3: bit-interleaved parity
 RAID 4: block-interleaved parity
 RAID 5: block-interleaved distributed parity
 RAID 6: P+Q redundancy
 Selecting a RAID level
 Extensions
 Disk attachment
 Host-attached storage – a storage accessed via local I/O ports
 Network-attached storage – a special purpose storage system that is accessed
remotely over a data network
 Storage-area network – a private network among the servers and storage units,
separate from the LAN or WAN that connects the servers to the clients
 Stable storage
 A disk write results in one of three outcomes:
 Successful completion
 Partial failure
 Total failure
 Tertiary storage
 Tertiary storage devices
 Removable disks
 Tapes
 Future technology
 Operating system jobs
 Application interface
 File naming
 Hierarchical storage management
 Performance issues
 Speed
 Reliability
 Cost

QUESTION BANK

UNIT I - PROCESSES AND THREADS

PART - A
1. What is an operating system?
2. What are the 4 components of a computer system?
3. What are the 2 viewpoints of operating system?
4. What is meant by control program?
5. Differentiate symmetric and asymmetric multiprocessing.
6. Differentiate tightly coupled and loosely coupled systems.
7. Define hard and soft real-time systems
8. What are the operating system components?
9. Define distributed system.
10. What are the operating system services?
11. Define system call.
12. Write the 5 categories of system calls.
13. What are the 2 types of communication models?
14. Define a system program and give their types.
15. Write the advantages of virtual machines.
16. Define a process
17. Write the states of a process.
18. What are the 3 scheduling queues?
19. Write the 2 types of schedulers.
20. What is meant by context switching?
21. Define IPC.
22. What are the operations in a message passing system?
23. Write the difference between direct and indirect communication.
24. What is meant by socket? Give an example.
25. Write the use of RPC.
26. What is meant by stub and skeleton?

27. Define RMI.


28. Define a thread.
29. Differentiate single threading and multi threading.
30. Write the benefits of multithreading.
31. What are the types of multithreading models?
32. What is pthread?

PART - B
1. Explain about the components of a computer system.
2. Briefly discuss about the various system components.
3. Explain about system calls and their types.
4. Briefly discuss about the Linux, OS/2 and Windows NT structures.
5. Explain about virtual machine and non-virtual machine.
6. Briefly discuss about interprocess communication.
7. Explain about RPC and RMI.
8. Explain the various threading issues.

UNIT II - PROCESS SCHEDULING AND SYNCHRONIZATION


PART - A
1. Differentiate preemptive and non-preemptive scheduling.
2. Write the functions of a dispatcher.
3. Write down the criteria for scheduling.
4. Define turnaround time, waiting time and response time.
5. What is Gantt chart?
6. Write the scheduling algorithms.
7. Differentiate multilevel queue scheduling and multilevel feedback queue scheduling.
8. What is meant by multiple processor scheduling?
9. What are the 2 methods in analytic evaluation?
10. What is meant by critical section?
11. Write the requirements of critical section problem?

12. Define mutual exclusion.


13. Define semaphore and write its 2 operations.
14. Differentiate counting semaphore and binary semaphore.
15. Write the use of monitors.
16. Define a deadlock.
17. What are the conditions for deadlocks?
18. What is deadlock prevention and deadlock avoidance?
19. What is meant by safe state?
20. Write the data structures used in the banker’s algorithm.
21. What are the 2 solutions for deadlock recovery?

PART - B
1. Consider the following set of processes, with the length of the CPU burst time given in
milliseconds
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes are assumed to have arrived in the order P1, P2, P3, P4 and P5 at time 0.
a) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a
non-preemptive priority (a smaller priority number implies a higher priority) and RR
(quantum=1) scheduling.
b) What is the turnaround time of each process for each of the scheduling algorithms in part
a?
c) What is the waiting time of each process for each of the scheduling algorithms in part a?
d) Which of the schedules in part a results in the minimal average waiting time (over all
processes)?
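As a cross-check for parts (b)-(d), here is a small Python sketch (an illustrative addition, not part of the original notes) that computes the average waiting times; the process data comes from the table above, and ties in SJF and priority scheduling are broken by arrival order through Python's stable sort.

```python
from collections import deque

def nonpreemptive(order):
    # Waiting time of each process when the bursts run back-to-back
    # in the given order; every process arrives at time 0.
    t, wait = 0, {}
    for name, burst in order:
        wait[name] = t
        t += burst
    return wait

def round_robin(procs, quantum):
    # Minimal round-robin simulation; waiting time = completion - burst,
    # which holds because all processes arrive at time 0.
    bursts = dict(procs)
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    t, wait = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            wait[name] = t - bursts[name]
        else:
            queue.append(name)
    return wait

procs = [("P1", 10), ("P2", 1), ("P3", 2), ("P4", 1), ("P5", 5)]
prio = {"P1": 3, "P2": 1, "P3": 3, "P4": 4, "P5": 2}
avg = lambda w: sum(w.values()) / len(w)

print("FCFS    :", avg(nonpreemptive(procs)))                                   # 9.6
print("SJF     :", avg(nonpreemptive(sorted(procs, key=lambda p: p[1]))))       # 3.2
print("Priority:", avg(nonpreemptive(sorted(procs, key=lambda p: prio[p[0]])))) # 8.2
print("RR q=1  :", avg(round_robin(procs, 1)))                                  # 5.4
```

Of the four schedules, SJF gives the minimal average waiting time (3.2 ms) for this workload, which answers part (d).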
2. Explain about the following.
a) Multilevel queue scheduling
b) Multilevel feedback queue scheduling
c) Multiprocessor scheduling
d) Real time scheduling
3. Explain the various algorithm evaluation methods.
4. Explain about the following
a) Bounded buffer, b) Readers and writers problem and c) Dining philosophers problem
5. Briefly discuss about the various algorithms in critical section problem.
6. Explain about the 4 necessary conditions for deadlock prevention.
7. Apply the banker’s algorithm and find whether the system is in a safe state. Find the total
number of resources in the system.
        Maximum       Allocation    Available
        A  B  C       A  B  C       A  B  C
P0      7  3  2       4  1  0       3  2  2
P1      6  4  5       3  2  3
P2      5  4  3       4  3  1
P3      4  3  2       4  2  1
P4      3  3  1       3  0  1
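The safety check for question 7 can be cross-checked with a short Python sketch of the banker’s safety algorithm (an illustrative addition, not part of the notes); the matrices transcribe the table above, and Need is derived as Maximum − Allocation.

```python
def is_safe(available, max_need, alloc):
    # Banker's safety algorithm: repeatedly pick an unfinished process
    # whose remaining need fits in Work, let it finish, reclaim its
    # allocation, and repeat until all finish (safe) or none can (unsafe).
    n, m = len(alloc), len(available)
    need = [[max_need[i][j] - alloc[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    done = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not done[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + alloc[i][j] for j in range(m)]
                done[i] = True
                sequence.append(i)
                progress = True
    return all(done), sequence

max_need  = [[7, 3, 2], [6, 4, 5], [5, 4, 3], [4, 3, 2], [3, 3, 1]]
alloc     = [[4, 1, 0], [3, 2, 3], [4, 3, 1], [4, 2, 1], [3, 0, 1]]
available = [3, 2, 2]

safe, seq = is_safe(available, max_need, alloc)
# Total resources = column sums of Allocation plus Available.
total = [sum(row[j] for row in alloc) + available[j] for j in range(3)]
print(safe, seq, total)  # True [0, 1, 2, 3, 4] [21, 10, 8]
```

The system is in a safe state; picking the lowest eligible index first yields the safe sequence <P0, P1, P2, P3, P4> (other safe sequences may exist), and the totals are 21, 10 and 8 instances of A, B and C.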
8. Explain about deadlock detection with neat example.

UNIT III - STORAGE MANAGEMENT


PART - A
1. What are the steps in address binding?
2. What is physical address and logical address?
3. Write the use of MMU.
4. Write the use of dynamic loading.
5. Differentiate static linking and dynamic linking.
6. What is the use of overlays?
7. Define swapping.
8. What is 50-percent rule?
9. Write the 3 memory allocation strategies.
10. What is internal and external fragmentation?
11. Define paging.
12. What is TLB?
13. Define hit ratio.
14. Differentiate paging and segmentation.
15. What is virtual memory?
16. What are the techniques used to implement virtual memory?
17. Define lazy swapper.
18. What is pure demand paging?
19. Define thrashing.

PART-B
1. Briefly discuss about paging.
2. Explain the segmentation method of memory management.
3. Describe the following allocation algorithms
a) First fit, b) Best fit and c)Worst fit
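The three strategies in question 3 differ only in which free hole they pick. A small Python sketch (with hypothetical hole sizes, not taken from the notes) makes the contrast concrete.

```python
def allocate(holes, request, strategy):
    # Return the index of the hole chosen for a `request`-KB allocation
    # from the free list `holes` (sizes in KB), or None if nothing fits.
    candidates = [i for i, h in enumerate(holes) if h >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0]                            # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda i: holes[i])  # smallest hole that fits
    if strategy == "worst":
        return max(candidates, key=lambda i: holes[i])  # largest hole

holes = [100, 500, 200, 300, 600]        # hypothetical free-hole sizes
print(allocate(holes, 212, "first"))     # 1 (the 500 KB hole)
print(allocate(holes, 212, "best"))      # 3 (the 300 KB hole)
print(allocate(holes, 212, "worst"))     # 4 (the 600 KB hole)
```

Best fit minimizes the leftover fragment per allocation, while worst fit keeps the leftover large enough to be reusable; first fit is usually the fastest to search.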
4. Explain the implementation of page table using TLB.
5. Briefly discuss about demand paging with its performance.
6. Explain about the various page replacement strategies.
7. Apply the FIFO, Optimal and LRU page replacement algorithms and find the number of page
faults. Use 4 frames. The reference string is 0, 1, 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 7, 3, 1, 2, 0, 2, 3.
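The fault counts for question 7 can be verified with short Python simulations (an illustrative addition, not part of the notes); each function returns the number of page faults for the reference string above with 4 frames.

```python
from collections import deque, OrderedDict

REFS = [0, 1, 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 7, 3, 1, 2, 0, 2, 3]
FRAMES = 4

def fifo_faults(refs, frames):
    q, faults = deque(), 0
    for p in refs:
        if p not in q:
            faults += 1
            if len(q) == frames:
                q.popleft()                 # evict the oldest-loaded page
            q.append(p)
    return faults

def lru_faults(refs, frames):
    recent, faults = OrderedDict(), 0
    for p in refs:
        if p in recent:
            recent.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(recent) == frames:
                recent.popitem(last=False)  # evict the least recently used
            recent[p] = True
    return faults

def optimal_faults(refs, frames):
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                # Evict the page whose next use is farthest away (or never).
                def next_use(q):
                    rest = refs[i + 1:]
                    return rest.index(q) if q in rest else len(refs)
                mem.remove(max(mem, key=next_use))
            mem.add(p)
    return faults

print(fifo_faults(REFS, FRAMES), optimal_faults(REFS, FRAMES), lru_faults(REFS, FRAMES))
# 12 8 10
```

For this string FIFO incurs 12 faults, LRU 10 and Optimal 8, illustrating the usual ordering Optimal ≤ LRU ≤ FIFO.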
8. Explain how sharing can be achieved in a segmentation environment.

UNIT IV - FILE SYSTEMS


PART - A
1. Define a file.
2. What are the file attributes?
3. Write the file operations.
4. Give some common file types.
5. What are the file access methods?
6. What is absolute path and relative path?
7. What is a mount point?
8. What is boot control block and partition control block?
9. Write the in-memory structures to implement a file system.
10. What are the 3 disk space allocation methods?
11. What is the use of consistency checker?

PART - B
1. Briefly discuss about the various file access methods.
2. Explain the various directory structures.
3. Explain the methods for implementing free space management.
4. Explain the different methods for allocating space on disk.
5. Explain the methods for implementing directory.
6. Explain the various file protection methods.

UNIT V - I/O SYSTEMS


PART - A
1. Define device driver.
2. Define port and bus.
3. What are the I/O port registers?
4. Define a stream.
5. Define seek time.
6. Define rotational latency.
7. Define bandwidth.
8. What is boot block and bad block?
9. What is sector sparing and sector slipping?
10. Write the purpose of RAID.
11. What are the 5 levels of RAID?
12. Define host-attached storage and network-attached storage.
13. Give examples for tertiary storage structure.

PART - B
1. Explain about the typical bus structure.
2. Explain about interrupts.
3. Explain about DMA with its steps.
4. Briefly discuss about the application I/O interface.
5. Explain about the kernel I/O subsystem.
6. Briefly explain about the various disk scheduling algorithms with example.
7. Briefly discuss about RAID.
8. Explain the following a) Disk attachment and b) Stable-storage implementation.
9. Suppose that a disk drive has 5,000 cylinders, numbered 0 to 4999. The drive is currently
serving a request at cylinder 143, and the previous request was at cylinder 125. The queue
of pending requests, in FIFO order, is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130.
Starting from the current head position, what is the total distance (in cylinders) that the
disk arm moves to satisfy all the pending requests for each of the following disk-
scheduling algorithms?
a) FCFS, b) SSTF, c) SCAN, d) C-SCAN, e) LOOK and f) C-LOOK
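The totals for question 9 can be checked with a Python sketch (an illustrative addition, not part of the notes). It assumes the head is sweeping toward higher cylinders, since it just moved from 125 to 143, and that SCAN travels all the way to the last cylinder before reversing; C-SCAN and C-LOOK are left as variations of the same pattern.

```python
HEAD, MAXCYL = 143, 4999
QUEUE = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]

def fcfs(head, reqs):
    # Serve requests in arrival order, summing the seek distances.
    dist = 0
    for r in reqs:
        dist += abs(head - r)
        head = r
    return dist

def sstf(head, reqs):
    # Always serve the pending request closest to the current head.
    pending, dist = list(reqs), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(head - r))
        dist += abs(head - nxt)
        head = nxt
        pending.remove(nxt)
    return dist

def scan_up(head, reqs, maxcyl):
    # SCAN moving toward higher cylinders: sweep to the disk edge,
    # then reverse for the remaining (lower) requests.
    low = [r for r in reqs if r < head]
    dist = maxcyl - head                # out to cylinder maxcyl
    if low:
        dist += maxcyl - min(low)       # back down to the lowest request
    return dist

def look_up(head, reqs):
    # LOOK: same sweep, but reverse at the last request, not the edge.
    high = [r for r in reqs if r >= head]
    low = [r for r in reqs if r < head]
    dist = max(high) - head if high else 0
    if low:
        dist += (max(high) if high else head) - min(low)
    return dist

print(fcfs(HEAD, QUEUE))             # 7081
print(sstf(HEAD, QUEUE))             # 1745
print(scan_up(HEAD, QUEUE, MAXCYL))  # 9769
print(look_up(HEAD, QUEUE))          # 3319
```

Under these assumptions the totals are 7081 cylinders for FCFS, 1745 for SSTF, 9769 for SCAN and 3319 for LOOK, showing how much the sweep-based algorithms save over FCFS.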
