OS QP Solved 1
• One set of operating-system services provides functions that are helpful to the user
• Communications – Processes may exchange information, on the same computer or
between computers over a network.
• Communications may be via shared memory or through message passing (packets
moved by the OS)
• Error detection – The OS needs to be constantly aware of possible errors that may occur in the
CPU and memory hardware, in I/O devices, and in user programs.
• For each type of error, OS should take the appropriate action to ensure correct and
consistent computing.
• Debugging facilities can greatly enhance the user’s and programmer’s abilities to
efficiently use the system.
• Another set of OS functions exists for ensuring the efficient operation of the system
itself via resource sharing
• Resource allocation - When multiple users or multiple jobs are running concurrently,
resources must be allocated to each of them.
• Many types of resources - Some (such as CPU cycles, main memory, and file storage)
may have special allocation code, others (such as I/O devices) may have general request
and release code
• Accounting - To keep track of which users use how much and what kinds of computer
resources
• Protection and security - The owners of information stored in a multiuser or
networked computer system may want to control use of that information, and concurrent
processes should not interfere with each other.
• Protection involves ensuring that all access to system resources is controlled.
• Security of the system from outsiders requires user authentication and extends to
defending external I/O devices from invalid access attempts.
• If a system is to be protected and secure, precautions must be instituted throughout it.
A chain is only as strong as its weakest link.
1.b
System Calls
• Programming interface to the services provided by the OS
• Typically written in a high-level language (C or C++)
• Mostly accessed by programs via a high-level Application Program Interface (API)
rather than direct system call use
• Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems
(including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the
Java virtual machine (JVM)
• Why use APIs rather than system calls? (Note that the system-call names used throughout this text are generic)
Example of System Calls
System Call Implementation
• Typically, a number is associated with each system call
• The system-call interface maintains a table indexed according to these numbers
• The system-call interface invokes the intended system call in the OS kernel and returns the status of the
system call and any return values
• The caller need know nothing about how the system call is implemented
• Just needs to obey the API and understand what the OS will do as a result of the call
• Most details of the OS interface are hidden from the programmer by the API and managed by the run-time
support library (a set of functions built into libraries included with the compiler); a short example follows
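As an illustration, here is a minimal C sketch (assuming a POSIX system) in which the program calls the API function write() rather than trapping into the kernel itself; the run-time library wrapper looks up the system-call number, performs the trap, and returns the status:

#include <unistd.h>   /* POSIX API: write() */
#include <string.h>

int main(void)
{
    const char *msg = "hello via the POSIX API\n";
    /* write() is the API call; the run-time support library maps it to
       the kernel's system-call number and returns status/return values. */
    write(1, msg, strlen(msg));   /* 1 = standard output */
    return 0;
}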
Dual-Mode Operation
3.a
Process State
These names are arbitrary, and they vary across operating systems. Certain operating systems also more finely
delineate process states. It is important to realize that only one process can be running on any processor at any instant.
3.b
4.a
Each process is represented in the operating system by a process control block (PCB), also called a task control
block. Its main fields are listed below (a C struct sketch follows this list).
• Program counter - The counter indicates the address of the next instruction to be executed for this process.
• CPU registers - The registers vary in number and type, depending on the computer architecture. They include
accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.
• CPU-scheduling information - This information includes a process priority, pointers to scheduling queues, and
any other scheduling parameters.
• Memory-management information - This information may include such items as the values of the base and
limit registers, the page tables, or the segment tables, depending on the memory system used by the
operating system.
• Accounting information - This information includes the amount of CPU and real time used, time limits, account
numbers, job or process numbers, and so on.
• I/O status information - This information includes the list of I/O devices allocated to the process, a list of open
files, and so on.
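A hypothetical C struct sketching these fields (all names here are illustrative, not from any actual OS; a real kernel such as Linux uses a much larger task_struct):

struct pcb {
    int           pid;              /* process number */
    int           state;            /* new / ready / running / waiting / terminated */
    unsigned long program_counter;  /* address of next instruction */
    unsigned long registers[16];    /* saved CPU registers */
    int           priority;         /* CPU-scheduling information */
    unsigned long base, limit;      /* memory-management information */
    unsigned long cpu_time_used;    /* accounting information */
    int           open_files[20];   /* I/O status information */
};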
4.b.
Threads
Traditional processes have a single thread of control; such a process is also called a heavyweight process. There is
one program counter, and one sequence of instructions that can be carried out at any given time.
A multi-threaded application has multiple threads within a single process, each having its own program
counter, stack, and set of registers, but sharing common code, data, and certain structures such as open files.
Such processes are called lightweight processes.
Advantages
Responsiveness – may allow continued execution if part of process is blocked, especially important for user
interfaces
Resource Sharing – threads share resources of process, easier than shared memory or message passing
Economy – cheaper than process creation, thread switching lower overhead than context switching
Types of Threads
1) User Threads are above the kernel and are managed without kernel support.
There are three types of relationships between user threads and kernel threads.
Each thread is represented by a PC, registers, stack and a small control block, all stored in the user process
address space.
Three primary thread libraries: POSIX Pthreads ,Win32 threads ,Java threads
User-level threads are implemented in user-level libraries, rather than via system calls, so
thread switching does not need to call the operating system or cause an interrupt to the kernel.
Fast and Efficient: Thread switching is not much more expensive than a procedure call.
Simple Management: Creating a thread, switching between threads and synchronization between threads can
all be done without intervention of the kernel.
2) Kernel Threads are supported and managed directly by the kernel.
Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the
system.
Kernel-level threads are slower and less efficient; for instance, thread operations can be hundreds of times
slower than user-level thread operations.
The kernel must manage and schedule threads as well as processes. It requires a full thread control block (TCB)
for each thread to maintain information about threads.
Multithreaded Processes
Many-to-One Model
Multiple threads may not run in parallel on a multicore system because only one thread may be in the kernel at a time.
If one of the threads makes a blocking system call, then the entire process will be blocked.
Examples:
One-to-One Model
Examples
Windows
Linux
Many-to-Many Model
Multicore Programming
A recent trend in computer architecture is to produce chips with multiple cores, or CPUs on a single chip.
A multi-threaded application running on a traditional single-core chip would have to execute the threads
one after another.
On a multi-core chip, the threads could be spread across the available cores, allowing true parallel
processing.
Thread Library
A thread library provides the programmer with an API for creating and managing threads.
There are two primary ways of implementing a thread library:
• The first approach is to provide a library entirely in user space with no kernel support. All code and data
structures for the library exist in user space, so invoking a function in the library results in a local function
call in user space and not in a system call.
• The second approach is to implement a kernel-level library supported directly by the operating system. In
this case the code and data structures for the library exist in kernel space, and invoking a function in the API
for the library typically results in a system call to the kernel.
The main thread libraries in use are given below (a Pthreads sketch follows this list):
• POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided as either a
user-level or a kernel-level library.
• Win32 threads − The Windows thread library is a kernel-level library available on Windows systems.
• Java threads − The Java thread API allows threads to be created and managed directly in Java
programs.
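A minimal Pthreads sketch (assuming a POSIX system; compile with gcc file.c -lpthread) that creates one thread and waits for it to finish:

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)
{
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* create the thread */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}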
Threading Issues
Semantics of fork() and exec() system calls
Cancellation
Signal handling
Thread pools
When a thread in a program calls fork(), the new process can be a copy of the parent with all its threads, or
it can be a copy of only the single thread that invoked fork().
If the thread then invokes the exec() system call, the program specified in the parameter to exec() will
replace the entire process, all threads included.
Cancellation
Terminating the thread before it has completed its task is called thread cancellation.
Example : Multiple threads required in loading a webpage is suddenly cancelled, if the browser window is closed.
Threads that are no longer needed may be cancelled in one of two ways (a Pthreads sketch of deferred cancellation follows):
1. Asynchronous Cancellation - one thread immediately terminates the target thread.
2. Deferred Cancellation - the target thread periodically checks whether it should terminate, which gives the
thread an opportunity to terminate itself in an orderly fashion.
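A sketch of deferred cancellation using Pthreads (assuming a POSIX system): the target thread polls for a pending cancellation request at pthread_testcancel(), a cancellation point:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg)
{
    /* Deferred is the default cancel type; set it explicitly here. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   /* cancellation point: may terminate here */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);        /* request cancellation of the target thread */
    pthread_join(tid, NULL);    /* reap the cancelled thread */
    printf("worker cancelled\n");
    return 0;
}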
Signal Handling
Signals are software interrupts sent to a process to indicate that an important event has occurred.
5.a.
Critical-Section Problem
• Critical-section is a segment-of-code in which a process may be
→ changing common variables
→ updating a table or
→ writing a file.
• Each process has a critical-section in which the shared-data is accessed.
• General structure of a typical process has following (Figure 2.12):
1) Entry-section
● Requests permission to enter the critical-section.
2) Critical-section
● Mutually exclusive in time i.e. no other process can execute in its critical-section.
3) Exit-section
● Follows the critical-section.
4) Remainder-section
Figure 2.12 General structure of a typical process
• Problem statement:
"Ensure that when one process is executing in its critical-section, no other process is to be allowed to execute in
its critical-section."
• A solution to the problem must satisfy the following 3 requirements:
1) Mutual Exclusion:
● No more than one process can be in its critical-section at a given time.
2) Progress:
● When no process is in the critical-section, any process that requests entry into its critical-section must be
permitted to enter without indefinite postponement.
3) Bounded Waiting (No starvation):
● There is a bound on the number of times other processes may enter their critical-sections
after a process has requested entry and before that request is granted.
• Two approaches used to handle critical-sections:
1) Preemptive Kernels
● Allows a process to be preempted while it is running in kernel-mode.
● More suitable for real-time programming.
2) Non-preemptive Kernels
● Does not allow a process running in kernel-mode to be preempted; it is free from race conditions on kernel
data structures, as only one process is active in the kernel at a time.
Peterson’s Solution
*****Detailed understanding: https://nptel.ac.in/courses/106106144/26*****
• This is a classic software-based solution to the critical-section problem.
• This is limited to 2 processes.
• The 2 processes alternate execution between
→ critical-sections and
→ remainder-sections.
do {
    flag[i] = TRUE;              /* entry section (lock) */
    turn = j;
    while (flag[j] && turn == j)
        ;                        /* busy wait */
    /* critical section */
    flag[i] = FALSE;             /* exit section (unlock) */
    /* remainder section */
} while (TRUE);
• To enter the critical-section,
→ firstly, process Pi sets flag[i] to be true and
→ then sets turn to the value j.
• If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time,
but only one of these assignments will last; the other occurs but is immediately overwritten.
• The final value of turn determines which of the 2 processes is allowed to enter its critical-section first.
• To prove that this solution is correct, we show that:
1) Mutual-exclusion is preserved:
• Observation 1: Pi enters the CS only if flag[j] == false or turn == i.
• Observation 2: If both processes were executing in their CSs at the same time, then
flag[i] == flag[j] == true.
These two observations imply that Pi and Pj could not have successfully executed their while statements at about
the same time, since the value of turn can be either i or j but cannot be both.
Hence, the process which sets turn first will enter first, and Mutual Exclusion is preserved.
2) The progress requirement & the bounded-waiting requirement are met:
• The process which executes the while statement first (say Pi) does not change the value of
turn, so the other process (say Pj) will enter the CS (Progress) after at most one entry by Pi
(Bounded Waiting).
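The algorithm can be tried out with the following C sketch using two Pthreads (a demonstration only: on modern out-of-order processors Peterson's solution needs memory barriers or C11 atomics to be reliable):

#include <pthread.h>
#include <stdio.h>

volatile int flag[2] = {0, 0};   /* flag[i]: Pi wants to enter */
volatile int turn;               /* which process must defer */
int counter = 0;                 /* shared variable protected by the CS */

void *proc(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        flag[i] = 1;                     /* entry section */
        turn = j;
        while (flag[j] && turn == j)
            ;                            /* busy wait */
        counter++;                       /* critical section */
        flag[i] = 0;                     /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, proc, &id0);
    pthread_create(&t1, NULL, proc, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}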
5.b
Semaphores
*****Detailed understanding: https://nptel.ac.in/courses/106106144/30*****
• A semaphore is a synchronization-tool.
• It is used to control access to shared-variables so that only one process may, at any point
in time, change the value of the shared-variable.
• A semaphore(S) is an integer-variable that is accessed only through 2 atomic-operations:
1) wait() and
2) signal().
• wait() is termed P ("to test or decrement” ) signal() is termed V ("to increment").
• The definitions of wait() and signal() are given below.
• When one process modifies the semaphore-value, no other process can simultaneously
modify that same semaphore-value.
Semaphore Usage:
a) Binary Semaphore:
● The value of the semaphore can range only between 0 and 1.
● On some systems, binary semaphores are known as mutex locks, as they are locks
that provide mutual-exclusion.
● Used to provide mutual-exclusion between processes.
b) Counting Semaphore:
● The value of the semaphore can range over an unrestricted domain.
● Used to control access by multiple processes to a resource with a finite number of instances.
wait(S) {
    while (S <= 0)
        ;            /* no-op: busy wait */
    S = S - 1;
}
signal(S) {
    S = S + 1;
}
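A C sketch using POSIX semaphores (assuming a POSIX system; sem_wait()/sem_post() correspond to wait()/signal() above, and an initial value of 1 makes it a binary semaphore):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;           /* binary semaphore, initial value 1 */
int shared = 0;

void *worker(void *arg)
{
    sem_wait(&mutex);  /* wait(): decrement, block while the value is 0 */
    shared++;          /* critical section */
    sem_post(&mutex);  /* signal(): increment, wake a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&mutex, 0, 1);   /* 0 = shared between threads, not processes */
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&mutex);
    return 0;
}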
6.a.
Deadlocks
• Deadlock is a situation where a set of processes are blocked because each process is
→ holding a resource and
→ waiting for another resource held by some other process.
• Real life example:
When two trains are coming toward each other on the same track, neither train can move
once they are in front of each other.
• A similar situation occurs in operating systems when two or more processes each hold some
resources and wait for resources held by the other(s).
Deadlock Characterization
• In a deadlock, processes never finish executing, and system resources are tied
up, preventing other jobs from starting.
1) Necessary Conditions
• There are four conditions that are necessary to achieve deadlock:
i) Mutual Exclusion
At least one resource must be held in a non-sharable mode, i.e., if one process holds a
non-sharable resource and any other process requests this resource, then the requesting
process must wait for the resource to be released.
ii) Hold and Wait
A process must be holding at least one resource while waiting to acquire additional
resources held by other processes.
iii) No Preemption
Resources cannot be preempted; a resource can be released only voluntarily by the
process holding it.
iv) Circular Wait
A set of waiting processes {P0, P1, ..., Pn} must exist such that P0 is waiting for a
resource held by P1, P1 for a resource held by P2, and so on, with Pn waiting for a
resource held by P0. (A small two-lock sketch of this pattern follows.)
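A two-lock C sketch of the circular-wait pattern (assuming Pthreads): each thread holds one lock and waits for the other, like the two trains above; the program intentionally hangs:

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg)
{
    pthread_mutex_lock(&A);   /* holds A ... */
    sleep(1);
    pthread_mutex_lock(&B);   /* ... and waits for B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *t2(void *arg)
{
    pthread_mutex_lock(&B);   /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&A);   /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);    /* never returns: the threads are deadlocked */
    pthread_join(y, NULL);
    return 0;
}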
6.b
iii) Yes
7.a.
PROGRAM: a
#include<stdio.h>
#include<conio.h>   /* Turbo C specific: clrscr(), getch() */
int main()
{
    int i, m, n, tot, s[20];
    clrscr();
    printf("Enter total memory size:");
    scanf("%d", &tot);
    printf("Enter no. of processes:");
    scanf("%d", &n);
    printf("Enter memory for OS:");
    scanf("%d", &m);
    for (i = 0; i < n; i++)
    {
        printf("Enter size of process %d:", i + 1);
        scanf("%d", &s[i]);
    }
    tot = tot - m;               /* memory left after loading the OS */
    for (i = 0; i < n; i++)
    {
        if (tot >= s[i])         /* enough memory remains: allocate */
        {
            printf("Allocate memory to process %d\n", i + 1);
            tot = tot - s[i];
        }
        else                     /* not enough memory: block the process */
            printf("process p%d is blocked\n", i + 1);
    }
    printf("External Fragmentation is=%d", tot);
    getch();
    return 0;
}
OUTPUT:
Enter total memory size : 50
Enter no. of processes : 4
Enter memory for OS : 10
Enter size of process 1 : 10
Enter size of process 2 : 9
Enter size of process 3 : 9
Enter size of process 4 : 10
Allocate memory to process 1
Allocate memory to process 2
Allocate memory to process 3
Allocate memory to process 4
External Fragmentation is = 2
7.b
• Paging is a memory-management scheme.
• This permits the physical-address space of a process to be non-contiguous.
• This also solves the considerable problem of fitting memory-chunks of varying
sizes onto the backing-store.
• Traditionally, support for paging has been handled by hardware. In recent designs,
the hardware and OS are closely integrated.
1) Basic Method
□ Physical-memory is broken into fixed-sized blocks called frames (Figure 3.16).
Logical-memory is broken into same-sized blocks called pages.
□ When a process is to be executed, its pages are loaded into any available
memory-frames from the backing-store.
□ The backing-store is divided into fixed-sized blocks that are of the same size as
the memory-frames.
Figure 3.16 Paging hardware
Figure 3.17 Paging model of logical and physical-memory
□ The page-size (like the frame size) is defined by the hardware (Figure 3.18).
□ If the size of the logical-address space is 2^m, and a page-size is 2^n addressing-units,
then the high-order m-n bits of a logical address designate the page number, and the n
low-order bits designate the page offset (a small C sketch of the split follows).
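A small C sketch of the address split (page size 2^12 = 4 KB here; the page-table contents are made up for illustration):

#include <stdio.h>

#define PAGE_BITS 12                  /* n = 12: 4 KB pages */
#define PAGE_SIZE (1u << PAGE_BITS)

int main(void)
{
    unsigned int page_table[] = {5, 6, 1, 2};   /* page -> frame (illustrative) */
    unsigned int logical = 0x2ABC;              /* example logical address */

    unsigned int p = logical >> PAGE_BITS;      /* high-order bits: page number */
    unsigned int d = logical & (PAGE_SIZE - 1); /* low-order n bits: offset */
    unsigned int physical = page_table[p] * PAGE_SIZE + d;

    printf("page %u, offset 0x%X -> physical 0x%X\n", p, d, physical);
    return 0;
}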
8.a
8.b
(i) LRU with 3 frames:

Reference : 7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
Frame 1   : 7  7  7  2  2  2  2  4  4  4  0  0  0  1  1  1  1  1  1  1
Frame 2   :    0  0  0  0  0  0  0  0  3  3  3  3  3  3  0  0  0  0  0
Frame 3   :       1  1  1  3  3  3  2  2  2  2  2  2  2  2  2  7  7  7
Fault?    : √  √  √  √     √     √  √  √  √        √     √     √

No. of page faults = 12
Conclusion: The optimal page-replacement algorithm is the most efficient of the three algorithms, as it has the
lowest number of page faults, i.e., 9.
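The count above can be checked with a short C simulation of LRU over the same reference string (a sketch; it prints 12 for 3 frames):

#include <stdio.h>

int main(void)
{
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[3], last_used[3], loaded = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < loaded; f++)
            if (frame[f] == ref[t]) hit = f;
        if (hit >= 0) {                  /* page already in memory: refresh recency */
            last_used[hit] = t;
            continue;
        }
        faults++;
        if (loaded < 3) {                /* a free frame is still available */
            frame[loaded] = ref[t];
            last_used[loaded++] = t;
        } else {                         /* replace the least recently used page */
            int lru = 0;
            for (int f = 1; f < 3; f++)
                if (last_used[f] < last_used[lru]) lru = f;
            frame[lru] = ref[t];
            last_used[lru] = t;
        }
    }
    printf("LRU page faults = %d\n", faults);  /* prints 12 */
    return 0;
}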
9.a
Sequential Access
• This is based on a tape model of a file.
• This works both on
→ sequential-access devices and
→ random-access devices.
• Information in the file is processed in order (Figure 4.15).
For example: editors and compilers.
• Reading and writing are the 2 main operations on the file.
• File-operations (a C sketch follows this list):
1) read next
This is used to
→ read the next portion of the file and
→ advance a file-pointer, which tracks the I/O location.
2) write next
This is used to
→ append to the end of the file and
→ advance to the new end of file.
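A C sketch of sequential access (assuming a POSIX-style C library): fputs() appends at the end of the file (write next), and each fgetc() advances the file-pointer (read next):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("log.txt", "a+");  /* "a+": read anywhere, writes append */
    if (fp == NULL)
        return 1;

    fputs("one more record\n", fp);     /* write next: appends at end of file */

    rewind(fp);                         /* reposition to the beginning */
    int c;
    while ((c = fgetc(fp)) != EOF)      /* read next: process in order */
        putchar(c);

    fclose(fp);
    return 0;
}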
9.b
4.9.1 Single Level Directory
• All files are contained in the same directory (Figure 4.19).
• Disadvantages (Limitations):
1) Naming problem: All files must have unique names.
2) Grouping problem: It is difficult to remember the names of all files as the number of files increases.
10.a
4.16 Allocation Methods
• The direct-access nature of disks allows us flexibility in the implementation of files.
• In almost every case, many files are stored on the same disk.
• Main problem:
How to allocate space to the files so that
→ disk-space is utilized effectively and
→ files can be accessed quickly.
• Three methods of allocating disk-space:
1) Contiguous
2) Linked and
3) Indexed
• Each method has advantages and disadvantages.
• Some systems support all three (Data General's RDOS for its Nova line of computers).
10.b
• Grouping
• Modify the linked list to store the addresses of the next n-1 free blocks in the first free block, plus
a pointer to the next block that contains free-block-pointers (like this one)
• Counting
• Keep the address of the first free block and a count of the following free blocks
• The free-space list then has entries containing addresses and counts (a small sketch follows)
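A C sketch of the counting representation (the block numbers are made up for illustration): each entry stores the first free block of a contiguous run and the count of free blocks that follow:

#include <stdio.h>

struct free_run {
    unsigned int first;   /* address of the first free block in the run */
    unsigned int count;   /* number of contiguous free blocks */
};

int main(void)
{
    /* Example free-space list: blocks 2-5, 8, and 17-19 are free. */
    struct free_run list[] = { {2, 4}, {8, 1}, {17, 3} };
    int n = sizeof(list) / sizeof(list[0]);

    for (int i = 0; i < n; i++)
        printf("run starting at block %u, length %u\n",
               list[i].first, list[i].count);
    return 0;
}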