
Solution Set: Operating System (FYBScIT, ATKT, March 2017)

1. Attempt any three of the following:


a. Write a note on Microkernel Systems.
A microkernel design of the operating system architecture targets robustness. In this
design the privileges granted to the individual parts of the operating system are
restricted as much as possible and the communication between the parts relies on
specialized communication mechanisms that enforce the privileges as necessary.
The communication overhead inside a microkernel operating system can be higher than
inside a monolithic system, so the design must keep this overhead manageable.
In reality very few individual parts of the operating system need to have more
privileges than common applications. The microkernel design therefore leads to a
small system kernel, accompanied by additional system applications that provide most
of the operating system features.
MACH is a prominent example of a microkernel that has been used in modern
operating systems, including the NextStep and OpenStep systems and, notably, OS X.
Microkernel Advantages
1. Service separation has the advantage that if one service (called a server) fails,
the others can still work, so reliability is the primary feature. For example, a
device driver that crashes does not cause the entire system to crash; only that
driver needs to be restarted rather than having the entire system die. This also
means more resilience, as one server can be substituted with another, and it
makes maintenance easier.
2. Different services are built into special modules which can be loaded or
unloaded when needed. Patches can be tested separately then swapped to take
over on a production instance.
3. Message passing allows independent communication between components and allows
extensibility.
4. The fact that there is no need to reboot the kernel implies rapid testing and
development.
5. Easy and faster integration with third-party modules.
Microkernel Disadvantages
1. The memory footprint is large.
2. Potential performance loss (more software interfaces due to message passing)
3. Message passing bugs are not easy to fix
4. Process management is complex.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 65)
b. Brief about Virtual memory architecture of operating system. Draw necessary
diagram.

A virtual machine is an illusion of a real machine. It is created by a real machine's
operating system, which makes a single real machine appear to be several real
machines. The architecture of the virtual machine is shown in the accompanying
diagram (not reproduced here). The best example of the virtual machine architecture
is the IBM 370 computer. In this system each user can choose a different operating
system; indeed, a virtual machine system can run several operating systems at once,
each of them on its own virtual machine.
It is like a multiprogramming system, but it shares the resources of a single machine
in a different manner. Some important concepts of virtual machines are explained below:
1. Control Program (CP): It creates the environment in which virtual machines can
execute. It gives each user the facilities of a real machine, such as a processor,
storage, and I/O devices.
2. Conversational Monitor System (CMS): It is a system application with
program-development features. It contains an editor, language translators, and
various application packages.
3. Remote Spooling Communications Subsystem (RSCS): It provides virtual machines
with the ability to transmit and receive files in a distributed system.
4. Interactive Problem Control System (IPCS): It is used to fix virtual machine
software problems.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 69)
c. Explain The Dining Philosophers Problem.
There are N philosophers sitting around a circular table eating spaghetti and
discussing philosophy. The problem is that each philosopher needs 2 forks to eat, and
there are only N forks, one between each pair of philosophers. Design an algorithm
that the philosophers can follow that ensures that none starves as long as each
philosopher eventually stops eating, and such that the maximum number of philosophers
can eat at once.
• Philosophers eat/think
• Eating needs 2 forks
• Pick one fork at a time
• How to prevent deadlock

The problem was designed to illustrate the problem of avoiding deadlock, a system
state in which no progress is possible. One idea is to instruct each philosopher to
behave as follows:
• think until the left fork is available; when it is, pick it up
• think until the right fork is available; when it is, pick it up
• eat
• put the left fork down
• put the right fork down
• repeat from the start
This solution is incorrect: it allows the system to reach deadlock. Suppose that all five
philosophers take their left forks simultaneously. None will be able to take their right
forks, and there will be a deadlock.
The solution presented below is deadlock-free and allows the maximum parallelism for
an arbitrary number of philosophers. It uses an array, state, to keep track of whether a
philosopher is eating, thinking, or hungry (trying to acquire forks). A philosopher may
move into eating state only if neither neighbour is eating. Philosopher i's neighbours
are defined by the macros LEFT and RIGHT. In other words, if i is 2, LEFT is 1 and
RIGHT is 3.
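A sketch of this solution in C, using POSIX semaphores; the thinking and eating
phases are stubbed with sleep(), and N = 5 and the timings are illustrative choices,
not part of the original answer:

#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define N        5
#define LEFT(i)  (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)

enum { THINKING, HUNGRY, EATING };

static int   state[N];   /* what each philosopher is doing */
static sem_t mutex;      /* protects the state array       */
static sem_t s[N];       /* one semaphore per philosopher  */

static void test(int i)  /* may philosopher i start eating? */
{
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        sem_post(&s[i]);              /* unblock philosopher i        */
    }
}

static void take_forks(int i)
{
    sem_wait(&mutex);
    state[i] = HUNGRY;
    test(i);                          /* try to acquire both forks    */
    sem_post(&mutex);
    sem_wait(&s[i]);                  /* block if forks not acquired  */
}

static void put_forks(int i)
{
    sem_wait(&mutex);
    state[i] = THINKING;
    test(LEFT(i));                    /* let a hungry left neighbour eat  */
    test(RIGHT(i));                   /* let a hungry right neighbour eat */
    sem_post(&mutex);
}

static void *philosopher(void *arg)
{
    int i = (int)(long)arg;
    for (;;) {
        sleep(1);                     /* think */
        take_forks(i);
        sleep(1);                     /* eat   */
        put_forks(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    sem_init(&mutex, 0, 1);
    for (long i = 0; i < N; i++) sem_init(&s[i], 0, 0);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}

Because test() lets a philosopher eat only when neither neighbour is eating, the
circular wait of the naive solution can never form.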
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 167)
d. How to implement Threads in the Kernel space and Threads in the User space?
Implementing Threads in User Space

In this approach, threads are implemented completely in user space; the kernel knows
nothing about them and manages ordinary single-threaded processes. The advantage of
this approach is that threads can be used even on an operating system that does not
support threads. Each process has a private thread table in user space that stores
the program counter, stack pointer, registers, state, etc. of each thread. The thread
table is managed by the run-time system. With user-space threads, each process can
use a different scheduling algorithm. The disadvantages are that a blocking system
call made by one thread blocks the entire process, and a page fault does the same.
(When the page of data requested by a program is not available in memory, it is
called a page fault.)
Implementing Threads in the Kernel Space

In this approach, no run-time system is needed in each process and there is no thread
table in user space. Instead, the kernel keeps a single thread table that tracks all
threads. To create or kill a thread, a kernel call is made. The problems of blocking
system calls and page faults are taken care of in this case. The disadvantage of this
implementation is that the processing overhead is high, since thread operations
require kernel calls. Also, signals cannot be managed as flexibly.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 108)
e. Explain the Barriers synchronization method.
Barriers
A barrier is a type of synchronization method. Some applications are divided into
phases and have the rule that no process may proceed into the next phase until all
processes are ready to proceed to it. This behaviour is achieved by placing a barrier
at the end of each phase: when a process reaches the barrier, it is blocked until all
processes have reached the barrier.

In the accompanying figure, the first part shows processes approaching a barrier; the
second part shows all processes but one blocked at the barrier; the third part shows
that when the last process arrives at the barrier, all of them are let through.

The potential problems in this method are as follows:


1. When sequential barriers using the same pass/block state variable are implemented,
a deadlock can happen at the first barrier whenever a thread reaches the second
barrier while some threads have still not left the first one.

2. Because all the threads repeatedly access the same global pass/block variable, the
communication traffic is rather high, which decreases scalability.
The sense-reversal centralized barrier, sketched below, is designed to resolve the
first problem. The second problem can be resolved by regrouping the threads and using
a multi-level barrier, e.g. a combining tree barrier. Hardware implementations may
also have the advantage of higher scalability.
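A minimal sketch of the sense-reversal centralized barrier in C11 atomics; the thread
count and the busy-waiting loop are illustrative choices:

#include <stdatomic.h>

#define NTHREADS 4

static atomic_int count = NTHREADS;  /* threads still to arrive       */
static atomic_int sense = 0;         /* flips each time barrier opens */

/* Each thread passes its own local_sense, initially 0. */
void barrier_wait(int *local_sense)
{
    *local_sense = !*local_sense;             /* phase we will wait for */
    if (atomic_fetch_sub(&count, 1) == 1) {   /* last thread to arrive  */
        atomic_store(&count, NTHREADS);       /* reset for next phase   */
        atomic_store(&sense, *local_sense);   /* release everyone       */
    } else {
        while (atomic_load(&sense) != *local_sense)
            ;                                 /* spin until it opens    */
    }
}

Because each phase waits on a fresh sense value, a thread racing ahead into the next
barrier cannot deadlock the threads still leaving the previous one.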

(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 146)
f. Consider the following set of processes, with the arrival times and the CPU burst
times given in milliseconds.

Process | Burst Time | Arrival Time
P1      | 15         | 0
P2      | 5          | 0
P3      | 13         | 0
Draw Gantt chart, calculate Turnaround Time, Waiting Time, Average Turnaround
Time and Average Waiting Time for….
1. First-Come First-Served. 2. Shortest Job First.
I) FIRST COME FIRST SERVED:
Gantt Chart

| P1 | P2 | P3 |
0    15   20   33

Process | Burst Time | Arrival Time | Start Time | Wait Time (Start - Arrival) | Finish Time (Start + Burst) | Turnaround Time (Finish - Arrival)
P1      | 15         | 0            | 0          | 0 - 0 = 0                   | 0 + 15 = 15                 | 15 - 0 = 15
P2      | 5          | 0            | 15         | 15 - 0 = 15                 | 15 + 5 = 20                 | 20 - 0 = 20
P3      | 13         | 0            | 20         | 20 - 0 = 20                 | 20 + 13 = 33                | 33 - 0 = 33
Now calculate:
Total Turnaround Time: (15 + 20 + 33) = 68 ms
Total Waiting Time: (0 + 15 + 20) = 35 ms
Average Turnaround Time: 68 / 3 = 22.67 ms
Average Waiting Time: 35 / 3 = 11.67 ms

II) SHORTEST JOB FIRST:


Gantt Chart

| P2 | P3 | P1 |
0    5    18   33

Process | Burst Time | Arrival Time | Start Time | Wait Time (Start - Arrival) | Finish Time (Start + Burst) | Turnaround Time (Finish - Arrival)
P1      | 15         | 0            | 18         | 18 - 0 = 18                 | 18 + 15 = 33                | 33 - 0 = 33
P2      | 5          | 0            | 0          | 0 - 0 = 0                   | 0 + 5 = 5                   | 5 - 0 = 5
P3      | 13         | 0            | 5          | 5 - 0 = 5                   | 5 + 13 = 18                 | 18 - 0 = 18

Now calculate:
Total Turnaround Time: (33 + 5 + 18) = 56 ms
Total Waiting Time: (18 + 0 + 5) = 23 ms
Average Turnaround Time: 56 / 3 = 18.67 ms
Average Waiting Time: 23 / 3 = 7.67 ms
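A small C program to check the arithmetic above; the process names and the all-zero
arrival times come from the question, and for SJF you simply list the processes in
ascending burst-time order:

#include <stdio.h>

int main(void)
{
    /* FCFS order; for SJF use { "P2", "P3", "P1" } / { 5, 13, 15 }. */
    const char *name[] = { "P1", "P2", "P3" };
    int burst[]        = { 15, 5, 13 };
    int clock = 0, wt_sum = 0, tat_sum = 0;

    for (int i = 0; i < 3; i++) {
        int wait = clock;              /* arrival time is 0 for all */
        int tat  = clock + burst[i];   /* finish - arrival          */
        printf("%s: wait=%d turnaround=%d\n", name[i], wait, tat);
        wt_sum  += wait;
        tat_sum += tat;
        clock   += burst[i];
    }
    printf("avg turnaround=%.2f ms, avg wait=%.2f ms\n",
           tat_sum / 3.0, wt_sum / 3.0);
    return 0;
}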
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 65)
2. Attempt any three of the following:
a. What are the design issues of a paging system?
In order to obtain good performance, we must consider the design issues of a paging
system.

Local versus Global Allocation Policies


A local PRA (Page Replacement Algorithm) is one in which a victim page is chosen
among the pages of the same process that requires a new page; that is, the number of
frames allocated to each process is fixed. So local LRU means evicting the page least
recently used by this process.

A global policy is one in which the choice of victim is made among all pages of all
processes.
If we apply global LRU extensively with some sort of round-robin processor scheduling
policy, and memory is somewhat overcommitted, then by the time we get around to a
process, all the others have run and have probably paged this process out.

If this happens, each process will page fault at a high rate; this is
called thrashing.
It is therefore important to get a good idea of how many pages a process needs, so
that we can balance the local and global desires. The working set W(t, w), the set of
pages referenced in the window of size w ending at time t, is good for this.
An approximation to the working set policy that is useful for determining how many
frames a process needs is the Page Fault Frequency (PFF) algorithm (sketched after
the list below).
• For each process, keep track of the page fault frequency, which is the number of
faults divided by the number of references.
• In practice, one must use a window or a weighted calculation, since it is really
the recent page fault frequency that matters.
• If the PFF is too high, allocate more frames to this process. Either:
1. raise its number of frames and use a local policy; or
2. bar its frames from eviction (for a while) and use a global policy.
• If there are not enough frames, the MPL (multiprogramming level) has to be reduced.
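A sketch of the PFF bookkeeping in C; the struct, the thresholds, and the per-window
reset are hypothetical, for illustration only:

#define PFF_HIGH 0.02   /* faults per reference: process needs frames */
#define PFF_LOW  0.005  /* faults per reference: process has headroom */

struct proc { long faults, refs; int frames; };

/* Called at the end of each measurement window. */
void pff_adjust(struct proc *p)
{
    double pff = (double)p->faults / (double)p->refs;
    if (pff > PFF_HIGH)
        p->frames++;                 /* faulting too often: grow   */
    else if (pff < PFF_LOW && p->frames > 1)
        p->frames--;                 /* plenty of headroom: shrink */
    p->faults = p->refs = 0;         /* start the next window      */
}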
Load Control
Despite good designs, the system may still thrash when the combined working sets of
all processes exceed the capacity of memory, i.e., when the PFF algorithm indicates
that some processes need more memory but none needs less. This situation can be
solved by swapping one or more processes to disk and dividing up the pages they held;
the degree of multiprogramming can be reconsidered, and a mix of CPU-bound and
I/O-bound processes can be used to balance the load.

Page Size

The page size is a parameter that can be chosen by the operating system, even after
the hardware has been designed. The page size must be a multiple of the disk block
size, because when copying out a page that ends in a partial disk block, we must do a
read/modify/write.

Characteristics of a large page size: It is good for user I/O. If I/O is done using
physical addresses, then I/O crossing a page boundary is not contiguous and hence
requires multiple I/O operations. If I/O uses virtual addresses, the page size
doesn't affect this aspect of I/O: the addresses are contiguous in virtual memory and
hence one I/O operation is done. A large page is also good for demand-paging I/O,
since it is better to swap in/out one big page than several small pages. But if the
page is too big, you will be swapping in data that is not really local and hence
might well not be used.

Separate Instruction and Data Spaces

If a single virtual address space is not large enough for both program and data, a
solution is to have separate address spaces for instructions and data, called I-space
and D-space, which doubles the available virtual address space. Both address spaces
can be paged, independently of one another. Each one has its own page table, with its
own mapping of virtual pages to physical page frames.

Shared Pages

The page table also enables page sharing.

Shared code: Here one copy of read-only code is shared among processes (e.g., text
editors, compilers, window systems). It is similar to multiple threads sharing the
same process space. Sharing of read-write pages, if allowed, is also useful for
inter-process communication.

Private code and data: Here each process keeps a separate copy of the code and data.
The pages for the private code and data can appear anywhere in the logical address
space.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 222)
b. Brief about the basic concept of segmentation
Segmentation is a memory-management scheme that supports the user's view of memory,
i.e., a collection of variable-sized segments with no necessary ordering among them.
Segments are numbered and are referred to by a segment number, so a logical address
consists of a two-tuple:
< segment-number, offset >
A program is a collection of segments: the compiler constructs separate segments for
the code, the global variables, the heap from which memory is allocated, the stacks
used by each thread, and the standard C library.
(Figures expected: the user's view of a program; the logical view of segmentation.)

Advantages: Segmentation simplifies the handling of growing data structures. It
allows programs to be altered and recompiled independently, without relinking and
reloading. It lends itself to sharing among processes and to protection. Some systems
combine segmentation with paging.
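A toy translation of the < segment-number, offset > two-tuple in C; the segment-table
contents are made-up values for illustration:

#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };

static struct segment seg_table[] = {
    { 0x1000, 0x400 },  /* segment 0: code */
    { 0x3000, 0x200 },  /* segment 1: data */
};

unsigned translate(unsigned segno, unsigned offset)
{
    if (offset >= seg_table[segno].limit) {   /* protection check */
        fprintf(stderr, "segmentation fault\n");
        exit(1);
    }
    return seg_table[segno].base + offset;    /* physical address */
}

int main(void)
{
    printf("<1, 0x10> -> 0x%x\n", translate(1, 0x10));  /* 0x3010 */
    return 0;
}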
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 240)
c. Explain WSClock Page Replacement Algorithm with an example.
The WSClock page replacement algorithm is not really a natural outgrowth of the idea
of a working set. It is, rather, a somewhat arbitrary hodge-podge of independent
ideas, one of which is dimly connected to the idea of a working set. All the same, it
is worth studying because:
1. It works pretty well and is in fairly common use.
2. The individual ideas are interesting.
3. Hodge-podges are more common and more important in CS generally, and OS in
particular, than the impression you generally get from theoreticians and
academicians.
It is an improvement of the working set page replacement algorithm. Here, pages are
put in a circular list, just like in a clock. For each entry, the time-of-last-use
field and the value of the referenced bit R are checked.
1) If R = 1, the page is not removed; its R bit is set to 0 and the clock hand moves
to the next page.
2) If R = 0, the age is calculated as (current virtual time - time of last use):
a) If age > t (the working-set window), the page is not in the working set and is
replaced by a new page.
b) If age <= t, the page is in the working set and is not removed; the clock hand
moves to the next page.
WSClock combines the clock structure with the working set idea. It gives good paging
performance and can be implemented efficiently; a few other good algorithms exist,
but it is probably among the most important in practice.
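One scan step of WSClock, sketched in C; the frame structure is hypothetical, and the
handling of dirty pages (which WSClock schedules for writing rather than evicting
immediately) is omitted for brevity:

struct frame {
    int  r;               /* referenced bit      */
    long last_use;        /* time of last use    */
    struct frame *next;   /* circular clock list */
};

/* Examine the frame under the clock hand. Returns the frame if it
 * may be evicted, or NULL if it stays and the hand should advance. */
struct frame *wsclock_step(struct frame *hand, long now, long t)
{
    if (hand->r) {                 /* used recently: give 2nd chance */
        hand->r = 0;
        return NULL;
    }
    if (now - hand->last_use > t)  /* older than the window: not in  */
        return hand;               /* the working set, so evict      */
    return NULL;                   /* in the working set: keep       */
}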
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 219)
d. Write a note on I-nodes and linked list allocation.
I-Nodes: I-nodes are also known as index nodes. In UNIX-like operating systems each
file is assigned an i-node that holds its attributes and the addresses of the disk
blocks that contain its data. I-nodes essentially break the FAT up into a bunch of
smaller tables: only the i-nodes for the files currently open need to be in memory.
I-nodes are used by the UNIX OS. All the attributes of a file are stored in its
i-node entry, which is loaded into memory when the file is opened. The i-node also
contains a number of direct pointers to disk blocks; typically there are twelve
direct pointers. In addition there are three indirect pointers. These point to
further data structures that eventually lead to a disk block address: the first
provides a single level of indirection, the next is a double indirect pointer, and
the third is a triple indirect pointer. Depending on the size of the disk (and the
file), the triple indirect pointer may not be needed, as all the blocks associated
with the file can be reached through the other pointers.
This system has some advantages: it keeps the flexibility of a linked structure,
gives slightly faster random access than file allocation tables, and only needs to
hold the allocation information for open files in memory. On the other hand, as a
disadvantage, each i-node has a fixed number of allocation entries for listing
physical disk blocks.
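A simplified i-node layout in C, reflecting the description above; the field widths
are illustrative, not the exact on-disk format:

#define NDIRECT 12

struct inode {
    unsigned short mode;            /* file type and protection bits   */
    unsigned short uid, gid;        /* owner and group                 */
    unsigned short nlink;           /* directory entries pointing here */
    unsigned int   size;            /* file size in bytes              */
    unsigned int   atime, mtime;    /* last access / last modification */
    unsigned int   direct[NDIRECT]; /* first 12 data-block addresses   */
    unsigned int   single;          /* single indirect block           */
    unsigned int   dbl;             /* double indirect block           */
    unsigned int   triple;          /* triple indirect block           */
};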

Linked List Allocation: In this approach the blocks of a file are represented using a
linked list. All that needs to be stored is the address of the first block that the
file occupies; each block of the file contains not only data but also a pointer to
the next block. (A diagram would show such an implementation for two files.)

This method has the following advantages:
• File growth, shrinkage, and deletion do not lead to external disk fragmentation.
• Only the first block needs to be stored in the directory entry.
• Sequential file access is still reasonably good.

This method has the following disadvantages:
• Random access is very slow.
• Space is lost within each block due to the pointer. This prevents the data portion
from being a power of two bytes, which is not serious but does have an impact on
performance.
• Reliability can be a problem: one corrupt block pointer is enough to corrupt the
whole chain.
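A sketch of linked-list allocation in C; BLOCK_SIZE and the read_block() helper are
hypothetical:

#define BLOCK_SIZE 4096

struct block {
    unsigned next;   /* disk address of the next block; 0 = end of file */
    char data[BLOCK_SIZE - sizeof(unsigned)];  /* space lost to pointer */
};

struct block *read_block(unsigned blockno);  /* hypothetical disk read */

/* Random access is slow: reaching block n needs n sequential reads. */
unsigned nth_block(unsigned first, unsigned n)
{
    unsigned cur = first;
    while (n-- && cur)
        cur = read_block(cur)->next;
    return cur;
}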
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 286)
e. List and explain any five file operations.
1. Creating a file: Two steps are necessary to create a file. First, space in the file
system must be found for the file. Second, an entry for the new file must be made in the
directory. The directory entry records the name of the file and the location in the file
system.

2. Writing a file: To write a file, we make a system call specifying both the name of the
file and the information to be written to the file. Given the name of the file, the system
searches the directory to find the location of the file. The system must keep a write
pointer to the location in the file where the next write is to take place. The write
pointer must be updated whenever a write occurs.

3. Reading a file: To read from a file, we use a system call that specifies the name of
the file and where (in memory) the next block of the file should be put. Again, the
directory is searched for the associated directory entry, and the system needs to keep a
read pointer to the location in the file where the next read is to take place.

4. Repositioning within a file: The directory is searched for the appropriate entry,
and the current-file-position is set to a given value. Repositioning within a file does not
need to involve any actual I/O.

5. Deleting a file: To delete a file, we search the directory for the named file. Having
found the associated directory entry, we release all file space, so that it can be reused
by other files, and erase the directory entry.

6. Truncating a file: The user may want to erase the contents of a file but keep its
attributes. Rather than forcing the user to delete the file and then recreate it,
this operation leaves all attributes unchanged except the file length, which is reset
to zero.

Files also support operations like Open, Close, Rename etc.
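The same operations expressed as POSIX system calls in C; the file name is arbitrary
and error checking is omitted for brevity:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("notes.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644); /* create */
    write(fd, "hello\n", 6);       /* write: advances the write pointer */
    close(fd);

    char buf[6];
    fd = open("notes.txt", O_RDONLY);
    read(fd, buf, sizeof buf);     /* read: advances the read pointer   */
    lseek(fd, 0, SEEK_SET);        /* reposition within the file        */
    close(fd);

    truncate("notes.txt", 0);      /* truncate: attributes are kept     */
    unlink("notes.txt");           /* delete: erase the directory entry */
    return 0;
}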


(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 272)
f. Explain UNIX V7 file system.
In this file system each directory entry contains a name and the number of the
corresponding i-node; the metadata for a file or directory is stored in that i-node.
Early UNIX limited file names to 14 characters, stored in a fixed-length field (in
later file systems the name field is of varying length and file names can be quite
long). Looking up each path component takes two steps: get the i-node, then get the
file or subdirectory. This shows how important it is not to parse file names for each
I/O operation, i.e., why the open() system call is important.

The UNIX V7 file system was used on PDP-11 minicomputers. Files are placed in a tree
structure with a root directory; new items added to a directory in the tree are known
as links. File names can be at most 14 characters; any ASCII characters can be used
in a file name except the forward slash (/), since it is used as the separator in
path names.

UNIX V7 directory entry: Here are the components of a UNIX V7 directory entry:
1) I-node number: It identifies the i-node of the file. It has a size of 2 bytes. The
i-node holds attributes including the file size, information about the creation, last
access, and last modification of the file, the group, protection information, the
owner, and a count of the number of directory entries that point to the i-node.
Whenever a link is added the count is increased by 1, and whenever a link is removed
it is decreased by 1; when it drops to 0, the i-node and the file's blocks can be
reclaimed.

2) File name: It holds the file name. It has a size of 14 bytes.
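The 16-byte V7 directory entry is small enough to show directly as a C struct:

struct v7_dirent {
    unsigned short d_ino;      /* i-node number (2 bytes); 0 = unused slot */
    char           d_name[14]; /* file name; padded with zeros, and not    */
                               /* NUL-terminated if exactly 14 chars long  */
};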

(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 323)
3. Attempt any three of the following:
a. What are the goals of the I/O Software?
One of the main functions of an operating system is to manage various I/O devices,
including mice, keyboards, touchpads, disk drives, display adapters, USB devices,
bit-mapped screens, LEDs, analog-to-digital converters, on/off switches, network
connections, audio I/O, printers, etc.
Goals of the I/O Software
Following are the goals of I/O software.
1. Device independence: The programs shouldn't depend on the actual device being
used e.g., keyboard, disk, screen.
2. Uniform naming: The name of a file or device should be independent of the device
used.
3. Error handling: Errors should be handled as close to the hardware as possible, so
that minimum damage results.
4. Synchronous vs asynchronous transfers: The DMA enables much I/O to be
asynchronous. Synchronous I/O is often easier to program with. The OS can make
asynchronous I/O look synchronous.
5. Buffering: Data often must be stored temporarily before it can be delivered. For
example, a hard-disk controller buffers disk-block reads in order to check for
errors before transferring the block to memory. Such copying can slow I/O
operations and isn't always desirable.
6. Shareable vs dedicated: Some devices are easily shared while others are not
(e.g., printers). Dedicated devices introduce the problem of deadlock.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 351)
b. Define deadlock. Give example for the same.
A deadlock occurs when every member of a set of processes is waiting for an event
that can only be caused by another member of the set. Generally the event waited for
is the release of a resource. Deadlocks are sometimes also called gridlocks.

Deadlocks can occur when processes have been granted exclusive access to devices,
files, and so forth. A resource can be a hardware device or a piece of information;
here it is anything that can be used by only a single process at any instant of time.

A set of processes is in a deadlock state if each process in the set is waiting for
an event that can be caused only by another process in the set: each member of the
set of deadlocked processes is waiting for a resource that can be released only by
another deadlocked process. None of the processes can run, none of them can release
any resources, and none of them can be awakened. It is important to note that the
number of processes and the number and kind of resources possessed and requested are
unimportant, and the resources may be either physical or logical. Examples of
physical resources are printers, tape drives, memory space, and CPU cycles; examples
of logical resources are files, semaphores, and monitors.

Consider two processes T and S that each want to print a file currently on tape.
1. T has obtained ownership of the printer and will release it after printing one file.
2. S has obtained ownership of the tape drive and will release it after reading one file.
3. T tries to get ownership of the tape drive, but is told to wait for S to release it.
4. S tries to get ownership of the printer, but is told to wait for T to release it.
The above situation is a deadlock, sketched in code below.
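The same scenario sketched in C, with two pthreads mutexes standing in for the
printer and the tape drive:

#include <pthread.h>

pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t tape    = PTHREAD_MUTEX_INITIALIZER;

void *proc_T(void *arg)               /* printer first, then tape */
{
    pthread_mutex_lock(&printer);
    pthread_mutex_lock(&tape);        /* blocks forever if S holds tape */
    /* ... print the file from tape ... */
    pthread_mutex_unlock(&tape);
    pthread_mutex_unlock(&printer);
    return NULL;
}

void *proc_S(void *arg)               /* tape first, then printer */
{
    pthread_mutex_lock(&tape);
    pthread_mutex_lock(&printer);     /* blocks forever if T holds printer */
    /* ... read the file from tape ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&tape);
    return NULL;
}

If T and S each acquire their first lock before trying the second, neither can
proceed. Making both threads acquire the locks in the same global order removes the
circular wait.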
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 435)
c. What is interrupt? Explain its types.
An interrupt is a signal from a device attached to a computer, or from a program
within the computer, that causes the main program operating the computer to stop and
figure out what to do next. A single CPU can execute only one instruction at a time,
but because it can be interrupted, it can take turns among the programs or sets of
instructions it performs. This is known as multitasking, and it allows the user to do
a number of different things at the same time.

Device controllers generate interrupts by putting a specific signal on part of the
system bus. The interrupt controller then detects interrupts on the system bus; if no
other interrupts are pending and no higher-priority interrupt arrives simultaneously,
the interrupt is serviced.

There are two types of interrupt-initiated I/O: vectored interrupts, in which the
interrupting source supplies the branch information (called the interrupt vector) to
the computer, and non-vectored interrupts, in which the branch address is assigned to
a fixed location in memory.

Interrupt types: Interrupt types are categorized by examining how the CPU executes
instructions. There are two types:
1. Precise interrupts are characterized by the instructions "before" the program
counter (PC) having all been completed, and those "after" the PC not having been
executed. The instruction the PC points to may or may not have completed, but its
status is clearly indicated by the CPU. This makes saving the current state of an
interrupted process much easier.
2. Imprecise interrupts occur when some instructions "before" the PC may not have
completed, while some instructions "after" the PC may have completed. This greatly
complicates saving the current state of the running process.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 347)
d. Write a note on power management.
The first electronic computer (ENIAC) ran on 18,000 vacuum tubes and consumed
140,000 watts of power, which was very costly. After the invention of the transistor,
power usage dropped radically. Nowadays people are again thinking about power
consumption, in which the operating system plays a vital role. A typical desktop PC
has a 200-watt power supply; if 100 million such machines are turned on at once
worldwide, together they use 20,000 megawatts of electricity, roughly the output of
20 nuclear power plants. If this consumption were cut in half, we would save the
energy equivalent of 10 nuclear power plants, a big difference from an environmental
point of view.
In battery-powered computers such as notebooks, handhelds, and Webpads, energy
consumption is a big issue because the batteries hold only a few hours of charge.
Despite a great deal of research, no remarkable breakthrough has yet been produced,
so the most effective approach is simply to save energy. The operating system plays a
major role here.
Hardware vendors are trying to make their electronics more energy efficient.
Techniques used include reducing transistor size, employing dynamic voltage scaling,
using low-swing and adiabatic buses, and similar approaches.
(Hardware Issues, Operating System Issues, Hard Disk, CPU, Memory, Wireless
Communication- energy management related to these issues can also be
considered)
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 417)
e. Explain Starvation.
Starvation is a situation where a process is perpetually denied necessary resources;
without those resources, the process can never finish its task. Starvation is related
to deadlock: it is encountered when multiple threads or processes wait for the same
resource, but unlike deadlock, the system as a whole keeps making progress.

In order to get out of a deadlock, one of the processes or threads has to give up or
roll back so that another thread or process can use the resource. If this happens
repeatedly and the same process or thread has to give up or roll back each time while
letting other processes or threads use the resource, then that process or thread will
undergo a situation called starvation.

Therefore, breaking a deadlock can itself cause starvation, which is why starvation
is sometimes called a kind of livelock. When there are many high-priority processes
or threads, a lower-priority process or thread will always starve. There can be many
kinds of starvation, such as starving on resources and starving on the CPU.

Causes of Starvation
• Processes hand resources on to other processes without control.
• Processes' priorities are strictly enforced.
• "Random" selection is used.
• There are not enough resources.

Starvation can happen at any organised scheduling level, though it is more likely in
the automatic allocation processes than in the higher-level manual parts.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 463)
f. Describe Livelock.
There is a variant of deadlock called livelock. This is a situation in which two or
more processes continuously change their state in response to changes in the other
processes without doing any useful work. It is similar to deadlock in that no
progress is made, but it differs in that no process is blocked or waiting for
anything.
A real-world example of livelock occurs when two people meet face-to-face in a narrow
corridor, and each tries to be polite by moving aside to let the other pass, but they
end up swaying from side to side without making any progress because they both
repeatedly move the same way at the same time.
Livelock is a risk with some algorithms that detect and recover from deadlock: if
more than one process takes action, the deadlock detection algorithm can be triggered
repeatedly. This can be avoided by ensuring that only one process (chosen randomly or
by priority) takes action. Attempting to use back-off (retry later) or
resource-preemption approaches to prevent deadlock may also cause livelock: the
processes don't deadlock, but they fail to make progress either.
For example, suppose Mr. X and Mr. Y both want to listen to a Lamest Z.95 hits CD on
a personal CD player. Suppose:
• Mr. X has the CD player but wants the CD.
• Mr. Y has the CD but wants the CD player.
• Mr. X steals (i.e., preempts) the CD from Mr. Y. Meanwhile, Mr. Y steals the CD
player from Mr. X.
• Now Mr. X has the CD and Mr. Y has the CD player.
• Mr. X steals the CD player from Mr. Y. Meanwhile, Mr. Y steals the CD from Mr. X.
• Now Mr. X has the CD player and Mr. Y has the CD.
It's livelock!
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 464)

4. Attempt any three of the following:


a Give advantages of Cloud Computing.
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources (e.g., networks, servers,
storage, applications, and services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction. This cloud model is
composed of five essential characteristics, three service models, and four deployment
models.
Advantages of Cloud Computing:
Cloud computing provides numerous benefits both to end users and to businesses of all
sizes. It sets up a virtual office that provides the flexibility of connecting to the
business anywhere, any time. The following are the benefits of moving a business to
the cloud.
Cost Efficiency: The cloud is available at a lower initial cost than traditional
technology. We can save on licensing fees and eliminate charges such as storage
costs, software updates, etc.
Scalability: Scalability is a built-in feature of cloud deployments; cloud instances
are deployed automatically when needed.
Backup and Recovery: It provides flexible and reliable backup and recovery solutions.
Unlimited Storage: It provides practically unlimited storage capacity, so there is no
need to worry about running out of storage space.
Easy Deployment: It allows us to deploy quickly, which is one of the most important
advantages of this technology: the entire system can be fully functional within a few
minutes.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 471)
b How to migrate a virtual machine more quickly?
Migration is the process of moving a virtual machine from one host or storage
location to another. Copying a virtual machine creates a new virtual machine. It is not
a form of migration.
Virtual machine migration is one of the most important benefits of server
virtualization technology but it can be complicated to implement. Luckily, several
tools can help us migrate a virtual machine and address security concerns.
The ability to migrate a virtual machine (VM) from one physical host to another can
significantly boost an organization's disaster recovery efforts and improve business
agility. It also comes in handy when an administrator needs to shut down a physical
server for maintenance or upgrades because server downtime no longer equals
application downtime.

Migrating a virtual machine quickly:


A virtual machine migration usually takes two to five hours, but with the help of
migration tools this time can be reduced to 30 minutes or less. Live migration
technologies from VMware, Microsoft, and Citrix can migrate a virtual machine faster
and with no downtime, and third-party vendor tools can automate the VM migration
process.
In Virtual Machine Manager (VMM) 2008, if you migrate a running virtual machine,
VMM places the virtual machine into a saved state during the migration. You can
migrate a virtual machine between hosts that are using the same virtualization
software or from a Virtual Server host to a Hyper-V host. And, you can migrate a
virtual machine’s files to a different storage location on the same host.
VMware vMotion is a vSphere feature that allows you to move a running VMware
virtual machine from one host to another, with no significant impact on your
production environment.
We can also migrate virtual machines in the vSphere Client, and a virtual machine can
be migrated quickly from one ESX host to another, faster host.

Live migration:
It is the process of migrating a virtual machine from one physical host to another
without significantly interrupting application availability. Live migration captures
a VM's complete memory state and the state of all its processor registers and sends
that data to memory space on another server. That server then loads the processor
registers, and the VM picks up right where it left off. All of the major
virtualization vendors offer live migration: VMware offers vMotion, Microsoft has
Live Migration in Hyper-V, and Citrix Systems has XenMotion.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 496)
c Explain the 2x2 Multistage Switching Network.
Multiprocessors are usually referred to as shared-memory processors. They allow the
development of parallel software that supports sharing of code and data; such a
machine is also referred to as a parallel random access machine (PRAM).
There are two major groups of multiprocessors:
• UMA (Uniform Memory Access): All processors share a single centralized primary
memory, so each CPU has the same memory access time; every memory word can be read
as fast as any other memory word.
• NUMA (Non-uniform Memory Access): The processors do not share a single memory.
These systems have a shared logical address space, but physical memory is
distributed among the CPUs, so the access time to data depends on its position, in
local or in remote memory.

Multistage Switching Networks: A different multiprocessor architecture can be
achieved using the normal 2x2 switch. A message arrives on either input A or B and is
routed, according to information in its header, to output X or Y. The message header
is composed of four fields:

1. Module: tells which memory to use
2. Address: specifies an address within the module
3. Opcode: gives the operation (READ or WRITE)
4. Value: may contain an operand (e.g., a word to be written)

The switch looks at the Module field and decides whether the message should be sent
to output X or to output Y.

Using the 2x2 switch, larger networks can be built; one example is the omega network.
An omega network connecting n CPUs to n memories needs log2(n) stages, with n/2
switches per stage, so only (n/2)·log2(n) switches are needed instead of n². (The
usual figure illustrates how two CPUs, A (001) and B (011), access two different
memory modules, M (001) and N (110).)
If another CPU wants to access the same memory module simultaneously, it has to
wait. Therefore the omega network is a blocking network.
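A sketch of the routing rule in C: at each stage, one bit of the destination Module
field steers the 2x2 switch (0 = upper output X, 1 = lower output Y). The values used
are illustrative:

#include <stdio.h>

void route(unsigned module, int stages)
{
    for (int k = stages - 1; k >= 0; k--)        /* MSB first */
        printf("stage %d: output %c\n",
               stages - 1 - k, ((module >> k) & 1) ? 'Y' : 'X');
}

int main(void)
{
    route(6, 3);   /* memory module N = 110 -> Y, Y, X */
    return 0;
}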

(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 523)
d State and explain the Type 1 and Type 2 Hypervisors.
A hypervisor, also known as a Virtual Machine Monitor (VMM), can be a piece of
software, firmware, or hardware that gives the guest machines (virtual machines) the
impression that they are operating on physical hardware. It allows multiple operating
systems to share a single host and its hardware: the hypervisor manages requests by
virtual machines to access the hardware resources (RAM, CPU, NIC, etc.), with each VM
acting as an independent machine.

Hypervisor is mainly divided into two types namely


 Type 1/Native/Bare Metal Hypervisor
 Type 2/Hosted Hypervisor

Type 1 Hypervisor
This is also known as Bare Metal or Embedded or Native Hypervisor. It works directly
on the hardware of the host and can monitor operating systems that run above the
hypervisor. It is completely independent from the Operating System. This hypervisor is
small as its main task is sharing and managing hardware resources between different
operating systems. A major advantage is that any problems in one virtual machine or
guest operating system do not affect the other guest operating systems running on the
hypervisor.

Examples: VMware ESXi Server, Microsoft Hyper-V, Citrix/Xen Server

Type 2 Hypervisor
This is also known as a hosted hypervisor. In this case, the hypervisor is installed
on an operating system and then supports other operating systems above it. It is
completely dependent on the host operating system for its operations. While having a
base operating system allows better specification of policies, any problem in the
base operating system affects the entire system as well, even if the hypervisor
running above the base OS is secure.

Examples: VMware Workstation, Microsoft Virtual PC, Oracle Virtual Box

(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 477)
e Explain Master-Slave Multiprocessors.
Multiprocessor operating systems are similar to regular operating systems. They
handle system calls, do memory management, provide a file system, and manage I/O
devices. They also have some unique features. These include process synchronization,
resource management, and scheduling. We will get a brief look at multiprocessor
hardware and then will see these operating systems issues.

Master-Slave Multiprocessors: In this architecture both the I/O devices and the
memory are shared by the CPUs. A single CPU is selected as the master, and it runs
the single instance of the OS; the remaining CPUs work as slaves. All system calls
are handled by the master CPU, which also schedules which processes run on the
slaves, balancing the load among them. Page frames are shared in the single memory
and there is only one set of buffer caches, on the master, which addresses some of
the problems that arise when each CPU runs a separate OS. The master is the
bottleneck of this design, which limits the number of slaves it can keep up with.
The following figure demonstrates the master-slave architecture.

The master-slave model solves most of the problems of the other models. There is a
single data structure (e.g., one list or a set of prioritized lists) that keeps track of
ready processes. When a CPU goes idle, it asks the operating system for a process to
run and it is assigned one. Thus it can never happen that one CPU is idle while
another is overloaded. Similarly, pages can be allocated among all the processes
dynamically and there is only one buffer cache, so inconsistencies never occur.

The problem with this model is that with many CPUs, the master will become a
bottleneck. After all, it must handle all system calls from all CPUs. If, say, 10% of all
time is spent handling system calls, then 10 CPUs will pretty much saturate the
master, and with 20 CPUs it will be completely overloaded. Thus this model is simple
and workable for small multiprocessors, but for large ones it fails.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 532)
f Write a note on Document-Based Middleware.
Every Web page has a unique address, called a URL (Uniform Resource Locator), of
the form protocol://DNS-name/file-name. The protocol is most commonly http
(HyperText Transfer Protocol), but ftp and others also exist. Then comes the DNS name
of the host containing the file. Finally, there is a local file name telling which file is
needed. Thus a URL uniquely specifies a single file worldwide.
The Web is fundamentally a client-server system, with the user being the client and the
Website being the server. When the user provides the browser with a URL, either by
typing it in or clicking on a hyperlink on the current page, the browser takes certain
steps to fetch the requested Web page. As a simple example, suppose the URL provided
is http://www.muresults.net/index.html. The browser then takes the following steps to
get the page.
1. The browser asks DNS for the IP address of www.muresults.net
2. DNS replies with 50.62.160.33
3. The browser makes a TCP connection to port 80 on 50.62.160.33
4. It then sends a request asking for the file index.html.
5. The www.muresults.net server sends the file index.html.
6. The browser displays all the text in index.html.
7. Meanwhile, the browser fetches and displays all images on the page.
8. The TCP connection is released.
To a first approximation, that is the basis of the Web and how it works. Many other
features have since been added to the basic Web, including style sheets, dynamic Web
pages that are generated on the fly, Web pages that contain small programs or scripts
that execute on the client machine, and more.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 576)

5. Attempt any three of the following:


a Describe Linux kernel with appropriate diagram.
The Linux kernel consists of several important parts: process management, memory
management, hardware device drivers, file system drivers, network management, and
various other bits and pieces. Following figures sows some of them.

The most important parts of the kernel are memory management and process management.
Memory management takes care of assigning memory areas and swap-space areas to
processes, to parts of the kernel, and to the buffer cache. Process management
creates processes and implements multitasking by switching the active process on the
processor.

At the lowest level, the kernel contains a hardware device driver for each kind of
hardware it supports, and it supports a large number of them. The Linux kernel also
has system libraries: special functions or programs through which application
programs or system utilities access the kernel's features. These libraries implement
most of the functionality of the operating system and do not require kernel-mode
access rights.
The various network protocols have been abstracted into one programming interface,
the BSD socket library. The virtual file system (VFS) layer abstracts file system
operations away from their implementations: each file system type provides an
implementation of each operation, and when some entity tries to use a file system,
the request goes via the VFS, which routes it to the proper file system driver.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 731)
b Write a short note on Synchronization in Linux.
When porting software from a single-core environment to run on a multi-core cluster,
we might need to modify code to perform the following operations:
• Enforce a particular order of execution.
• Control parallel access to shared peripherals or global data.
The Linux kernel provides several synchronization mechanisms (a sketch follows the
list):
1. Atomic operations: the simplest approach to avoiding race conditions.
2. Semaphores: another kind of synchronization mechanism provided by the
Linux kernel.
3. Spin-locks: a special type of synchronization mechanism, preferable in
multiprocessor (SMP) systems.
4. Sequence locks: a very useful mechanism providing a lightweight and
scalable lock for scenarios where many readers and a few writers are
present.
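A kernel-module-style sketch of the first and third mechanisms in C (a fragment under
the usual kernel headers, not a complete module):

#include <linux/spinlock.h>
#include <linux/atomic.h>

static DEFINE_SPINLOCK(counter_lock);
static atomic_t hits = ATOMIC_INIT(0);      /* atomic counter    */
static unsigned long shared_total;          /* needs a real lock */

void record_hit(unsigned long amount)
{
    unsigned long flags;

    atomic_inc(&hits);   /* single update: lock-free and race-free */

    spin_lock_irqsave(&counter_lock, flags);  /* compound update:  */
    shared_total += amount;                   /* read-modify-write */
    spin_unlock_irqrestore(&counter_lock, flags);
}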

The Linux kernel uses a number of different synchronization primitives for this
purpose. Process scheduling rests on the principle of time slicing, where each
process is given a little time for its execution. A fork() creates a child process
that is a duplicate of the parent process, and every process in Linux has a PID
(process ID) which differentiates between processes. The parent process might
terminate before the child process; in that case the child is referred to as an
orphaned process. Linux can recognize a deadlock situation and sort it out: when a
deadlock occurs, Linux prints an error message and kills the parent process. Linux
provides primitives known as semaphores with which multiple processes synchronize
their access to shared memory. Linux also provides several types of synchronization
variables, both used internally in the kernel and available to user-level
applications and libraries.
Linux provides wrappers around the hardware-supported atomic instructions, via
operations such as atomic_set and atomic_read. In addition, since modern hardware
reorders memory operations, Linux provides memory barriers: using operations like
rmb and wmb guarantees that all read/write memory operations preceding the barrier
call have completed before any subsequent accesses take place. In these ways
synchronisation is achieved in Linux.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 750)
c Explain any five memory management system calls in Windows.
1. VirtualAlloc: Reserve or commit a region
2. VirtualFree: Release or decommit a region
3. VirtualProtect: Change the read/write/execute protection on a region
4. VirtualQuery: Inquire about the status of a region
5. VirtualLock: Make a region memory resident (i.e., disable paging for it)
6. VirtualUnlock: Make a region pageable in the usual way
7. CreateFileMapping: Create a file-mapping object and assign it a name
8. MapViewOfFile: Map (part of) a file into the address space
9. UnmapViewOfFile: Remove a mapped file from the address space
10. OpenFileMapping: Open a previously created file-mapping object

(One or two line explanation of any five system calls expected.)
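A short usage sketch of four of these calls in C; the 64 KB region size is arbitrary:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 64 * 1024;          /* reserve and commit 64 KB */
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                           PAGE_READWRITE);
    if (p == NULL) return 1;

    DWORD old;
    VirtualProtect(p, size, PAGE_READONLY, &old);  /* change protection */

    MEMORY_BASIC_INFORMATION mbi;
    VirtualQuery(p, &mbi, sizeof mbi);             /* inquire on region */
    printf("state=0x%lx protect=0x%lx\n", mbi.State, mbi.Protect);

    VirtualFree(p, 0, MEM_RELEASE);                /* release region    */
    return 0;
}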


(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 756)
d Write a note on caching in Windows.
It is the feature that improves system performance by using fast volatile memory
(RAM) to collect write commands sent to data storage devices and cache them until the
slower storage device can be written to later. In Windows caches file data, read from
disks and written to disks. The read operations, read file data from an area in system
memory known as the system file cache. The write operations, write file data to the
system file cache. This type of cache is referred to as a write back cache. In windows
caching is managed per file object. It occurs under the direction of the cache manager,
which operates continuously while Windows is running.
File data in the system file cache is written to the disk at intervals determined by
the OS, and the memory previously used by that file data is freed; this is referred
to as flushing the cache. The policy of delaying the write of data to the file and
holding it in the cache until the cache is flushed is called lazy writing, and it is
triggered by the cache manager at a set time interval. The time at which a block of
file data is flushed is partially based on the amount of time it has been stored in
the cache and the amount of time since the data was last accessed in a read
operation. This ensures that file data that is frequently read stays accessible in
the system file cache for the maximum amount of time. (This file data caching process
is illustrated in the accompanying figure.)

(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 942)

e Explain process lifecycle in Android.


Developers must understand the Android application life-cycle to manage resources
efficiently and ensure an effective response to users. Every Android application runs
inside its own instance of the Dalvik Virtual Machine (DVM).
An Android application does not completely control its own life-cycle; it has only
limited control over it, so in Android every application should be prepared for
untimely termination. Android gives priority to ensuring a smooth user experience by
killing or terminating other processes; applications are terminated based on the
priority and state of their process. If two applications have the same priority, the
process that has been at that priority longest is killed first.

Android application priority: The sequence in which processes are killed to reclaim
resources is determined by the priority of their hosted applications. An Android
process can be in one of the following states.

1. Active process: These processes are given the highest level of priority; they host
application components with which the user is interacting. A process is said to be
active when it hosts any of the following:
• an activity that the user is currently interacting with;
• a service that is bound to the activity the user is interacting with;
• a BroadcastReceiver that is executing its onReceive() method;
• a service executing its onCreate(), onResume(), or onStart() callbacks.

2. Visible process: These processes are visible but are not in the foreground or
responding to the user. Only a few visible processes exist, and they will be killed
only under extreme circumstances.

3. Service process: This is a process running a service that was started with a
startService() call. Since services do not directly interact with the user, they
receive lower priority than the above two kinds of process. When the system
terminates a running service, it attempts to restart it when resources become
available. Examples: playing music, downloading data, etc.

4. Background process: These are processes hosting activities that are not visible to
the user (their onStop() method has been called). Usually there are many background
processes running; they are killed on a least-recently-used (LRU) basis to free up
resources.

5. Empty process: This is a process that does not hold any active application
components. The only reason to keep this kind of process alive is for caching
purposes, to improve start-up time the next time a component needs to run in it. The
system often kills these processes in order to balance overall system resources
between process caches and the underlying kernel caches.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 809)
f How Android supports security?
In Android, apps cannot directly interact with each other, nor with any other
processes on the system. The processes that start during booting run as the root
user, but anything started after that runs under its own user ID (UID).

Android has a few separate file systems. The /system file system is essentially the
ROM: it is mounted read-only and contains the Android OS, system libraries and apps,
system executables, and so on. The application developer and user have no access to
that file system, and it contains no user data, so it does not need to be backed up
or encrypted.

The /data partition is mounted read-write and contains all of the downloaded apps and
the storage for all apps. The /data/data directory is the location where apps store
their data: a subdirectory named after each app is created, owned by its UID/GID,
with permissions that do not allow access from other UIDs.

A common security method Android uses is explicit user interfaces for allowing or
removing specific types of access. In this approach, an application indicates that it
can optionally provide some functionality, and a system-supplied trusted user
interface provides control over this access.

Android's Five Key Security Features:


1. Security at the operating system level through the Linux kernel
2. Mandatory application sandbox
3. Secure interprocess communication
4. Application signing
5. Application-defined and user-granted permissions

The following core security features help us to build secure apps:
• The Android Application Sandbox, which isolates app data and code execution from
other apps.
• An application framework with robust implementations of common security
functionality such as cryptography, permissions, and secure IPC.
• Technologies like ASLR, NX, ProPolice, safe_iop, OpenBSD dlmalloc, OpenBSD calloc,
and Linux mmap_min_addr to mitigate risks associated with common memory-management
errors.
• An encrypted file system that can be enabled to protect data on lost or stolen
devices.
• User-granted permissions to restrict access to system features and user data.
• Application-defined permissions to control application data on a per-app basis.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page No: 838)

_____________________________
