OS
The problem was designed to illustrate the challenge of avoiding deadlock, a system
state in which no progress is possible. One idea is to instruct each philosopher to
behave as follows:
• think until the left fork is available; when it is, pick it up
• think until the right fork is available; when it is, pick it up
• eat
• put the left fork down
• put the right fork down
• repeat from the start
This solution is incorrect: it allows the system to reach deadlock. Suppose that all five
philosophers take their left forks simultaneously. None will be able to take their right
forks, and there will be a deadlock.
The solution presented below is deadlock-free and allows the maximum parallelism for
an arbitrary number of philosophers. It uses an array, state, to keep track of whether a
philosopher is eating, thinking, or hungry (trying to acquire forks). A philosopher may
move into eating state only if neither neighbour is eating. Philosopher i's neighbours
are defined by the macros LEFT and RIGHT. In other words, if i is 2, LEFT is 1 and
RIGHT is 3.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 167)
d. How to implement Threads in the Kernel space and Threads in the User space?
Implementing Threads in User Space
In this approach, threads are implemented entirely in user space; the kernel knows
nothing about them and manages ordinary single-threaded processes. The advantage is
that user-level threads can be used on an operating system that doesn't itself support
threads. Each process keeps a private thread table in user space that stores each
thread's program counter, stack pointer, registers, state, etc.; this table is managed
by the run-time system. Because scheduling is done in user space, each process can use
a different scheduling algorithm. The disadvantages are that a blocking system call
made by one thread blocks the entire process, and a page fault by one thread likewise
stalls all the threads in the process. (A page fault occurs when a page requested by a
program is not present in memory.)
Implementing Threads in the Kernel Space
In this approach, no run-time system is needed and there is no per-process thread
table in user space. Instead, all threads are tracked by a single thread table in
kernel space. To create or kill a thread, the thread makes a kernel call. The blocking
system call and page fault problems are taken care of here: when one thread blocks,
the kernel can schedule another. The disadvantages of this implementation are the high
processing overhead of a kernel call for every thread operation, and that signals
cannot be managed as effectively.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 108)
e. Explain the Barriers synchronization method.
Barriers
Barriers are a synchronization method for applications that run in phases, with the
rule that no process may proceed into the next phase until all processes are ready to
do so. This behaviour is achieved by placing a barrier at the end of each phase: when
a process reaches the barrier, it is blocked until all processes have reached the
barrier.
In the figure, the first part shows processes approaching a barrier. The second part
shows all processes but one blocked at the barrier. The third part shows that when the
last process arrives at the barrier, all of them are let through.
A simple centralized barrier has two drawbacks:
1. The shared counter must be re-initialized before the barrier can be reused, which
can race with threads still spinning on the previous episode.
2. Because all the threads repeatedly poll a global pass/stop variable, the
communication traffic is rather high, which limits scalability.
The sense-reversal centralized barrier is designed to resolve the first problem. The
second problem can be resolved by regrouping the threads and using a multi-level
barrier, e.g. a combining-tree barrier. Hardware implementations may also have the
advantage of higher scalability.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 146)
f. Consider the following set of processes, with the arrival times and the CPU burst
times given in milliseconds.
Draw Gantt chart, calculate Turnaround Time, Waiting Time, Average Turnaround
Time and Average Waiting Time for….
1. First-Come First-Served. 2. Shortest Job First.
I) FIRST COME FIRST SERVED:
Gantt Chart
P1 P2 P3
0 15 20 33
Process  BurstTime  ArrivalTime  StartTime  WaitTime =             FinishTime =           TurnAroundTime =
                                            StartTime - Arrival    StartTime + Burst      FinishTime - Arrival
P1       15         0            0          0-0  = 0               0+15  = 15             15-0 = 15
P2       5          0            15         15-0 = 15              15+5  = 20             20-0 = 20
P3       13         0            20         20-0 = 20              20+13 = 33             33-0 = 33
Now calculate
Turnaround Time: (15+20+33) = 68
Waiting Time: (0+15+20) = 35
Average Turnaround Time: 68/3 = 22.66 ms
Average Waiting Time: 35/3 = 11.66 ms
II) SHORTEST JOB FIRST:
Gantt Chart
P2 P3 P1
0 5 18 33
Now calculate
Turnaround Time: (5+18+33) = 56
Waiting Time: (0+5+18) = 23
Average Turnaround Time: 56/3 = 18.66 ms
Average Waiting Time: 23/3 = 7.66 ms
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 65)
2. Attempt any three of the following:
a. What are the design issues with paging system?
In order to obtain a good performance we must consider the design issues of a paging
system.
A global policy is one in which the victim page is chosen from among all pages of all
processes. If we apply global LRU together with some sort of round-robin processor
scheduling, and memory is somewhat overcommitted, then by the time we get around to a
process, all the others have run and have probably paged this process out. If this
happens, every process will page-fault at a high rate; this is called thrashing.
It is therefore important to get a good idea of how many pages a process needs, so that
we can balance the local and global desires. The working set W(t,w) is good for this.
An approximation to the working set policy that is useful for determining how many
frames a process needs is the Page Fault Frequency (PFF) algorithm.
For each process, keep track of the page fault frequency: the number of faults divided
by the number of references. In practice, one must use a window or a weighted
calculation, since it is the recent page fault frequency that matters.
If the PFF is too high, allocate more frames to this process. Either
1. Raise its number of frames and use a local policy; or
2. Bar its frames from eviction (for a while) and use a global policy.
If there are not enough frames, the multiprogramming level (MPL) must be reduced.
Load Control
Despite a good design, the system may still thrash when the combined working sets of
all processes exceed the capacity of memory, i.e. when the PFF algorithm indicates
that some processes need more memory but none needs less. The situation can be
addressed by swapping one or more processes to disk and dividing up the frames they
held; the degree of multiprogramming can then be reconsidered, and mixing CPU-bound
and I/O-bound processes can be used to balance the load.
Characteristics of a large page size: it is good for user I/O. If I/O is done using
physical addresses, then an I/O transfer crossing a page boundary is not contiguous
and hence requires multiple I/O operations; if I/O uses virtual addresses, page size
does not affect this aspect, since the addresses are contiguous in the virtual address
space and one I/O operation suffices. A large page size is also good for demand-paging
I/O: it is cheaper to swap in or out one big page than several small pages. But if the
page is too big, you will be swapping in data that is not really local and hence may
well not be used.
Shared code: one copy of read-only code is shared among processes (e.g. text editors,
compilers, window systems), similar to multiple threads sharing the same process
space. Sharing of read-write pages, where allowed, is also useful for inter-process
communication.
Private code and data: each process keeps a separate copy of its code and data. The
pages for the private code and data can appear anywhere in the logical address space.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 222)
b. Brief about the basic concept of segmentation
Segmentation is a memory-management scheme that supports the user's view of memory,
i.e. a collection of variable-sized segments with no necessary ordering among them.
Segments are numbered and referred to by a segment number, so a logical address
consists of a two-tuple:
< segment-number, offset >
A program is a collection of segments: the compiler constructs separate segments for
the code, the global variables, the heap from which memory is allocated, the stacks
used by each thread, and the standard C library.
(Figures: the user's view of a program, and the logical view of segmentation.)
Linked List Allocation: in this approach the blocks of a file are represented as a
linked list; all that is needed is the address of the first block the file occupies.
Each block of the file contains not only data but also a pointer to the next block.
The diagram below shows such an implementation for two files.
2. Writing a file: To write a file, we make a system call specifying both the name of the
file and the information to be written to the file. Given the name of the file, the system
searches the directory to find the location of the file. The system must keep a write
pointer to the location in the file where the next write is to take place. The write
pointer must be updated whenever a write occurs.
3. Reading a file: To read from a file, we use a system call that specifies the name of
the file and where (in memory) the next block of the file should be put. Again, the
directory is searched for the associated directory entry, and the system needs to keep a
read pointer to the location in the file where the next read is to take place.
4. Repositioning within a file: The directory is searched for the appropriate entry,
and the current-file-position is set to a given value. Repositioning within a file does not
need to involve any actual I/O.
5. Deleting a file: To delete a file, we search the directory for the named file. Having
found the associated directory entry, we release all file space, so that it can be reused
by other files, and erase the directory entry.
6. Truncating a file: The user may want to erase the contents of a file but keep its
attributes. Rather than forcing the user to delete the file and then recreate it, this
operation leaves all attributes unchanged but resets the file to length zero.
The UNIX V7 file system was used on PDP-11 minicomputers. Files are placed in a tree
structure with a root directory; the directory entries that name files in the tree are
known as links. File names can be at most 14 characters and may use any ASCII
characters except the forward slash (/), which is used as the separator in path names.
UNIX V7 directory entry: a directory entry has two components:
1) I-node number: the 2-byte number identifying the file's i-node. The i-node holds
the file's attributes, including the file size, information about creation, last
access and last modification of the file, the owner, group, protection information,
and a count of the number of directory entries (links) that point to the i-node.
Whenever a link is added, the count is increased by 1, and whenever a link is removed,
the count is decreased by 1.
2) File name: the remaining 14 bytes hold the file name.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 323)
3. Attempt any three of the following:
a. What are the goals of the I/O Software?
One of the main functions of an operating system is to manage various I/O devices,
including mice, keyboards, touchpads, disk drives, display adapters, USB devices,
bit-mapped screens, LEDs, analog-to-digital converters, on/off switches, network
connections, audio I/O, printers, etc.
Goals of the I/O Software
Following are the goals of I/O software.
1. Device independence: The programs shouldn't depend on the actual device being
used e.g., keyboard, disk, screen.
2. Uniform naming: The name of a file or device should be independent of the device
used.
3. Error handling: errors should be handled as close to the hardware as possible, so that minimal damage results.
4. Synchronous vs asynchronous transfers: The DMA enables much I/O to be
asynchronous. Synchronous I/O is often easier to program with. The OS can make
asynchronous I/O look synchronous.
5. Buffering: data often must be stored temporarily before it can be delivered; for
example, a hard-disk controller buffers each disk block it reads in order to check
for errors before transferring the block to memory. Such copying can slow I/O
operations and isn't always desirable.
6. Shareable vs dedicated: some devices are easily shared while others (e.g.,
printers) are not. Dedicated devices introduce the problem of deadlock.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 351)
b. Define deadlock. Give example for the same.
A deadlock occurs when every member of a set of processes is waiting for an event that
can only be caused by another member of the set. Generally the event waited for is the
release of a resource. Deadlocks are also called gridlocks.
Deadlocks can occur when processes have been granted exclusive access to devices,
files, and so forth. A resource can be a hardware device or a piece of information;
here, a resource is anything that can be used by only a single process at any instant
of time.
A set of processes is in a deadlock state if each process in the set is waiting for an
event that can be caused only by another process in the set; equivalently, each member
of the set of deadlocked processes is waiting for a resource that can be released only
by another deadlocked process. None of the processes can run, none of them can release
any resources, and none of them can be awakened. The number of processes and the
number and kind of resources possessed and requested are unimportant. The resources
may be either physical (e.g., printers, tape drives, memory space, CPU cycles) or
logical (e.g., files, semaphores, monitors).
Consider two processes, T and S, each of which wants to print a file currently on tape.
1. T has obtained ownership of the printer and will release it after printing one file.
2. S has obtained ownership of the tape drive and will release it after reading one file.
3. T tries to get ownership of the tape drive, but is told to wait for S to release it.
4. S tries to get ownership of the printer, but is told to wait for T to release the printer.
The above situation is a deadlock.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 435)
c. What is interrupt? Explain its types.
An interrupt is a signal, from a device attached to the computer or from a program
within it, that causes the main program to stop and determine what to do next. A
processor can execute only one instruction at a time, but because it can be
interrupted, it can take turns among the programs or sets of instructions it runs.
This is known as multitasking, and it allows the user to do a number of different
things at what appears to be the same time.
There are two types of interrupt-initiated I/O: the vectored interrupt, in which the
interrupting source supplies the branch information (called the interrupt vector) to
the computer, and the non-vectored interrupt, in which the branch address is assigned
to a fixed location in memory.
Interrupt types: interrupts are also categorized by examining how the CPU executes
instructions around them. There are two types:
1. Precise interrupts are characterized by all instructions "before" the program
counter (PC) having completed, and those "after" the PC not having started. The
instruction the PC points to may or may not have completed, but which is the case is
clearly indicated by the CPU. This makes saving the current state of an interrupted
process much easier.
2. Imprecise interrupts occur when some instructions "before" the PC may not have
completed, while some instructions "after" the PC may have completed. This greatly
complicates saving the current state of the running process.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 347)
d. Write a note on power management.
The first electronic computer, ENIAC, ran on 18,000 vacuum tubes and consumed 140,000
watts of power, which is very costly in terms of power use. After the invention of the
transistor, power usage dropped radically. Nowadays people are again thinking about
power consumption, and the operating system plays a vital role in it.
A typical desktop PC has a 200-watt power supply. If 100 million of these machines are
turned on at once worldwide, together they use 20,000 megawatts of electricity, almost
the output of 20 nuclear power plants. If this consumption were reduced by half, we
would save the energy equivalent of 10 nuclear power plants, which would make a big
difference from an environmental point of view.
In battery-powered computers such as notebooks, handhelds and Web pads, energy
consumption is a big issue because the batteries hold power for only a few hours.
Despite a huge amount of ongoing research, no remarkable improvement has yet been
produced, so the effective approach remains saving energy, and the operating system
plays a major role here.
The hardware vendors are trying to make their electronics more energy efficient.
Techniques used include reducing transistor size, employing dynamic voltage scaling,
using low swing and adiabatic buses, and similar techniques.
(Hardware Issues, Operating System Issues, Hard Disk, CPU, Memory, Wireless
Communication- energy management related to these issues can also be
considered)
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 417)
e. Explain Starvation.
Starvation is a situation in which a process is perpetually denied the resources it
needs; without those resources, the program can never finish its task. Starvation is
related to deadlock: both can arise when multiple threads or processes wait for the
same resource. To get out of a deadlock, one of the processes or threads has to give
up or roll back so that the others can use the resource. If the same process or thread
is repeatedly the one that gives up or rolls back while the others proceed, that
process or thread starves. Starvation can thus be the price of one of the solutions to
deadlock, and perpetual rollback of this kind is sometimes described as a form of
livelock. When there are many high-priority processes or threads, a lower-priority
process or thread may starve indefinitely. Starvation can occur on many kinds of
resources, such as starving on resources and starving on the CPU.
Causes of Starvation
• Processes hand resources on to other processes without control.
• Processes' priorities are strictly enforced.
• "Random" selection is used.
• There are not enough resources.
Starvation can happen at any organised scheduling level, though it is more likely in
automatic allocation processes than in the higher-level manual parts.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 463)
f. Describe Livelock.
There is a variant of deadlock called livelock. This is a situation in which two or
more processes continuously change their state in response to changes in the other
processes, without doing any useful work. It is similar to deadlock in that no
progress is made, but differs in that neither process is blocked or waiting for
anything. A human example of livelock is two people who meet face-to-face in a
corridor: each moves aside to let the other pass, but they end up moving back and
forth from side to side without making any progress, because they always move the same
way at the same time.
Livelock is a risk with some algorithms that detect and recover from deadlock. If more
than one process takes action, the deadlock detection algorithm can be repeatedly
triggered. This can be avoided by ensuring that only one process (chosen randomly or
by priority) takes action.
Attempting to use back-off (retry later) or resource-preemption approaches to
preventing deadlock may itself cause livelock: processes don't deadlock, but fail to
make progress either.
For example.
Suppose Mr. X and Mr. Y both want to listen to a Lamest Z.95 hits CD on a personal CD
player. Suppose:
Mr. X has the CD player but wants the CD.
Mr. Y has the CD but wants the CD player.
Mr. X steals (i.e., preempts) the CD from Mr. Y. Meanwhile, Mr. Y steals the CD player
from Mr. X.
Now Mr. X has the CD and Mr. Y has the CD player.
Mr. X steals the CD player from Mr. Y. Meanwhile, Mr. Y steals the CD from Mr. X.
Now Mr. X has the CD player and Mr. Y has the CD.
It's livelock!
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 464)
Live migration:
It is the process of migrating a virtual machine from one physical host to another
without significantly interrupting application availability. Live migration captures
a VM's complete memory state and the state of all its processor registers and sends
that data to memory space on another server. That server then loads the processor
registers, and the VM picks up right where it left off. All major virtualization
vendors offer live migration: VMware offers vMotion, Microsoft has Live Migration in
Hyper-V, and Citrix Systems has XenMotion.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 496)
c Explain 2*2 Multistage Switching Network.
Multiprocessors are usually referred to as shared-memory processors. They allow the
development of parallel software that supports sharing of code and data; such a
machine is also referred to as a parallel random access machine (PRAM).
There are two major groups of multiprocessors:
UMA (Uniform Memory Access): all processors share a unique centralized primary
memory, so each CPU has the same memory access time; every memory word can be read
as fast as any other memory word.
NUMA (Non-Uniform Memory Access): the processors do not share a unique memory.
These systems have a shared logical address space, but physical memory is distributed
among the CPUs, so the access time to data depends on the data's position, in local
or in remote memory.
A message arrives at a 2x2 switch on input A or B and is routed, according to header
information, to output X or Y. The message header is composed of four fields; the
switch looks at the Module field and decides whether the message should be sent to
output X or Y.
Using the 2x2 switch, larger networks can be built, for example the omega network. An
omega network connecting n CPUs to n memories needs log2 n stages, with n/2 switches
per stage, so only (n/2) log2 n switches are needed instead of n^2.
The image illustrates how two CPUs, A (001) and B (011), access two different memory
modules, M (001) and N (110). If another CPU wants to access the same memory module
simultaneously, it has to wait; therefore the omega network is a blocking network.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 523)
d State and explain the Type 1 and Type 2 Hypervisors.
A hypervisor, also known as a Virtual Machine Monitor (VMM), is a piece of software,
firmware or hardware that gives the guest machines (virtual machines) the impression
that they are operating on physical hardware. It allows multiple operating systems to
share a single host and its hardware; the hypervisor manages the virtual machines'
requests to access the hardware resources (RAM, CPU, NIC, etc.), with each VM acting
as an independent machine.
Type 1 Hypervisor
This is also known as a Bare-Metal, Embedded or Native hypervisor. It works directly
on the hardware of the host and monitors the operating systems that run above it. It
is completely independent of any host operating system. This hypervisor is small, as
its main task is sharing and managing hardware resources between the different
operating systems. A major advantage is that problems in one virtual machine or guest
operating system do not affect the other guest operating systems running on the
hypervisor.
Type 2 Hypervisor
This is also known as a Hosted hypervisor. In this case, the hypervisor is installed
on top of an operating system and then supports other operating systems above it. It
is completely dependent on the host operating system for its operations. While having
a base operating system allows better specification of policies, any problem in the
base operating system affects the entire system, even if the hypervisor running above
the base OS is secure.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 477)
e Explain Master-Slave Multiprocessors.
Multiprocessor operating systems are similar to regular operating systems. They
handle system calls, do memory management, provide a file system, and manage I/O
devices. They also have some unique features. These include process synchronization,
resource management, and scheduling. We will get a brief look at multiprocessor
hardware and then will see these operating systems issues.
Master-Slave Multiprocessors: in this architecture both the I/O devices and the
memory are shared by the CPUs. A single CPU is selected as the master and runs the
single instance of the OS; the remaining CPUs work as slaves. All system calls are
handled by the master CPU, which also schedules which processes run on the slaves,
balancing the load between them. Page frames are shared in the single memory, and
there is only one set of buffer caches, on the master; this avoids the problems that
arise when each CPU runs a separate OS. The master is the bottleneck of this design,
which limits the number of slaves it can keep up with.
The following figure demonstrates the master-slave architecture.
The master-slave model solves most of the problems of the other models. There is a
single data structure (e.g., one list or a set of prioritized lists) that keeps track of
ready processes. When a CPU goes idle, it asks the operating system for a process to
run and it is assigned one. Thus it can never happen that one CPU is idle while
another is overloaded. Similarly, pages can be allocated among all the processes
dynamically and there is only one buffer cache, so inconsistencies never occur.
The problem with this model is that with many CPUs, the master will become a
bottleneck. After all, it must handle all system calls from all CPUs. If, say, 10% of all
time is spent handling system calls, then 10 CPUs will pretty much saturate the
master, and with 20 CPUs it will be completely overloaded. Thus this model is simple
and workable for small multiprocessors, but for large ones it fails.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 532)
f Write a note on Document-Based Middleware.
Every Web page has a unique address, called a URL (Uniform Resource Locator), of
the form protocol://DNS-name/file-name. The protocol is most commonly http
(HyperText Transfer Protocol), but ftp and others also exist. Then comes the DNS name
of the host containing the file. Finally, there is a local file name telling which file is
needed. Thus a URL uniquely specifies a single file worldwide.
The Web is fundamentally a client-server system, with the user being the client and the
Website being the server. When the user provides the browser with a URL, either by
typing it in or clicking on a hyperlink on the current page, the browser takes certain
steps to fetch the requested Web page. As a simple example, suppose the URL provided
is http://www.muresults.net/index.html. The browser then takes the following steps
to get the page.
1. The browser asks DNS for the IP address of www.muresults.net
2. DNS replies with 50.62.160.33
3. The browser makes a TCP connection to port 80 on 50.62.160.33
4. It then sends a request asking for the file index.html.
5. The www.muresults.net server sends the file index.html.
6. The browser displays all the text in index.html.
7. Meanwhile, the browser fetches and displays all images on the page.
8. The TCP connection is released.
To a first approximation, that is the basis of the Web and how it works. Many other
features have since been added to the basic Web, including style sheets, dynamic Web
pages that are generated on the fly, Web pages that contain small programs or scripts
that execute on the client machine, and more.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 576)
The most important parts of the kernel are memory management and process management.
Memory management takes care of assigning memory areas and swap-space areas to
processes, to parts of the kernel, and to the buffer cache. Process management creates
processes and implements multitasking by switching the active process on the
processor.
At the lowest level, the kernel contains a hardware device driver for each kind of
hardware it supports, and it supports a large number of hardware device drivers.
Above the kernel, Linux has system libraries: special functions through which
application programs and system utilities access the kernel's features. These
libraries implement much of the operating system's functionality and do not require
the kernel module's code-access rights.
The various network protocols have been abstracted into one programming interface,
the BSD socket library. The virtual file system (VFS) layer abstracts the file system
operations away from their implementation. Each file system type provides an
implementation of each file system operation. When some entity tries to use a file
system, the request goes via the VFS, which routes the request to the proper file system
driver.
(Reference: Modern Operating Systems, by Tanenbaum, 4th Edition, Page 731)
b Write a short note on Synchronization in Linux.
When porting software from a single-core environment to run on a multi-core cluster,
we might need to modify code to perform the following operations:
Enforce a particular order of execution.
Control parallel access to shared peripherals or global data.
The Linux kernel provides several synchronization mechanisms.
1. Atomic operations: the simplest approach to avoiding a race condition or
deadlock.
2. Semaphores: another kind of synchronization mechanism provided by the Linux
kernel.
3. Spin-locks: a special type of synchronization mechanism, preferable in
multi-processor (SMP) systems.
4. Sequence locks: a very useful mechanism providing a lightweight and scalable
lock for scenarios where many readers and a few writers are present.
The Linux kernel uses a number of different synchronization primitives for this
purpose. Process scheduling is based on time slicing, where each process is given a
little time for its execution. A fork() creates a child process that is a duplicate
of the parent process, and every process in Linux has a PID (process ID) that
distinguishes it from other processes. If the parent process terminates before the
child, the child is referred to as an orphaned process. Linux can recognize a
deadlock situation and sort it out: when a deadlock occurs, Linux prints an error
message and kills the parent process.
Linux provides primitives known as semaphores, with which multiple processes
synchronize their access to shared memory. Linux also provides several types of
synchronization variables, both used internally in the kernel and available to
user-level applications and libraries.
Linux provides wrappers around the hardware-supported atomic instructions, via
operations such as atomic_set and atomic_read. In addition, since modern hardware
reorders memory operations, Linux provides memory barriers: operations such as rmb
and wmb guarantee that all read/write memory operations preceding the barrier call
have completed before any subsequent accesses take place. In this way synchronisation
is achieved in Linux.
(* Reference: Modern Operating Systems,By Tanenbaum, 4th Edition, Page No: 750 )
c. Explain any 5 memory management system calls in Windows.
1. VirtualAlloc: Reserve or commit a region
2. VirtualFree: Release or decommit a region
3. VirtualProtect: Change the read/write/execute protection on a region
4. VirtualQuery: Inquire about the status of a region
5. VirtualLock: Make a region memory resident (i.e., disable paging for it)
6. VirtualUnlock: Make a region pageable in the usual way
7. CreateFileMapping: Create a file-mapping object and assign it a name
8. MapViewOfFile: Map (part of) a file into the address space
9. UnmapViewOfFile: Remove a mapped file from the address space
10. OpenFileMapping: Open a previously created file-mapping object
(* Reference: Modern Operating Systems,By Tanenbaum, 4th Edition, Page No: 942 )
Android application priority: the sequence in which processes are killed to reclaim resources is determined by the priority of their hosted applications. An Android process can be in one of the following states.
1. Active process: these processes are given the highest level of priority. They host application components with which the user is interacting. A process is said to be active when it hosts any of the following:
• An activity that the user is currently interacting with.
• A service that is bound to the activity the user is interacting with.
• A BroadcastReceiver that is executing its onReceive( ) method.
• A service executing its onCreate( ), onResume( ), or onStart( ) callbacks.
2. Visible process: these processes are visible to the user but are not in the foreground or responding to user input. Only a few visible processes exist at any time, and they are killed only under extreme circumstances.
3. Service process: a process running a service that was started with a startService( ) call. Because services do not directly interact with the user, they receive lower priority than the two categories above. When the system terminates a running service, it attempts to restart it when resources become available. Examples: playing music, downloading data, etc.
4. Background process: processes hosting activities that are not visible to the user, i.e. whose onStop( ) method has been called. There are usually many background processes running; they are killed on a least recently used (LRU) basis to free up resources.
5. Empty process: This is the process that doesn't hold any active
application components. The only reason to keep this kind of process
alive is for caching purposes, to improve start-up time the next time a
component needs to run in it. The system often kills these processes in
order to balance overall system resources between process caches and
the underlying kernel caches.
(* Reference: Modern Operating Systems,By Tanenbaum, 4th Edition, Page No: 809 )
f. How does Android support security?
In Android, apps cannot directly interact with each other, nor with any other processes on the system. The processes started as part of the boot sequence run as the root user, but anything started after that runs under its own user ID (UID).
Android has a few separate file systems. The /system file system is essentially the ROM: it is mounted read-only and contains the Android OS, system libraries and apps, system executables, and so on. The application developer and user have no access to this file system, and since it contains no user data, it does not need to be backed up or encrypted.
The /data partition is mounted read-write and contains all of the downloaded apps and the storage for all apps. The /data/data directory is the location where apps store their data: a subdirectory named after each app is created, owned by that app's UID/GID, with permissions that do not allow access from other UIDs.
A common security method Android uses is explicit user interfaces for allowing or removing specific types of access. In this approach, an application indicates that it can optionally provide some functionality, and a system-supplied trusted user interface provides control over this access.
_____________________________