Operating System
Advantages:
The overall time taken by the system to execute all the programs is reduced.
The Batch Operating System can be shared among multiple users.
Disadvantages:
Manual interventions are required between two batches.
The CPU utilization is low because the time taken in loading and
unloading batches is very high compared to the execution time.
Disadvantages:
A process with higher priority will not get the chance to execute
first, because each process is given an equal opportunity.
Advantages:
Since the systems are connected to each other, the failure of one
system does not stop the execution of processes, because other
systems can carry out the execution.
Resources are shared among the systems.
The load on the host computer gets distributed, which in turn
increases efficiency.
Disadvantages:
Since the data is shared among all the computers, extra effort is
needed to keep it secure and to restrict access to only a few
computers.
If there is a problem in the communication network, the whole
communication breaks down.
Multilevel Queues
In this algorithm, programs are split into different queues by type, for
example system programs or interactive programs. The programs of each
type form a "queue".
One algorithm will determine how CPU time is split between these
queues. For example, one possibility is that the queues have different
priorities, and programs in higher priority queues always run before
those in lower priority queues (similar to priority scheduling).
For each queue, a different queue scheduling algorithm will decide how
the CPU time is split within that queue. For example, one queue may
use Round Robin scheduling, while another uses priority scheduling.
3. Discuss Inter Process Communication and critical-section problem
along with the use of semaphores.
Ans: Inter-process communication (IPC) is used for exchanging data
between multiple threads in one or more processes or programs. The processes
may be running on a single computer or on multiple computers connected by a
network.
It is a set of programming interfaces that allows a programmer to coordinate
activities among various program processes that can run concurrently in an
operating system. This allows a specific program to handle many user requests
at the same time.
Since every single user request may result in multiple processes running in the
operating system, the processes may need to communicate with each other.
Each IPC approach has its own advantages and limitations, so it is not
unusual for a single program to use all of the IPC methods.
Approaches for Inter-Process Communication
Pipes
A pipe is widely used for communication between two related processes. It is
a half-duplex method: data flows in one direction only, so the first process
can send to the second but not the reverse. To achieve full-duplex
communication, a second pipe is needed.
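A minimal sketch of this behaviour using an anonymous POSIX pipe via Python's os module. It assumes a Unix-like system (os.fork is unavailable on Windows); the child writes into the pipe and the parent reads, illustrating the one-way flow:

```python
import os

# Anonymous-pipe sketch (POSIX only). The pipe is half-duplex:
# the child writes, the parent reads. A second pipe would be
# needed for traffic in the other direction.

def pipe_demo():
    r, w = os.pipe()                   # r: read end, w: write end
    pid = os.fork()
    if pid == 0:                       # child process: the writer
        os.close(r)                    # close the unused read end
        os.write(w, b"hello from child")
        os.close(w)
        os._exit(0)
    else:                              # parent process: the reader
        os.close(w)                    # close the unused write end
        data = os.read(r, 1024)
        os.close(r)
        os.waitpid(pid, 0)             # reap the child
        return data.decode()

print(pipe_demo())   # hello from child
```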
Message Passing:
It is a mechanism for processes to communicate and synchronize. Using
message passing, processes communicate with each other without resorting
to shared variables.
The IPC mechanism provides two operations:
Send (message) - the message size may be fixed or variable
Receive (message)
Message Queues:
A message queue is a linked list of messages stored within the kernel. It is
identified by a message queue identifier. This method offers communication
between single or multiple processes with full-duplex capability.
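The send/receive pattern can be sketched with Python's multiprocessing.Queue. This is only a stand-in for a kernel message queue (such as a System V or POSIX queue): messages are buffered by the runtime, so sender and receiver never share variables, which is the essential point:

```python
from multiprocessing import Process, Queue

# Message-queue IPC sketch: the sender puts a message into a queue
# buffered outside its own address space; the receiver blocks until
# a message arrives. No shared variables are involved.

def worker(q):
    q.put({"type": "greeting", "body": "hello"})   # send(message)

def mq_demo():
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    msg = q.get()                                  # receive(message), blocks
    p.join()
    return msg

if __name__ == "__main__":
    print(mq_demo())   # {'type': 'greeting', 'body': 'hello'}
```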
Direct Communication:
In this type of inter-process communication, processes must name each other
explicitly. A link is established between one pair of communicating
processes, and between each pair only one link exists.
Indirect Communication:
Indirect communication is established only when the processes share a common
mailbox. Each pair of processes may share several communication links, and a
link can be associated with many processes. The link may be unidirectional
or bi-directional.
Shared Memory:
Shared memory is a region of memory established so that two or more
processes can all access it directly. Access to this memory must be
protected by synchronizing the processes, so that concurrent updates do not
corrupt each other.
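A small sketch of the idea using multiprocessing.Value, which places a counter in memory visible to both processes; the explicit lock shows the synchronization the text calls for (without it, concurrent read-modify-write updates can be lost):

```python
from multiprocessing import Process, Value, Lock

# Shared-memory sketch: a counter in memory shared by two processes.
# The lock protects the read-modify-write, so no increments are lost.

def add_many(counter, lock, n):
    for _ in range(n):
        with lock:                    # synchronize access to shared memory
            counter.value += 1

def shm_demo():
    counter = Value("i", 0)           # shared 32-bit integer, initially 0
    lock = Lock()
    procs = [Process(target=add_many, args=(counter, lock, 1000))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value

if __name__ == "__main__":
    print(shm_demo())   # 2000
```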
FIFO:
A FIFO (named pipe) allows communication between two unrelated processes.
It is a full-duplex method, which means that the first process can
communicate with the second process, and the opposite can also happen.
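Because a FIFO has a name in the filesystem, any process that knows the path can open it. The sketch below uses os.mkfifo (POSIX only); a thread stands in for the second, unrelated process, and the path is a throwaway temporary name:

```python
import os
import tempfile
import threading

# FIFO (named pipe) sketch: unlike an anonymous pipe, a FIFO has a
# filesystem name, so unrelated processes can open it by path.
# Here a thread plays the role of the second process.

def fifo_demo():
    path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
    os.mkfifo(path)

    def writer():
        with open(path, "w") as f:    # blocks until a reader opens
            f.write("ping")

    t = threading.Thread(target=writer)
    t.start()
    with open(path) as f:             # blocks until the writer opens
        data = f.read()
    t.join()
    os.remove(path)
    return data

print(fifo_demo())   # ping
```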
The critical-section problem is to design a protocol followed by a
group of processes, so that when one process is executing in its critical
section, no other process is allowed to execute in its critical section.
The critical section refers to the segment of code where processes access
shared resources, such as common variables and files, and perform write
operations on them.
Since processes execute concurrently, any process can be interrupted mid-
execution. In the case of shared resources, partial execution of processes can
lead to data inconsistencies. When two processes access and manipulate a
shared resource concurrently, and the outcome of execution depends on
the order in which they access the resource, this is called a race
condition.
Race conditions lead to inconsistent states of data. Therefore, we need a
synchronization protocol that allows processes to cooperate while
manipulating shared resources, which essentially is the critical section
problem.
Semaphore Solution
A semaphore is simply a non-negative variable shared between threads. It is
another solution to the critical-section problem. It is a signaling
mechanism: a thread waiting on a semaphore can be signaled by another
thread.
It uses two atomic operations, 1) wait and 2) signal, for process
synchronization.
Example
WAIT(S):
    while (S <= 0)
        ;           // busy wait
    S = S - 1;

SIGNAL(S):
    S = S + 1;
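In practice, a library semaphore replaces the busy wait with blocking. A short sketch using Python's threading.Semaphore, where acquire() plays the role of wait(S) and release() plays the role of signal(S), guarding a shared counter as the critical section:

```python
import threading

# Critical-section sketch with a binary semaphore (initial value 1),
# so at most one thread is inside the critical section at a time.

sem = threading.Semaphore(1)   # S = 1
shared = {"count": 0}

def worker(n):
    for _ in range(n):
        sem.acquire()              # wait(S): enter critical section
        shared["count"] += 1       # critical section: shared update
        sem.release()              # signal(S): leave critical section

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["count"])   # 40000: no updates were lost
```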
SET II
Direct Access Method: This access method is also called real-time access,
where records can be read irrespective of their sequence. Records can be
accessed directly, much like blocks of a file on a disk, where each record
carries a sequence number. For example, block 40 can be accessed first,
followed by block 10 and then block 30, and so on. This eliminates the need
for sequential read or write operations.
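With fixed-size records, record k lives at byte offset k times the record size, so any record can be reached with a single seek. A sketch under those assumptions (the record size of 8 bytes and the file name are illustrative), reading records 40, 10, and 30 out of order as in the example above:

```python
import os
import tempfile

# Direct (random) access sketch: fixed-size records, so record k sits
# at byte offset k * RECORD_SIZE and can be read in any order.

RECORD_SIZE = 8

def read_record(f, k):
    f.seek(k * RECORD_SIZE)        # jump straight to record k
    return f.read(RECORD_SIZE)

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    for k in range(50):            # write records 0..49 sequentially
        f.write(f"rec{k:05d}".encode())   # exactly 8 bytes each

with open(path, "rb") as f:        # now read 40, then 10, then 30
    out = [read_record(f, k).decode() for k in (40, 10, 30)]
print(out)   # ['rec00040', 'rec00010', 'rec00030']
```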
Processors communicate with each other and the shared memory through the
shared bus. Variations of this basic scheme are possible where processors may or
may not have local memory, I/O devices may be attached to individual processors
or the shared bus and the shared memory itself can have multiple banks of
memory. Since the bus and the memory are shared resources, there is always a
possibility of contention. Cache memory is often used to reduce contention.
Cache associated with individual processors provides a better performance. A
90% cache hit ratio improves the speed of the multiprocessor systems nearly 10
times as compared to systems without cache. Existence of multiple caches in
individual processors creates problems. Cache coherence is a problem to be
addressed. Multiple physical copies of the same data must be consistent in case
of an update. Maintaining cache coherence increases bus traffic and reduces the
achieved speedup by some amount. Use of a parallel bus increases bandwidth.
The tightly coupled, shared bus organization usually supports 10 processors.
Because of its simple implementation many commercial designs of
multiprocessor systems are based on shared-bus concept.
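The "about 10 processors" figure can be motivated with some rough arithmetic. This is a simplifying model, not a measurement: it assumes only cache misses reach the shared bus and that each processor would otherwise saturate the bus on its own:

```python
# Back-of-the-envelope bus-saturation model (an assumed model, not data):
# if each processor resolves a fraction h of its memory references in its
# own cache, only the misses (1 - h) reach the shared bus, so the bus
# can sustain roughly 1 / (1 - h) processors before saturating.

h = 0.90                          # cache hit ratio from the text
bus_load_per_cpu = 1 - h          # fraction of references using the bus
max_cpus = 1 / bus_load_per_cpu   # rough processor count the bus sustains
print(round(max_cpus))            # 10
```

This matches the text's observation that a 90% hit ratio yields roughly a tenfold improvement and that shared-bus designs typically support around 10 processors.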
Cube topology has one processor at each node / vertex. Given a 3-dimensional
cube (a higher-dimensional cube cannot be visualized), 2^3 = 8 processors are
interconnected. The result is a NORMA-type multiprocessor and is a common
hypercube implementation. Each processor at a node has a direct link to
log2 N nodes, where N is the total number of nodes in the hypercube. For
example, in a 3-dimensional hypercube, N = 8 and each node is connected to
log2 8 = 3 nodes. Hypercubes are recursive structures, with high-dimension
cubes containing low-dimension cubes as proper subsets. For example, a
3-dimensional cube has two 2-dimensional cubes as subsets. Hypercubes form a
good basis for scalability, since complexity grows logarithmically, whereas
it is quadratic in the previous case. They are best suited for problems that
map onto a cube structure, that rely on recursion, or that exhibit locality
of reference in the form of frequent communication with adjacent nodes.
Hypercubes form a promising basis for large-scale multiprocessors. Message
passing is used for inter-processor communication and synchronization.
Increased bandwidth is sometimes provided through dedicated nodes that act
as sources / repositories of data for clusters of nodes.
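The link structure is easy to compute: label each node with a d-bit number, and two nodes are connected exactly when their labels differ in one bit. A short sketch showing that each node in a 3-dimensional hypercube has log2 8 = 3 neighbors:

```python
# Hypercube neighbor sketch: in a d-dimensional hypercube, node labels
# are d-bit numbers, and two nodes are linked iff their labels differ
# in exactly one bit. Each node therefore has d = log2(N) neighbors.

def neighbors(node, d):
    """Nodes directly linked to `node` in a d-dimensional hypercube."""
    return [node ^ (1 << bit) for bit in range(d)]   # flip one bit at a time

d = 3                      # 2^3 = 8 processors
print(neighbors(0, d))     # [1, 2, 4]
print(neighbors(5, d))     # 5 = 0b101 -> [4, 7, 1]
```

This also shows why complexity grows logarithmically: doubling the node count adds only one link per node.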
Multistage switch-based systems: Processors and memory in a multiprocessor
system can be interconnected by use of a multistage switch. A generalized
interconnection links N inputs and N outputs through log2 N stages, each
stage having N links to N/2 interchange boxes. The structure of a multistage
switch network is shown below in the figure.
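The size of such a network follows directly from the description above. A small sketch of the arithmetic, counting stages and two-input interchange boxes for a given N (assumed to be a power of two):

```python
import math

# Size of a generalized multistage network as described in the text:
# N inputs reach N outputs through log2(N) stages, each stage holding
# N/2 two-input interchange boxes.

def multistage_size(n):
    stages = int(math.log2(n))       # number of switching stages
    boxes_per_stage = n // 2         # interchange boxes in each stage
    return stages, boxes_per_stage, stages * boxes_per_stage

print(multistage_size(8))    # (3, 4, 12): 3 stages of 4 boxes, 12 in total
```

So an 8-processor network needs 12 interchange boxes, growing as (N/2) * log2(N) rather than the N^2 crosspoints of a full crossbar.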