
Name: SAMYA PAL

Roll No: 2114504018


Program: MCA
Semester: Sem2
Course Name: OPERATING SYSTEM

DCA6201 – OPERATING SYSTEM


SET I

1. a. Discuss the types of Operating System.


Ans:
Batch Operating System: In a Batch Operating System, similar jobs are
grouped together into batches with the help of an operator, and these
batches are executed one by one. For example, assume that we have 10
programs that need to be executed, some written in C++, some in C and the
rest in Java. If we run these programs individually, we have to load the
compiler of the relevant language every single time before executing the
code. But if we make a batch of these 10 programs, then for the C++ batch
the compiler needs to be loaded only once, and similarly for the Java and C
batches, after which each whole batch is executed.

Advantages:

 The overall time taken by the system to execute all the programs is reduced.
 The Batch Operating System can be shared between multiple users.
Disadvantages:
 Manual intervention is required between two batches.
 CPU utilization is low, because the time taken in loading and unloading
batches is very high compared to the execution time.

Time-Sharing Operating System: In a time-sharing (multi-tasking)
Operating System, more than one process is executed at a particular time
with the help of the time-sharing concept. In the time-sharing environment,
we decide on an interval called the time quantum; when a process starts
executing, it runs for only that amount of time, after which the other
processes each get a chance to run for the same amount of time. In the next
cycle, the first process again runs for one time quantum, then the next
process, and so on.

Advantages:

 Since an equal time quantum is given to each process, every process
gets an equal opportunity to execute.
 The CPU is kept busy in most cases, which is desirable.
Disadvantages:

 A process with a higher priority does not get the chance to execute
first, because equal opportunity is given to each process.

Distributed Operating System: In a Distributed Operating System, we have
various systems, and all of these systems have their own CPU, main memory,
secondary memory and resources. The systems are connected to each other
using a shared communication network, and each system can perform its
tasks individually. The best part about a Distributed Operating System is
remote access: one user can access the data of another system and work
accordingly.

Advantages:

 Since the systems are connected with each other, the failure of one
system cannot stop the execution of processes, because other systems
can carry out the execution.
 Resources are shared between the systems.
 The load on the host computer gets distributed, which in turn increases
efficiency.
Disadvantages:

 Since data is shared among all the computers, extra effort is needed to
keep the data secure and to restrict access to only certain computers.
 If there is a problem in the communication network, the whole
communication breaks down.

Embedded Operating System: An Embedded Operating System is designed
to perform a specific task for a particular device that is not a general-purpose
computer. For example, the software used in elevators is dedicated to the
working of elevators only and nothing else, so it is an example of an
Embedded Operating System. The Embedded Operating System gives the
software running on top of it access to the device hardware.

Advantages:

 Since it is dedicated to a particular job, it is fast.
 Low cost.
 It consumes less memory and fewer other resources.
Disadvantages:

 Only one job can be performed.
 It is difficult to upgrade and is hardly scalable.

Real-time Operating System: Real-time Operating Systems are used in
situations where we are dealing with real-time data. As soon as the data
arrives, the process should be executed, with no delay, i.e. there should be no
buffer delays. A real-time OS is a time-sharing system based on the concept
of the clock interrupt. So, whenever you need to process a large number of
requests in a very short period of time, you should use a Real-time Operating
System. For example, temperature readings in the petroleum industry are
very crucial and must be handled in real time, within a very short period; a
small delay can result in a life-or-death situation. There are two types of
Real-time Operating System:
 Hard Real-time: In this type, a small delay can lead to a drastic
outcome, so hard real-time is used when the time constraint is critical.
 Soft Real-time: Here, the time constraint is not as strict, but we are
still dealing with real-time data.

Advantages:

 There is maximum utilization of devices and resources.
 These systems are almost error-free.
Disadvantages:

 The algorithms used in a Real-time Operating System are very complex.
 Specific device drivers are needed so that interrupts can be serviced as
quickly as possible.

b. What is VMware? Write a short note on it.


Ans:
VMware is a software tool for the Windows operating system that lets you
install a virtual operating system inside Windows. Suppose you are familiar
with Windows and now, for some reason, you need to use another operating
system, say Fedora. You might be reluctant to remove Windows from your
system and install Fedora; in that case you can run VMware on Windows and
add a virtual Fedora operating system, getting the experience of Fedora
without actually installing it on your machine.
2. What are Pre-emptive and Non-pre-emptive Scheduling? Discuss the
CPU scheduling algorithms.
Ans:
Preemptive Scheduling is a CPU scheduling technique that works by
dividing CPU time into slots and allotting them to processes. The time slot
given may or may not be enough for the process to complete. When the
remaining burst time of a process is greater than its allotted CPU slot, the
process is placed back into the ready queue and executes again at its next
turn. Scheduling decisions of this kind are taken when a process switches
from the running state to the ready state.
Algorithms based on preemptive scheduling are round-robin (RR),
preemptive priority, and SRTF (shortest remaining time first).
Non-preemptive Scheduling is a CPU scheduling technique in which a
process takes the resource (CPU time) and holds it until it terminates or is
pushed to the waiting state. No process is interrupted before it completes;
only after that does the processor switch to another process.
Algorithms based on non-preemptive scheduling are non-preemptive
priority and shortest job first.

CPU Scheduling Algorithms: There are many different algorithms for
scheduling computer programs, each with different advantages and
disadvantages. In all cases, we want to make sure that each program gets a
fair amount of processor time, so that all of the programs have a chance to
run and complete. We also want to make the best use of our computing
resources: for example, we don't want a program to block the CPU while it
waits for a file to be slowly read into RAM from a hard disk.

Here are a few common scheduling algorithms.


First Come, First Served (FCFS)
Whichever program is added to the queue first is run until it finishes. Then the
next program in the queue is run, and so on.
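
A minimal Python sketch of FCFS waiting times (the burst times are hypothetical); each program waits for the combined burst time of everything queued ahead of it:

# Three programs arrive in order with these CPU burst times (hypothetical)
burst_times = [24, 3, 3]
elapsed, total_wait = 0, 0
for burst in burst_times:
    total_wait += elapsed              # this program waited for all earlier bursts
    elapsed += burst
print(total_wait / len(burst_times))   # (0 + 24 + 27) / 3 = 17.0

Note how the long first job inflates the average; running the two short jobs first would cut the average wait sharply.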
Shortest Job Next (SJN)
For this algorithm, the OS needs to know (or guess) the time each program
will take to run. It picks the program which will take the shortest amount of
time as the one to run next.
Priority Scheduling
The OS assigns each program a priority. This could be based on how much
memory they take, or how important they are for maintaining a responsive
user interface. The program with the highest priority is picked to run next.
Shortest Remaining Time
This is similar to Shortest Job Next, except that if a new program is started,
the OS compares the time it needs with the time the currently running program
has left. If the new program would finish sooner, then the currently running
program is switched out and the CPU starts processing the new program.
Round Robin (RR) scheduling
Time on the CPU is divided into equal parts called “time slices”. Time slices
are allocated to each program equally and cyclically. This means that if we
had a list of three programs running, the CPU would run:
 Program 1 for one time slice
 Program 2 for one time slice
 Program 3 for one time slice
and would repeat this order until one of the programs finished.
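
A minimal Python sketch of this rotation (the burst times and the quantum are hypothetical) that reports the order in which the programs finish:

from collections import deque

def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))      # (program id, remaining time), served cyclically
    finish_order = []
    while queue:
        pid, remaining = queue.popleft()
        if remaining > quantum:
            queue.append((pid, remaining - quantum))  # unfinished: back of the queue
        else:
            finish_order.append(pid)                  # finishes within this slice
    return finish_order

print(round_robin([5, 3, 8], quantum=2))   # [1, 0, 2]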

Multilevel Queues
In this algorithm, programs are split into different queues by type — for
example, system programs, or interactive programs. The programs of each
type form a “queue”.
 One algorithm will determine how CPU time is split between these
queues. For example, one possibility is that the queues have different
priorities, and programs in higher priority queues always run before
those in lower priority queues (similar to priority scheduling).
 For each queue, a different queue scheduling algorithm will decide how
the CPU time is split within that queue. For example, one queue may
use Round Robin scheduling, while another uses priority scheduling (see
the sketch below).
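
A minimal Python sketch of the fixed-priority variant (the queue names and their order are hypothetical); the dispatcher always drains higher-priority queues first:

from collections import deque

queues = {"system": deque(), "interactive": deque(), "batch": deque()}
priority_order = ["system", "interactive", "batch"]   # highest priority first

def pick_next():
    for level in priority_order:
        if queues[level]:
            return queues[level].popleft()   # FCFS within each queue here
    return None                              # nothing is ready to run

queues["batch"].append("backup job")
queues["system"].append("pager daemon")
print(pick_next())   # 'pager daemon': the system queue always wins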
3. Discuss Inter Process Communication and critical-section problem
along with the use of semaphores.
Ans: Inter-process communication (IPC) is used for exchanging data
between multiple threads in one or more processes or programs. The
processes may be running on a single computer or on multiple computers
connected by a network.
It is a set of programming interfaces that allow a programmer to coordinate
activities among various program processes which can run concurrently in an
operating system. This allows a specific program to handle many user requests
at the same time.
Since every single user request may result in multiple processes running in the
operating system, the processes may need to communicate with each other.
Each IPC approach has its own advantages and limitations, so it is not
unusual for a single program to use several of the IPC methods.
Approaches for Inter-Process Communication
Pipes:
A pipe is widely used for communication between two related processes. It is
a half-duplex method, so data flows in one direction, from the first process to
the second. In order to achieve full duplex, another pipe is needed.
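
A minimal sketch in Python (POSIX only, since it uses fork); the two processes are related because the child is forked from the parent, and data flows one way through the pipe:

import os

read_end, write_end = os.pipe()            # one half-duplex channel
if os.fork() == 0:                         # child process: writes
    os.close(read_end)
    os.write(write_end, b"hello from the child")
    os._exit(0)
else:                                      # parent process: reads
    os.close(write_end)
    print(os.read(read_end, 1024))         # b'hello from the child'
    os.wait()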
Message Passing:
It is a mechanism for processes to communicate and synchronize. Using
message passing, processes communicate with each other without resorting
to shared variables.
The IPC mechanism provides two operations:
 Send (message) – the message size may be fixed or variable
 Receive (message)
Message Queues:
A message queue is a linked list of messages stored within the kernel. It is
identified by a message queue identifier. This method offers communication
between single or multiple processes with full-duplex capacity.
Direct Communication:
In this type of inter-process communication, processes must name each other
explicitly. In this method, a link is established between one pair of
communicating processes, and only one link exists between each pair.
Indirect Communication:
Indirect communication is established only when processes share a common
mailbox, and each pair of processes may share several communication links.
A link can communicate with many processes, and it may be bidirectional or
unidirectional.
Shared Memory:
Shared memory is a region of memory shared between two or more
processes, established by mapping the same memory into each process's
address space. Access to this memory must be synchronized so that the
processes are protected from each other's partial updates.
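
A minimal sketch using Python's multiprocessing module (the counter and iteration counts are hypothetical); the lock provides the synchronization the paragraph above calls for:

from multiprocessing import Process, Value, Lock

def deposit(balance, lock):
    for _ in range(100_000):
        with lock:                 # synchronize access to the shared memory
            balance.value += 1

if __name__ == "__main__":
    balance = Value("i", 0)        # an integer placed in shared memory
    lock = Lock()
    workers = [Process(target=deposit, args=(balance, lock)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(balance.value)           # 200000: no updates are lost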
FIFO:
A FIFO (named pipe) allows communication between two unrelated
processes. It is a full-duplex method, which means that the first process can
communicate with the second process and the opposite can also happen.
The critical-section problem is to design a protocol to be followed by a
group of processes, so that when one process has entered its critical section,
no other process is allowed to execute in its critical section.
The critical section refers to the segment of code where processes access
shared resources, such as common variables and files, and perform write
operations on them.
Since processes execute concurrently, any process can be interrupted mid-
execution. In the case of shared resources, partial execution of processes can
lead to data inconsistencies. When two processes access and manipulate a
shared resource concurrently, and the resulting execution outcome depends on
the order in which the processes access the resource, this is called a race
condition.
Race conditions lead to inconsistent states of data. Therefore, we need a
synchronization protocol that allows processes to cooperate while
manipulating shared resources, which essentially is the critical section
problem.
Semaphore Solution
A semaphore is simply a non-negative variable shared between threads. It is
another solution to the critical-section problem. It is a signalling mechanism:
a thread that is waiting on a semaphore can be signalled by another thread.
It uses two atomic operations, 1) wait and 2) signal, for process
synchronization.
Example
WAIT(S):
    while (S <= 0);
    S = S - 1;

SIGNAL(S):
    S = S + 1;
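
A minimal sketch with Python's threading module (the counter and thread counts are hypothetical); the binary semaphore plays the role of S above, so the shared counter is never updated by two threads at once:

import threading

sem = threading.Semaphore(1)       # S = 1: at most one thread inside
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        sem.acquire()              # wait(S): blocks while S == 0
        counter += 1               # critical section
        sem.release()              # signal(S): admit the next waiter

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # 200000: no lost updates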
SET II

4. a. What is a Process Control Block? What information is stored in it?


Ans:
While creating a process, the operating system performs several operations.
To identify processes, it assigns a process identification number (PID) to each
process. As the operating system supports multi-programming, it needs to
keep track of all the processes; for this task, the process control block (PCB)
is used to track each process's execution status. Each block contains
information about the process state, program counter, stack pointer, status of
opened files, scheduling information, etc. All this information must be saved
when the process is switched from one state to another, and when a process
makes such a transition, the operating system must update the information in
its PCB.
A process control block (PCB) thus contains information about the process:
registers, quantum, priority, and so on. The process table is an array of PCBs;
that is, it logically contains a PCB for every current process in the system.

 Pointer – the stack pointer, which must be saved when the process is
switched from one state to another so that the current position of the
process is retained.
 Process state – stores the current state of the process.
 Process number – every process is assigned a unique ID, known as the
process ID or PID, which is stored in this field.
 Program counter – holds the address of the next instruction to be
executed for the process.
 Registers – the CPU registers, including the accumulator, base and
index registers, and general-purpose registers.
 Memory limits – information about the memory-management
structures used by the operating system for this process, such as page
tables and segment tables.
 Open files list – the list of files opened by the process.
The PCB simply serves as the repository of any information that may vary
from process to process.
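
As a rough illustration, a PCB can be pictured as a record with one field per item above (a hypothetical Python layout; a real kernel defines this as a C structure with many more fields):

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # process identification number
    state: str                     # e.g. "new", "ready", "running", "waiting"
    program_counter: int           # address of the next instruction
    stack_pointer: int             # saved on every state transition
    registers: dict = field(default_factory=dict)    # saved CPU registers
    memory_limits: tuple = (0, 0)                    # e.g. base and limit
    open_files: list = field(default_factory=list)   # open files list
    priority: int = 0

# The process table is then logically an array of PCBs:
process_table = [PCB(pid=1, state="running", program_counter=0x4000, stack_pointer=0x7FF0)]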

b. What is thrashing? What are its causes?


Ans: When a process does not have enough frames, or when it is executing
with only a minimum set of frames allocated to it that are all in active use,
there is always a possibility that the process will page-fault quickly. A page in
active use becomes a victim, and hence page faults occur again and again. In
this case a process spends more time in paging than in executing. This high
paging activity is called thrashing.

Thrashing is caused by under-allocation of the minimum number of pages
required by a process, forcing it to page-fault continuously. The system can
detect thrashing by evaluating the level of CPU utilization as compared to the
level of multiprogramming. It can be eliminated by reducing the level of
multiprogramming.
5. a. Discuss the different File Access Methods.
Ans:
There are several files stored on a computer. When an application needs a
file, the operating system has to search the storage and access the required
file. The various ways in which the operating system can access the
information in files are called file access methods. These methods provide
operations like reading from or writing to a file stored on the computer. The
various file access methods are,

 Sequential Access Method


 Direct Access Method
 Indexed Access Method
 Indexed Sequential Access Method.

Sequential Access Method:


Among all the access methods, this is considered the simplest. As the name
itself suggests, it is the sequential (serial) processing of the information
present in a file. Due to its simplicity, most compilers, editors, etc. use this
method.
Processing is carried out with two operations, read and write. The read
operation reads the next portion of the file and, after a successful read of a
record, automatically advances the pointer that tracks the I/O location to the
next record. The write operation appends at the end of the file and shifts the
pointer to the end of the newly added record.

In this type of access, while processing the records sequentially, some of
the records can be skipped in either direction (forward or backward), and the
pointer can also be reset, or rewound, to the head (beginning) of the file. A
magnetic tape is the classic model of sequential file access.
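
A minimal Python sketch of sequential access (the file name is hypothetical); every read advances an implicit pointer, and seek(0) rewinds to the head of the file:

with open("records.txt", "r") as f:
    first = f.readline()    # read the next record; the pointer advances
    second = f.readline()   # continues right after the first record
    f.seek(0)               # rewind to the beginning of the file
    again = f.readline()    # the same record as 'first'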

Direct Access Method: This access method is also called relative or random
access; records can be read irrespective of their sequence. The file is treated
the way a disk is accessed, with each record carrying a sequence number, so,
for example, block 40 can be accessed first, followed by block 10 and then
block 30, and so on. This eliminates the need for sequential read or write
operations.
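
A minimal Python sketch of the same idea (the file name and record size are hypothetical); seek jumps straight to any numbered block, in any order:

RECORD_SIZE = 64                       # hypothetical fixed record size in bytes

with open("records.dat", "rb") as f:
    f.seek(40 * RECORD_SIZE)           # straight to block 40
    block40 = f.read(RECORD_SIZE)
    f.seek(10 * RECORD_SIZE)           # then block 10: no sequential scan needed
    block10 = f.read(RECORD_SIZE)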

Indexed Access Method:


This method is typically an advancement of the direct access method, with
the addition of an index. A particular record is accessed by browsing through
the index, and the file is then accessed directly using the pointer or address
found in the index.
Indexed Sequential Access Method: To overcome the drawback associated
with indexed access on large files, this method creates an index of the index:
a primary index points to a secondary index, and the secondary index points
to the actual data items.

b. What are I/O Control Strategies?


Ans:
Programmed I/O
The programmed I/O method controls the transfer of data between connected
devices and the computer. Each I/O device connected to the computer is
continually checked (polled) for input. Once the computer receives an input
signal from a device, it carries out that request until it no longer receives an
input signal. Say you want to print a document: when you select print on your
computer, the request is sent through the central processing unit (CPU), and
the communication signal is acknowledged and sent out to the printer.
Interrupt-Based I/O
The interrupt-based I/O method controls the data transfer activity to and from
connected I/O devices. It allows the CPU to continue processing other work,
interrupting it only when an input signal arrives from an I/O device. For
example, if you strike a key on a keyboard, the interrupt mechanism signals
the CPU that it needs to pause its current task and carry out the request from
the keystroke.
Direct Memory Access
The name itself explains what the direct memory access (DMA) I/O method
does: it directly transfers blocks of data between memory and I/O devices
without having to involve the CPU. If the CPU were involved, it would slow
down the computer. When an input signal is received from an I/O device that
requires access to memory, the DMA controller receives the information
necessary to make that transfer, allowing the CPU to continue with its other
tasks. For example, if you need to transfer pictures from a camera plugged
into a USB port on your computer, instead of the CPU processing this
request, the signal is sent to the DMA controller, which handles it.
6. Explain the different Multiprocessor Interconnections and types of
Multiprocessor Operating Systems.
Ans:
The nature of multiprocessor interconnections has an effect on the bandwidth
for communication. Complexity, cost, IPC and scalability are some features
considered in interconnections. Basic architectures for multiprocessor
interconnections are as follows:
 Bus-oriented systems
 Crossbar-connected systems
 Hypercubes
 Multistage switch-based systems

Bus-oriented systems: A shared bus connects the processors and memory in
the multiprocessor system.

Processors communicate with each other and with the shared memory
through the shared bus. Variations of this basic scheme are possible:
processors may or may not have local memory, I/O devices may be attached
to individual processors or to the shared bus, and the shared memory itself
can have multiple banks. Since the bus and the memory are shared resources,
there is always a possibility of contention. Cache memory is often used to
relieve this contention, and a cache associated with each individual processor
gives better performance. A 90% cache hit ratio improves the speed of a
multiprocessor system nearly 10 times compared to a system without caches.
However, the existence of multiple caches in individual processors creates
the cache-coherence problem: multiple physical copies of the same data must
be kept consistent in case of an update. Maintaining cache coherence
increases bus traffic and reduces the achieved speedup by some amount. Use
of a parallel bus increases bandwidth. The tightly coupled, shared-bus
organization usually supports about 10 processors. Because of its simple
implementation, many commercial multiprocessor designs are based on the
shared-bus concept.
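
The "nearly 10 times" figure follows from the effective access time; a quick check in Python with hypothetical cycle counts (a cache hit far cheaper than a bus transaction):

t_cache, t_bus = 1, 100                # hypothetical: 1-cycle hit, 100 cycles over the bus
hit_ratio = 0.9
t_eff = hit_ratio * t_cache + (1 - hit_ratio) * t_bus
print(t_bus / t_eff)                   # ~9.2x speedup, close to the 10x quoted above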

Crossbar-connected systems: Processors and memory in a multiprocessor
system can be interconnected using a crossbar switch.

Simultaneous access by 'n' processors to 'n' memories is possible if each
processor accesses a different memory. The crossbar switch is then the only
cause of delay between processor and memory. If no local memory is
available in the processors, the system is a UMA multiprocessor system.
Contention occurs when more than one processor attempts to access the same
memory at the same time. Careful distribution of data among the different
memory modules can reduce or eliminate contention. A high degree of
parallelism exists between unrelated tasks, but contention is possible if
inter-process and inter-processor communication and synchronization are
based on shared memory, for example semaphores. Since 'n' processors and
'n' memories are fully connected, n^2 cross points exist; this quadratic growth
makes the system expensive and limits scalability.
Hypercubes: The cube topology has one processor at each node (vertex). In a
3-dimensional cube (a higher-dimensional cube cannot easily be visualized),
2^3 = 8 processors are interconnected. The result is a NORMA-type
multiprocessor and is a common hypercube implementation. Each processor
at a node has a direct link to log2N nodes, where N is the total number of
nodes in the hypercube; for example, in a 3-dimensional hypercube, N = 8
and each node is connected to log2 8 = 3 nodes. Hypercubes can be recursive
structures, with higher-dimension cubes containing lower-dimension cubes as
proper subsets; for example, a 3-dimensional cube has two 2-dimensional
cubes as subsets. Hypercubes are a good basis for scalability, since
complexity grows logarithmically, whereas it is quadratic in the crossbar
case. They are best suited to problems that map onto a cube structure, that
rely on recursion, or that exhibit locality of reference in the form of frequent
communication with adjacent nodes. Hypercubes form a promising basis for
large-scale multiprocessors. Message passing is used for inter-processor
communication and synchronization. Increased bandwidth is sometimes
provided through dedicated nodes that act as sources or repositories of data
for clusters of nodes.
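
A minimal Python sketch of this topology (the node addresses are illustrative bit patterns); each node's log2N neighbours differ from it in exactly one address bit:

def neighbours(node: int, d: int) -> list:
    # In a d-dimensional hypercube, flipping one address bit gives one link
    return [node ^ (1 << i) for i in range(d)]

print(neighbours(0b000, 3))   # [1, 2, 4], i.e. nodes 001, 010, 100: log2(8) = 3 links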
Multistage switch-based systems: Processors and memory in a
multiprocessor system can be interconnected by the use of a multistage
switch. A generalized interconnection of this type links N inputs and N
outputs through log2N stages, each stage having N links into N/2 interchange
boxes.

For N = 8, the network has log2N = log2 8 = 3 stages of N/2 = 8/2 = 4
switches each.

Each switch is a 2x2 crossbar that can do any one of the following:
 Copy input straight through to output
 Swap the inputs onto the opposite outputs
 Copy one input to both outputs
Routing is fixed and is based on the destination address and the source
address. In general, to go from source S to destination D, the ith-stage switch
box in the path from S to D should be set to swap if Si ≠ Di and set to straight
if Si = Di. Illustration: if S = 000 and D = 000, then Si = Di for all bits, so all
switches are set straight. If S = 010 and D = 100, then S1 ≠ D1, S2 ≠ D2 and
S3 = D3, so the switches in the first two stages should be set to swap and the
third to straight.
A multistage switching network provides a form of circuit switching where
traffic can flow freely using the full bandwidth when blocks of memory are
requested at a time. All inputs can be connected to all outputs provided each
processor is accessing a different memory. Contention may occur at a
memory module or within the interconnection network; buffering can relieve
contention to some extent.
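
A minimal Python sketch of this bit-by-bit routing rule (the addresses and stage count follow the N = 8 illustration above):

def switch_settings(src: int, dst: int, stages: int) -> list:
    # Compare source and destination address bits, most significant bit first
    settings = []
    for i in range(stages - 1, -1, -1):
        s_bit, d_bit = (src >> i) & 1, (dst >> i) & 1
        settings.append("straight" if s_bit == d_bit else "swap")
    return settings

print(switch_settings(0b010, 0b100, 3))   # ['swap', 'swap', 'straight']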
