
OPERATING SYSTEM QUESTION BANK

1. List out different services of Operating Systems and explain each service.
Ans:
An operating system is an interface that provides services both to the user and to programs. It provides an environment for programs to execute, and it provides users with a convenient way to run those programs. The operating system provides a number of services to programs and to the users of those programs; the specific services provided differ from one operating system to another.
Following are the common services provided by an operating system:
1. Program execution
2. I/O operations
3. File system manipulation
4. Communication
5. Error detection
6. Resource allocation
7. Protection
1) Program Execution
• An operating system must be able to load many kinds of programs into memory and run them. A program must be able to end its execution, either normally or abnormally.
• A process is the complete execution of a written program or code. The operating system performs the following activities:
o The operating system loads the program into memory.
o It executes the program.
o It handles the program's execution.
o It provides a mechanism for process synchronization.
o It provides a mechanism for process communication.
2) I/O Operations
• The communication between the user and device drivers is managed by the operating system.
• I/O devices may be required by any running process; an I/O operation can involve a file or an I/O device.
• I/O operations are the read or write operations performed with the help of input/output devices.
• The operating system gives access to the I/O devices when required.
3) File system manipulation
• A collection of related information that represents some content is known as a file. The computer stores files on secondary storage devices for long-term storage; examples of storage media include magnetic tape, magnetic disk and optical disks such as CD and DVD.
• A file system is a collection of directories for easy understanding and usage, and these directories contain files. The major activities performed by an operating system with respect to file management are:
o The operating system gives a program access to perform operations on a file.
o Programs need to read and write files.
o The user can create/delete a file using an interface provided by the operating system.
o The operating system provides an interface for the user to create/delete directories.
o A backup of the file system can be created using an interface provided by the operating system.
4) Communication
In a computer system there may be a collection of processors that do not share memory, peripheral devices, or a clock; the operating system manages communication between all the processes. Multiple processes can communicate with each other through communication lines in the network. The major activities carried out by an operating system with respect to communication are:
• Two processes may require data to be transferred between them.
• Both processes can be on one computer or on different computers, connected through a computer network.
5) Error handling
An error in one part of the system may cause malfunctioning of the complete system. The operating system constantly monitors the system to detect errors and avoid such situations. This relieves the user of the worry that an error in some part of the system will cause malfunctioning.
An error can occur at any time and anywhere: in the CPU, in I/O devices, or in the memory hardware. The activities performed by an operating system are:
• The OS continuously checks for possible errors.
• The OS takes appropriate action to correct errors and ensure consistent computing.
6) Resource management
When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. The major activities performed by an operating system are:
• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of the CPU.
7) Protection
The owners of information stored in a multi-user computer system want to control its use. When several disjoint processes execute concurrently, it should not be possible for one process to interfere with another. Every process in the computer system must be secured and controlled.

2. Explain the following terms and their working with a diagram
a) Simple Batch System

In a Batch Operating System, similar jobs are grouped together into batches with the help of an operator, and these batches are executed one by one. For example, assume that we have 10 programs that need to be executed: some are written in C++, some in C and the rest in Java. If we run these programs individually, we have to load the compiler of the particular language every time and then execute the code. But if we make a batch of these 10 programs, then for the C++ batch the compiler needs to be loaded only once; similarly, for Java and C the compiler is loaded only once and the whole batch is executed. The following image describes the working of a Batch Operating System.

Advantages:
1. The overall time taken by the system to execute all the programs is reduced.
2. The Batch Operating System can be shared between multiple users.
Disadvantages:
1. Manual intervention is required between two batches.
2. CPU utilization is low, because the time taken in loading and unloading batches is very high compared to the execution time.

b) Parallel Processing

Parallel processing requires multiple processors, and all the processors work simultaneously in the system. The task is divided into subparts, and these subparts are distributed among the available processors. Parallel processing completes the job in the shortest possible time.
All the processors in a parallel processing environment run the same operating system. All processors are tightly coupled and packed in one casing. All the processors in the system share common secondary storage, such as the hard disk, as this is the first place where the programs are placed.
One more thing all the processors share is the user terminal (from where the user interacts with the system). The user need not be aware of the inner architecture of the machine; he should feel that he is dealing with a single processor, and his interaction with the system is the same as with a single processor.

Advantages
1. It saves time and money, as many resources working together reduce the time and cut potential costs.
2. It can be impractical to solve larger problems with serial computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial computing "wastes" potential computing power, so parallel computing makes better use of the hardware.
Disadvantages
1. Communication and synchronization between multiple sub-tasks and processes are difficult to achieve.
2. The algorithms must be designed in such a way that they can be handled in a parallel mechanism.
3. The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
4. Only more technically skilled and expert programmers can code a parallelism-based program well.

c) Time sharing

In a multi-tasking operating system, more than one process is executed at a particular time with the help of the time-sharing concept. In the time-sharing environment, we decide on a period called the time quantum; when a process starts its execution, the execution continues only for that amount of time, after which other processes are given a chance for that amount of time. In the next cycle, the first process again comes up for execution, runs for that time quantum only, and then the next process comes. This cycle continues. The following image describes the working of a Time-Sharing Operating System.

Advantages:
1. Since an equal time quantum is given to each process, each process gets an equal opportunity to execute.
2. The CPU is busy in most cases, which is a good thing.
Disadvantages:
1. A process having higher priority will not get the chance to be executed first, because equal opportunity is given to each process.

d) Distributed Operating System

These types of operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world, and at a great pace. Various autonomous interconnected computers communicate with each other using a shared communication network. Independent systems possess their own memory unit and CPU; these are referred to as loosely coupled systems or distributed systems. The processors in these systems may differ in size and function. The major benefit of working with this type of operating system is that a user can always access files or software that are not actually present on his own system but on some other system connected within the network, i.e., remote access is enabled among the devices connected in that network.

Advantages of Distributed Operating System:
• Failure of one system will not affect the other network communication, as all systems are independent of each other.
• Electronic mail increases the data exchange speed.
• Since resources are shared, computation is highly fast and durable.
• The load on the host computer is reduced.
• These systems are easily scalable, as many systems can be easily added to the network.
• Delay in data processing is reduced.
Disadvantages of Distributed Operating System:
• Failure of the main network will stop the entire communication.
• The languages used to establish distributed systems are not well defined yet.
• These types of systems are not readily available, as they are very expensive.

e) Real-time Operating System

It is developed for real-time applications where data should be processed within a fixed, small duration of time. It is used in environments where multiple processes must be accepted and processed in a short time. An RTOS requires quick input and immediate response; e.g., in a petroleum refinery, if the temperature gets too high and crosses the threshold value, there should be an immediate response to this situation to avoid an explosion. Similarly, this system is used to control scientific instruments, missile launch systems, traffic light control systems, air traffic control systems, etc.
This system is further divided into two types based on the time constraints:
Hard Real-Time Systems:
These are used for applications where timing is critical or response time is a major factor; even a delay of a fraction of a second can result in disaster. For example, airbags and automatic parachutes that open instantly in case of an accident. Besides this, these systems lack virtual memory.
Soft Real-Time Systems:
These are used for applications where timing or response time is less critical. Here, failure to meet a deadline may result in degraded performance instead of a disaster. Examples include video surveillance (CCTV), video players, virtual reality, etc. Here, the deadlines are not critical for every task every time.
Advantages of real-time operating system:
o The output is greater and quicker, owing to the maximum utilization of devices and the system.
o Task shifting is very quick, e.g., 3 microseconds, due to which it seems that several tasks are executed simultaneously.
o It gives more importance to the currently running applications than to queued applications.
o It can be used in embedded systems, such as in transport and others.
o It is free of errors.
o Memory is allocated appropriately.
Disadvantages of real-time operating system:
o Fewer tasks can run simultaneously, to avoid errors.
o It is not easy for a designer to write the complex and difficult algorithms or proficient programs required to get the desired output.
o Specific drivers and interrupt signals are required to respond to interrupts quickly.
o It may be very expensive due to the resources required.

3. List out the commands in LINUX File system with explanation.

LINUX File system

In the Linux file structure, files are grouped according to purpose, e.g., commands, data files, documentation. The parts of a Unix directory tree are listed below. All directories are grouped under the root entry "/"; that part of the directory tree is left out of the diagram below.
1. / – Root
• Every single file and directory starts from the root directory.
• Only the root user has write privilege under this directory.
• Please note that /root is the root user's home directory, which is not the same as /.
2. /bin – User Binaries
• Contains binary executables.
• Common Linux commands you need to use in single-user mode are located under this directory.
• Commands used by all the users of the system are located here.
• For example: ps, ls, ping, grep, cp.
3. /sbin – System Binaries
• Just like /bin, /sbin also contains binary executables.
• But the Linux commands located under this directory are typically used by the system administrator, for system maintenance purposes.
• For example: iptables, reboot, fdisk, ifconfig, swapon
4. /etc – Configuration Files
• Contains configuration files required by all programs.
• This also contains startup and shutdown shell scripts used to start/stop individual programs.
• For example: /etc/resolv.conf, /etc/logrotate.conf
5. /dev – Device Files
• Contains device files.
• These include terminal devices, USB, or any device attached to the system.
• For example: /dev/tty1, /dev/usbmon0
6. /proc – Process Information
• Contains information about system processes.
• This is a pseudo filesystem containing information about running processes. For example: the /proc/{pid} directory contains information about the process with that particular pid.
• This is a virtual filesystem with text information about system resources. For example: /proc/uptime
7. /var – Variable Files
• var stands for variable files.
• Content of files that are expected to grow can be found under this directory.
• This includes system log files (/var/log); packages and database files (/var/lib); emails (/var/mail); print queues (/var/spool); lock files (/var/lock); temp files needed across reboots (/var/tmp).
8. /tmp – Temporary Files
• Directory that contains temporary files created by the system and users.
• Files under this directory are deleted when the system is rebooted.
9. /usr – User Programs
• Contains binaries, libraries, documentation, and source code for second-level programs.
• /usr/bin contains binary files for user programs. If you can't find a user binary under /bin, look under /usr/bin. For example: at, awk, cc, less, scp
• /usr/sbin contains binary files for system administrators. If you can't find a system binary under /sbin, look under /usr/sbin. For example: atd, cron, sshd, useradd, userdel
• /usr/lib contains libraries for /usr/bin and /usr/sbin
• /usr/local contains user programs that you install from source. For example, when you install Apache from source, it goes under /usr/local/apache2
10. /home – Home Directories
• Home directories for all users to store their personal files.
• For example: /home/john, /home/nikita
11. /boot – Boot Loader Files
• Contains boot loader related files.
• Kernel initrd, vmlinuz and grub files are located under /boot.
• For example: initrd.img-2.6.32-24-generic, vmlinuz-2.6.32-24-generic
12. /lib – System Libraries
• Contains library files that support the binaries located under /bin and /sbin.
• Library filenames are either ld* or lib*.so.*
• For example: ld-2.11.1.so, libncurses.so.5.7
13. /opt – Optional add-on Applications
• opt stands for optional.
• Contains add-on applications from individual vendors.
• Add-on applications should be installed under an /opt/ sub-directory.
14. /mnt – Mount Directory
• Temporary mount directory where sysadmins can mount filesystems.
15. /media – Removable Media Devices
• Temporary mount directory for removable devices.
• For example, /media/cdrom for CD-ROM; /media/floppy for floppy drives; /media/cdrecorder for CD writers.
16. /srv – Service Data
• srv stands for service.
• Contains server-specific, service-related data.
• For example, /srv/cvs contains CVS related data.

4. How many methods are there to initialize a process? Give a brief note on those methods.

Initializing a process

A process can be run in two ways:

Method 1: Foreground Process: Every process, when started, runs in the foreground by default: it receives input from the keyboard and sends output to the screen. For example, issuing the pwd command:
$ pwd
Output:
/home/geeksforgeeks/root
When a command/process is running in the foreground and is taking a lot of time, no other processes can be run or started, because the prompt is not available until the program finishes processing and exits.
Method 2: Background Process: It runs in the background without keyboard input and waits until keyboard input is required. Thus, other processes can run in parallel with the process running in the background, since they do not have to wait for the previous process to be completed.
Adding & along with the command starts it as a background process:
$ pwd &
Since pwd does not want any input from the keyboard, it goes to the stopped state until it is moved to the foreground and given any data input. Thus, on pressing Enter:
Output:
[1] + Done pwd
$
The first line contains information about the background process – the job number and the process ID. It tells you that the pwd command background process finished successfully. The second line is a prompt for another command.

a) Network Information Tools

1. ping: The ping command is used to check if a remote system is running or up. In short, this command is used to detect whether a system is connected to the network or not.
Syntax:
$ ping www.geeksforgeeks.com
Note: In place of a domain name you can also use an IP address. A ping operation can fail if ping access is denied by a network firewall.
2. host: This command is used to obtain network address information about a remote system connected to your network. This information usually consists of the system's IP address, domain name address and sometimes the mail server as well.
Syntax:
$ host www.google.com
3. finger: finger can be used to obtain information about the users on your network, and the who command to see what users are currently online on your system. The who command lists all users currently connected, along with when, how long, and where they logged in. finger can operate on large networks, though most systems block it for security reasons.
Syntax:
$ finger www.ABC.com
In place of ABC you can use any website domain or IP address.
4. traceroute: This command is used to trace the route through which you are connected to a host. The mtr or xmtr tools can also be used to perform both pings and traces. Options are available for specifying parameters like the type of service (-t) or the source host (-s).
5. netstat: This command is used to check the status of ports: whether they are open, closed, waiting, or masquerade connections. The Network Statistics (netstat) command displays connection information, routing table information, etc.
Syntax:
$ netstat
Note: To display routing table information use netstat -r.
6. tracepath: tracepath performs a very similar function to the traceroute command. The main difference between these commands is that tracepath doesn't take complicated options. This command doesn't require root privileges.
Syntax:
$ tracepath www.google.com
7. dig: dig (Domain Information Groper) queries DNS-related information like A records, CNAME, MX records, etc. This command is used to resolve DNS-related queries.
Syntax:
$ dig www.google.com
8. hostname: This command is used to see the hostname of your computer. You can change the hostname permanently in /etc/sysconfig/network. After changing the hostname you need to reboot the computer.
Syntax:
$ hostname
9. route: The route command is used to display or modify the routing table. To display the routing table in numeric form, use -n.
Syntax:
$ route -n
10. nslookup: You can use the nslookup (name server lookup) command to find out DNS-related information, or for testing and troubleshooting a DNS server.
Syntax:
$ nslookup google.com

b) Filters in Linux

Filters are programs that take plain text (either stored in a file or produced by another program) as standard input, transform it into a meaningful format, and then return it as standard output. Linux has a number of filters. Some of the most commonly used filters are explained below:
1. cat: Displays the text of the file line by line.
Syntax:
cat [path]
2. head: Displays the first n lines of the specified text files. If the number of lines is not specified, it prints the first 10 lines by default.
Syntax:
head [-number_of_lines_to_print] [path]
3. tail: It works the same way as head, just in reverse order. The only difference is that tail returns the lines from bottom to top.
Syntax:
tail [-number_of_lines_to_print] [path]
4. sort: Sorts the lines alphabetically by default, but there are many options available to modify the sorting mechanism. Be sure to check the man page to see everything it can do.
Syntax:
sort [-options] [path]
5. uniq: Removes duplicate lines. uniq has the limitation that it can only remove consecutive duplicate lines (although this can be fixed by the use of piping).
Syntax:
uniq [options] [path]
When applying uniq to sorted data, it removes all duplicate lines because, after sorting, duplicate lines come together.
6. wc: The wc command gives the number of lines, words and characters in the data.
Syntax:
wc [-options] [path]
By default wc gives 4 outputs:
• number of lines
• number of words
• number of characters
• path
7. grep: grep is used to search for particular information in a text file.
Syntax:
grep [options] pattern [path]
8. tac: tac is just the reverse of cat and works the same way, i.e., instead of printing lines 1 through n, it prints lines n through 1.
Syntax:
tac [path]
9. sed: sed stands for stream editor. It allows us to apply search and replace operations on our data effectively. sed is quite an advanced filter and all its options can be seen on its man page.
Syntax:
sed [options] [path]
10. nl: nl is used to number the lines of our text data.
Syntax:
nl [-options] [path]

5. (a) Explain the Layered Architecture of an Operating System.

Layered Architecture:

The Linux system architecture consists of the following layers:

• Hardware layer – Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).
• Kernel – The core component of the operating system; it interacts directly with hardware and provides low-level services to the upper-layer components.
• Shell – An interface to the kernel, hiding the complexity of the kernel's functions from users. It takes commands from the user and executes the kernel's functions.
• Utilities – Utility programs that give the user most of the functionalities of an operating system.

(b) Write a brief note on Logical File System

The logical file system is the level of the file system at which users can request file operations via system calls. This level of the file system provides the kernel with a consistent view of what might be multiple physical file systems and multiple file system implementations. As far as the logical file system is concerned, file system types, whether local, remote, or strictly logical, and regardless of implementation, are indistinguishable.

A consistent view of file system implementations is made possible by the virtual file system abstraction. This abstraction specifies the set of file system operations that an implementation must include in order to carry out logical file system requests. Physical file systems can differ in how they implement these predefined operations, but they must present a uniform interface to the logical file system. A list of file system operators can be found at Requirements for a File System Implementation. For more information on the virtual file system, see Virtual File System Overview.

1. Explain about process scheduling. Explain different types of schedulers.

The CPU is kept busy in multiprogramming because the CPU switches from one job to another, whereas in simple computers the CPU sits idle until the I/O request is granted.
Scheduling is an important OS function. All resources (CPU, memory, devices, ...) are scheduled before use.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Scheduling Objectives
• Maximize throughput.
• Maximize the number of users receiving acceptable response times.
• Be predictable.
• Balance resource use.
• Avoid indefinite postponement.
• Enforce priorities.
• Give preference to processes holding key resources.

SCHEDULING QUEUES: Just as people live in rooms, processes are present in "rooms" known as queues. There are 3 types:
1. Job queue: when processes enter the system, they are put into a job queue, which consists of all processes in the system. Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue: if a process is present in main memory and is ready to be allocated the CPU for execution, it is kept in the ready queue.
3. Device queue: if a process is in the waiting state (or) waiting for an I/O event to complete, it is said to be in a device queue. (or) The processes waiting for a particular I/O device form that device's queue.

Schedulers: There are 3 schedulers:
1. Long term scheduler
2. Medium term scheduler
3. Short term scheduler
Scheduler duties:
• Maintains the queues.
• Selects a process from the queues and assigns it to the CPU.
Types of schedulers
1. Long term scheduler: selects jobs from the job pool and loads these jobs into main memory (ready queue). The long term scheduler is also called the job scheduler.
2. Short term scheduler: selects a process from the ready queue and allocates it to the CPU. If a process requires an I/O device which is not currently available, the process enters the device queue. The short term scheduler maintains the ready queue and device queues; it is also called the CPU scheduler.
3. Medium term scheduler: if a process requests an I/O device in the middle of its execution, the process is removed from main memory and placed in the waiting queue. When the I/O operation completes, the job is moved from the waiting queue to the ready queue. These two operations are performed by the medium term scheduler.

2. a) Differentiate between processes and threads

Process vs. Thread
• A process takes more time to create; a thread takes less time to create.
• A process takes more time to complete execution and terminate; a thread takes less time to terminate.
• Process execution is very slow; thread execution is very fast.
• It takes more time to switch between two processes; it takes less time to switch between two threads.
• Communication between two processes is difficult; communication between two threads is easy.
• System calls are required for processes to communicate with each other; system calls are not required between threads.
• Processes are loosely coupled; threads are tightly coupled.
• A process requires more resources to execute; a thread requires fewer resources.
• Processes can't share the same memory area; threads can share the same memory area.
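The contrast above can be seen directly in code. The following is a minimal, illustrative C sketch (compile with -pthread), not part of the original notes: a child created with fork() gets its own copy of a variable, while a thread created with pthread_create() writes to the memory of its own process.

/* Sketch: process vs. thread memory sharing. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
#include <pthread.h>

int shared = 0;                 /* visible to threads, copied on fork */

void *thread_fn(void *arg) {
    shared = 42;                /* the thread updates the process's own memory */
    return NULL;
}

int main(void) {
    if (fork() == 0) {          /* child process gets its own copy of 'shared' */
        shared = 7;             /* does not affect the parent                  */
        _exit(0);
    }
    wait(NULL);

    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);

    printf("shared = %d\n", shared);   /* prints 42: the thread shared memory,
                                          the forked child did not             */
    return 0;
}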

b) Write a note on Process states

When a process executes, it changes state; generally, the state of a process is determined by the current activity of the process. Each process may be in one of the following states:
1. New: The process is being created.
2. Running: The process is being executed.
3. Waiting: The process is waiting for some event to occur.
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
Only one process can be running on any processor at any time, but many processes may be in the ready and waiting states. The ready processes are loaded into a "ready queue".
Diagram of process state

a) New -> Ready: The OS creates the process and prepares it to be executed, then the OS moves the process into the ready queue.
b) Ready -> Running: The OS selects one of the jobs from the ready queue and moves it from ready to running.
c) Running -> Terminated: When the execution of a process has completed, the OS terminates that process from the running state. Sometimes the OS terminates the process for other reasons, including time exceeded, memory unavailable, access violation, protection error, I/O failure and so on.
d) Running -> Ready: When the time slot of the processor expires (or) if the processor receives an interrupt signal, the OS shifts the process from the Running to the Ready state.
e) Running -> Waiting: A process is put into the waiting state if it needs an event to occur (or) requires an I/O device.
f) Waiting -> Ready: A process in the waiting state is moved to the ready state when the event for which it has been waiting completes.

c) Explain the scheduling criteria and Scheduling Queues

SCHEDULING CRITERIA:
1. Throughput: how many jobs are completed by the CPU within a time period.
2. Turnaround time: the time interval between the submission of a process and the time of its completion.
TAT = waiting time in ready queue + executing time + waiting time in waiting queue for I/O.
3. Waiting time: the time spent by the process waiting for the CPU to be allocated.
4. Response time: the time duration between submission and the first response.
5. CPU utilization: the CPU is a costly device, so it must be kept as busy as possible. E.g., a CPU efficiency of 90% means it is busy for 90 units and idle for 10 units.

SCHEDULING QUEUES: Just as people live in rooms, processes are present in "rooms" known as queues. There are 3 types:
1. Job queue: when processes enter the system, they are put into a job queue, which consists of all processes in the system. Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue: if a process is present in main memory and is ready to be allocated the CPU for execution, it is kept in the ready queue.
3. Device queue: if a process is in the waiting state (or) waiting for an I/O event to complete, it is said to be in a device queue. (or) The processes waiting for a particular I/O device form that device's queue.

3. a) Explain Priority scheduling algorithm with example

Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems. Each process is assigned a priority. The process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first come, first served basis.

P4 has the highest priority. Allocate the CPU to process P4 first, then P1, P5, P2, P3.

AVERAGE WAITING TIME:
Waiting time for P1 => 3-0 = 3
Waiting time for P2 => 13-0 = 13
Waiting time for P3 => 25-0 = 25
Waiting time for P4 => 0
Waiting time for P5 => 9-0 = 9
Average waiting time => (3+13+25+0+9)/5 = 10 ms
AVERAGE TURNAROUND TIME:
Turnaround time for P1 => 3+6 = 9
Turnaround time for P2 => 13+12 = 25
Turnaround time for P3 => 25+1 = 26
Turnaround time for P4 => 0+3 = 3
Turnaround time for P5 => 9+4 = 13
Average turnaround time => (9+25+26+3+13)/5 = 15.2 ms
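The process table for this example exists only as a figure in the original, so the values in the minimal C sketch below are reconstructed assumptions: burst times 3, 6, 4, 12 and 1 ms for P4, P1, P5, P2, P3, and hypothetical priority numbers chosen so that a smaller value means higher priority. It reproduces the averages computed above.

/* Non-preemptive priority scheduling sketch (all processes arrive at time 0). */
#include <stdio.h>

struct proc { const char *name; int burst, priority; };

int main(void) {
    /* hypothetical values; priorities chosen so that P4 > P1 > P5 > P2 > P3 */
    struct proc p[] = { {"P4", 3, 1}, {"P1", 6, 2}, {"P5", 4, 3},
                        {"P2", 12, 4}, {"P3", 1, 5} };
    int n = 5, time = 0;
    double total_wait = 0, total_tat = 0;

    /* the array is already ordered by priority; a real scheduler would sort here */
    for (int i = 0; i < n; i++) {
        int wait = time;                 /* waiting time                  */
        int tat  = wait + p[i].burst;    /* turnaround = waiting + burst  */
        printf("%s: waiting=%d turnaround=%d\n", p[i].name, wait, tat);
        total_wait += wait;
        total_tat  += tat;
        time += p[i].burst;
    }
    printf("average waiting=%.1f ms, average turnaround=%.1f ms\n",
           total_wait / n, total_tat / n);   /* 10.0 ms and 15.2 ms */
    return 0;
}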

b) Explain Round Robin scheduling algorithm with example.

In Round Robin scheduling, the CPU runs each job for a small unit of time called a time quantum or time slice; when the time quantum expires, the CPU is switched to the next job. A time quantum is generally from 10 to 100 ms and generally depends on the OS. Here the ready queue is a circular queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

AVERAGE WAITING TIME:
Waiting time for P1 => 0+(15-5)+(24-20) => 0+10+4 = 14
Waiting time for P2 => 5+(20-10) => 5+10 = 15
Waiting time for P3 => 10+(21-15) => 10+6 = 16
Average waiting time => (14+15+16)/3 = 15 ms
AVERAGE TURNAROUND TIME:
FORMULA: Turnaround time = waiting time + burst time
Turnaround time for P1 => 14+30 = 44
Turnaround time for P2 => 15+6 = 21
Turnaround time for P3 => 16+8 = 24
Average turnaround time => (44+21+24)/3 = 29.66 ms
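As a check on the arithmetic, here is a minimal Round Robin simulation in C. The burst times 30, 6 and 8 ms for P1, P2, P3 come from the worked example; the 5 ms quantum is an assumption inferred from the calculation above.

/* Round Robin sketch: all processes arrive at time 0. */
#include <stdio.h>

#define N 3

int main(void) {
    int burst[N]     = {30, 6, 8};
    int remaining[N] = {30, 6, 8};
    int finish[N]    = {0};
    int quantum = 5, time = 0, done = 0;

    while (done < N) {                    /* cycle through the circular ready queue */
        done = 0;
        for (int i = 0; i < N; i++) {
            if (remaining[i] > 0) {
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                time += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0)
                    finish[i] = time;     /* completion time of process i */
            }
            if (remaining[i] == 0)
                done++;
        }
    }
    for (int i = 0; i < N; i++) {
        int tat  = finish[i];             /* turnaround (arrival at 0)   */
        int wait = tat - burst[i];        /* waiting = turnaround - burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
    }
    return 0;
}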

c) Explain FCFS scheduling algorithm with example.

First Come First Served scheduling (FCFS): The process that requests the CPU first holds the CPU first. If a process requests the CPU, it is loaded into the ready queue, and the CPU is connected to that process when its turn comes.
Consider a set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds. The burst time is the time required by the CPU to execute that job, in milliseconds.

d) Explain SJF scheduling algorithm with example

The process having the smallest CPU burst time is assigned the CPU first. If two processes have the same CPU burst time, FCFS is used.

P5 has the least CPU burst time (3 ms), so the CPU is assigned to P5 first. After completion of P5, the short term scheduler searches for the next shortest job (P1), and so on.
Average Waiting Time:
Formula = Starting Time - Arrival Time
Waiting time for P1 => 3-0 = 3
Waiting time for P2 => 34-0 = 34
Waiting time for P3 => 18-0 = 18
Waiting time for P4 => 8-0 = 8
Waiting time for P5 => 0
Average waiting time => (3+34+18+8+0)/5 => 63/5 = 12.6 ms
Average Turnaround Time:
Formula = waiting time + burst time
Turnaround time for P1 => 3+5 = 8
Turnaround time for P2 => 34+24 = 58
Turnaround time for P3 => 18+16 = 34
Turnaround time for P4 => 8+10 = 18
Turnaround time for P5 => 0+3 = 3
Average turnaround time => (8+58+34+18+3)/5 => 121/5 = 24.2 ms
Average Response Time:
Formula: First Response - Arrival Time
First response time for P1 => 3-0 = 3
First response time for P2 => 34-0 = 34
First response time for P3 => 18-0 = 18
First response time for P4 => 8-0 = 8
First response time for P5 => 0
Average response time => (3+34+18+8+0)/5 => 63/5 = 12.6 ms
SJF is a non-preemptive scheduling algorithm.
Advantages: least average waiting time, least average turnaround time, least average response time.
Average waiting time (FCFS) = 25 ms; average waiting time (SJF) = 12.6 ms, so about 50% of the time is saved with SJF.
Disadvantages:
• Knowing the length of the next CPU burst time is difficult.
• Aging (big jobs wait a long time for the CPU).
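A minimal SJF sketch in C, assuming the burst times of the example above (5, 24, 16, 10 and 3 ms): it sorts the processes by burst time and reproduces the 12.6 ms average waiting time.

/* Non-preemptive SJF sketch (all processes arrive at time 0). */
#include <stdio.h>
#include <stdlib.h>

struct proc { const char *name; int burst; };

/* comparison function for qsort: ascending burst time */
static int by_burst(const void *a, const void *b) {
    return ((const struct proc *)a)->burst - ((const struct proc *)b)->burst;
}

int main(void) {
    struct proc p[] = { {"P1", 5}, {"P2", 24}, {"P3", 16}, {"P4", 10}, {"P5", 3} };
    int n = 5, time = 0;
    double total_wait = 0;

    qsort(p, n, sizeof p[0], by_burst);   /* shortest job first */
    for (int i = 0; i < n; i++) {
        printf("%s: waiting=%d turnaround=%d\n", p[i].name, time, time + p[i].burst);
        total_wait += time;
        time += p[i].burst;
    }
    printf("average waiting time = %.1f ms\n", total_wait / n);  /* 12.6 ms */
    return 0;
}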

4. What is starvation? Explain the technique to overcome Starvation.

Starvation or indefinite blocking is a phenomenon associated with priority scheduling algorithms. A process that is present in the ready state and has low priority keeps waiting for CPU allocation because processes with higher priority keep arriving. Higher-priority processes can prevent a low-priority process from ever getting the CPU.

Process   Burst time   Priority
P1        10           2
P2        5            0
P3        8            1

Gantt chart (the process with the higher priority value runs first):
| P1 | P3 | P2 |
0    10   18   23

In the above example, the processes with higher priority get the CPU earlier. We can imagine a scenario in which only one process has a very low priority (for example, 127) and all other processes are given high priority. This can lead to the low-priority process waiting indefinitely for the CPU, which is Starvation.

Solutions to Handle Starvation

Some solutions that can be implemented in a system to handle Starvation are as follows (a small aging sketch follows this list):
o An independent manager can be used for the allocation of resources. This resource manager distributes resources fairly and tries to avoid Starvation.
o Random selection of processes for resource allocation or processor allocation should be avoided, as it encourages Starvation.
o The priority scheme of resource allocation should include concepts such as aging, where the priority of a process is increased the longer it waits, which avoids Starvation.
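The fragment below illustrates the aging idea in C; the structure, field names and the AGING_INTERVAL constant are assumptions, and the function would be called from a scheduler's clock-tick handler rather than run on its own.

/* Aging sketch: boost the priority of every process still waiting in the ready queue. */
struct ready_entry { int pid; int priority; int waiting_ticks; };

#define AGING_INTERVAL 10   /* assumed: boost priority after every 10 ticks of waiting */

void age_ready_queue(struct ready_entry *queue, int n) {
    for (int i = 0; i < n; i++) {
        queue[i].waiting_ticks++;
        if (queue[i].waiting_ticks % AGING_INTERVAL == 0)
            queue[i].priority++;     /* larger value = higher priority in this sketch */
    }
}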

5. a) What is meant by Context Switching?

Context switching is a technique or method used by the operating system to switch the CPU from one process to another. When a switch is performed, the system stores the status of the old running process (in the form of its registers) and assigns the CPU to a new process to execute its tasks. While the new process is running, the previous process waits in the ready queue. The execution of the old process later resumes at the point where it was stopped. Context switching defines the characteristics of a multitasking operating system, in which multiple processes share the same CPU to perform multiple tasks without the need for additional processors in the system.

Example of Context Switching

Suppose that multiple processes are stored in Process Control Blocks (PCBs). One process is in the running state, executing its task on the CPU. While it is running, another process arrives in the ready queue which has a higher priority for completing its task. Here context switching is used to switch the current process with the new process that requires the CPU to finish its task. While switching the process, the context switch saves the status of the old process in its registers. When the old process is later reloaded onto the CPU, it resumes execution from the point at which the new process stopped it. If we did not save the state of the process, we would have to start its execution again from the beginning. In this way, context switching helps the operating system to switch between processes and to store or reload a process when it needs to execute its tasks.

Context switching triggers

The three types of context switching triggers are as follows:
1. Interrupts
2. Multitasking
3. Kernel/User switch

Interrupts: When the CPU requests data to be read from a disk, or other interrupts occur, context switching automatically switches to the part of the hardware handling that needs less time to service the interrupt.

Multitasking: Context switching is the characteristic of multitasking that allows a process to be switched off the CPU so that another process can run. When switching the process, the old state is saved so that the process's execution can resume at the same point in the system.

Kernel/User Switch: This trigger occurs in operating systems when switching between user mode and kernel mode is performed.

b) What does PCB contain?

Each process is represented in the operating system by a Process Control Block (PCB). It is also called a Task Control Block. It contains many pieces of information associated with a specific process (a hypothetical C struct sketch follows this list):

1. Process State: The state may be new, ready, running, waiting, terminated, etc.
2. Program Counter: indicates the address of the next instruction to be executed.
3. CPU registers: include accumulators, stack pointers, general purpose registers, etc.
4. CPU-Scheduling Information: includes a process priority, pointers to scheduling queues, and other scheduling parameters.
5. Memory Management Information: includes page tables, segmentation tables, and the values of the base and limit registers.
6. Accounting Information: includes the amount of CPU time used, time limits, and job (or) process numbers.
7. I/O Status Information: includes the list of I/O devices allocated to the process and the list of open files.
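The sketch below collects these fields in one hypothetical C struct; the field names and types are illustrative assumptions, not the layout used by any particular kernel.

/* Illustrative PCB layout only. */
#define MAX_OPEN_FILES 16

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;                        /* process identifier            */
    enum proc_state state;                      /* 1. process state              */
    unsigned long   program_counter;            /* 2. next instruction address   */
    unsigned long   registers[16];              /* 3. saved CPU registers        */
    int             priority;                   /* 4. CPU-scheduling info        */
    struct pcb     *next_in_queue;              /*    link in a scheduling queue */
    unsigned long  *page_table;                 /* 5. memory-management info     */
    unsigned long   base, limit;                /*    base and limit registers   */
    unsigned long   cpu_time_used;              /* 6. accounting information     */
    int             open_files[MAX_OPEN_FILES]; /* 7. I/O status information     */
};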

1. What is deadlock? Explain deadlock prevention in detail.

Every process needs some resources to complete its execution. However, resources are granted in a sequential order:

1. The process requests some resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses it and releases it on completion.

A deadlock is a situation where each of a set of processes waits for a resource which is assigned to some other process. In this situation, none of the processes gets executed, since the resource each one needs is held by another process which is also waiting for some other resource to be released.

Necessary conditions for Deadlocks

1. Mutual Exclusion
A resource can only be used in a mutually exclusive manner; that is, two processes cannot use the same resource at the same time.

2. Hold and Wait
A process waits for some resources while holding other resources at the same time.

3. No preemption
A resource, once allocated to a process, cannot be taken away from it; it is held until the process releases it on completion.

4. Circular Wait
All the processes must be waiting for resources in a cyclic manner, so that the last process is waiting for a resource which is held by the first process.

DEADLOCK PREVENTION
For a deadlock to occur, each of the 4 necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources.
Hold and Wait – we must guarantee that whenever a process requests a resource, it does not hold any other resources:
o Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when the process has none.
o Low resource utilization; starvation is possible.
No Preemption –
o If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.
o Preempted resources are added to the list of resources for which the process is waiting.
o The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
Deadlock Avoidance
Requires that the system has some additional a priori information available:
• The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
• The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
• The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.

2. Explain deadlock Avoidance in detail with Banker's Algorithm

A deadlock is a situation where each of a set of processes waits for a resource which is assigned to some other process. In this situation, none of the processes gets executed, since the resource each one needs is held by another process which is also waiting for some other resource to be released.

Deadlock avoidance using the Banker's algorithm:

Banker's Algorithm
• Handles multiple instances of each resource type.
• Each process must claim its maximum use a priori.
• When a process requests a resource, it may have to wait.
• When a process gets all its resources, it must return them in a finite amount of time.
Let n = number of processes, and m = number of resource types.
Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] – Allocation[i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
2. Initialize: Work = Available; Finish[i] = false for i = 0, 1, ..., n-1.
3. Find an i such that both:
(a) Finish[i] = false
(b) Need_i <= Work
If no such i exists, go to step 5.
4. Work = Work + Allocation_i; Finish[i] = true; go to step 3.
5. If Finish[i] == true for all i, then the system is in a safe state.
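A compact C version of the safety algorithm is sketched below. The matrices in main() are sample data resembling the classic textbook exercise and are included only so the sketch is runnable; they are not part of the original notes.

/* Banker's safety algorithm sketch. */
#include <stdio.h>
#include <stdbool.h>

#define N 5   /* number of processes (assumed size)      */
#define M 3   /* number of resource types (assumed size) */

bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)
        work[j] = available[j];               /* Work = Available */

    for (int count = 0; count < N; ) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)       /* check Need_i <= Work */
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)   /* Work = Work + Allocation_i */
                    work[j] += allocation[i][j];
                finish[i] = true;
                found = true;
                count++;
            }
        }
        if (!found) return false;             /* no runnable process: unsafe state */
    }
    return true;                              /* all processes can finish: safe    */
}

int main(void) {
    int available[M] = {3, 3, 2};
    int allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M]       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};

    printf("system is %s\n", is_safe(available, allocation, need) ? "safe" : "unsafe");
    return 0;
}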

3. A) Explain about resource allocation graph (RAG)?

RESOURCE ALLOCATION GRAPH

Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into 2 different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi -> Rj. It signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj -> Pi. It signifies that an instance of resource type Rj has been allocated to process Pi.
A directed edge Pi -> Rj is called a request edge. A directed edge Rj -> Pi is called an assignment edge.
We represent each process Pi as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle. A request edge points only to the rectangle Rj; an assignment edge must also designate one of the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed into an assignment edge. When the process no longer needs access to the resource, it releases the resource, and as a result the assignment edge is deleted.
Example sets P, R, E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}
One instance of resource type R1, two instances of resource type R2, one instance of resource type R3, three instances of resource type R4.
PROCESS STATES:
Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
Process P3 is holding an instance of R3.
If the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.
Suppose that process P3 requests an instance of resource type R2. Since no resource instance is currently available, a request edge P3 -> R2 is added to the graph. Two cycles now exist:
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P2

Processes P1, P2, P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3 is waiting for either process P1 (or) P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
In a different graph we may also have a cycle, for example P1 -> R1 -> P3 -> R2 -> P1, and yet no deadlock: process P4 may release its instance of resource type R2, and that resource can then be allocated to P3, breaking the cycle.

B) Mention the Safe State and Unsafe State in Deadlock Avoidance

Deadlock Avoidance:

In deadlock avoidance, a request for any resource will be granted only if the resulting state of the system doesn't cause a deadlock. The state of the system is continuously checked for safe and unsafe states.

In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to complete its execution.

The simplest and most useful approach states that each process should declare the maximum number of resources of each type it may ever need. The deadlock-avoidance algorithm examines the resource allocations so that there can never be a circular-wait condition.

Safe and Unsafe States

The resource-allocation state of a system is defined by the instances of available and allocated resources, and the maximum instances of the resources demanded by the processes.

A system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
That is:
o If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
o When Pj has finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate.
o When Pi terminates, Pi+1 can obtain its needed resources, and so on.
If a system is in a safe state, there are no deadlocks. If a system is in an unsafe state, there is a possibility of deadlock. Avoidance ensures that the system will never enter an unsafe state; this is what the avoidance algorithms do.
4. What is the important feature of critical section? State the Readers Writers problem and give solution using semaphore.

Critical Section Problem

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code called its critical section, in which the process may be changing common variables, updating a table, writing a file, etc.

The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time.

Readers-Writers Problem
A data object (i.e., a file or record) is to be shared among several concurrent processes. Some of these processes may want only to read the content of the shared object, whereas others may want to update (i.e., read and write) the shared object.
In this context, if two readers access the shared data object simultaneously, there is no problem at all. If a writer and some other process (either reader or writer) access the shared object simultaneously, a conflict arises.
Various solutions exist for this problem:
1. Assign higher priorities to the reader processes, as compared to the writer processes.
2. The writers have exclusive access to the shared object.
3. No reader should wait for other readers to finish. Here the problem is that writers may starve.
4. If a writer is waiting to access the object, no new readers may start reading. Here readers may starve.
The general structure of the writer and reader processes is sketched below.
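The code figures of the original are not reproduced in this copy; the following is a minimal sketch of the usual first readers-writers solution (readers have priority), written in C-style pseudocode where wait() and signal() stand for semaphore operations.

Code:
typedef int semaphore;

semaphore mutex = 1;      /* protects read_count                 */
semaphore wrt   = 1;      /* exclusive access for writers        */
int read_count  = 0;      /* number of readers currently reading */

void writer(void)
{
    while (1) {
        wait(wrt);            /* request exclusive access       */
        /* ... writing is performed ... */
        signal(wrt);          /* release exclusive access       */
    }
}

void reader(void)
{
    while (1) {
        wait(mutex);
        read_count++;
        if (read_count == 1)
            wait(wrt);        /* first reader locks out writers */
        signal(mutex);
        /* ... reading is performed ... */
        wait(mutex);
        read_count--;
        if (read_count == 0)
            signal(wrt);      /* last reader lets writers in    */
        signal(mutex);
    }
}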

5. Mention the Classical Problems of Synchronization.

1) Producer-Consumer Problem
The producer-consumer problem is also called the bounded-buffer problem. The pool consists of n buffers, each capable of holding one item. The mutex semaphore is initialized to 1. The empty and full semaphores count the number of empty and full buffers, respectively; empty is initialized to n and full is initialized to 0 (zero). Here one or more producers generate some type of data and place it in the buffer, and a single consumer takes items out of the buffer one at a time. The semaphores prevent the overlap of buffer operations.
With the code below, the producer produces full buffers for the consumer, and the consumer produces empty buffers for the producer.
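The producer and consumer code referred to above is missing from this copy; the following is a minimal sketch in C-style pseudocode, where wait() and signal() are semaphore operations and produce_item(), insert_item(), remove_item() and consume_item() are placeholders.

Code:
typedef int semaphore;
#define N 10                  /* number of buffer slots (assumed) */

semaphore mutex = 1;          /* mutual exclusion on the buffer   */
semaphore empty = N;          /* counts empty slots               */
semaphore full  = 0;          /* counts full slots                */

void producer(void)
{
    while (1) {
        int item = produce_item();
        wait(empty);          /* wait for an empty slot   */
        wait(mutex);
        insert_item(item);    /* put item into the buffer */
        signal(mutex);
        signal(full);         /* one more full slot       */
    }
}

void consumer(void)
{
    while (1) {
        wait(full);           /* wait for a full slot     */
        wait(mutex);
        int item = remove_item();
        signal(mutex);
        signal(empty);        /* one more empty slot      */
        consume_item(item);
    }
}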
2) Dining-Philosophers Problem:
The dining-philosophers problem was posed by Dijkstra in 1965. The problem is very simple. Five philosophers are seated around a circular table. The life of each philosopher consists of thinking and eating. For eating, five plates are there, one for each philosopher. There is a big serving bowl in the middle of the table with enough food in it. Only five forks are available in total. Each philosopher needs to use the two forks on either side of his plate to eat food.
Now the problem is that the algorithm must satisfy mutual exclusion (i.e., no two philosophers can use the same fork at the same time) while avoiding deadlock and starvation.
Various solutions are available for this problem:
1. Each philosopher picks up first the fork on the left and then the fork on the right. After eating, the two forks are replaced on the table. In this case, if all the philosophers are hungry, all will sit down, pick up the fork on their left and then all reach out for the other fork, which is not there. In this undignified position, all philosophers will starve. This is the deadlock situation.
2. Buy five additional forks.
3. Eat with only one fork.
4. Allow only four philosophers at a time; due to this restriction, at least one philosopher will have two forks. He will eat and then replace the forks, and the replaced forks are then used by the other philosophers (washing of the forks is implicit).
5. Allow a philosopher to pick up the two forks only if both are available.
6. An asymmetric solution: an odd philosopher picks up first their left fork and then their right fork, whereas an even philosopher picks up their right fork first and then their left fork.
Code for the fourth form of the solution is sketched below.
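The code for the fourth solution is missing from this copy; the following is a minimal sketch in C-style pseudocode, where wait() and signal() are semaphore operations and think() and eat() are placeholders. A counting semaphore room, initialized to 4, admits at most four philosophers to the table at once.

Code:
typedef int semaphore;
#define N 5

semaphore fork_sem[N] = {1, 1, 1, 1, 1};  /* one binary semaphore per fork          */
semaphore room = 4;                       /* at most 4 philosophers sit at once     */

void philosopher(int i)
{
    while (1) {
        think();
        wait(room);                    /* enter the room (at most 4 inside) */
        wait(fork_sem[i]);             /* pick up left fork                 */
        wait(fork_sem[(i + 1) % N]);   /* pick up right fork                */
        eat();
        signal(fork_sem[(i + 1) % N]); /* put down right fork               */
        signal(fork_sem[i]);           /* put down left fork                */
        signal(room);                  /* leave the room                    */
    }
}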

3)
Readers and Writers Problem:-
A data object (i.e., file or record) is to be shared among several concurrent processes. Some
of these
processes may want only to read the content of the shared object, whereas others may want to
update (i.e.,
read and write) the sharedobject.
In this context if two readers access the shared data object simultaneously then no problem at
all. If a writer
and some other process (either reader or writer) access theshared object simultaneously then
conflict will
raise.
For solving these problems various solutions are there:-
1.
Assign higher priorities to the reader processes, as compared to the writerprocesses.
2.
The writers have exclusive access to the shared object.
3.
No reader should wait for other readers to finish. Here the problem is writers maystarve.
4.
If a writer is waiting to access the object, no new readers may start reading. Herereaders may
starve.
General structure of a writer process and a reader process: (a representative sketch of both is given below)
Code:
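A representative sketch giving readers priority, using a readcount counter protected by mutex and a wrt semaphore held by writers or by the first reader (wait() and signal() as before):

typedef int semaphore;
semaphore mutex = 1;       /* protects readcount                      */
semaphore wrt   = 1;       /* held by a writer or by the first reader */
int readcount   = 0;

void writer(void)
{
    wait(wrt);
    /* ... writing is performed ... */
    signal(wrt);
}

void reader(void)
{
    wait(mutex);
    readcount = readcount + 1;
    if (readcount == 1)
        wait(wrt);         /* first reader locks out the writers      */
    signal(mutex);
    /* ... reading is performed ... */
    wait(mutex);
    readcount = readcount - 1;
    if (readcount == 0)
        signal(wrt);       /* last reader lets the writers in again   */
    signal(mutex);
}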

4) Sleeping Barber Problem:-


A barber shop has one barber chair for the customer being served currently and few chairs for
the
waiting customers (if any). The barber manages his time efficiently.
1.
When there are no customers, the barber goes to sleep on the barber chair.
2.
As soon as a customer arrives, he has to wake up the sleeping barber and ask for a hair cut.
3.
If more customers arrive whilst the barber is serving a customer, they either sit down in the waiting chairs or simply leave the shop.

For solving this problem define three semaphores


1.
Customers — specifies the number of waiting customers only.
2.
Barber — counts the barbers (0 or 1) ready to cut hair: 0 means the barber is busy or asleep, 1 means he is ready for a customer.
3.
Mutex — mutual exclusion variable.
Code:
#define CHAIRS 4
typedef int semaphore;
semaphore customers = 0;    /* number of customers waiting for service      */
semaphore barber = 0;       /* number of barbers (0 or 1) ready to cut hair */
semaphore mutex = 1;        /* protects the 'waiting' counter               */
int waiting = 0;            /* customers sitting in the waiting chairs      */

void barber(void)
{
    while (TRUE)
    {
        wait(customers);    /* go to sleep if no customer is waiting */
        wait(mutex);
        waiting = waiting - 1;
        signal(barber);     /* the barber is now ready to cut hair   */
        signal(mutex);
        cut_hair();
    }
}

void customer(void)
{
    wait(mutex);
    if (waiting < CHAIRS)
    {
        waiting = waiting + 1;
        signal(customers);  /* wake the barber if he is asleep       */
        signal(mutex);
        wait(barber);       /* wait until the barber is free         */
        get_haircut();
    }
    else
    {
        signal(mutex);      /* shop is full: leave without a haircut */
    }
}

1. Explain in detail Inter Process Communication.

Inter Process Communication


• Processes share memory
o data in shared memory
• Processes exchange messages
o message passing via sockets
• Requires synchronization
o mutex, waiting
Inter Process Communication(IPC) is an OS supported mechanism for interaction among
processes
(coordination and communication)

Message Passing
o
e.g. sockets, pipes, messages, queues
Memory based IPC
o
shared memory, memory mapped files
Higher level semantics
o
files, RPC
Synchronization primitives
Message Passing

Send/Receive messages

OS creates and maintains a channel
o buffer, FIFO queue

OS provides interfaces to processes
o
a port
o
processes send/write messages to this port
o
processes receive/read messages from this port

• Kernel required to
o establish communication
o perform each IPC operation
o send: system call + data copy
o receive: system call + data copy
Request-response: 4x user/kernel crossings + 4x data copies
Advantages
• simplicity : kernel does channel management and synchronization
Disadvantages
• Overheads
Forms of Message Passing IPC
1. Pipes
• Carry a byte stream between 2 processes
• e.g. connect the output of one process to the input of another (a minimal sketch is given after the sockets form below)

2. Message queues
• Carry "messages" among processes
• OS management includes priorities, scheduling of message delivery

• APIs : Sys-V and POSIX

3. Sockets
• send() and recv() : pass message buffers
• socket() : create kernel level socket buffer
• associated necessary kernel processing (TCP-IP,..)
• If different machines, channel between processes and network devices
• If same machine, bypass full protocol stack
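A minimal sketch of the first form, a pipe carrying bytes from a child process to its parent, using the POSIX pipe(), fork(), read(), and write() calls (illustrative only, not part of the original notes):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[32];

    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0)
    {                                /* child: writes into the pipe  */
        close(fd[0]);
        write(fd[1], "hello", 6);    /* 6 bytes, including the '\0'  */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                    /* parent: reads from the pipe  */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}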

Shared Memory IPC


• read and write to shared memory region
• OS establishes shared channel between the processes
1. physical pages mapped into virtual address space
2. VA(P1) and VA(P2) map to same physical address
3. VA(P1) != VA(P2)
4. physical memory doesn't need to be contiguous
• APIs : SysV, POSIX, memory mapped files, Android ashmem

Advantages

System calls are needed only for setup; data copies are potentially reduced (but not eliminated)
Disadvantages

explicit synchronization

communication protocol and shared buffer management are the programmer's responsibility
Overheads:
1. Message Passing: must perform multiple copies
2. Shared Memory: must establish all mappings among processes' address spaces and shared memory pages
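A minimal sketch of memory-based IPC using the POSIX shared memory API (shm_open/mmap); the object name below is hypothetical, and a real program would add the explicit synchronization discussed above:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";                  /* hypothetical object name       */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666); /* create/open the shared object  */
    ftruncate(fd, 4096);                             /* size the shared region         */
    char *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(addr, "written via shared memory");       /* visible to any process mapping the same object */
    munmap(addr, 4096);
    close(fd);
    /* shm_unlink(name) would remove the object when it is no longer needed */
    return 0;
}

A second process would call shm_open() with the same name and mmap() the object to read the data.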

2.
A) What is swapping and what is its purpose?

• A process must be loaded into memory in order to execute.


• If there is not enough memory available to keep all running processes in memory at the
same time, then
some processes who are not currently using the CPU may have their memory swapped out to
a fast local disk
called the backing store.
• If compile-time or load-time address binding is used, then processes must be swapped back
into the same
memory location from which they were swapped out. If execution time binding is used, then
the processes can
be swapped back into any available location.
• Swapping is a very slow process compared to other operations. For example, if a user
process occupied 10
MB and the transfer rate for the backing store were 40 MB per second, then it would take 1/4
second ( 250
milliseconds ) just to do the data transfer. Adding in a latency lag of 8 milliseconds and
ignoring head seek
time for the moment, and further recognizing that swapping involves moving old data out as
well as new data
in, the overall transfer time required for this swap is 516 milliseconds, or over half a second. For efficient
For efficient
processor scheduling the CPU time slice should be significantly longer than this lost transfer
time.
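Using the figures above, the swap time works out as follows:

transfer time = 10 MB / 40 MB per second = 250 milliseconds
swap-out (or swap-in) time = 250 ms + 8 ms latency = 258 milliseconds
total swap time (out plus in) = 2 x 258 ms = 516 milliseconds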
• To reduce swapping transfer overhead, it is desired to transfer as little information as
possible.

It is important to swap processes out of memory only when they are idle, or more to the point,
only when there
are no pending I/O operations.

Most modern Operating Systems no longer use swapping, because it is too slow and there are
faster
alternatives available. ( e.g. Paging. ) However some UNIX systems will still invoke
swapping if the system
gets extremely full, and then discontinue swapping when the load reduces again.

B) Distinguish between Internal and External Fragmentation

1. Internal fragmentation: fixed-sized memory blocks are assigned to the process.
   External fragmentation: variable-sized memory blocks are assigned to the process.

2. Internal fragmentation: happens when the memory block assigned to a process is larger than the process requires.
   External fragmentation: happens when processes are loaded into and removed from memory, leaving holes between allocations.

3. Internal fragmentation: the solution is the best-fit block.
   External fragmentation: the solution is compaction and paging.

4. Internal fragmentation: occurs when memory is divided into fixed-sized partitions.
   External fragmentation: occurs when memory is divided into variable-sized partitions based on the sizes of the processes.

5. Internal fragmentation: the difference between the memory allocated and the space actually required is called internal fragmentation.
   External fragmentation: the unused spaces formed between non-contiguous memory fragments that are too small to serve a new process are called external fragmentation.

6. Internal fragmentation: occurs with paging and fixed partitioning.
   External fragmentation: occurs with segmentation and dynamic partitioning.

7. Internal fragmentation: occurs on allocation of a process to a partition larger than the process's requirement; the leftover space causes degradation of system performance.
   External fragmentation: occurs even though each process is allocated exactly the memory space it requires, because the holes left between allocations are unusable.

3.Explain the basic concepts of segmentation in detail.

Segmentation
Basic Method
• Most users ( programmers ) do not think of their programs as existing in one continuous
linear address
space.
• Rather they tend to think of their memory in multiple segments, each dedicated to a
particular use, such as
code, data, the stack, the heap, etc.
• Memory segmentation supports this view by providing addresses with a segment number (
mapped to a
segment base address ) and an offset from the beginning of that segment.
For example, a C compiler might generate 5 segments for the user code, library code, global (
static )
variables, the stack, and the heap, as shown in Figure
Programmer's view of a program

Segmentation Hardware
• A segment table maps segment-offset addresses to physical addresses, and simultaneously
checks for invalid
addresses, using a system similar to the page tables and relocation base registers discussed
previously.

Segmentation hardware {s(segment number) d(offset)}


Example of segmentation
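As an illustration (a minimal sketch, not taken from the original figure; the struct and function names are hypothetical), the translation performed by the segmentation hardware amounts to a table lookup with a limit check:

struct segment { int base; int limit; };

/* Translate a logical address (s, d) into a physical address.
   Returns -1 to indicate an addressing-error trap.            */
int translate(struct segment table[], int s, int d)
{
    if (d >= table[s].limit)
        return -1;                /* offset beyond the segment: trap to the OS */
    return table[s].base + d;     /* physical address = base + offset          */
}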

4. Why is paging necessary in OS? Explain in detail with a neat diagram.



Paging is a memory management scheme that allows a process's physical memory to be noncontiguous, and which eliminates problems with fragmentation by allocating memory in equal-sized blocks known as pages.

Paging eliminates most of the problems of the other methods discussed previously, and is the
predominant memory management technique used today.

Paging is used for faster access to data. When a program needs a page, it is
available in the main memory as the OS copies a certain number of pages from your
storage device to main memory. Paging allows the physical address space of a
process to be noncontiguous.

Paging is a storage mechanism used to retrieve processes from the secondary storage into
the main memory

One page of the process is to be stored in one of the frames of the memory. The pages
can be stored at the different locations of the memory but the priority is always to find
the contiguous frames or holes.

Pages of the process are brought into the main memory only when they are required
otherwise they reside in the secondary storage.

Different operating systems define different frame sizes. All frames must be of equal size, and since pages are mapped to frames in paging, the page size must be the same as the frame size.
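A minimal sketch (assuming a single-level page table and a 4 KB page size; the names are hypothetical) of how a logical address is split into a page number and an offset and translated to a physical address:

#define PAGE_SIZE 4096            /* assumed page (= frame) size */

/* Translate a logical address with a single-level page table. */
unsigned long translate(unsigned long page_table[], unsigned long logical)
{
    unsigned long page   = logical / PAGE_SIZE;   /* page number            */
    unsigned long offset = logical % PAGE_SIZE;   /* offset within the page */
    unsigned long frame  = page_table[page];      /* frame holding the page */
    return frame * PAGE_SIZE + offset;            /* physical address       */
}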

5. Explain about contiguous memory allocation with Advantages and Disadvantages

Contiguous Memory Allocation


• One approach to memory management is to load each process into a contiguous space. The
operating system
is allocated space first, usually at either low or high memory locations, and then the
remaining available
memory is allocated to processes as needed
Memory Protection
• The system shown in Figure below allows protection against user programs accessing areas
that they should
not, allows programs to be relocated to different memory starting addresses as needed, and
allows the memory
space devoted to the OS to grow or shrink dynamically as needs change.

Advantages:

Both the Sequential and Direct Accesses are supported by this. For direct access, the address
of the kth block
of the file which starts at block b can easily be obtained as (b+k).
This is extremely fast since the number of seeks are minimal because of contiguous allocation
of file blocks.

Disadvantages:

This method suffers from both internal and external fragmentation. This makes it inefficient
in terms of
memory utilization.
Increasing file size is difficult because it depends on the availability of contiguous memory at
a particular instance.

1. Explain different Disk scheduling algorithms SCAN,CSCAN,CLOOK

Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.

SCAN:
In SCAN algorithm the disk arm moves into a particular direction and services the requests
coming in
its path and after reaching the end of disk, it reverses its direction and again services the
request arriving in its
path. So, this algorithm works as an elevator and hence also known as elevator algorithm.
As a result, the
requests at the midrange are serviced more and those arriving behind the disk arm will have
to wait.
Example:
• Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm
is at 50, and it is
also given that the disk arm should move “towards the larger value”.

Therefore, the seek time is calculated as:


=(199-50)+(199-16)
=332

CSCAN:
In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing
its
direction. So, it may be possible that too many requests are waiting at the other end or there
may be zero or
few requests pending at the scanned area.

These situations are avoided in CSCAN algorithm in which the disk arm instead of reversing
its direction goes
to the other end of the disk and starts servicing the requests from there. So, the disk arm
moves in a circular
fashion and this algorithm is also similar to SCAN algorithm and hence it is known as C-
SCAN (Circular
SCAN).

Example:

Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm
is at 50, and it is
also given that the disk arm should move “towards the larger value”.

Seek time is calculated as:


=(199-50)+(199-0)+(43-0)
=391

CLOOK:
As LOOK is similar to SCAN algorithm, in similar way, CLOOK is similar to CSCAN disk
scheduling algorithm. In CLOOK, the disk arm in spite of going to the end goes only to the
last request to be
serviced in front of the head and then from there goes to the other end’s last request. Thus, it
also prevents the extra delay which occurred due to unnecessary traversal to the end of the
disk.
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm
is at 50, and it is
also given that the disk arm should move “towards the larger value”

So, the seek time is calculated as:


=(190-50)+(190-16)+(43-16)
=341
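As a cross-check of the SCAN example above, a small C sketch (the variable names are illustrative) that computes the same total head movement for an arm moving towards larger cylinder numbers on a disk numbered 0 to 199:

#include <stdio.h>

int main(void)
{
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = 7, head = 50, last_cylinder = 199;
    int i, smallest = head;

    for (i = 0; i < n; i++)                 /* smallest request below the head */
        if (requests[i] < smallest)
            smallest = requests[i];

    /* sweep up to the last cylinder, then back down to the smallest request */
    int movement = (last_cylinder - head) + (last_cylinder - smallest);
    printf("SCAN total head movement = %d\n", movement);   /* prints 332 */
    return 0;
}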

2.Write a detail note on the file system Architecture

FILE-SYSTEM STRUCTURE
• Hard disks have two important properties that make them suitable for secondary storage of
files in
file systems: (1) Blocks of data can be rewritten in place, and (2) they are direct access,
allowing any
block of data to be accessed with only ( relatively ) minor movements of the disk heads and
rotational
latency.
• Disks are usually accessed in physical blocks, rather than a byte at a time. Block sizes may
range
from 512 bytes to 4K or larger.

File systems organize storage on disk drives, and can be viewed as a layered design:
o
At the lowest layer are the physical devices, consisting of the magnetic media, motors &
controls, and
the electronics connected to them and controlling them. Modern disks put more and more of the electronic controls directly on the disk drive itself, leaving relatively little work for the disk controller
card to
perform.
o I/O Control consists of device drivers, special software programs ( often written in
assembly ) which
communicate with the devices by reading and writing special codes directly to and from
memory
addresses corresponding to the controller card's registers. Each controller card ( device ) on a
system has
a different set of addresses ( registers, a.k.a. ports ) that it listens to, and a unique set of
command codes
and results codes that it understands.
o The basic file system level works directly with the device drivers in terms of retrieving and
storing
raw blocks of data, without any consideration for what is in each block. Depending on the
system, blocks
may be referred to with a single block number, ( e.g. block # 234234 ), or with head-sector-
cylinder
combinations.
o The file organization module knows about files and their logical blocks, and how they
map to
physical blocks on the disk. In addition to translating from logical to physical blocks, the file
organization module also maintains the list of free blocks, and allocates free blocks to files as
needed.
o The logical file system deals with all of the meta data associated with a file ( UID, GID,
mode, dates,
etc ), i.e. everything about the file except the data itself. This level manages the directory
structure and
the mapping of file names to file control blocks, FCBs, which contain all of the meta data as
well as
block number information for finding the data on the disk.
• The layered approach to file systems means that much of the code can be used uniformly for
a wide
variety of different file systems, and only certain layers need to be filesystem specific.
Common file
systems in use include the UNIX file system, UFS, the Berkeley Fast File System, FFS,
Windows
systems FAT, FAT32, NTFS, CD-ROM systems ISO 9660, and for Linux the extended file
systems ext2
and ext3 ( among 40 others supported.)

3.Explain about single-level, two-level directory structure?

A Directory is the collection of the correlated files on the disk. In simple words, a directory is
like a container
which contains file and folder. In a directory, we can store the complete file attributes or
some attributes of the
file. A directory can be comprised of various files. With the help of the directory, we can
maintain the
information related to the files.
Single-Level Directory: – Single-Level Directory is the easiest directory structure. There is
only one
directory in a single-level directory, and that directory is called a root directory. In a single-
level directory, all
the files are present in one directory that makes it easy to understand. In this, under the root
directory, the user
cannot create the subdirectories.
Simple to implement, but each file must have a unique name.

Advantages of Single-Level Directory



The advantages of the single-level directory are:

The implementation of a single-level directory is so easy.

In a single-level directory, if all the files have a small size, then searching for the files will be easy.

In a single-Level directory, the operations such as searching, creation, deletion, and updating
can be
performed.
Disadvantages of Single-Level Directory

The disadvantages of Single-Level Directory are:

If the size of the directory is large in Single-Level Directory, then the searching will be tough.

In a single-level directory, we cannot group similar types of files.
Another disadvantage of a single-level directory is that there is a possibility of collision
because the two files
cannot have the same name.

The task of choosing the unique file name is a little bit complex.

Two-Level Directory
• Each user gets their own directory space.
• File names only need to be unique within a given user's directory.
• A master file directory is used to keep track of each users directory, and must be maintained
when users are
added to or removed from the system.
• A separate directory is generally needed for system ( executable ) files.
• Systems may or may not allow users to access other directories besides their own o If access
to other
directories is allowed, then provision must be made to specify the directory being accessed.
o If access is denied, then special consideration must be made for users to run programs
located in system
directories. A search path is the list of directories in which to search for executable
programs, and can be set
uniquely for each user.

Characteristics of Two-Level Directory


The characteristics of the two-level directory are:
In a two-level directory, different users may have files with the same name.
There is a pathname for each file, such as /User-name/directory-name/
In a two-level directory, we cannot group the files which are having the same name into a
single directory for
a specific user.
In a two-level directory, searching is more effective because there is only one user’s list,
which is required to
be traversed.
Advantages of Two-Level Directory
The advantages of the two-level directory are:
In the two-level directory, various users have the same file name and also directory name.
Because of using the user-grouping and pathname, searching of files are quite easy.
Disadvantages of Two-Level Directory
The disadvantages of the two-level directory are:
In a two-level directory, one user cannot share the file with another user.
Another disadvantage with the two-level directory is it is not scalable.

4.Discuss in detail various Disk Space Allocation Methods.

ALLOCATION METHODS
• There are three major methods of storing files on disks: contiguous, linked, and indexed.
Contiguous Allocation


Contiguous Allocation requires that all blocks of a file be kept together contiguously.

Performance is very fast, because reading successive blocks of the same file generally
requires no
movement of the disk heads, or at most one small step to the next adjacent cylinder.
• Storage allocation involves the same issues discussed earlier for the allocation of contiguous
blocks
of memory ( first fit, best fit, fragmentation problems, etc. ) The distinction is that the high
time penalty
required for moving the disk heads from spot to spot may now justify the benefits of keeping
files
contiguously when possible.
• ( Even file systems that do not by default store files contiguously can benefit from certain
utilities that
compact the disk and make all files contiguous in the process. )

Problems can arise when files grow, or if the exact size of a file is unknown at creation time:
o
Over-estimation of the file's final size increases external fragmentation and wastes disk space.
o
Under-estimation may require that a file be moved or a process aborted if the file grows
beyond its
originally allocated space.
o If a file grows slowly over a long time period and the total final space must be allocated
initially, then
a lot of space becomes unusable before the file fills the space.
• A variation is to allocate file space in large contiguous chunks, called extents. When a file
outgrows
its original extent, then an additional one is allocated. ( For example an extent may be the size
of a
complete track or even cylinder, aligned on an appropriate track or cylinder boundary. ) The
high
performance files system Veritas uses extents to optimize performance.

Contiguous allocation of disk space.



Linked Allocation
• Disk files can be stored as linked lists, with the expense of the storage space consumed by
each link. (
E.g. a block may be 508 bytes instead of 512. )
• Linked allocation involves no external fragmentation, does not require pre-known file sizes,
and
allows files to grow dynamically at any time.
• Unfortunately linked allocation is only efficient for sequential access files, as random access
requires
starting at the beginning of the list for each new location access.
• Allocating clusters of blocks reduces the space wasted by pointers, at the cost of internal
fragmentation.
• Another big problem with linked allocation is reliability if a pointer is lost or damaged.
Doubly linked
lists provide some protection, at the cost of additional overhead and wasted space.

Linked allocation of disk space.

The File Allocation Table, FAT, used by DOS is a variation of linked allocation, where all
the links are
stored in a separate table at the beginning of the disk. The benefit of this approach is that the
FAT table
can be cached in memory, greatly improving random access speeds.

File-allocation table.
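A minimal sketch (not the on-disk FAT-12/16/32 layout; names are illustrative) of how the k-th logical block of a file is found by following the chain of links stored in the table:

/* Follow a FAT chain to find the disk block that holds the k-th
   logical block of a file whose first block is 'start'.          */
int fat_block(int fat[], int start, int k)
{
    int block = start;
    while (k > 0)
    {
        block = fat[block];      /* the next link is stored in the table entry */
        k = k - 1;
    }
    return block;
}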

Indexed Allocation
• Indexed Allocation combines all of the indexes for accessing each file into a common block
( for that
file ), as opposed to spreading them all over the disk or storing them in a FAT table.

Indexed allocation of disk space.


• Some disk space is wasted ( relative to linked lists or FAT tables ) because an entire index
block must
be allocated for each file, regardless of how many data blocks the file contains. This leads to
questions of
how big the index block should be, and how it should be implemented. There are several
approaches:
o Linked Scheme - An index block is one disk block, which can be read and written in a
single disk
operation. The first index block contains some header information, the first N block
addresses, and if
necessary a pointer to additional linked index blocks.
o Multi-Level Index - The first index block contains a set of pointers to secondary index
blocks, which
in turn contain pointers to the actual data blocks.
o Combined Scheme - This is the scheme used in UNIX inodes, in which the first 12 or so
data block
pointers are stored directly in the inode, and then singly, doubly, and triply indirect pointers
provide
access to more data blocks as needed. ( See below. ) The advantage of this scheme is that for
small files (
which many are ), the data blocks are readily accessible ( up to 48K with 4K block sizes );
files up to
about 4144K ( using 4K blocks ) are accessible with only a single indirect block ( which can
be cached ),
and huge files are still accessible using a relatively small number of disk accesses ( larger in
theory than
can be addressed by a 32-bit address, which is why some systems have moved to 64-bit file
pointers. )

The UNIX inode.
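A minimal sketch (assuming 12 direct pointers and 4 KB blocks with 4-byte pointers, i.e. 1024 pointers per index block; the names are illustrative) of deciding which level of the combined scheme serves a given logical block number:

#define NDIRECT        12        /* direct pointers held in the inode      */
#define PTRS_PER_BLOCK 1024      /* 4 KB block / 4-byte pointers (assumed) */

/* Which level of the combined scheme serves logical block number b? */
const char *inode_level(long b)
{
    if (b < NDIRECT)
        return "direct pointer in the inode";
    b = b - NDIRECT;
    if (b < PTRS_PER_BLOCK)
        return "single indirect block";
    b = b - PTRS_PER_BLOCK;
    if (b < (long)PTRS_PER_BLOCK * PTRS_PER_BLOCK)
        return "double indirect block";
    return "triple indirect block";
}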

5.
A) Write a short note on file system mounting.

File-System Mounting

• The basic idea behind mounting file systems is to combine multiple file systems into one
large tree structure.
• The mount command is given a filesystem to mount and a mount point ( directory ) on
which to attach it. • Once a file system is mounted onto a mount point, any further references
to that directory actually refer to
the root of the mounted file system.
• Any files ( or sub-directories ) that had been stored in the mount point directory prior to
mounting the new
filesystem are now hidden by the mounted filesystem, and are no longer available. For this
reason some
systems only allow mounting onto empty directories.
• Filesystems can only be mounted by root, unless root has previously configured certain
filesystems to be
mountable onto certain pre-determined mount points.
( E.g. root may allow users to mount floppy filesystems to /mnt or something like it. )
Anyone can run the
mount command to see what filesystems are currently mounted.
• Filesystems may be mounted read-only, or have other restrictions imposed.
• The traditional Windows OS runs an extended two-tier directory structure, where the first
tier of the
structure separates volumes by drive letters, and a tree structure is implemented below that
level.
• Macintosh runs a similar system, where each new volume that is found is automatically
mounted and added
to the desktop when it is found.
• More recent Windows systems allow filesystems to be mounted to any directory in the
filesystem, much like
UNIX.


To illustrate file mounting, consider the file system depicted in above Figure File system,
where the triangles
represent subtrees of directories that are of interest.


Figure File system (a) shows an existing file system, while Figure File system (b) shows an unmounted
unmounted
volume residing on /device/dsk. At this point, only the files on the existing file system can be
accessed.

Figure Mount point shows the effects of mounting the volume residing on /device/dsk over /users.

If the volume is unmounted, the file system is restored to the situation depicted in Figure File
system
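As an illustration only (the device name and mount point below are hypothetical, and the call must be made by root), a mount can also be requested programmatically on Linux through the mount(2) system call:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Device and mount point are hypothetical; this must be run as root. */
    if (mount("/dev/sdb1", "/mnt/data", "ext4", MS_RDONLY, NULL) != 0)
        perror("mount");
    /* umount("/mnt/data") would detach the file system again. */
    return 0;
}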

B) Explain system protection with its goals.

Protection
• Files must be kept safe for reliability ( against accidental damage ), and protection ( against
deliberate
malicious access. ) The former is usually managed with backup copies. This section discusses
the latter.
• One simple protection scheme is to remove all access to a file. However this makes the file
unusable, so
some sort of controlled access must be arranged.

Types of Access
• The following low-level operations are often controlled:
o Read - View the contents of the file
o Write - Change the contents of the file.
o Execute - Load the file onto the CPU and follow the instructions contained therein.
o
Append - Add to the end of an existing file.
o
Delete - Remove a file from the system.
o
List -View the name and other attributes of files on the system.
• Higher-level operations, such as copy, can generally be performed through combinations of
the
above.
Access Control


One approach is to have complicated Access Control Lists, ACL, which specify exactly what
access
is allowed or denied for specific users or groups.
o
The AFS uses this system for distributed access.
o
Control is very finely adjustable, but may be complicated, particularly when the specific
users
involved are unknown. ( AFS allows some wild cards, so for example all users on a certain
remote system may be trusted, or a given username may be trusted when accessing from any
remote system. )

UNIX uses a set of 9 access control bits, in three groups of three. These correspond to R, W,
and X
permissions for each of the Owner, Group, and Others. ( See "man chmod" for full details. )
The
RWX bits control the following privileges for ordinary files and directories:


In addition there are some special bits that can also be applied:
o
The set user ID ( SUID ) bit and/or the set group ID ( SGID ) bits applied to executable files
temporarily change the identity of whoever runs the program to match that of the owner /
group of the executable program. This allows users running specific programs to have access
to files ( while running that program ) to which they would normally be unable to access.
Setting of these two bits is usually restricted to root, and must be done with caution, as it
introduces a potential security leak.
o
The sticky bit on a directory modifies write permission, allowing users to only delete files for
which they are the owner. This allows everyone to create files in /tmp, for example, but to
only delete files which they have created, and not anyone else's.
o
The SUID, SGID, and sticky bits are indicated with an s, s, and t in the positions for execute permission for the user, group, and others, respectively. If the letter is lower case, ( s, s, t ), then the corresponding execute permission is also given. If it is upper case, ( S, S, T ), then the corresponding execute permission is NOT given.
o
The numeric form of chmod is needed to set these advanced bits.
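A small illustrative sketch (the file names are hypothetical) of setting these advanced bits with the numeric form of the mode from C, using the standard chmod() call:

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* 04755 = SUID bit + rwxr-xr-x (the program name is hypothetical) */
    if (chmod("/usr/local/bin/myprog", 04755) != 0)
        perror("chmod");

    /* 01777 = sticky bit + rwxrwxrwx, as used on /tmp */
    if (chmod("/tmp", 01777) != 0)
        perror("chmod");
    return 0;
}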

Sample permissions in a UNIX system.

• Windows adjusts files access through a simple GUI:

Other Protection Approaches and Issues


• Some systems can apply passwords, either to individual files, or to specific sub-directories,
or to the
entire system. There is a trade-off between the number of passwords that must be maintained
( and
remembered by the users ) and the amount of information that is vulnerable to a lost or
forgotten
password.
• Older systems which did not originally have multi-user file access permissions ( DOS and
older
versions of Mac ) must now be retrofitted if they are to share files on a network.
• Access to a file requires access to all the files along its path as well. In a cyclic directory
structure,
users may have different access to the same file accessed through different paths.
• Sometimes just the knowledge of the existence of a file of a certain name is a security ( or
privacy )
concern. Hence the distinction between the R and X bits on UNIX directories.

... Gayatri Gajawada...
