OPERATING SYSTEM

MODULE 1
1. What is PCB?
A process control block (PCB) is a data structure used by computer operating
systems to store all the information about a process
2. What is time sharing operating system?

A time-shared operating system allows multiple users to share a computer simultaneously. Because each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. As the system switches rapidly from one user to the next, each user gets the impression that the entire computer system is dedicated to his or her use, even though it is being shared among many users.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a shared computer at a time. Each user has at least one separate program in memory. When a program is loaded into memory and executes, it runs for only a short period of time before it either finishes or needs to perform I/O. This short period of time during which a user gets the attention of the CPU is known as a time slice, time slot, or quantum; it is typically of the order of 10 to 100 milliseconds.
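
The following sketch simulates this time-slice behaviour with a fixed quantum; the process names, burst times, and quantum value are made up purely for illustration and are not part of the module.

/* Round-robin time-slice simulation (illustrative sketch only).
 * Process names and burst times are made up for the example. */
#include <stdio.h>

#define QUANTUM 4   /* time slice in arbitrary ticks */
#define NPROC   3

int main(void) {
    const char *name[NPROC] = { "P1", "P2", "P3" };
    int remaining[NPROC]    = { 10, 5, 8 };   /* CPU time still needed */
    int clock = 0, left = NPROC;

    while (left > 0) {
        for (int i = 0; i < NPROC; i++) {
            if (remaining[i] == 0)
                continue;                      /* process already finished */
            int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            printf("t=%2d: %s runs for %d tick(s)\n", clock, name[i], run);
            clock += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                printf("t=%2d: %s finishes\n", clock, name[i]);
                left--;
            }
        }
    }
    return 0;
}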

3. What is process? Explain its different states.


A process is basically a program in execution. The execution of a process must progress
in a sequential fashion.

Start
This is the initial state when a process is first started/created.

Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may come into this
state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be
assigned to some other process.

Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.

Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user input,
or waiting for a file to become available.
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved to the
terminated state where it waits to be removed from main memory.

4. What is an operating system? Explain its functions and evolution

The First Generation (1940s to early 1950s)

When the first electronic computers were developed in the 1940s, they were created without any
operating system. In those early days, users had full access to the machine and wrote a program for
each task directly in absolute machine language. Programmers could perform and solve only simple
mathematical calculations, and this kind of computing did not require an operating system.

The Second Generation (1955 - 1965)

The first operating system (OS) was created in the early 1950s and was known as GMOS;
General Motors developed it for an IBM computer. Second-generation operating systems were
based on single-stream batch processing: similar jobs were collected into groups or batches on
punched cards and then submitted to the operating system, which ran them one after another.
On each job's completion (whether normal or abnormal), control transferred back to the operating
system, which cleaned up after the finished job and then read in and initiated the next job from
the punched cards. The new machines of this era were called mainframes; they were very large
and were run by professional operators.

The Third Generation (1965 - 1980)

During the late 1960s, operating system designers developed the ability to run multiple tasks in a
single computer at the same time, a technique called multiprogramming. The introduction of
multiprogramming played a very important role in the evolution of operating systems because it
allows the CPU to be kept busy by switching among several tasks resident in memory. The third
generation also saw the phenomenal growth of minicomputers, starting in 1961 with the DEC
PDP-1. These PDPs led to the creation of personal computers in the fourth generation.

The Fourth Generation (1980 - Present Day)

The fourth generation of operating systems is tied to the rise of the personal computer, which is
quite similar to the minicomputers developed in the third generation but costs only a small
fraction of a minicomputer's price. A major factor in the spread of personal computers was the
birth of Microsoft and, later, the Windows operating system. Microsoft was founded in 1975 by
Bill Gates and Paul Allen, who had the vision of taking personal computers to the next level.
They introduced MS-DOS in 1981, although its cryptic commands were difficult for ordinary
users to understand, and released the first version of Windows in 1985. Windows went on to
become the most popular and most commonly used operating system, with releases such as
Windows 95, Windows 98, Windows XP, and Windows 7; today most Windows users run
Windows 10. Besides Windows, another popular operating system of the 1980s came from
Apple, co-founded by Steve Jobs: the Macintosh operating system, Mac OS.

5. What is semaphore? Explain its implementation.


The semaphore, proposed by Dijkstra in 1965, is a very significant technique for managing
concurrent processes using a simple integer value, which is known as a semaphore. A semaphore
is simply a non-negative variable shared between threads. This variable is used to solve the
critical-section problem and to achieve process synchronization in a multiprocessing
environment.

Semaphores are of two types:


1. Binary Semaphore – This is also known as mutex lock. It can have only two values – 0
and 1. Its value is initialized to 1. It is used to implement the solution of critical section
problem with multiple processes.
2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.
Now let us see how it does so.
First, look at the two operations which are used to access and change the value of the semaphore
variable: P and V.
Some points regarding the P and V operations:
1. The P operation is also called wait, sleep, or down; the V operation is also called signal,
wake-up, or up.
2. Both operations are atomic, and for mutual exclusion the semaphore s is initialized to 1. Here
atomic means that the read, modify, and update of the variable happen as one indivisible step,
with no preemption, i.e. no other operation that could change the variable is performed in
between.
3. A critical section is surrounded by both operations to implement process synchronization:
the critical section of a process P lies between its P and V operations.

Now, let us see how it implements mutual exclusion. Let there be two processes P1 and P2, and a
semaphore s initialized to 1. If P1 enters its critical section, the value of semaphore s becomes 0.
If P2 then wants to enter its critical section, it must wait until s > 0, which can only happen when
P1 finishes its critical section and calls the V operation on semaphore s. This is how mutual
exclusion is achieved with a binary semaphore.
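
The following is a minimal sketch of this binary-semaphore idea written against the POSIX semaphore API (sem_wait corresponds to the P/wait/down operation, sem_post to the V/signal/up operation); the worker function and the shared counter are illustrative assumptions, not part of the module.

/* Two threads incrementing a shared counter, with the critical section
 * guarded by a binary semaphore (illustrative sketch). Compile with -pthread. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;                 /* binary semaphore, initialised to 1 */
static long counter = 0;        /* shared data touched in the critical section */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);           /* P / wait / down: enter critical section */
        counter++;              /* critical section */
        sem_post(&s);           /* V / signal / up: leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);         /* shared between threads, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* expected 200000 */
    sem_destroy(&s);
    return 0;
}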
6. What is deadlock? Explain necessary condition and sufficient conditions for the
occurrence of deadlock.
A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider the example of two trains approaching each other on a single track: once they are face
to face, neither can move. A similar situation occurs in operating systems when two or more
processes hold some resources and wait for resources held by others. For example, Process 1
may hold Resource 1 while waiting for Resource 2, which is held by Process 2, while Process 2
in turn waits for Resource 1.

Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: at least one resource is non-shareable (only one process can use it at a time).
Hold and Wait: a process is holding at least one resource while waiting for additional resources
held by other processes.
No Preemption: a resource cannot be taken from a process unless the process releases it.
Circular Wait: a set of processes are waiting for each other in circular form.
Methods for handling deadlock
There are three ways to handle deadlock
1) Deadlock prevention or avoidance: the idea is to never let the system enter a deadlock
state.
Prevention is done by negating one of the above-mentioned necessary conditions for deadlock.
Avoidance, by contrast, looks ahead: it requires that all the information about the resources a
process will need is known before the process executes. The Banker's algorithm (itself a gift
from Dijkstra) is used to avoid deadlock; a sketch of its safety check appears after this list.

2) Deadlock detection and recovery: let deadlock occur, detect it, and then use preemption to
recover once it has occurred.
3) Ignore the problem altogether: if deadlock is very rare, then let it happen and reboot
the system. This is the approach that both Windows and UNIX take.
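
As noted under avoidance above, the Banker's algorithm checks whether granting requests keeps the system in a safe state. The sketch below shows only its safety check; the allocation, maximum, and available values are a made-up, textbook-style example and are assumptions, not data from this module.

/* Banker's-algorithm safety check (illustrative sketch with made-up data). */
#include <stdio.h>
#include <stdbool.h>

#define P 5   /* number of processes      */
#define R 3   /* number of resource types */

int main(void) {
    int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
    int max[P][R]   = { {7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3} };
    int avail[R]    = { 3, 3, 2 };

    bool finished[P] = { false };
    int safe_seq[P], count = 0;

    while (count < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)           /* need = max - alloc        */
                if (max[i][j] - alloc[i][j] > avail[j]) { can_run = false; break; }
            if (!can_run) continue;
            for (int j = 0; j < R; j++)           /* pretend the process runs  */
                avail[j] += alloc[i][j];          /* and returns its resources */
            finished[i] = true;
            safe_seq[count++] = i;
            progressed = true;
        }
        if (!progressed) {                        /* no process could finish   */
            printf("Unsafe state: deadlock is possible\n");
            return 1;
        }
    }
    printf("System is in a safe state; safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
    printf("\n");
    return 0;
}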

7. A --- can be considered as a program in execution


process
8. ---- is a multi-user operating system
Windows, Linux, etc.
9. A PCB contains information about ----
a process
10. ---- provides the interface between the user and the computer
The operating system
11. Define the term operating system
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
12. Discuss the features of time sharing and multi user operating system

Time-sharing enables many people, located at various terminals, to use a particular computer
system at the same time. Multitasking or Time-Sharing Systems is a logical extension of
multiprogramming. Processor’s time is shared among multiple users simultaneously is termed as
time-sharing.
The main difference between Time-Sharing Systems and Multiprogrammed Batch Systems is
that in case of Multiprogrammed batch systems, the objective is to maximize processor use,
whereas in Time-Sharing Systems, the objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so
frequently that each user receives an immediate response. For example, in transaction
processing the processor executes each user program in a short burst, or quantum, of
computation; if n users are present, each user gets a time quantum in turn. When a user submits
a command, the response time is at most a few seconds.
The operating system uses CPU scheduling and multiprogramming to provide each user with a
small portion of the processor's time. Computer systems that were designed primarily as batch
systems have been modified into time-sharing systems.
Advantages of Timesharing operating systems are −

• It provides the advantage of quick response.


• This type of operating system avoids duplication of software.
• It reduces CPU idle time.
Disadvantages of Time-sharing operating systems are −

• Time sharing has problem of reliability.


• Question of security and integrity of user programs and data can be raised.
• Problem of data communication occurs.

13. Discuss about deadlock detection

Deadlock Detection
1. If resources have a single instance:
In this case deadlock can be detected by running an algorithm that checks for a cycle in the
resource-allocation graph; with single-instance resources, the presence of a cycle is a sufficient
(as well as necessary) condition for deadlock. A small sketch of such a cycle check appears
below.

For example, suppose resources R1 and R2 each have a single instance and the graph contains
the cycle R1 → P1 → R2 → P2 → R1; deadlock is then confirmed.
2. If there are multiple instances of resources:
Detection of a cycle is then a necessary but not a sufficient condition for deadlock; the system
may or may not be in deadlock, depending on the situation.
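
A minimal sketch of the single-instance cycle check: processes and resources are nodes of a directed graph (assignment edges go resource → process, request edges go process → resource), and a depth-first search looks for a cycle. The adjacency matrix below encodes the R1 → P1 → R2 → P2 → R1 example; the node numbering is an assumption made only for this sketch.

/* Cycle detection in a single-instance resource-allocation graph
 * (illustrative sketch; the example graph encodes R1->P1->R2->P2->R1). */
#include <stdio.h>
#include <stdbool.h>

#define N 4   /* nodes: 0=R1, 1=P1, 2=R2, 3=P2 */

static const int edge[N][N] = {
    /*        R1 P1 R2 P2 */
    /* R1 */ { 0, 1, 0, 0 },   /* R1 assigned to P1 */
    /* P1 */ { 0, 0, 1, 0 },   /* P1 requests R2    */
    /* R2 */ { 0, 0, 0, 1 },   /* R2 assigned to P2 */
    /* P2 */ { 1, 0, 0, 0 },   /* P2 requests R1    */
};

static bool visited[N], on_stack[N];

static bool dfs(int u) {
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!edge[u][v]) continue;
        if (on_stack[v]) return true;            /* back edge => cycle */
        if (!visited[v] && dfs(v)) return true;
    }
    on_stack[u] = false;
    return false;
}

int main(void) {
    for (int u = 0; u < N; u++)
        if (!visited[u] && dfs(u)) {
            printf("Cycle found: deadlock (single-instance resources)\n");
            return 0;
        }
    printf("No cycle: no deadlock\n");
    return 0;
}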
Deadlock Recovery
A general-purpose operating system such as Windows does not attempt deadlock recovery, as it is a
time- and space-consuming process; deadlock recovery is used mainly in real-time operating systems.
Recovery methods
1. Killing processes: kill all the processes involved in the deadlock, or kill them one by one,
checking for deadlock after each kill and repeating until the system recovers.
2. Resource preemption: resources are preempted from the processes involved in the deadlock
and allocated to other processes, so that the system has a chance of recovering from deadlock.
In this case, however, the preempted processes may suffer starvation.
14. With a neat diagram explain process states

The process, from its creation to completion, passes through various states. The
minimum number of states is five.

The names of the states are not standardized although the process may be in one of the
following states during execution.

1. New

A program that is about to be picked up by the OS and brought into main memory is called a
new process.
2. Ready

Whenever a process is created, it directly enters the ready state, in which it waits for
the CPU to be assigned. The OS picks new processes from secondary memory and
puts them in main memory.

The processes which are ready for the execution and reside in the main memory are
called ready state processes. There can be many processes present in the ready state.

3. Running

One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of
running processes for a particular time will always be one. If we have n processors in the
system then we can have n processes running simultaneously.

4. Block or wait

From the Running state, a process can make the transition to the block or wait state
depending upon the scheduling algorithm or the intrinsic behavior of the process.

When a process waits for a certain resource to be assigned, or for input from the user,
the OS moves it to the block or wait state and assigns the CPU to other processes.

5. Completion or termination

When a process finishes its execution, it enters the termination state. The entire context of the
process (its Process Control Block) is deleted and the process is removed by the
operating system.

6. Suspend ready

A process in the ready state that is moved from main memory to secondary memory due to a
lack of resources (mainly primary memory) is said to be in the suspend ready state.
If main memory is full and a higher-priority process arrives for execution, the OS has to make
room for it by moving a lower-priority process out to secondary memory. Suspend ready
processes remain in secondary memory until main memory becomes available.

7. Suspend wait

Instead of removing a process from the ready queue, it is often better to move out a blocked
process that is waiting for some resource in main memory: since it is already waiting for a
resource to become available, it may as well wait in secondary memory and make room for a
higher-priority process. Such processes complete their execution once main memory becomes
available and their wait is finished.
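
The states above can be summarized as a small enum with transition helpers; the function names (admit, dispatch, block, wakeup, finish) are illustrative assumptions, chosen only to mirror the transitions described above.

/* Sketch of the process states described above as a C enum,
 * with helpers that apply one legal transition per event. */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED,
               SUSPEND_READY, SUSPEND_WAIT } proc_state;

static const char *name(proc_state s) {
    static const char *n[] = { "New", "Ready", "Running", "Waiting",
                               "Terminated", "Suspend ready", "Suspend wait" };
    return n[s];
}

/* Example transitions: admit, dispatch, I/O request, I/O done, exit. */
static proc_state admit(proc_state s)    { return s == NEW     ? READY      : s; }
static proc_state dispatch(proc_state s) { return s == READY   ? RUNNING    : s; }
static proc_state block(proc_state s)    { return s == RUNNING ? WAITING    : s; }
static proc_state wakeup(proc_state s)   { return s == WAITING ? READY      : s; }
static proc_state finish(proc_state s)   { return s == RUNNING ? TERMINATED : s; }

int main(void) {
    proc_state s = NEW;
    s = admit(s);    printf("after admit:    %s\n", name(s));
    s = dispatch(s); printf("after dispatch: %s\n", name(s));
    s = block(s);    printf("after block:    %s\n", name(s));
    s = wakeup(s);   printf("after wakeup:   %s\n", name(s));
    s = dispatch(s); printf("after dispatch: %s\n", name(s));
    s = finish(s);   printf("after exit:     %s\n", name(s));
    return 0;
}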

15. Explain the function of operating system

Following are some of important functions of an operating System.

• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main


memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program
to be executed, it must be in main memory. An operating system does the following activities
for memory management −
• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which
parts are not in use.
• In multiprogramming, the OS decides which process will get memory when and how
much.
• Allocates the memory when a process requests it to do so.
• De-allocates the memory when a process no longer needs it or has been terminated.

Processor Management

In multiprogramming environment, the OS decides which process gets the processor when and
for how much time. This function is called process scheduling. An Operating System does the
following activities for processor management −
• Keeps track of the processor and the status of processes. The program responsible for this task
is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates processor when a process is no longer required.

Device Management

An Operating System manages device communication via their respective drivers. It does the
following activities for device management −
• Keeps track of all devices. The program responsible for this task is known as the I/O
controller.
• Decides which process gets the device when and for how much time.
• Allocates the device in the efficient way.
• De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
An Operating System does the following activities for file management −
• Keeps track of information, location, uses, status etc. The collective facilities are often
known as file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −
• Security − By means of password and similar other techniques, it prevents unauthorized
access to programs and data.
• Control over system performance − Recording delays between request for a service
and response from the system.
• Job accounting − Keeping track of time and resources used by various jobs and users.
• Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error detecting aids.
• Coordination between other software and users − Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the
computer systems.

16. Explain resource allocation graph with an example


Resource Allocation Graph

no cycle IMPLIES no deadlock


deadlock IMPLIES cycle (necessary condition)
cycle IMPLIES maybe deadlock (but not sufficient condition)
single instance resource AND cycle IMPLIES deadlock
(necessary and sufficient)
Deadlock with multiple-instance resources: (diagrams omitted) one resource-allocation graph that
is deadlocked and one that is not. Here P1, P2, and P3 are processes and R1, R2, and R3 are
resources.


17. Methods for Handling Deadlock
1. Never let deadlock occur:
   • prevention: break one of the 4 conditions
   • avoidance: processes give advance notice of their maximum resource use
2. Let deadlock occur and do something about it:
   • detection: search for cycles periodically
   • recovery: preempt processes or resources
3. Don't worry about it (UNIX and other OSs):
   • cheap: just reboot (it happens rarely)
18. Methods of preventing deadlock

Deadlock: Prevention
1. Break mutual exclusion:
   • read-only files are shareable
   • but some resources are intrinsically non-shareable (printers)
2. Break hold and wait:
   • request all resources in advance, e.g. request(tape, disk, printer)
   • or release all resources before requesting a new batch, e.g. request(tape, disk);
     release(tape, disk); request(disk, printer)
   • disadvantages: low resource utilization, starvation
19. Explain pcb with neat diagram
A Process Control Block is a data structure maintained by the Operating System for
every process. The PCB is identified by an integer process ID (PID). A PCB keeps all
the information needed to keep track of a process as listed below in the table −

S.N. Information & Description

1 Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each process in the operating system.

4 Pointer
A pointer to parent process.
5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this process.

6 CPU registers
The various CPU registers whose contents must be saved for the process so that it can resume execution in the running state.

7 CPU Scheduling Information


Process priority and other scheduling information which is required to schedule the process.

8 Memory management information


This includes the information of page table, memory limits, Segment table depending on memory used
by the operating system.

9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID etc.

10 IO status information
This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may
contain different information in different operating systems. A simplified sketch
of a PCB follows.
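
The struct below mirrors the fields listed in the table above; its field names, types, and sizes are assumptions made for illustration only and do not correspond to any real operating system's PCB layout.

/* Illustrative sketch of a PCB; fields mirror the table above,
 * but the types and sizes are assumptions, not any real OS's layout. */
#include <stdint.h>
#include <stdio.h>

typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } pstate_t;

struct pcb {
    int        pid;                 /* Process ID                          */
    int        ppid;                /* Pointer to parent (parent PID)      */
    pstate_t   state;               /* Process state                       */
    int        privileges;          /* allow/disallow access to resources  */
    uintptr_t  program_counter;     /* address of next instruction         */
    uintptr_t  registers[16];       /* saved CPU registers                 */
    int        priority;            /* CPU scheduling information          */
    uintptr_t  page_table_base;     /* memory-management information       */
    uintptr_t  mem_limit;           /* memory limits                       */
    unsigned   cpu_time_used;       /* accounting information              */
    int        open_devices[8];     /* I/O status information              */
    struct pcb *next;               /* link for ready/wait queues          */
};

int main(void) {
    struct pcb p = { .pid = 1, .ppid = 0, .state = P_READY, .priority = 5 };
    printf("pid=%d state=%d priority=%d\n", p.pid, (int)p.state, p.priority);
    return 0;
}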
20. Explain the necessary conditions of deadlock
Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: at least one resource is non-shareable (only one process can use it at a time).
Hold and Wait: a process is holding at least one resource while waiting for additional resources
held by other processes.
No Preemption: a resource cannot be taken from a process unless the process releases it.
Circular Wait: a set of processes are waiting for each other in circular form.

21. Explain deadlock prevention in detail

Deadlock Prevention And Avoidance

Deadlock Characteristics
Deadlock has the following four characteristics:
1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait

Deadlock Prevention
We can prevent Deadlock by eliminating any of the above four conditions.

Eliminate Mutual Exclusion


It is not always possible to eliminate mutual exclusion, because some resources, such as tape drives
and printers, are inherently non-shareable.

Eliminate Hold and wait


1. Allocate all required resources to a process before it starts executing. This eliminates the
hold-and-wait condition but leads to low device utilization: for example, if a process needs a
printer only at a later time but the printer is allocated before execution starts, the printer remains
blocked until the process has completed.
2. Alternatively, require the process to release its current set of resources before making a new
request. This solution may lead to starvation.
Eliminate No Preemption
Preempt resources from a process when they are required by other, higher-priority processes.

Eliminate Circular Wait


Each resource is assigned a numerical value, and a process may request resources only in
increasing order of that numbering.
For example, if process P1 has been allocated resource R5, a later request by P1 for R4 or R3
(numbered lower than R5) will not be granted; only requests for resources numbered higher than
R5 will be granted. A small sketch of this lock-ordering rule follows.
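
A minimal sketch of the ordering rule using two pthread mutexes numbered R1 and R2: every thread acquires them in increasing order, so a circular wait can never form. The thread body is an illustrative assumption, not part of the module.

/* Circular-wait prevention by lock ordering (illustrative sketch).
 * Both threads acquire the numbered locks in increasing order (R1 then R2),
 * so a cycle of waits can never form. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;   /* resource #1 */
static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;   /* resource #2 */

static void *task(void *arg) {
    const char *who = arg;
    pthread_mutex_lock(&R1);          /* always lower-numbered lock first */
    pthread_mutex_lock(&R2);
    printf("%s holds R1 and R2\n", who);
    pthread_mutex_unlock(&R2);        /* release in reverse order */
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, task, "thread A");
    pthread_create(&b, NULL, task, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}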

22. Explain multi processor OS


Definition – A multiprocessor operating system manages multiple processors that are connected
to shared physical memory, computer buses, clocks, and peripheral devices. The main objective
of using a multiprocessor operating system is to harness the combined computing power of the
processors and increase the execution speed of the system.
23. Distinguish between multiprogramming and multitasking operating system
Multiprogramming is the ability to keep more than one program in memory at a time on a single
CPU; the idea is to utilize the processor effectively by always having several ready-to-run
processes, each of which may belong to a different user.
Multitasking extends multiprogramming by switching the CPU among these processes so
frequently that one or more users can run multiple tasks seemingly at the same time, even on a
single CPU.

24. Explain the different types of operating systems

1. Batch Operating System –


This type of operating system does not interact with the computer directly. An operator takes
similar jobs with the same requirements and groups them into batches. It is the responsibility of
the operator to sort jobs with similar needs.
Advantages of Batch Operating System:

• Although it is in general very difficult to guess the time required for a job to complete, the
processors of batch systems know how long a job will be while it is in the queue
• Multiple users can share the batch systems
• The idle time for the batch system is very less
• It is easy to manage large work repeatedly in batch systems
Disadvantages of Batch Operating System:
• The computer operators should be well known with batch systems
• Batch systems are hard to debug
• It is sometimes costly
• The other jobs will have to wait for an unknown time if any job fails
Examples of Batch based Operating System: Payroll System, Bank Statements, etc.
2. Time-Sharing Operating Systems –
Each task is given some time to execute so that all the tasks work smoothly. Each user gets the
time of CPU as they use a single system. These systems are also known as Multitasking
Systems. The task can be from a single user or different users also. The time that each task gets
to execute is called quantum. After this time interval is over OS switches over to the next task.

Advantages of Time-Sharing OS:


• Each task gets an equal opportunity
• Fewer chances of duplication of software
• CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
• Reliability problem
• One must have to take care of the security and integrity of user programs and data
• Data communication problem
Examples of Time-Sharing OSs are: Multics, Unix, etc.
3. Distributed Operating System –
These types of the operating system is a recent advancement in the world of computer
technology and are being widely accepted all over the world and, that too, with a great pace.
Various autonomous interconnected computers communicate with each other using a shared
communication network. Independent systems possess their own memory unit and CPU. These
are referred to as loosely coupled systems or distributed systems. Their processors may differ in
size and function. The major benefit of this type of operating system is that a user can access
files or software that are not actually present on his own system but on some other system
connected to the network, i.e., remote access is enabled among the devices connected to that
network.

Advantages of Distributed Operating System:


• Failure of one will not affect the other network communication, as all systems are
independent from each other
• Electronic mail increases the data exchange speed
• Since resources are being shared, computation is highly fast and durable
• Load on host computer reduces
• These systems are easily scalable as many systems can be easily added to the network
• Delay in data processing reduces
Disadvantages of Distributed Operating System:
• Failure of the main network will stop the entire communication
• The languages used to establish distributed systems are not yet well defined
• These types of systems are not readily available, as they are very expensive; moreover, the
underlying software is highly complex and not yet well understood
Examples of Distributed Operating System are- LOCUS, etc.
4. Network Operating System –
These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. These types of operating systems allow
shared access of files, printers, security, applications, and other networking functions over a
small private network. One more important aspect of Network Operating Systems is that all the
users are well aware of the underlying configuration, of all other users within the network,
their individual connections, etc. and that’s why these computers are popularly known
as tightly coupled systems.
Advantages of Network Operating System:
• Highly stable centralized servers
• Security concerns are handled through servers
• New technologies and hardware up-gradation are easily integrated into the system
• Server access is possible remotely from different locations and types of systems
Disadvantages of Network Operating System:
• Servers are costly
• User has to depend on a central location for most operations
• Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD, etc.
5. Real-Time Operating System –
These types of OSs serve real-time systems. The time interval required to process and respond
to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
There are two types of real-time operating systems:
• Hard Real-Time Systems:
These OSs are meant for applications where time constraints are very strict and even the
shortest possible delay is not acceptable. These systems are built for saving life like
automatic parachutes or airbags which are required to be readily available in case of any
accident. Virtual memory is rarely found in these systems.
• Soft Real-Time Systems:
These OSs are for applications where the time constraint is less strict.
Advantages of RTOS:
• Maximum Consumption: Maximum utilization of devices and system, thus more output
from all the resources
• Task Shifting: The time needed to switch between tasks in these systems is very small. For
example, older systems take about 10 microseconds to shift from one task to another, while the
latest systems take about 3 microseconds.
• Focus on Application: The focus is on the running applications, with less importance given to
applications waiting in the queue.
• Real-time operating system in the embedded system: Since programs are small in size, an
RTOS can also be used in embedded systems such as those in transport and other domains.
• Error Free: These types of systems are error-free.
• Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
• Limited Tasks: Very few tasks run at the same time, and the system concentrates on only a few
applications in order to avoid errors.
• Use heavy system resources: These systems sometimes use a lot of system resources, which
are expensive as well.
• Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
• Device driver and interrupt signals: An RTOS needs specific device drivers and interrupt
signals so that it can respond to interrupts as early as possible.
• Thread Priority: It is difficult to set thread priorities well, because these systems switch tasks
very rarely.
Examples of Real-Time Operating Systems are: Scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

25. What is a process scheduler?


Schedulers are special system software which handle process scheduling in various ways
26. What are the characteristics of a good processor scheduler?

Scheduling can be defined as a set of policies and mechanisms which controls the order in
which the work to be done is completed. The scheduling program which is a system software
concerned with scheduling is called the scheduler and the algorithm it uses is called the
scheduling algorithm.
Various criteria or characteristics that help in designing a good scheduling algorithm are:
• CPU Utilization − A scheduling algorithm should be designed so that the CPU remains as
busy as possible; it should make efficient use of the CPU.
• Throughput − Throughput is the amount of work completed in a unit of time; in other words,
it is the number of jobs completed per unit of time. The scheduling algorithm should aim to
maximize the number of jobs processed per time unit.
• Response time − Response time is the time taken to start responding to the request. A
scheduler must aim to minimize response time for interactive users.
• Turnaround time − Turnaround time refers to the time between the moment of
submission of a job/ process and the time of its completion. Thus how long it takes to
execute a process is also an important factor.
• Waiting time − It is the time a job waits for resource allocation when several jobs are
competing in a multiprogramming system. The aim is to minimize waiting time (a small
worked example of waiting and turnaround time follows this list).
• Fairness − A good scheduler should make sure that each process gets its fair share of the
CPU.
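
A small worked example of two of these criteria (waiting time and turnaround time) under first-come-first-served scheduling; the burst times are made up, and all processes are assumed to arrive at time 0.

/* Average waiting and turnaround time under FCFS (illustrative sketch).
 * Burst times are made up; all processes are assumed to arrive at t = 0. */
#include <stdio.h>

#define N 4

int main(void) {
    int burst[N] = { 6, 8, 3, 4 };        /* CPU burst of each process   */
    int wait = 0, turnaround = 0, clock = 0;

    for (int i = 0; i < N; i++) {
        wait += clock;                    /* time spent waiting in queue */
        clock += burst[i];                /* process runs to completion  */
        turnaround += clock;              /* finish time - arrival (0)   */
    }
    printf("average waiting time    = %.2f\n", (double)wait / N);       /* 9.25  */
    printf("average turnaround time = %.2f\n", (double)turnaround / N); /* 14.50 */
    return 0;
}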

27. What is a semaphore?


It is a synchronization tool. A semaphore is a shared object that can be manipulated
only by two atomic operations, P and V.
28. What are the contents of a PCB?

Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.

Process privileges
This is required to allow/disallow access to system resources.

Process ID
Unique identification for each of the process in the operating system.

Pointer
A pointer to parent process.
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this process.

CPU registers
The various CPU registers whose contents must be saved for the process so that it can resume execution in the running state.

CPU Scheduling Information


Process priority and other scheduling information which is required to schedule the process.

Memory management information


This includes the information of page table, memory limits, Segment table depending on memory used by the
operating system.

Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID etc.

IO status information
This includes a list of I/O devices allocated to the process.

29. What are the features of a distributed OS?

Connecting Users and Resources :


The main goal of a distributed system is to make it easy for users to access remote
resources, and to share them with other users in a controlled manner. Resources can be
virtually anything; typical examples are printers, storage facilities, data, files, web pages,
and networks. There are many reasons for sharing resources; one reason is economics.

Transparency :
An important goal of a distributed system is to hide the fact that its processes and
resources are physically distributed across multiple computers. A distributed system
that is capable of presenting itself to users and applications as if it were a single
computer system is called transparent.
Openness :
Another important goal of distributed systems is openness. An open distributed system offers
services according to standard rules that describe the syntax and semantics of those services; in
computer networks, for instance, standard rules control the format, content, and meaning of the
messages sent and received. Such rules are formalized in protocols. In distributed systems,
services are typically specified through interfaces, often written in an interface definition
language (IDL). Interface definitions written in an IDL almost always capture only the syntax of
services: they precisely specify the names of the functions that are available, together with the
types of their parameters, return values, possible exceptions that can be raised, and so on.

Scalability :
The clear trend in distributed systems is towards larger and larger systems. This observation has
implications for distributed file system design: algorithms that work well for systems with
100 machines may work poorly for systems with 1,000 machines and not at all for systems with
10,000 machines. For starters, centralized algorithms do not scale well. If opening a file
requires contacting a single centralized server to record the fact that the file is open,
then that server will eventually become a bottleneck as the system grows.

Reliability :
One of the original goals of building distributed systems was to make them more reliable than
single-processor systems. The idea is that if some machine goes down, some other machine
takes over its work. In other words, theoretically the reliability of the overall system can be a
Boolean OR of the component reliabilities. For example, with four file servers, each with a
0.95 chance of being up at any instant, the probability of all four being down
simultaneously is about 0.000006, so the probability of at least one being available is about
(1 − 0.000006) = 0.999994, far better than any individual server.
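
That availability figure can be reproduced with a few lines of arithmetic; the sketch below simply evaluates P(all n servers down) = (1 − p)^n for the values quoted above.

/* Availability of "at least one of n servers up" (illustrative arithmetic). */
#include <stdio.h>

int main(void) {
    double p = 0.95;                 /* probability that one server is up   */
    int    n = 4;                    /* number of independent file servers  */
    double all_down = 1.0;
    for (int i = 0; i < n; i++)
        all_down *= (1.0 - p);       /* (1 - p)^n                           */
    printf("P(all %d servers down) = %g\n", n, all_down);       /* ~6.25e-06  */
    printf("P(at least one is up) = %.7f\n", 1.0 - all_down);   /* ~0.9999938 */
    return 0;
}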

Performance :
Building a transparent, flexible, reliable distributed system is useless if it is slow like
molasses. In particular application on a distributed system, it should not deteriorate better
than running some application on a single processor. Various performance metrics can be
used. Response time is one, but so are throughput, system utilization, and amount of
network capacity consumed. Furthermore, The results of any benchmark are often highly
dependent on the nature of the benchmark. A benchmark involves a large number of
independent highly CPU-bound computations which give radically different results than a
benchmark that consists of scanning a single large file for same pattern.

30. What is the degree of multiprogramming?


The degree of multiprogramming is the number of processes held in main memory at a time; it
describes the maximum number of processes a single-processor system can accommodate
efficiently.
31. Distinguish between a job and a process
A process refers to a program under execution; this program may be an application program or
a system program. A job means an application program, not a system program.
32. The work done by the processor in a unit of time is called ----
Throughput
33. The concept of ------ helps in keeping the processor busy, ideally having some job to
execute all the time
Scheduling
34. A ----- is a popular synchronization tool used to handle the critical section problem
semaphore
35. What are the operations on a process?

1. Creation

Once a process is created, it comes into the ready queue (in main memory) and is ready
for execution.

2. Scheduling

Out of the many processes present in the ready queue, the operating system chooses one
process and starts executing it. Selecting the process that is to be executed next is known as
scheduling.

3. Execution

Once the process is scheduled for execution, the processor starts executing it. The process
may enter the blocked or wait state during execution; in that case the processor starts
executing other processes.

4. Deletion/killing
Once the purpose of the process is over, the OS kills it. The context of the
process (its PCB) is deleted and the process is terminated by the operating system.
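
On a POSIX system, creation, execution, and deletion/killing map naturally onto fork, exec, and wait (scheduling itself happens inside the kernel and is not visible in user code). The sketch below is only illustrative; the command it runs, /bin/echo, is just an example.

/* Process creation, execution and termination on POSIX (illustrative sketch).
 * The parent creates a child with fork(); the child runs /bin/echo via exec;
 * the parent waits for it, after which the child's PCB can be discarded. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                    /* 1. Creation */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                        /* child process */
        execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
        perror("execl");                   /* 3. Execution: reached only on error */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);              /* parent blocks until child exits */
    printf("child %d terminated, status %d\n",    /* 4. Deletion/killing: the OS */
           (int)pid, WEXITSTATUS(status));        /* reclaims the child's PCB    */
    return 0;
}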
