
Harsh Khatter, KIET Group of Institutions

Operating System Question Bank


UNIT II

1. What is a thread?

A thread, also called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and operating-system resources such as open files and signals.
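For illustration, a minimal sketch using the POSIX threads (pthreads) API (an assumption; the question bank does not prescribe a particular thread library) shows a thread that has its own stack and registers while sharing the process's data section:

#include <pthread.h>
#include <stdio.h>

int shared_data = 42;                 /* data section shared by all threads */

/* Thread function: runs with its own program counter, registers and stack,
   but reads the same data section as the main thread. */
void *worker(void *arg) {
    printf("worker sees shared_data = %d\n", shared_data);
    return NULL;
}

int main(void) {
    pthread_t tid;                    /* thread id */
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);          /* wait for the worker to finish */
    return 0;
}

(Compile with gcc -pthread.)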

2. What are the benefits of multithreaded programming?

The benefits of multithreaded programming can be broken down into four major categories:

• Responsiveness
• Resource sharing
• Economy
• Utilization of multiprocessor architectures

3.Compare user threads and kernel threads.

User threads:
• Supported above the kernel and implemented by a thread library at the user level.
• Thread creation and scheduling are done in user space, without kernel intervention, so user threads are fast to create and manage.
• A blocking system call causes the entire process to block.

Kernel threads:
• Supported directly by the operating system.
• Thread creation, scheduling, and management are done by the operating system, so kernel threads are slower to create and manage than user threads.
• If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.

4.Define thread cancellation & target thread.

Thread cancellation is the task of terminating a thread before it has completed. A thread that is to be cancelled is often referred to as the target thread. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled.

5.What are the different ways in which a thread can be cancelled?


Cancellation of a target thread may occur in two different scenarios:

• Asynchronous cancellation: One thread immediately terminates the target thread.

• Deferred cancellation: The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
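A sketch of deferred cancellation with POSIX threads (an assumed API, not named in the question bank): pthread_cancel() requests cancellation, and the target thread honours it only at cancellation points such as pthread_testcancel().

#include <pthread.h>
#include <unistd.h>

/* Target thread: deferred cancellation is the pthreads default, so the
   thread is cancelled only when it reaches a cancellation point. */
void *target(void *arg) {
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   /* safe point at which to honour a pending cancel */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);
    sleep(1);                   /* let the target run for a while */
    pthread_cancel(tid);        /* request cancellation of the target thread */
    pthread_join(tid, NULL);    /* wait until the target has actually terminated */
    return 0;
}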

6.Define CPU scheduling.

CPU scheduling is the process of switching the CPU among various processes. CPU scheduling
is the basis of multiprogrammed operating systems. By switching the CPU among processes, the
operating system can make the computer more productive.

7.What is preemptive and nonpreemptive scheduling?

Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

Under preemptive scheduling, a process that is using the CPU can be interrupted in the middle of its execution and the CPU given to another process.

8.What is a Dispatcher?

The dispatcher is the module that gives control of the CPU to the process selected by the short-
term scheduler. This function involves:

• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program.

9.What is dispatch latency?

The time taken by the dispatcher to stop one process and start another running is known as
dispatch latency.

10.What are the various scheduling criteria for CPU scheduling?

The various scheduling criteria are

• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time

11.Define throughput?

Throughput in CPU scheduling is the number of processes that are completed per unit time. For
long processes, this rate may be one process per hour; for short transactions, throughput might be
10 processes per second.

12.What is turnaround time?

Turnaround time is the interval from the time of submission to the time of completion of a
process. It is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
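For example, a process that waits 4 ms to get into memory and the ready queue, executes for 20 ms on the CPU, and spends 6 ms doing I/O has a turnaround time of 4 + 20 + 6 = 30 ms.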

13.Define race condition.

A race condition occurs when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place. To avoid a race condition, only one process at a time should manipulate the shared variable.
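A minimal C sketch of a race condition (using POSIX threads, an assumption): counter++ is not atomic, so two concurrent threads can lose updates.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                      /* shared variable */

/* Each thread performs one million unsynchronized increments.
   counter++ is a load, add and store, so updates can interleave and be lost. */
void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Frequently prints a value below 2000000 because of the race. */
    printf("counter = %ld\n", counter);
    return 0;
}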

14.What is critical section problem?

Consider a system consisting of n processes. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, or writing a file. When one process is executing in its critical section, no other process is allowed to execute in its critical section.

15.What are the requirements that a solution to the critical section problem must satisfy?

The three requirements are

• Mutual exclusion
• Progress
• Bounded waiting

16.Define entry section and exit section.

The critical section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section is followed by an exit section. The remaining code is the remainder section.

17.Give two hardware instructions and their definitions which can be used for implementing
mutual exclusion.

• TestAndSet: atomically returns the old value of the target word and sets it to true.

boolean TestAndSet(boolean &target) {
    boolean rv = target;   // save the old value
    target = true;         // set the word
    return rv;             // the whole function executes atomically
}

• Swap: atomically exchanges the contents of two boolean words.

void Swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}
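A sketch, in the same pseudocode style, of how TestAndSet enforces mutual exclusion over a shared boolean variable lock that is initialized to false:

do {
    while (TestAndSet(lock))
        ;                      // busy wait until the lock becomes free
    // critical section
    lock = false;              // release the lock
    // remainder section
} while (true);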

18.What is semaphores?

A semaphore S is a synchronization tool: an integer variable that, apart from initialization, is accessed only through two standard atomic operations, wait and signal. Semaphores can be used to deal with the n-process critical section problem, and they can also be used to solve various other synchronization problems.

The classic definition of wait:

wait(S) {
    while (S <= 0)
        ;        // busy wait until S becomes positive
    S--;
}

The classic definition of signal:

signal(S) {
    S++;
}
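A sketch of the n-process critical section solution with a semaphore mutex initialized to 1, in the same style: each process calls wait(mutex) in its entry section and signal(mutex) in its exit section.

do {
    wait(mutex);               // entry section
    // critical section
    signal(mutex);             // exit section
    // remainder section
} while (true);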

19.Define busy waiting and spinlock.

When a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This is called busy waiting, and this type of semaphore is also called a spinlock, because the process "spins" while waiting for the lock.

20. How can we say the First Come First Served scheduling algorithm is non preemptive?

Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. So the First Come First Served scheduling algorithm is nonpreemptive.

21.What is waiting time in CPU scheduling?

Waiting time is the sum of the periods spent waiting in the ready queue. The CPU scheduling algorithm affects only the amount of time that a process spends waiting in the ready queue.

22. What is Response time in CPU scheduling?

Response time is the measure of the time from the submission of a request until the first response is produced. It is the amount of time it takes to start responding, not the time it takes to output the response.

23. Differentiate long term scheduler and short term scheduler

• The long-term scheduler, or job scheduler, selects processes from the job pool and loads them into memory for execution.
• The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

24. Write some classical problems of synchronization?

The Bounded-Buffer Problem

The Readers-Writers Problem

The Dining Philosophers Problem
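As a sketch of the first of these, the bounded-buffer (producer-consumer) problem can be outlined with three semaphores: mutex initialized to 1, empty initialized to n (the number of buffer slots), and full initialized to 0. The buffer insert and remove routines are assumed, not shown.

Producer:

do {
    // produce an item
    wait(empty);               // wait for a free slot
    wait(mutex);               // enter the critical section
    // add the item to the buffer
    signal(mutex);             // leave the critical section
    signal(full);              // one more filled slot
} while (true);

Consumer:

do {
    wait(full);                // wait for a filled slot
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);             // one more free slot
    // consume the item
} while (true);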

25. When do errors occur in the use of semaphores?

i. When a process interchanges the order of the wait and signal operations on the semaphore mutex.

ii. When a process replaces signal(mutex) with wait(mutex).

iii. When a process omits the wait(mutex), the signal(mutex), or both.

26.What is Mutual Exclusion?

Mutual exclusion is a way of making sure that if one process is using shared modifiable data, the other processes are excluded from doing the same thing. Each process accessing the shared data excludes all others from doing so simultaneously.

27.Define the term critical regions?

A critical region is a control structure for implementing mutual exclusion over a shared variable. Critical regions are kept small and infrequent so that system throughput is largely unaffected by their existence.

28.What are the drawbacks of monitors?

1. The monitor concept is not implemented in most commonly used programming languages.

2. There is the possibility of deadlock in the case of nested monitor calls.

29.What are the two levels in threads?

Threads are implemented at two levels:

1. User level
2. Kernel level

30. What is a Gantt chart?

A Gantt chart is a two-dimensional chart that plots the activity of a unit on the Y-axis versus time on the X-axis. The chart quickly shows how the activities of the units are serialized.
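For example (burst times chosen only for illustration), three processes P1 (24 ms), P2 (3 ms), and P3 (3 ms) scheduled FCFS in that order give the Gantt chart:

|      P1      | P2 | P3 |
0              24   27   30

Waiting times read directly off the chart: 0 ms for P1, 24 ms for P2, and 27 ms for P3.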
