
OS Answer Key

The document is an internal test on Operating Systems covering key concepts such as fork and exec system calls, CPU scheduling, throughput, turnaround time, race conditions, critical sections, semaphores, swapping, and virtual memory. It also details scheduling algorithms like First Come First Serve (FCFS) and thread scheduling, including lightweight processes and contention scope. Additionally, it discusses the advantages and disadvantages of swapping as a memory management technique.


DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING


INTERNAL TEST – II

OPERATING SYSTEMS / AIEC401

Part-A

1. What is the use of the fork and exec system calls?

 The fork() system call creates a new process that is a duplicate of the calling
process; both the parent and the child then continue executing.
 The exec() system call does not create a new process; it replaces the memory image
of the calling process with a new program, so the original program is not preserved.
 The semantics of the fork() and exec() system calls change in a multithreaded
program: some fork() variants duplicate all threads while others duplicate only the
calling thread, and exec() replaces the entire process, including all its threads.
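The difference can be sketched in a few lines of Python on a POSIX system (os.fork() is Unix-only); the program name "echo" and its message are arbitrary choices for illustration:

```python
import os

pid = os.fork()            # duplicate the calling process
if pid == 0:
    # Child: exec() replaces this process image with a new program.
    # On success, nothing after the exec call ever runs in the child.
    try:
        os.execvp("echo", ["echo", "hello from the child"])
    finally:
        os._exit(127)      # reached only if exec failed
else:
    # Parent: fork() returned the child's pid; wait for it to finish.
    _, status = os.waitpid(pid, 0)
    exit_code = os.waitstatus_to_exitcode(status)
```

Note that exec() never returns on success, which is why the fallback exit is needed only for the failure case.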

2. Define CPU scheduling

 CPU scheduling is responsible for selecting a process from the ready queue for
execution on the processor, based on a scheduling method, as well as removing a
running process from the processor when required.
 It is a crucial component of a multiprogramming operating system.
 Process scheduling makes use of a variety of scheduling queues.

3. Define throughput

If the CPU is busy executing processes, then work is being done. One measure of work is
the number of processes that are completed per time unit, called throughput. For long processes,
this rate may be one process per hour; for short transactions, it may be ten processes per second.
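The definition reduces to a one-line calculation. The figures below are illustrative, chosen to match the "short transactions" rate quoted above:

```python
completed = 40                        # processes finished
elapsed_s = 4                         # over this many seconds
throughput = completed / elapsed_s    # processes completed per time unit
```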

4. What is turnaround time

From the point of view of a particular process, the important criterion is how long it takes
to execute that process. The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to
get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
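The components listed above simply add up. The timings below are hypothetical, chosen only to show the arithmetic:

```python
# Illustrative timings for one process (milliseconds)
wait_for_memory  = 5    # waiting to get into memory
ready_queue_wait = 10   # waiting in the ready queue
cpu_execution    = 12   # executing on the CPU
io_time          = 3    # doing I/O

# Turnaround time = submission-to-completion interval
turnaround = wait_for_memory + ready_queue_wait + cpu_execution + io_time
```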

5. Define Race condition


A situation where several processes access and manipulate the same data concurrently
and the outcome of the execution depends on the particular order in which the access takes
place, is called a race condition.
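A minimal Python sketch of how such a race is avoided: two threads increment a shared counter, and the lock forces the read-modify-write to happen one thread at a time. Without the lock, the interleaved updates could be lost and the final value would depend on timing:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: one thread at a time
            counter += 1    # read-modify-write is now effectively atomic

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, counter is deterministically 200000.
```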

6. What is the critical-section problem?


 When more than one process accesses the same code segment, that segment is known
as the critical section.
 The critical section contains shared variables or resources that need to be
synchronized to maintain the consistency of data.
 In simple terms, a critical section is a group of instructions or a region of
code that must be executed atomically, such as code accessing a shared resource.

7. Give two hardware instructions and their definitions which can be used for implementing
mutual exclusion
 test_and_set(): atomically returns the current value of a Boolean lock variable and
sets it to true. A process enters its critical section only when the value returned is
false; otherwise it keeps retrying.
 compare_and_swap(): atomically compares the lock variable with an expected value
and, only if they are equal, replaces it with a new value. Mutual exclusion is achieved
by swapping the lock from 0 to 1 on entry and resetting it to 0 on exit.

8. What is semaphore

 A semaphore is an integer variable, shared among multiple processes. The main


aim of using a semaphore is process synchronization and access control for a
common resource in a concurrent environment.
 Apart from initialization, a semaphore is accessed only through two standard atomic
operations: wait() and signal().
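Python's threading.Semaphore maps directly onto this definition: acquire() plays the role of wait() and release() plays the role of signal(). The sketch below (worker, peak, and the thread counts are illustrative) limits a "resource" to two concurrent users:

```python
import threading

sem = threading.Semaphore(2)     # at most 2 threads in the resource at once
in_use = 0                       # how many threads hold the resource now
peak = 0                         # highest concurrency observed
guard = threading.Lock()         # protects the two counters above

def worker():
    global in_use, peak
    sem.acquire()                # wait(): decrement; block if already 0
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    # ... use the shared resource here ...
    with guard:
        in_use -= 1
    sem.release()                # signal(): increment; wake one waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```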

9. Define swapping

 A process must be in memory to be executed. A process, however, can be swapped


temporarily out of memory to a backing store and then brought back into memory for
continued execution.
 Swapping makes it possible for the total physical address space of all processes to
exceed the real physical memory of the system, thus increasing the degree of
multiprogramming in a system.

10. What is virtual memory?


 Virtual memory is a technique that allows the execution of processes that are not
completely in memory.
 One major advantage of this scheme is that programs can be larger than physical
memory.

 Further, virtual memory abstracts main memory into an extremely large, uniform
array of storage, separating logical memory as viewed by the user from physical
memory.

Part-B

1.Describe in Detail any two Scheduling Algorithms

Six common process scheduling algorithms are:


1)First Come First Serve (FCFS),
2) Shortest-Job-First (SJF) Scheduling,
3) Shortest Remaining Time,
4) Priority Scheduling,
5) Round Robin Scheduling,
6) Multilevel Queue Scheduling.
1)First Come First Serve (FCFS)

First Come First Serve (FCFS) is an operating system scheduling algorithm that
executes queued requests and processes in the order of their arrival. It is the easiest
and simplest CPU scheduling algorithm: the process that requests the CPU first is
allocated the CPU first. This is managed with a FIFO queue.
As the process enters the ready queue, its PCB (Process Control Block) is linked with the
tail of the queue and, when the CPU becomes free, it should be assigned to the process at the
beginning of the queue.

Characteristics of FCFS method

 It is a non-preemptive scheduling algorithm: once a process is allocated the CPU, it
keeps it until it terminates or blocks.


 Jobs are always executed on a first-come, first-serve basis.
 It is easy to implement and use.
 This method is poor in performance, and the general wait time is quite high.

Example of FCFS scheduling


A real-life example of the FCFS method is buying a movie ticket on the ticket counter. In
this scheduling algorithm, a person is served according to the queue manner. The person who
arrives first in the queue first buys the ticket and then the next one. This will continue until the
last person in the queue purchases the ticket. Using this algorithm, the CPU process works in a
similar manner.

Advantages of FCFS

Here, are pros/benefits of using FCFS scheduling algorithm:

 It is the simplest form of a CPU scheduling algorithm.
 It is easy to program.
 Every job is served on a first-come, first-served basis.

Disadvantages of FCFS
Here, are cons/ drawbacks of using FCFS scheduling algorithm:

 It is a Non-Preemptive CPU scheduling algorithm, so after the process has been allocated
to the CPU, it will never release the CPU until it finishes executing.
 The Average Waiting Time is high.
 Short processes that are at the back of the queue have to wait for the long process at the
front to finish.
 Not an ideal technique for time-sharing systems.
 Because of its simplicity, FCFS is not very efficient.
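The behaviour described above can be simulated in a few lines of Python. The process set (bursts of 24, 3, and 3 time units, all arriving at time 0) is an illustrative example, and it makes the high-average-waiting-time drawback concrete: the two short jobs wait behind the long one.

```python
def fcfs(processes):
    """Simulate FCFS. processes: list of (name, arrival, burst) tuples.
    Returns a list of (name, waiting_time, turnaround_time)."""
    clock, stats = 0, []
    # Stable sort keeps submission order for equal arrival times (FIFO).
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(clock, arrival)   # CPU may sit idle until arrival
        finish = start + burst
        stats.append((name, start - arrival, finish - arrival))
        clock = finish
    return stats

stats = fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)])
avg_wait = sum(w for _, w, _ in stats) / len(stats)
```

With this ordering the waiting times are 0, 24, and 27, for an average of 17; scheduling the short jobs first would cut the average sharply, which is exactly the weakness FCFS has and SJF fixes.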

2. Explain in detail about Thread Scheduling

Thread scheduling involves two boundaries of scheduling:


1. Scheduling of Kernel-Level Threads by the system scheduler.
2. Scheduling of User-Level Threads or ULT to Kernel-Level Threads or KLT by using
Lightweight process or LWP.
Lightweight process (LWP):
 A lightweight process (LWP) is a kernel-visible construct that acts as an interface for
the user-level threads to access the physical CPU resources.
 The number of lightweight processes depends on the type of application: for an
I/O-bound application the number of LWPs depends on the number of user-level
threads, while for a CPU-bound application each thread is connected to a separate
kernel-level thread.
 The first boundary of thread scheduling goes beyond specifying the scheduling
policy and the priority; it requires two further controls to be specified for the
user-level threads:

1. Contention scope – Contention scope defines the extent to which contention takes place,
where contention refers to the competition among the ULTs to access the KLTs.
Contention scope can be further classified into Process Contention Scope
(PCS) and System Contention Scope (SCS).

 Process Contention Scope: Process Contention Scope is when the contention takes
place in the same process.

 System contention scope (SCS): System Contention Scope refers to the contention that
takes place among all the threads in the system.
2. Allocation domain – The allocation domain is a set of multiple (or single) resources for
which a thread is competing.
Advantages of PCS over SCS:

The advantages of PCS over SCS are as follows:


1. It is cheaper.
2. It helps reduce system calls and achieve better performance.
3. If an SCS thread is part of more than one allocation domain, the system has to
handle multiple interfaces.
4. A PCS thread can share one or more of the available LWPs, while every SCS thread
needs a separate LWP and therefore a separate KLT.

3.Explain in detail about swapping

Swapping is a memory management scheme in which any process can be temporarily


swapped from main memory to secondary memory so that the main memory can be made
available for other processes. It is used to improve main memory utilization. In secondary
memory, the place where the swapped-out process is stored is called swap space.

The purpose of swapping in an operating system is to access data present on the hard
disk and bring it into RAM so that the application programs can use it. The thing to remember is
that swapping is used only when the data is not present in RAM.

Although the process of swapping affects the performance of the system, it helps to run
larger processes, and more of them, concurrently. This is why swapping is sometimes also
described as a technique for memory compaction.

The concept of swapping is divided into two operations: swap-out and swap-in.

o Swap-out removes a process from RAM and writes it to the hard disk.
o Swap-in reads a swapped-out process from the hard disk and puts it back into
main memory (RAM).

Example: Suppose a user process of size 2048 KB is swapped to a standard hard disk with a
data transfer rate of 1 MB (1024 KB) per second. The time to transfer the process from main
memory to secondary memory is 2048 KB ÷ 1024 KB/s = 2 seconds, so a full swap (swap-out
plus swap-in) takes about 4 seconds.
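The arithmetic of this example, written out (the 1 MB/s rate is taken as 1024 KB/s):

```python
process_size_kb     = 2048
transfer_rate_kb_s  = 1024      # 1 MB per second

swap_out_s  = process_size_kb / transfer_rate_kb_s   # time to swap out
round_trip_s = 2 * swap_out_s                        # swap-out + swap-in
```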

Advantages of Swapping

1. It helps the CPU to manage multiple processes within a single main memory.

2. It helps to create and use virtual memory.


3. Swapping allows the system to keep more runnable processes than fit in memory at
once, so processes do not have to wait as long before they are executed.
4. It improves the main memory utilization.

Disadvantages of Swapping

1. If the computer system loses power during substantial swapping activity, the user may
lose all information related to the program.
2. If the swapping algorithm is not good, the number of page faults can increase and the
overall processing performance can decrease.
