
OS T1 Model Solution - June 1 2023

Q 1 A) What is the purpose of interrupts? How does an interrupt differ from a trap?
Interrupts and traps are both mechanisms used in computer systems to handle
exceptional events. However, they serve different purposes and are triggered in
different ways.

Interrupts:
Interrupts are signals sent to the processor by external devices or internal
mechanisms to request attention or notify the processor of an event that needs to
be handled. The purpose of interrupts is to interrupt the normal execution flow of a
program and transfer control to a specific interrupt handler or interrupt service
routine (ISR). Interrupts are typically used for time-critical or asynchronous
events, such as device signals, I/O completion, timer expirations, or other
external signals.
When an interrupt occurs, the processor saves its current state and transfers control
to the appropriate interrupt handler, which is a predefined routine responsible for
handling that specific interrupt. Once the interrupt handler completes its execution,
the processor resumes the interrupted program from where it left off.
Interrupts are classified into different types, such as hardware interrupts (e.g.,
keyboard input, mouse input, disk I/O), software interrupts (e.g., system calls), and
exceptions (e.g., divide-by-zero, page faults).
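
For illustration, a rough user-level analogue of a timer interrupt on a POSIX system
is sketched below (the handler name, the tick counter, and the use of signal() and
alarm() are illustrative choices, not part of the question): SIGALRM arrives
asynchronously, diverts control to a handler, and then the main flow resumes.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Rough user-level analogue of a timer interrupt: the kernel's timer
       interrupt ultimately results in SIGALRM being delivered to this process
       asynchronously, diverting control to the handler before the main flow
       resumes. */
    static volatile sig_atomic_t ticks = 0;

    static void on_alarm(int sig)
    {
        (void)sig;
        ticks = ticks + 1;   /* keep the handler short, like a real ISR */
        alarm(1);            /* re-arm the "timer interrupt"            */
    }

    int main(void)
    {
        signal(SIGALRM, on_alarm);  /* register the handler (ISR analogue) */
        alarm(1);                   /* first "interrupt" after 1 second    */
        while (ticks < 3)
            pause();                /* main flow is interrupted, then resumes */
        printf("handled %d timer signals\n", (int)ticks);
        return 0;
    }
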
Traps:
Traps, also known as exceptions or software interrupts, are synchronous events that
occur during the execution of a program. Unlike interrupts, traps are intentionally
triggered by executing specific instructions or encountering certain conditions
within the program itself. Traps are typically used for error handling or for executing
privileged instructions that require a transition to a higher privilege level.

When a trap occurs, the processor saves its current state and transfers control to the
corresponding trap handler. The trap handler can perform tasks such as error
handling, exception processing, or executing a specific routine associated with the
trap. After the trap handler completes its execution, control is returned to the point
in the program where the trap was triggered.

Traps are used for a variety of purposes, including division-by-zero errors,
memory-protection violations, debugging, system calls, and software breakpoints.
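
For illustration, a divide-by-zero can be observed from user space on a POSIX system
as a synchronous trap: the kernel's trap handler delivers SIGFPE to the process that
caused the fault. The sketch below is illustrative (handler name and message are not
from the question).

    #include <signal.h>
    #include <unistd.h>

    /* A divide-by-zero is a synchronous trap: it is caused by the program's own
       instruction. The kernel's trap handler turns it into SIGFPE for the
       offending process. */
    static void on_fpe(int sig)
    {
        (void)sig;
        static const char msg[] = "caught divide-by-zero trap (SIGFPE)\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
        _exit(1);  /* returning from SIGFPE here would re-execute the trap */
    }

    int main(void)
    {
        signal(SIGFPE, on_fpe);
        volatile int zero = 0;
        volatile int result = 1 / zero;   /* triggers the trap */
        (void)result;
        return 0;
    }
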
Q 1B) Justify: "Frequent switching amongst CPU jobs is the principal theme of a
time-sharing operating system."

The statement can be justified based on the fundamental principles and objectives
of time-sharing systems as follows:

Maximizing CPU utilization: Time-sharing operating systems aim to maximize CPU
utilization by allowing multiple users or processes to share the CPU's processing
time. By frequently switching among CPU jobs, the operating system ensures that each
user or process gets a fair and efficient allocation of CPU time. This approach
enables better utilization of computing resources, as idle CPU time is minimized.

Interactive response: Time-sharing systems prioritize providing interactive response
to users. By quickly switching among CPU jobs, the operating system ensures that
users receive prompt feedback for their input or requests. This responsiveness is
crucial for applications such as command-line interfaces, interactive editing, and
real-time interactions with the system.

Time slicing: Time-sharing systems typically employ a technique called time slicing
or time quantum, where each process is allocated a small time slice to execute
before switching to another process. This time-slicing mechanism allows the
operating system to provide the illusion of concurrent execution and fairness among
processes, even though the CPU is executing them sequentially (a minimal sketch of
this idea appears after this list).

Multitasking: Time-sharing systems support multitasking, allowing multiple processes
to run concurrently. By rapidly switching among CPU jobs, the operating system
enables the execution of multiple processes in an interleaved manner, giving the
appearance of parallelism. This enables efficient utilization of system resources
and enhances productivity by allowing users to run multiple tasks simultaneously.

Fairness and resource sharing: Time-sharing systems emphasize fair allocation of
computing resources among users or processes. By frequently switching among CPU
jobs, the operating system ensures that no single job or user monopolizes the CPU
for an extended period. Fairness in resource allocation is essential to maintain
system stability and prevent individual users or processes from adversely affecting
others.
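
To make the time-slicing point above concrete, the following minimal, self-contained
sketch gives each job at most one quantum per turn; the quantum and burst times are
invented example values, not data from the question paper.

    #include <stdio.h>

    /* Minimal round-robin time-slicing sketch (illustrative only): each job gets
       at most QUANTUM units of "CPU" per turn, then the scheduler moves on to the
       next job, so no job monopolizes the CPU. */
    #define QUANTUM 2

    int main(void)
    {
        int remaining[] = {5, 3, 8};   /* remaining burst time of each job */
        int n = sizeof remaining / sizeof remaining[0];
        int done = 0, time = 0;

        while (done < n) {
            for (int j = 0; j < n; j++) {
                if (remaining[j] == 0)
                    continue;                          /* job already finished */
                int slice = remaining[j] < QUANTUM ? remaining[j] : QUANTUM;
                printf("t=%2d: job %d runs for %d unit(s)\n", time, j, slice);
                time += slice;                         /* job uses its time slice */
                remaining[j] -= slice;
                if (remaining[j] == 0) {
                    done++;                            /* job completed */
                    printf("t=%2d: job %d finishes\n", time, j);
                }
            }
        }
        return 0;
    }
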
Q 2 A)

i) SJF non-preemptive [0.5 M]

ii) Solve using the answer identified in part (i), i.e. SJF non-preemptive [2 M]

iii) Solution using Round Robin [1 M]


Q 2B)
In an operating system, a Process Control Block (PCB), also known as a Task Control
Block (TCB), is a data structure that stores the relevant information about a specific
process or task. The PCB plays a crucial role in managing and controlling processes
within an operating system. Its primary purpose is to facilitate the context switching
between processes, allowing the operating system to efficiently manage and
execute multiple processes concurrently.

The PCB contains various pieces of information associated with a process, including:
1. Process state: This field indicates the current state of the process, such as
running, ready, waiting, or terminated. The operating system relies on this
information to determine which processes are ready for execution and which
ones need to wait for certain events or resources.
2. Program counter (PC): The PC stores the address of the next instruction to be
executed for the given process. When a context switch occurs, the current value
of the PC is saved in the PCB of the outgoing process, and the saved value from
the PCB of the incoming process is loaded into the PC, allowing the system to
resume execution from the appropriate point.
3. CPU registers: The PCB contains a snapshot of the CPU registers for the
corresponding process. This includes general-purpose registers, stack pointers,
and other special-purpose registers. Saving and restoring these register values
during a context switch ensures that the process can resume its execution
accurately.
4. Memory management information: This field holds information about the
memory allocation and memory usage of the process. It includes details such as
the base and limit registers for the process's memory segments or address
spaces, which are necessary for memory protection and address translation.
5. Process identification: Each process is assigned a unique process identifier
(PID) by the operating system. The PCB stores this PID along with other
identification details to track and manage processes effectively.
6. Process scheduling information: The PCB may include data related to the
process's scheduling priority, scheduling queues, and any other scheduling-
related parameters. This information assists the operating system's scheduler in
making decisions about process prioritization and execution.

During a context switch, the operating system saves the current PCB of the running
process and loads the PCB of the next process to be executed. This switch involves
updating the necessary fields in the PCB, such as the program counter and CPU
registers, to ensure a seamless transition between processes. By utilizing the PCB,
the operating system can efficiently switch between processes, allowing for
multitasking and the illusion of concurrent execution.
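
As an illustration, a PCB can be sketched as a C structure; the field names and
sizes below are hypothetical and greatly simplified compared to a real kernel's
equivalent (for example, Linux's task_struct). The comments map each field to the
numbered items above.

    /* Illustrative, simplified sketch of a Process Control Block. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int            pid;              /* 5. process identification            */
        proc_state_t   state;            /* 1. process state                     */
        unsigned long  program_counter;  /* 2. saved PC of the next instruction  */
        unsigned long  registers[16];    /* 3. saved CPU register snapshot       */
        unsigned long  mem_base;         /* 4. memory-management information:    */
        unsigned long  mem_limit;        /*    base/limit of the address space   */
        int            priority;         /* 6. scheduling information            */
        struct pcb    *next;             /*    link in a ready/waiting queue     */
    } pcb_t;
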

Q.3 (a) How can the TestAndSet instruction be used to achieve mutual exclusion?
-- The critical-section problem could be solved simply in a uniprocessor environment
if we could forbid interrupts from occurring while a shared variable is being
modified. In this manner, we could be sure that the current sequence of instructions
would be allowed to execute in order without preemption. No other instructions would
be run, so no unexpected modifications could be made to the shared variable.
-- This approach is not feasible in a multiprocessor environment. Disabling
interrupts on a multiprocessor can be time-consuming, as the message must be passed
to all the processors. This message passing delays entry into each critical section,
and system efficiency decreases. Also, consider the effect on a system's clock if
the clock is kept updated by interrupts.
-- Many machines therefore provide special hardware instructions that allow us
either to test and modify the content of a word or to swap the contents of two words
atomically, that is, as one uninterruptible unit. We can use these special
instructions to solve the critical-section problem in a relatively simple manner.
Mutual-exclusion implementation with TestAndSet:
The TestAndSet instruction can be defined as sketched below (Figure 7.6 in the
textbook). The important characteristic is that this instruction is executed
atomically. Thus, if two TestAndSet instructions are executed simultaneously (each
on a different CPU), they will be executed sequentially in some arbitrary order.
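
A sketch of the usual textbook-style definition and of a mutual-exclusion loop built
on it follows. The atomicity is guaranteed by the hardware instruction; the plain C
below only conveys the logic, and the shared variable name lock is illustrative.

    /* Sketch of the textbook-style TestAndSet; on real hardware the whole
       operation executes atomically. */
    typedef int boolean;
    #define TRUE  1
    #define FALSE 0

    boolean TestAndSet(boolean *target)
    {
        boolean rv = *target;   /* remember the old value             */
        *target = TRUE;         /* unconditionally set it to TRUE     */
        return rv;              /* caller learns whether it was free  */
    }

    boolean lock = FALSE;       /* shared lock, initially FALSE       */

    void process(void)
    {
        while (TRUE) {
            while (TestAndSet(&lock))
                ;               /* busy-wait until the old value was FALSE */
            /* critical section */
            lock = FALSE;       /* release the lock                   */
            /* remainder section */
        }
    }
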

Q 3 (b) Consider the following synchronization construct used by the processes:

Here, value1 and value2 are shared variables. Which of the requirements for a
solution to the critical-section problem (mutual exclusion, progress, and bounded
waiting) are satisfied by the above construct? Explain.
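
The construct itself appeared as a figure in the question paper and is not
reproduced here; the sketch below reconstructs it from the explanation that follows
(each process raises its own shared flag and then spins on the other's; the function
layout is illustrative).

    /* Shared flags, both initialized to false. */
    volatile int value1 = 0, value2 = 0;

    void P1(void)
    {
        value1 = 1;            /* value1 = true                     */
        while (value2)
            ;                  /* wait while P2 has raised its flag */
        /* critical section */
        value1 = 0;            /* value1 = false                    */
    }

    void P2(void)
    {
        value2 = 1;            /* value2 = true                     */
        while (value1)
            ;                  /* wait while P1 has raised its flag */
        /* critical section */
        value2 = 0;            /* value2 = false                    */
    }
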

• Mutual exclusion is satisfied.
• Progress is not satisfied.
• Bounded waiting is satisfied.

(Mutual exclusion holds because no two processes can be in the critical section at
the same time; if one process is in the critical section, the other has to wait.
Assume P1 is in the critical section (which means value1 = true, while value2 can be
anything, true or false). The while loop then ensures that P2 will not enter the
critical section, and vice versa. This satisfies the property of mutual exclusion.
Bounded waiting is also satisfied, as there is a bound on the number of times other
processes can enter the critical section after a process has requested access to
it.)

Two processes, P1 and P2, need to access a critical section of code. Here, value1
and value2 are shared variables, which are initialized to false.
Now, when both value1 and value2 become true, both processes P1 and P2 enter their
while loops and wait for each other to finish. These while loops run indefinitely,
which leads to deadlock.
Progress is not satisfied here because if both processes execute their first
statements and are then preempted, each waits for the other to release its flag,
which will never happen without one of them entering the critical section. They
therefore wait indefinitely, so there is no progress. This situation is called
"deadlock".
