
COMSATS University Islamabad

Sahiwal Campus

Operating Systems

Assignment # 02

Submitted To:
Ms. Shaheen Kousar

Submitted By:
Ameer Hamza

Registration No:
FA23-BSE-157

BS (Software Engineering)
DEPARTMENT OF COMPUTER SCIENCE

COMSATS UNIVERSITY ISLAMABAD


SAHIWAL CAMPUS

Question 1:
Examine how preemptive and non-preemptive scheduling affects real-time operating
system performance. Analyze their pros and cons, considering response time, waiting time,
and CPU utilization.

Preemptive and non-preemptive scheduling are two different approaches to task management in
operating systems, particularly in real-time systems.

1. Preemptive Scheduling:

In preemptive scheduling, a running task can be interrupted if a higher-priority task arrives. This ensures that the most urgent tasks get CPU time quickly, which is crucial for real-time systems.

o Pros:
 Better Response Time: Higher-priority tasks are executed immediately,
improving the response time.
 Efficient CPU Utilization: The CPU is never idle if there are tasks
waiting, as it can switch to another task when necessary.
o Cons:
 Increased Overhead: Frequent switching between tasks can lead to more overhead in terms of time spent on context switching.
 Complexity: Managing task priorities and frequent preemptions can make
the system more complex.
2. Non-preemptive Scheduling:

In non-preemptive scheduling, a running task is allowed to finish before another task can take over the CPU, regardless of priority.

o Pros:
 Less Overhead: Since tasks are not interrupted, the system does not need
to perform as many context switches.
 Simplicity: The scheduling is simpler to implement as tasks are completed
in the order they start.
o Cons:
 Poor Response Time: Higher-priority tasks may have to wait for lower-
priority tasks to finish, increasing their response time.
 CPU Underutilization: While a long-running task is executing, the CPU cannot be reassigned to urgent tasks, so it is not being used for the most time-critical work at that moment.

In real-time systems, preemptive scheduling is often preferred as it ensures that urgent tasks meet
their deadlines. However, non-preemptive scheduling may be suitable in simpler systems or
when the overhead of context switching is too high.
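To make this comparison concrete, the short Python sketch below simulates both policies on a small, made-up task set; the task names, arrival times, burst times, and priorities are illustrative assumptions only (a lower priority number means a more urgent task). It prints each task's response time and waiting time under preemptive and non-preemptive priority scheduling.

# A minimal sketch comparing preemptive and non-preemptive priority
# scheduling on a hypothetical task set. Lower priority number = more urgent.
TASKS = {              # name: (arrival time, burst time, priority)
    "T1": (0, 7, 3),   # long, low-priority task that starts first
    "T2": (2, 4, 1),   # urgent task arriving while T1 runs
    "T3": (4, 1, 2),   # short, medium-priority task
}

def simulate(preemptive):
    remaining = {name: burst for name, (arr, burst, prio) in TASKS.items()}
    first_run, finish = {}, {}
    time, current = 0, None
    while remaining:
        ready = [n for n in remaining if TASKS[n][0] <= time]
        if not ready:
            time += 1          # CPU idle until the next arrival
            continue
        if preemptive or current not in remaining:
            # choose the highest-priority ready task (lowest priority number)
            current = min(ready, key=lambda n: TASKS[n][2])
        first_run.setdefault(current, time)
        remaining[current] -= 1    # run the chosen task for one time unit
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
            current = None
    for name, (arr, burst, prio) in TASKS.items():
        print(f"  {name}: response = {first_run[name] - arr}, "
              f"waiting = {finish[name] - arr - burst}")

print("Preemptive priority scheduling:")
simulate(preemptive=True)
print("Non-preemptive priority scheduling:")
simulate(preemptive=False)

With preemption, the urgent tasks T2 and T3 get response times of 0 and 2 time units, while without preemption they must wait 5 and 7 time units for T1 to finish; the price of preemption is that the low-priority T1 completes later. CPU utilization is the same in this small example because the workload keeps the CPU busy throughout, but in practice the extra context switches caused by preemption consume some CPU time, as discussed in Question 2.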

Question 2:
Explain how context switching affects CPU performance and identify the key factors
contributing to its overhead.

Context switching occurs when the CPU switches from executing one task to another. It involves
saving the state of the current task and loading the state of the next task.

 Impact on CPU Performance:
o CPU Time Loss: During a context switch, the CPU stops executing the actual
tasks and instead works on saving and loading the states of tasks. This leads to a
slight loss of CPU time, which could have been used to execute instructions.
o Memory Access Delays: A context switch often displaces cached data, so the incoming task must reload it from slower main memory, further reducing performance.
 Key Factors Contributing to Overhead:
o Frequency of Switching: The more often the CPU switches between tasks, the
more overhead is introduced, as each switch takes time.
o Size of Task State: If tasks have large states (e.g., more data to save/load),
context switching takes longer.
o Hardware Support: Some CPUs have better support for efficient context
switching, while others may incur more overhead.

While context switching is necessary for multitasking, frequent switching or inefficient handling
of task states can negatively impact CPU performance.
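As a rough, back-of-envelope illustration of the frequency factor, the Python sketch below estimates the fraction of CPU time lost to context switching; the per-switch cost of 5 microseconds and the switch rates are assumed example values, since real costs depend on the CPU, cache state, and operating system.

# A rough model of context-switch overhead. All numbers are illustrative
# assumptions, not measurements.
SWITCH_COST_US = 5.0   # assumed direct cost of one switch, in microseconds

def overhead_fraction(switch_cost_us, switches_per_sec):
    """Fraction of each second spent saving and restoring task state."""
    lost_us_per_second = switch_cost_us * switches_per_sec
    return lost_us_per_second / 1_000_000   # one second = 1,000,000 microseconds

for rate in (100, 1_000, 10_000, 100_000):
    pct = overhead_fraction(SWITCH_COST_US, rate) * 100
    print(f"{rate:>7} switches/s -> {pct:5.2f}% of CPU time spent on switching")

Under these assumptions the overhead is negligible at a few hundred switches per second (about 0.05%) but grows to 5% at 10,000 switches per second and 50% at 100,000, which is why a scheduler that preempts too aggressively, or tasks with large states to save and reload, can visibly reduce the CPU time left for useful work.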
