Apex Institute of Technology: Department of Computer Science & Engineering
OPERATING SYSTEM
Faculty: Ms. Anudeep Kaur (E16380)
CO2: Understand process and disk scheduling algorithms and apply the best scheduling algorithm for a given problem instance.
CO3: Apply an algorithm for process scheduling, deadlock avoidance and file management.
CO4: Identify and analyze memory management techniques and apply them to memory-management-related problems.
Operating System Overview: Introduction to Operating Systems, Operating System Structure, Main Functions and Characteristics of Operating Systems, Types of Operating Systems, System Calls, Types of System Calls, System Programs, Reentrant Kernels, Monolithic and Microkernel Systems.
Process Management: Process Concept, Process Control Block, Process Scheduling, Threads, CPU Scheduling: Preemptive/Non-Preemptive Scheduling, Scheduling Criteria, Scheduling Algorithms (FCFS, SJF, RR), Priority Scheduling, Real-Time Scheduling.
Definition: A spinlock is a type of busy-waiting semaphore where a thread repeatedly checks if the lock is
available in a tight loop. The thread "spins" until the lock becomes available.
A spinlock is a locking mechanism in which a thread that cannot acquire the lock simply waits in a loop ("spins"), repeatedly checking until the lock becomes available. Because the waiting thread never sleeps, a spinlock should be held only for a very short time. Spinlocks are most useful on multiprocessor systems, where the lock holder can run on another CPU and release the lock quickly.
We use Spinlocks When:
•The critical section is very short.
•Lock contention is expected to be low.
•The overhead of context switching is greater than the cost of busy-waiting.
•You are in a real-time system where response time is critical.
Spinlock (Busy-waiting)
Performance:
•Low Overhead: Spinlocks have low overhead in terms of context switches because the thread does not relinquish the CPU
while waiting.
•Short Wait Times: Spinlocks are efficient when the expected wait time is short, as they avoid the overhead of putting a
thread to sleep and waking it up.
•CPU Utilization: Spinlocks can be wasteful in terms of CPU utilization. While waiting, the thread consumes CPU cycles
without doing useful work.
Disadvantages:
•High CPU Usage: Can waste CPU cycles on busy-waiting, especially if the lock is held for long periods.
•Poor Scalability: Not suitable for high contention scenarios or longer critical sections.
•Resource Contention: Can lead to performance degradation if many threads are spinning.
Suitable Scenarios
Spinlocks are most suitable in scenarios where:
•Short Critical Sections: The code within the critical section executes very quickly.
•Low Contention: There is minimal contention for the lock.
•Real-Time Requirements: Response time is critical, and the overhead of blocking and context switching must be minimized.
Blocking (Sleep-Wait) Semaphore
A blocking semaphore, also known as a sleep-wait semaphore, is a synchronization primitive used to manage
concurrent access to shared resources by putting threads to sleep when they cannot immediately proceed. This
approach is efficient for scenarios with longer wait times and higher contention, as it frees up the CPU to
perform other tasks rather than wasting cycles on busy-waiting.
Disadvantages:
•Higher Overhead: The cost of context switching and maintaining wait queues can be significant.
•Complexity: More complex to implement and manage due to kernel-level interactions.
Suitable Scenarios
Blocking (sleep-wait) semaphores are most suitable in scenarios where:
•High Contention: Multiple threads frequently compete for the same resource.
•Long Wait Times: Threads may need to wait for significant periods before accessing the resource.
•Resource Efficiency: Efficient use of CPU and other system resources is critical.
Comparison
Aspect           | Busy-Waiting (Spinlock)                        | Blocking (Sleep-Wait)
CPU Utilization  | High (wastes CPU cycles while spinning)        | Low (frees the CPU for other tasks)
Scalability      | May lead to contention and performance issues  | Scales better by allowing other processes to run
Cache Effects    | Cache-friendly for short critical sections     | Less dependent on cache