Ch. 4 Operating System Services (Part 4)
By Dr. Veejya Kumbhar
Ch. 4 Operating System Services
Syllabus
• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks
• Thread Management
• Memory Management
• System Calls
• File System
Deadlocks
Examples Of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each
needs another one.
2. Semaphores A and B are initialized to 1; processes P0 and P1 reach deadlock as follows:
• P0 executes wait(A) and is then preempted.
• P1 executes wait(B).
• P0 now executes wait(B) and P1 executes wait(A): each waits for a semaphore held by the other, so P0 and P1 are in deadlock. A sketch of this scenario in code follows below.
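A minimal sketch of this scenario in C (POSIX semaphores and pthreads; the thread names and structure are illustrative, not from the slides): with the unlucky interleaving described above, each thread blocks on the semaphore the other holds.

/* Sketch of the wait(A)/wait(B) deadlock with POSIX semaphores.
   Compile with: gcc deadlock.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t A, B;                        /* both initialized to 1, as in the example */

void *p0(void *arg) {
    sem_wait(&A);                  /* P0 acquires A ...                        */
    /* ... P0 is preempted here; P1 runs and acquires B ...                    */
    sem_wait(&B);                  /* P0 blocks waiting for B                  */
    sem_post(&B);
    sem_post(&A);
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&B);                  /* P1 acquires B ...                        */
    sem_wait(&A);                  /* P1 blocks waiting for A: deadlock        */
    sem_post(&A);
    sem_post(&B);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&A, 0, 1);
    sem_init(&B, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);        /* with the deadlocking interleaving, never returns */
    pthread_join(t1, NULL);
    printf("finished without deadlock (lucky scheduling)\n");
    return 0;
}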
Deadlocks
3. Assume that 200K bytes of memory are available for allocation, and the following sequence of allocation requests occurs.
Deadlock can arise if the following four conditions hold simultaneously (necessary conditions):
1. Mutual exclusion: at least one resource must be held in a non-sharable mode.
2. Hold and wait: a process holds at least one resource while waiting for additional resources.
3. No preemption: a resource can be released only voluntarily by the process holding it.
4. Circular wait: a set of processes exists in which each process waits for a resource held by the next.
Prevention:
The idea is to never let the system enter a deadlock state. The system ensures that at least one of the four necessary conditions above can never hold. These techniques are costly, so they are used when keeping the system deadlock-free is the top priority.
Prevention works by negating one of the above-mentioned necessary conditions for deadlock, and can be done in four different ways (a lock-ordering sketch for the circular-wait solution follows below):
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Solve circular wait (e.g., by imposing a global ordering on resources)
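As a concrete illustration of the circular-wait solution, one common technique (a sketch assuming two mutex-protected resources; all names are illustrative) is to impose a global order on resources and make every thread acquire them in that order:

/* Sketch: preventing circular wait by always locking resources in one
   fixed global order (R1 before R2), regardless of which thread runs. */
#include <pthread.h>

pthread_mutex_t lock_r1 = PTHREAD_MUTEX_INITIALIZER;   /* resource R1 */
pthread_mutex_t lock_r2 = PTHREAD_MUTEX_INITIALIZER;   /* resource R2 */

void *worker(void *arg) {
    pthread_mutex_lock(&lock_r1);      /* every thread takes R1 first...        */
    pthread_mutex_lock(&lock_r2);      /* ...then R2, so no wait cycle can form */
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_r2);
    pthread_mutex_unlock(&lock_r1);
    return NULL;
}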
Deadlock prevention or avoidance
Avoidance:
Avoidance looks ahead. The strategy rests on one assumption: all information about the resources a process will need must be known before the process begins execution. With that information, the system grants a request only if it leaves the system in a safe state; the Banker's algorithm (due to Dijkstra) is used for this check.
In both prevention and avoidance we get correctness, but performance decreases.
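A compact sketch of the safety check at the heart of the Banker's algorithm (the process count, resource count, and matrices below are illustrative assumptions, not from the slides): a state is safe if some order exists in which every process can obtain its remaining needs and finish.

/* Sketch of the Banker's algorithm safety test for N processes and M
   resource types; all data below are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define N 3   /* processes      */
#define M 2   /* resource types */

bool is_safe(int available[M], int alloc[N][M], int need[N][M]) {
    int work[M];
    bool finished[N] = { false };
    for (int j = 0; j < M; j++) work[j] = available[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                 /* pretend process i runs to completion */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;     /* no process can finish: unsafe state */
    }
    return true;                           /* a safe sequence exists */
}

int main(void) {
    int available[M] = { 3, 2 };
    int alloc[N][M]  = { {0, 1}, {2, 0}, {1, 1} };
    int need[N][M]   = { {2, 2}, {1, 2}, {0, 0} };
    printf("state is %s\n", is_safe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}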
2) Deadlock detection and recovery:
Here the system is allowed to enter a deadlock state; deadlocks are detected (for example, by searching a wait-for graph for cycles) and then broken by aborting processes or preempting resources.
• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks ✓
• Thread Management
• Memory Management
• System Calls
• File System
Thread in Operating System
The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.
Threads are not independent of one another like processes are, and as a result
threads share with other threads their code section, data section, and OS resources
(like open files and signals).
But, like a process, a thread has its own program counter (PC), register set, and stack space.
Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads and one thread completes its work, its output can be returned immediately without waiting for the other threads to finish.
2. Faster context switch: Context switch time between threads is lower compared to process context switch. Process
context switching requires more overhead from the CPU.
3. Effective utilization of multiprocessor systems: If a process has multiple threads, they can be scheduled on multiple processors at the same time, making execution of the process faster.
4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared; each thread has its own stack and register set.
5. Communication: Communication between threads is easier because they share a common address space, whereas processes must use a specific inter-process communication mechanism to exchange data (see the sketch after this list).
6. Enhanced throughput of the system: If a process is divided into multiple threads, and each thread function is
considered as one job, then the number of jobs completed per unit of time is increased, thus increasing the
throughput of the system.
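A small sketch of points 4 and 5 (POSIX threads assumed; names are illustrative): every thread sees the same global data section, so communication is just a write to shared memory plus synchronization, while each thread keeps its own stack.

/* Sketch: threads within one process share globals (data section),
   so communication is a memory write plus synchronization.
   Compile with: gcc threads.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                            /* shared data section      */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects shared_counter  */

void *worker(void *arg) {
    int local = 0;                       /* lives on this thread's own stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);
        shared_counter++;                /* visible to every other thread    */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared_counter = %d\n", shared_counter);   /* 400000 */
    return 0;
}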
Types of Threads
User-Level Threads vs Kernel-Level Threads
2. Recognition: the operating system does not recognize user-level threads; kernel threads are recognized by the operating system.
4. Context switch time: less for user-level threads; more for kernel-level threads.
5. Hardware support: a user-level context switch requires no hardware support; a kernel-level context switch needs hardware support.
6. Blocking operation: if one user-level thread performs a blocking operation, the entire process is blocked; if one kernel-level thread blocks, another thread can continue execution.
8. Creation and management: user-level threads can be created and managed more quickly; kernel-level threads take more time to create and manage.
9. Operating system: any operating system can support user-level threads; kernel-level threads are operating-system-specific.
Ch. 4 Operating System Services
Syllabus
• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks ✓
• Thread Management ✓
• Memory Management
• System Calls
• File System
System Call
In computing, a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on.
A system call is therefore how programs interact with the operating system: a program makes a system call when it requests a service from the operating system's kernel.
System calls provide the services of the operating system to user programs via the Application Program Interface (API); they form the interface between a process and the operating system that lets user-level processes request operating-system services.
System calls are the only entry points into the kernel. All programs needing resources must use system calls.
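A tiny sketch (Linux/POSIX assumed) of a user program requesting kernel services: write() and getpid() below are C-library wrappers that trap into the kernel through the corresponding system calls.

/* Sketch: a user program asking the kernel for services via system calls. */
#include <unistd.h>
#include <stdio.h>

int main(void) {
    const char msg[] = "hello from user space\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* write system call  */
    pid_t pid = getpid();                        /* getpid system call */
    printf("my pid is %d\n", (int)pid);
    return 0;
}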
Services Provided by System Calls
1. Interface: System calls provide a well-defined interface between user programs and the operating system. Programs
make requests by calling specific functions, and the operating system responds by executing the requested service and
returning a result.
2. Protection: System calls are used to access privileged operations that are not available to normal user programs. The
operating system uses this privilege to protect the system from malicious or unauthorized access.
3. Kernel Mode: When a system call is made, the program is temporarily switched from user mode to kernel mode. In
kernel mode, the program has access to all system resources, including hardware, memory, and other processes.
4. Context Switching: A system call requires a context switch, which involves saving the state of the current process and
switching to the kernel mode to execute the requested service. This can introduce overhead, which can impact system
performance.
5. Error Handling: System calls can return error codes to indicate problems with the requested service. Programs must check for these errors and handle them appropriately (a small sketch of this appears after this list).
6. Synchronization: System calls can be used to synchronize access to shared resources, such as files or network
connections. The operating system provides synchronization mechanisms, such as locks or semaphores, to ensure that
multiple programs can access these resources safely.
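For point 5, a minimal sketch (POSIX assumed; the path is just an example) of checking a system call's return value and the error code left in errno:

/* Sketch: handling an error code returned by a system call (open). */
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("/no/such/file", O_RDONLY);
    if (fd == -1) {                               /* the call failed */
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }
    /* ... use the file ... */
    close(fd);
    return 0;
}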
Ch. 4 Operating System Services
Syllabus
• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks ✓
• Thread Management ✓
• Memory Management
• System Calls ✓
• File System
Memory Management in Operating System
Loading a process into main memory is done by a loader. There are two different types of loading:
• Static loading: the entire program is loaded into memory at a fixed address in one step. It requires more memory space, since the whole program and all of its data must be in physical memory for the process to execute, which limits the size of a process to the size of physical memory.
• Dynamic loading: to obtain better memory utilization, dynamic loading is used. A routine is not loaded until it is called; all routines reside on disk in a relocatable load format. One advantage of dynamic loading is that an unused routine is never loaded, which is especially useful when large amounts of code are needed only to handle infrequently occurring cases.
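On Linux-like systems, one way to see dynamic loading in action is dlopen/dlsym, which bring a routine into memory only when the program asks for it. The sketch below is illustrative (glibc assumed; the library name libm.so.6 and the symbol cos are just examples), and the same mechanism also underlies the stub idea used in dynamic linking, described next.

/* Sketch: loading a routine only on demand, in the spirit of dynamic loading.
   Linux/glibc assumed; compile with: gcc demo.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);    /* load the math library now */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));         /* routine was loaded on demand */

    dlclose(handle);
    return 0;
}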
Static and Dynamic linking:
To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
• Static linking: In static linking, the linker combines all necessary program modules into a
single executable program. So there is no runtime dependency. Some operating systems
support only static linking, in which system language libraries are treated like any other
object module.
• Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a "stub" is included for each library-routine reference. A stub is a small piece of code that, when executed, checks whether the needed routine is already in memory; if it is not, the program loads the routine into memory.
Swapping: