
Operating System Services
CH. 4
BY DR. VEEJYA KUMBHAR
Ch. 4 Operating System Services
Syllabus

• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks
• Thread Management
• Memory Management
• System Calls
• File System
Deadlocks

 A process in an operating system uses a resource in the following way:

1. Requests the resource
2. Uses the resource
3. Releases the resource
 A deadlock is a situation where a set of processes is blocked because each process is holding a
resource and waiting for a resource acquired by some other process.
 Consider two trains coming toward each other on a single track: once they are in front of each
other, neither can move. A similar situation occurs in operating systems when two or more
processes each hold some resources and wait for resources held by the other(s). For example, in
the diagram below, Process 1 holds Resource 1 and waits for Resource 2, which is held by
Process 2, while Process 2 waits for Resource 1.
Deadlocks

Examples Of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each
needs another one.
2. Semaphores A and B, each initialized to 1; P0 and P1 deadlock as follows:
• P0 executes wait(A) and is then preempted.
• P1 executes wait(B).
• P0 now requests wait(B) and P1 requests wait(A); each waits on a semaphore held by the other, so P0 and P1 are deadlocked.
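The semaphore scenario above can be sketched with Python's threading module. This is a minimal demo, not production code: timeouts stand in for the indefinite blocking of a real deadlock so the program terminates, and the barrier objects only force the unlucky interleaving described above.

```python
import threading

# Semaphores A and B, both initialized to 1, as in the example above.
A = threading.Semaphore(1)
B = threading.Semaphore(1)
b1, b2 = threading.Barrier(2), threading.Barrier(2)
results = {}

def p0():
    A.acquire()                             # P0 holds A
    b1.wait()                               # wait until P1 holds B
    results["p0"] = B.acquire(timeout=0.2)  # needs B: blocks, then times out
    b2.wait()                               # keep A held until P1 has timed out too
    A.release()

def p1():
    B.acquire()                             # P1 holds B
    b1.wait()
    results["p1"] = A.acquire(timeout=0.2)  # needs A: blocks, then times out
    b2.wait()
    B.release()

threads = [threading.Thread(target=f) for f in (p0, p1)]
for t in threads: t.start()
for t in threads: t.join()
print(results["p0"], results["p1"])   # False False: neither got its second semaphore
```

The second barrier matters: neither thread releases its first semaphore until both have given up, which is exactly the hold-and-wait pattern that makes the state a deadlock.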
Deadlocks

3. Assume 200K bytes of space are available for allocation, and the following sequence
of events occurs.

Deadlock occurs if both processes progress to their second request.


Deadlocks

Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions):

 Mutual Exclusion: Two or more resources are non-shareable (only one process can use a
resource at a time).
 Hold and Wait: A process is holding at least one resource and waiting for additional
resources held by other processes.
 No Preemption: A resource cannot be taken from a process unless the process releases it.
 Circular Wait: A set of processes wait for each other in circular form.
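Circular wait is the condition deadlock detectors actually test for: a cycle in the wait-for graph, where an edge p → q means process p waits for a resource held by q. A minimal sketch (the graph and process names are made up for illustration):

```python
# Depth-first search for a cycle in a wait-for graph.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in graph}

    def visit(p):
        color[p] = GRAY
        for q in graph.get(p, []):
            if color.get(q, WHITE) == GRAY:     # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in graph)

# P1 waits for P2 and P2 waits for P1 (the two-process example above): deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True
# P2 waits for nothing, so the chain can unwind: no deadlock.
print(has_cycle({"P1": ["P2"], "P2": []}))       # False
```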
Deadlock Handling

 There are three ways to handle deadlock:

 1) Deadlock prevention or avoidance
 2) Deadlock detection and recovery
 3) Deadlock ignorance
Deadlock prevention or avoidance

 Prevention:
 The idea is to never let the system enter a deadlock state: the system ensures that the
four conditions mentioned above cannot all hold at once. These techniques are costly, so
they are used when the priority is keeping the system deadlock-free.
Prevention is done by negating one of the necessary conditions for deadlock, and can be
done in four different ways:
 1. Eliminate mutual exclusion
 2. Solve hold and wait
 3. Allow preemption
 4. Break circular wait
Deadlock prevention or avoidance

 Avoidance:
Avoidance looks ahead: before a process executes, the system must know all the resources
the process will ever need. With that information, the Banker's algorithm (a gift from
Dijkstra) keeps the system in a safe state and so avoids deadlock.
 In prevention and avoidance, we get correctness of data, but performance decreases.
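A minimal sketch of the safety check at the heart of the Banker's algorithm: the system is safe if some ordering lets every process obtain its maximum claim and finish. The matrices below are illustrative numbers, not values from the slides.

```python
# Banker's safety check: available[j] = free units of resource j,
# max_need[i] = maximum claim of process i, allocation[i] = currently held.
def is_safe(available, max_need, allocation):
    n, m = len(max_need), len(available)
    work = list(available)
    finish = [False] * n
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    order = []                         # a safe completion order, if one exists
    while len(order) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # i finishes and releases all
                finish[i] = True
                order.append(i)
                break
        else:
            return False, order        # no process can proceed: unsafe state
    return True, order

# Three processes, two resource types; illustrative instance.
safe, order = is_safe(available=[3, 4],
                      max_need=[[7, 5], [3, 2], [6, 0]],
                      allocation=[[0, 1], [2, 0], [3, 0]])
print(safe, order)   # True [1, 2, 0]: P1 can finish, then P2, then P0
```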
2) Deadlock detection and recovery:

 If deadlock prevention or avoidance is not applied, deadlocks can be handled
by detection and recovery, which consists of two phases:
1. In the first phase, we examine the state of the processes and check whether there is a
deadlock in the system.
2. If a deadlock is found in the first phase, we apply an algorithm to recover from it.
 With deadlock detection and recovery, we also get correctness of data, but performance
decreases.
3) Deadlock ignorance:

 If deadlocks are very rare, simply let them happen and reboot the
system. This is the approach that both Windows and UNIX take; it is
known as the ostrich algorithm.
 With deadlock ignorance, performance is better than with the two
methods above, but correctness of data is not guaranteed.
Ch. 4 Operating System Services
Syllabus

• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks ✓
• Thread Management
• Memory Management
• System Calls
• File System
Thread in Operating System

 Within a program, a thread is a separate execution path.


 It is a lightweight process that the operating system can
schedule and run concurrently with other threads.
 The operating system creates and manages threads, and
they share the same memory and resources as the program
that created them.
 This enables multiple threads to collaborate and work
efficiently within a single program.
Why Multithreading?

 A thread is also known as a lightweight process.


 The idea is to achieve parallelism by dividing a process into
multiple threads.
For example,
 in a browser, multiple tabs can be different threads.
 MS Word uses multiple threads: one thread to format the text,
another thread to process inputs, etc.
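The browser-tab idea can be sketched with Python's threading module: several threads run inside one process and share its memory. The tab names and workloads below are invented for illustration.

```python
import threading

results = {}                     # shared by all threads in this process
lock = threading.Lock()

def render_tab(name, pages):
    total = sum(pages)           # stand-in for the per-tab work
    with lock:                   # threads share `results`, so guard writes
        results[name] = total

# One thread per "tab", all inside the same process.
tabs = [("tab1", [1, 2, 3]), ("tab2", [4, 5]), ("tab3", [6])]
threads = [threading.Thread(target=render_tab, args=t) for t in tabs]
for t in threads: t.start()
for t in threads: t.join()
print(results["tab1"], results["tab2"], results["tab3"])   # 6 9 6
```

Because the threads share one address space, they can all write into the same `results` dictionary directly; separate processes would need an explicit inter-process communication mechanism.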
Process vs Thread:

 The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.
 Threads are not independent of one another like processes are, and as a result
threads share with other threads their code section, data section, and OS resources
(like open files and signals).
 But, like a process, a thread has its own program counter (PC), register set, and stack
space.
Advantages of Thread over Process

1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be
returned immediately.
2. Faster context switch: Context switch time between threads is lower than between processes; process context
switching requires more overhead from the CPU.
3. Effective utilization of multiprocessor systems: With multiple threads in a single process, we can schedule threads on
multiple processors. This makes process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note: the
stack and registers can't be shared among threads; each thread has its own stack and registers.
5. Communication: Communication between multiple threads is easier, as the threads share a common address space,
while processes must use a specific inter-process communication technique to communicate.
6. Enhanced throughput of the system: If a process is divided into multiple threads and each thread's function is
considered one job, the number of jobs completed per unit of time increases, thus increasing the throughput of the
system.
Types of Threads

 There are two types of threads:


• User Level Thread
• Kernel Level Thread
User Level Thread Vs Kernel Level Thread

1. Implemented by: User threads are implemented by users; kernel threads are implemented by the Operating System (OS).
2. Recognition: The OS does not recognize user-level threads; kernel threads are recognized by the OS.
3. Implementation: Implementation of user threads is easy; implementation of kernel threads is complicated.
4. Context switch time: Less for user threads; more for kernel threads.
5. Hardware support: User-thread context switches require no hardware support; kernel-thread switches need hardware support.
6. Blocking operation: If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel thread blocks, another thread can continue execution.
7. Multithreading: Multithreaded applications built on user threads cannot take advantage of multiprocessing; kernels can be multithreaded.
8. Creation and management: User-level threads can be created and managed more quickly; kernel-level threads take more time to create and manage.
9. Operating system: Any operating system can support user-level threads; kernel-level threads are operating-system-specific.
Ch. 4 Operating System Services
Syllabus

• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks ✓
• Thread Management ✓
• Memory Management
• System Calls
• File System
System Call

 In computing, a system call is a programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on.
 A system call is a way for programs to interact with the operating system.
 A computer program makes a system call when it makes a request to the operating system’s kernel.
 System call provides the services of the operating system to the user programs via Application Program
Interface(API).
 It provides an interface between a process and an operating system to allow user-level processes to request
services of the operating system.
 System calls are the only entry points into the kernel. All programs needing resources must use system
calls.
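As an illustration, functions in Python's os module are thin wrappers over common system calls; the kernel does the actual work of each request below. The filename is arbitrary and the file is removed at the end.

```python
import os

pid = os.getpid()          # getpid() system call: ask the kernel for our PID
assert pid > 0

# open(), write(), close() system calls via their os-module wrappers.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"hello via a system call\n")
os.close(fd)

with open("demo.txt", "rb") as f:
    data = f.read()
print(data)                # b'hello via a system call\n'
os.remove("demo.txt")      # unlink() system call
```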
Services Provided by System Calls

1. Process creation and management


2. Main memory management
3. File Access, Directory, and File system management
4. Device handling(I/O)
5. Protection
6. Networking, etc.
1. Process control: end, abort, create, terminate, allocate and free memory.
2. File management: create, open, close, delete, read and write files, etc.
3. Device management
4. Information maintenance
5. Communication
Features of system calls:

1. Interface: System calls provide a well-defined interface between user programs and the operating system. Programs
make requests by calling specific functions, and the operating system responds by executing the requested service and
returning a result.
2. Protection: System calls are used to access privileged operations that are not available to normal user programs. The
operating system uses this privilege to protect the system from malicious or unauthorized access.
3. Kernel Mode: When a system call is made, the program is temporarily switched from user mode to kernel mode. In
kernel mode, the program has access to all system resources, including hardware, memory, and other processes.
4. Context Switching: A system call requires a context switch, which involves saving the state of the current process and
switching to the kernel mode to execute the requested service. This can introduce overhead, which can impact system
performance.
5. Error Handling: System calls can return error codes to indicate problems with the requested service. Programs must
check for these errors and handle them appropriately.
6. Synchronization: System calls can be used to synchronize access to shared resources, such as files or network
connections. The operating system provides synchronization mechanisms, such as locks or semaphores, to ensure that
multiple programs can access these resources safely.
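The error-handling point can be seen directly: when a system call fails, the kernel returns an error code, which in Python surfaces as an OSError carrying an errno value. A small POSIX-flavored sketch using a path that does not exist:

```python
import errno
import os

try:
    os.open("/no/such/path/at-all", os.O_RDONLY)   # open() fails in the kernel
except OSError as e:
    code = e.errno                  # the kernel's error code, e.g. ENOENT
    print(code == errno.ENOENT)     # True
    print(os.strerror(code))        # human-readable description of the error
```

Programs that skip this check silently lose the only signal the kernel gives them that the requested service did not happen.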
Ch. 4 Operating System Services
Syllabus

• Processes ✓
• Process Structure ✓
• Process Scheduling ✓
• Scheduling Algorithms ✓
• Process Synchronization & Deadlocks ✓
• Thread Management ✓
• Memory Management
• System Calls ✓
• File System
Memory Management in Operating System

 The term Memory can be defined as a collection of data in a specific format.


 It is used to store instructions and process data.
 The memory comprises a large array or group of words or bytes, each with its own
address.
 The primary motive of a computer system is to execute programs.
 These programs, along with the information they access, should be in the main
memory during execution.
 The CPU fetches instructions from memory according to the value of the program
counter.
Memory Management in Operating System

To achieve a degree of multiprogramming and proper utilization of


memory, memory management is important. Many memory
management methods exist, reflecting various approaches, and the
effectiveness of each algorithm depends on the situation.
What is Main Memory:

 The main memory is central to the operation of a modern computer.


 Main Memory is a large array of words or bytes, ranging in size from hundreds of thousands to
billions. Main memory is a repository of rapidly available information shared by the CPU and I/O
devices.
 Main memory is the place where programs and information are kept when the processor is
effectively utilizing them.
 Main memory is associated with the processor, so moving instructions and information into and
out of the processor is extremely fast.
 Main memory is also known as RAM (Random Access Memory). RAM is volatile
memory: it loses its data when a power interruption occurs.
Types of memory devices
What is Memory Management

 In a multiprogramming computer, the operating system resides in a part of


memory and the rest is used by multiple processes.
 The task of subdividing the memory among different processes is called memory
management.
 Memory management is a method in the operating system to manage
operations between main memory and disk during process execution.
 The main aim of memory management is to achieve efficient utilization of
memory.
Why Memory Management is required:

• To allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity during process execution.
Logical and Physical Address Space:

 Logical Address space: An address generated by the CPU is known as a “Logical


Address”. It is also known as a Virtual address. Logical address space can be defined as
the size of the process. A logical address can be changed.
 Physical Address space: An address seen by the memory unit (i.e., the one loaded into
the memory address register) is commonly known as a "Physical Address", or Real
address. The set of all physical addresses corresponding to these logical addresses is
known as the Physical address space. The run-time mapping from virtual to physical
addresses is done by a hardware device, the Memory Management Unit (MMU).
The physical address always remains constant.
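In its simplest form the MMU performs relocation with a base register plus a limit check, which can be sketched as follows (the base and limit values are illustrative):

```python
# Toy MMU: physical = base + logical, with a limit check for protection.
def translate(logical, base, limit):
    if not (0 <= logical < limit):
        raise MemoryError(f"logical address {logical} outside process space")
    return base + logical

BASE, LIMIT = 14000, 500            # process loaded at 14000, 500 bytes long
print(translate(346, BASE, LIMIT))  # 14346: CPU's logical 346 relocated
try:
    translate(600, BASE, LIMIT)     # out of range: hardware would trap
except MemoryError as e:
    print("trap:", e)
```

The CPU only ever sees logical addresses; the mapping happens on every memory reference, which is why changing the base register is enough to relocate a whole process.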
Static and Dynamic Loading:

 Loading a process into the main memory is done by a loader. There are two different types of
loading :
• Static loading:- The entire program is loaded into memory at a fixed address before
execution starts. It requires more memory space.
• Dynamic loading:- If the entire program and all data of a process had to be in physical
memory, the size of a process would be limited to the size of physical memory. To gain
proper memory utilization, dynamic loading is used: a routine is not loaded until it is
called. All routines reside on disk in a relocatable load format. One advantage of
dynamic loading is that an unused routine is never loaded. This is useful when large
amounts of code are needed only to handle infrequently occurring cases.
Static and Dynamic linking:

 To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
• Static linking: In static linking, the linker combines all necessary program modules into a
single executable program. So there is no runtime dependency. Some operating systems
support only static linking, in which system language libraries are treated like any other
object module.
• Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In
dynamic linking, “Stub” is included for each appropriate library routine reference. A stub
is a small piece of code. When the stub is executed, it checks whether the needed
routine is already in memory or not. If not available then the program loads the routine
into memory.
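The stub mechanism can be sketched in Python; the routine name, the `routines` table, and the fake loader are invented for illustration.

```python
loaded = {}   # tracks which "library routines" have actually been loaded

def load_routine(name):
    # Stand-in for locating the routine in a shared library on disk.
    loaded[name] = True
    return lambda x: x * 2          # the "real" routine

def make_stub(name):
    def stub(x):
        real = load_routine(name)   # first call: load the routine...
        routines[name] = real       # ...and replace the stub in the table
        return real(x)
    return stub

routines = {"double": make_stub("double")}
print(loaded)                  # {} : nothing loaded yet
print(routines["double"](21))  # 42 : first call triggers the load
print(loaded)                  # {'double': True}
```

After the first call the stub has patched itself out, so subsequent calls go straight to the loaded routine, which is the point of the scheme: pay the load cost once, and only for routines actually used.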
Swapping :

 When a process is to be executed, it must reside in memory.
 Swapping is the process of temporarily moving a process out of main memory into
secondary memory (which is slow compared to main memory), and later bringing it back.
 Swapping allows more processes to be run than can fit into memory at one time. The
main cost of swapping is transfer time, and the total transfer time is directly proportional
to the amount of memory swapped.
 Swapping is also known as roll out, roll in: if a higher-priority process arrives and wants
service, the memory manager can swap out a lower-priority process and then load and
execute the higher-priority process.
 After the higher-priority work finishes, the lower-priority process is swapped back into
memory and continues execution.
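Since transfer time dominates and is proportional to the memory moved, the cost of a swap is easy to estimate. The transfer rate below is an assumed figure for illustration.

```python
# Transfer time = amount of memory moved / transfer rate.
def swap_time_ms(process_kb, transfer_rate_kb_per_ms):
    return process_kb / transfer_rate_kb_per_ms

rate = 50.0                       # assume a 50 KB/ms transfer rate
out = swap_time_ms(1000, rate)    # swap a 1000 KB process out: 20.0 ms
back = swap_time_ms(1000, rate)   # ...and later back in: another 20.0 ms
print(out + back)                 # 40.0 ms total transfer time
```

This is why swapping favors knowing how much memory a process actually uses: swapping only the used portion, rather than the whole allocation, cuts the transfer time proportionally.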
Thank You
