OS Unit 2 & 3

Unit-II

Mrs. Shubhangi S. Sapkale


School of Computer Sciences
K B C North Maharashtra University, Jalgaon
Today’s discussion…
• Computer organization interface: using the interrupt handler to pass control between a running program and the OS.

11/16/2024 CA-102: Operating Systems [Conducted by: S. S. Sapkale] 2


Interrupt handler
• In computer organization, an interrupt handler is a key component of how a computer's operating system (OS) manages multitasking and responsive interactions with hardware devices.
• It plays a crucial role in transferring control between a running program and the OS when an interrupt occurs.
• Here’s how the interrupt handler facilitates this control flow:


1. Interrupt Occurrence
• An interrupt is a signal from either hardware (e.g., input from a keyboard, mouse, or a system timer) or software (e.g., a system call or exception).
• When an interrupt occurs, it indicates that something outside the currently executing process requires immediate attention from the CPU.



Interrupt handler
2. Saving the Program's State
• When an interrupt is triggered, the processor stops executing the current program and saves its state.
• This state includes the program counter, registers, and other CPU state.
• The saved state allows the processor to return to the program once the interrupt has been handled.
3. Transferring Control to the OS
• After saving the state, the CPU transfers control to the interrupt handler.
• This handler is a special routine that resides in the OS.
• The OS is responsible for determining what kind of interrupt occurred and what needs to be done about it.
• The specific handler is typically selected via the interrupt vector table, which holds pointers to the appropriate interrupt service routines (ISRs).



Interrupt handler
4. Executing the Interrupt Service Routine (ISR)
• The interrupt handler calls the appropriate Interrupt Service Routine (ISR) based on the type of interrupt.
• For example:
  • If the interrupt was generated by a hardware device (like a mouse), the ISR might read data from that device.
  • If the interrupt was a system timer, the ISR might trigger a context switch to allow multitasking.
5. Restoring the Program's State
• Once the ISR has completed its task, the interrupt handler restores the program's saved state and passes control back to the interrupted program.
• This allows the program to resume execution as if nothing had happened, with no loss of progress.
6. Return to the Running Program
• After the ISR is complete and the saved state is restored, the CPU resumes execution of the interrupted program at the exact point where it left off.
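The whole flow above can be sketched as a toy dispatcher (illustrative Python, not how a real kernel is written; names like `vector_table` and the register dictionary are assumptions for the sketch):

```python
# Toy sketch of the interrupt flow: save state, look up the ISR in a
# vector table, run it, then restore state and resume.

saved_states = []

def save_state(cpu):
    saved_states.append(dict(cpu))   # snapshot the PC and registers

def restore_state(cpu):
    cpu.update(saved_states.pop())   # resume exactly where we left off

def timer_isr(cpu):
    cpu["ticks"] = cpu.get("ticks", 0) + 1   # e.g. advance the system clock

def keyboard_isr(cpu):
    cpu["last_key"] = "a"                    # e.g. read data from the device

# Interrupt vector table: interrupt number -> service routine
vector_table = {0: timer_isr, 1: keyboard_isr}

def handle_interrupt(cpu, irq):
    save_state(cpu)           # step 2: save the program's state
    isr = vector_table[irq]   # step 3: dispatch through the vector table
    isr(cpu)                  # step 4: execute the ISR
    restore_state(cpu)        # steps 5-6: restore state, resume the program

cpu = {"pc": 100, "r0": 7}
handle_interrupt(cpu, 0)      # afterwards the "program" continues at pc=100
```

The ISR's side effect (the tick count) persists, but the program counter and registers come back untouched, which is exactly why the interrupted program can continue as if nothing happened.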



Key Points
• Interrupt handlers ensure that the CPU can quickly respond to external events (like I/O requests) while minimizing disruption to currently running programs.
• Interrupt-driven I/O is more efficient than polling (which continuously checks device status) since it allows the CPU to handle other tasks while waiting for external events.
• Context switching between processes often uses the same mechanism, allowing the OS to allocate CPU time across multiple programs.



Unit-III
Process Management
Mrs. Shubhangi S. Sapkale
School of Computer Sciences
K B C North Maharashtra University, Jalgaon
1. Process Management
• A program does nothing unless its instructions are executed by a CPU.
• A program in execution, as mentioned, is a process.
• A time-shared user program such as a compiler is a process.
• A word-processing program being run by an individual user on a PC is a process.
• A system task, such as sending output to a printer, can also be a process.
• For now, you can consider a process to be a job or a time-shared program.



Differences between Process and Program
[Comparison table shown on slide]


2. Process States
• As a process executes, it changes state; the state of a process is generally determined by its current activity.
• Each process may be in one of the following states:
1. New: The process is being created.
2. Running: The process is being executed.
3. Waiting: The process is waiting for some event to occur.
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
• Only one process can be running on any processor at any time, but many processes may be in the ready and waiting states. The ready processes are kept in a “ready queue”.



2. Diagram of process state
• New -> Ready: The OS creates the process and prepares it for execution, then moves it into the ready queue.
• Ready -> Running: The OS selects one of the jobs from the ready queue and moves it from ready to running.
• Running -> Terminated: When the execution of a process has completed, the OS removes it from the running state. Sometimes the OS terminates a process for other reasons, including time exceeded, memory unavailable, access violation, protection error, I/O failure, and so on.



2. Diagram of process state
• Running -> Ready: When the time slice of the processor expires, or the processor receives an interrupt signal, the OS moves the process from running to ready.
• Running -> Waiting: A process is put into the waiting state if it needs an event to occur or requires an I/O device.
• Waiting -> Ready: A process in the waiting state is moved to the ready state when the event for which it was waiting has completed.
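The transitions listed on these two slides can be captured as a small state machine (a sketch; the `ALLOWED` set simply encodes the five-state model above):

```python
# Five-state process model: only the transitions listed on the slides are legal.
ALLOWED = {
    ("new", "ready"),           # OS admits the process into the ready queue
    ("ready", "running"),       # dispatched to the CPU
    ("running", "terminated"),  # execution completed (or aborted by the OS)
    ("running", "ready"),       # time slice expired / interrupt received
    ("running", "waiting"),     # waiting for an event or an I/O device
    ("waiting", "ready"),       # the awaited event has completed
}

class Process:
    def __init__(self):
        self.state = "new"

    def move(self, new_state):
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.move(s)     # walks one legal path through the diagram
```

Note that there is no ("new", "running") pair: a process cannot jump straight onto the CPU without first passing through the ready queue, which matches the diagram.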



3. Process Control Block:
• Each process is represented in the operating system by a Process Control Block (PCB).
• It is also called a Task Control Block.
• It contains many pieces of information associated with a specific process.



3. Process Control Block:
1. Process State: The state may be new, ready, running, waiting, or terminated.
2. Program Counter: Indicates the address of the next instruction to be executed.
3. CPU Registers: Include accumulators, stack pointers, general-purpose registers, etc.
4. CPU-Scheduling Info: Includes the process priority, pointers to scheduling queues, and other scheduling parameters.
5. Memory-Management Info: Includes page tables, segment tables, and the values of the base and limit registers.
6. Accounting Information: Includes the amount of CPU time used, time limits, and job (or process) numbers.
7. I/O Status Information: Includes the list of I/O devices allocated to the process and the list of open files.
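The seven fields above can be sketched as a data structure (field names are illustrative; a real kernel's PCB holds far more):

```python
# Sketch of a Process Control Block with one field per category on the slide.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                              # 1. process state
    program_counter: int = 0                        # 2. address of next instruction
    registers: dict = field(default_factory=dict)   # 3. CPU registers
    priority: int = 0                               # 4. CPU-scheduling info
    page_table: dict = field(default_factory=dict)  # 5. memory-management info
    cpu_time_used: float = 0.0                      # 6. accounting information
    open_files: list = field(default_factory=list)  # 7. I/O status information

pcb = PCB(pid=42)   # a freshly created process starts in the "new" state
```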
4. PROCESS SCHEDULING
• In multiprogramming the CPU is kept busy, because it switches from one job to another.
• In simple computers, by contrast, the CPU sits idle until an I/O request is granted.
• Scheduling is an important OS function.
• All resources are scheduled before use (CPU, memory, devices, …).
• Process scheduling is an essential part of a multiprogramming operating system.
• Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.



5. PROCESS SCHEDULING
• Objectives of Scheduling:
• Maximize throughput (one measure of work is the number of processes completed per time unit, called throughput).
• Maximize the number of users receiving acceptable response times.
• Be predictable.
• Balance resource use.
• Avoid indefinite postponement.
• Enforce priorities.
• Give preference to processes holding key resources.



6. SCHEDULING QUEUES
• There are 3 types of scheduling queues:
1. Job queue:
• When processes enter the system, they are put into the job queue, which consists of all processes in the system.
• Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue:
• A process that is present in main memory and ready to be allocated the CPU for execution is kept in the ready queue.
3. Device queue:
• A process that is in the waiting state, waiting for an I/O event to complete, is said to be in a device queue; the set of processes waiting for a particular I/O device is called that device's queue.



7. Schedulers
• Scheduler duties:
• Maintain the queues.
• Select a process from the queues and assign it to the CPU.
• There are 3 schedulers:
1. Long-term scheduler
2. Medium-term scheduler
3. Short-term scheduler



8. Schedulers
• Types of schedulers:
1. Long-term scheduler:
• Selects jobs from the job pool and loads them into main memory (the ready queue).
• The long-term scheduler is also called the job scheduler.
2. Short-term scheduler:
• Selects a process from the ready queue and allocates it to the CPU.
• If a process requires an I/O device that is not currently available, the process enters a device queue.
• The short-term scheduler maintains the ready queue and device queues. It is also called the CPU scheduler.
3. Medium-term scheduler:
• If a process requests an I/O device in the middle of its execution, the process is removed from main memory and placed in the waiting queue.
• When the I/O operation completes, the job is moved from the waiting queue back to the ready queue.



9. Schedulers
[Scheduler / queueing diagram shown on slide]


10. Context Switch
• Assume main memory contains more than one process.
• If the CPU is executing a process and its time expires, or a higher-priority process enters main memory, the scheduler saves information about the current process in its PCB and switches to executing another process.
• The act of the scheduler moving the CPU from one process to another is known as a context switch.
• Non-Preemptive Scheduling: Once the CPU is assigned to a process, it is not released until that process completes. The CPU is assigned to another process only after the previous process has finished.
• Preemptive Scheduling: The CPU can be released from a process even in the middle of its execution. For example, if the CPU receives a signal from process p2 while running p1, the OS compares the priorities of p1 and p2. If p1 > p2, the CPU continues executing p1; if p1 < p2, the CPU preempts p1 and is assigned to p2.



10. Context Switch
• Dispatcher:
• The main job of the dispatcher is switching the CPU from one process to another.
• The dispatcher connects the CPU to the process selected by the short-term scheduler.
• Dispatch latency:
• The time it takes the dispatcher to stop one process and start another is known as dispatch latency.
• If dispatch latency increases, the degree of multiprogramming decreases.



11. Operations with examples
from UNIX (fork, exec)
• In UNIX-like operating systems, interrupts are central to how the OS handles multitasking, user programs, and system processes.
• Two common system calls that rely on these mechanisms are fork() and exec().
• Together, they play a critical role in process management.
• Here’s an explanation of how they work, with examples, as well as their relationship with interrupts and context switching.



11. Operations with examples
from UNIX (fork, exec)
1. fork(): Creating a New Process
• The fork() system call is used to create a new process by duplicating the calling process (the parent).
• The new process created by fork() is called the child process, and it is an exact copy of the parent process except for a few key differences (e.g., the child has a different process ID).
• Key Concept:
• When fork() is called, a system-call interrupt occurs, and the OS takes control to handle the creation of the new process.
• During this interrupt, the OS duplicates the calling process by copying its memory, file descriptors, and registers.
• The OS then creates a new process control block (PCB) for the child.
11. Operations with examples
from UNIX (fork, exec)
1. fork(): [code example shown on slide]
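The code slide itself is an image; below is a minimal sketch of the behaviour described, using Python's os.fork() (a thin wrapper over the UNIX system call; POSIX systems only):

```python
# fork() returns twice: 0 in the child, the child's PID in the parent.
# Both processes continue independently from the point of the call.
import os
import sys

pid = os.fork()
if pid == 0:
    # Child: an exact copy of the parent, but with its own PID
    child_pid = os.getpid()
    sys.exit(0)            # exit here so the child does not run the parent's code
else:
    # Parent: wait for the child so it does not linger as a zombie
    os.waitpid(pid, 0)
```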


11. Operations with examples
from UNIX (fork, exec)
1. fork():
• In this case, fork() creates a child process, and both parent and child continue executing the code.
• The parent and child processes execute independently and can take different paths depending on the logic in the code.



11. Operations with examples
from UNIX (fork, exec)
2. exec(): Replacing the Process
• The exec() family of functions (such as execv(), execl(), execvp(), etc.) is used to replace the current process image with a new process image.
• When a process calls exec(), the current program (the one that called exec()) is completely replaced by the new program specified in the parameters of the exec() call.
• Importantly, the process ID (PID) remains the same, but everything else (code, data, stack) is replaced by the new program.



11. Operations with examples
from UNIX (fork, exec)
2. exec(): Replacing the Process
• Key Concept:
• After a call to fork(), a common practice is to call exec() in the child process to run a different program.
• This allows the child process to execute a completely different program than the parent.
• exec() does not return to the calling program unless an error occurs.
• After exec(), the original program is gone, replaced by the new one.



11. Operations with examples
from UNIX (fork, exec)
2. exec(): Here, the child process created by fork() is replaced by the ls command using execl().
• The ls command lists the files in the current directory, and after its execution the child process terminates.
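The corresponding code slide is likewise an image; a sketch of the fork()+exec() pattern it describes (assuming /bin/ls exists, as on most UNIX systems):

```python
# The child replaces itself with /bin/ls via exec: its PID stays the same,
# but its code, data, and stack now belong to ls. exec only returns on error.
import os

pid = os.fork()
if pid == 0:
    os.execl("/bin/ls", "ls", ".")      # child: never returns on success
else:
    _, status = os.waitpid(pid, 0)      # parent: wait for ls to finish
    exit_code = os.waitstatus_to_exitcode(status)
```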



11. Operations with examples
from UNIX (fork, exec)
3. Relationship Between fork(), exec(), and Interrupt Handling:
• Interrupt Handling in fork():
• When fork() is called, the operating system must handle the request by suspending the current program's execution.
• This triggers a system-call interrupt, which causes the OS to enter kernel mode.
• In kernel mode, the OS creates a copy of the process, setting up a new process control block (PCB) for the child.
• Once the child is created, the OS returns control back to user space, resuming the parent and child processes.



11. Operations with examples
from UNIX (fork, exec)
3. Relationship Between fork(), exec(), and Interrupt Handling:
• Interrupt Handling in exec():
• When exec() is called, a similar system-call interrupt occurs, except this time the process is not duplicated.
• Instead, the current process’s memory space is replaced by the new program’s memory space.
• The OS manages the loading of the new program into memory, and once the program is loaded, the OS transfers control back to the newly loaded program.



11. Operations with examples
from UNIX (fork, exec)
3. Relationship Between fork(), exec(), and Interrupt Handling:
• Context Switching:
• When fork() creates a child process, both the parent and child are separate processes that require the CPU.
• To allocate CPU time between the parent and child, the OS uses context switching, which may be triggered by a timer interrupt or an explicit system call.
• The OS saves the state of the current process (registers, memory mappings, etc.) and loads the state of the next process to be executed.



11. Practical Usage Example: A Shell
1. In a UNIX-like shell, you commonly see the fork() and exec() system calls used
together.
2. The shell uses fork() to create a new child process to run a command, and the
child process uses exec() to replace itself with the desired program.
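This fork()/exec()/wait() cycle can be sketched as a toy command runner (run_command is an illustrative helper, not a real shell; POSIX only):

```python
# Toy version of what a shell does for each command line:
# fork a child, exec the command in the child, wait for it in the parent.
import os
import shlex

def run_command(line):
    argv = shlex.split(line)
    pid = os.fork()
    if pid == 0:
        os.execvp(argv[0], argv)        # child becomes the command ($PATH search)
    _, status = os.waitpid(pid, 0)      # the shell waits for the foreground job
    return os.waitstatus_to_exitcode(status)

code = run_command("echo hello")        # echo prints "hello" and exits with 0
```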



12. SCHEDULING CRITERIA
• Throughput: How many jobs are completed by the CPU within a time period.
• Turnaround time: The time interval between the submission of a process and its completion.
TAT = waiting time in ready queue + execution time + waiting time in waiting queue for I/O.
• Waiting time: The time spent by the process waiting for the CPU to be allocated.
• Response time: The time between submission and the first response.
• CPU utilization: The CPU is a costly device and must be kept as busy as possible. E.g., CPU efficiency of 90% means it is busy for 90 time units and idle for 10.



13. CPU Scheduling Algorithms
• First-Come, First-Served (FCFS) scheduling:
• The process that requests the CPU first holds the CPU first. When a process requests the CPU it is placed in the ready queue, and the CPU is connected to processes in arrival order.
• Consider the following set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds.
“Burst time is the time required by the CPU to execute that job, in milliseconds.”



13. CPU Scheduling Algorithms
• First-Come, First-Served (FCFS) scheduling:
[Gantt chart shown on slide]
Average turnaround time = (5+29+45+55+58)/5 = 192/5 = 38.4 ms
Average waiting time = (0+5+29+45+55)/5 = 134/5 = 26.8 ms
Average response time = (0+5+29+45+55)/5 = 26.8 ms
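The process table for this example is on an image slide; assuming burst times of 5, 24, 16, 10, and 3 ms (all arriving at time 0), which are consistent with the completion times used above, the averages can be recomputed:

```python
# FCFS metrics for five processes, all arriving at t = 0.
# Burst times are inferred from the completion times 5, 29, 45, 55, 58.
bursts = [5, 24, 16, 10, 3]

t = 0
waiting, turnaround = [], []
for b in bursts:
    waiting.append(t)        # with arrival 0, waiting time = start time
    t += b
    turnaround.append(t)     # with arrival 0, turnaround = completion time

avg_wt = sum(waiting) / len(bursts)       # (0+5+29+45+55)/5 = 26.8
avg_tat = sum(turnaround) / len(bursts)   # (5+29+45+55+58)/5 = 38.4
```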



• First-Come, First-Served (FCFS) scheduling:
• It is a non-preemptive scheduling algorithm.
• Advantages: Easy to implement; simple.
• Disadvantage: Average waiting time can be very high.



• Shortest Job First Scheduling (SJF):
• The CPU is assigned to the process with the smallest CPU burst time. If two processes have the same CPU burst time, FCFS is used to break the tie.


• Shortest Job First Scheduling (SJF): [worked example shown on slide]


• Shortest Remaining Time First (SRTF):
• The Shortest Remaining Time First (SRTF) algorithm is a preemptive version of
the Shortest Job Next (SJN) scheduling algorithm.

• In SRTF, the process with the shortest remaining execution time is always
selected for execution.

• If a new process arrives with a shorter remaining time than the currently
running process, the running process is preempted, and the new process starts
execution.



• Shortest Remaining Time First (SRTF): [worked-example Gantt chart slides omitted]


• Round-Robin Scheduling
• The Round Robin (RR) scheduling algorithm is a preemptive CPU scheduling
technique that assigns a fixed time unit called a time quantum (or time slice) to
each process in the ready queue.

• Each process executes for a maximum of one time quantum at a time, then it's
placed back into the queue if it hasn't finished, allowing other processes a
chance to execute.



• Round-Robin Scheduling: [worked-example Gantt chart slides omitted]


• Round-Robin Scheduling
Key Points:
• Time Quantum: If the quantum is too large, Round Robin behaves like First-Come, First-Served (FCFS). If it is too small, there is more overhead due to frequent context switching.
• Fairness: RR is fair because every process gets an equal time share in each round.
• Preemptive: Processes are preempted after the quantum expires, promoting responsiveness in time-sharing systems.
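The rotation described above can be sketched as a small simulation (the burst times and quantum are illustrative):

```python
# Round-Robin sketch: each process runs for at most one quantum, then goes
# to the back of the ready queue if it still has work left.
from collections import deque

def rr_completion(bursts, quantum):
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t, done = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for at most one time quantum
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # unfinished: back of the queue
        else:
            done[i] = t                    # record the completion time
    return done

c = rr_completion([24, 3, 3], quantum=4)   # short jobs finish early at t=7, t=10
```

Notice how the two short jobs complete quickly even though the long job arrived first, which is exactly the responsiveness benefit FCFS lacks.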



• Multilevel Queue Scheduling
• Multilevel Queue Scheduling is a CPU scheduling algorithm that partitions the ready queue into multiple queues based on process priority or type, where each queue can use a different scheduling algorithm.
• Processes are permanently assigned to one queue based on criteria like priority, memory requirements, or process type (interactive vs. batch).
• Each queue has its own scheduling policy, and processes are scheduled between the queues according to predefined rules.



• Multilevel Queue Scheduling
• Key Characteristics:
• Multiple Queues: The ready queue is divided into several separate queues.
• Scheduling Policy per Queue: Each queue can use a different scheduling algorithm, such as First-Come-First-Served (FCFS), Shortest Job First (SJF), or Round Robin (RR).
• Fixed Priority: Queues are given different priorities. Processes in higher-priority queues are executed first, and lower-priority queues are executed only if higher-priority queues are empty.
• No Movement Between Queues: Processes remain in the queue to which they are initially assigned.





• Multilevel Queue Scheduling: [example slides omitted]


Inter-process communication
(shared memory and message passing)
• Interprocess communication (IPC) allows processes to exchange data and signals, which is essential for multitasking systems.
• There are two main types of IPC methods:
1. Shared memory
2. Message passing



Inter-process communication
(shared memory and message passing)
1. Shared Memory
• In shared-memory IPC, multiple processes can access the same memory region.
• One process writes data into this shared memory, and another process reads from it.
• Mechanism: The OS allocates a shared memory segment that can be mapped into the address space of the processes.
• Speed: Fast, since data doesn't have to be copied from one process to another.
• Synchronization: Requires explicit synchronization mechanisms like semaphores or mutexes to prevent race conditions.
• Best for: Situations where large amounts of data need to be transferred frequently.
• Example: Multiple processes updating the same data (like a database cache).



Inter-process communication
(shared memory and message passing)
1. Shared Memory
• Advantages:
• High performance for large data transfers.
• Lower overhead, since there's no need for system calls during communication.
• Disadvantages:
• Complex synchronization.
• Requires processes to reside on the same machine.



Inter-process communication
(shared memory and message passing)
2. Message Passing
• In message-passing IPC, processes communicate by sending messages to each other via the OS.
• Mechanism: The OS acts as an intermediary; one process sends a message and the other receives it.
• Speed: Slower than shared memory, as data is copied from the sender process to the OS and then to the receiver.
• Synchronization: Built-in synchronization; send and receive operations usually block the process until the message is transmitted or received.
• Best for: Systems where data needs to be transferred between processes running on different machines (e.g., distributed systems).
• Example: Processes sending requests to a server or across a network.



Inter-process communication
(shared memory and message passing)
2. Message Passing
• Advantages:
• Easier to implement and manage.
• Works in both local and distributed systems.
• Disadvantages:
• Slower compared to shared memory for large data transfers.
• Higher overhead due to context switching and data copying.



Inter-process communication
(shared memory and message passing)
[Comparison diagram shown on slide]


UNIX signals
1. In UNIX-like operating systems, signals are a form of interprocess communication (IPC) that notify a process of events or conditions asynchronously.
2. A signal interrupts the process, forcing it to pause its current activity and execute a signal handler or take a default action.



UNIX signals
• Key Concepts of UNIX Signals
1. Signal Types:
• Signals represent various types of events or conditions in the system. Some
common signals include:
• SIGINT (Signal Interrupt): Sent to a process when the user types Ctrl+C.
It typically requests the process to terminate.
• SIGKILL (Signal Kill): Immediately terminates the process. It cannot be
caught or ignored.
• SIGTERM (Signal Terminate): Requests graceful termination of the
process. It can be caught, blocked, or ignored by the process.
• SIGSTOP: Stops (pauses) the process’s execution.
• SIGCONT: Resumes execution of a stopped process.
• SIGHUP: Sent to a process when the terminal is disconnected. Often used
to reload configurations.
• SIGSEGV: Indicates a segmentation fault, which happens when a process
accesses an invalid memory location.
• SIGCHLD: Sent to a parent process when a child process terminates or
stops.



UNIX signals
• Key Concepts of UNIX Signals
2. Signal Actions:
• A process can take one of the following actions when receiving a signal:
• Default Action: The default behavior defined by the OS (e.g., terminate or
stop).
• Ignore: The process can ignore the signal (not all signals can be ignored,
e.g., SIGKILL).
• Handle: The process can define a custom signal handler function that will
be invoked when the signal is received.
3. Signal Delivery:
• Signals can be sent to a process via system calls like kill() (which sends
signals by process ID) or using raise() (which sends a signal to the process
that called it).
• Signals are asynchronous, meaning they can be delivered at any time
during the execution of a process.



UNIX signals
• Key Concepts of UNIX Signals
4. Signal Handlers:
• A signal handler is a function that is executed when a process receives a
particular signal.
• A process can set its own handler using functions like signal() or
sigaction().
• Example: Handling SIGINT (e.g., graceful shutdown on Ctrl+C):
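The example on the slide is an image; a minimal sketch of the idea in Python, whose signal.signal wraps the underlying UNIX API (a C program would use signal() or sigaction()):

```python
# Install a custom SIGINT handler, then deliver SIGINT to ourselves via kill().
# A real program would do its cleanup / graceful shutdown inside the handler.
import os
import signal

caught = []

def handle_sigint(signum, frame):
    caught.append(signum)          # e.g. flush buffers, close files, then exit

signal.signal(signal.SIGINT, handle_sigint)   # register the handler
os.kill(os.getpid(), signal.SIGINT)           # same effect as the user typing Ctrl+C
```

Without the handler, the default action for SIGINT would have terminated the process; with it, the process observes the signal and continues.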



UNIX signals
• Key Concepts of UNIX Signals
5. Signal Masking and Blocking:
• Processes can block signals from being delivered using a signal mask.
• This allows critical sections of code to run without being interrupted by
certain signals.
• Signals can be blocked or unblocked using sigprocmask().
6. Real-Time Signals:
• In addition to standard signals, UNIX systems support real-time signals,
which are more flexible and can carry additional data.
• They are delivered in a guaranteed order and are not discarded if sent
multiple times.
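A sketch of blocking and unblocking with a mask (Python's signal.pthread_sigmask wraps the POSIX masking calls the slide mentions; POSIX only): the signal stays pending while blocked and is delivered once unblocked.

```python
# Block SIGUSR1 around a "critical section": delivery is deferred while the
# signal is in the mask, and the pending signal arrives on unblock.
import os
import signal

got = []
signal.signal(signal.SIGUSR1, lambda signum, frame: got.append(signum))

signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})    # enter critical section
os.kill(os.getpid(), signal.SIGUSR1)    # delivery deferred: the signal is pending
during_block = list(got)                # handler has not run yet -> []
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})  # pending signal delivered
```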



UNIX signals
• Common Use Cases for UNIX Signals:
• Graceful termination: Handling SIGTERM to save the state and clean up
before terminating.
• Process management: Using SIGCHLD to manage child processes.
• Pausing and resuming: Using SIGSTOP and SIGCONT to pause and
resume processes.
• Handling errors: Using signals like SIGSEGV to handle invalid memory
accesses.



UNIX signals
• Important Signals Overview: [summary table shown on slide]


Thank You

