OS Unit 2 & 3
When an interrupt occurs, it indicates that something outside the currently executing process requires the CPU's attention.
Interrupt-driven I/O is more efficient than polling (which continuously checks device
status) since it allows the CPU to handle other tasks while waiting for external events.
Context switching between processes often uses the same mechanism, allowing the OS
to allocate CPU time across multiple programs.
Only one process can be running on a given processor at any time, but many processes may be in the ready and waiting states. The ready processes are held in a “ready queue”.
• In simple computers, however, the CPU sits idle until the I/O request is granted.
• Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
• Be predictable.
• Enforce Priorities.
• While the CPU is executing a process, if its time slice expires or a higher-priority process enters main memory, the scheduler saves information about the current process in its PCB and switches the CPU to another process.
• The act of the scheduler moving the CPU from one process to another is known as a context switch.
• Preemptive scheduling: the CPU can be taken away from a process even in the middle of its execution. Suppose the CPU is executing process p1 and receives a signal from process p2. The OS compares the priorities of p1 and p2: if p1 > p2, the CPU continues executing p1; if p1 < p2, the CPU preempts p1 and is assigned to p2.
• The main job of the dispatcher is switching the CPU from one process to another.
• The dispatcher connects the CPU to the process selected by the short-term scheduler.
• Dispatch latency:
• The time it takes the dispatcher to stop one process and start another is known as dispatch latency.
• Two common system calls that utilize these mechanisms are fork() and exec().
• The fork() system call is used to create a new process by duplicating the
calling process (the parent).
• The new process created by fork() is called the child process, and it is an exact
copy of the parent process except for a few key differences (e.g., the child has
a different process ID).
• Key Concept:
• When fork() is called, an interrupt occurs, and the OS takes control to handle the
creation of a new process.
• During this interrupt, the OS duplicates the calling process by copying its
memory, file descriptors, and registers.
• The OS then creates a new process control block (PCB) for the child.
11/16/2024 CA-102: Operating Systems [Conducted by: S. S. Sapkale] 25
11. Operations with examples from UNIX (fork, exec)
1. fork():
• fork() creates a child process, and both the parent and the child continue executing the code that follows the call.
• The parent and child processes execute independently and can take different
paths depending on the logic in the code.
• The exec() family of functions (such as execv(), execl(), execvp(), etc.) is used
to replace the current process image with a new process image.
• When a process calls exec(), the current program (the one that called exec())
is completely replaced by the new program specified in the parameters of the
exec() call.
• Importantly, the process ID (PID) remains the same, but everything else
(code, data, stack) is replaced by the new program.
• When fork() is called, the operating system must handle the request by
suspending the current program's execution.
• This triggers a system call interrupt, which causes the OS to enter kernel
mode.
• Once the child is created, the OS returns control back to user space,
resuming the parent and child processes.
• When exec() is called, a similar system call interrupt occurs, except this
time, the process is not duplicated.
• The OS manages the loading of the new program into memory, and once
the program is loaded, the OS transfers control back to the newly loaded
program.
• Context Switching:
• When fork() creates a child process, both the parent and child are now
separate processes that require the CPU.
• To allocate CPU time between the parent and child, the OS uses context
switching, which may be triggered by a timer interrupt or an explicit
system call.
• The OS saves the state of the current process (such as registers, memory,
etc.) and loads the state of the next process to be executed.
• Throughput: how many jobs are completed by the CPU within a time period.
• The time interval between the submission of a process and the time of its completion is the turnaround time.
TAT = waiting time in the ready queue + execution time + waiting time in the waiting queue for I/O.
• Waiting time: the time the process spends waiting for the CPU to be allocated.
• Response time: the time from the submission of a request until the first response is produced.
• CPU utilization:
• The CPU is a costly device, so it must be kept as busy as possible. E.g., 90% CPU efficiency means the CPU is busy for 90 time units and idle for 10.
• The process that requests the CPU first holds the CPU first. When a process requests the CPU, it is loaded into the ready queue, and the CPU is connected to the process at the head of that queue.
• Consider the following set of processes that arrive at time 0, with the length of each CPU burst given in milliseconds.
“Burst time is the time required by the CPU to execute that job; it is measured in milliseconds.”
• In SRTF, the process with the shortest remaining execution time is always
selected for execution.
• If a new process arrives with a shorter remaining time than the currently
running process, the running process is preempted, and the new process starts
execution.
• Each process executes for a maximum of one time quantum at a time, then it's
placed back into the queue if it hasn't finished, allowing other processes a
chance to execute.
• Time Quantum:
If the quantum is too large, Round Robin behaves like First-Come, First-
Served (FCFS). If it's too small, there is more overhead due to frequent context
switching.
• Fairness:
RR is fair because every process gets an equal time share in each round.
• Preemptive: RR is preemptive; when the time quantum expires, a timer interrupt forces the running process to yield the CPU.
• Each queue has its own scheduling policy, and processes are scheduled
according to predefined rules between the queues.
• Multiple Queues: The ready queue is divided into several separate queues.
• Scheduling Policy per Queue: Each queue can use a different scheduling
algorithm, such as First-Come-First-Served (FCFS), Shortest Job First (SJF), or
Round Robin (RR).
1. Shared Memory
• In shared memory IPC, multiple processes can access the same memory region.
• One process writes data into this shared memory, and another process reads
from it.
• Speed: Fast, since data doesn't have to be copied from one process to another.
• Example: Multiple processes updating the same data (like a database cache).
• Advantages:
• Very fast: after setup, data is exchanged without kernel involvement.
• Disadvantages:
• Complex synchronization: processes must coordinate access (e.g., with semaphores) to avoid race conditions.
• Speed: Slower than shared memory, as data is copied from the sender process to
the OS and then to the receiver.
2. Message Passing
• Advantages:
• Simpler synchronization, and it works between unrelated processes (even across machines).
• Disadvantages:
• Extra copying and system-call overhead for every message.
2. A signal interrupts the process, forcing it to pause its current activity and
execute a signal handler or take default actions.