Os Unit 02 Renew
Uploaded by Debasish Sarangi

Process Management:

Process Management is a core function of the operating system that manages processes throughout their lifecycle. A process is an instance of a program in execution, and process management ensures that processes run efficiently, fairly, and without interfering with one another. Key tasks include:
Process Creation and Termination: Creating new processes, terminating processes, and managing process resources.
Process Scheduling: Determining which process should be executed at any given time, based on a scheduling algorithm.
Process Synchronization: Coordinating processes so that they can share resources without conflicts.
Inter-Process Communication (IPC): Mechanisms for processes to communicate with each other, share data, or synchronize their actions.

Process Concept: A process in memory consists of:
Program Code (Text Section): The actual code or instructions of the program being executed.
Program Counter (PC): A register that keeps track of the address of the next instruction to be executed.
Process Stack: Used for function calls, local variables, and control flow during execution.
Heap: Dynamic memory used by the program for runtime memory allocation.
Data Section: Contains global and static variables used by the program.
Process Control Block (PCB): A data structure that holds important information about a process, such as its state, program counter, register values, and memory management information.

Process States:
New: A process that is being created.
Ready: A process that is loaded into memory and ready to run, but waiting for CPU time.
Running: A process currently being executed by the CPU.
Blocked (Waiting): A process that cannot proceed until some condition is met (e.g., I/O completion).
Terminated: A process that has completed execution and is about to be removed from memory.

Operations on Processes:
Process Creation: A new process is created with the fork() system call (in UNIX-like systems). The parent process creates a child process, which is an identical copy of the parent but with a unique process ID (PID).
Process Termination: A process terminates with the exit() system call. When a process finishes its execution, it returns a status code to the operating system. The parent process uses wait() to wait for a child process to finish and retrieve its exit status.
Process State Transitions: A process transitions between states: from new to ready, from ready to running, from running to waiting, and finally from waiting or running to terminated.
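The fork()/exit()/wait() sequence described above can be sketched in Python, whose os module wraps the same POSIX calls (POSIX systems only; the exit code 7 is just an illustrative value):

```python
import os

pid = os.fork()  # create a child process: an identical copy of the parent
if pid == 0:
    # Child branch: fork() returned 0. Do some work, then terminate,
    # returning a status code to the operating system.
    print(f"child PID: {os.getpid()}")
    os._exit(7)  # direct analogue of the exit() system call
else:
    # Parent branch: fork() returned the child's PID. wait for the
    # child to finish and retrieve its exit status.
    child_pid, status = os.waitpid(pid, 0)
    print(f"child {child_pid} exited with code {os.WEXITSTATUS(status)}")
```

Note that fork() returns twice: 0 in the child and the child's PID in the parent, which is how the two copies tell themselves apart.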
Context Switching: When the CPU switches from executing one process to another, the OS performs a context switch. This involves saving the state (register values, program counter, etc.) of the current process into its PCB and loading the state of the next process from its PCB.

Process Scheduling and Algorithms:

Process Scheduling is the mechanism by which the operating system decides which process to execute next. CPU time is shared among multiple processes, and the OS uses scheduling algorithms to decide the order in which processes run.
Types of Scheduling:
Preemptive Scheduling: The OS can stop a running process (preemption) and start executing another. This ensures fair distribution of CPU time.
Non-preemptive Scheduling: A running process is not interrupted; it runs until it either finishes or voluntarily yields the CPU.
Scheduling Algorithms:
First-Come, First-Served (FCFS): The first process to arrive is the first to be executed. Simple, but can lead to the convoy effect, where long processes delay the execution of shorter ones.
Shortest Job Next (SJN) or Shortest Job First (SJF): The process with the shortest burst time (expected CPU time) is executed next. It minimizes average waiting time but requires knowing burst times in advance, which is often difficult.
Priority Scheduling: Each process is assigned a priority, and the process with the highest priority is executed first. If two processes have the same priority, scheduling can fall back to FCFS. It can be preemptive or non-preemptive. Issue: starvation, where low-priority processes might never get executed.
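The convoy effect, and why SJF minimizes average waiting time, can be seen in a small sketch (the burst times are illustrative, and all processes are assumed to arrive at time 0):

```python
# Average waiting time under non-preemptive scheduling: each process
# waits for the total burst time of everything scheduled before it.
def avg_waiting_time(bursts):
    waiting, elapsed = 0, 0
    for b in bursts:
        waiting += elapsed   # this process waited for all earlier ones
        elapsed += b
    return waiting / len(bursts)

bursts = [24, 3, 3]                      # arrival order (FCFS runs this order)
fcfs = avg_waiting_time(bursts)          # waits 0, 24, 27 -> average 17.0
sjf = avg_waiting_time(sorted(bursts))   # waits 0, 3, 6   -> average 3.0
print(fcfs, sjf)
```

The long 24-unit job at the head of the FCFS queue delays both short jobs (the convoy effect); running shortest-first cuts the average wait from 17 to 3 time units.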
Inter-Process Communication (IPC):

Inter-Process Communication (IPC) refers to mechanisms that allow processes to communicate with each other and share data. IPC is necessary because each process typically operates in its own address space and cannot directly access the memory of another process. IPC ensures that processes can cooperate and exchange information safely and efficiently.
Types of IPC:
Shared Memory: Multiple processes share a region of memory. One process writes data to the memory, and another process reads it. It is very fast since communication happens through direct memory access. To avoid race conditions, synchronization mechanisms like semaphores, mutexes, or locks are needed.
Message Passing: Processes communicate by sending and receiving messages through a message-passing facility (e.g., pipes, message queues). This method is slower than shared memory but simpler and safer. Direct Communication: processes communicate directly, specifying the recipient. Indirect Communication: processes communicate through a message queue or broker.
Pipes: A pipe is a unidirectional communication channel used for transferring data between processes.
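An anonymous pipe between a parent and a forked child can be sketched in Python (POSIX only; the message text is illustrative):

```python
import os

# One unidirectional channel: the child writes, the parent reads.
read_fd, write_fd = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(read_fd)                       # child only writes
    os.write(write_fd, b"hello from child")
    os.close(write_fd)
    os._exit(0)
else:
    os.close(write_fd)                      # parent only reads
    data = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)                      # reap the child
    print(data.decode())
```

Each side closes the end it does not use; closing the write end is also what signals end-of-data to the reader.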
Named pipes (FIFOs) can be used for communication between unrelated processes, while anonymous pipes are typically used between a parent and its child.
Sockets: Sockets provide communication between processes over a network (local or remote). A socket is an endpoint for sending or receiving data. Sockets can be used for both TCP (reliable) and UDP (unreliable) communication.
Signals: Signals are used to notify processes of events (e.g., a process can be sent a signal when an error occurs or when a particular condition is met). Signals can be asynchronous and are used for process control, such as terminating a process (e.g., SIGKILL, SIGTERM).
Semaphores: A semaphore is a variable used for synchronization, typically to manage access to shared resources. A binary semaphore is used for mutual exclusion (mutex), while a counting semaphore allows multiple processes to access a limited number of resources.

Thread and Process Concept:

Process: A process is a program in execution. It is an independent unit of execution with its own memory space and resources. The operating system manages processes to ensure that they do not interfere with each other and that each process gets a fair share of the system's resources.
Characteristics of a Process:
Program Counter (PC): Keeps track of the address of the next instruction to be executed.
Memory: A process has its own address space, including code, data, stack, and heap.
Process Control Block (PCB): Holds the process's metadata, such as state, process ID, CPU registers, and memory management information.
Resource Allocation: A process may hold various resources such as files, semaphores, or devices.
Thread: A thread is a smaller unit of execution within a process. While a process is an independent entity, a thread is the basic unit of execution within that process. Multiple threads can exist within a process and share the same memory space and resources, but each thread has its own execution stack, program counter, and local variables.
Characteristics of a Thread:
Shared Memory: Threads within the same process share the same memory space (code, data), which allows for efficient communication.
Thread Control Block (TCB): Similar to a PCB, but for threads. It contains thread-specific information such as the program counter and stack pointer.
Concurrency: Threads allow a process to perform multiple operations concurrently. For example, one thread could be reading data while another thread processes it.
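Threads sharing the memory of one process, with a lock providing the synchronization discussed above, can be sketched as follows (the counter variable and iteration count are illustrative):

```python
import threading

# Both threads see the same `counter`, because threads within a
# process share its memory space.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # mutual exclusion on the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; without it, updates can be lost
```

Removing the lock turns the read-increment-write on `counter` into a race condition, so the final value can come out lower than 200000.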
Differences between Threads and Processes:

Aspect          Process                                      Thread
Memory          Each process has its own memory space        Threads share the memory space of the process
Communication   Inter-process communication (IPC) needed     Direct communication between threads (fast)
Overhead        More overhead due to separate memory         Less overhead, as threads share resources
                space and resources
Execution       Executes independently                       Executes concurrently with other threads
                                                             in the same process

Threads are lighter-weight and more efficient for multitasking within the same process, whereas processes are used when tasks require separate memory spaces or more isolation.
Deadlock:

A deadlock is a situation in a multi-process or multi-threaded environment where a set of processes or threads are unable to proceed because each one is waiting for resources held by others in the set. Deadlock can occur in systems with shared resources (such as memory, CPU, or I/O devices) when processes lock resources and enter a circular waiting pattern.
Conditions for Deadlock (all four must hold simultaneously):
Mutual Exclusion: At least one resource must be held in a non-shareable mode; only one process can use the resource at a time.
Hold and Wait: A process holding at least one resource is waiting to acquire additional resources currently held by other processes.
No Preemption: Resources cannot be forcibly taken from processes; they must be released voluntarily.
Circular Wait: A set of processes exists such that each process is waiting for a resource held by the next process in the set, forming a circular chain.

Deadlock Detection: Deadlock detection is a method used by the operating system to identify whether a deadlock has occurred. It typically involves monitoring the state of the system and determining whether processes are stuck in a circular wait.
Deadlock Detection Techniques:
Resource Allocation Graph (RAG): Processes and resources are represented as nodes in a directed graph. An edge from a process to a resource indicates that the process is requesting that resource; an edge from a resource to a process indicates that the resource is currently allocated to that process. Deadlock is detected by checking for cycles in the graph: a process waiting for a resource held by another process, which in turn waits on the first, creates a circular wait.
Detection Algorithm: The system periodically checks whether any processes are waiting indefinitely. The detection algorithm checks for circular wait conditions and can be more complex, requiring it to track which processes wait on which resources. Example: when a process requests a resource that is not available, the system adds a request edge to the graph. If at any time a cycle forms, a deadlock has occurred.
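The cycle check on a resource-allocation graph can be sketched with a depth-first search (the graph below is a hypothetical two-process example; with single-instance resources, a cycle implies deadlock):

```python
# Nodes are process/resource names; edges are request edges (P -> R)
# and assignment edges (R -> P). A cycle means circular wait.
def has_cycle(graph):
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:        # back edge: we returned to a node
            return True             # still on the current path -> cycle
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))  # True -> deadlock detected
```

Breaking either assignment edge (say, P2 releasing R2) removes the cycle and the deadlock with it.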
Deadlock Prevention:

Deadlock Prevention aims to ensure that at least one of the necessary conditions for deadlock does not occur, thus preventing deadlock from ever happening.
Strategies for Deadlock Prevention:
Eliminating Mutual Exclusion: Not always feasible, because some resources (e.g., printers, disk drives) must be exclusive. However, some resources can be shared among processes (e.g., read-only files).
Eliminating Hold and Wait: A process must request all the resources it needs at once and must not hold any resources while waiting for others.
Eliminating No Preemption: Resources can be preempted from processes that are holding resources while waiting for others. If a process is holding resources that other processes need, the system may take those resources away and give them to the other processes. This approach can lead to higher system overhead and complex resource management.
Eliminating Circular Wait: This can be achieved by imposing a strict ordering of resource requests. For instance, resources can be numbered, and processes may only request resources in increasing order of their IDs, ensuring no cycles can form in the resource-request graph.
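The resource-ordering idea can be sketched with two threads that each want the same pair of locks; the numbering rule is illustrative. Because both threads acquire the lower-numbered lock first regardless of the order they were asked for, a circular wait cannot form:

```python
import threading

# Each resource is (id_number, lock); requests must follow increasing id.
lock_a = (1, threading.Lock())
lock_b = (2, threading.Lock())

def with_both(first, second, action):
    # Sort by id so every thread acquires locks in the same global order.
    low, high = sorted((first, second), key=lambda pair: pair[0])
    with low[1]:
        with high[1]:
            return action()

results = []
t1 = threading.Thread(target=lambda: results.append(with_both(lock_a, lock_b, lambda: "t1")))
t2 = threading.Thread(target=lambda: results.append(with_both(lock_b, lock_a, lambda: "t2")))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2'] -- both complete; no circular wait
```

Without the sort, t1 taking lock_a then lock_b while t2 takes lock_b then lock_a is exactly the hold-and-wait cycle that produces deadlock.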
Deadlock Avoidance:

Deadlock Avoidance involves making dynamic decisions about resource allocation to ensure that deadlock can never occur. It is more flexible than deadlock prevention because it allows processes to request resources as needed, but the system ensures that such requests do not lead to deadlock.
Strategies for Deadlock Avoidance:
Banker's Algorithm: One of the best-known deadlock avoidance algorithms, used primarily in systems where resources are allocated dynamically. The system must know in advance the maximum resources each process will need. When a process requests resources, the OS checks whether granting the request would leave the system in a safe state (a state where deadlock is not possible).
Safe State: A state where there exists a sequence of processes that can each be allocated the resources they need and eventually terminate.
Unsafe State: A state where no such sequence exists, leading to potential deadlock.
The Banker's algorithm uses the following approach: before granting a resource request, the system checks whether it is possible to allocate the resources without leading to deadlock, by simulating the allocation and checking whether the system can still reach a safe state.
Resource Allocation Graph (RAG) with Claim Edges: In the RAG, a "claim edge" represents a process that may eventually request a resource. If granting a request would create a cycle in the graph, the request is denied. This prevents circular wait conditions from forming.
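The safe-state test at the heart of the Banker's algorithm can be sketched for a single resource type (the allocation and maximum-need figures below are a hypothetical example): repeatedly find a process whose remaining need fits in the available pool, let it finish, and reclaim its allocation.

```python
# Safety check: is there an order in which every process can finish?
def is_safe(available, allocation, maximum):
    need = [m - a for m, a in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    work = available
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and need[i] <= work:
                work += allocation[i]  # process i finishes, releases its units
                finished[i] = True
                progress = True
    return all(finished)

# 3 processes, 1 resource type: 3 units free; needs are 3, 4, 6 more units.
print(is_safe(3, [1, 2, 2], [4, 6, 8]))   # True: P0 finishes, then P1, then P2
print(is_safe(0, [1, 2, 2], [4, 7, 9]))   # False: nothing fits in 0 free units
```

A request is granted only if the state that would result still passes this check; the multi-resource version is the same loop with vectors in place of the integers.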
