
Chap 4 : UNIX Process Control Subsystem

In an OS, a process is an instance of a program that is running
(it is like a small part of the program in execution). It can be
thought of as a fundamental unit of work that the system carries
out.

How to create a process?

 A process can be created in two ways in UNIX (a minimal C
sketch follows this list):
1st way (Forking): a process can create a child process using
the fork() system call. The child is a duplicate of the parent
but has a unique process ID.
2nd way (Exec): after forking, the child process can replace
its memory space with a new program using the exec() family of
functions; this allows it to run a different application.
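A minimal C sketch of this fork()/exec() pattern; the choice of ls -l as the replacement program is only an illustration:

```c
/* Sketch: create a child with fork() and replace its image with ls -l. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* duplicate the calling process */

    if (pid < 0) {                 /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {         /* child: replace memory image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");          /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    } else {                       /* parent: wait for the child to finish */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited\n", (int)pid);
    }
    return 0;
}
```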

# Process states
There are 5 process states, namely:
1. New – the process is being created
2. Ready – the process is ready to run and waiting for CPU time
3. Running – the process is currently executing
4. Blocked – the process cannot continue until some event
occurs, such as an I/O completion
5. Terminated – the process has finished execution.
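Purely as an illustration (not an actual kernel definition), the five-state model above could be written as a simple C enum:

```c
/* Illustrative sketch only: the five states described above. */
enum proc_state {
    PROC_NEW,        /* being created */
    PROC_READY,      /* runnable, waiting for CPU time */
    PROC_RUNNING,    /* currently executing on the CPU */
    PROC_BLOCKED,    /* waiting for an event, e.g. I/O completion */
    PROC_TERMINATED  /* finished execution */
};
```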

# Inter-Process Communication
As the name implies, this is used to establish communication
between processes.
> It includes several methods, such as:
1. Pipes
2. Message Queues
3. Shared Memory

#Pipes
A pipe is a mechanism that enables inter-process communication
by allowing the output of one process to be used as the input of
another. This concept is crucial in creating efficient and
modular software applications.
OR
Pipes are a foundational IPC mechanism in operating systems,
enabling efficient and structured communication between
processes.
$ Working
> Creation of a pipe: it is created using the pipe() system
call. This creates a pair of file descriptors, one for reading
and one for writing.
> Data Flow:
Data written to the write-end of the pipe is buffered and can be
read from the read-end.
If the buffer is full, the writing process is blocked until
space becomes available.
If no data is available, the reading process is blocked until
data is written.
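A minimal sketch of this pipe() flow, assuming a parent that writes one message and a child that reads it:

```c
/* Sketch: parent writes into the pipe, child reads from it. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: read from the pipe */
        char buf[64];
        close(fd[1]);                /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fd[0]);
    } else {                         /* parent: write into the pipe */
        const char *msg = "hello through the pipe";
        close(fd[0]);                /* close unused read end */
        write(fd[1], msg, strlen(msg));
        close(fd[1]);                /* closing the write end gives the reader EOF */
        wait(NULL);
    }
    return 0;
}
```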

Advantages of Pipes
> They simplify the flow of data between processes without
needing complex protocols.
> Since pipes use memory to buffer data, they are often faster
than other IPC methods that involve disk or network
communication.
> Allowing separate processes to communicate can lead to cleaner
and more modular code.
Limitations of Pipes
> Data flows in one direction only, which can require multiple
pipes for bidirectional communication.
> Anonymous pipes are limited to related processes, while named
pipes have a naming overhead.
> Processes may block if the pipe buffer is full or empty, which
can lead to deadlocks if not managed carefully.
Use Cases
 Used extensively in shell scripting and command-line
operations to connect commands (as sketched below)
 Useful in applications where data needs to be processed in
stages, such as filters in data streams
 Classic problems in concurrent programming, such as the
producer-consumer problem, can be managed effectively using
pipes
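As a sketch of the shell-pipeline use case, the following C program wires something like `ls | wc -l` by hand; the command names are only an example:

```c
/* Sketch: connect two commands with a pipe, the way a shell might. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {               /* first command: ls */
        dup2(fd[1], STDOUT_FILENO);  /* stdout -> pipe write end */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp"); _exit(1);
    }
    if (fork() == 0) {               /* second command: wc -l */
        dup2(fd[0], STDIN_FILENO);   /* stdin <- pipe read end */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp"); _exit(1);
    }

    close(fd[0]); close(fd[1]);      /* parent keeps no pipe ends open */
    wait(NULL); wait(NULL);          /* reap both children */
    return 0;
}
```

Closing the unused pipe ends in every process is what lets the second command see end-of-file once the first one finishes.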
$ Types
1. Anonymous (Unnamed) Pipes
2. Named Pipes

| Aspect             | Unnamed Pipe                                             | Named Pipe (FIFO)                                                 |
|--------------------|----------------------------------------------------------|-------------------------------------------------------------------|
| Definition         | Temporary IPC for related processes (e.g. parent-child)  | IPC for any processes, related or unrelated                       |
| Direction          | Typically unidirectional                                 | Can be unidirectional, or bidirectional using 2 pipes             |
| Naming             | No name; exists only in memory                           | Has a name in the filesystem, allowing access via that name       |
| Creation           | Created using pipe()                                     | Created using mkfifo()                                            |
| Scope              | Limited to the lifetime of the process                   | Exists until explicitly deleted                                   |
| Usage              | Commonly used in command-line pipelines                  | Suitable for communication across unrelated processes             |
| Blocking behaviour | Blocks if the buffer is full or empty                    | Same blocking behaviour, but can be accessed via the file system  |
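A minimal named-pipe sketch using mkfifo(); the path /tmp/demo_fifo and the "writer" argument are assumptions made for this example:

```c
/* Sketch: a FIFO shared by two unrelated processes.
   Run once with the argument "writer" and once without it. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0666);                       /* create the FIFO; harmless if it exists */

    if (argc > 1 && strcmp(argv[1], "writer") == 0) {
        const char *msg = "hello via FIFO\n";
        int fd = open(path, O_WRONLY);        /* blocks until a reader opens the FIFO */
        write(fd, msg, strlen(msg));
        close(fd);
    } else {
        char buf[64];
        int fd = open(path, O_RDONLY);        /* blocks until a writer opens the FIFO */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("read: %s", buf); }
        close(fd);
    }
    return 0;
}
```

Running the program in two terminals (one as the writer, one as the reader) shows the blocking behaviour described in the table above.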

#Process context
The context of a process in the UNIX operating system
encompasses the entirety of its current state and execution
environment. It can be broadly divided into three main layers:
user-level, register, and system-level contexts.
User-Level Context
This layer pertains to the information and resources that are
directly accessible and manipulated by the process itself. It
includes:
 Process Text: The actual program instructions that the
CPU executes. These instructions reside in memory and
are loaded from an executable file when the process is
created.
 Data: Variables, data structures, and dynamically
allocated memory used by the process during its
execution. This data is specific to the process and is not
directly accessible by other processes.
 User Stack: A LIFO (last-in-first-out) data structure used
for function call management, local variable storage, and
parameter passing. Each process has its own private
stack, essential for maintaining the execution flow of the
program.
 Shared Memory: In some cases, processes can share
specific regions of memory for inter-process
communication. These shared regions are part of each
participating process's user-level context, allowing them
to access and modify shared data.
Register Context
This layer represents the state of the CPU's registers at any
given moment during the process's execution. It is crucial for
preserving the process's exact state, especially during context
switches:
 Program Counter (PC): Points to the memory address of
the next instruction to be executed. This register ensures
the sequential execution of program instructions.
 Processor Status Register (PS): Contains various flags
and bits that reflect the current state of the CPU in
relation to the running process. This includes
information about the results of arithmetic operations
(zero, negative, overflow), the current execution mode
(user or kernel), and the processor execution level for
handling interrupts.
System-Level Context
This layer deals with information managed by the kernel on
behalf of the process. It has two parts:
Static System-Level Context
These components remain relatively constant throughout the
process's lifetime:
 Process Table Entry: Each process has an entry in the
kernel's process table. This entry stores essential
information about the process, such as its process ID
(PID), current state, scheduling priority, parent process
ID, and pointers to other data structures related to the
process.
 U Area: A dedicated memory area associated with each
process. It contains process-specific information that the
kernel needs to manage the process, such as open file
descriptors, I/O parameters, resource limits, and the
process's current working directory.
 Memory Mapping Information: This includes data
structures like page tables (for paging systems) that
define how the process's virtual address space maps to
physical memory. This mapping is crucial for memory
management and protection.
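Purely as an illustration of the kind of bookkeeping the static system-level context holds (not an actual UNIX kernel definition), a process table entry might be sketched like this:

```c
/* Hypothetical sketch only: real kernels use different names and many more fields. */
struct proc_entry {
    int   pid;          /* process ID */
    int   ppid;         /* parent process ID */
    int   state;        /* current process state (ready, running, ...) */
    int   priority;     /* scheduling priority */
    void *u_area;       /* pointer to the per-process u area */
    void *page_table;   /* memory-mapping information for the address space */
};
```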
Dynamic System-Level Context
These components change dynamically as the process
interacts with the system:
 Kernel Stack: When a process makes a system call or an
interrupt occurs, the CPU switches to kernel mode, and
the kernel uses a stack to manage its own function calls.
Each process has its own kernel stack, which is separate
from its user stack. This separation ensures that kernel
operations don't interfere with the process's execution
environment.
 Saved Registers of Previous Context Layer: Whenever a
context switch occurs, whether due to a system call,
interrupt, or scheduling decision, the kernel needs to
save the register context of the previously executing
process. This saved context is stored on the kernel stack
and is essential for resuming the process's execution
from where it left off.
The concept of context switching is central to how the UNIX
operating system manages multiple processes. During a
context switch, the kernel saves the current context of the
running process and restores the context of a different
process, effectively switching the CPU's focus from one
process to another. This mechanism, combined with the
process state model and scheduling algorithms, creates the
illusion of concurrency, making it seem like multiple
processes are running simultaneously even on a single CPU
core.
