Chapter 3 (Lecture 9) Operating System by Ihap El-Galaly

The document summarizes key concepts about processes from an operating systems lecture: 1. A process is an active program in memory that can be in various states like running, ready, waiting, and terminated. 2. Each process is represented by a process control block (PCB) that stores its state, resources, and scheduling information. 3. Modern operating systems allow multithreading where a process can have multiple threads of execution to perform multiple tasks simultaneously. 4. Process creation, termination, and communication involve operations like forking, executing, waiting, and signaling between related processes.


Chapter 3 (Lecture 9) Operating System by Ihap El-Galaly

Process – a program in execution; process execution must progress in
sequential fashion. There is no parallel execution of the instructions of a
single process.
A program is a passive entity, such as a file containing a list of
instructions stored on disk (often called an executable file).
A process is an active entity, with a program counter specifying the next
instruction to execute and a set of associated resources.
A program becomes a process when an executable file is loaded into
memory.
The memory layout of a process is typically divided into multiple
sections (parts) including:
•Text section: The program code (the executable code).
•Data section containing global variables.
•Heap section containing memory dynamically allocated during
program run time.
•Stack section containing temporary data when invoking functions
(function parameters, return addresses, and local variables).
•The sizes of the text and data sections are fixed,
as their sizes do not change during program run
time.
•The stack and heap sections can shrink and
grow dynamically during program execution.

For the stack: each time a function is called, an activation
record containing function parameters, local
variables, and the return address is pushed onto the stack; when the
function returns, the activation record is popped.
Similarly, the heap will grow as memory is dynamically allocated, and will shrink
when memory is returned to the system.
Although the stack and heap sections grow toward one another, the
operating system must ensure they do not overlap one another.
Process State is defined in part by the current activity of that process.
A process may be in one of the following states:
•New: The process is being created.
•Running: Instructions are being executed.
•Waiting: The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
•Ready: The process is waiting to be assigned to a processor.
•Terminated: The process has finished execution.

Only one process can be running on any processor core at a given
instant. Many processes may be ready and waiting.


Process Control Block (PCB)
Each process is represented in the OS by a process control block
(PCB) (also called a task control block). It contains many pieces of
information associated with a specific process, including:
•Process state – new, running, waiting, etc.
•Program counter – address (location) of the next instruction to be
executed for this process.
•CPU registers – contents of all process-centric registers.
•CPU-scheduling information – process priority, pointers to
scheduling queues, etc.
•Memory-management information – memory allocated to the
process.
•Accounting information – amount of CPU used, clock time elapsed
since start, time limits, account numbers, job or process numbers, etc.
•I/O status information – list of I/O devices allocated to the process,
list of open files, etc.
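The PCB fields listed above can be sketched as a plain data structure. This is only an illustrative model for study purposes; the field names are my own, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block; field names are hypothetical."""
    pid: int
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    priority: int = 0              # CPU-scheduling information
    memory_limits: tuple = (0, 0)  # memory-management information
    cpu_time_used: float = 0.0     # accounting information
    open_files: list = field(default_factory=list) # I/O status information

# The OS would allocate one PCB per process and update it on each transition.
pcb = PCB(pid=42)
pcb.state = "ready"
```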
Most modern operating systems have extended the process concept to
allow a process to have multiple threads of execution and thus to
perform more than one task at a time.

•A web browser might have one thread display images or text while
another thread retrieves data from the network.
•A word processor may have a thread for displaying graphics, another
thread for responding to keystrokes from the user, and a third thread for
performing spelling and grammar checking in the background.
A thread includes a thread ID, a program counter (PC), a register set,
and a stack. It shares with other threads belonging to the same process
its code section, data section, and other OS resources like open files
and signals.
A traditional process has a single thread of control. If a process has
multiple threads of control, it can perform more than one task at a time.
Advantages of Multithreading
1.Responsiveness. Multithreading an interactive application may allow
a program to continue running even if part of it is blocked or is
performing a lengthy operation, thereby increasing responsiveness to
the user.
2.Resource sharing. Threads share the memory and the resources of
the process to which they belong by default.
3.Economy. Allocating memory and resources for process creation is
costly. Because threads share the resources of the process to which
they belong, it is more economical to create and context-switch threads.
4.Scalability. The benefits of multithreading can be even greater in a
multiprocessor architecture, where threads may be running in parallel
on different processing cores.
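The resource-sharing advantage can be seen in a short sketch: the threads below share the process's data (the `shared` dictionary) while each runs with its own private stack of local variables. The names here are my own, chosen for the example:

```python
import threading

shared = {}  # all threads of a process share its data section and heap

def worker(name: str, n: int) -> None:
    # `name`, `n`, and `total` are locals on this thread's private stack.
    total = sum(range(n))
    shared[name] = total  # but results land in memory shared by all threads

threads = [threading.Thread(target=worker, args=(f"t{i}", 10 * i))
           for i in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for all threads to finish
# shared now holds one result per thread: {"t1": 45, "t2": 190, "t3": 435}
```

No data had to be copied between the threads; they simply wrote into the memory of the process they belong to, which is exactly the economy argument above.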
A process's parent is the process that created it; its children are any
processes that it creates. Its siblings are children with the same parent
process.

In Linux, all active processes are maintained in a doubly linked list of task_struct entries.


Process Scheduling
Process scheduler selects among available processes for next
execution on CPU core.
•Each CPU core can run one process at a time.
•Goal: maximize CPU use, quickly switch processes onto a CPU core.
•The number of processes currently in memory is known as the degree
of multiprogramming.

•An I/O-bound process is one that spends more of its time doing I/O
than it spends doing computations.
•A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
The OS maintains scheduling queues of processes:
•Ready queue – set of all processes residing in main memory, ready
and waiting to execute. This queue is generally stored as a linked list.
•Wait queues – set of processes waiting for an event (e.g. I/O
completion).
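The movement of processes between the ready queue and a wait queue can be sketched with two deques. This is a toy model (PCBs are reduced to bare pids, and the function names are my own), not a real scheduler:

```python
from collections import deque

ready_queue = deque([101, 102, 103])  # processes ready to execute
wait_queue = deque()                  # processes waiting for an event (I/O)

def dispatch() -> int:
    """Select the process at the head of the ready queue for execution."""
    return ready_queue.popleft()

def block_on_io(pid: int) -> None:
    """The running process issues an I/O request and joins a wait queue."""
    wait_queue.append(pid)

def io_complete() -> None:
    """The awaited event occurs: move the process back to the ready queue."""
    ready_queue.append(wait_queue.popleft())

running = dispatch()   # 101 is dispatched to a CPU core
block_on_io(running)   # 101 issues an I/O request and waits
io_complete()          # 101 becomes ready again, behind 102 and 103
```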
Processes migrate among the various queues throughout their lifetimes.

A new process is initially put in the ready queue, where it waits until it is
selected for execution, or dispatched.

Once the process is allocated a CPU core and is executing, one of
several events could occur:


•The process could issue an I/O request and then be placed in an I/O
wait queue.
•The process could create a new child process and then be placed in a
wait queue while it awaits the child’s termination.
•The process could be removed forcibly from the core, as a result of an
interrupt or having its time slice expire, and be put back in the ready
queue.
A process continues this cycle until it terminates, at which time it is
removed from all queues and has its PCB and resources deallocated.
The CPU scheduler selects from among the processes that are in the
ready queue and allocates a CPU core to one of them.
The CPU scheduler must select a new process for the CPU frequently.
•An I/O-bound process may execute for only a few milliseconds before
waiting for an I/O request.
•A CPU-bound process will require a CPU core for longer durations.
•The scheduler is designed to forcibly remove the CPU from a process
and schedule another process to run.
Swapping is an intermediate form of scheduling: a process can be
"swapped out" from memory to disk, where its current status is saved,
and later "swapped in" from disk back to memory, where its status is
restored.
Chapter 3 (Lecture 10) Operating System by Ihap El-Galaly
A context switch occurs when the CPU switches from one process to
another.

and to run a kernel routine.


When the CPU switches to another process, the system must save the
state of the old process (state save) and load the saved state for the
new process (state restore) via a context switch.

Context of a process is represented in the PCB of the process.


Context-switch time is pure overhead; the system does no useful work
while switching.
Operations on Processes

The system must provide mechanisms for:
•Process creation.
•Process termination.
A parent process creates child processes, which, in turn, create other
processes, forming a tree of processes.
A process is generally identified by a unique process
identifier (pid), which is typically an integer number.
A child process may obtain its resources directly from the OS, or it may
be constrained to a subset of the resources of the parent process.
Resource sharing options:
•Parent and children share all resources.
•Children share subset of parent’s resources.
•Parent and child share no resources.
Execution options:
•The parent continues to execute concurrently with its children.
•The parent waits until some or all of its children have terminated.
Address-space possibilities for the new process:
•The child process is a duplicate of the parent process (it has the same
program and data as the parent).
•The child process has a new program loaded into it.

•fork() system call creates a new process.


•exec() system call used after a fork() to replace the process’ memory
space with a new program.
•The parent process calls wait(), waiting for the child to terminate.
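The fork()/wait() pattern above can be sketched with Python's thin wrappers over these POSIX calls (this runs on Unix-like systems only; the function name and exit code are my own choices):

```python
import os

def spawn_and_wait(code: int) -> int:
    """Fork a child that exits with `code`; the parent waits for it."""
    pid = os.fork()          # creates a duplicate of the parent process
    if pid == 0:
        # Child branch: fork() returned 0. A real program would often
        # call an exec() variant here to load a new program; this child
        # simply terminates with the given exit status.
        os._exit(code)
    # Parent branch: fork() returned the child's pid; wait for it.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)  # extract the child's exit code

print(spawn_and_wait(7))  # → 7
```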
Process Termination
Process executes last statement and then asks the operating system to
delete it using the exit() system call.
Parent may terminate the execution of children processes using the
abort() system call.
Some operating systems do not allow a child to exist if its parent has
terminated. In such systems, if a process terminates (either normally or
abnormally), then all its children must also be terminated.
•This is called cascading termination: all children, grandchildren, etc.,
are terminated.

Inter-process Communication
The Google Chrome web browser is multi-process, with three different
types of processes:
•Browser process manages the user interface, and disk and network
I/O. Only one browser process is created when Chrome starts.
•Renderer process renders web pages and deals with HTML,
Javascript, images, etc. A renderer process is created for each website
opened, and it runs in a sandbox, which restricts disk and network I/O
and minimizes the effect of security exploits.
•Plug-in process is created for each type of plug-in (e.g. QuickTime) in
use.
•If one website crashes, only its renderer process is affected; all other
processes remain unharmed.
Processes executing concurrently in the operating system may be either
independent or cooperating.
•A process is independent if it does not share data with any other
processes executing in the system.
•A cooperating process can affect or be affected by other processes
executing in the system, including sharing data.
Reasons for providing an environment that allows process cooperation:
•Information sharing: several applications may be interested in the
same piece of information.
•Computation speedup: In multicore systems, we may break a task (to
run faster) into subtasks, each of which will be executing in parallel with
the others.
•Modularity: A software system is constructed in a modular fashion,
dividing the system functions into separate processes or threads.
Cooperating processes require an interprocess communication (IPC)
mechanism that will allow them to exchange data. There are two
fundamental models of IPC:
•Shared memory: a region of memory that is shared by the
cooperating processes is established. Processes can then exchange
information by reading and writing data to the shared region.
•Message passing: communication takes place by means of messages
exchanged between the cooperating processes.

IPC in Shared-Memory Systems


Shared-memory IPC establishes an area of memory shared among the
processes that wish to communicate.
The communication is under the control of the user processes, not the
operating system.
A major issue is to provide a mechanism to synchronize the processes'
actions so that two processes are not writing to the same location
simultaneously.

Producer-Consumer Problem

•A producer process produces information that is consumed by a
consumer process.

•Unbounded-buffer places no practical limit on the size of the buffer.


•Bounded-buffer assumes that there is a fixed buffer size:

pointers: in and out.


•The variable in points to the next free position in the buffer.
•The variable out points to the first full position in the buffer.
•The buffer is empty when in == out.
•The buffer is full when ((in + 1) % BUFFER_SIZE) == out.
This scheme allows at most (BUFFER_SIZE − 1) items in the buffer at
the same time.
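The circular-buffer rules above translate directly into code. A sketch (in Python, `in` is renamed `inp` because it is a keyword; the error handling stands in for "the process must wait"):

```python
BUFFER_SIZE = 5  # the buffer therefore holds at most BUFFER_SIZE - 1 items

buffer = [None] * BUFFER_SIZE
inp = 0   # next free position (the text's `in`)
out = 0   # first full position

def empty() -> bool:
    return inp == out

def full() -> bool:
    return (inp + 1) % BUFFER_SIZE == out

def produce(item) -> None:
    global inp
    if full():
        raise BufferError("producer must wait")
    buffer[inp] = item
    inp = (inp + 1) % BUFFER_SIZE  # advance circularly

def consume():
    global out
    if empty():
        raise BufferError("consumer must wait")
    item = buffer[out]
    out = (out + 1) % BUFFER_SIZE  # advance circularly
    return item

for i in range(BUFFER_SIZE - 1):  # only BUFFER_SIZE - 1 items fit
    produce(i)
assert full()
```

One slot is deliberately wasted: if all five slots could fill, `in == out` would be ambiguous between "empty" and "full".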
IPC in Message-Passing Systems
Message passing allows processes to communicate and to synchronize
their actions without sharing the same address space (no shared
variables).
•It is particularly useful in a distributed environment, where the
communicating processes may reside on different computers connected
by a network.
IPC facility provides two operations:
•send(message)
•receive(message)
The message (sent by a process) size is either fixed or variable.

If processes P and Q want to communicate, a communication link must
exist between them. Implementation questions include:
•How are links established?


•Can a link be associated with more than two processes?
•How many links can there be between every pair of communicating
processes?
•What is the capacity of a link?
•Is the size of a message that the link can accommodate fixed or
variable?
•Is a link unidirectional or bi-directional?
Implementation of Communication Link
Physical:
•Shared memory.
•Hardware bus.
•Network.
Logical:
•Direct or indirect communication.
•Synchronous or asynchronous communication.
•Automatic or explicit buffering.
Direct Communication
Processes must name each other explicitly, naming the
recipient or sender of the communication:

•send (P, message) – send a message to process P.


•receive(Q, message) – receive a message from process Q.

•A Link is established automatically between every pair of processes


that want to communicate. The processes need to know only each
other’s identity to communicate.
•A link is associated with exactly one pair of communicating processes
(i.e. two processes).
•Between each pair of processes, there exists exactly one link.
•The link may be unidirectional, but is usually bi-directional.

Indirect Communication
Messages are sent to and received from mailboxes (also
referred to as ports):
•Each mailbox has a unique identification (id).

•send(A, message) – send a message to mailbox A.


•receive(A, message) – receive a message from mailbox A.
•Two processes can communicate only if they share a mailbox.
A mailbox can be viewed abstractly as an object into which messages
can be placed by processes and from which messages can be removed.
A process may communicate with another process via a number of
different mailboxes.

•A link is established between processes only if processes share a


common mailbox.
•A link may be associated with many (more than two) processes.
•Each pair of processes may share several communication links, with
each link corresponding to one mailbox.
•Link may be unidirectional or bi-directional.
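Indirect communication can be sketched as a table of named mailboxes, each holding a FIFO queue of messages. This is a toy single-process model (the function names mirror the send(A, message)/receive(A, message) primitives above; returning None for an empty mailbox is my own choice):

```python
from collections import defaultdict, deque

# Mailbox table: mailbox id -> FIFO queue of pending messages.
mailboxes = defaultdict(deque)

def send(mailbox_id: str, message) -> None:
    """Any process sharing the mailbox may deposit a message in it."""
    mailboxes[mailbox_id].append(message)

def receive(mailbox_id: str):
    """Remove the oldest message from the mailbox (None if it is empty)."""
    box = mailboxes[mailbox_id]
    return box.popleft() if box else None

# P1 sends to mailbox A; P2 and P3 both try to receive from A.
send("A", "ping")
first = receive("A")   # one receiver gets the message...
second = receive("A")  # ...the other finds the mailbox empty
```

This also illustrates the sharing question below: only one of the two receivers can obtain the single message.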

•P1, P2, and P3 share mailbox A.


•P1 sends; P2 and P3 each execute receive() from A.
•Who gets the message? Which process will receive the message sent
by P1? The answer depends on which of the following methods we
choose:

•Allow a link to be associated with at most two processes.


•Allow only one process at a time to execute a receive operation.
•Allow the system to select the receiver arbitrarily (that is, either P2 or
P3, but not both, will receive the message). The sender may be notified
of which process received the message.
Synchronization
Communication between processes takes place through calls to send()
and receive() primitives. There are different design options for
implementing each primitive.
Message passing may be either blocking or non-blocking, also known
as synchronous and asynchronous.
Blocking is considered synchronous:
•Blocking send -- the sender process is blocked until the message is
received by the receiving process or by the mailbox.
•Blocking receive -- the receiver is blocked until a message is
available.
Non-blocking is considered asynchronous:
•Non-blocking send -- the sender process sends the message and
continues (i.e. resumes operation).
•Non-blocking receive -- the receiver retrieves either a valid message
or a null message (when no message is available).
The solution to the producer–consumer problem becomes trivial when
we use blocking send() and receive() statements:
•The producer invokes a blocking send() and waits until the message is
delivered to either the receiver or the mailbox.
•The consumer invokes a blocking receive() and blocks until a
message is available.
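Python's `queue.Queue` gives exactly this blocking behaviour between threads: `put()` blocks when the queue is full and `get()` blocks when it is empty, so the producer–consumer logic needs no explicit synchronization. A sketch (threads stand in for processes; names are my own):

```python
import queue
import threading

# A bounded, thread-safe mailbox: put() blocks when full, get() when empty.
mailbox = queue.Queue(maxsize=3)
consumed = []

def producer() -> None:
    for i in range(5):
        mailbox.put(i)  # blocking send: waits if the queue is full

def consumer() -> None:
    for _ in range(5):
        consumed.append(mailbox.get())  # blocking receive: waits if empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
# consumed == [0, 1, 2, 3, 4]: all items arrive, in order
```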
Buffering
Messages exchanged by communicating processes reside in a
temporary queue. Such queues can be implemented in one of three
ways:

1.Zero capacity – no messages are queued on a link. Sender must
wait for receiver (rendezvous).
2.Bounded capacity – finite length of n messages, sender must wait if
link is full. The link’s capacity is finite.
3.Unbounded capacity – potentially infinite length; the sender never
waits (never blocks).
Communication in Client-Server Systems
A socket is defined as an endpoint for communication.
A socket is identified by an IP address concatenated with a port
number – a number included at the start of a message packet to
differentiate network services on a host.
The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8.
Ports below 1024 are well known and are used for standard services.
Communication between a pair of processes takes place over a pair of
sockets – one for each process. Each socket is identified by an IP
address and a port number.
Sockets follow a client–server architecture:
•The server waits for incoming client requests by listening to a
specified port.
•Once a request is received, the server accepts a connection from the
client socket to complete the connection.


Servers implementing specific services (e.g. HTTP) listen to well-known
ports (an HTTP server listens to port 80).
When a client process initiates a request for a connection, it is assigned
a port by its host computer. This port has some arbitrary number
greater than 1024.
The packets traveling between the hosts are delivered to the
appropriate process based on the destination port number.
All connections must be unique. Therefore, if another process also on
host X wished to establish another connection with the same web
server, it would be assigned a port number greater than 1024 and not
equal to 1625. This ensures that all connections consist of a unique pair
of sockets.
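The listen/connect/accept sequence above can be sketched with Python's `socket` module over the loopback interface. Binding to port 0 asks the OS to assign an arbitrary free port, mirroring the ephemeral client ports just described; a thread stands in for the server process:

```python
import socket
import threading

def run_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()  # accept completes the connection
    with conn:
        conn.sendall(b"hello from server")

# Server side: create a socket, bind it to an (IP, port) pair, and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]  # learn which port we were assigned

t = threading.Thread(target=run_server, args=(server,))
t.start()

# Client side: name the server by its (IP address, port) socket.
client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

The connection here is the unique pair of sockets the text describes: the server's listening socket plus the client's OS-assigned ephemeral socket.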

Java provides three types of sockets:
•Connection-oriented (TCP) – the Socket class.
•Connectionless (UDP) – the DatagramSocket class.
•MulticastSocket class – data can be sent to multiple recipients.

You might also like