CSI3131 Ch3 Processes
Chapter 3
Processes
Plan
3.1 Process concept – Overview
• A process is a program loaded into memory and
executing (program in execution)
• To accomplish its task, a process needs certain
resources, such as:
• CPU time
• Memory
• Files
• I/O devices
• A process is the unit of work in most systems
• Systems consist of a collection of processes:
• operating-system processes execute system code
• user processes execute user code.
3.1 Process concept - A process is a program in execution
• A program stored on a hard disk is a passive entity and is not a
process
• such as a file containing a list of instructions stored on disk (often
called an executable file) – as in prog.exe or a.out
• A program becomes a process when an executable file is loaded
into memory.
• In contrast, a process is an active entity, with a program
counter (PC) specifying the next instruction to execute
and a set of associated resources.
• A process is also called a job, task, or user program in
different contexts or references. We will use these terms
interchangeably.
3.1 Process concept - Layout of a process in memory
Note: the stack and heap sections can shrink
and grow dynamically during program execution.
3.1 Process concept - Layout of a process in memory
• An example highlighting how the different sections of a process relate to an actual C program
3.1 Process state
• A running process moves to the waiting state when it needs a service from
the OS that the OS cannot immediately provide, freeing the core. For example,
when the process:
• issues an I/O request and must wait for the result (an I/O hardware interrupt/ISR signals completion)
• needs access to a resource that is not yet available
• needs a response from another process
3.1 Process control block (PCB)
The PCB is also called a task control block
• Process state – running, waiting, etc.
• Program counter – location of the next instruction to execute
• CPU registers – contents of all process-centric registers
• CPU-scheduling information – priorities, scheduling-queue
pointers
• Memory-management information – memory allocated
to the process
• Accounting information – CPU time used, clock time elapsed
since start, time limits
• I/O status information – I/O devices allocated to the process,
list of open files
3.1 Process control block (PCB)
• Process state: new, ready, running, waiting, halted, and so on.
• Program counter register (PC): indicates the address of the next
instruction to be executed for this process. It must be saved to
allow the process to be continued correctly when it runs again.
• Other CPU registers: vary from one computer architecture to another.
• CPU-scheduling information: process priority, scheduling-queue
pointers, and other scheduling parameters (see Ch. 5)
• Memory-management information: value of the base and limit
registers and the page tables, or the segment tables, depending
on the memory system used by the OS (see Ch. 9).
• Accounting information: includes the amount of CPU and real time used, time limits,
account numbers, job or process numbers, etc.
• I/O status information: includes the list of I/O devices allocated to
the process, a list of open files, and so on.
• Pointer to the next PCB in a linked list (see later in this chapter)
3.2 Process Scheduling - basic
• The objective of multiprogramming is to have some process running at
all times, so as to maximize CPU utilization.
• The objective of time sharing is to switch a CPU core among processes
so frequently that users perceive all programs as running in parallel.
• The process scheduler selects a process from the set of available
processes for execution on a core (covered in more detail in Ch. 5).
• Remember: on a system with a single CPU core, there will never be
more than one process running at a time.
• A multicore system can run multiple processes at one time, but we still
need scheduling: if there are more processes than cores (the usual
case), the excess processes will have to wait for a core. The number of
processes currently in memory is known as the degree of
multiprogramming.
3.2 Process Scheduling Queues
• As processes enter the system, they are put into a ready queue
• where they are ready and waiting to execute on a CPU's core
• This queue is generally stored as a linked list
• a ready-queue header contains pointers to the first PCB in the list
• each PCB includes a pointer field that points to the next PCB in the ready queue.
• The system also includes other queues.
• A process running on a CPU core will eventually terminate, be
interrupted, or wait (e.g., for the completion of an I/O request).
• Suppose the process makes an I/O request to a device such as a disk.
• Since a disk runs significantly slower than the processor, the process will have to wait for
the I/O to complete.
• Processes that are waiting for a certain event to occur are placed in a wait queue –
actually a queue of the processes' PCBs.
• Processes can migrate between the various queues as they change state
3.2 The ready queue and wait queues
3.2 The ready queue and wait queues
• When a process migrates between the various queues, it does not
physically move in memory
• Instead, the corresponding PCB's pointers are modified so that the PCB is
linked into a different queue (e.g., from the ready queue to a wait queue).
3.2 CPU scheduling - Process scheduling (short-term)
• The role of the CPU scheduler is to select
from among the processes that are in the
ready queue and allocate a CPU core to
one of them.
• CPU scheduling decides which of the ready
processes should run next on the CPU.
• The CPU scheduler must select a new
process for the CPU frequently.
• An I/O-bound process may execute for only a few
milliseconds before waiting for an I/O request.
• Since a CPU-bound process requires a CPU core for
longer durations, the scheduler is likely to remove
the CPU from it and schedule another process to
run.
• Therefore, the CPU scheduler executes at least
once every 100 milliseconds, often much more
frequently.
3.2 CPU scheduling (short-term) - queueing diagram
• Two types of queues are present: the ready queue and a set of wait queues.
• The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system.
• A process could issue an I/O
request and then be placed in
an I/O wait queue.
• A process could create a new
child process and then be
placed in a wait queue while it
awaits the child's termination
(a C coding example will be
shown later in this chapter).
• A process could be forced off
the core because of
• an interrupt or because its time
slice expired.
• When a process terminates, it is
removed from all queues and
has its PCB and resources
deallocated.
3.2 Intermediate form of scheduling (Swapping)
• Some OSes have an intermediate form of scheduling (medium-term),
known as swapping
• it can be used when the degree of multiprogramming needs to
decrease, by removing a process from memory to disk storage.
• Later, the process can be reintroduced into memory to resume its
execution
• So, with swapping:
• a process can be "swapped out" from memory to disk, with its current status
saved,
• and later "swapped in" from disk back to memory, where its status is restored.
• Swapping is typically only necessary when memory has been
overcommitted and must be freed up.
• Swapping is discussed in more detail in Chapter 9 (Main Memory).
3.2 Long-term scheduling (Job scheduling)
• Long-term scheduling decides when a process should enter the ready state and
start competing for the CPU.
• Processes are created at unpredictable times and enter the system in the new state.
• Long-term scheduling decides which
new processes are moved to the
ready list to compete for the CPU.
• Processes are also subject to
long-term scheduling when
suspended and later reactivated by
the OS.
• So, the long-term scheduler controls
the degree of multiprogramming,
to achieve optimal performance.
• Long-term scheduling takes place
less frequently than short-term
scheduling
• It occurs only at process creation and when the OS suspends the process. These events are
much less frequent than moving between the running, ready, or blocked states, which
requires short-term scheduling.
3.2 Quiz!
3.2 Scheduling - Context switch (CS)
• A context switch (CS), or process switch, occurs when the CPU moves from executing
one process, say process 0, to another process, say process 1
• A CS occurs when the CPU is interrupted during the execution of process 0
(by either a hardware or a software interrupt).
• Save the current context of the running process (0) to PCB0 (state save)
• essentially suspending the process
• Context of a process: the state of its execution, including the contents of
its registers, its PC, and its memory context, including its stack and heap.
• Find and access PCB1 of process 1, which was previously saved.
• Perform a context restore (state restore) from PCB1
• essentially the opposite of the context save, to run the previously suspended process 1
• Then, when process 1 is done, the above steps can be repeated to
suspend process 1 and run process 0 again (see the following
animation of a CS between process 0 and process 1)
3.2 Scheduling - Context switch from process to process
3.3 Process Operations - Process creation
• During execution, a process (the parent) can create one or more child processes
• In turn, a child process can also create its own children – forming a tree of processes
• Generally, a process is identified and managed via a process identifier (pid),
which is typically an integer number
• Usually, several properties can be specified at child creation time:
• Resource-sharing options
• Share all resources
• Share a subset of the parent's resources
• No sharing
• When a process creates a new process, two possibilities for execution exist:
• Parent and children execute concurrently
• Parent waits until children terminate
• There are two address-space possibilities for the new process:
• The child is a duplicate of the parent (it has the same program and data as the parent)
• The child has a new program loaded into it
3.3 Why Do We Need to Create A Child Process?
• To fulfil a program's need to perform more than one
function simultaneously.
• Since these jobs may be interrelated, they cannot simply be split
into two unrelated programs.
• When a parent process creates a child process, it passes
some data or instructions to the child process. The child
process is an independent entity that can execute a
different program or perform a different task from the
parent process.
3.3 Process creation – C program in UNIX example
• A new process is created by the fork() system call
• The new process consists of a copy of the address space of the parent process
• the parent and the child continue execution at the instruction after the fork()
• When the child is executing, fork() returns zero, whereas when the parent is executing,
fork() returns the PID of the child.
• the parent may call wait() to wait until the child terminates
• The exec() family of system calls is typically used after a fork() by one of the two
processes to replace that process's memory space with a new program
• When the child process completes, the parent process resumes from the call to wait()
and runs until it completes (return or exit)
• With n consecutive fork() calls, (2^n) – 1 child processes are created.
3.3 Process creation – C program in UNIX example
• This is a C code example for the
previous page's scenario
• In this example, the child process
overlays its address space with the
UNIX command /bin/ls (used to get a
directory listing) using the execlp()
system call
• execlp() is one of many different versions of
the exec() system call.
• The parent waits for the child process
to complete with the wait() system call.
• When the child process completes, the
parent process resumes from the call to
wait(), where it completes (return 0;).
3.3 Process creation using the fork() system call.
3.3 Another fork example – Parent waits for child to terminate
Watch this "nicely done" video first! forks basics explained: https://www.youtube.com/watch?v=cex9XrZCU14
And for wait(): https://www.youtube.com/watch?v=cex9XrZCU14
Also, if you wish, you can subscribe and watch all the other interesting topics about OS as well!

#include <sys/wait.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main(){
    pid_t pid; // a pid type (similar to: int pid;)
    printf("Parent process start: ID= %d\n", getpid());
    pid = fork();
    printf("Process executing now: ID %d\n", getpid());
    if (pid > 0) { // Parent executing
        printf("Congratulations you got a child with id %d\n", pid);
        wait(NULL); // parent waits for the child to terminate
    }
    return 0;
}
3.3 Processes Creation - Process via Windows API
• Processes are created in the Windows Application Programming
Interface (API) using the CreateProcess() function, which is similar to
fork() in that a parent creates a new child process. We will not cover
it here, but see the textbook for details and a coding example that
creates a child process loading the application mspaint.exe
• Discussion
• fork() is very convenient for passing data/parameters from the parent to the child
• all code can conveniently be in one place
• Direct process creation (as in Windows) is efficient for launching a new program
3.3 Typical process tree for the Linux OS
• The systemd process (a process is also called a task on Linux) always has a
pid of 1; it is the root parent process for all user processes and the first user
process created when the system boots.
• Once the system has booted, the systemd process creates processes which
provide additional services, such as
• a web or print server, an ssh (secure shell) server, …
• two children of systemd – logind and sshd
• The logind process is responsible for managing clients that directly log onto the system.
• In the example on the next page,
• a client has logged on and is using the bash shell, which has been assigned pid 8416.
• Using the bash command-line interface,
• this user has created the ps process as well as the vim editor.
• The sshd process is responsible for managing clients that connect to the system by using
ssh.
3.3 Typical process tree for the Linux OS
3.3 Process termination
• A process executes its last statement and asks the operating system to
delete it (by making the exit() system call)
• Status data is returned from the child to the parent (via wait())
• The process's resources are deallocated by the OS
• Abnormal termination
• Division by zero, memory access violation, …
• A parent may terminate the execution of its children using
the abort() system call. Some reasons for doing so:
• The child has exceeded its allocated resources
• The task assigned to the child is no longer required
• The parent is exiting, and the OS does not allow a child to continue if its
parent terminates
• Other OSes may find a parent process for the orphan process (e.g., the init or
systemd process in UNIX/Linux)
3.3 Process termination
• Some operating systems do not allow a child to exist if its parent has
terminated: if a process terminates, then all its children must also be
terminated.
• This is cascading termination. All children, grandchildren, etc. are terminated.
• The termination is initiated by the OS.
• The parent process may wait for the termination of a child process by using
the wait() system call. The call returns status information and the pid of
the terminated process: pid = wait(&status);
• A child process that has terminated but whose parent has not yet invoked
wait() is known as a zombie process
• A zombie process exists only briefly, until the parent calls wait(); then the zombie's
process identifier and its entry in the process table are released.
• If the parent terminated without invoking wait(), the process is an orphan
• Manual termination: Windows: TerminateProcess(…),
UNIX: kill(processID, signal)
3.3 Some exercises
• Process states
• Can a process move from waiting state to running state?
• From ready state to terminated state?
• PCB
• Does PCB contain program’s global variables?
• How is the PCB used in context switches?
• CPU scheduling
• What is the difference between long term and medium-term scheduler?
• A process (in Unix) has executed the wait() system call. In which queue is it located?
• Process creation
• Understand fork(), exec(), wait() … check your understanding:
• So, how many processes are created in this code fragment?
for(i=0; i<3; i++)
fork();
• Process termination
• How/when.
• What should the OS do?
3.4 Interprocess communication (IPC)
• Processes executing concurrently in the OS may be independent or
cooperating
• A process is independent if it does not share data with any other process executing in
the system.
• A process is cooperating if it can affect or be affected by the other processes executing
in the system (it shares data with other processes)
• Reasons for cooperating processes:
• Information sharing – e.g., copying and pasting from one app to another
• Computation speedup – break a task into subtasks to execute in parallel
• Modularity – dividing the system functions into separate processes or threads
• Cooperating processes require an interprocess communication (IPC) mechanism
• Two models of IPC
• Shared memory – processes read/write data in a shared memory region
• Message passing – cooperating processes communicate by exchanging messages.
3.4 Interprocess communication (IPC)
• Cooperating processes need an IPC mechanism to exchange data and to
synchronize their actions
• Shared memory – typically faster than message passing: a system call is needed only to
establish the shared memory segment; afterward, it can be accessed in user mode.
• Message passing – all communication uses system calls (continuous kernel intervention
can slow down performance),
• but it is easier to implement than the
shared memory model.
(a) Shared memory. (b) Message passing.
3.5 IPC – Shared Memory
• Normally the OS prevents one process from accessing another process's
memory
• Shared memory requires that two or more processes agree to remove this
restriction.
• One process makes a system call to create a shared memory region (a bounded or
unbounded buffer)
• Other processes make a system call to attach this shared memory region to their address
space.
• The data and its location are determined by these processes and are not under the
operating system's control (accessed in user mode).
• A major issue is to provide a mechanism that will allow the user processes to
synchronize their actions when they access shared memory.
• The processes are responsible for ensuring that they are not writing to the same location
simultaneously!
• The producer-consumer problem is a common paradigm for cooperating processes.
• Synchronization is discussed in great detail in Chapter 5.
3.6 IPC in message-passing systems - Introduction
• Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the
same address space.
• Particularly useful in networking, e.g., internet chat
3.6 IPC in message-passing - Direct Communication
Processes that want to communicate must have a way to refer to each other
• Under direct communication, processes must name each other explicitly:
• send (P, message) – Send a message to process P
• receive(Q, message) – Receive a message from process Q
• Properties of communication link
• Links are established automatically
• A link is associated with exactly one pair of communicating processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-directional
• This scheme exhibits symmetry in addressing
• both the sender and the receiver processes must name the other to communicate.
3.6 IPC in message-passing - Direct Communication
• A variant of direct communication employs asymmetry in addressing
• only the sender names the recipient; the recipient is not required to name the sender.
• The primitives are then defined as follows:
• send(P, message) – Send a message to process P
• receive(id, message) – Receive a message from any process
• The variable id is set to the name of the process with which communication has taken
place.
• Limitation of both symmetric and asymmetric direct messaging
• It limits the modularity of the resulting process definitions.
• Changing the id of a process may necessitate examining all other process definitions to
update them with the new id.
• All occurrences of the old id must be found and updated.
3.6 IPC in message-passing - Indirect Communication
• Messages are sent to and received from mailboxes or ports
• A mailbox can be viewed abstractly as an object into which messages
can be placed or removed by processes.
• Each mailbox has unique identification
• Two processes can communicate only if they have a shared mailbox
• primitives are defined as follows:
• send (A, message) – Send a message to mailbox A
• receive(A, message) – Receive a message from mailbox A
• Operations
• create a new mailbox
• send and receive messages through mailbox
• destroy a mailbox
3.6 IPC in message-passing - Indirect Communication
• Mailbox sharing
• Processes P1, P2, and P3 all share mailbox A
• P1 sends to A while P2 and P3 receive (execute a receive() from A)
• Which process will receive the message sent by P1?
3.6 IPC in message-passing - Synchronization
• Message passing may be either blocking or non-blocking
• Blocking – also known as synchronous message passing
• Blocking send. The sending process is blocked until the message is received
by the receiving process or by the mailbox.
• Blocking receive. The receiver blocks until a message is available.
3.6 IPC in message-passing - Buffering
Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue.
• Zero capacity:
• No queue exists;
• no messages can wait;
• the sender must block until the recipient receives the message.
• Bounded capacity:
• The queue has finite length n,
• so at most n messages can reside in it.
• If the queue is not full when a new message is sent, the message is placed in the queue, and
the sender can continue execution without waiting.
• If the queue is full, the sender must block until space is available in the queue.
• Unbounded capacity:
• The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender
never blocks.
Note: The zero-capacity case is sometimes referred to as a message system with no buffering.
The other cases are referred to as systems with automatic buffering.
3.7 and 3.8 Examples of IPC/RPC Mechanisms
• Pipes
• Sockets
• Remote Procedure Calls (RPC)
3.7 Pipes
• Acts as a conduit allowing two processes to communicate
• Were one of the first IPC mechanisms in early UNIX systems
• They typically provide one of the simpler ways for processes to
communicate with one another
• In implementing a pipe, four issues must be considered:
• Does the pipe allow bidirectional communication, or is communication
unidirectional?
• If bidirectional communication is allowed, is it half-duplex or full-duplex?
• Must a relationship (such as parent-child) exist between the communicating processes?
• Can the pipes be used over a network, or must the communicating processes reside on
the same machine?
• Two common types of pipes
• Ordinary pipes – cannot be accessed from outside the processes that created them.
Typically, a parent process creates a pipe and uses it to communicate with a child
process that it created (a parent-child relationship).
• Named pipes – can be accessed without a parent-child relationship.
3.7 Ordinary Pipes
• Ordinary pipes allow communication in standard producer-
consumer style
• The producer writes to one end (the write end of the pipe)
• The consumer reads from the other end (the read end of the pipe)
• As a result, ordinary pipes are unidirectional (only one-way communication)
• For two-way communication, two pipes must be used (each pipe sending data
in a different direction)
• They require a parent-child relationship between the communicating processes:
pipe(int fd[])
• On Windows they are called anonymous pipes
3.7 Ordinary Pipes - pipe(int fd[]) function
• This function creates a pipe that is accessed through the int fd[] file descriptors:
• fd[0] is the read end and fd[1] is the write end of the pipe.
• Pipes can be accessed using ordinary read() and write() system calls.
• Typically, a parent process creates a pipe and uses it to communicate with a child
process that it creates via fork().
• The child inherits the pipe from its parent process.
• Relationship of the file descriptors in the fd array to the parent and child processes:
• Parent writes on its pipe write end —fd[1]— child reads from its pipe read end—fd[0]
3.7 Ordinary Pipes – Creation in UNIX
• System call: pipe(int fd[2])
• The parent process creates the pipe, obtaining two file descriptors:
• fd[0] – a file descriptor for the read end of the pipe
• fd[1] – a file descriptor for the write end of the pipe
/* Array for storing 2 file descriptors */
int fd[2],pid, ret;
/* create the pipe */
ret = pipe(fd);
if (ret == -1) {
fprintf(stderr,"Pipe failed");
return 1;
}
• So far, both endpoints belong to the process that created the pipe; up to
now, the process can only talk to itself!
Note: After calling pipe(fd), the descriptor values are updated, e.g., to fd[0]=3 and fd[1]=4
3.7 Ordinary Pipes – Creation in UNIX
• System call: fork()
• The parent process creates a child process:
• The child inherits the parent's memory space – including the pipe endpoints

/* Array for storing 2 file descriptors */
int fd[2], pid, ret;
/* create the pipe */
ret = pipe(fd);
if (ret == -1) {
    fprintf(stderr,"Pipe failed");
    return 1;
}
/* fork a child process */
pid = fork();
if (pid < 0) { /* error occurred */
    fprintf(stderr, "Fork Failed");
    return 1;
}
3.7 Ordinary Pipes – Creation in UNIX
• In this instance, the parent wants to write to the pipe
• So it is important for the parent process to initially close its unused end of the
pipe (the read end).

/* Array for storing 2 file descriptors */
int fd[2], pid, ret;
/* create the pipe */
ret = pipe(fd);
if (ret == -1) {
    fprintf(stderr,"Pipe failed");
    return 1;
}
/* fork a child process */
pid = fork();
if (pid < 0) { /* error occurred */
    fprintf(stderr, "Fork Failed");
    return 1;
}
if (pid > 0) { /* parent process */
    /* close the unused end of the pipe */
    close(fd[0]);
3.7 Ordinary Pipes – Creation in UNIX
• In this instance, the child will have to read from the pipe
• So it is important for the child process to initially close its unused end of the
pipe (the write end).

...
/* fork a child process */
pid = fork();
if (pid < 0) { /* error occurred */
    fprintf(stderr, "Fork Failed");
    return 1;
}
if (pid > 0) { /* parent process */
    /* close the unused end of the pipe */
    close(fd[0]);
    ...
}
else { /* child process */
    /* close the unused end of the pipe */
    close(fd[1]);
3.7 Ordinary Pipes – Creation in UNIX
• Now the parent can write to the pipe, and the child can read from
it. How?
• read(fd[0], dataBuf_x, count) – read count characters (bytes) or fewer
from the pipe into the dataBuf_x character array
• where fd[0] = 3, dataBuf_x is a pointer to dataBuf_x[SIZE], and count is an
integer ≤ SIZE
• write(fd[1], dataBuf_y, count) – write count characters (bytes) from
the dataBuf_y character array into the pipe
• where fd[1] = 4, dataBuf_y is a pointer to dataBuf_y[SIZE], and count is an
integer ≤ SIZE
Note: In C, the name of an array is a pointer to the first element of that array
3.7 Ordinary Pipes – The complete C code in UNIX

#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BUFFER_SIZE 25
#define READ_END 0
#define WRITE_END 1

int main(void)
{
    char write_msg[BUFFER_SIZE] = "Greetings";
    char read_msg[BUFFER_SIZE];
    int fd[2];
    pid_t pid;

    /* create the pipe */
    if (pipe(fd) == -1) {
        fprintf(stderr,"Pipe failed");
        return 1;
    }
    /* fork a child process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        return 1;
    }
    if (pid > 0) { /* parent process */
        /* close the unused end of the pipe */
        close(fd[READ_END]);
        /* write to the pipe */
        write(fd[WRITE_END], write_msg, strlen(write_msg)+1);
        /* close the write end of the pipe */
        close(fd[WRITE_END]);
    }
    else { /* child process */
        /* close the unused end of the pipe */
        close(fd[WRITE_END]);
        /* read from the pipe */
        read(fd[READ_END], read_msg, BUFFER_SIZE);
        printf("read %s", read_msg);
        /* close the read end of the pipe */
        close(fd[READ_END]);
    }
    return 0;
}
3.7 Ordinary Pipes – recap and final note
• When the parent calls fork(), the child gets copies of the parent's open file
descriptors.
• That includes the pipe endpoints!
• So, how does the parent talk to its child?
• The parent creates the pipe and calls fork() to create the child
• The parent closes the read end of the pipe and writes to the write end
• The child closes the write end of the pipe and reads from the read end
• Try to set up the communication in the opposite direction as an exercise!
• If we need 2-way communication, then 2 pipes can be used
3.7 Ordinary Pipes –final note - limitation
• Ordinary pipes:
• exist only while the processes are communicating with one another; once the processes
have finished and terminated, the pipe ceases to exist.
• require a parent-child relationship between the communicating processes. This
means that they can be used only for communication between processes on the
same machine.
• Gotchas:
• Each pipe is implemented using a fixed-size system buffer:
• if the buffer becomes full, the writer is blocked until the reader reads enough
• if the buffer is empty, the reader is blocked until data is written
• What happens if both processes start by reading from their pipes, or by writing when the
pipes are full?
• Deadlock, of course!
• When working with several pipes, one has to be extra careful to avoid deadlock
Note: Deadlock will be covered in detail in a future chapter
3.7 Named Pipes
3.7 Unix Named Pipes
• Named pipes are referred to as FIFOs in UNIX systems
• Created with the mkfifo() system call.
• Once created, they appear as typical files in the file system
• They can be manipulated with the ordinary open(), read(), write(), and close()
system calls.
• A FIFO continues to exist until it is explicitly deleted from the file system
• FIFOs are bidirectional, but only half-duplex. For full-duplex communication, two
FIFOs are typically used
• The communicating processes must reside on the same machine.
• If intermachine communication is required, sockets (next section) must be
used
3.7 Windows Named Pipes
3.8 Communication in client-server systems - Sockets
• A socket is defined as an endpoint for communication
• A pair of processes communicating over a network employs a pair of sockets
• A socket is identified by an IP address concatenated with a port number
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
• A communication link corresponds to a pair of sockets
• Port 80 is reserved for the HTTP server. It may not be used for any other
purpose.
3.8 Sockets
If a client process initiates a connection request, it is assigned a port by its host
computer
• This port is some arbitrary number greater than 1024.
• For example, if a client on host X with IP address 146.86.5.20 establishes a connection with a
web server (normally listening on port 80) at address 161.25.19.8, host X may be assigned
port 1625.
• The connection will consist of a pair of sockets:
(146.86.5.20:1625) on host X and
(161.25.19.8:80) on the web server.
• The packets traveling between the hosts
are delivered to the appropriate process
based on the destination port number.
• All connections must be unique.
• If another process on host X tries to establish another connection with
the same web server, it is assigned a port number greater than
1024 other than 1625.
• This ensures that all connections consist of a unique pair of sockets.
3.8 Remote Procedure Calls (RPC)
• A message-based communication scheme that provides remote service
• It is similar in many respects to the IPC mechanisms, but the
processes are executing on separate systems
• The parameters and return values need to be somehow transferred
between the computers.
• The computers might have different data formats (e.g., big- and little-endian
systems) – External Data Representation (XDR) is used to resolve such data-
representation differences between two hosts communicating using RPCs
• Support is needed for locating the server/procedure required
3.8 Remote Procedure Calls (RPC)
• Stubs – client- and server-side proxies (reduced functions) implementing
the needed communication
• The client-side stub locates the server and marshals (converts) the parameters.
• Then the stub transmits a message to the server using message passing
• The server-side stub receives this message and invokes the procedure on the server
• If necessary, return values are passed back to the client using the same technique
• But how does a client know the port number on the server?
• Two approaches:
• An RPC call has a fixed port number associated with it – once a program is compiled,
the server cannot change the port number of the requested service.
• Dynamically, by an OS-provided matchmaker (rendezvous mechanism): a function that
matches a caller to the service being called (e.g., an RPC call attempting to find a server
daemon). See the following figure.
3.8 Execution of RPC
Do not forget to read the related sections of Ch. 3 on zybook
and to complete the related/suggested interactive
participation exercises by the due date
End of Chapter 3
Thank you