CST2555 2022/23
Operating Systems and Computer Networks
What we will learn today
• Process Concept
• Process Scheduling
• Operations on Processes
• Interprocess Communication
• IPC in Shared-Memory Systems
• IPC in Message-Passing Systems
• Examples of IPC Systems
• Communication in Client-Server Systems
After this lecture, you will:
• Identify the separate components of a process and illustrate how
they are represented and scheduled in an operating system.
• Describe how processes are created and terminated in an
operating system, including developing programs using the
appropriate system calls that perform these operations.
• Describe and contrast interprocess communication using shared
memory and message passing.
• Design programs that use pipes and POSIX shared memory to
perform interprocess communication.
• Describe client-server communication using sockets and remote
procedure calls.
• Design kernel modules that interact with the Linux operating
system.
Process Concept
• An operating system executes a variety of programs that
run as a process.
• Process – a program in execution; process execution must
progress in sequential fashion. No parallel execution of
instructions of a single process
• Multiple parts
• The program code, also called text section
• Current activity including program counter, processor registers
• Stack containing temporary data
• Function parameters, return addresses, local variables
• Data section containing global variables
• Heap containing memory dynamically allocated during run
time
Process Concept (Cont.)
• Program is passive entity stored on disk
(executable file); process is active
• Program becomes process when an executable
file is loaded into memory
• Execution of program started via GUI mouse
clicks, command line entry of its name, etc.
• One program can be several processes
• Consider multiple users executing the same
program
Process in Memory
Memory Layout of a C Program
Process State
• This multi-process architecture is a Google Chrome feature that keeps the
browser from depending on any single process to function.
• If a particular process freezes or stops working, the other processes are not
affected, so you can keep working in Chrome. Running multiple processes
simultaneously also makes Chrome more responsive.
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other
processes, including sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
• Cooperating processes need interprocess communication
(IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models
(a) Shared memory. (b) Message passing.
Producer-Consumer Problem
• Paradigm for cooperating processes:
• producer process produces information that is
consumed by a consumer process
• Two variations:
• unbounded-buffer places no practical limit on
the size of the buffer:
• Producer never waits
• Consumer waits if there is no buffer to consume
• bounded-buffer assumes that there is a fixed
buffer size
• Producer must wait if all buffers are full
• Consumer waits if there is no buffer to consume
IPC – Shared Memory
• The producer-consumer problem (or bounded-buffer problem) describes two processes,
the producer and the consumer, which share a common, fixed-size buffer used as a
queue. The producer produces an item and puts it into the buffer; if the buffer is full, the
producer must wait for an empty slot. The consumer consumes an item from the buffer;
if the buffer is empty, the consumer must wait for an item.
• Suppose that we wanted to provide a solution to the producer-consumer problem that
fills all the buffers. We can do so by having an integer counter that keeps track of the
number of full buffers. Initially, counter is set to 0. It is incremented by the producer
after it produces a new buffer and is decremented by the consumer after it consumes a
buffer.
Producer/Consumer
"in" in the producer code indexes the next empty slot in the buffer; "out" in the
consumer code indexes the first filled slot. "counter" keeps the count of elements
in the buffer.

Producer
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Analysis
• Producer Consumer Problem involving ‘counter’
• Although the producer and consumer routines shown above are correct
separately, they may not function correctly when executed concurrently.
As an illustration, suppose that the value of the variable counter is
currently 5 and that the producer and consumer processes concurrently
execute the statements "counter++" and "counter--". Following the
execution of these two statements, the value of counter may
be 4, 5, or 6! The only correct result, though, is counter == 5, which is
generated correctly if the producer and consumer execute separately.
Race Condition
• Processes P0 and P1 are creating child processes using the fork() system call
• Race condition on kernel variable next_available_pid which represents the next
available process identifier (pid)
• Unless there is mutual exclusion, the same pid could be assigned to two different
processes!
• A race condition occurs when two or more operations execute at the same time
without being scheduled in the proper sequence, so the outcome depends on the
order in which the shared resource is accessed.
• Race conditions are most associated with computer science and programming. They occur
when two computer program processes, or threads, attempt to access the same resource
at the same time and cause problems in the system.
Race Condition
• counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
• counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
• Implementation of the communication link
• Physical:
• Shared memory
• Hardware bus
• Network
• Logical:
• Direct or indirect
• Synchronous or asynchronous
• Automatic or explicit buffering
Indirect Communication
• Operations
• Create a new mailbox (port)
• Send and receive messages through mailbox
• Delete a mailbox
• Primitives are defined as:
• send(A, message) – send a message to mailbox A
• receive(A, message) – receive a message from
mailbox A
Indirect Communication (Cont.)
• Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1, sends; P2 and P3 receive
• Who gets the message?
• Solutions
• Allow a link to be associated with at most two
processes
• Allow only one process at a time to execute a
receive operation
• Allow the system to select arbitrarily the
receiver. Sender is notified who the receiver
was.
Synchronization
Message passing may be either blocking or non-blocking
• Producer
message next_produced;
while (true) {
/* produce an item in next_produced */
send(next_produced);
}
• Consumer
message next_consumed;
while (true) {
receive(next_consumed);
/* consume the item in next_consumed */
}
IPC – POSIX Shared Memory
POSIX shared memory is organized using memory-mapped files, which associate the region of shared memory with a file. A
process must first create a shared-memory object using the shm_open() system call, as follows:
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
• The first parameter specifies the name of the shared-memory object. Processes that wish to access this shared
memory must refer to the object by this name. O_CREAT | O_RDWR : The subsequent parameters specify that the
shared-memory object is to be created if it does not yet exist (O_CREAT) and that the object is open for reading and
writing (O_RDWR). The last parameter establishes the directory permissions of the shared-memory object.
• A successful call to shm_open() returns an integer file descriptor for the shared-memory object. Once the object is
established, the ftruncate() function is used to configure the size of the object in bytes. The call
ftruncate(shm_fd, 4096);
• sets the size of the object to 4,096 bytes.
• Finally, the mmap() function establishes a memory-mapped file containing the shared-memory object. It also returns
a pointer to the memory-mapped file that is used for accessing the shared-memory object.
• Reading and writing to shared memory is done by using the pointer returned by mmap().
IPC POSIX Producer
/* C program for the producer process illustrating the POSIX shared-memory API */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/shm.h>
#include <sys/stat.h>
#include <sys/mman.h>

int main()
{
    /* the size (in bytes) of shared memory object */
    const int SIZE = 4096;
    /* name of the shared memory object */
    const char* name = "OS";
    /* strings written to shared memory */
    const char* message_0 = "Hello";
    const char* message_1 = "World!";
    /* shared memory file descriptor */
    int shm_fd;
    /* pointer to shared memory object */
    void* ptr;

    /* create the shared memory object */
    shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    /* configure the size of the shared memory object */
    ftruncate(shm_fd, SIZE);
    /* memory map the shared memory object */
    ptr = mmap(0, SIZE, PROT_WRITE, MAP_SHARED, shm_fd, 0);

    /* write to the shared memory object */
    sprintf(ptr, "%s", message_0);
    ptr += strlen(message_0);
    sprintf(ptr, "%s", message_1);
    ptr += strlen(message_1);

    return 0;
}
IPC POSIX Consumer
Examples of IPC Systems - Mach
Mach is a kernel developed at Carnegie Mellon University by Richard Rashid and Avie Tevanian to
support operating system research, primarily distributed and parallel computing
Local Procedure Calls in Windows
The typical communication scenario between the server and the client is as follows:
• A server process first creates a named server connection port object and waits for clients to connect.
• A client requests a connection to that named port by sending a connect message.
• If the server accepts the connection, two unnamed ports are created:
• client communication port - used by client threads to communicate with a particular server
• server communication port - used by the server to communicate with a particular client; one such port per client is created
• The client receives a handle to the client communication port, the server receives a handle to the server communication port, and the
inter-process communication channel is established.
• (A)LPC supports the following three modes of message exchange between the server and client:
• For short messages (fewer than 256 bytes), the kernel copies the message buffers between processes: from the address space of the
sending process to the system address space, and from there to the receiving process's address space.
• For messages longer than 256 bytes, a shared memory section must be used to transfer data, which the (A)LPC service maps between
the sending and receiving processes. First the sender places data into the shared memory, and then sends a notification (e.g. a small
message, using the first method of (A)LPC) pointing the receiving process to the sent data in the shared memory section.
• When the amount of data is too large to fit in a shared section, the server can directly read and write data from the client's address space.
Pipes
• Acts as a conduit allowing two processes to communicate
• Issues:
• Is communication unidirectional or bidirectional?
• In the case of two-way communication, is it half or full-duplex?
• Must there exist a relationship (i.e., parent-child) between the
communicating processes?
• Can the pipes be used over a network?
• Ordinary pipes – cannot be accessed from outside the
processes that created them. Typically, a parent process creates a
pipe and uses it to communicate with a child process that it
created.
• Named pipes – can be accessed without a parent-child
relationship.
Ordinary Pipes
• Ordinary Pipes allow communication in standard producer-
consumer style
• Producer writes to one end (the write-end of the pipe)
• Consumer reads from the other end (the read-end of the pipe)
• Ordinary pipes are therefore unidirectional
• Require parent-child relationship between communicating
processes
Communication in Client-Server Systems
• Sockets
• Remote Procedure Calls
Sockets
• A socket is defined as an endpoint for communication
• Socket programming is a way of connecting two nodes on a network
to communicate with each other. One socket (node) listens on a
particular port at an IP, while the other socket reaches out to the other
to form a connection. The server forms the listener socket while the
client reaches out to the server.
Remote Procedure Calls
• The client calls the client stub. The call is a local procedure call with parameters pushed onto the stack in the
normal way.
• The client stub packs the procedure parameters into a message and makes a system call to send the
message. The packing of the procedure parameters is called marshalling.
• The client's local OS sends the message from the client machine to the remote server machine.
• The server OS passes the incoming packets to the server stub.
• The server stub unpacks the parameters -- called unmarshalling -- from the message and calls the server procedure.
• When the server procedure is finished, it returns to the server stub, which marshals the return values into a
message. The server stub then hands the message to the transport layer.
• The transport layer sends the resulting message back to the client transport layer, which hands the message
back to the client stub.
• The client stub unmarshalls the return parameters, and execution returns to the caller.
Remote Procedure Calls (Cont.)
• Data representation handled via External Data Representation
(XDR) format to account for different architectures
• Big-endian and little-endian