
Week 03

CST2555 2022/23
Operating Systems and Computer Networks
What we will learn today
• Process Concept
• Process Scheduling
• Operations on Processes
• Interprocess Communication
• IPC in Shared-Memory Systems
• IPC in Message-Passing Systems
• Examples of IPC Systems
• Communication in Client-Server Systems
After this lecture, you will:
• Identify the separate components of a process and illustrate how
they are represented and scheduled in an operating system.
• Describe how processes are created and terminated in an
operating system, including developing programs using the
appropriate system calls that perform these operations.
• Describe and contrast interprocess communication using shared
memory and message passing.
• Design programs that use pipes and POSIX shared memory to
perform interprocess communication.
• Describe client-server communication using sockets and remote
procedure calls.
• Design kernel modules that interact with the Linux operating
system.
Process Concept
• An operating system executes a variety of programs that
run as a process.
• Process – a program in execution; process execution must
progress in sequential fashion. No parallel execution of
instructions of a single process
• Multiple parts
• The program code, also called text section
• Current activity including program counter, processor registers
• Stack containing temporary data
• Function parameters, return addresses, local variables
• Data section containing global variables
• Heap containing memory dynamically allocated during run
time
Process Concept (Cont.)
• A program is a passive entity stored on disk
(executable file); a process is an active entity
• Program becomes process when an executable
file is loaded into memory
• Execution of program started via GUI mouse
clicks, command line entry of its name, etc.
• One program can be several processes
• Consider multiple users executing the same
program
Process in Memory
Memory Layout of a C Program
Process State

• As a process executes, it changes state


• New: The process is being created
• Running: Instructions are being executed
• Waiting: The process is waiting for some event to
occur
• Ready: The process is waiting to be assigned to a
processor
• Terminated: The process has finished execution
Diagram of Process State
Process Control Block (PCB)
Information associated with each process (also called task control block):

• Process state – running, waiting, etc.


• Program counter – location of the next instruction to execute
• CPU registers – contents of all process-centric registers
• CPU scheduling information- priorities, scheduling queue
pointers
• Memory-management information – memory allocated to
the process
• Accounting information – CPU used, clock time elapsed
since start, time limits
• I/O status information – I/O devices allocated to process,
list of open files
Threads
• So far, process has a single thread of execution
• Consider having multiple program counters
per process
• Multiple locations can execute at once
• Multiple threads of control -> threads
• Must then have storage for thread details,
multiple program counters in PCB
Process Representation in Linux

Represented by the C structure task_struct, which contains all
information about a process:

pid_t pid;                    /* process identifier */
long state;                   /* state of the process */
unsigned int time_slice;      /* scheduling information */
struct task_struct *parent;   /* this process's parent */
struct list_head children;    /* this process's children */
struct files_struct *files;   /* list of open files */
struct mm_struct *mm;         /* address space of this process */
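One of this week's outcomes is designing kernel modules that interact with Linux. As a minimal, hedged sketch (the module name, file name and printed messages are illustrative, not from the slides), the module below uses the current pointer to the running task's task_struct to print its pid and command name when loaded:

/* task_info.c - minimal loadable kernel module (illustrative) */
#include <linux/module.h>   /* module_init, module_exit, MODULE_LICENSE */
#include <linux/kernel.h>   /* printk */
#include <linux/init.h>
#include <linux/sched.h>    /* struct task_struct, current */

static int __init task_info_init(void)
{
        /* 'current' points to the task_struct of the process loading the module */
        printk(KERN_INFO "task_info: loaded by pid %d (%s)\n",
               current->pid, current->comm);
        return 0;
}

static void __exit task_info_exit(void)
{
        printk(KERN_INFO "task_info: module removed\n");
}

module_init(task_info_init);
module_exit(task_info_exit);
MODULE_LICENSE("GPL");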
Process Scheduling
• Process scheduler selects among available
processes for next execution on CPU core
• Goal -- Maximize CPU use, quickly switch
processes onto CPU core
• Maintains scheduling queues of processes
• Ready queue – set of all processes residing in
main memory, ready and waiting to execute
• Wait queues – set of processes waiting for an
event (e.g., I/O)
• Processes migrate among the various queues
Ready and Wait Queues
Representation of Process Scheduling
CPU Switch From Process to Process
A context switch occurs when the CPU switches from
one process to another.
Context Switch
• When CPU switches to another process, the
system must save the state of the old process and
load the saved state for the new process via a
context switch
• Context of a process represented in the PCB
• Context-switch time is pure overhead; the system
does no useful work while switching
• The more complex the OS and the PCB, the longer
the context switch
• Time dependent on hardware support
• Some hardware provides multiple sets of registers per
CPU, so multiple contexts can be loaded at once
Operations on Processes

• System must provide mechanisms for:


• Process creation
• Process termination
Process Creation

• Parent process creates child processes, which, in turn,
create other processes, forming a tree of processes
• Generally, process identified and managed via
a process identifier (pid)
• Resource sharing options
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
• Execution options
• Parent and children execute concurrently
• Parent waits until children terminate
Process Creation (Cont.)
• Address space
• Child duplicate of parent
• Child has a program loaded into it
• UNIX examples
• fork() system call creates new process
• exec() system call used after a fork() to replace the
process’ memory space with a new program
• Parent process calls wait(), waiting for the child to
terminate
A Tree of Processes in Linux
C Program Forking Separate Process
fork() takes no parameters and returns an integer value.
The possible values returned by fork() are:
• Negative value: creation of a child process was unsuccessful.
• Zero: returned to the newly created child process.
• Positive value: returned to the parent (caller). The value is the
process ID of the newly created child process.

What fork() returns therefore depends on which process you are
currently in: in the parent process ("A") it returns the process ID
(PID) of the child ("B"), while in the child process ("B") it returns 0.
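As a hedged illustration of fork(), exec() and wait() together (a minimal sketch, not necessarily the exact program shown on the slide), the parent below forks a child, the child replaces its memory image with /bin/ls via execlp(), and the parent waits for it to terminate:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* negative value: fork failed */
        fprintf(stderr, "fork failed\n");
        return 1;
    } else if (pid == 0) {              /* zero: we are in the child */
        execlp("/bin/ls", "ls", NULL);  /* replace the child's memory with a new program */
        _exit(1);                       /* reached only if exec fails */
    } else {                            /* positive value: we are in the parent */
        wait(NULL);                     /* wait for the child to terminate */
        printf("Child %d complete\n", pid);
    }
    return 0;
}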
Process Termination
• Process executes its last statement and then asks the
operating system to delete it using the exit() system
call (i.e., it exits voluntarily)
• Returns status data from child to parent (via wait())
• Process’ resources are deallocated by operating system
• Parent may terminate the execution of children
processes using the abort() system call. Some
reasons for doing so:
• Child has exceeded allocated resources
• Task assigned to child is no longer required
• The parent is exiting, and the operating system does not
allow a child to continue if its parent terminates
Process Termination
• Some operating systems do not allow a child to exist if its parent
has terminated. If a process terminates, then all its children must
also be terminated.
• cascading termination. All children, grandchildren, etc., are terminated.
• The termination is initiated by the operating system.
• The parent process may wait for termination of a child process by
using the wait() system call. The wait() system call suspends
execution of the current process until one of its children terminates.
The call returns status information and the pid of the terminated process:
pid = wait(&status);
• If no parent waiting (did not invoke wait()) process is a zombie
• If parent terminated without invoking wait(), process is an
orphan
Android Process Importance Hierarchy

• Mobile operating systems often have to terminate


processes to reclaim system resources such as
memory. From most to least important:
• Foreground process
• Visible process
• Service process
• Background process
• Empty process
• Android will begin terminating processes that are
least important.
Multiprocess Architecture – Chrome Browser
• Many web browsers ran as a single process (some still do)
• If one web site causes trouble, entire browser can hang or crash
• Google Chrome Browser is multiprocess with 3 different types of
processes:
• Browser process manages user interface, disk and network I/O
• Renderer process renders web pages and is responsible for displaying the UI using HTML, CSS,
and JavaScript. A new renderer process is created for each website opened
• Plug-in process for each type of plug-in ( add-ons or extensions)

• This multi-process architecture is a Google Chrome feature that means the browser does
not rely on every single process working in order to function.
• If a particular process freezes or stops working, the other processes won't be
affected, so you can resume working in Chrome. Running multiple processes
simultaneously also makes Chrome more responsive.
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other
processes, including sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
• Cooperating processes need interprocess communication
(IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models
(a) Shared memory. (b) Message passing.
Producer-Consumer Problem
• Paradigm for cooperating processes:
• producer process produces information that is
consumed by a consumer process
• Two variations:
• unbounded-buffer places no practical limit on
the size of the buffer:
• Producer never waits
• Consumer waits if there is no buffer to consume
• bounded-buffer assumes that there is a fixed
buffer size
• Producer must wait if all buffers are full
• Consumer waits if there is no buffer to consume
IPC – Shared Memory

• An area of memory shared among the


processes that wish to communicate
• The communication is under the control of the
user processes, not the operating system.
• The major issue is to provide a mechanism that will
allow the user processes to synchronize their
actions when they access shared memory.
Producer Consumer Problem
• Problem:

• The producer-consumer problem (or bounded-buffer problem) describes two processes,
the producer and the consumer, which share a common, fixed-size buffer used as a
queue. The producer produces an item and puts it into the buffer. If the buffer is already full,
the producer has to wait for an empty slot in the buffer. The consumer consumes an item from
the buffer. If the buffer is already empty, the consumer has to wait for an item to appear in the buffer.

Suppose that we wanted to provide a solution to the producer-consumer problem that
fills all the buffers. We can do so by having an integer counter that keeps track of the
number of full buffers. Initially, counter is set to 0. It is incremented by the producer
after it produces a new item and is decremented by the consumer after it consumes an
item.
Producer/Consumer
"in" in the producer code indexes the next empty slot in the buffer; "out" in the
consumer code indexes the first filled slot; "counter" keeps the number of
elements currently in the buffer.

Producer

while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer

while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Analysis
• Producer Consumer Problem involving ‘counter’
• Although the producer and consumer routines shown above are correct
separately, they may not function correctly when executed concurrently.
As an illustration, suppose that the value of the variable counter is
currently 5 and that the producer and consumer processes concurrently
execute the statements "counter++" and "counter--". Following the
execution of these two statements, the value of the variable counter may
be 4, 5, or 6! The only correct result, though, is counter == 5, which is
generated correctly if the producer and consumer execute separately.
Race Condition
• Processes P0 and P1 are creating child processes using the fork() system call
• Race condition on kernel variable next_available_pid which represents the next
available process identifier (pid)

• Unless there is mutual exclusion, the same pid could be assigned to two different
processes!
• A race condition occurs when two or more operations execute at the same time and are not
scheduled in the proper sequence, so access to the critical section is not handled correctly.
• Race conditions are most often associated with computer science and programming. They occur
when two processes, or threads, attempt to access the same resource
at the same time and cause problems in the system.
Race Condition
• counter++ could be implemented as

register1 = counter
register1 = register1 + 1
counter = register1

• counter-- could be implemented as

register2 = counter
register2 = register2 - 1
counter = register2

• Consider this execution interleaving with “counter = 5” initially:


S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
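To observe this interleaving on a real machine, the hedged sketch below (an addition, not from the slides) runs the increment and decrement loops in two POSIX threads that share one counter with no synchronization; compiled with gcc -pthread, the final value is usually not the expected 0:

#include <stdio.h>
#include <pthread.h>

#define ITERATIONS 1000000

static long counter = 0;              /* shared variable, deliberately unprotected */

static void *producer(void *arg)
{
    for (long i = 0; i < ITERATIONS; i++)
        counter++;                    /* read-modify-write: not atomic */
    return NULL;
}

static void *consumer(void *arg)
{
    for (long i = 0; i < ITERATIONS; i++)
        counter--;
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %ld (expected 0)\n", counter);   /* the race makes this unpredictable */
    return 0;
}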
IPC – Message Passing

• Processes communicate with each other


without resorting to shared variables

• IPC facility provides two operations:


• send(message)
• receive(message)

• The message size is either fixed or


variable
Message Passing (Cont.)
• If processes P and Q wish to communicate, they need
to:
• Establish a communication link between them
• Exchange messages via send/receive
• Implementation issues:
• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of
communicating processes?
• What is the capacity of a link?
• Is the size of a message that the link can accommodate
fixed or variable?
• Is a link unidirectional or bi-directional?
Implementation of Communication Link

• Physical:
• Shared memory
• Hardware bus
• Network
• Logical:
• Direct or indirect
• Synchronous or asynchronous
• Automatic or explicit buffering
Direct Communication

• Processes must name each other explicitly:


• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from
process Q
• Properties of communication link
• Links are established automatically
• A link is associated with exactly one pair of
communicating processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-
directional
Indirect Communication

• Messages are directed and received from mailboxes


(also referred to as ports)
• Each mailbox has a unique id
• Processes can communicate only if they share a mailbox
• Properties of communication link
• Link established only if processes share a common
mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication
links
• Link may be unidirectional or bi-directional
Indirect Communication (Cont.)

• Operations
• Create a new mailbox (port)
• Send and receive messages through mailbox
• Delete a mailbox
• Primitives are defined as:
• send(A, message) – send a message to mailbox A
• receive(A, message) – receive a message from
mailbox A
Indirect Communication (Cont.)
• Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1, sends; P2 and P3 receive
• Who gets the message?
• Solutions
• Allow a link to be associated with at most two
processes
• Allow only one process at a time to execute a
receive operation
• Allow the system to select arbitrarily the
receiver. Sender is notified who the receiver
was.
Synchronization
Message passing may be either blocking or non-blocking

• Blocking is considered synchronous


• Blocking send -- the sender is blocked until the
message is received
• Blocking receive -- the receiver is blocked until a
message is available
• Non-blocking is considered asynchronous
• Non-blocking send -- the sender sends the message
and continues
• Non-blocking receive -- the receiver receives:
• A valid message, or
• Null message
Producer-Consumer: Message Passing

• Producer
message next_produced;
while (true) {
/* produce an item in next_produced */

send(next_produced);
}

• Consumer
message next_consumed;
while (true) {
receive(next_consumed)

/* consume the item in next_consumed */


}
Buffering

• Queue of messages attached to the link.


• Implemented in one of three ways
1. Zero capacity – no messages are queued on a link.
Sender must wait for receiver
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
Examples of IPC Systems - POSIX
Several IPC mechanisms are available for POSIX systems, including shared memory and message passing.
Here, we explore the POSIX API for shared memory.

POSIX shared memory is organized using memory-mapped files, which associate the region of shared memory with a file. A
process must first create a shared-memory object using the shm_open() system call, as follows:

• Process first creates shared memory segment


shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);

• The first parameter specifies the name of the shared-memory object. Processes that wish to access this shared
memory must refer to the object by this name. O_CREAT | O_RDWR : The subsequent parameters specify that the
shared-memory object is to be created if it does not yet exist (O_CREAT) and that the object is open for reading and
writing (O_RDWR). The last parameter establishes the directory permissions of the shared-memory object.

• A successful call to shm_open() returns an integer file descriptor for the shared-memory object. Once the object is
established, the ftruncate() function is used to configure the size of the object in bytes. The call

ftruncate(shm_fd, 4096);
• sets the size of the object to 4,096 bytes.
• Finally, the mmap() function establishes a memory-mapped file containing the shared-memory object. It also returns
a pointer to the memory-mapped file that is used for accessing the shared-memory object.

• Reading and writing to shared memory is done by using the pointer returned by mmap().
IPC POSIX Producer
//C program for Producer process illustrating POSIX shared-memory API.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/shm.h>
#include <sys/stat.h>
#include <sys/mman.h>

int main()
{
    /* the size (in bytes) of shared memory object */
    const int SIZE = 4096;

    /* name of the shared memory object */
    const char* name = "OS";

    /* strings written to shared memory */
    const char* message_0 = "Hello";
    const char* message_1 = "World!";

    /* shared memory file descriptor */
    int shm_fd;

    /* pointer to shared memory object */
    void* ptr;

    /* create the shared memory object */
    shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);

    /* configure the size of the shared memory object */
    ftruncate(shm_fd, SIZE);

    /* memory map the shared memory object */
    ptr = mmap(0, SIZE, PROT_WRITE, MAP_SHARED, shm_fd, 0);

    /* write to the shared memory object */
    sprintf(ptr, "%s", message_0);
    ptr += strlen(message_0);
    sprintf(ptr, "%s", message_1);
    ptr += strlen(message_1);

    return 0;
}
IPC POSIX Consumer
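A minimal sketch of the matching consumer, assuming the same object name "OS" and size as the producer above: it opens the shared-memory object read-only, maps it, prints its contents, and removes the object with shm_unlink():

//C program for Consumer process illustrating POSIX shared-memory API.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/shm.h>
#include <sys/stat.h>
#include <sys/mman.h>

int main()
{
    /* the size (in bytes) of shared memory object */
    const int SIZE = 4096;

    /* name of the shared memory object */
    const char* name = "OS";

    /* shared memory file descriptor */
    int shm_fd;

    /* pointer to shared memory object */
    void* ptr;

    /* open the shared memory object created by the producer */
    shm_fd = shm_open(name, O_RDONLY, 0666);

    /* memory map the shared memory object, read-only */
    ptr = mmap(0, SIZE, PROT_READ, MAP_SHARED, shm_fd, 0);

    /* read from the shared memory object */
    printf("%s", (char *) ptr);

    /* remove the shared memory object */
    shm_unlink(name);

    return 0;
}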
Examples of IPC Systems - Mach
Mach is a kernel developed at Carnegie Mellon University by Richard Rashid and Avie Tevanian to
support operating system research, primarily distributed and parallel computing

• Mach communication is message based


• Even system calls are messages
• Messages are sent and received using the
mach_msg() function
• Ports needed for communication, created via
mach_port_allocate()
• Send and receive are flexible; for example four options
if mailbox full:
• Wait indefinitely
• Wait at most n milliseconds
• Return immediately
• Temporarily cache a message
Examples of IPC Systems – Windows
• Message-passing centric via the advanced local procedure call (ALPC) facility
• Only works between processes on the same system
• Uses ports (like mailboxes) to establish and maintain communication channels

The typical communication scenario between the server and the client is as follows:

• A server process first creates a named server connection port object, and waits for clients to connect.
• A client requests a connection to that named port by sending a connect message.
• If the server accepts the connection, two unnamed ports are created:
• client communication port - used by client threads to communicate with a particular server
• server communication port - used by the server to communicate with a particular client; one such port per client is created
• The client receives a handle to the client communication port, and server receives a handle to the server communication port, and the
inter-process communication channel is established.
• (A)LPC supports the following three modes of message exchange between the server and client:

• For short messages (fewer than 256 bytes) the kernel copies the message buffers between processes, from the address space of the
sending process to the system address space, and from there to the receiving process' address space.
• For messages longer than 256 bytes a shared memory section must be used to transfer data, which the (A)LPC service maps between
the sending and receiving processes. First the sender places data into the shared memory, and then sends a notification (e.g. a small
message, using the first method of (A)LPC) to the receiving process pointing to the sent data in the shared memory section.
• Server can directly read and write data from the client's address space, when the amount of data is too large to fit in a shared section.
Local Procedure Calls in Windows
Pipes
• Acts as a conduit allowing two processes to communicate
• Issues:
• Is communication unidirectional or bidirectional?
• In the case of two-way communication, is it half or full-duplex?
• Must there exist a relationship (i.e., parent-child) between the
communicating processes?
• Can the pipes be used over a network?
• Ordinary pipes – cannot be accessed from outside the
process that created them. Typically, a parent process creates a
pipe and uses it to communicate with a child process that it
created.
• Named pipes – can be accessed without a parent-child
relationship.
Ordinary Pipes
• Ordinary Pipes allow communication in standard producer-
consumer style
• Producer writes to one end (the write-end of the pipe)
• Consumer reads from the other end (the read-end of the pipe)
• Ordinary pipes are therefore unidirectional
• Require parent-child relationship between communicating
processes

• Windows calls these anonymous pipes
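The following hedged sketch (an addition, not the slides' program) shows an ordinary pipe used in this producer-consumer style: the parent writes a message into the write end and the child reads it from the read end:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define BUFFER_SIZE 32
#define READ_END  0
#define WRITE_END 1

int main(void)
{
    int fd[2];                                   /* fd[READ_END] = read end, fd[WRITE_END] = write end */
    char write_msg[BUFFER_SIZE] = "Greetings";
    char read_msg[BUFFER_SIZE];

    if (pipe(fd) == -1) {                        /* create the ordinary (anonymous) pipe */
        fprintf(stderr, "pipe failed\n");
        return 1;
    }

    pid_t pid = fork();
    if (pid > 0) {                               /* parent: the producer */
        close(fd[READ_END]);                     /* close the unused end */
        write(fd[WRITE_END], write_msg, strlen(write_msg) + 1);
        close(fd[WRITE_END]);
        wait(NULL);
    } else if (pid == 0) {                       /* child: the consumer */
        close(fd[WRITE_END]);
        read(fd[READ_END], read_msg, BUFFER_SIZE);
        printf("child read: %s\n", read_msg);
        close(fd[READ_END]);
    }
    return 0;
}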


Named Pipes

• Named Pipes are more powerful than ordinary


pipes
• Communication is bidirectional
• No parent-child relationship is necessary between
the communicating processes
• Several processes can use the named pipe for
communication
• Provided on both UNIX and Windows systems
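As a hedged sketch for the UNIX side (the path /tmp/demo_fifo and the message are illustrative, not from the lecture), a named pipe is created with mkfifo() and can then be opened by unrelated processes through its file-system name; the writer below blocks in open() until some reader (for example, cat /tmp/demo_fifo) opens the other end:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    const char *fifo_path = "/tmp/demo_fifo";    /* illustrative FIFO name */
    const char *msg = "hello over a named pipe\n";

    mkfifo(fifo_path, 0666);                     /* create the named pipe in the file system */

    int fd = open(fifo_path, O_WRONLY);          /* blocks until a reader opens the FIFO */
    write(fd, msg, strlen(msg));
    close(fd);

    unlink(fifo_path);                           /* remove the FIFO name when done */
    return 0;
}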
Communications in Client-Server Systems

• Sockets
• Remote Procedure Calls
Sockets
• A socket is defined as an endpoint for communication
• Socket programming is a way of connecting two nodes on a network
to communicate with each other. One socket(node) listens on a
particular port at an IP, while the other socket reaches out to the other
to form a connection. The server forms the listener socket while the
client reaches out to the server.

• Concatenation of IP address and port – a number included at start of


message packet to differentiate network services on a host
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
• Communication takes place between a pair of sockets
• All ports below 1024 are well known, used for standard services
• Special IP address 127.0.0.1 (loopback) to refer to system on which
process is running
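To make the client side concrete, here is a hedged C sketch (an addition to the slides; the port 6013 is illustrative) of a client that creates a TCP socket, connects to the loopback address 127.0.0.1, and prints whatever the server sends back:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);          /* create a TCP socket */

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(6013);                       /* port in network byte order */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);   /* loopback address */

    if (connect(sock, (struct sockaddr *) &server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }

    char buf[128];
    ssize_t n = read(sock, buf, sizeof(buf) - 1);        /* read the server's reply */
    if (n > 0) {
        buf[n] = '\0';
        printf("server said: %s\n", buf);
    }
    close(sock);
    return 0;
}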
Socket Communication
Sockets in Java
• Three types of sockets
• Connection-oriented
(TCP)
• Connectionless (UDP)
• MulticastSocket
class– data can be
sent to multiple
recipients (for
sending and
receiving IP
multicast packets.)
• Consider this “Date”
server in Java:
Sockets in Java
The equivalent Date client
Remote Procedure Calls
• Remote procedure call (RPC) abstracts procedure calls
between processes on networked systems
• Again uses ports for service differentiation

Remote Procedure Call is a software communication protocol that
one program can use to request a service from a program located on
another computer on a network without having to understand the
network's details. RPC is used to call procedures on remote
systems as if they were local. A procedure call is also sometimes
known as a function call or a subroutine call.

RPC uses the client-server model. The requesting program is a


client, and the service-providing program is the server. Like a local
procedure call, an RPC is a synchronous operation requiring the
requesting program to be suspended until the results of the remote
procedure are returned.
Remote Procedure Calls
• During an RPC, the following steps take place:

• The client calls the client stub. The call is a local procedure call with parameters pushed onto the stack in the
normal way.
• The client stub packs the procedure parameters into a message and makes a system call to send the
message. The packing of the procedure parameters is called marshalling.
• The client's local OS sends the message from the client machine to the remote server machine.
• The server OS passes the incoming packets to the server stub.
• The server stub unpacks the parameters -- called unmarshalling -- from the message.
• When the server procedure is finished, it returns to the server stub, which marshals the return values into a
message. The server stub then hands the message to the transport layer.
• The transport layer sends the resulting message back to the client transport layer, which hands the message
back to the client stub.
• The client stub unmarshalls the return parameters, and execution returns to the caller.
Remote Procedure Calls (Cont.)
• Data representation handled via External Data Representation
(XDR) format to account for different architectures
• Big-endian and little-endian

• External Data Representation (XDR) is a standard data


serialization format, for uses such as computer network
protocols. It allows data to be transferred between different
kinds of computer systems. Converting from the local
representation to XDR is called encoding.
• Remote communication has more failure scenarios than local
• Messages can be delivered exactly once rather than at most once
Big Endian Vs Little Endian

Endianness is a term that describes the order in which a sequence
of bytes is stored in computer memory. Endianness can be either big or
little, with the adjective referring to which end of the value is stored first.
Big-endian is an order in which the "big end" (most significant byte in
the sequence) is stored first, at the lowest storage address. Little-
endian is an order in which the "little end" (least significant byte in the
sequence) is stored first.
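A quick hedged C sketch (an addition to the slides) that detects the host's byte order by storing a known 32-bit value and checking which byte sits at the lowest address:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x01020304;               /* most significant byte is 0x01 */
    uint8_t *first_byte = (uint8_t *) &value;  /* byte at the lowest address */

    if (*first_byte == 0x04)
        printf("little-endian: least significant byte (0x04) stored first\n");
    else if (*first_byte == 0x01)
        printf("big-endian: most significant byte (0x01) stored first\n");
    return 0;
}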
End of Week 3
