05 IPC

The document discusses concurrency in operating systems, focusing on processes and threads as sources of concurrency, along with their scheduling and coordination. It highlights the challenges of concurrency, such as race conditions and deadlocks, and presents mechanisms like mutexes, condition variables, and semaphores to manage these issues. Practical examples using the pthreads library illustrate the implementation of these concepts in real-world scenarios.

Concurrency, Coordination and Communication
(or IPC)
Sources of Concurrency in Operating Systems
• Processes
• Threads (sometimes called lightweight processes)
• Scheduling
• All general-purpose operating systems are preemptive!
• Single-CPU systems have concurrency too
• Multiple CPUs add their own flavor to this
• Sometimes we refer to threads and processes collectively as "tasks"
• Note – the kernel itself has a lot of concurrent tasks!
The concept of 'threads', and an example with the Linux pthreads library functions
This is another way (in addition to processes) to create concurrency.
Concurrency with Threads (as opposed to processes)
• Threads share almost everything with each other
• No parent–child relationship (in terms of PIDs etc.)
• Shared address space – watch out, this can be dangerous! However, it means they can easily share data.
• Except the call stack!
• They are identified by different thread IDs (not different PIDs)
• Linux calls them light-weight processes (LWPs); see ps -eLF
• They are independently schedulable
• Note – user threads vs kernel threads
Processes and threads – creating and using threads in practice

The process API and the thread API mirror each other:

Process (P1, P2)        Thread (T1, T2)
fork()                  pthread_create()
wait()                  pthread_join()
exit()                  pthread_exit()

(Each thread gets its own stack – Stack_T1, Stack_T2 – while the rest of the address space is shared.)

Let's see a practical example with Linux pthreads – see pthreads/simple.c
A few notes about the pthread_ calls we saw
• Any thread calling exit() will cause all threads to terminate!
• A thread can terminate only itself by calling pthread_exit()
• Simply returning from the thread function is as good as pthread_exit() – i.e., it terminates only that thread, not the others
• pthread_join() will wait for the given thread to terminate
• One thread can terminate another thread by calling pthread_cancel()
• You can see how they are used in pthreads/simple.c (a sketch follows below)
• Note that the source needs #include <pthread.h> and compiling requires
gcc -o simple -pthread simple.c
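
A minimal sketch in the spirit of pthreads/simple.c (the repository file itself is not shown here, so the function names and output are illustrative):

#include <pthread.h>
#include <stdio.h>

/* Thread function: returning from it is equivalent to pthread_exit(NULL). */
void *worker(void *arg) {
    long id = (long)arg;
    printf("hello from thread %ld\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);   /* wait for each thread to terminate */
    pthread_join(t2, NULL);
    return 0;
}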
So concurrency can happen with processes or threads
… Now more about concurrency
Processes and threads working collaboratively and concurrently
• Multiple processes or threads let us independently do different things "simultaneously"
• We can also use multiple processes or threads to distribute work among multiple entities
• Parallel / pipelined processing
• Think of one thread doing computation while another is doing I/O
• Threads are particularly useful in this context because they naturally share the data part of the work
• See example: pthreads/find_some_primes.c
• This example was quite straightforward. However, note that each thread needed to be aware of the other.
Challenges in dealing with concurrency – races and deadlocks
(in the presence of preemptive scheduling)

Concurrency can create its own issues
• Parallel execution is great … get more things done 'simultaneously'
• Note that tasks often work collaboratively and share something
  • Sometimes it is natural, as in the kernel
  • Sometimes it is to speed up some task (divide and conquer)
• …And sometimes this can cause an issue if we are not careful.

Here is a simple race example caused by concurrency + sharing:

// shared variable
float balance = 1000;

T1: balance = balance + 100
T2: balance = balance + 100
T3: balance = balance − 100

Each update compiles to something like:
1. load  ax, balance
2. add   ax, 100
3. store ax, balance

Where is the problem that concurrency & preemption creates? If a task is preempted between the load and the store, another task's update is lost. See pthreads/update_balance.c; a sketch follows below.
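
A hedged sketch of such a racy program (the actual pthreads/update_balance.c is not shown here; the loop, added to widen the tiny race window, is an assumption):

#include <pthread.h>
#include <stdio.h>

#define N 100000
float balance = 1000;                 /* shared variable, no locking */

void *add100(void *arg) {
    for (int i = 0; i < N; i++)
        balance = balance + 100;      /* load, add, store: not atomic */
    return NULL;
}

void *sub100(void *arg) {
    for (int i = 0; i < N; i++)
        balance = balance - 100;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add100, NULL);
    pthread_create(&t2, NULL, sub100, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 1000; lost updates typically leave something else. */
    printf("balance = %.0f\n", balance);
    return 0;
}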
Another example of a race
• We could modify our find_some_primes.c example to have two read_items threads.
• Now there is a possibility of a race condition:
  • Both reader threads want to get an item from the rear:
    n = number[NEXT(rear)];
    rear = NEXT(rear);
  • There is a problem because of possible concurrent execution
• See pthreads/find_some_primes_2r.c
• Play with turning on nanosleep() to increase the chance of a race.
The problems of concurrency – races and deadlocks
• Races: as we saw in the previous example, reading and writing shared variables concurrently may lead to problems
• Deadlocks: sometimes completing a task requires holding several resources simultaneously, and that may lead to problems
Coordination of Concurrency
… so as to avoid races and deadlocks
Notion of a Critical Section and Mutual Exclusion
• Clearly, in our example, tasks T1, T2 and T3 each had some code that would work perfectly fine provided they executed it exclusively (no two at the same time).
• Such a piece of code is called a critical section. (Heard this before?)
• Mutual exclusion (consisting of lock and unlock operations) is the mechanism used to protect critical sections.
• Think of it as a barrier: only one process is allowed to pass at any time.

A thread does a lock operation to enter the critical section, and an unlock operation to signal exiting from the critical section. T1 may access the balance like this:

lock_the_critical_section
// This is the critical section code
balance = balance + 100
unlock

An oversimplified lock implementation: simply turn off all interrupts, and let unlock restore interrupts. This works, and is OK for a short time, but is not a good idea in general!
Important
• Mechanism vs usage:
  • Lock and unlock are mechanisms to implement a critical section
  • In reality, only proper use by the programmer ensures mutual exclusion
• Let's take a look at how the lock and unlock operations may be implemented, so that the programmer can use them to enforce mutual exclusion around critical sections.
Implementing the lock() and unlock() operations
A proposed locking code – the aim is that at most one process is in the CS at any time. (FAILED!)

Implementation:

struct lock {
    int flag = 0;
} L1;

lock(struct lock *l) {
1.    while (l->flag == 1)
2.        ;              // keep looping if locked
3.    l->flag = 1;       // Got it, so indicate
}

unlock(struct lock *l) {
4.    l->flag = 0;       // Declare free
}

Usage – task T1 (or T2, T3, etc.):

lock(&L1)
5.  // We are in the critical section
6.  // so we can modify data
unlock(&L1)

A failing interleaving, shown as (step executed, resulting flag):

T1    T2    flag
1           0
      1     0
3           1
      3     1
5
      5

Can you say what could easily go wrong? Exactly this: the test (line 1) and the set (line 3) are not atomic, so both T1 and T2 can end up inside the critical section.

Ref: www.ostep.org chapter 28

Peterson's Algorithm
A classic software-only mutual exclusion scheme for two tasks. What is an issue with this idea? (A sketch follows below.)
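
The slide's code did not survive extraction; this is the standard two-thread formulation (not necessarily the author's exact version). Issues: it busy-waits, it handles only two threads, and on modern out-of-order hardware it additionally needs memory barriers.

int flag[2] = {0, 0};   /* flag[i] = 1: thread i wants to enter */
int turn;               /* whose turn it is to yield */

void lock(int self) {
    int other = 1 - self;
    flag[self] = 1;
    turn = other;                          /* defer to the other thread */
    while (flag[other] && turn == other)
        ;                                  /* busy-wait */
}

void unlock(int self) {
    flag[self] = 0;
}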
Hardware support: Test and Set
• A special machine instruction:
  TSL ax, flag
  copies the original value of flag to ax, then sets flag = 1 – in a single (atomic) instruction.
• As a model in C:

int test_and_set(int *flag) {
    TSL ax, flag    // i.e., LOAD flag,ax then STORE 1,flag atomically
    RET             // ax value is returned
}

• If the original value was 0 then we got the lock (test_and_set returns 0) and others can't get it (flag is now 1).

init(int *flag) { *flag = 0; }

lock(int *flag) {
    while (test_and_set(flag) == 1)
        ;
    // flag was 0, now is 1
}

• unlock(int *flag) is simple: STORE 0, flag

Usage:

int flag;   // global shared

T1::
lock(&flag)
// This is the critical section code
balance = balance + 100
unlock(&flag)

The implementations here are a model for understanding and are not intended to be compilable assembly/C code!
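
For the curious, a compilable analogue (not from the slides) using the C11 atomic_flag type, whose atomic_flag_test_and_set() is a portable test-and-set:

#include <stdatomic.h>

atomic_flag flag = ATOMIC_FLAG_INIT;   /* starts clear (unlocked) */

void lock(void) {
    while (atomic_flag_test_and_set(&flag))
        ;                              /* returned 1: lock was held, spin */
}

void unlock(void) {
    atomic_flag_clear(&flag);          /* the "STORE 0, flag" step */
}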
Avoiding the busy loop
• Notice: our solution has a busy loop!
  • A busy loop consumes too many CPU cycles
• Alternative: a syscall to give up the CPU and "yield until a later time"
• One solution:

lock(int *flag) {
    while (test_and_set(flag) == 1)
        yield();
    // here flag was 0, now is 1
}

• In Linux the closest to yield is sched_yield(), pthread_yield(), sleep() or nanosleep()

• What if multiple tasks are waiting for the lock?
  • Possible starvation
• Better: yield could put waiters in a queue, and unlock could wake them up sequentially, in FCFS order
• These ideas are combined in the concept called a mutex
• An extended idea is that of a semaphore
A practical example in the pthreads library: mutual exclusion using a mutex
• We saw how to create threads using pthreads.
• We can use the mutex provided by pthreads to prevent multiple threads from simultaneously entering the critical section.

Recall our example, T1:

lock_the_critical_section
// This is the critical section code
balance = balance + 100
unlock

Three useful mutex-related calls supported by pthreads:
pthread_mutex_lock()
pthread_mutex_unlock()
pthread_mutex_init()
Mutexes, Condition Variables, Semaphores
Typical usage

Define and initialize (once):

// define a mutex variable
pthread_mutex_t mymutex;
…
// initialize before use; NULL means default attributes
pthread_mutex_init(&mymutex, NULL);
// returns 0 if and only if it succeeds

Lock and unlock (repeatedly, in each thread):

// to enter the critical section
pthread_mutex_lock(&mymutex);
… critical section …
// to exit the critical section
pthread_mutex_unlock(&mymutex);

See how a mutex is used in pthreads/update_balance_mutex.c; a sketch follows below.
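
A minimal sketch of what pthreads/update_balance_mutex.c plausibly looks like (the file itself is not shown; the static PTHREAD_MUTEX_INITIALIZER used here is an alternative to pthread_mutex_init()):

#include <pthread.h>
#include <stdio.h>

#define N 100000
float balance = 1000;
pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

void *add100(void *arg) {
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&mymutex);
        balance = balance + 100;      /* critical section */
        pthread_mutex_unlock(&mymutex);
    }
    return NULL;
}

void *sub100(void *arg) {
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&mymutex);
        balance = balance - 100;
        pthread_mutex_unlock(&mymutex);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add100, NULL);
    pthread_create(&t2, NULL, sub100, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %.0f\n", balance);   /* now always 1000 */
    return 0;
}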
A mutex ensures locking, but sometimes we want conditional locking
• Revisit the find_some_primes.c example with two readers.
• See how there can be a race in the access to "front" and "rear".
• Let's see a solution using a mutex to make sure only one thread accesses these variables at a time:
  • pthreads/find_some_primes_mutex.c
• Notice how this still has a busy wait, even though access is mutually exclusive!
  • We are waiting for a condition that can only come true if another thread changes front or rear.
  • So we gave up the mutex.
• We would prefer to wait for some other thread to tell us that the condition we are waiting for is met (a slot is free, or a slot is filled).
Condition Variables
• They work in tandem with mutexes
• You can wait for a condition and give up the mutex in one shot
• You can also signal to other threads that a condition has occurred

Now the reader in our example:
• Does a mutex lock
• Waits for the "filled slot" condition
• Once it wakes up, it holds the mutex lock
• The main thread may signal the condition that the slot is filled
Synchronization with mutexes and condition variables – a template supported by pthreads

T1:
pthread_mutex_lock(&mutex);
while (desired condition1 not met) {
    pthread_cond_wait(&desiredcv1, &mutex);
}
// do CS
pthread_cond_signal(&desiredcv2);
pthread_mutex_unlock(&mutex);

T2:
pthread_mutex_lock(&mutex);
while (desired condition2 not met) {
    pthread_cond_wait(&desiredcv2, &mutex);
}
// do CS
pthread_cond_signal(&desiredcv1);
pthread_mutex_unlock(&mutex);

See: pthreads/find_some_primes_cv.c; a compact instance of the template follows below.
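
A compact, self-contained instance of this template (not the actual find_some_primes_cv.c): one producer fills a single slot, one consumer empties it, with two condition variables.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t mutex   = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  filled  = PTHREAD_COND_INITIALIZER;
pthread_cond_t  emptied = PTHREAD_COND_INITIALIZER;
int slot_full = 0, slot;

void *producer(void *arg) {
    for (int i = 1; i <= 5; i++) {
        pthread_mutex_lock(&mutex);
        while (slot_full)                      /* condition1: slot must be empty */
            pthread_cond_wait(&emptied, &mutex);
        slot = i; slot_full = 1;               /* do CS */
        pthread_cond_signal(&filled);          /* wake a waiting consumer */
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 1; i <= 5; i++) {
        pthread_mutex_lock(&mutex);
        while (!slot_full)                     /* condition2: slot must be full */
            pthread_cond_wait(&filled, &mutex);
        printf("got %d\n", slot); slot_full = 0;
        pthread_cond_signal(&emptied);
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}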
Semaphore – a more general solution
• Conceptually, a semaphore is two things:
  • A counter n (of allowed tasks; 1 in our example)
  • A queue wq (of waiting tasks; FIFO)
• Often initially:
  • n = MAX (MAX = 1 in our example)
  • wq = empty list
• Basically, the semaphore is a barrier, except that it allows up to MAX tasks to pass the barrier (MAX possibly > 1)
• It behaves like a mutex if MAX is 1

(figure: a semaphore with counter n = 0 and a FIFO queue of waiting tasks, e.g. 3375, 4175, 2100)
Implementing sem_wait and sem_post atomically
(Edsger Dijkstra called these operations P and V. Both must be ATOMIC!)

sem_wait(S)   // "try to get in"
    if (S.n > 0)
        S.n--;
    else {
        addq(self, S.wq);
        blocked();
    }

sem_post(S)   // "let others know, on the way out"
    if (S.wq is empty)
        S.n++;
    else {
        t = delq(S.wq);
        movetoready(t);
    }

sem_post is also called sem_signal in the literature.

• Basically, the semaphore is a barrier, except that it allows up to MAX threads to pass the barrier (MAX possibly > 1)
• It behaves like a mutex if MAX is 1 (binary semaphore vs counting semaphore)
Example: the (multi) producer-consumer problem

Producers: keep putting items into the buffer
Consumers: keep getting items from the buffer
The buffer: MAX array slots, used circularly

put(val) {
    buffer[free] = val;
    free = (free + 1) % MAX;
}

get() {
    tmp = buffer[used];
    used = (used + 1) % MAX;
    return tmp;
}

• We only want as many producers to put() as there are free slots
• We only want as many consumers to get() as there are filled slots
• Also, we want only one thread updating an index at any time!
A PC problem solution that combines mutexes and POSIX semaphores

Too many tasks access free and used, so we guard them with a lock, i.e., a mutex. (put() and get() are as on the previous slide.)

init(&empty_slot)  :: empty_slot.n = MAX
init(&filled_slot) :: filled_slot.n = 0

Producer() {
    …
    sem_wait(&empty_slot);       // wait till there is an empty slot
    lock(); put(x); unlock();    // we don't want a race for "free"
    sem_signal(&filled_slot);    // signal anyone waiting for a filled slot
    …
}

Consumer() {
    …
    sem_wait(&filled_slot);      // wait till there is a filled slot
    lock(); x = get(); unlock(); // we don't want a race for "used"
    sem_signal(&empty_slot);     // signal anyone waiting for an empty slot
    …
}

See ipc/sem-posix/pc.c; a runnable sketch follows below.
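
A runnable sketch with the actual POSIX API (the repo's ipc/sem-posix/pc.c is assumed to be along these lines; note that POSIX spells sem_signal as sem_post, and "free" is renamed free_slot to avoid clashing with free(3)):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define MAX 4
int buffer[MAX];
int free_slot = 0, used = 0;
sem_t empty_slot, filled_slot;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void put(int val) { buffer[free_slot] = val; free_slot = (free_slot + 1) % MAX; }
int  get(void)    { int t = buffer[used]; used = (used + 1) % MAX; return t; }

void *producer(void *arg) {
    for (int i = 1; i <= 10; i++) {
        sem_wait(&empty_slot);                 /* wait till there is an empty slot */
        pthread_mutex_lock(&lock); put(i); pthread_mutex_unlock(&lock);
        sem_post(&filled_slot);                /* signal a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 1; i <= 10; i++) {
        sem_wait(&filled_slot);                /* wait till there is a filled slot */
        pthread_mutex_lock(&lock); int v = get(); pthread_mutex_unlock(&lock);
        sem_post(&empty_slot);                 /* signal an empty slot */
        printf("consumed %d\n", v);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slot, 0, MAX);   /* empty_slot.n = MAX */
    sem_init(&filled_slot, 0, 0);    /* filled_slot.n = 0  */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}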
Aside: shared memory among processes
• We will soon read about how processes, not just threads, can share parts of their address spaces.
• The exact technique is called "POSIX shared memory" and we come to it later.
• However, this means everything about mutexes etc. is valid between processes too, as long as they are located inside the shared memory.
Processes, semaphores and Linux
• POSIX supports two types of semaphores:
• Unnamed: usually used between threads. However, they can be used between processes if the processes are using shared memory.
  • When the processes exit, the semaphore is gone
• Named semaphores: these have a name. In fact they are created in /dev/shm
  • Persistent (they exist even after the processes are gone) and need to be explicitly removed
  • Can also be used between threads
  • Can be used between processes to control access to shared-memory data
  • Can be used between processes even if they don't share memory
A semaphore can be used as a mutex too!

init(&empty_slot)  :: empty_slot.n = MAX
init(&filled_slot) :: filled_slot.n = 0
init(&lock)        :: lock.n = 1     // binary semaphore acting as the mutex

Producer() {
    …
    sem_wait(&empty_slot);                         // wait till there is an empty slot
    sem_wait(&lock); put(val); sem_post(&lock);
    sem_signal(&filled_slot);                      // signal anyone waiting for a filled slot
    …
}

Consumer() {
    …
    sem_wait(&filled_slot);                        // wait till there is a filled slot
    sem_wait(&lock); val = get(); sem_post(&lock);
    sem_signal(&empty_slot);                       // signal anyone waiting for an empty slot
    …
}

(put() and get() as before.)
Monitors: a high-level abstraction
• Locks and semaphores rely heavily on the programmer to use them appropriately.
• With the intent of making life easier for the programmer, some languages build the notion of monitors into the language itself.
• A monitor is like an object in object-oriented languages: think of it as a qualification to a class,
• with the additional semantics that only one task is allowed to execute any of its methods at a time.
• It is implemented on top of semaphores/mutexes/condition variables.
Deadlocks
Example: Dining Philosophers' Problem

(figure: five philosophers P1…P5 around a table; think of f_i as the fork to the left of P_i and f_(i−1) as the fork to the right of P_i)

Philosopher(i) {
    loop {
        think(i)
        eat(i)
    }
}

eat(i) {                 // each fork is implemented with a
    pick_up(f_i)         // semaphore to avoid a race
    pick_up(f_(i-1))
    // actually eat
    put_down(f_i)
    put_down(f_(i-1))
}

A problem remains: deadlock. If every philosopher picks up the left fork at the same moment, each waits forever for the right one. How to solve?
Solving deadlock problems in OS

Detect and break:
• Detecting a deadlock
  • A cycle in the resource-access graph (a bipartite graph of tasks and resources)
• Breaking a deadlock
  • Preempt a process based on priority or some other criterion
  • Still need to deal with starvation

Avoid deadlocks:
• Access resources in a fixed order, e.g. always acquire a lower-numbered resource before a higher-numbered one
• Assumes each task knows beforehand which resources it will access
• These resources could also be semaphores

Can you solve the dining philosophers' problem using one of these methods?
Deadlock conditions, formally
1. Circular wait: a cycle of processes, each holding one resource and waiting for a resource held by another
2. Mutual exclusion: resource holding (and usage) is mutually exclusive
3. Hold and wait: a process holds one resource while waiting for another
4. No preemption: a resource cannot be taken away from the process holding it
Summary of Concurrency and Coordination
• Concurrency is commonplace
• Not just processes, but threads too: we learnt pthreads
• Threads need mutual exclusion (ME): sort of like a barrier in the code
• ME can be done in software (Peterson's)
• ME is done more efficiently with hardware support (TSL)
• Avoiding the busy loop by waiting in a queue: lock() and unlock() with mutexes
  • Practical example: pthread_mutex_(un)lock(), pthread_mutex_init()
• Busy looping at a higher level – condition variables – pthread_cond_wait() and pthread_cond_signal()
• Another solution: semaphores – they can count too, and can be used for other purposes
• Deadlock is another problem, and mutexes don't solve it
• Starvation is a third problem, and we wish to avoid that
Now more about communication
Beyond thread shared memory
Shared memory between processes on the same OS instance

1. A slab of memory is allocated in RAM with a known (pre-arranged) unique "name".
2. It is mapped into each process' virtual address space. Now it is available for reads and writes like any other part of memory.

• Since the memory slab is in both processes' address spaces, both can read or write it.
• Of course, we need to use the known methods to properly coordinate access.
• The region of the virtual address space where the shared segment is mapped may be different in P1 vs P2.
• IMPORTANT: the existence of the shared memory block is not tied to the life of the processes using it! It needs to be explicitly removed when we no longer need it.
POSIX interface for shared memory ops
• shm_fd = shm_open(SHM_NAME, O_CREAT|O_RDWR, 0644);
  • Just like a file open; sample name: /example1
  • See /dev/shm
• ftruncate(shm_fd, SHARED_MEMORY_SIZE);
  • Sets the size
• s = mmap(NULL, sizeof(struct mystruct), PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
  • Maps the area into memory
  • You can now write into the area pointed to by s
• close(shm_fd);   // a plain close(); there is no separate shm_close()
• shm_unlink(SHM_NAME);

See ipc/shm-posix/*.c; a minimal writer sketch follows below.
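
A minimal writer assembling the calls above (the name /example1 comes from the slide; everything else is illustrative; link with -lrt on older glibc):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/example1"
#define SHARED_MEMORY_SIZE 4096

int main(void) {
    int shm_fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0644);
    ftruncate(shm_fd, SHARED_MEMORY_SIZE);        /* set the size */
    char *s = mmap(NULL, SHARED_MEMORY_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, shm_fd, 0);
    strcpy(s, "hello via shared memory");         /* any process that maps
                                                     /example1 sees this */
    munmap(s, SHARED_MEMORY_SIZE);
    close(shm_fd);
    /* shm_unlink(SHM_NAME); -- when nobody needs it any more */
    return 0;
}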
Communication without shared space
• The goal is to share data without any form of shared space
• Typically:
  • Files
  • Pipes
    • pipe() – an fd pair
    • Named pipe (similar to a file)
  • Message queues
  • Sockets
• Conceptually they are all a queue, with variations in:
  • The way different processes identify and access it
  • The send and receive operations
  • The data format
Pipes and named pipes
• Pipes: we saw
  • int fd[2];
  • pipe(fd);   // fd[1] is the write end, fd[0] the read end
• Named pipes: work between unrelated processes
  • Also called FIFOs
  • Identified by a special file; see mkfifo(1) or mkfifo(3)
  • The rest is exactly like files – opening, reading, writing, closing: fd = open(…), read(fd, …), write(fd, …), close(fd)
  • Notice how the writer waits for a reader
  • Closing the write end automatically signals EOF for the read end
  • No real data is written to the disk!
  • Data is simply a sequence of characters (just like a file)

See the example in ipc/named-pipe/
Named pipes (contd.)
• Any number of processes may open() simultaneously
• Any number of processes may read(), and any number may write()
• A process may open it for both reading and writing
• Reads return EOF when there are no writers (i.e., they have all closed or exited)
• Writes fail when there are no readers (i.e., they have all closed or exited)

A tiny writer sketch follows below.
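
The writer side in full (the path /tmp/myfifo is illustrative; cf. the example in ipc/named-pipe/). Start a reader first, e.g. cat /tmp/myfifo:

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    mkfifo("/tmp/myfifo", 0644);             /* create the FIFO special file */
    int fd = open("/tmp/myfifo", O_WRONLY);  /* blocks until a reader opens */
    write(fd, "hello\n", 6);
    close(fd);                               /* the reader now sees EOF */
    return 0;
}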
Message queues

(contrast: a pipe/FIFO/file carries one undifferentiated byte stream, "ABCDEFGHIJKLMNOPQRSTUVWXYZ…", while a message queue holds discrete messages: "ABCDE", "FGH", "IJKLMN", …)

• Unlike a pipe, a message queue has a notion of a data unit, the "message"
• Any number of processes can open, read or write
• Messages are stored, in order, until someone reads them
• write() and read() are usually called send() and receive(), so one talks of "sending" and "receiving" messages
• Messages also have the notion of a special tag or priority, used to select or wait for an appropriate message during receive
• Also note that we need some common way (across processes) to identify a message queue – some kind of pre-agreed name or key, just like the name of the FIFO in the previous example
• A message queue continues to hold data until it is removed
• It persists and is not linked to the life of any process that uses it
POSIX Message Queues

Control operations (resemble file ops):
• Create/open:    mq = mq_open(MQ_NAME, O_CREAT|O_RDWR, 0644, NULL);
• Close:          mq_close(mq);
• Remove:         mq_unlink(MQ_NAME);
• Get attributes: mq_getattr(mq, &attr);

Send and receive operations:
• Send:    mq_send(mq, buffr, nbytes, prio);
• Receive: mq_receive(mq, buffr, size, &prio);   // prio pointer may be NULL

See the example in ipc/mq-posix/ – in particular the use of mq_getattr together with mq_receive (the receive buffer must be at least attr.mq_msgsize bytes). A minimal sender sketch follows below.
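
A minimal sender (the name /mq_example is illustrative; cf. ipc/mq-posix/; link with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

int main(void) {
    /* NULL attributes: use the system defaults */
    mqd_t mq = mq_open("/mq_example", O_CREAT | O_RDWR, 0644, NULL);
    mq_send(mq, "hello", strlen("hello") + 1, 0);   /* priority 0 */
    mq_close(mq);
    /* mq_unlink("/mq_example"); -- when done for good */
    return 0;
}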
Why "POSIX"?
• You may have noticed our nomenclature:
  • POSIX shared memory
  • POSIX message queue
  • POSIX semaphore
• There is another interface (different calls, but the same underlying functionality) offered by Linux and several older UNIX flavors:
  • The System V interface
• So when you look up any manual page, please ensure you are looking at the right pages.
• We studied POSIX because it is modern and simplified.
Sockets
Communication between processes via sockets
• Sockets provide a mechanism that works
  • between processes inside an OS instance (UNIX sockets)
  • and between two different systems (INET sockets)!
• It is two-way.
• It is the basis of communication between nodes in a network and on the internet.
• The communication primitive is based on the notion of send-recv. However…
  • identity is established in a manner that is acceptable internet-wide.
• Here the identity of a socket endpoint is a 5-tuple.
• Two endpoints together make the socket connection.
Socket: communication by send() and recv()
• send(sockfd, data_buff, data_length, flags) or write()
• recv(sockfd, data_buff, data_length, flags) or read()
• These are very similar in form to read and write.
• What is sockfd?
  • It is like a file descriptor.
  • It identifies a communication endpoint.
The internet socket endpoints

(figure: process P1 on OS-1/Computer-1, IP1 = 128.213.3.4, talks via port-1 over TCP and the Internet to process P2 on OS-2/Computer-2, IP2 = 113.13.23.10, on port-2)

See:
netstat --numeric-ports -a    (to see what ports are active)

Try running a process waiting on 8080:
Aside: a little experiment to understand this
• Use netstat --numeric-ports -a
  … and see some connection activity. Can you identify some sockets?
• Here is a simple server to listen and respond to a connection:
  while : ; do cat README | nc -l -p 8080 ; done
• Use netstat again and see that 8080 has someone in LISTEN mode
• Now run wget localhost:8080 and see the netstat output again
• Fun aside: run wireshark and watch data moving in and out of your PC
• See /etc/services for standardized service-port mappings
Socket: understanding a socket file descriptor
• Recall: send(sockfd, data_buff, data_length, flags), and similarly recv(sockfd, …)
• What is sockfd? It identifies a socket.
• A sockfd can be used for send/recv only after both endpoints have been established.
• In particular, it identifies the protocol being used. Common varieties: internet sockets (TCP, UDP) and UNIX domain sockets.
• An internet socket has IP addresses and ports associated with the endpoints:
  { local_ip_address, local_port, remote_ip_address, remote_port, protocol }
  • These are established not all at once, but in stages – the connection establishment process
• A UNIX domain socket only works within one OS
  • Identified by a socket path name (like the semaphore and FIFO names we saw)
Focus for this study: a TCP connection
• Initially, create a socket (and get an fd) which has not yet established a connection (a 'half' socket).
• The connection process identifies the endpoint pair, and now both ends are sure who they are talking to. This is the 'full' socket.
• Once connected, it is like having two directed channels between the endpoints, to send and receive.

(figure: two endpoints, each consisting of TCP + IP address + port – 1. the TCP server endpoint, 2. the TCP client endpoint)

Big Picture of a
Sample TCP connection

Server Client
• Create one “half” socket • Create one “half” socket
• Set it to listen mode and
wait
• Once you receive a • Establish connection
client request, accept it with a server creating a
and create a new “full” “full” socket.
socket
• Use the new full socket • Use the new full socket
for send() recv() for send() recv()

• Use the normal close() • Use the normal close()


to close the socket to close the socket

• The old “half” socket is


still usable to listen for
new connections.
Socket: process – server side for a SOCK_STREAM

1. Create the socket:
   sockfd = socket(domain, type, protocol)
   e.g.: fd = socket(AF_INET, SOCK_STREAM, 0);

2. Bind it to a local address:
   struct sockaddr_in {
       sa_family_t    sin_family;   /* address family: AF_INET */
       in_port_t      sin_port;     /* port in network byte order */
       struct in_addr sin_addr;     /* internet address */
   };
   e.g.:
   struct sockaddr_in local;
   local.sin_family = AF_INET;
   local.sin_addr.s_addr = htonl(INADDR_ANY);
   local.sin_port = htons(33333);
   bind(fd, &local, sizeof(local));

3. Set it to listen mode:
   listen(sockfd, connection_backlog)
   e.g.: listen(fd, 2);

4. Wait for a connection:
   newfd = accept(fd, &cliaddr, &cliaddr_len);
   – at this point you can find the remote process' address and port
Socket: process – server side for a SOCK_STREAM (contd.)
• When accept() returns, you have newfd, which is a complete socket; use it:
  • send(newfd, data_buff, data_length, flags)
  • recv(newfd, data_buff, buff_size, flags)
  • … repeatedly …
  • close(newfd)

For example:
char str[100] = "Hello World\n";
n = send(newfd, str, strlen(str), 0);
n = recv(newfd, str, maxlen, 0);
close(newfd);
Socket: process – client side for a SOCK_STREAM

1. sockfd = socket(AF_INET, SOCK_STREAM, 0);

2. Connect to the server:
   struct sockaddr_in server;
   server.sin_family = AF_INET;
   server.sin_addr.s_addr = inet_addr(server_ip_string);
   server.sin_port = htons(server_port_int);
   connect(sockfd, &server, sizeof(server));

3. Exchange data, then close:
   char str[100] = "Hello World\n";
   n = send(sockfd, str, strlen(str), 0);
   …
   n = recv(sockfd, str, maxlen, 0);
   close(sockfd);

A self-contained client sketch follows below.
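
Assembled into a compilable sketch (IP and port are illustrative; error checking omitted for brevity):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = inet_addr("127.0.0.1");  /* illustrative */
    server.sin_port = htons(33333);                   /* illustrative */

    connect(sockfd, (struct sockaddr *)&server, sizeof(server));

    char str[100] = "Hello World\n";
    send(sockfd, str, strlen(str), 0);
    int n = recv(sockfd, str, sizeof(str) - 1, 0);
    if (n > 0) { str[n] = '\0'; printf("server said: %s", str); }

    close(sockfd);
    return 0;
}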
Simple server vs concurrent server

Simple (iterative) server:

lsockfd = socket(…);
// setup the address
bind(lsockfd, …);
listen(lsockfd, …);
newfd = accept(lsockfd, …);
send(newfd, …);
recv(newfd, …);
…
close(newfd);
close(lsockfd);

Concurrent server:

lsockfd = socket(…);
// setup the address
bind(lsockfd, …);
listen(lsockfd, …);
loop {
    newfd = accept(lsockfd, …);
    fork and create a child.
    In the child:
        send(newfd, …);
        recv(newfd, …);
        …
        close(newfd);
        exit();
    In the parent:
        close(newfd);
}
Examples on github – see ipc/sockets/
• Simple TCP server–client: single message exchange
• Concurrent TCP server – long-running client (5 messages)
• Simple UDP server–client: single message exchange
• Simple UDP server in a loop that echoes any client message
• Simple UNIX domain socket server–client: single message exchange

• You can see connection states: netstat -a ( ..| more)
• Sockets don't vanish as soon as your program exits… so be patient.
• Good practice: unlink UNIX domain sockets at the beginning.
Sockets… there is more
• Socket programming in practice is a more elaborate topic, typically covered in a network programming course.
Signals as a way to communicate
• [Despite the name, signals have nothing to do with condition variables or semaphores!]
• Unlike most other methods, they are inherently asynchronous to the program's instructions.
• Processes can send a signal to another process (P1 signals P2).
  • "kill" (despite the name) is a command-line utility to send a signal to a process
  • The kill() system call does the same from a program
• The receiving process executes a signal handler in response to a signal.
  • Reminiscent of 'interrupts', 'event handlers' or 'threads'
  • Usually other signals are blocked while the handler runs
Signals are used inside the operating system in a rather elaborate way!
• A signal is generated when:
  • There is an illegal memory access
  • There is an illegal instruction code
  • A child exits
  • There is a divide by zero
  • Some limit is exceeded
  • The user stops/continues/ends a process
  • SIGUSR1, SIGUSR2 (user-defined signals)
• Sending and handling:
  • kill(pid, signum)
  • signal(signum, handler)
  • The newer alternative: sigaction()
  • Command line, e.g.: kill -s SIGUSR1 <pid>
• SIG_IGN and SIG_DFL handlers
  • Usually the default handlers do the appropriate thing for each of these, e.g.:
  • Exit the program on receiving SIGSEGV
  • End the process on receiving SIGKILL
• SIGKILL and SIGSTOP can't be caught or ignored!
• Check out:
  • man signal
  • kill -L
  • A negative PID has an interesting meaning

See misc/sig.c; a tiny sketch follows below.
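
A tiny sketch in the spirit of misc/sig.c (the file itself is not shown): catch SIGUSR1 with sigaction(). Try kill -s SIGUSR1 <pid> from another terminal.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

void handler(int signum) {
    /* printf is not async-signal-safe; acceptable for a demo */
    printf("got signal %d\n", signum);
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sigaction(SIGUSR1, &sa, NULL);

    printf("my pid: %d\n", getpid());
    for (;;)
        pause();                /* sleep until a signal arrives */
}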
Summary: communication methods in OS
• Shared memory – between processes too
• Named pipes
• Message queues
• Sockets – can be used between different systems as well
• Signals
• We also saw POSIX standard interfaces for several of these.
This ends the lectures on Concurrency, Coordination and Communication.
