II Unit

National Engineering College, Department of Information Technology
AICTE Sponsored SDP on Network Programming and Management from 07.06.2010 to 12.06.2010

CHAPTER 2 Application Development

TCP ECHO SERVER and CLIENT

Introduction
We develop a simple TCP echo client and server that perform the following steps:

1. The client reads a line of text from its standard input and writes the line to the
server.
2. The server reads the line from its network input and echoes the line back to the
client.
3. The client reads the echoed line and prints it on its standard output.

Figure. Simple echo client and server.

TCP Echo Server:

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>

int main(void)
{
    socklen_t clilen;
    int s1, s2;
    char line[8] = "";
    ssize_t n;
    pid_t chpid;
    struct sockaddr_in cliaddr, servaddr;

    s1 = socket(AF_INET, SOCK_STREAM, 0);                      /* create socket */
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = inet_addr("127.0.0.1");
    servaddr.sin_port = htons(9877);
    bind(s1, (struct sockaddr *)&servaddr, sizeof(servaddr));  /* bind the socket to the server IP address */
    listen(s1, 8);                                             /* listen for client connections */
    for (;;) {
        clilen = sizeof(cliaddr);
        s2 = accept(s1, (struct sockaddr *)&cliaddr, &clilen); /* accept a connection from a client */
        chpid = fork();                                        /* one child per client */
        if (chpid == 0) {
            close(s1);                                         /* child does not need the listening socket */
            while ((n = read(s2, line, sizeof(line))) > 0)     /* read the line of text from the network input */
                write(s2, line, n);                            /* write the echoed line back to the client */
            exit(0);
        }
        close(s2);                                             /* parent closes the connected socket */
    }
}

TCP ECHO CLIENT PROGRAM

#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int sockfd;
    char str[8], str1[8];
    ssize_t n;
    struct sockaddr_in seraddr;

    if (argc != 2) {
        printf("usage: tcpechocli <server IP address>\n");
        exit(1);
    }
    sockfd = socket(AF_INET, SOCK_STREAM, 0);                  /* create socket */
    memset(&seraddr, 0, sizeof(seraddr));
    seraddr.sin_family = AF_INET;                              /* address family of Internet */
    seraddr.sin_port = htons(9877);
    inet_pton(AF_INET, argv[1], &seraddr.sin_addr);
    connect(sockfd, (struct sockaddr *)&seraddr, sizeof(seraddr)); /* connect to the server */
    while (scanf("%7s", str) == 1) {
        write(sockfd, str, strlen(str));                       /* write the line of text to the server */
        n = read(sockfd, str1, sizeof(str1) - 1);              /* read the echoed line from the server */
        if (n <= 0)
            break;
        str1[n] = '\0';
        printf("%s\n", str1);                                  /* print it on stdout */
    }
    exit(0);
}

Normal startup
The TCP echo client and server use four main functions: str_echo, str_cli, read, and write. It is essential that we understand how the client and server start and how they end.

1. Start the server in the background:
   $ ./tcpechoser &
   [1] 21130
   When the server starts, it calls socket, bind, listen, and accept, blocking in the call to accept.

2. Before starting the client, we run the netstat program to verify the state of the server's listening socket:
   $ netstat -a        (lists all sockets, including listening sockets)
   Proto Recv-Q Send-Q Local Address    Foreign Address   State
   tcp   0      0      *:9877           *:*               LISTEN

3. Then start the client on the same host, specifying the server's IP address of 127.0.0.1 (the loopback address):
   $ ./tcpechocli 127.0.0.1

The client calls str_cli, which will block in the call to fgets, because we have not typed a line of input yet.
When accept returns in the server, it calls fork and the child calls str_echo. This function calls readline, which calls read, which blocks while waiting for a line to be sent from the client.
The server parent, on the other hand, calls accept again and blocks while waiting for the next client connection.

Running netstat again shows both ends of the connection plus the listening socket:

   tcp   0      0      localhost:9877    localhost:42578   ESTABLISHED
   tcp   0      0      localhost:42578   localhost:9877    ESTABLISHED
   tcp   0      0      *:9877            *:*               LISTEN


Normal Termination
At this point, the connection is established and whatever we type to the client is echoed back.

We type in two lines, each one is echoed, and then we type our terminal EOF character (Control-D), which terminates the client. If we immediately execute netstat, we see that the client's side of the connection (the one with local port 42578) enters the TIME_WAIT state, and the listening server is still waiting for another client connection. (This time we pipe the output of netstat into grep, printing only the lines with our server's well-known port.)

1. When the client reads the EOF character, fgets returns a null pointer and the function str_cli returns to the client's main function.
2. Part of process termination is the closing of all open descriptors, so the client socket is closed by the kernel.
   - At this point, the server socket is in the CLOSE_WAIT state and the client socket is in the FIN_WAIT_2 state.
3. When the server receives the FIN, the server child is blocked in a call to read, and read then returns 0 (EOF).
   - This causes the str_echo function to return to the server child's main.
4. The server child terminates by calling the exit function.
5. The SIGCHLD signal is sent to the parent when the server child terminates.

The child enters the zombie state.


The STAT of the child is now Z (for zombie).

We need to clean up our zombie processes and doing this requires dealing with Unix
signals.

POSIX Signal Handling

POSIX stands for Portable Operating System Interface. It was developed by the IEEE and has also been adopted as an ISO/IEC standard. The first POSIX standard, IEEE Std 1003.1, specified the C language interface to a Unix-like kernel. It covers:
1. Process primitives (fork, exec, signals, and timers)
2. The environment of a process (user IDs, process groups)
3. Files and directories (all the I/O functions)
4. Terminal I/O and the system databases (password file and group file)

A signal is a notification to a process that an event has occurred. Signals are sometimes called software interrupts. Signals usually occur asynchronously; by this we mean that a process doesn't know ahead of time exactly when a signal will occur.

Signals can be sent


By one process to another process (or to itself)
By the kernel to a process

The SIGCHLD signal that we described at the end of the previous section is one that is
sent by the kernel whenever a process terminates, to the parent of the terminating process.

Every signal has a disposition, which is also called the action associated with the signal.
We set the disposition of a signal by calling the sigaction function (described shortly),
and we have three choices for the disposition:


We can provide a function that is called whenever a specific signal occurs. This
function is called a signal handler, and this action is called catching a signal. The
two signals SIGKILL and SIGSTOP cannot be caught. Our function is called with
a single integer argument that is the signal number, and the function returns
nothing. Its function prototype is therefore

   void handler(int signo);

For most signals, calling sigaction and specifying a function to be called when the signal
occurs is all that is required to catch a signal.

We can ignore a signal by setting its disposition to SIG_IGN. The two signals
SIGKILL and SIGSTOP cannot be ignored.
We can set the default disposition for a signal by setting its disposition to
SIG_DFL. The default is normally to terminate a process on receipt of a signal,
with certain signals also generating a core image of the process in its current
working directory. There are a few signals whose default disposition is to be
ignored: SIGCHLD and SIGURG are two examples.

POSIX Signal Semantics


We summarize the following points about signal handling on a POSIX-compliant system:

Once a signal handler is installed, it remains installed. (Older systems removed
the signal handler each time it was executed.)
While a signal handler is executing, the signal being delivered is blocked.
Furthermore, any additional signals that were specified in the sa_mask signal set
passed to sigaction when the handler was installed are also blocked. In Figure 5.6,
we set sa_mask to the empty set, meaning no additional signals are blocked other
than the signal being caught.
If a signal is generated one or more times while it is blocked, it is normally
delivered only one time after the signal is unblocked. That is, by default, Unix
signals are not queued. We will see an example of this in the next section. The
POSIX real-time standard, 1003.1b, defines some reliable signals that are queued,
but we do not use them in this text.
It is possible to selectively block and unblock a set of signals using the
sigprocmask function. This lets us protect a critical region of code by preventing
certain signals from being caught while that region of code is executing.

Handling SIGCHLD Signals


The purpose of the zombie state is to maintain information about the child for the
parent to fetch at some later time. This information includes the process ID of the
child, its termination status, and information on the resource utilization of the child
(CPU time, memory, etc.). If a process terminates, and that process has children in the
zombie state, the parent process ID of all the zombie children is set to 1 (the init
process), which will inherit the children and clean them up (i.e., init will wait for
them, which removes the zombie). Some Unix systems show the COMMAND
column for a zombie process as <defunct>.

Handling Zombies

Obviously we do not want to leave zombies around. They take up space in the kernel,
and eventually we can run out of processes. Whenever we fork children, we must wait
for them to prevent them from becoming zombies. To do this, we establish a signal
handler to catch SIGCHLD, and within the handler, we call wait. We establish the
signal handler by adding the function call

   Signal(SIGCHLD, sig_chld);

If we compile this program, with the call to Signal and our sig_chld handler, under
Solaris 9 and use the signal function from the system library, we have the following:


The sequence of steps is as follows:

1. We terminate the client by typing our EOF character. The client TCP sends a
FIN to the server and the server responds with an ACK.
2. The receipt of the FIN delivers an EOF to the child's pending readline. The
child terminates.
3. The parent is blocked in its call to accept when the SIGCHLD signal is
delivered. The sig_chld function executes (our signal handler), wait fetches
the child's PID and termination status, and printf is called from the signal
handler. The signal handler returns.
4. Since the signal was caught by the parent while the parent was blocked in a
slow system call (accept), the kernel causes the accept to return an error of
EINTR (interrupted system call). The parent does not handle this error, so it
aborts.

wait() and waitpid() Functions

In the previous section, we called the wait function to handle the terminated child.

wait and waitpid both return two values: the return value of the function is the process ID
of the terminated child, and the termination status of the child (an integer) is returned
through the statloc pointer. There are three macros that we can call that examine the
termination status and tell us if the child terminated normally, was killed by a signal, or
was just stopped by job control. Additional macros let us then fetch the exit status of the
child, or the value of the signal that killed the child, or the value of the job-control signal
that stopped the child.
If there are no terminated children for the process calling wait, but the process has one or
more children that are still executing, then wait blocks until the first of the existing
children terminates.
waitpid gives us more control over which process to wait for and whether or not to block.
First, the pid argument lets us specify the process ID that we want to wait for. A value of
-1 says to wait for the first of our children to terminate. (There are other options, dealing
with process group IDs, but we do not need them in this text.) The options argument lets
us specify additional options. The most common option is WNOHANG. This option tells
the kernel not to block if there are no terminated children.


Difference between wait and waitpid

We now illustrate the difference between the wait and waitpid functions when used to
clean up terminated children. The client establishes five connections with the server and
then uses only the first one (sockfd[0]) in the call to str_cli. The purpose of establishing
multiple connections is to spawn multiple children from the concurrent server.

Figure. Client with five established connections to same concurrent server.

Termination of Server Process

We will now start our client/server and then kill the server child process. This simulates
the crashing of the server process, so we can see what happens to the client, the following
steps take place:
1. We start the server and client and type one line to the client to verify that all is
   okay. That line is echoed normally by the server child.
2. We find the process ID of the server child and kill it. As part of process
   termination, all open descriptors in the child are closed. This causes a FIN to be
   sent to the client, and the client TCP responds with an ACK. This is the first half
   of the TCP connection termination.
3. The SIGCHLD signal is sent to the server parent and handled correctly.
4. Nothing happens at the client. The client TCP receives the FIN from the server
   TCP and responds with an ACK, but the problem is that the client process is
   blocked in the call to fgets waiting for a line from the terminal.
5. Running netstat at this point shows the state of the sockets.


We can still type a line of input to the client. Here is what happens at the client
starting from Step 1

The client process will not see the RST because it calls readline immediately
after the call to writen and readline returns 0 (EOF) immediately because of the
FIN that was received in Step 2. Our client is not expecting to receive an EOF at
this point so it quits with the error message "server terminated prematurely."
When the client terminates (by calling err_quit in Figure 5.5), all its open
descriptors are closed.
What we have described also depends on the timing of the example. The client's call to
readline may happen before the server's RST is received by the client, or it may happen
after. If the readline happens before the RST is received, as we've shown in our example,
the result is an unexpected EOF in the client. But if the RST arrives first, the result is an
ECONNRESET ("Connection reset by peer") error return from readline.

SIGPIPE Signal
What happens if the client ignores the error return from readline and writes more data to
the server? This can happen, for example, if the client needs to perform two writes to the
server before reading anything back, with the first write eliciting the RST.
The rule that applies is: When a process writes to a socket that has received an RST, the
SIGPIPE signal is sent to the process. The default action of this signal is to terminate the
process, so the process must catch the signal to avoid being involuntarily terminated.
If the process either catches the signal and returns from the signal handler, or ignores the
signal, the write operation returns EPIPE.


Boundary Conditions

Crashing of Server Host

This scenario will test to see what happens when the server host crashes. To simulate this,
we must run the client and server on different hosts. We then start the server, start the
client, type in a line to the client to verify that the connection is up, disconnect the server
host from the network, and type in another line at the client. This also covers the scenario
of the server host being unreachable when the client sends data (i.e., some intermediate
router goes down after the connection has been established).
The following steps take place:
When the server host crashes, nothing is sent out on the existing network
connections. That is, we are assuming the host crashes and is not shut down by an
operator.
We type a line of input to the client, it is written by writen, and is sent by the
client TCP as a data segment. The client then blocks in the call to readline,
waiting for the echoed reply.
If we watch the network with tcpdump, we will see the client TCP continually
retransmitting the data segment, trying to receive an ACK from the server.
Berkeley-derived implementations retransmit the data segment 12 times,
waiting for around 9 minutes before giving up. When the client TCP finally gives
up (assuming the server host has not been rebooted during this time, or, if the
server host has not crashed but was unreachable on the network, assuming the
host was still unreachable), an error is returned to the client process. Since the
client is blocked in the call to readline, it returns an error. Assuming the server
host crashed and there were no responses at all to the client's data segments, the
error is ETIMEDOUT. But if some intermediate router determined that the server
host was unreachable and responded with an ICMP "destination unreachable"
message, the error is either EHOSTUNREACH or ENETUNREACH.

Crashing and Rebooting of Server Host


In this scenario, we will establish a connection between the client and server and then
assume the server host crashes and reboots. In the previous section, the server host was
still down when we sent it data. Here, we will let the server host reboot before sending it
data. The easiest way to simulate this is to establish the connection, disconnect the server
from the network, shut down the server host and then reboot it, and then reconnect the
server host to the network. We do not want the client to see the server host shut down

1. We start the server and then the client. We type a line to verify that the connection
   is established.
2. The server host crashes and reboots.
3. We type a line of input to the client, which is sent as a TCP data segment to the
   server host.
4. When the server host reboots after crashing, its TCP loses all information about
   connections that existed before the crash. Therefore, the server TCP responds to
   the received data segment from the client with an RST.
5. Our client is blocked in the call to readline when the RST is received, causing
   readline to return the error ECONNRESET.

Shutdown of Server Host


The previous two sections discussed the crashing of the server host, or the server host
being unreachable across the network. We now consider what happens if the server host
is shut down by an operator while our server process is running on that host.
When a Unix system is shut down, the init process normally sends the SIGTERM signal
to all processes (we can catch this signal), waits some fixed amount of time (often
between 5 and 20 seconds), and then sends the SIGKILL signal (which we cannot catch)
to any processes still running. This gives all running processes a short amount of time to
clean up and terminate. If we do not catch SIGTERM and terminate, our server will be
terminated by the SIGKILL signal. When the process terminates, all open descriptors are
closed.

As stated there, we must use the select or poll function in our client to have the client
detect the termination of the server process as soon as it occurs.

I/O Multiplexing

Introduction
TCP client handling two inputs at the same time: standard input and a TCP socket. We
encountered a problem when the client was blocked in a call to fgets (on standard input)
and the server process was killed. The server TCP correctly sent a FIN to the client TCP,
but since the client process was blocked reading from standard input, it never saw the
EOF until it read from the socket (possibly much later). What we need is the capability to
tell the kernel that we want to be notified if one or more I/O conditions are ready (i.e.,
input is ready to be read, or the descriptor is capable of taking more output). This
capability is called I/O multiplexing and is provided by the select and poll functions.

We will also cover a newer POSIX variation of the former, called pselect. Some systems
provide more advanced ways for processes to wait for a list of events. A poll device is
one mechanism provided in different forms by different vendors.

I/O multiplexing is typically used in networking applications in the following scenarios:

When a client is handling multiple descriptors (normally interactive input and a
network socket), I/O multiplexing should be used. This is the scenario we
described previously.
It is possible, but rare, for a client to handle multiple sockets at the same time.
If a TCP server handles both a listening socket and its connected sockets, I/O
multiplexing is normally used.
If a server handles both TCP and UDP, I/O multiplexing is normally used.
If a server handles multiple services and perhaps multiple protocols, I/O
multiplexing is normally used.

I/O multiplexing is not limited to network programming. Many nontrivial applications
find a need for these techniques.

I/O Models
Before describing select and poll, we need to step back and look at the bigger picture,
examining the basic differences in the five I/O models that are available to us under Unix:

blocking I/O
nonblocking I/O
I/O multiplexing (select and poll)
signal driven I/O (SIGIO)
asynchronous I/O (the POSIX aio_functions)

As we show in all the examples in this section, there are normally two distinct phases for
an input operation:
Waiting for the data to be ready
Copying the data from the kernel to the process

For an input operation on a socket, the first step normally involves waiting for data to
arrive on the network. When the packet arrives, it is copied into a buffer within the
kernel. The second step is copying this data from the kernel's buffer into our application
buffer.
Blocking I/O Model
The most prevalent model for I/O is the blocking I/O model, which we have used for all
our examples so far in the text. By default, all sockets are blocking. Using a datagram
socket for our examples

We use UDP for this example instead of TCP because with UDP, the concept of data
being "ready" to read is simple: either an entire datagram has been received or it has not.
With TCP it gets more complicated, as additional variables such as the socket's low-water
mark come into play.

In the examples in this section, we also refer to recvfrom as a system call because we are
differentiating between our application and the kernel. Regardless of how recvfrom is
implemented (as a system call on a Berkeley-derived kernel or as a function that invokes
the getmsg system call on a System V kernel), there is normally a switch from running in
the application to running in the kernel, followed at some time later by a return to the
application.

The process calls recvfrom and the system call does not return until the datagram arrives
and is copied into our application buffer, or an error occurs. The most common error is
the system call being interrupted by a signal. We say that our process is blocked the entire
time from when it calls recvfrom until it returns. When recvfrom returns successfully, our
application processes the datagram.


Nonblocking I/O Model


When we set a socket to be nonblocking, we are telling the kernel "when an I/O operation
that I request cannot be completed without putting the process to sleep, do not put the
process to sleep, but return an error instead

The first three times that we call recvfrom, there is no data to return, so the kernel
immediately returns an error of EWOULDBLOCK instead. The fourth time we
call recvfrom, a datagram is ready; it is copied into our application buffer, and
recvfrom returns successfully. We then process the data.
When an application sits in a loop calling recvfrom on a nonblocking descriptor
like this, it is called polling. The application is continually polling the kernel to
see if some operation is ready. This is often a waste of CPU time, but this model is
occasionally encountered, normally on systems dedicated to one function.

I/O Multiplexing Model

With I/O multiplexing, we call select or poll and block in one of these two system calls,
instead of blocking in the actual I/O system call.

We block in a call to select, waiting for the datagram socket to be readable. When select
returns that the socket is readable, we then call recvfrom to copy the datagram into our
application buffer.

Another closely related I/O model is to use multithreading with blocking I/O. That model
very closely resembles the model described above, except that instead of using select to
block on multiple file descriptors, the program uses multiple threads (one per file
descriptor), and each thread is then free to call blocking system calls like recvfrom.

Signal-Driven I/O Model


We can also use signals, telling the kernel to notify us with the SIGIO signal when the
descriptor is ready

We first enable the socket for signal-driven I/O and install a signal handler using the
sigaction system call. The return from this system call is immediate and our process
continues; it is not blocked. When the datagram is ready to be read, the SIGIO signal is
generated for our process. We can either read the datagram from the signal handler by
calling recvfrom and then notify the main loop that the data is ready to be processed, or
we can notify the main loop and let it read the datagram.
Regardless of how we handle the signal, the advantage to this model is that we are not
blocked while waiting for the datagram to arrive. The main loop can continue executing
and just wait to be notified by the signal handler that either the data is ready to process or
the datagram is ready to be read.

Asynchronous I/O Model

Asynchronous I/O is defined by the POSIX specification, and various differences in the
realtime functions that appeared in the various standards which came together to form the
current POSIX specification have been reconciled. In general, these functions work by
telling the kernel to start the operation and to notify us when the entire operation
(including the copy of the data from the kernel to our buffer) is complete. The main
difference between this model and the signal-driven I/O model in the previous section is
that with signal-driven I/O, the kernel tells us when an I/O operation can be initiated, but
with asynchronous I/O, the kernel tells us when an I/O operation is complete.

We call aio_read (the POSIX asynchronous I/O functions begin with aio_ or lio_) and
pass the kernel the descriptor, buffer pointer, buffer size (the same three arguments for
read), file offset (similar to lseek), and how to notify us when the entire operation is
complete. This system call returns immediately and our process is not blocked while
waiting for the I/O to complete. We assume in this example that we ask the kernel to
generate some signal when the operation is complete. This signal is not generated until
the data has been copied into our application buffer, which is different from the
signal-driven I/O model.
Synchronous I/O versus Asynchronous I/O
POSIX defines these two terms as follows:


A synchronous I/O operation causes the requesting process to be blocked until
that I/O operation completes.
An asynchronous I/O operation does not cause the requesting process to be
blocked.

Using these definitions, the first four I/O models (blocking, nonblocking, I/O
multiplexing, and signal-driven I/O) are all synchronous because the actual I/O
operation (recvfrom) blocks the process. Only the asynchronous I/O model matches the
asynchronous I/O definition.

Select() Function
This function allows the process to instruct the kernel to wait for any one of multiple
events to occur and to wake up the process only when one or more of these events occurs
or when a specified amount of time has passed.
As an example, we can call select and tell the kernel to return only when:
Any of the descriptors in the set {1, 4, 5} are ready for reading
Any of the descriptors in the set {2, 7} are ready for writing
Any of the descriptors in the set {1, 4} have an exception condition pending
10.2 seconds have elapsed

That is, we tell the kernel what descriptors we are interested in (for reading, writing, or an
exception condition) and how long to wait. The descriptors in which we are interested are
not restricted to sockets; any descriptor can be tested using select.

There are three possibilities:


Wait forever: Return only when one of the specified descriptors is ready for I/O.
For this, we specify the timeout argument as a null pointer.

Wait up to a fixed amount of time: Return when one of the specified descriptors
is ready for I/O, but do not wait beyond the number of seconds and microseconds
specified in the timeval structure pointed to by the timeout argument.

Do not wait at all: Return immediately after checking the descriptors. This is
called polling. To specify this, the timeout argument must point to a timeval
structure and the timer value (the number of seconds and microseconds specified
by the structure) must be 0.

Shutdown() Function

The normal way to terminate a network connection is to call the close function. But, there
are two limitations with close that can be avoided with shutdown:

close decrements the descriptor's reference count and closes the socket only if the
count reaches 0. With shutdown, we can initiate TCP's normal connection
termination sequence, regardless of the reference count.
close terminates both directions of data transfer, reading and writing. Since a TCP
connection is full-duplex, there are times when we want to tell the other end that
we have finished sending, even though that end might have more data to send us.
This is the scenario we encountered in the previous section with batch input to our
str_cli function.

Figure shows the typical function calls in this scenario.


Poll() Function

The poll function originated with SVR3 and was originally limited to STREAMS devices.
SVR4 removed this limitation, allowing poll to work with any descriptor. poll provides
functionality that is similar to select, but poll provides additional information when
dealing with STREAMS devices.


The first argument is a pointer to the first element of an array of structures. Each element
of the array is a pollfd structure that specifies the conditions to be tested for a given
descriptor.

The conditions to be tested are specified by the events member, and the function returns
the status for that descriptor in the corresponding revents member. (Having two variables
per descriptor, one a value and one a result, avoids value-result arguments. Recall that the
middle three arguments for select are value-result.) Each of these two members is
composed of one or more bits that specify a certain condition.
