Subject Name: Data Communication
Subject Code: EC-603
Semester: 6th
UNIT - IV
Transport Layer
Connection-oriented protocol services are often, but not always, reliable network services that provide acknowledgment after successful delivery and automatic repeat request in case of missing data or detected bit errors. ATM, Frame Relay and MPLS are examples of connection-oriented but unreliable protocols.

Circuit switching
Circuit-switched communication systems, for example the public switched telephone network, ISDN, SONET/SDH and optical mesh networks, are intrinsically connection-oriented. Circuit-mode communication guarantees that data will arrive with constant bandwidth and constant delay, and provides in-order delivery of a bit stream or byte stream. The switches are reconfigured during a circuit establishment phase.

Virtual circuit switching
Packet-switched communication may also be connection-oriented, in which case it is called virtual circuit mode communication. Because of packet switching, the communication may suffer from variable bit rate and delay, due to varying traffic load and packet queue lengths. Connection-oriented communication is not necessarily reliable. Because they can keep track of a conversation, connection-oriented protocols are sometimes described as stateful.

Transport layer connection mode communication
Connection-oriented transport layer protocols provide connection-oriented communication over connectionless communication systems. A connection-oriented transport layer protocol, such as TCP, may be based on a connectionless network layer protocol (such as IP), but still achieves in-order delivery of a byte stream by means of segment sequence numbering on the sender side and packet buffering and reordering on the receiver side. The sequence numbering requires two-way synchronization of segment counters during a three-step connection establishment phase (the three-way handshake).
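A rough way to picture the receiver side of this scheme is sketched below in Python. The class, buffer layout and method names are illustrative assumptions, not TCP's actual data structures; the point is only that sequence numbers let out-of-order arrivals be buffered and handed to the application in order.

```python
# Illustrative sketch: in-order delivery over an unreliable, connectionless
# network using per-segment sequence numbers (names are hypothetical).

class InOrderReceiver:
    def __init__(self):
        self.expected = 0      # next sequence number the application needs
        self.buffer = {}       # out-of-order segments, keyed by sequence number

    def on_segment(self, seq_num, data):
        """Called whenever a segment arrives, possibly out of order."""
        self.buffer[seq_num] = data
        delivered = []
        # Hand over every segment that is now contiguous with what was delivered.
        while self.expected in self.buffer:
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered       # in-order data ready for the application

receiver = InOrderReceiver()
print(receiver.on_segment(1, b"world"))   # [] -- segment 0 still missing
print(receiver.on_segment(0, b"hello "))  # [b'hello ', b'world']
```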
4.2 TCP (Transmission Control Protocol)
TCP (Transmission Control Protocol) is a standard that defines how to establish and maintain a
network conversation via which application programs can exchange data. TCP works with the
Internet Protocol (IP), which defines how computers send packets of data to each other.
Together, TCP and IP are the basic rules defining the Internet. TCP is defined by the Internet
Engineering Task Force (IETF) in Request for Comments (RFC) 793.
For example, when a Web server sends an HTML file to a client, it uses the HTTP protocol to do
so. The HTTP program layer asks the TCP layer to set up the connection and send the file. The
TCP stack divides the file into packets, numbers them and then forwards them individually to the
IP layer for delivery. Although each packet in the transmission will have the same source and
destination IP addresses, packets may be sent along multiple routes. The TCP program layer in
the client computer waits until all of the packets have arrived, acknowledges those it receives,
requests retransmission of any it does not (based on missing packet numbers), and then
reassembles them into a file and delivers the file to the receiving application.
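This exchange can be reproduced with a few lines of Python's standard socket module; the connection setup (the three-way handshake), segmentation, acknowledgment and reassembly all happen inside the TCP stack, so the application only sees a byte stream. A minimal sketch, assuming network access and using example.com as a placeholder host:

```python
import socket

# Open a connection-oriented TCP stream to a web server (placeholder host).
# The operating system's TCP implementation performs the handshake here.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # The application writes a byte stream; TCP splits it into segments,
    # numbers them, and retransmits any that are lost.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    response = b""
    while True:
        chunk = sock.recv(4096)   # reassembled, in-order bytes from TCP
        if not chunk:
            break                 # server closed the connection
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"
```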
Flow control is a function for the control of the data flow within an OSI layer or between
adjacent layers. In other words, it limits the amount of data transmitted by the sending transport
entity to a level, or rate, that the receiver can manage.
Flow control is a good example of a protocol function that must be implemented in several
layers of the OSI architecture model. At the transport level, flow control allows the transport
protocol entity in a host to restrict the flow of data over a logical connection from the transport
protocol entity in another host. However, one of the services of the network level is to prevent
congestion. Thus the network level also uses flow control to restrict the flow of network protocol
data units (NPDUs).
The flow control mechanisms used in the transport layer vary for the different classes of service.
Since the classes of service are determined by the quality of service of the underlying data
network that transports the transport protocol data units (TPDUs), it is this quality of service
that influences the type of flow control used.
Thus flow control is a much more complex issue at the transport layer than at lower levels such as
the data link layer, for two reasons:
o Flow control must interact with transport users, transport entities, and the network service.
o There are long and variable transmission delays between transport entities.
Flow control causes queuing amongst transport users, transport entities, and the network service.
Here we look at the four possible queues that form and the control policies at work.
The transport entity is responsible for generating one or more transport protocol data units
(TPDUs) for passing onto the network layer. The network layer delivers the TPDUs to the
receiving transport entity which then takes out the data and passes it on to the destination user.
There are two reasons why the receiving transport entity would want to control the flow of
TPDUs:
o the user of the receiving transport entity cannot keep up with the flow of data; or
o the receiving transport entity itself cannot keep up with the flow of TPDUs.
When we say that a user or transport entity cannot keep up with the data flow, we mean that
the receiving buffers are filling too quickly and will overflow and lose data unless the rate of
incoming data is slowed.
There are different issues to be considered with transport flow control over different levels of
network service. The less reliable the network service provided, the more complex the flow
control mechanism the transport layer may need to use. The credit scheme works well with the
different network services, although specific issues need to be addressed with a reliable
non-sequencing network service and with an unreliable network service.
The credit scheme seems most suited for flow control in the transport layer with all types of
network service. It gives the receiver the best control over data flow and helps provide a smooth
traffic flow. Sequence numbering of credit allocations handles the arrival of ACK/CREDIT
TPDUs out of order, and a window timer will ensure deadlock does not occur in a network
environment where TPDUs can be lost.
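A minimal sketch of the credit idea follows, assuming a simplified sender that may transmit only while it holds unused credit and that each ACK/CREDIT carries a fresh allocation. The class and method names are illustrative, not taken from any particular protocol implementation.

```python
# Illustrative credit-scheme sketch (names are hypothetical).
# The receiver grants CREDIT for a number of TPDUs; the sender may only
# transmit while unused credit remains, then must wait for a new allocation.

class CreditSender:
    def __init__(self, initial_credit):
        self.next_seq = 0
        self.credit = initial_credit      # TPDUs the receiver has authorised

    def send_ready(self):
        return self.credit > 0

    def send_tpdu(self, data):
        assert self.send_ready(), "no credit: sender must wait"
        tpdu = (self.next_seq, data)
        self.next_seq += 1
        self.credit -= 1                  # each TPDU consumes one credit
        return tpdu

    def on_ack_credit(self, acked_seq, new_credit):
        # ACK/CREDIT from the receiver: acknowledges data up to acked_seq
        # and grants a fresh window of new_credit TPDUs.
        self.credit = new_credit

sender = CreditSender(initial_credit=2)
print(sender.send_tpdu(b"A"), sender.send_tpdu(b"B"))  # uses up both credits
print(sender.send_ready())                             # False: must wait
sender.on_ack_credit(acked_seq=1, new_credit=3)        # receiver opens the window
print(sender.send_ready())                             # True again
```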
4.5 UDP
When an IP datagram is too large for the maximum transmission unit (MTU) of the underlying
data link layer technology used for the next leg of its journey, it must be fragmented before it can
be sent across the network. The higher-layer message to be transmitted is not sent in a single IP
datagram but rather broken down into pieces called fragments that are sent separately. In some
cases, the fragments themselves may need to be fragmented further.
When we fragment a message, we turn a single datagram into many, which introduces several new
issues to be concerned with:
o Sequencing and Placement: The fragments will typically be sent in sequential order
from the beginning of the message to the end, but they won't necessarily show up in the
order in which they were sent. The receiving device must be able to determine the
sequence of the fragments to reassemble them in the correct order. In fact, some
implementations send the last fragment first, so the receiving device will immediately
know the full size of the original complete datagram. This makes keeping track of the
order of fragments even more essential.
o Separation of Fragmented Messages: A source device may need to send more than one
fragmented message at a time; or, it may send multiple datagrams that are fragmented en
route. This means the destination may be receiving multiple sets of fragments that must
be put back together. Imagine a box into which the pieces from two, three or more jigsaw
puzzles have been mixed and you understand this issue.
o Completion: The destination device has to be able to tell when it has received all of the
fragments so it knows when to start reassembly (or when to give up if it didn't get all the
pieces).
To address these concerns and allow the proper reassembly of the fragmented message, IP
includes several fields in the IP datagram header that convey information from the source to the
destination about the fragments. Some of these contain a common value for all the fragments of
the message, while others are different for each fragment.
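The arithmetic behind those fields can be sketched as follows, assuming a 20-byte header with no options and remembering that the fragment offset is carried in 8-byte units, so every fragment payload except the last must be a multiple of 8 bytes. This is a simplified illustration, not a full IP implementation; the function and field names are chosen for readability.

```python
# Simplified sketch of how an IP datagram is split to fit an MTU.
# Offsets are carried in 8-byte units; the More Fragments (MF) flag is set
# on every fragment except the last.

IP_HEADER_LEN = 20   # bytes, assuming no options

def fragment(payload, mtu, identification):
    max_data = (mtu - IP_HEADER_LEN) // 8 * 8   # payload per fragment, multiple of 8
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)
        fragments.append({
            "identification": identification,    # same value in every fragment
            "fragment_offset": offset // 8,      # position, in 8-byte units
            "more_fragments": more,              # MF flag
            "data_len": len(chunk),
        })
        offset += len(chunk)
    return fragments

# A 4000-byte payload over a link with a 1500-byte MTU -> three fragments.
for frag in fragment(bytes(4000), mtu=1500, identification=0x1c46):
    print(frag)
```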
In the Open Systems Interconnection (OSI) communications model, the session layer resides at
Layer 5 and manages the setup and teardown of the association between two communicating
endpoints. The communication between the two endpoints is known as the connection. A
connection is established and maintained while the two endpoint applications are communicating
back and forth in a conversation, or session, of some duration. Lower-level protocols are
responsible for the actual transmission of data. However, this is typically done in short-lived
transmissions. The session layer builds a transmission bridge to provide more efficient long-term
transport, as well as a way to better organize simultaneous communication of multiple network
applications.
When the communication between network applications is complete, session layer services
terminate the connection. Some connections last only long enough to send a message in one
direction. This is known as a simplex transmission. Another popular transmission mode is
half-duplex, in which communication is bidirectional but occurs in only one direction at a time.
Finally, full-duplex connections enable bidirectional communication that occurs simultaneously.
The session layer is also responsible for masking potential transport layer failures from upper-
layer protocols. This includes mechanisms to handle errors in endpoint transmit/receive
synchronization, transmission checkpoints and connection recovery. Additionally, sessions help
to group multiple transport streams that belong to a specific application. Different data streams
can then be combined and synchronized at the destination endpoint before the received streams are
passed up the stack to the presentation layer.
Examples of session layer protocols include X.225, AppleTalk and Zone Information Protocol
(ZIP). Technically speaking, TCP/IP does not use an exclusive session layer. Instead, session and
presentation services are handled at the application layer within the TCP/IP model.
In summary, the session layer:
o sets up, tears down and manages the communication between two application endpoints;
o builds semi-permanent transport bridges for more efficiency and data stream organization;
o masks communication failures from upper-layer services in the OSI model; and
o provides dialog management and synchronization, described below.
Dialog management:
Deciding whose turn it is to talk. Some applications operate in half-duplex mode, whereby the
two sides alternate between sending and receiving messages, and never send data
simultaneously. In the ISO protocols, dialog management is implemented through the use of
a data token. The token is sent back and forth, and a user may transmit only when it possesses
the token.
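A toy sketch of the data-token rule is given below, with hypothetical class and method names: whichever side holds the token may send, and passing the token hands the turn to the peer. It is a simplified model, not the actual ISO session protocol.

```python
# Toy sketch of half-duplex dialog management with a data token
# (hypothetical names; not the real ISO session protocol machinery).

class SessionEndpoint:
    def __init__(self, name, has_token):
        self.name = name
        self.has_token = has_token

    def send(self, message):
        if not self.has_token:
            raise RuntimeError(f"{self.name} does not hold the token; cannot send")
        print(f"{self.name} sends: {message}")

    def pass_token(self, peer):
        # Give the turn to the other side.
        self.has_token = False
        peer.has_token = True

alice = SessionEndpoint("alice", has_token=True)
bob = SessionEndpoint("bob", has_token=False)

alice.send("request")     # allowed: alice holds the token
alice.pass_token(bob)     # hand over the turn
bob.send("response")      # now bob may transmit
```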
Synchronization:
The transport layer handles only communication errors; synchronization deals with upper-layer
errors. In a file transfer, for instance, the transport layer might deliver data correctly, but the
application layer might be unable to write the file because the file system is full.
Users can split the data stream into pages, inserting synchronization points between pages.
When an error occurs, the receiver can resynchronize the state of the session to a previous
synchronization point. This requires that the sender hold on to data for as long as it may be needed.
Synchronization is achieved through the use of sequence numbers. The ISO protocols provide
both major and minor synchronization points. When resynchronizing, one can only go back as
far as the previous major synchronization point. In addition, major synchronization points are
acknowledged through explicit messages (making their use expensive). In contrast, minor
synchronization points are just markers.
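A minimal sketch of the checkpoint idea, assuming a sender that keeps pages buffered until a major synchronization point is confirmed; on an upper-layer error the receiver asks to resynchronize and the sender replays everything since the last major point. All names and the simplified acknowledgment handling are illustrative assumptions, not the ISO session protocol itself.

```python
# Illustrative sketch of session-layer synchronization points
# (hypothetical structure; acknowledgments are simulated, not exchanged).

class CheckpointedSender:
    def __init__(self):
        self.pages = []      # pages sent since the last acknowledged major point
        self.serial = 0      # running serial number of synchronization points

    def send_page(self, page):
        self.pages.append(page)
        print(f"send page: {page}")

    def minor_sync(self):
        # Minor points are just markers in the stream; no acknowledgment.
        self.serial += 1
        print(f"minor sync point {self.serial}")

    def major_sync(self):
        # Major points are acknowledged explicitly, so buffered pages can be released.
        self.serial += 1
        print(f"major sync point {self.serial} (acknowledged)")
        self.pages.clear()   # pretend the explicit acknowledgment arrived

    def resynchronize(self):
        # On an upper-layer error, replay everything since the last major point.
        print(f"resynchronize: resending {len(self.pages)} page(s)")
        for page in self.pages:
            print(f"resend page: {page}")

sender = CheckpointedSender()
sender.send_page("page-1")
sender.major_sync()          # confirmed checkpoint; page-1 may be discarded
sender.send_page("page-2")
sender.minor_sync()          # cheap marker only
sender.send_page("page-3")
sender.resynchronize()       # e.g. the receiver's file system was full
```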