Unit 3-1
1. INTRODUCTION
The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
It responds to service requests from the session layer and issues service requests to
the network layer.
The transport layer provides transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality of service
needed by the application program.
It is the first true end-to-end layer, implemented in all End Systems (ES).
TRANSPORT LAYER FUNCTIONS / SERVICES
The transport layer is located between the network layer and the application layer.
The transport layer is responsible for providing services to the application layer; it
receives services from the network layer.
The services provided by the transport layer are:
1. Process-to-Process Communication
2. Addressing : Port Numbers
3. Encapsulation and Decapsulation
4. Multiplexing and Demultiplexing
5. Flow Control
6. Error Control
7. Congestion Control
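As a small illustration of process-to-process communication and demultiplexing by port numbers, here is a hedged Python sketch (the loopback address and ports 9001/9002 are arbitrary example values, not taken from these notes):

import socket

# Two "processes" on the same host, each identified by its own port number.
proc_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proc_a.bind(("127.0.0.1", 9001))    # process A listens on port 9001

proc_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proc_b.bind(("127.0.0.1", 9002))    # process B listens on port 9002

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for A", ("127.0.0.1", 9001))   # demultiplexed to process A
sender.sendto(b"for B", ("127.0.0.1", 9002))   # demultiplexed to process B

print(proc_a.recvfrom(1024)[0])    # b'for A'
print(proc_b.recvfrom(1024)[0])    # b'for B'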
Advantages of UDP
Disadvantages of UDP
There is no way to acknowledge the successful transfer of data.
UDP has no mechanism to track the sequence of data.
UDP is connectionless, and because of this it is an unreliable way to transfer data.
In case of congestion or collision, routers drop UDP packets in preference to TCP packets.
UDP drops packets when errors are detected and does not retransmit them.
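To make the connectionless, unacknowledged nature of UDP concrete, a minimal sketch (the loopback address and port 9999 are arbitrary assumptions):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: no connection setup
sock.sendto(b"hello", ("127.0.0.1", 9999))                # the datagram is simply sent
# No acknowledgment ever comes back: if the datagram is lost or dropped,
# the sender is not informed and nothing is retransmitted.
sock.close()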
TCP (Transmission Control Protocol) is one of the main protocols of the Internet protocol suite. It lies
between the application and network layers and provides reliable delivery services. It is a
connection-oriented protocol that enables the exchange of messages between different devices over a
network. TCP works together with the Internet Protocol (IP), which defines how data packets are sent
between computers.
Advantages of TCP
Disadvantages of TCP
The main differences between TCP (Transmission Control Protocol) and UDP (User Datagram
Protocol) are:
Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP and Telnet. UDP is used by DNS, DHCP, TFTP,
SNMP, RIP, and VoIP.
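For contrast with the UDP sketch above, a hedged sketch of a TCP exchange over the loopback interface (port 8080 and the messages are arbitrary illustration values; only the standard socket API is used):

import socket
import threading
import time

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP socket
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)
    conn, addr = srv.accept()        # connection established (three-way handshake)
    data = conn.recv(1024)           # bytes arrive reliably and in order
    conn.sendall(b"got: " + data)
    conn.close()
    srv.close()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                      # give the server a moment to start listening

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 8080))     # the connection must be set up before data flows
cli.sendall(b"hello over TCP")
print(cli.recv(1024))                # b'got: hello over TCP'
cli.close()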
IP address
An IP address is a unique address that identifies a device on the internet or a local network. IP
stands for "Internet Protocol," which is the set of rules governing the format of data sent via the
internet or local network.
In essence, IP addresses are the identifier that allows information to be sent between devices on a
network: they contain location information and make devices accessible for communication. The
internet needs a way to differentiate between different computers, routers, and websites. IP
addresses provide a way of doing so and form an essential part of how the internet works.
IP addresses are not random. They are mathematically produced and allocated by the Internet
Assigned Numbers Authority (IANA), a division of the Internet Corporation for Assigned Names
and Numbers (ICANN). ICANN is a non-profit organization that was established in the United
States in 1998 to help maintain the security of the internet and allow it to be usable by all. Each
time anyone registers a domain on the internet, they go through a domain name registrar, who pays
a small fee to ICANN to register the domain.
The use of IP addresses typically happens behind the scenes. The process works like this:
1. Your device indirectly connects to the internet by connecting at first to a network connected
to the internet, which then grants your device access to the internet.
2. When you are at home, that network will probably be your Internet Service Provider (ISP).
At work, it will be your company network.
3. Your IP address is assigned to your device by your ISP.
4. Your internet activity goes through the ISP, and they route it back to you, using your IP
address. Since they are giving you access to the internet, it is their role to assign an IP
address to your device.
5. However, your IP address can change. For example, turning your modem or router on or off
can change it. Or you can contact your ISP, and they can change it for you.
6. When you are out and about – for example, traveling – and you take your device with you,
your home IP address does not come with you. This is because you will be using another
network (Wi-Fi at a hotel, airport, or coffee shop, etc.) to access the internet and will be
using a different (and temporary) IP address, assigned to you by the ISP of the hotel, airport
or coffee shop.
Types of IP addresses
Private IP addresses
Every device that connects to your internet network has a private IP address. This includes
computers, smartphones, and tablets but also any Bluetooth-enabled devices like speakers, printers,
or smart TVs. With the growing internet of things, the number of private IP addresses you have at
home is probably growing. Your router needs a way to identify these items separately, and many
items need a way to recognize each other. Therefore, your router generates private IP addresses that
are unique identifiers for each device that differentiate them on the network.
Public IP addresses
A public IP address is the primary address associated with your whole network. While each
connected device has its own IP address, they are also included within the main IP address for your
network. As described above, your public IP address is provided to your router by your ISP.
Typically, ISPs have a large pool of IP addresses that they distribute to their customers. Your public
IP address is the address that all the devices outside your internet network will use to recognize
your network.
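As a quick illustration, Python's standard ipaddress module can tell a private address from a public one (the two sample addresses below are arbitrary examples):

import ipaddress

print(ipaddress.ip_address("192.168.1.10").is_private)   # True  - typical home-network (private) address
print(ipaddress.ip_address("66.94.29.13").is_private)    # False - publicly routable address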
Versions of IP Address
IPv4
IPv4 stands for Internet Protocol version 4. An IPv4 address consists of two parts: the network address
and the host address. IPv4 was introduced by DARPA in 1981; it was first deployed for production on
SATNET in 1982 and on the ARPANET in January 1983.
IPv4 addresses are 32-bit integers that are usually expressed in dotted decimal notation: four numbers
in the range 0-255 separated by dots, which computers convert to binary (0s and 1s). For example, an
IPv4 address can be written as 189.123.123.90.
The IPv4 address format is therefore a 32-bit address made up of binary digits, written as four groups
separated by dots (.).
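A short sketch of the dotted-decimal to 32-bit conversion described above, using the example address from these notes:

# Convert the dotted-decimal IPv4 address into its 32-bit binary form.
addr = "189.123.123.90"
octets = [int(part) for part in addr.split(".")]          # [189, 123, 123, 90]
value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
print(value)                    # the address as a single 32-bit integer
print(format(value, "032b"))    # the same address as 32 binary digits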
IPv6
IPv6 stands for Internet Protocol version 6 and is the successor to IPv4. It was first introduced in
December 1995 by the Internet Engineering Task Force (IETF). IP version 6 is the newer version of the
Internet Protocol and improves on IP version 4 in terms of complexity and efficiency. An IPv6 address
is 128 bits long (128 binary 0s and 1s) and is written as eight groups of hexadecimal digits separated
by colons (:).
IPv4 vs IPv6:
Address length: IPv4 has a 32-bit address length; IPv6 has a 128-bit address length.
Packet flow identification: not available in IPv4; available in IPv6 using the Flow Label field in the header.
Message transmission scheme: IPv4 uses broadcast; IPv6 provides multicast and anycast transmission.
Encryption and authentication: not provided in IPv4; provided in IPv6.
Conversion: IPv4 can be converted to IPv6, but not all IPv6 addresses can be converted to IPv4.
Example of IPv4: 66.94.29.13
Example of IPv6: 2001:0000:3238:DFE1:0063:0000:0000:FEFB
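The same ipaddress module can expand or compress the IPv6 example above (a sketch; the output shown in the comments assumes the standard library behaviour):

import ipaddress

addr = ipaddress.ip_address("2001:0000:3238:DFE1:0063:0000:0000:FEFB")
print(addr.exploded)        # 2001:0000:3238:dfe1:0063:0000:0000:fefb (all 8 groups written out)
print(addr.compressed)      # 2001:0:3238:dfe1:63::fefb (longest run of zero groups collapsed)
print(addr.max_prefixlen)   # 128 (bits in an IPv6 address)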
Connection-Oriented vs Connectionless Service
Comparison parameters 5 and 6:
5. Data Packet Path: In a connection-oriented service, all data packets are received in the same order
as those sent by the sender. In a connectionless service, not all data packets are received in the same
order as those sent by the sender.
6. Bandwidth Requirement: A connection-oriented service requires higher bandwidth to transfer the
data packets; a connectionless service requires low bandwidth.
Routing Algorithm
A routing algorithm is a procedure that lays down the route or path to transfer data packets from
source to destination. Routing algorithms help direct Internet traffic efficiently. After a data packet
leaves its source, it can choose among many different paths to reach its destination. A routing algorithm
mathematically computes the best path, i.e. the "least-cost path", that the packet can be routed through.
1. Flooding
Flooding is a non-adaptive routing technique following this simple method: when a data packet
arrives at a router, it is sent to all the outgoing links except the one it has arrived on.
For example, let us consider the network in the figure, having six routers that are connected through
transmission lines.
Advantages of Flooding
It is very simple to set up and implement, since a router needs to know only its neighbours.
It is extremely robust. Even if a large number of routers malfunction, the packets still find a way
to reach the destination.
All nodes which are directly or indirectly connected are visited, so no node is left out. This is the
main criterion for broadcast messages.
The shortest path is always used, since every possible path is tried in parallel.
Limitations of Flooding
Flooding tends to create an infinite number of duplicate data packets, unless some measures
are adopted to damp packet generation.
It is wasteful if a single destination needs the packet, since it delivers the data packet to all
nodes irrespective of the destination.
The network may be clogged with unwanted and duplicate data packets. This may hamper
delivery of other data packets.
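The behaviour described above can be sketched on a toy topology (the six-router graph and the hop limit used to damp duplication are assumptions for illustration, not the figure from these notes):

# Sketch: flood a packet through a small router graph.
# Every router forwards a copy on all outgoing links except the one the
# packet arrived on; a hop limit keeps duplicates from circulating forever.
links = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "F"],
    "E": ["C", "F"],
    "F": ["D", "E"],
}

def flood(source, hop_limit=4):
    copies = 0
    queue = [(source, None, hop_limit)]          # (router, arrived-from, hops left)
    while queue:
        router, came_from, hops = queue.pop(0)
        copies += 1
        if hops == 0:
            continue
        for neighbour in links[router]:
            if neighbour != came_from:           # all outgoing links except the incoming one
                queue.append((neighbour, router, hops - 1))
    print(copies, "copies of the packet were created")

flood("A")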
2. Distance Vector Routing
1. A router transmits its distance vector to each of its neighbours in a routing packet.
2. Each router receives and saves the most recently received distance vector from each of its
neighbours.
3. A router recalculates its distance vector when:
o it receives a distance vector from a neighbour containing different information than
before.
Example – Consider the three routers X, Y and Z shown in the figure. Each router has its own routing
table, and every routing table contains the distance to the destination nodes.
Router X shares its routing table with its neighbours, and the neighbours share their routing tables
with X.
Since the distance from X to Z is smaller when Y is the intermediate node (hop), the routing table of X
is updated accordingly.
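A hedged sketch of this distance-vector update for X, Y and Z (the link costs are invented values, since the figure is not reproduced in these notes):

# One round of distance-vector (Bellman-Ford) updates for routers X, Y, Z.
cost = {("X", "Y"): 2, ("X", "Z"): 7, ("Y", "Z"): 1}   # assumed link costs

# Each router starts with the direct cost to every other router.
table = {
    "X": {"X": 0, "Y": 2, "Z": 7},
    "Y": {"X": 2, "Y": 0, "Z": 1},
    "Z": {"X": 7, "Y": 1, "Z": 0},
}

def link(a, b):
    return cost.get((a, b), cost.get((b, a), float("inf")))

def update(router):
    # For every neighbour's advertised distance, keep the cheaper path.
    for neighbour in table:
        if neighbour == router:
            continue
        for dest, d in table[neighbour].items():
            if link(router, neighbour) + d < table[router][dest]:
                table[router][dest] = link(router, neighbour) + d

update("X")
print(table["X"])   # {'X': 0, 'Y': 2, 'Z': 3} - X now reaches Z through Y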
3. Congestion Control
What is congestion?
Congestion is a state occurring in the network layer when the message traffic is so heavy that it slows
down network response time.
Effects of Congestion
Congestion control refers to the techniques used to control or prevent congestion. Congestion control
techniques can be broadly classified into two categories:
Open Loop Congestion Control
Open loop congestion control policies are applied to prevent congestion before it happens. The
congestion control is handled either by the source or the destination.
1. Retransmission Policy :
It is the policy that governs retransmission of packets. If the sender feels that a sent packet
is lost or corrupted, the packet needs to be retransmitted. This retransmission may increase
congestion in the network.
To prevent congestion, retransmission timers must be designed both to prevent congestion
and to optimize efficiency.
2. Window Policy :
The type of window used at the sender's side may also affect congestion. With a Go-Back-N
window, several packets are re-sent even though some of them may have been received
successfully at the receiver side. This duplication may increase congestion in the network and
make it worse.
Therefore, a Selective Repeat window should be adopted, as it resends only the specific packet
that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by routers is one in which the routers prevent congestion by
partially discarding corrupted or less sensitive packets while still maintaining the quality of
the message.
In case of audio file transmission, for example, routers can discard less sensitive packets to
prevent congestion and still maintain the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgments are also part of the load in the network, the acknowledgment policy
imposed by the receiver may also affect congestion. Several approaches can be used to
prevent acknowledgment-related congestion.
The receiver can send a single acknowledgment for N packets rather than acknowledging each
packet individually, or send an acknowledgment only when it has a packet to send or a timer
expires.
5. Admission Policy :
In the admission policy, a mechanism is used to prevent congestion. Switches in a flow should
first check the resource requirements of a network flow before transmitting it further.
If there is a chance of congestion, or congestion already exists in the network, the router
should refuse to establish a virtual-circuit connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it happens.
Several techniques are used by different protocols; some of them are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its upstream node.
This may cause the upstream node or nodes to become congested and, in turn, reject data from the
nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the
opposite direction of the data flow. It can be applied only to virtual-circuit networks, where each node
has information about its upstream node.
2. Choke Packet :
A choke packet is a packet sent by a congested node directly to the source to inform it of the
congestion, so that the source can reduce its sending rate.
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and the source. The
source guesses that there is congestion somewhere in the network. For example, when the sender sends
several packets and receives no acknowledgment for a while, it may assume that the network is
congested.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source
or the destination to inform it of the congestion. The difference from the choke packet technique is
that here the signal is included in the packets that carry data, rather than in a separate packet as in
the choke packet technique.
Explicit signaling can occur in either the forward or the backward direction.
Stop and Wait Protocol
It is a data-link layer protocol used for transmitting data over noiseless channels. It provides
unidirectional data transmission, which means that either sending or receiving of data takes place at a
time. It provides a flow-control mechanism but does not provide any error-control mechanism.
The idea behind this protocol is that after the sender sends a frame, it waits for the acknowledgment
before sending the next frame.
Sender side
Rule 1: Send one data packet at a time.
Rule 2: Send the next packet only after receiving the acknowledgment of the previous packet.
Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e., send one
packet at a time, and do not send another packet before receiving the acknowledgment.
Receiver side
Rule 1: Receive and consume the data packet.
Rule 2: Once the packet is consumed, send the acknowledgment for it.
Therefore, the idea of the stop and wait protocol on the receiver's side is also very simple: consume
the packet, and once the packet is consumed, send the acknowledgment. This is known as a flow
control mechanism.
The above figure shows the working of the stop and wait protocol. If there is a sender and a receiver, the
sender sends a packet, known as a data packet, and will not send the second packet without receiving the
acknowledgment of the first one. The receiver sends the acknowledgment for the data packet it has
received. Once the acknowledgment is received, the sender sends the next packet. This process continues
until all the packets have been sent. The main advantage of this protocol is its simplicity, but it has some
disadvantages as well. For example, if there are 1000 data packets to be sent, they cannot all be sent at
once, because in the Stop and Wait protocol only one packet is sent at a time.
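A toy sketch of the stop-and-wait rule, with packet loss simulated at random (the loss rate and packet contents are made-up illustration values):

import random

def stop_and_wait(packets, loss_rate=0.3):
    # Send one packet at a time; resend it until its acknowledgment arrives.
    random.seed(1)
    for seq, data in enumerate(packets):
        while True:
            if random.random() > loss_rate:        # the packet and its ACK got through
                print("packet", seq, "acknowledged")
                break                               # only now may the next packet be sent
            print("packet", seq, "lost, retransmitting")

stop_and_wait(["p0", "p1", "p2"])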
Sliding Window Protocol
In this technique, each frame is sent with a sequence number. The sequence numbers are used to find
the missing data at the receiver end. The sliding window technique also uses the sequence numbers to
avoid duplicate data.
Types of Sliding Window Protocol
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is a data link
layer protocol that uses a sliding window method. In this, if any frame is corrupted or lost, all
subsequent frames have to be sent again.
The size of the sender window is N in this protocol. For example, in Go-Back-8 the size of the sender
window will be 8. The receiver window size is always 1.
If the receiver receives a corrupted frame, it discards it; the receiver does not accept a corrupted
frame. When the timer expires, the sender sends the correct frame again. The design of the Go-Back-N
ARQ protocol is shown below.
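A simplified sketch of the Go-Back-N idea (a window of 4 and a single lost frame are assumptions; timers and real ACK traffic are omitted):

def go_back_n(num_frames, window=4, lost={2}):
    # Send up to `window` frames; on a lost frame, go back and resend it
    # together with everything already sent after it (the receiver window is 1).
    pending = set(lost)
    base = 0
    while base < num_frames:
        end = min(base + window, num_frames)
        for seq in range(base, end):
            if seq in pending:
                print("frame", seq, "lost -> go back to", seq)
                pending.discard(seq)        # assume the retransmission succeeds
                break
            print("frame", seq, "delivered in order")
        else:
            base = end                      # the whole window was acknowledged
            continue
        base = seq                          # resend from the lost frame onwards

go_back_n(6)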
Selective Repeat ARQ is also known as Selective Repeat Automatic Repeat Request. It is a data link
layer protocol that uses the sliding window method. The Go-Back-N ARQ protocol works well when
errors are rare, but if frames are frequently corrupted, a lot of bandwidth is lost in resending them.
In that case the Selective Repeat ARQ protocol is used. In this protocol, the size of the sender window
is always equal to the size of the receiver window, and the size of the sliding window is always greater
than 1.
If the receiver receives a corrupt frame, it does not simply discard it; it sends a negative
acknowledgment to the sender. The sender resends that frame as soon as it receives the negative
acknowledgment; there is no waiting for a time-out to send that frame. The design of the Selective
Repeat ARQ protocol is shown below.
The example of the Selective Repeat ARQ protocol is shown below in the figure.
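As a rough sketch of how this differs from Go-Back-N, only the corrupted frames are retransmitted (the loss pattern is an assumption; buffering and window bookkeeping are simplified away):

def selective_repeat(num_frames, lost=(2, 4)):
    # The receiver NAKs only the corrupted frames and buffers the rest,
    # so the sender retransmits just those frames, not the whole window.
    pending = set(lost)
    transmissions = 0
    for seq in range(num_frames):
        transmissions += 1
        if seq in pending:
            print("frame", seq, "corrupted -> negative acknowledgment, resend frame", seq)
            pending.discard(seq)
            transmissions += 1              # only one extra transmission per lost frame
        print("frame", seq, "accepted")
    print(transmissions, "transmissions for", num_frames, "frames")

selective_repeat(6)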
Services Provided by the Data Link Layer
The primary service of the data link layer is to support error-free transmission. The physical layer
delivers the data from the sender's machine to the receiver's machine as raw bits. The data link layer
should detect and correct errors in the transmitted data.
The data link layer provides a well-defined service interface to the network layer. It handles
transmission errors, regulates the flow of data, and manages speed mismatches between sender and
receiver by offering several classes of service. It performs these tasks in the following ways −
Unacknowledged connectionless service − The source host sends independent frames to the
destination host without any acknowledgment scheme. No connection is established or
released, and frames lost to channel noise are not recovered.
Acknowledged connectionless service − This is used when the transmission medium is more
error-prone. Each frame sent between the two hosts is individually acknowledged, so the
sender knows that the frame has arrived correctly.
Acknowledged connection-oriented service − The data link layer provides this service to the
network layer by establishing a connection between the source and destination hosts before
any data transfer takes place.
Framing − The data link layer receives a raw bit stream from the physical layer, which may
not be error-free. It divides the bit stream into frames so that the data can be handed to the
network layer in manageable units.
Error Control − This includes numbering (sequencing) frames and sending control frames for
acknowledgment. A noisy channel can flip bits, drop bits from a frame, introduce spurious
bits into a frame, or lose entire frames.
Flow Control − Another fundamental issue in data link design is regulating the rate of data
communication between the source and destination hosts. If there is a mismatch between the
speed at which the source sends data and the speed at which the destination can receive it,
packets will be dropped at the receiver end.
Sequence Integrity − The data link layer preserves the sequence of data bits and delivers them
to the physical layer in the same order as received from the network layer. It supports
reliable transfer of data link service data units (DLSDUs) over the data link connections.