UNIT-II
INTERNETWORKING:
We believe that a variety of different networks will always be around, for the following
reasons.
First of all, the installed base of different networks is large. Nearly all personal computers
run TCP/IP. Many large businesses have mainframes running IBM's SNA. A substantial number
of telephone companies operate ATM networks. Some personal computer LANs still use Novell
NCP/IPX or AppleTalk. Finally, wireless is an up-and-coming area with a variety of protocols.
Second, as computers and networks get cheaper, the place where decisions get made
moves downward in organizations. Many companies have a policy to the effect that purchases
costing over a million dollars have to be approved by top management; purchases costing over
100,000 dollars have to be approved by middle management, but purchases under 100,000
dollars can be made by department heads without any higher approval. This can easily lead to the
engineering department installing UNIX workstations running TCP/IP and the marketing
department installing Macs with AppleTalk.
Third, different networks have radically different technology, so it should not be
surprising that as new hardware developments occur, new software will be created to fit the new
hardware.
Here we see a corporate network with multiple locations tied together by a wide area
ATM network. At one of the locations, an FDDI optical backbone is used to connect an Ethernet,
an 802.11 wireless LAN, and the corporate data center's SNA mainframe network.
The purpose of interconnecting all these networks is to allow users on any of them to
communicate with users on all the other ones and also to allow users on any of them to access
data on any of them.
4. Connectionless Internetworking
The alternative internetwork model is the datagram model. In this model, the only service
the network layer offers to the transport layer is the ability to inject datagrams into the subnet
and hope for the best. There is no notion of a virtual circuit at all in the network layer. This
model does not require all packets belonging to one connection to traverse the same sequence of
gateways. In the figure datagrams from host 1 to host 2 are shown taking different routes through
the internetwork. A routing decision is made separately for each packet, possibly depending on
the traffic at the moment the packet is sent. This strategy can use multiple routes and thus
achieve a higher bandwidth than the concatenated virtual-circuit model. On the other hand, there
is no guarantee that the packets arrive at the destination in order, assuming that they arrive at all.
Computer Networks_Unit-II
A connectionless internet
5. Tunneling
Handling the general case of making two different networks interwork is exceedingly
difficult. However, there is a common special case that is manageable. This case is where the
source and destination hosts are on the same type of network, but there is a different network in
between. Consider the example below.
The solution to this problem is a technique called tunneling. To send an IP packet to host
2, host 1 constructs the packet containing the IP address of host 2, inserts it into an Ethernet
frame addressed to the Paris multiprotocol router, and puts it on the Ethernet. When the
multiprotocol router gets the frame, it removes the IP packet, inserts it in the payload field of the
WAN network layer packet, and addresses the latter to the WAN address of the London
multiprotocol router. When it gets there, the London router removes the IP packet and sends it to
host 2 inside an Ethernet frame.
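The relaying steps above can be sketched in a few lines of Python. The frame and packet formats (and the router names) are simplified stand-ins for illustration, not real headers:

```python
# Toy sketch of tunneling: an IP packet crosses a WAN between two
# Ethernets by being carried as opaque payload inside a WAN packet.

def ethernet_frame(dst_mac, ip_packet):
    return {"dst": dst_mac, "payload": ip_packet}

def wan_packet(dst_wan_addr, ip_packet):
    return {"dst": dst_wan_addr, "payload": ip_packet}

# Host 1 builds an IP packet for host 2 and hands it to the Paris router.
ip_pkt = {"src_ip": "10.0.1.1", "dst_ip": "10.0.2.1", "data": "hello"}
frame = ethernet_frame("paris-router-mac", ip_pkt)

# Paris multiprotocol router: strip the Ethernet frame and tunnel the IP
# packet inside a WAN packet addressed to the London router.
tunneled = wan_packet("london-router-wan", frame["payload"])

# London router: extract the unchanged IP packet and deliver it on the
# destination Ethernet.
delivered = ethernet_frame("host2-mac", tunneled["payload"])
assert delivered["payload"] is ip_pkt  # the IP packet crosses the WAN untouched
```

The point of the sketch is that the IP packet itself is never modified in transit; only the outer envelope changes at each hop.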
6. Internetwork Routing
Routing through an internetwork is similar to routing within a single subnet, but with
some added complications. Consider, for example, the internetwork of below figure (a) in which
five networks are connected by six routers. Making a graph model of this situation is
complicated by the fact that every router can directly access (i.e., send packets to) every other
router connected to any network to which it is connected. This leads to the graph of figure (b).
7. Fragmentation
Each network imposes some maximum size on its packets. These limits have various causes,
among them:
1. Hardware (e.g., the size of an Ethernet frame).
2. Operating system (e.g., all buffers are 512 bytes).
3. Protocols (e.g., the number of bits in the packet length field).
4. Compliance with some (inter)national standard.
5. Desire to reduce error-induced retransmissions to some level.
6. Desire to prevent one packet from occupying the channel too long.
Basically, the only solution to the problem is to allow gateways to break up packets into
fragments, sending each fragment as a separate internet packet. Two opposing strategies exist
for recombining the fragments back into the original packet. The first strategy is to make
fragmentation caused by a ''small-packet'' network transparent to any subsequent networks
through which the packet must pass on its way to the ultimate destination. This option is shown
in figure (1).
The other, nontransparent, strategy is to refrain from recombining fragments at any intermediate gateway, reassembling only at the destination host. One way to number the fragments is to let the elementary fragment be a single bit or byte, with the fragment number then being the bit or byte offset within the original packet.
Fragmentation when the elementary data size is 1 byte. (a) Original packet, containing 10 data bytes.
(b) Fragments after passing through a network with maximum packet size of 8 payload bytes plus
header. (c) Fragments after passing through a gateway with a maximum payload size of 5 bytes.
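The figure's example can be reproduced with a short Python sketch. The 10-byte payload and the two maximum sizes come from the caption; representing a fragment as an (offset, bytes) pair is our own convention for illustration:

```python
def fragment(data, offset, max_payload):
    """Split (offset, data) into fragments of at most max_payload bytes,
    keeping each fragment's byte offset within the original packet."""
    frags = []
    while data:
        frags.append((offset, data[:max_payload]))
        offset += len(data[:max_payload])
        data = data[max_payload:]
    return frags

packet = b"ABCDEFGHIJ"            # 10 data bytes, as in the figure
stage1 = fragment(packet, 0, 8)   # network with 8-byte maximum payload
# stage1 == [(0, b'ABCDEFGH'), (8, b'IJ')]
stage2 = [f for off, d in stage1 for f in fragment(d, off, 5)]
# stage2 == [(0, b'ABCDE'), (5, b'FGH'), (8, b'IJ')]

# Because every fragment carries its absolute offset, the destination can
# reassemble no matter how many networks refragmented the packet.
reassembled = b"".join(d for _, d in sorted(stage2))
```

Note that the second network refragments the first 8-byte fragment without knowing or caring that it was itself a fragment.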
The glue that holds the whole Internet together is the network layer protocol, IP (Internet
Protocol).
1. The IP Protocol
2. IP Addresses
An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a
device to the Internet. The IPv4 addresses are universal in the sense that the addressing system
must be accepted by any host that wants to be connected to the Internet.
Address Space
An address space is the total number of addresses used by the protocol. If a protocol uses N bits to define an address, the address space is 2^N, because each bit can have two different values (0 or 1) and N bits can have 2^N combinations. IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than 4 billion).
Notations
There are two prevalent notations to show an IPv4 address: binary notation and dotted decimal
notation.
Binary Notation
In binary notation, the IPv4 address is displayed as 32 bits, grouped into four octets. Each octet is often referred to as a byte, so it is common to hear an IPv4 address referred to as a 32-bit address or a 4-byte address.
The following is an example of an IPv4 address in binary notation:
01110101 10010101 00011101 00000010
Dotted-Decimal Notation
To make the IPv4 address more compact and easier to read, Internet addresses are usually written
in decimal form with a decimal point (dot) separating the bytes. The following is the dotted
decimal notation of the above address:
117.149.29.2
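The conversion between the two notations is mechanical, as the following Python sketch shows (the function names are ours):

```python
def to_dotted_decimal(binary):
    """Convert a 32-bit binary IPv4 address, written as four
    space-separated octets, to dotted-decimal notation."""
    return ".".join(str(int(octet, 2)) for octet in binary.split())

def to_binary(dotted):
    """Convert dotted-decimal notation back to binary notation."""
    return " ".join(format(int(b), "08b") for b in dotted.split("."))

print(to_dotted_decimal("01110101 10010101 00011101 00000010"))  # 117.149.29.2
print(to_binary("117.149.29.2"))
```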
Classful Addressing
IPv4 addressing, at its inception, used the concept of classes. This architecture is called classful
addressing. In classful addressing, the address space is divided into five classes: A, B, C, D, and
E. Each class occupies some part of the address space.
Finding the classes in binary and dotted-decimal notation
The last column shows the mask in the form /n where n can be 8, 16, or 24 in classful addressing.
This notation is also called slash notation or Classless Interdomain Routing (CIDR) notation. The
notation is used in classless addressing.
Subnetting
During the era of classful addressing, subnetting was introduced. If an organization was granted
a large block in class A or B, it could divide the addresses into several contiguous groups and
assign each group to smaller networks (called subnets).
Supernetting
In supernetting, an organization can combine several class C blocks to create a larger range of
addresses. In other words, several networks are combined to create a supernetwork or a supernet.
An organization can apply for a set of class C blocks instead of just one.
Classless Addressing
To overcome address depletion and give more organizations access to the Internet, classless
addressing was designed and implemented.
Address Blocks
In classless addressing, when an entity, small or large, needs to be connected to the Internet, it is
granted a block (range) of addresses. The size of the block (the number of addresses) varies
based on the nature and size of the entity.
Restriction To simplify the handling of addresses, the Internet authorities impose three
restrictions on classless address blocks:
1. The addresses in a block must be contiguous, one after another.
2. The number of addresses in a block must be a power of 2 (1, 2, 4, 8, ...).
3. The first address must be evenly divisible by the number of addresses.
In IPv4 addressing, a block of addresses can be defined as x.y.z.t/n, in which x.y.z.t defines one
of the addresses and the /n defines the mask.
The first address in the block can be found by setting the rightmost 32 - n bits to 0s.
The last address in the block can be found by setting the rightmost 32 - n bits to 1s.
The number of addresses in the block can be found by using the formula 2^(32-n).
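These three rules can be checked with Python's standard ipaddress module, which applies exactly this arithmetic: zeroing the rightmost 32 - n bits gives the first (network) address, setting them to 1 gives the last, and the count is 2^(32-n). The block 205.16.37.32/28 is an example of our own choosing:

```python
import ipaddress

block = ipaddress.ip_network("205.16.37.32/28")
print(block.network_address)    # first address: 205.16.37.32
print(block.broadcast_address)  # last address:  205.16.37.47
print(block.num_addresses)      # 2**(32-28) = 16
```

Note that ipaddress rejects a block whose first address is not aligned on the block size, which is precisely restriction 3 above.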
Network Addresses
The first address in a block is normally not assigned to any device; it is used as the network
address that represents the organization to the rest of the world.
Each address in the block can be considered as a two-level hierarchical structure: the leftmost n
bits (prefix) define the network; the rightmost 32 - n bits define the host.
Address Allocation
The ultimate responsibility for address allocation is given to a global authority called the Internet Corporation for Assigned Names and Numbers (ICANN). It assigns a large block of addresses
to an ISP. Each ISP, in turn, divides its assigned block into smaller sub blocks and grants the sub
blocks to its customers. This is called address aggregation: many blocks of addresses are
aggregated in one block and granted to one ISP.
Any organization can use an address out of the reserved private-address ranges without permission from the Internet authorities. Everyone knows that these reserved addresses are for private networks. They are
unique inside the organization, but they are not unique globally. The site must have only one
single connection to the global Internet through a router that runs the NAT software.
Address Translation
All the outgoing packets go through the NAT router, which replaces the source address in the
packet with the global NAT address. All incoming packets also pass through the NAT router,
which replaces the destination address in the packet (the NAT router global address) with the
appropriate private address.
Using One IP Address: In its simplest form, a translation table has only two columns: the private address and the external address (the destination address of the packet). When the router
translates the source address of the outgoing packet, it also makes note of the destination
address-where the packet is going. When the response comes back from the destination, the
router uses the source address of the packet to find the private address of the packet.
Using a Pool of IP Addresses: Since the NAT router has only one global address, only one
private network host can access the same external host. To remove this restriction, the NAT
router uses a pool of global addresses. For example, instead of using only one global address
(200.24.5.8), the NAT router can use four addresses (200.24.5.8, 200.24.5.9, 200.24.5.10, and
200.24.5.11). In this case, four private network hosts can communicate with the same external
host at the same time because each pair of addresses defines a connection.
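A toy translation table with a pool of four global addresses might look like the following sketch. The class and its method names are hypothetical, and a real NAT also tracks port numbers and entry timeouts:

```python
class Nat:
    def __init__(self, pool):
        self.pool = list(pool)   # unused global addresses
        self.table = {}          # global addr -> (private addr, external addr)

    def outgoing(self, private, external):
        """Translate the source of an outgoing packet; return the global
        address that now represents this private host."""
        for g, entry in self.table.items():
            if entry == (private, external):
                return g                      # reuse the existing mapping
        g = self.pool.pop(0)                  # claim an address from the pool
        self.table[g] = (private, external)
        return g

    def incoming(self, global_addr):
        """Find the private destination for a returning packet."""
        return self.table[global_addr][0]

nat = Nat(["200.24.5.8", "200.24.5.9", "200.24.5.10", "200.24.5.11"])
g = nat.outgoing("172.18.3.1", "25.8.3.2")    # private host contacts 25.8.3.2
assert nat.incoming(g) == "172.18.3.1"        # the reply is routed back
```

With four pool addresses, four private hosts can talk to the same external host at once, as the text says; a fifth attempt would exhaust the pool.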
The query messages, which occur in pairs, help a host or a network manager get specific
information from a router or another host. For example, nodes can discover their neighbors.
Message Format
An ICMP message has an 8-byte header and a variable-size data section. Although the general
format of the header is different for each message type, the first 4 bytes are common to all.
Five types of errors are handled: destination unreachable, source quench, time exceeded, parameter problems, and redirection.
Query messages
Debugging Tools
There are several tools that can be used in the Internet for debugging. We can trace the route of a packet, for example. Two tools that use ICMP for debugging are ping and traceroute.
5. The target machine replies with an ARP reply message that contains its physical address. The
message is unicast.
6. The sender receives the reply message. It now knows the physical address of the target
machine.
7. The IP datagram, which carries data for the target machine, is now encapsulated in a frame
and is unicast to the destination.
Dynamic Address Allocation DHCP has a second database with a pool of available IP
addresses. This second database makes DHCP dynamic. When a DHCP client requests a
temporary IP address, the DHCP server goes to the pool of available (unused) IP addresses and
assigns an IP address for a negotiable period of time.
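The pool-based allocation can be sketched as follows. The class name, addresses, and lease times are illustrative only; a real DHCP server also handles renewals and the full DISCOVER/OFFER/REQUEST/ACK exchange:

```python
class DhcpPool:
    """Minimal sketch of DHCP-style dynamic allocation: leases come from
    a pool of unused addresses and expire after a negotiated period.
    Times are plain numbers (seconds)."""

    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}   # address -> (client_id, expiry_time)

    def request(self, client_id, now, lease_time=3600):
        self.reclaim(now)
        addr = self.free.pop(0)
        self.leases[addr] = (client_id, now + lease_time)
        return addr

    def reclaim(self, now):
        """Return expired leases to the pool of available addresses."""
        for addr, (_, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[addr]
                self.free.append(addr)

pool = DhcpPool(["192.168.1.10", "192.168.1.11"])
a = pool.request("laptop", now=0, lease_time=100)
b = pool.request("phone", now=50)
c = pool.request("tablet", now=200)   # laptop's lease expired; address reused
```

Because the laptop's lease has expired by time 200, the tablet is handed the same address the laptop had, which is exactly what makes the allocation dynamic.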
OSPF works by exchanging information between adjacent routers, which is not the same as
between neighboring routers. In particular, it is inefficient to have every router on a LAN talk to
every other router on the LAN. To avoid this situation, one router is elected as the designated
router. It is said to be adjacent to all the other routers on its LAN, and exchanges information
with them. Neighboring routers that are not adjacent do not exchange information with each
other. A backup designated router is always kept up to date to ease the transition should the primary designated router crash and need to be replaced immediately.
During normal operation, each router periodically floods LINK STATE UPDATE messages
to each of its adjacent routers. This message gives its state and provides the costs used in the
topological database. The flooding messages are acknowledged, to make them reliable. Each
message has a sequence number, so a router can see whether an incoming LINK STATE
UPDATE is older or newer than what it currently has. Routers also send these messages when a
line goes up or down or its cost changes.
The five kinds of messages are summarized below:
Policies are typically manually configured into each BGP router (or included using some
kind of script). They are not part of the protocol itself.
From the point of view of a BGP router, the world consists of ASes and the lines connecting them. Two ASes are considered connected if there is a line between border routers in each one.
Given BGP's special interest in transit traffic, networks are grouped into one of three categories.
The first category is the stub networks, which have only one connection to the BGP graph.
These cannot be used for transit traffic because there is no one on the other side. Then come the
multiconnected networks. These could be used for transit traffic, except that they refuse.
Finally, there are the transit networks, such as backbones, which are willing to handle third-
party packets, possibly with some restrictions, and usually for pay.
Instead of maintaining just the cost to each destination, each BGP router keeps track of
the path used. Similarly, instead of periodically giving each neighbor its estimated cost to each
possible destination, each BGP router tells its neighbors the exact path it is using.
As an example, consider the BGP routers shown in figure (a). In particular, consider F's
routing table. Suppose that it uses the path FGCD to get to D. When the neighbors give it routing
information, they provide their complete paths, as shown in figure (b).
8. IPv6
In 1990, IETF started work on a new version of IP, one which would never run out of addresses,
would solve a variety of other problems, and be more flexible and efficient as well. Its major
goals were:
1. Support billions of hosts, even with inefficient address space allocation.
2. Reduce the size of the routing tables.
3. Simplify the protocol, to allow routers to process packets faster.
4. Provide better security (authentication and privacy) than current IP.
5. Pay more attention to type of service, particularly for real-time data.
6. Aid multicasting by allowing scopes to be specified.
7. Make it possible for a host to roam without changing its address.
Advantages
The next-generation IP, or IPv6, has some advantages over IPv4 that can be summarized as
follows:
Larger address space
Better header format
Allowance for extension
Support for resource allocation
Support for more security
Packet Format
Extension Headers
The bottom four layers can be seen as the transport service provider, whereas the upper
layer(s) are the transport service user. This distinction of provider versus user has a
considerable impact on the design of the layers and puts the transport layer in a key position.
3. Berkeley Sockets
Another set of transport primitives is the socket primitives used in Berkeley UNIX for TCP. These primitives are widely used for Internet programming.
1. Addressing
2. Connection Establishment
To solve the problems in the transport layer, the three-way handshake was introduced. This establishment protocol does not require both sides to begin sending with the same sequence number, so it can be used with synchronization methods other than the global clock method. The
normal setup procedure when host 1 initiates is shown in figure (a). Host 1 chooses a sequence
number, x, and sends a CONNECTION REQUEST TPDU containing it to host 2. Host 2 replies
with an ACK TPDU acknowledging x and announcing its own initial sequence number, y.
Finally, host 1 acknowledges host 2's choice of an initial sequence number in the first data TPDU
that it sends.
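The message flow of figure (a) can be sketched as a short trace. The TPDU names follow the text; the trace representation itself is our own:

```python
def three_way_handshake(x, y):
    """Return the message trace for initial sequence numbers x (chosen by
    host 1) and y (chosen by host 2)."""
    trace = []
    trace.append(("CR",   {"seq": x}))             # host 1 -> host 2
    trace.append(("ACK",  {"seq": y, "ack": x}))   # host 2 -> host 1
    trace.append(("DATA", {"seq": x, "ack": y}))   # host 1 -> host 2
    return trace

for msg, fields in three_way_handshake(x=100, y=250):
    print(msg, fields)
```

The key property is visible in the trace: each side independently picks its own initial sequence number, and each number is explicitly acknowledged by the other side, which is what defeats the old-duplicate scenarios of figures (b) and (c).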
(a) Normal operation. (b) Old duplicate CONNECTION REQUEST appearing out of nowhere. (c)
Duplicate CONNECTION REQUEST and duplicate ACK.
3. Connection Release
There are two styles of terminating a connection: asymmetric release and symmetric
release. Asymmetric release is the way the telephone system works: when one party hangs up,
the connection is broken. Symmetric release treats the connection as two separate unidirectional
connections and requires each one to be released separately.
Asymmetric release is abrupt and may result in data loss. Consider the scenario of below figure.
After the connection is established, host 1 sends a TPDU that arrives properly at host 2. Then
host 1 sends another TPDU. Unfortunately, host 2 issues a DISCONNECT before the second
TPDU arrives. The result is that the connection is released and data are lost.
(a) Normal case of three-way handshake. (b) Final ACK lost. (c) Response lost. (d) Response lost
and subsequent DRs lost
4. Flow Control and Buffering
In some ways the flow control problem in the transport layer is the same as in the data
link layer, but in other ways it is different. The basic similarity is that in both layers a sliding
window or other scheme is needed on each connection to keep a fast transmitter from
overrunning a slow receiver. The main difference is that a router usually has relatively few lines,
whereas a host may have numerous connections. This difference makes it impractical to
implement the data link buffering strategy in the transport layer.
Even if the receiver has agreed to do the buffering, there still remains the question of the
buffer size. If most TPDUs are nearly the same size, it is natural to organize the buffers as a pool
of identically-sized buffers, with one TPDU per buffer, as in figure (a).
(a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular
buffer per connection
Another approach to the buffer size problem is to use variable-sized buffers, as in figure
(b). The advantage here is better memory utilization, at the price of more complicated buffer
management. A third possibility is to dedicate a single large circular buffer per connection, as in
figure (c). This system also makes good use of memory, provided that all connections are heavily
loaded, but is poor if some connections are lightly loaded.
The flow control mechanism must be applied at the sender to prevent it from having too
many unacknowledged TPDUs outstanding at once. If the network can handle c TPDUs/sec and the cycle time is r, then the sender's window should be c x r TPDUs. With a window of this size the sender
normally operates with the pipeline full. Any small decrease in network performance will cause
it to block.
5. Multiplexing
In the transport layer the need for multiplexing can arise in a number of ways. When a
TPDU comes in, some way is needed to tell which process to give it to. This situation is called upward multiplexing. In this figure, four distinct transport connections all use the same network
connection to the remote host.
6. Crash Recovery
If hosts and routers are subject to crashes, recovery from these crashes becomes an issue.
If the transport entity is entirely within the hosts, recovery from network and router crashes is
straightforward. A more troublesome problem is how to recover from host crashes. In particular,
it may be desirable for clients to be able to continue working when servers crash and then
quickly reboot. In an attempt to recover its previous status, the server might send a broadcast
TPDU to all other hosts, announcing that it had just crashed and requesting that its clients inform
it of the status of all open connections. Each client can be in one of two states: one TPDU
outstanding, S1, or no TPDUs outstanding, S0. Based on only this state information, the client
must decide whether to retransmit the most recent TPDU.
At first glance it would seem obvious: the client should retransmit if and only if it has an unacknowledged TPDU outstanding when it learns of the crash.
Three events are possible at the server: sending an acknowledgement (A), writing to the output process (W), and crashing (C). The three events can occur in six different orderings:
AC(W), AWC, C(AW), C(WA), WAC, and WC(A), where the parentheses are used to indicate that
neither A nor W can follow C. The table below shows all eight combinations of client and server
strategy and the valid event sequences for each one.
1. Introduction to UDP
The Internet protocol suite supports a connectionless transport protocol, UDP (User
Datagram Protocol). UDP provides a way for applications to send encapsulated IP datagrams without having to establish a connection.
UDP transmits segments consisting of an 8-byte header followed by the payload. The
two ports serve to identify the end points within the source and destination machines. When a
UDP packet arrives, its payload is handed to the process attached to the destination port. This
attachment occurs when the BIND primitive or something similar is used.
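The 8-byte header described above consists of four 16-bit big-endian fields: source port, destination port, length (header plus payload), and checksum. It can be built with Python's struct module; here the checksum is simply left at 0, which in UDP means "not computed":

```python
import struct

def udp_header(src_port, dst_port, payload):
    """Build the 8-byte UDP header for the given payload.
    Fields: source port, destination port, length, checksum (0 = none)."""
    length = 8 + len(payload)   # the header is always 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

segment = udp_header(5000, 53, b"query") + b"query"
assert len(segment) == 13       # 8-byte header + 5-byte payload
```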
In a certain sense, sending a message to a remote host and getting a reply back is much like calling a procedure: you supply parameters and you get back a result. This observation has led people to try to arrange request-reply interactions on networks to be cast in the form of procedure calls. Such an arrangement makes network applications much easier to program and more familiar to deal with.
When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is
suspended and execution of the called procedure takes place on 2. Information can be transported
from the caller to the callee in the parameters and can come back in the procedure result. No
message passing is visible to the programmer. This technique is known as RPC (Remote
Procedure Call) and has become the basis for many networking applications. Traditionally, the
calling procedure is known as the client and the called procedure is known as the server.
The idea behind RPC is to make a remote procedure call look as much as possible like a
local one. In the simplest form, to call a remote procedure, the client program must be bound with a small library procedure, called the client stub, that represents the server procedure in the client's address space. Similarly, the server is bound with a procedure called the server stub.
These procedures hide the fact that the procedure call from the client to the server is not local.
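The stub idea can be illustrated with a toy sketch in which the "network" is just a direct call to a server-side dispatcher. All the names here are invented for illustration; a real RPC system would marshal the request onto a socket instead:

```python
import json

def server_dispatch(message):
    """Stands in for the server stub: unpack the request, call the real
    procedure, and pack up the result."""
    request = json.loads(message)
    procedures = {"add": lambda a, b: a + b}
    result = procedures[request["proc"]](*request["args"])
    return json.dumps({"result": result})

def add(a, b):
    """Client stub for the remote 'add' procedure. To the caller it looks
    exactly like a local function; the marshalling is hidden inside."""
    reply = server_dispatch(json.dumps({"proc": "add", "args": [a, b]}))
    return json.loads(reply)["result"]

print(add(2, 3))   # the caller sees no message passing at all
```

The caller writes `add(2, 3)` exactly as for a local procedure, which is the whole point of the stub arrangement.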
A multimedia application feeds its audio and video streams into an RTP library, which is in user space along with the application. This library then multiplexes the
streams and encodes them in RTP packets, which it then stuffs into a socket. At the other end of
the socket, UDP packets are generated and embedded in IP packets. If the computer is on an
Ethernet, the IP packets are then put in Ethernet frames for transmission.
(a) The position of RTP in the protocol stack. (b) Packet nesting
The basic function of RTP is to multiplex several real-time data streams onto a single
stream of UDP packets. The UDP stream can be sent to a single destination or to multiple
destinations. Because RTP just uses normal UDP, its packets are not treated specially by the
routers unless some normal IP quality-of-service features are enabled. RTP has no flow control,
no error control, no acknowledgements, and no mechanism to request retransmissions.
The RTP header is illustrated in figure below. It consists of three 32-bit words and
potentially some extensions. The first word contains the Version field, which is already at 2.
RTP has a little sister protocol called RTCP (Real-time Transport Control Protocol).
It handles feedback, synchronization, and the user interface but does not transport any data. The
first function can be used to provide feedback on delay, jitter, bandwidth, congestion, and other
network properties to the sources. This information can be used by the encoding process to
increase the data rate when the network is functioning well and to cut back the data rate when
there is trouble in the network. By providing continuous feedback, the encoding algorithms can
be continuously adapted to provide the best quality possible under the current circumstances.
RTCP also handles interstream synchronization. The problem is that different streams may use
different clocks, with different granularities and different drift rates. RTCP can be used to keep
them in sync.
Finally, RTCP provides a way for naming the various sources. This information can be
displayed on the receiver's screen to indicate who is talking at the moment.
Some implementations of TCP pass the PUSH flag through to the application on the receiving side. Furthermore, if additional
PUSHes come in before the first one has been transmitted (e.g., because the output line is busy),
TCP is free to collect all the PUSHed data into a single IP datagram, with no separation between
the various pieces.
One last feature of the TCP service that is worth mentioning here is urgent data. When
an interactive user hits the DEL or CTRL-C key to break off a remote computation that has
already begun, the sending application puts some control information in the data stream and
gives it to TCP along with the URGENT flag. This event causes TCP to stop accumulating data
and transmit everything it has for that connection immediately.
The steps required to establish and release connections can be represented in a finite state
machine with the 11 states listed. In each state, certain events are legal. When a legal event
happens, some action may be taken. If some other event happens, an error is reported.
The states used in the TCP connection management finite state machine
8. TCP Transmission Policy
Window management in TCP is not directly tied to acknowledgements as it is in most
data link protocols.
(a) A fast network feeding a low-capacity receiver. (b) A slow network feeding a high-capacity receiver
(a) Probability density of acknowledgement arrival times in the data link layer (b) Probability density
of acknowledgement arrival times for TCP
TCP is faced with a radically different environment. The probability density function for the time
it takes for a TCP acknowledgement to come back looks more like figure (b) than figure (a).
Determining the round-trip time to the destination is tricky. Even when it is known, deciding on
the timeout interval is also difficult. If the timeout is set too short, say, T1, unnecessary
retransmissions will occur, clogging the Internet with useless packets. If it is set too long,
performance will suffer due to the long retransmission delay whenever a packet is lost.
The solution is to use a highly dynamic algorithm that constantly adjusts the timeout
interval, based on continuous measurements of network performance. For each connection, TCP
maintains a variable, RTT, that is the best current estimate of the round-trip time to the
destination in question. When a segment is sent, a timer is started, both to see how long the
acknowledgement takes and to trigger a retransmission if it takes too long. If the
acknowledgement gets back before the timer expires, TCP measures how long the
acknowledgement took, say, M. It then updates RTT according to the formula

RTT = αRTT + (1 - α)M

where α is a smoothing factor that determines how much weight is given to the old value. Typically α = 7/8.
Even given a good value of RTT, choosing a suitable retransmission timeout is a
nontrivial matter. Normally, TCP uses βRTT, but the trick is choosing β. In the initial
implementations, β was always 2, but experience showed that a constant value was inflexible
because it failed to respond when the variance went up.
The algorithm requires keeping track of another smoothed variable, D, the deviation. Whenever an acknowledgement comes in, the difference between the expected and observed values, |RTT - M|, is computed. A smoothed value of this is maintained in D by the formula

D = αD + (1 - α)|RTT - M|

where α may or may not be the same value used to smooth RTT. While D is not exactly the same as the standard deviation, it is good enough in practice. Most TCP implementations now use this algorithm and set the timeout interval to

Timeout = RTT + 4 x D
The choice of the factor 4 is somewhat arbitrary, but it has two advantages. First,
multiplication by 4 can be done with a single shift. Second, it minimizes unnecessary timeouts
and retransmissions because less than 1 percent of all packets come in more than four standard
deviations late.
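The estimator described above fits in a few lines of Python. The initial value and the sample measurement are made up for the example; only the update rules and α = 7/8 come from the text:

```python
class RttEstimator:
    """Smoothed RTT and deviation estimates, with timeout = RTT + 4*D."""

    def __init__(self, initial_rtt, alpha=7/8):
        self.alpha = alpha
        self.rtt = initial_rtt
        self.dev = 0.0

    def update(self, m):
        """Fold one acknowledgement measurement m into the estimates."""
        # Deviation is smoothed against the old RTT estimate.
        self.dev = self.alpha * self.dev + (1 - self.alpha) * abs(self.rtt - m)
        self.rtt = self.alpha * self.rtt + (1 - self.alpha) * m

    def timeout(self):
        return self.rtt + 4 * self.dev

est = RttEstimator(initial_rtt=100.0)
est.update(120.0)   # one slower-than-expected acknowledgement, in ms
# RTT becomes 7/8*100 + 1/8*120 = 102.5; D becomes 1/8*|100-120| = 2.5,
# so the timeout is 102.5 + 4*2.5 = 112.5
print(est.rtt, est.timeout())
```

A single slow acknowledgement nudges RTT up only slightly but inflates the deviation, and hence the timeout, much faster, which is exactly the behavior the fixed β = 2 rule lacked.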
One problem that occurs with the dynamic estimation of RTT is what to do when a
segment times out and is sent again. When the acknowledgement comes in, it is unclear whether
the acknowledgement refers to the first transmission or a later one. Guessing wrong can seriously
contaminate the estimate of RTT. One scientist made a simple proposal: do not update RTT on
any segments that have been retransmitted. Instead, the timeout is doubled on each failure until
the segments get through the first time. This fix is called Karn's algorithm.
The retransmission timer is not the only timer TCP uses. A second timer is the
persistence timer. It is designed to prevent the following deadlock. The receiver sends an
acknowledgement with a window size of 0, telling the sender to wait. Later, the receiver updates
the window, but the packet with the update is lost. Now both the sender and the receiver are
waiting for each other to do something. When the persistence timer goes off, the sender transmits
a probe to the receiver. The response to the probe gives the window size. If it is still zero, the
persistence timer is set again and the cycle repeats. If it is nonzero, data can now be sent.
A third timer that some implementations use is the keep-alive timer. When a connection
has been idle for a long time, the keep-alive timer may go off to cause one side to check whether
the other side is still there. If it fails to respond, the connection is terminated. This feature is
controversial because it adds overhead and may terminate an otherwise healthy connection due
to a transient network partition.
The last timer used on each TCP connection is the one used in the TIMED WAIT state
while closing. It runs for twice the maximum packet lifetime to make sure that when a connection is closed, all packets created by it have died off.
Frequently, the path from sender to receiver is heterogeneous. The first 1000 km might be
over a wired network, but the last 1 km might be wireless. Now making the correct decision on a
timeout is even harder, since it matters where the problem occurred. One solution, called indirect TCP, is to split the TCP connection into two separate connections. The first connection goes from the
sender to the base station. The second one goes from the base station to the receiver. The base
station simply copies packets between the connections in both directions.