Computer Network Notes

The document provides information on network architecture and layering. It discusses the OSI 7-layer model and the 4-layer Internet architecture. Each layer performs specific functions, with lower layers providing services to higher layers. Protocols are implemented at each layer to define rules for communication. Messages are encapsulated as they pass through layers, with headers added and stripped at each level.
Copyright
© Attribution Non-Commercial (BY-NC)

Unit I

Network architecture – layers – Physical links – Channel access on links – Hybrid multiple access
techniques - Issues in the data link layer - Framing – Error correction and detection – Link-level Flow
Control.

Network Architecture

A network architecture guides the design and implementation of networks. Two of the most widely
referenced architectures are the OSI architecture and the Internet architecture.

Layering

In the OSI architecture there are 7 layers, which can be combined into four basic layers as shown in the
figure below.

Example of a layered network system

Layering Characteristics

 Each layer relies on services from the layer below and exports services to the layer above
 Hides implementation details, so a layer can change without disturbing other layers

Abstraction

Abstraction hides the complexity of layering.


The idea of an abstraction is to define a unifying model that can capture some important aspect of the
system.
Abstractions naturally lead to layering, especially in network systems.
The following figure depicts a layered system with alternative abstractions available at a given layer.

Layered system with alternative abstractions available at a given layer

In the above figure, two layers of abstraction are sandwiched between the application program and the
underlying hardware. The services provided at the higher layers are implemented in terms of the services
provided by the lower layers.

The layer immediately above the hardware might provide host-to-host connectivity, and the next layer up,
built on the available host-to-host communication, is a process-to-process channel that provides support
for application-to-application communication services. As the figure shows, the process-to-process
channel is a combination of two abstract channels: a request/reply channel for sending and receiving
request or reply messages, and a message stream channel for sending and receiving the actual messages.
Features of Layering

• First, it decomposes the problem of building a network into more manageable components,
rather than implementing a monolithic piece of software
• Second, it provides a more modular design.

Protocol

 Building blocks of a network architecture


 Each protocol object has two different interfaces
 service interface: defines operations on this protocol
 peer-to-peer interface: defines messages exchanged with peer
 Term “protocol” is overloaded
 specification of peer-to-peer interface
 module that implements this interface
Each protocol defines two different interfaces as shown in below figure.

Service and peer interfaces

Protocol graph

A suite of protocols that makes up a network system is called a protocol graph. The following figure
illustrates a protocol graph for the hypothetical layered system described above.
Example of a protocol graph

In this example, suppose that the file access program on host 1 wants to send a message to its peer on
host 2 using the communication service offered by protocol RRP. In this case, the file application asks RRP
to send the message on its behalf. To communicate with its peer, RRP then invokes the services of HHP,
which in turn transmits the message to its peer on the other machine. Once the message has arrived at
protocol HHP on host 2, HHP passes the message up to RRP, which in turn delivers the message to the file
application.

Encapsulation

When one of the application programs sends a message to its peer by passing the message to protocol
RRP, RRP adds a header to the message. The header is a data structure that contains control information
and is usually attached to the front of the message. We say that the application’s data is encapsulated in
the new message created by protocol RRP. This process of encapsulation is then repeated at each level of
the protocol graph; for example, HHP encapsulates RRP’s message by attaching a header of its own.

Now assume that HHP sends the message to its peer over some network, then when the message arrives
at the destination host, it is processed in the opposite order: HHP first strips its header off the front of the
message, interprets it and passes the body of the message up to RRP, which removes its own header and
passes the body of the message up to the application program. The message passed up from RRP to the
application on host 2 is exactly the same message as the application passed down to RRP on host 1. This
whole process is illustrated in the following figure.
Example of how high-level messages are encapsulated inside of low-level messages
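The add-a-header-on-the-way-down, strip-it-on-the-way-up sequence can be sketched in a few lines of Python. RRP and HHP are the hypothetical protocols from the figure; the 4-byte text headers here are invented purely for illustration.

```python
# Sketch of encapsulation/decapsulation for the hypothetical RRP/HHP stack.
# The "RRP|"/"HHP|" header formats are invented for illustration only.

def rrp_send(data: bytes) -> bytes:
    return b"RRP|" + data          # RRP prepends its header

def hhp_send(rrp_msg: bytes) -> bytes:
    return b"HHP|" + rrp_msg       # HHP encapsulates RRP's whole message

def hhp_receive(frame: bytes) -> bytes:
    assert frame.startswith(b"HHP|")
    return frame[4:]               # strip HHP header, pass body up to RRP

def rrp_receive(msg: bytes) -> bytes:
    assert msg.startswith(b"RRP|")
    return msg[4:]                 # strip RRP header, deliver to application

wire = hhp_send(rrp_send(b"hello"))        # host 1: headers added going down
delivered = rrp_receive(hhp_receive(wire)) # host 2: headers stripped going up
print(delivered == b"hello")               # the application sees the original message
```

Note that the message delivered on host 2 is byte-for-byte the message the application passed down on host 1, exactly as the text describes.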

OSI Architecture

The ISO Open Systems Interconnection (OSI) architecture is illustrated in below figure which defines a
partitioning of network functionality into seven layers, where one or more protocols implement the
functionality assigned to a given layer.

• Starting at the bottom and working up, the physical layer handles the transmission of raw bits
over a communications link.
• The data link layer then collects a stream of bits into a larger aggregate called a frame.
• The network layer handles routing among nodes within a packet-switched network. At this layer,
the unit of data exchanged among nodes is typically called a packet rather than a frame.
• The lower three layers are implemented on all network nodes, including switches within the
network and hosts connected along the exterior of the network.
• The transport layer then implements what we have up to this point been calling a process-to-
process channel. Here, the unit of data exchanged is commonly called a message rather than a
packet or a frame.
• The session layer performs the synchronization.
• The presentation layer is concerned with the format of data exchanged between peers
• The application layer is where application programs run; these programs interact with the user
and accept messages from the user.

Physical Layer Responsibilities


Responsible for transmitting individual bits from one node to the next
1. Defines the characteristics of interfaces and transmission media
2. Defines the type of transmission media
3. To transmit the stream of bits, it must be encoded into signals; the physical layer defines
the representation of bits as signals.
4. Defines the transmission rate, i.e., the number of bits per second.
5. The sender’s and receiver’s clocks must be adjusted so that both sides use the same bit
rate.
Data Link Layer Responsibilities
Responsible for transmitting frames from one node to the next
1. Framing: Divides the stream of bits into frames.
2. Physical addressing: It adds the header information to the frame. The header
information is the address of sender and receiver.
3. Flow control: It provides a flow control mechanism that keeps the sender’s data rate
within what the receiver can handle.
4. Error control: It provides the mechanism to find the damaged /lost/duplication of data in
the transmission. The error control information is usually added to trailer part of the
message
5. Access control: It determines which device can use the medium when multiple devices
are involved in the transmission.
Network Layer Responsibilities
Source-to-destination delivery, possibly across multiple networks
1. Logical addressing: Adding the network address in the header.
2. Routing: Transmitting the packets to the correct destination in the network.
Transport Layer Responsibilities
Responsible for delivering a message from one process (running program) to another
1. Process-to-process delivery of the entire message: the message is delivered not only to
the correct destination host but also to the specific process on that host.
2. Port addressing: Inserting the port address in the header
3. Segmentation and reassembly: Dividing the message into segments and each segment
will have sequence number which is used by the transport layer at the receiver machine
for reassembling it.
4. Connection control: connectionless or connection-oriented
5. End-to-end flow control
6. End-to-end error control
Application Layer Responsibilities
Responsible for providing services to the user
1. Enables user access to the network
2. User interfaces and support for services such as
a. E-Mail
b. File transfer and access
c. Remote log-in
d. WWW

Internet Architecture

The Internet architecture, also sometimes called the TCP/IP architecture, is a four-layer model.
• At the lowest level are a wide variety of network protocols, denoted NET1, NET2, and so on. In
practice, these protocols are implemented by a combination of hardware (e.g., a network
adaptor) and software (e.g., a network device driver).
• The second layer consists of a single protocol—the Internet Protocol (IP). This is the protocol that
supports the interconnection of multiple networking technologies into a single, logical
internetwork.
• The third layer contains two main protocols—the Transmission Control Protocol (TCP) and the
User Datagram Protocol (UDP). TCP provides a reliable byte-stream channel, and UDP provides
an unreliable datagram delivery channel
• Running above the transport layer is a range of application protocols, such as FTP, TFTP (Trivial
File Transfer Protocol), Telnet (remote login), and SMTP (Simple Mail Transfer Protocol, or
electronic mail), that enable the interoperation of popular applications.

Advantages of using IP Arch. over OSI Arch.

• The Internet architecture does not imply strict layering. That is, the application is free to bypass
the defined transport layers and to directly use IP or one of the underlying networks
• Looking closely at the internet protocol graph, it has an hourglass shape—wide at the top, narrow
in the middle, and wide at the bottom. That is, IP serves as the focal point for the architecture—it
defines a common method for exchanging packets among a wide collection of networks.
• It has the ability to adapt rapidly to new user demands and changing technologies.

ISO – OSI Model Vs TCP/IP Model

OSI MODEL                                           TCP/IP MODEL

Seven layers: Physical layer, Data link layer,      Four layers: Network layer, IP layer,
Network layer, Transport layer, Session layer,      Transport layer, Application layer
Presentation layer, Application layer

Each layer defines a family of functions, and       Each layer defines a number of protocols,
the functions are interdependent                    and they are not dependent on one another

Widely used as a reference model for local          Used in the Internet
area networks

Links

Links are the transmission media that connect nodes to form a computer network.

Types of Links
The communication between the nodes is either based on a point-to-point model or a Multicast model.
In the point-to-point model, a message follows a specific route across the network in order to get from
one node to another. In the multicast model, on the other hand, all nodes share the same communication
medium and, as a result, a message transmitted by any node can be received by all other nodes. A part of
the message (an address) indicates for which node the message is intended. All nodes look at this address
and ignore the message if it does not match their own address.
Connection Types
Connections between devices may be classified into three categories:
1. Simplex. This is a unidirectional connection, i.e., data can only travel in one direction. Simplex
connections are useful in situations where a device only receives or only sends data (e.g., a printer).
2. Half-duplex. This is a bidirectional connection, with the restriction that data can travel in one direction
at a time.
3. Full-duplex. This is a bidirectional connection in which data can travel in both directions at once. A
full-duplex connection is equivalent to two simplex connections in opposite directions.

Issues in the data link layer

There are five problems that must be addressed before nodes can successfully exchange packets:
1. Encoding
2. Framing
3. Error detection
4. Reliable delivery
5. Media access control

Encoding:

NRZ
Non-return to zero (NRZ) maps the data value 1 onto a high signal and 0 onto a low signal.

 Problem: long runs of consecutive 1s or 0s
 A long low signal (0s) may be interpreted as no signal
 A long high signal (1s) leads to baseline wander
 The receiver is unable to recover the clock

NRZI
Non-return to zero inverted (NRZI): make a transition from the current signal level if the input bit is 1;
stay at the current level if the input bit is 0.

Manchester encoding: A transition occurs in the middle of each bit; a 0 is transmitted as a low-to-high
transition and a 1 as a high-to-low transition.
Differential Manchester: A transition at the beginning of the interval transmits a 0; no transition at the
beginning of the interval transmits a 1. The transition in the middle is always present.

4B/5B

 Problem: consecutive zeros


 Idea: Every 4 bits of data is encoded as a 5-bit code, with the 5-bit codes selected to have no
more than one leading 0 and no more than two trailing 0s (i.e., never more than three
consecutive 0s).
 Resulting 5-bit codes are then transmitted using the NRZI encoding. Achieves 80% efficiency.
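The NRZI rule (transition on 1, hold on 0) is easy to state as code. The following is a minimal Python sketch, not from any standard library; signal levels are modelled as 0/1 integers and the initial line level is an assumed parameter.

```python
# NRZI: make a transition (flip the signal level) for each 1 bit,
# keep the current level for each 0 bit.
def nrzi_encode(bits, initial_level=0):
    level, out = initial_level, []
    for b in bits:
        if b == 1:
            level ^= 1        # a transition encodes a 1
        out.append(level)     # no transition encodes a 0
    return out

def nrzi_decode(levels, initial_level=0):
    prev, out = initial_level, []
    for lvl in levels:
        out.append(1 if lvl != prev else 0)  # a level change means 1
        prev = lvl
    return out

data = [0, 1, 1, 0, 1, 0, 0, 1]
signal = nrzi_encode(data)
print(nrzi_decode(signal) == data)  # True: the encoding round-trips
```

Notice that a run of 1s produces constant alternation on the wire (solving the consecutive-1s problem), while a run of 0s still produces a flat signal, which is why 4B/5B limits consecutive 0s before applying NRZI.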

Framing

 Breaking a sequence of bits into frames


 Must determine first and last bit of the frame
 Typically implemented by network adapter
 Adapter fetches (deposits) frames out of (into) host memory
It is the network adaptor that enables the nodes to exchange frames.

When node A wishes to transmit a frame to node B, it tells its adaptor to transmit a frame from the node’s
memory. This results in a sequence of bits being sent over the link. The adaptor on node B then collects
together the sequence of bits arriving on the link and deposits the corresponding frame in B’s memory.
Recognizing exactly what set of bits constitutes a frame—that is, determining where the frame begins and
ends—is the central challenge faced by the adaptor.

Approaches
There are several ways to address the framing problem. Some of them are:
1. Byte Oriented: Special character to delineate frames, replace character in data stream
a. Sentinel approach
b. Byte counting approach
2. Bit Oriented: use a technique known as bit stuffing
3. Clock Based: fixed length frames, high reliability required

1. Byte-Oriented Protocols
A byte-oriented approach is exemplified by the BISYNC (Binary Synchronous Communication) protocol
developed by IBM. The BISYNC protocol illustrates the sentinel approach to framing; its frame format is
depicted in the following figure.
 Sentinel Approach
– PPP protocol uses 0x7e=01111110 as the flag byte to delimit a frame
– When a 0x7e is seen in the payload, it must be escaped to keep it from being seen as an
end of frame

The beginning of a frame is denoted by sending a special SYN (synchronization) character. The data
portion of the frame is then contained between special sentinel characters: STX (start of text) and ETX
(end of text). The SOH (start of header) field serves much the same purpose as the STX field. The frame
contains additional header fields that are used for, among other things, the link-level reliable delivery
algorithm.

The problem with the sentinel approach is that the ETX character might appear in the data portion of the
frame. BISYNC overcomes this problem by “escaping” the ETX character, preceding it with a DLE (data-
link-escape) character whenever it appears in the body of a frame.
The DLE character is also escaped (by preceding it with an extra DLE) in the frame body. This approach is
often called character stuffing because extra characters are inserted in the data portion of the frame.
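The DLE-escaping rule can be sketched as follows. This is a minimal illustration, not a full BISYNC framer: it only shows stuffing and unstuffing of the frame body, using single control bytes as stand-ins for DLE and ETX.

```python
DLE, ETX = b"\x10", b"\x03"  # data-link-escape and end-of-text sentinels

def char_stuff(body: bytes) -> bytes:
    out = bytearray()
    for byte in body:
        if bytes([byte]) in (DLE, ETX):
            out += DLE           # escape sentinel bytes appearing in the data
        out.append(byte)
    return bytes(out)

def char_unstuff(stuffed: bytes) -> bytes:
    out, escaped = bytearray(), False
    for byte in stuffed:
        if not escaped and bytes([byte]) == DLE:
            escaped = True       # the next byte is literal data
            continue
        out.append(byte)
        escaped = False
    return bytes(out)

body = b"ab" + ETX + b"cd" + DLE
print(char_unstuff(char_stuff(body)) == body)  # True: stuffing is reversible
```

Because every DLE and ETX inside the body gets an extra DLE in front of it, a receiver scanning for an unescaped ETX can never be fooled by data bytes.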

Point-to-Point Protocol (PPP) is similar to BISYNC in that it uses character stuffing.


The format for a PPP frame is given in Figure.

The special start-of-text character, denoted as the Flag field, is 01111110. The Address and Control fields
usually contain default values, and so are uninteresting. The Protocol field is used for demultiplexing. The
frame payload size can be negotiated, but it is 1500 bytes by default.
The Checksum field, used for error detection, is either 2 bytes (by default) or 4 bytes long.

 Byte counting approach


The number of bytes contained in a frame can be included as a field in the frame header.
DDCMP protocol uses this approach, as illustrated in the following figure

The COUNT field specifies how many bytes are contained in the frame’s body. One danger with this
approach is that a transmission error could corrupt the COUNT field, in which case the end of the frame
would not be correctly detected.
2. Bit Oriented approach
The High-Level Data Link Control (HDLC) protocol, standardized by ISO from IBM’s earlier SDLC protocol,
is an example of a bit-oriented protocol.
HDLC: High-Level Data Link Control
 Delineate frame with a special bit-sequence: 01111110
Its frame format is given in below figure.

Bit-oriented protocols use a technique known as bit stuffing.


Bit Stuffing: The delimiting bit pattern used is 01111110 and is called a flag. To avoid this bit pattern
occurring in user data, the transmitter inserts a 0 bit after every five consecutive 1 bits it finds. This is
called bit stuffing.
 Sender: any time five consecutive 1s have been transmitted from the body of the message,
insert a 0.
 Receiver: should five consecutive 1s arrive, look at next bit(s):
– if next bit is a 0: remove it
– if next bits are 10:end of frame
– if next bits are 11: error
Bit stuffing Example
 Original Data
– 001111111000011111100
 Bit Stuffed
– 00111110110000111110100
 Receiver
– 0011111011000011111010001111110
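The sender and receiver rules above can be sketched directly in Python; the input below is the bit string from the example. This sketch assumes a correctly stuffed input (it does not handle the end-of-frame and error cases of the receiver rule).

```python
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:          # five consecutive 1s have been transmitted...
            out.append("0")   # ...so insert a 0
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:          # the bit after five 1s is a stuffed 0
            i += 1            # skip (remove) the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "001111111000011111100"
stuffed = bit_stuff(data)
print(stuffed)                       # 00111110110000111110100
print(bit_unstuff(stuffed) == data)  # True
```

The output matches the “Bit Stuffed” line of the example; appending the flag 01111110 to the stuffed data gives the “Receiver” line.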

3. Clock-Based Framing
This approach to framing is used by the Synchronous Optical Network (SONET) standard.

Error Detection and Correction

Data can be corrupted during transmission. For reliable communication, errors must be detected and
corrected

Types of Error
 Single-Bit Error
 Burst Error

Single-Bit Error

 In a single-bit error, only one bit is changed: a 0 to a 1, or a 1 to a 0


Burst Error
 Two or more bits in the data unit are in error; the corrupted bits are not necessarily consecutive

The basic idea behind any error detection scheme is to add redundant information to a frame that can be
used to determine if errors have been introduced. In other words, error detection uses the concept of
redundancy, which means adding extra bits for detecting errors at the destination as shown in below
figure.

We say that the extra bits we send are redundant because they add no new information to the message.
Instead, they are derived directly from the original message using some well-defined algorithm. Both the
sender and the receiver know exactly what that algorithm is. The sender applies the algorithm to the
message to generate the redundant bits. It then transmits both the message and those few extra bits.
When the receiver applies the same algorithm to the received message, it should (in the absence of
errors) come up with the same result as the sender. It compares the result with the one sent to it by the
sender. If they match, it can conclude (with high likelihood) that no errors were introduced in the message
during transmission. If they do not match, it can be sure that either the message or the redundant bits
were corrupted, and it must take appropriate action, that is, discarding the message, or correcting it if
that is possible.

Error Detection methods

Parity Check

1. Simple-parity check
2. Two dimensional parity check
Simple-parity check

In this parity check, a parity bit is added to every data unit so that the total number of 1s is even (or odd
for odd-parity). The following figure illustrates this concept.

Suppose the sender wants to send the word world. In ASCII the five characters are coded as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent
11101110 11011110 11100100 11011000 11001001
Now suppose the word world is received by the receiver without being corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers (6, 6, 4, 4, 4). The data
are accepted.
Now suppose the word world is corrupted during transmission.
11111110 11011110 11101100 11011000 11001001
The receiver counts the 1s in each character and comes up with even and odd numbers (7, 6, 5, 4, 4).
The receiver knows that the data are corrupted, discards them, and asks for retransmission.

Performance

Simple parity check can detect all single-bit errors. It can detect burst errors only if the total number of
errors in each data unit is odd.
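The “world” example can be reproduced with a short sketch of even-parity generation and checking:

```python
def add_even_parity(codeword: str) -> str:
    # append a parity bit so the total number of 1s is even
    return codeword + str(codeword.count("1") % 2)

def check_even_parity(received: str) -> bool:
    return received.count("1") % 2 == 0

chars = ["1110111", "1101111", "1110010", "1101100", "1100100"]  # w o r l d
sent = [add_even_parity(c) for c in chars]
print(" ".join(sent))  # 11101110 11011110 11100100 11011000 11001001
print(all(check_even_parity(c) for c in sent))  # True: accepted

# A single corrupted bit (here in 'w') makes the 1-count odd and is caught:
print(check_even_parity("11111110"))  # False: discarded
```

Flipping two bits in the same character, however, would keep the count even, which is exactly the burst-error limitation described above.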

Two dimensional parity check

In two-dimensional parity check, a block of bits is divided into rows and a redundant row of bits is added
to the whole block.

Suppose the following block is sent:


10101001 00111001 11011101 11100111 10101010
However, it is hit by a burst noise of length 8, and some bits are corrupted.
10100011 10001001 11011101 11100111 10101010
When the receiver checks the parity bits, some of the bits do not follow the even-parity rule and the whole
block is discarded.
10100011 10001001 11011101 11100111 10101010
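The column-parity row in the example can be computed as follows. This sketch computes only the extra parity row over the block (the per-character parity bits are assumed to be already inside each row, as in the example).

```python
def two_d_parity(rows):
    # append a parity row: each bit makes its column's count of 1s even
    width = len(rows[0])
    parity = "".join(
        str(sum(int(r[i]) for r in rows) % 2) for i in range(width)
    )
    return rows + [parity]

block = ["10101001", "00111001", "11011101", "11100111"]
print(two_d_parity(block)[-1])  # 10101010, matching the transmitted block
```

At the receiver, recomputing the column parities over the corrupted block makes several columns violate the even-parity rule, so the whole block is discarded.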

CRC
 Parity checks are based on addition (counting 1s); CRC is based on binary division
 A sequence of redundant bits (a CRC or CRC remainder) is appended to the end of the data
unit
 These bits are later used in calculations to detect whether or not an error had occurred.
CRC Steps
• On sender’s end, data unit is divided by a predetermined divisor; remainder is the CRC
• When appended to the data unit, it should be exactly divisible by a second predetermined
binary number
• At receiver’s end, data stream is divided by same number
• If no remainder, data unit is assumed to be error-free
Deriving the CRC
• A string of n 0s is appended to the data unit, where n is one less than the number of bits in
the predetermined divisor
• New data unit is divided by the divisor using binary division; remainder is CRC
• CRC of n bits replaces appended 0s at end of data unit

CRC Generator function

CRC Checker function


Polynomial
The divisor used in the CRC algorithm, which is (n+1) bits in length, can also be considered as the
coefficients of a polynomial, called the generator polynomial, which is shown below.

The divisor yields an n-bit CRC remainder. For example, for the divisor 11001 the corresponding
polynomial is X^4 + X^3 + 1.
A polynomial is
 Used to represent CRC generator
 Cost effective method for performing calculations quickly

Standard CRC polynomials

Name     Polynomial                                                      Application

CRC-8    x^8 + x^2 + x + 1                                               ATM header

CRC-10   x^10 + x^9 + x^5 + x^4 + x^2 + 1                                ATM AAL

ITU-16   x^16 + x^12 + x^5 + 1                                           HDLC

ITU-32   x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 +        LANs
         x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

A polynomial is selected to have at least the following properties:


 It should not be divisible by X.
 It should not be divisible by (X+1).

The first condition guarantees that all burst errors of a length equal to the degree of polynomial are
detected. The second condition guarantees that all burst errors affecting an odd number of bits are
detected.

CRC Performance
 CRC can detect all single-bit errors
 CRC can detect all double-bit errors, provided the divisor contains at least three 1s
 CRC can detect any odd number of errors, provided the divisor is divisible by (X+1)
 CRC can detect all burst errors of length less than or equal to the degree of the polynomial
 CRC detects most larger burst errors with a high probability. For example, CRC-12 detects
99.97% of burst errors of length 12 or more.

Checksum

When the algorithm used to create the redundancy code is based on addition, the code is called a checksum.

The sender follows these steps:(Checksum generator)


1. The unit is divided into k sections, each of n bits.
2. All sections are added using one’s complement to get the sum.
3. The sum is complemented and becomes the checksum.
4. The checksum is sent with the data.
The receiver follows these steps(Checksum checker)
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one’s complement to get the sum.
3. The sum is complemented.
4. If the result is zero, the data are accepted; otherwise, they are rejected.

Checksum generator function


Suppose the following block of 16 bits is to be sent using a checksum of 8 bits.
10101001 00111001
The numbers are added using one’s complement
10101001
00111001
------------
Sum 11100010

Checksum 00011101

Now the pattern sent is 10101001 00111001 00011101

Checksum checker function


Now suppose the receiver receives the pattern sent, 10101001 00111001 00011101, without any
corruption. The receiver performs the checksum checker function to determine whether the received
pattern is corrupted.

When the receiver adds the three sections, it will get all 1s, which, after complementing, is all 0s and
shows that there is no error.
10101001
00111001
00011101
-----------
Sum 11111111
Complement 00000000 means that the pattern is OK.

Now suppose there is a burst error of length 5 introduced as


10101111 11111001 00011101
When the receiver adds the three sections, it gets
10101111
11111001
00011101
-----------
Partial Sum 1 11000101
Carry 1
-----------
Sum 11000110

Complement 00111001 means that the pattern is corrupted.
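The one’s-complement arithmetic in both worked examples can be reproduced with a small sketch (8-bit sections, carries wrapped back into the sum):

```python
def ones_complement_sum(sections, n=8):
    total = 0
    for s in sections:
        total += int(s, 2)
        if total >> n:                  # wrap any carry back into the sum
            total = (total & ((1 << n) - 1)) + 1
    return total

def checksum(sections, n=8):
    # complement of the one's-complement sum
    return ((1 << n) - 1) ^ ones_complement_sum(sections, n)

data = ["10101001", "00111001"]
cs = checksum(data)
print(format(cs, "08b"))  # 00011101, as in the generator example

# Receiver side: data plus checksum should complement to all zeros
print(checksum(data + [format(cs, "08b")]) == 0)  # True: pattern is OK

# The corrupted pattern from the burst-error example complements to 00111001
print(format(checksum(["10101111", "11111001", "00011101"]), "08b"))
```

A nonzero complement at the receiver, as in the last case, signals that the pattern is corrupted.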

Performance
 Detects all errors involving an odd number of bits, and most errors involving an even number
of bits
 If one or more bits of a segment are damaged and the corresponding bits of opposite value in a
second segment are also damaged, the sums of those columns will not change and the receiver
will not detect the problem.

Error Correction method

Error correction can be handled in two ways.

One is retransmission: when an error is discovered, the receiver asks the sender to retransmit the entire
data unit.
In the other, the receiver can use an error-correcting code, which automatically corrects certain errors.
One such technique is the Hamming code.
To correct an error, one has to know its exact position, i.e., exactly which bit is in error. For this, we have
to add some additional redundant bits. To calculate the number of redundant bits (r) required to correct
m data bits, we must find the relationship between the two. With m bits of data and r bits of redundancy
added to them, the length of the resulting code is m + r. To find the number of redundancy bits for m
data bits, the following formula is used:
2^r >= m + r + 1
For example, if the value of m is 7, the smallest r that satisfies this equation is 4:
2^4 = 16 >= 7 + 4 + 1 = 12
The following table shows some possible m values and the corresponding r values.

Also, it is important to know the locations of the r bits within the m + r bits. The r bits are placed in
positions 1, 2, 4, 8, ... (powers of 2). For example, if m = 7, then r must be 4 bits and the total number of
bits becomes 11 (m + r), in which the r bits are placed at positions 1, 2, 4, and 8, as shown below.
In a 7 bit data unit, the combinations of locations used to calculate r values are as follows:

r1: 1,3,5,7,9,11 locations


r2: 2, 3,6,7,10,11 locations
r4: 4,5,6,7 locations
r8: 8, 9, 10, 11 locations

Calculating r values
1. We place r bits in its appropriate location in the m+r length data unit.
2. We calculate the even parities for the various bit combinations

For example, for m = 7 the four parity values r1, r2, r4, and r8 are each computed as the even parity of
the position groups listed above.
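The two calculation steps can be sketched for the m = 7 case. This is an illustrative even-parity Hamming(11,7) sketch; the position groups are exactly the r1/r2/r4/r8 location lists given above.

```python
# Even-parity Hamming code for m = 7 data bits (r = 4 parity bits, 11 bits total).
# Positions are 1-based; parity bits sit at positions 1, 2, 4, 8.
def hamming_encode(data7):
    code = [0] * 12                     # index 0 unused; positions 1..11
    data_positions = [3, 5, 6, 7, 9, 10, 11]
    for pos, bit in zip(data_positions, data7):
        code[pos] = bit
    for r in (1, 2, 4, 8):
        covered = [p for p in range(1, 12) if p & r and p != r]
        code[r] = sum(code[p] for p in covered) % 2   # even parity over the group
    return code[1:]

def hamming_syndrome(code11):
    code = [0] + list(code11)
    syndrome = 0
    for r in (1, 2, 4, 8):
        covered = [p for p in range(1, 12) if p & r]
        if sum(code[p] for p in covered) % 2:
            syndrome += r                # the syndrome is the bad bit's position
    return syndrome

sent = hamming_encode([1, 0, 0, 1, 1, 0, 1])
received = sent[:]
received[5] ^= 1                        # corrupt the bit at position 6
print(hamming_syndrome(sent))      # 0 (no error)
print(hamming_syndrome(received))  # 6 (position of the error, so it can be flipped back)
```

Each parity group covers the positions whose binary representation contains that parity bit’s position, which is why adding up the failed groups points directly at the corrupted bit.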
Reliable Transmission

Even when error-correcting codes are used some errors will be too severe to be corrected. As a result,
some corrupt frames must be discarded. A link-level protocol that wants to deliver frames reliably must
somehow recover from these discarded (lost) frames. This is usually accomplished using a combination of
two fundamental mechanisms—
1. acknowledgments
2. timeouts
• An acknowledgment (ACK for short) is a small control frame that a protocol sends back to its
peer saying that it has received an earlier frame. By control frame we mean a header without any
data.
• If the sender does not receive an acknowledgment after a reasonable amount of time, then it
retransmits the original frame. This action of waiting a reasonable amount of time is called a
timeout.

Piggybacking: To improve the use of network bandwidth, an acknowledgment method known as


piggybacking is often used. In piggybacking, instead of sending a separate acknowledgment frame, the
receiver waits until it has a data frame to send to the sender and embeds the acknowledgment in that
data frame.
Thus the link bandwidth is utilized better, and the speed of data transmission increases.

Propagation delay is defined as the delay between transmission and receipt of packets between hosts.
Propagation delay can be used to estimate timeout period

The general strategy of using acknowledgments and timeouts to implement reliable delivery is sometimes
called automatic repeat request (ARQ). There are four different ARQ algorithms:
1. Stop-and-Wait ARQ
2. Sliding Window ARQ
3. Go back N ARQ
4. Selective Repeat ARQ

1. Stop and Wait ARQ

• The sender does not send the next frame until it is sure the receiver has received the last one
• The data frame/ACK frame sequence enables reliability; frames are numbered alternately 0
and 1
• Sequence numbers help avoid the problem of duplicate frames
• If the sender does not receive an acknowledgment after a reasonable amount of time, the
timeout occurs and it retransmits the original frame
a) Normal operation b) The original frame is lost c) The ACK is lost d) Timeout occurs

Disadvantage

• The link capacity cannot be utilized effectively, since only one data frame or ACK frame can be
sent at a time

2. Sliding Window

The sliding window algorithm works as follows.


1. The sender assigns a sequence number, denoted SeqNum, to each frame.
2. The sender maintains three variables:
a. The send window size, denoted SWS, gives the upper bound on the number of
outstanding (unacknowledged) frames that the sender can transmit;
b. LAR denotes the sequence number of the last acknowledgment received; and
c. LFS denotes the sequence number of the last frame sent.
3. The sender maintains the following invariant:
LFS − LAR ≤ SWS
This situation is illustrated in below figure

When an acknowledgment arrives, the sender moves LAR to the right, thereby allowing the
sender to transmit another frame. The sender associates a timer with each frame it transmits,
and it retransmits the frame should the timer expire before an ACK is received.
4. The receiver maintains the following three variables:
a. The receive window size, denoted RWS, gives the upper bound on the number of out-of-
order frames that the receiver is willing to accept;
b. LAF denotes the sequence number of the largest acceptable frame;
c. LFR denotes the sequence number of the last frame received.
5. The receiver also maintains the following invariant:
LAF − LFR ≤ RWS
6. If LFR < SeqNum ≤ LAF, then the frame with that SeqNum is accepted.
7. If SeqNum ≤ LFR or SeqNum > LAF, then the frame with that SeqNum is discarded.
This situation is illustrated in below figure

Operation
The sender window denotes the frames that have been transmitted but remain unacknowledged. This
window can vary in size, from empty to the entire range. The receiver window size is fixed. A receiver
window size of 1 means that frames must be received in transmission order; larger window sizes allow
the receiver to buffer that many frames received out of order.

1. When a frame with sequence number SeqNum arrives, the receiver takes the following action.
a. If SeqNum ≤ LFR or SeqNum > LAF, then the frame is outside the receiver’s window and
it is discarded.
b. If LFR < SeqNum ≤ LAF, then the frame is within the receiver’s window and it is
accepted. Now the receiver needs to decide whether or not to send an ACK. The
acknowledgement can be cumulative.
c. It then sets LFR = Sequence Number to Acknowledge and adjusts LAF = LFR + RWS.
The following figure illustrates the operation of sliding window.

The receiver can set RWS to whatever it wants. Two common settings are:
1. RWS = 1, implies that the receiver will not buffer any frames that arrive out of order
2. RWS = SWS implies that the receiver can buffer any of the frames the sender transmits.
It makes no sense to set RWS > SWS, since it is impossible for more than SWS frames to arrive out of order.
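The sender and receiver invariants above can be sketched as follows. The variable names (SWS, RWS, LAR, LFS, LFR, LAF) follow the text; the classes and the simulated frame flow are illustrative, and the receiver here only slides its window on in-order arrivals for brevity.

```python
# Minimal sketch of the sliding-window invariants.

class Sender:
    def __init__(self, sws):
        self.SWS, self.LAR, self.LFS = sws, -1, -1
    def can_send(self):
        return self.LFS - self.LAR < self.SWS      # invariant: LFS - LAR <= SWS
    def send(self):
        assert self.can_send()
        self.LFS += 1
        return self.LFS                             # SeqNum of the frame just sent
    def ack(self, seqnum):
        self.LAR = max(self.LAR, seqnum)            # cumulative ACK slides window right

class Receiver:
    def __init__(self, rws):
        self.RWS, self.LFR = rws, -1
    def accept(self, seqnum):
        LAF = self.LFR + self.RWS                   # invariant: LAF - LFR <= RWS
        if self.LFR < seqnum <= LAF:
            if seqnum == self.LFR + 1:              # in-order: slide window
                self.LFR = seqnum
            return True                             # within window: accepted
        return False                                # outside window: discarded

s, r = Sender(sws=3), Receiver(rws=3)
sent = [s.send() for _ in range(3)]    # frames 0, 1, 2 are outstanding
print(s.can_send())                    # False: window is full
r.accept(0); s.ack(0)
print(s.can_send())                    # True: the ACK opened the window
```

With SWS = 3, the third send fills the window; only an ACK (moving LAR right) lets the sender transmit again, which is the flow-control property described below.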

Advantages
1. Reliable transmission: The algorithm can be used to reliably deliver messages across an
unreliable network
2. Frame order: The sliding window algorithm preserves the order in which frames are
transmitted, since each frame carries a sequence number.
3. Flow control: The receiver not only acknowledges frames it has received, but also informs the
sender of how many frames it has room to receive
4. The link capacity can be utilized effectively since multiple frames can be transmitted at a time

Selective Repeat ARQ: upon encountering a faulty frame, the receiver requests the retransmission of
that specific frame. Since additional frames may have followed the faulty frame, the receiver needs to be
able to temporarily store these frames until it has received a corrected version of the faulty frame, so that
frame order is maintained.

Go Back N ARQ
A simpler method, Go-Back-N, has the transmitter retransmit the faulty frame as well as all
succeeding frames (i.e., all frames

Advantages

 The advantage of Selective Reject over Go-Back-N is that it leads to better throughput, because
only the erroneous frames are retransmitted.

 Go-Back-N has the advantage of being simpler to implement and requiring less memory.

Unit II

Medium access – CSMA – Ethernet – Token ring – FDDI - Wireless LAN – Bridges and Switches

Ethernet (802.3)

 most successful local area networking technology


 the Ethernet is a working example of the more general Carrier Sense Multiple Access with
Collision Detect (CSMA/CD) technology
 CSMA/CD: Ethernet’s Media Access Control (MAC) policy
o CS = carrier sense
 Send only if medium is idle
o MA = multiple access
o CD = collision detection
 Stop sending immediately if collision is detected
 Most popular packet-switched LAN technology
 Addresses:
–unique, 48-bit unicast address assigned to each adapter
–example: 8:0:2b:e4:b1:2
–an address of all 1s is the broadcast address
–the address is multicast if the first bit is 1
 Bandwidths: 10Mbps, 100Mbps, 1Gbps
 Max bus length: 2500m
 500m segments with 4 repeaters
 Bus and Star topologies are used to connect hosts
 Hosts attach to network via Ethernet transceiver or hub or switch
 Detects line state and sends/receives signals
 Hubs are used to facilitate shared connections
 All hosts on an Ethernet are competing for access to the medium
A transceiver—a small device directly attached to the tap—detects when the line is idle and drives the
signal when the host is transmitting. It also receives incoming signals. The transceiver is, in turn,
connected to an Ethernet adaptor, which is plugged into the host.

Repeater: Multiple Ethernet segments can be joined together by repeaters. A repeater is a device that
forwards digital signals, much like an amplifier forwards analog signals. Any signal placed on the Ethernet
by a host is broadcast over the entire network

Ethernet standards

10Base2: built from a thinner cable known as thin-net; a segment can be up to 200 m long
10Base5: built from a thick cable known as thick-net; a segment can be up to 500 m long
10BaseT: built from twisted pair cable (“T” stands for twisted pair); limited to under 100 m in length
The “10” in 10Base2 means that the network operates at 10 Mbps, “Base” refers to the fact that the cable
is used in a baseband system, and the “2” means that a given segment can be no longer than 200 m

Access Method: CSMA/CD

Carrier Sense: This protocol is applicable to a bus topology. Before a station can transmit, it listens to the
channel to see if any other station is already transmitting. If the station finds the channel idle, it attempts
to transmit; otherwise, it waits for the channel to become idle. If two or more stations find the channel
idle and simultaneously attempt to transmit, a collision occurs. When a collision occurs, each station
suspends transmission and re-attempts after a random period of time.
The use of a random wait period reduces the chance of the collision recurring. The following flow chart
depicts this technique.
If line is idle…
–send immediately
–upper bound message size of 1500 bytes
–minimum frame is 64 bytes (header + 46 bytes of data)

If line is busy…
–wait until idle and transmit immediately

If collision…
–send jam signal, and then stop transmitting frame
–delay for exponential Back off time and try again

Exponential Back off


–1st time: 0 or 51.2us
–2nd time: 0, 51.2, 102.4, or 153.6us
–nth time: k x 51.2us, for randomly selected k=0..2^n - 1
–give up after several tries (usually 16)

Collision Detection

 Suppose host A begins transmitting a frame at time t as shown in below figure a.

 It takes one link latency for the frame to reach host B. Thus it arrives at B at time of t+d which is
shown in figure b.

 Suppose an instant before host A’s frame arrives, B begins to transmit its own frame, which
collides with A’s frame as shown in figure c.
 This collision is detected by host B. Host B then sends a jamming signal known as a ‘runt
frame’, a combination of a 32-bit jamming sequence and 64 bits of preamble. B’s runt frame
arrives at A at time t+2d; that is, A sees the collision at time t+2d.

Exponential Back off strategy

Once the adaptor has detected the collision, it stops transmission, waits a certain amount of time, and
tries to transmit again. Each time it tries to transmit but fails, the adapter doubles the amount of time it
waits before trying again. This strategy of doubling the delay interval between successive transmission
attempts is known as ‘exponential back off’.
The adapter delays are,
–1st time: 0 or 51.2us
–2nd time: 0, 51.2, 102.4, or 153.6us
–nth time: k x 51.2us, for randomly selected k=0..2^n - 1
–give up after several tries (usually 16)
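The delays above can be computed directly. This sketch omits details of real adapters such as the cap on k after the 10th collision; the helper name is invented.

```python
import random

SLOT = 51.2  # microseconds: the Ethernet slot time at 10 Mbps

def backoff_delay(attempt, rng=random.Random(0)):
    """Delay before the n-th retransmission attempt: k * 51.2 us,
    with k drawn uniformly from 0 .. 2^n - 1 (the cap used by real
    adapters is omitted in this sketch)."""
    k = rng.randrange(2 ** attempt)
    return k * SLOT

# Possible delays per attempt:
for n in (1, 2, 3):
    print(n, [k * SLOT for k in range(2 ** n)])
```

After the first collision the adapter waits 0 or 51.2 us; after the second, one of four values; each further collision doubles the number of choices, which spreads contending stations apart in time.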

Frame format

 Preamble allows the receiver to synchronize with the signal. It is 10101010.


 Src addr and Dest addr are address of the source and destination which are 48 bit address.
 Type is a demultiplexing key. It tells the frame can be used by higher level protocol.
 Body is the field where it holds the data. Maximum of 1500 bytes of data can be stored. Minimum
data length should be 46 bytes so that the collision can be detected.
 CRC is a 32-bit field used for error detection
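As a rough illustration of the 32-bit CRC, Python's binascii.crc32 is based on the same polynomial family used by IEEE 802.3; the exact on-wire FCS differs in bit ordering and complementation, so this sketch only shows that a single-bit error changes the checksum.

```python
import binascii

# Illustrative only: shows that any bit flip changes the 32-bit CRC,
# not the exact on-wire Ethernet FCS encoding.
frame_body = b"hello, ethernet" + b"\x00" * 31      # padded to the 46-byte minimum
crc = binascii.crc32(frame_body)

corrupted = bytearray(frame_body)
corrupted[0] ^= 0x01                                 # flip a single bit
print(crc != binascii.crc32(bytes(corrupted)))       # True: the error is detected
```

The 46-byte minimum body (giving a 64-byte minimum frame) matters for collision detection, as noted above: a frame must be long enough that the sender is still transmitting when a collision signal returns.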

Advantages

1. It works better under lightly loaded conditions


2. No flow of control in Ethernet which is done by upper layers
3. New hosts can be added easily to the network
4. Very easy to administer and maintain
5. Relatively very cheap
6. In practice, it sees round-trip delays closer to 5us than 51.2us

Token Ring (IEEE 802.5)


• Specified by the IEEE 802.5 standard.
• Set of nodes are connected in a ring.
• Data always flows in one direction.
• Node receiving frames from upstream neighbor passes it to downstream neighbor.
• Distributed algorithm dictates when each node can transmit.
• All nodes see all frames.
• Destination saves a copy of the frame when it flows past.
• Token used to control who transmits.

Physical Properties

• Electromechanical relays are used to protect against failures -- a single node failure should not
cause the entire ring to fail.

– One approach: change one bit in token which transforms it into a “start-of-frame
sequence” and appends frame for transmission.
– Second approach: station claims token by removing it from the ring.
• Frame circles the ring and is removed by the transmitting station.
• Each station interrogates passing frame, if destined for station, it copies the frame into local
buffer.
• After station has completed transmission of the frame, the sending station should release the
token on the ring.

Access control: Token passing

• Token circulates around the ring.


• The token allows a host to transmit -- contains a special sequence of bits.
• When a node that wishes to send sees the token, it
– picks up the token
– inserts its own frame instead on the ring.
• When frame traverses ring and returns, the sender takes frame off and reinserts token.
• Transmitted frame contains dest addr.
• Each node looks at the frame -- if frame meant for the node, copy frame onto buffer, otherwise
just forwards it.
• Sending node responsible for removal of frame from ring and releases the token back on the ring

Token Holding Time

How long can a node hold onto the token? This is dictated by the token holding time or THT.
• If lightly loaded network you may allow a node to hold onto it as long as it wants -- very high
utilization.
• In 802.5, THT is specified to be 10 milliseconds.

Timers

Token Holding Time (THT)


–defined as upper limit on how long a station can hold the token
Token Rotation Time (TRT)
–defined as how long it takes the token to traverse the ring.
TRT <= Active Nodes X THT + Ring Latency

Target Token Rotation Time (TTRT)


–Agreed-upon upper bound on TRT
Measured TRT
- The time between successive arrivals of the token.
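A worked example of the bound TRT <= Active Nodes x THT + Ring Latency. Only the 10 ms THT comes from 802.5; the node count and ring latency here are illustrative.

```python
# Worked example of the bound: TRT <= ActiveNodes * THT + RingLatency.
THT_ms = 10.0            # 802.5 token holding time
active_nodes = 20        # assumed number of stations on the ring
ring_latency_ms = 0.5    # assumed propagation + per-station delay

worst_case_trt = active_nodes * THT_ms + ring_latency_ms
print(worst_case_trt)    # 200.5 ms: the longest a station may wait for the token
```

This is why THT is bounded: the worst-case wait for the token grows linearly with both THT and the number of active stations.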

Reliable Delivery in 802.5

• Two bits in a frame trailer A and C bits.


• Both are zero initially.
• When a station notices that it is the destination for a frame it sets the A bit.
• When it copies frame, it sets C bit.
• When sender sees:
1. The ‘A’ bit set to zero; it assumes that the recipient is absent / non-functional.
2. If ‘A’ bit =1 but ‘C’ bit = 0, it assumes that for some reason destination could not accept
frame and tries to retransmit.

Token Release

(Figure: (a) early release -- the token is released immediately behind the frame;
(b) delayed release -- the token is released after the frame has been removed from the ring)

• EARLY RELEASE: Release the token right after the frame; gives better utilization.
• DELAYED RELEASE: Release the token after the frame is removed from the ring.
The Monitor

• A special node that ensures the health of the ring.


• Any station can become the monitor.
• If monitor is healthy, it periodically announces its presence.
• If no message is seen for a while, a node assumes that the monitor has failed and tries to
become the monitor by transmitting a claim token. If more than one station sends a claim
token, the highest-address node wins.
Maintaining the token by the monitor
• Monitor ensures the presence of token. Token may get corrupted or lost
If no token seen for this time i.e,
TRT = Num_stations X THT + Ring Latency
The monitor assumes that the token may be lost; and it creates a new token.
Other Monitor Functions
• Check for corrupted frames
• Check for orphaned frames. An orphaned frame is a frame inserted by a node that dies
before removing it from the ring.
– The monitor uses a monitor bit, and sets this to 1 to see if the frame keeps on
circulating.
• It bypasses malfunctioning stations
Frame Format in 802.5

8 8 8 48 48 Variable 32 8 8
Start Access Frame Dest Src Body Checksum End Frame
delimiter control control addr addr delimiter status

Token Frame Format

Start of frame: indicates the start of the frame


Control: it identifies the frame type i.e. token or a data frame.
Dest addr: contains the physical address of the destination
Src addr: contains the physical address of the source.
Body: Each data frame carries up to 4500 bytes.
CRC: is used for error detection
End of frame: This represents end of the Token.
Status: FDDI FS field is similar to that of Token Ring. It is included only in data/Command frame and
consists of one and a half bytes.

FDDI

• Fiber Distributed Data Interface


• Similar to Token ring but uses -- optical fiber. The copper version is known as CDDI.
• In FDDI, token is absorbed by station and released as soon as it completes the frame
transmission
• FDDI uses a ring topology of multimode optical fiber transmission links operating at 100 Mbps to
span up to 200 kms and permits up to 500 stations.
• Data is encoded using a 4B/5B encoder
• Two rings instead of one; second used if first fails.
• Two independent rings transmitting data in opposite direction
o FDDI can tolerate single node or link failures.

In case of failure of a node or a fiber link, the ring is restored by wrapping the primary ring to the
secondary ring as shown in Figure b. If a station on the dual ring fails or is powered down, or if the cable
is damaged, the dual ring is automatically wrapped (doubled back onto itself) into a single ring. When the
ring is wrapped, the dual-ring topology becomes a single-ring topology. Data continues to be transmitted
on the FDDI ring without performance impact during the wrap condition. Network operation continues for
the remaining stations on the ring. When two or more failures occur, the FDDI ring segments into two or
more independent rings that are incapable of communicating with each other.

Single Access Stations

• FDDI is expensive for nodes to connect to two cables and so FDDI allows nodes to attach using a
single cable
– called Single Access Stations or SAS.
as shown in below figure.
(Figure: a Concentrator (DAS) on the dual ring, between its upstream and downstream
neighbors, attaching four SASs)


• A Concentrator is used to attach several SASs to a ring.

Access control: Timed Token Algorithm for FDDI

• Each node measures TRT.


• If measured TRT > TTRT,
– then Token is late, so that station does not transmit data
• If measured TRT < TTRT,
– then Token is early; station holds token for difference between TTRT and measured TRT
and can transmit data.
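The early/late token decision above can be sketched as a small function; the TTRT and measured-TRT values here are illustrative.

```python
# Sketch of the timed-token decision each FDDI node makes on token arrival.

def token_holding_allowance(measured_trt, ttrt):
    """Return how long the node may transmit asynchronous data, in ms.
    Token late (TRT >= TTRT): no asynchronous transmission.
    Token early (TRT < TTRT): hold the token for TTRT - TRT."""
    return max(0.0, ttrt - measured_trt)

TTRT = 8.0
print(token_holding_allowance(measured_trt=5.0, ttrt=TTRT))   # 3.0: token is early
print(token_holding_allowance(measured_trt=9.0, ttrt=TTRT))   # 0.0: token is late
```

Synchronous traffic (described next) is exempt from this check: it may be sent whether the token is early or late, within its own bound.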

Division into traffic classes

• Traffic is divided into two classes


1. Synchronous traffic
– Traffic is delay sensitive
– station transmits data whether token is late or early
– But synchronous cannot exceed one TTRT in one TRT
2. Asynchronous traffic
– Station transmits only if token is early

Token Maintenance

• Every node monitors ring for valid token. If operations are correct, a node must observe a
token or a data frame every so often.
Claim frames
• When the greatest idle time (= ring latency + frame transmission time) elapses with nothing
seen on the ring, a node suspects something is wrong and sends a “claim frame”.
• Nodes then bid (propose) for the TTRT using claim frames.
The bidding process
• A node can send a claim frame without having the token when it suspects failure
• If claim frame came back, node knows that its TTRT bid is the lowest. And now it is
responsible for inserting token on the ring.
• When a node receives a claim frame, it checks to see if the TTRT bid is lower than its own.
• If yes, it resets local definition of TTRT and simply forwards the claim frame.
• Else, it removes the claim frame and enters the bidding process
• Put its own claim frame on ring.
• When there are ties, highest address wins.
FDDI Analysis

• In the worst case:


– First async. traffic use one TTRT worth of time.
– Next sync. traffic use one TTRT worth of time.
So, TRT at a node = 2 * TTRT
• It is important to note that if Sync. traffic was transmitted first and used TTRT, no async. traffic
can be sent.

Difference between Token Ring and FDDI

Token Ring FDDI


• Shielded twisted pair • Optical Fiber
• 4, 16 Mbps • 100 Mbps
• No reliability specified • Reliability specified (dual ring)
• Differential Manchester • 4B/5B encoding
• Centralized clock • Distributed clocking
• Access control: Token Passing • Access control: Timed Token algorithm
• It uses delayed release of token • Early release of token is used

Wireless LAN - IEEE 802.11

802.11 was designed to run over three different physical media—two based on spread spectrum radio and
one based on diffused infrared.
The idea behind spread spectrum is to spread the signal over a wider frequency band than normal, so as
to minimize the impact of interference from other devices

Frequency Hopping Spread Spectrum (FHSS)

 Frequency hopping is a spread spectrum technique that involves transmitting the signal over a
random sequence of frequencies; that is, first transmitting at one frequency, then a second, then
a third, and so on.
 The receiver uses the same algorithm as the sender—and initializes it with the same seed—and
hence is able to bound with the transmitter to correctly receive the frame.

Direct Sequence Spread Spectrum (DSSS)

 The DSSS encoder spreads the data across a broad range of frequencies using a mathematical
key.
 The receiver uses the same key to decode the data.
 It sends redundant copies of the encoded data to ensure reception.

Infrared (IR)

The infrared physical layer uses infrared light to transmit binary data with a specific modulation
technique; 802.11 infrared uses 16-pulse position modulation (PPM).

Access control: CSMA/CA(Carrier Sense Multiple Access /Collision Avoidance)

Ideally, a wireless protocol would follow exactly the same algorithm as the Ethernet—wait until the link
becomes idle before transmitting and back off should a collision occur. In a wireless environment,
however, this does not work well, as the hidden and exposed node problems below show.

Consider the situation depicted in figure, where each of four nodes is able to send and receive signals
that reach just the nodes to its immediate left and right. For example, B can exchange frames with
A and C but it cannot reach D, while C can reach B and D but not A.
• a carrier sensing scheme is used.
• a node wishing to transmit data has to first listen to the channel for a predetermined amount of
time to determine whether or not another node is transmitting on the channel within the wireless
range. If the channel is sensed "idle," then the node is permitted to begin the transmission
process. If the channel is sensed as "busy," the node defers its transmission for a random
period of time. This is the essence of both CSMA/CA and CSMA/CD. In CSMA/CA, once the
channel is clear, a station sends a signal telling all other stations not to transmit, and then sends
its packet.

Assume that node A has data to transfer to node B. Node A initiates the process by sending a Request to
Send frame (RTS) to node B. The destination node (node B) replies with a Clear to Send frame (CTS).
After receiving CTS, node A sends data. After successful reception, node B replies with an
acknowledgement frame (ACK). If node A has to send more than one data fragment, it has to wait a
random time after each successful data transfer and compete with adjacent nodes for the medium using
the RTS/CTS mechanism.

To sum up, a successful data transfer (A to B) consists of the following sequence of frames:

• “Request To Send” frame (RTS) from A to B
• “Clear To Send” frame (CTS) from B to A
• “Data frame” (Data) from A to B
• Acknowledgement frame (ACK) from B to A

The following flow graph explains the CSMA/CA technique

Hidden nodes problem

Suppose both A and C want to communicate with B and so they each send it a frame. A and C are
unaware of each other since their signals do not carry that far. These two frames collide with each other
at B, but A and C are not aware of this collision. A and C are said to be hidden nodes with respect to each
other.

Solution for the hidden node problem


 When node A wants to send a packet to node B
 Node A first sends a Request-to-Send (RTS) to B
 On receiving RTS
 Node B responds by sending a Clear-to-Send (CTS), provided node B is able to receive the packet
 When a node C sees a CTS, it should keep quiet for the duration of the transfer

Exposed node problem

A related problem, called the exposed node problem, occurs under the following circumstances.
 B talks to A
 C wants to talk to D
 C senses channel and finds it to be
busy
 So, C stays quiet

B is sending to A in figure. Node C is aware of this communication because it hears B’s transmission. It
would be a mistake for C to conclude that it cannot transmit to anyone just because it can hear B’s
transmission.

Solution for Exposed Terminal Problem

 Sender transmits Request to Send (RTS)
 Receiver replies with Clear to Send (CTS)
 If neighbors
 see CTS -- stay quiet
 see RTS but no CTS -- O.K. to transmit

Reliability

 When node B receives a data packet from node A, node B sends an Acknowledgement (ACK)

 If node A fails to receive an ACK


 Retransmit the packet

Frame Format
The frame contains the source and destination node addresses, each of which is 48 bits long; up to 2312
bytes of data; and a 32-bit CRC. The Control field contains three subfields of interest (not shown),
including a 6-bit Type field that indicates whether the frame carries data or is an RTS or CTS frame.
Addr1 identifies the target node, Addr2 identifies the source node, and Addr3 identifies the intermediate
destination.

Switch

–forwards packets from input port to output port


–port selected based on address in packet header
–adding more hosts will not deteriorate older connections

Advantages
–it covers large geographic area (tolerate latency)
–it supports large numbers of hosts (scalable bandwidth)

Types

I. Datagram switching
II. Virtual Circuit switching
III. Source Routing switching

I Datagram Switching
 No connection setup phase
 Each packet forwarded independently
 Sometimes called connectionless model
 Packets may follow different paths to reach their destination
 Receiving station may need to reorder
 Switches decide the route based on the destination address in the packet
 Analogy: postal system
 Each switch maintains a forwarding table

Switching table for switch2


II Virtual Circuit Switching
 Sometimes called connection-oriented model
 Relationship between all packets in a message or session is preserved
 Single route is chosen between sender and receiver at beginning of session
 Call setup establishes virtual circuit; call teardown deletes the virtual circuit
 All packets travel the same route in order
 Approach is used in WANs, Frame Relay, and ATM
 Analogy:phone call
 Each switch maintains a VC table
 The VC table in a single switch contains
o a Virtual circuit identifier (VCI)
o an incoming interface on which packets for this VC arrives
o an outgoing interface on which packets for this VC leave
o potentially different VCI for outgoing packets
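A VC table of the shape described above can be sketched as a Python dictionary; the port numbers and VCIs here are invented.

```python
# Sketch of a per-switch virtual-circuit table: the key is
# (incoming interface, incoming VCI) and the value is
# (outgoing interface, outgoing VCI).
vc_table = {
    (2, 5): (1, 11),   # arrive on port 2 with VCI 5 -> leave on port 1 with VCI 11
    (3, 7): (0, 5),    # arrive on port 3 with VCI 7 -> leave on port 0 with VCI 5
}

def forward(in_port, in_vci, payload):
    out_port, out_vci = vc_table[(in_port, in_vci)]   # one table lookup per packet
    return (out_port, out_vci, payload)               # the VCI is rewritten hop by hop

print(forward(2, 5, "data"))   # (1, 11, 'data')
```

Because the VCI is only meaningful per link, each switch rewrites it on the way out; the entries are installed during call setup and removed at teardown.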

III Source Routing

A third approach to switching that uses neither virtual circuits nor conventional datagrams is known as
source routing. The name derives from the fact that all the information about network topology that is
required to switch a packet across the network is provided by the source host. Assign a number to each
output of each switch and to place that number in the header of the packet. The switching function is then
very simple: For each packet that arrives on an input, the switch would read the port number in the
header and transmit the packet on that output. There will be more than one switch in the path between
the sending and the receiving host. In such a case the header for the packet needs to contain enough
information to allow every switch in the path to determine which output the packet needs to be placed on.
In this example, the packet needs to traverse three switches to get from host A to host B. At switch 1, it
needs to exit on port 1, at the next switch it needs to exit at port 0, and at the third switch it needs to exit
at port 3. Thus, the original header when the packet leaves host A contains the list of ports (3, 0, 1),
where we assume that each switch reads the rightmost element of the list. To make sure that the next
switch gets the appropriate information, each switch rotates the list after it has read its own entry. Thus,
the packet header as it leaves switch 1 en route to switch 2 is now (1, 3, 0); switch 2 performs another
rotation and sends out a packet with (0, 1, 3) in the header.
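The header rotation described above can be sketched directly; this reproduces the (3, 0, 1) example from the text.

```python
# Each switch reads the rightmost port number, forwards the packet on
# that port, and rotates the list so the next switch sees its own entry.
def switch_step(header):
    out_port = header[-1]                   # read the rightmost entry
    rotated = [out_port] + header[:-1]      # rotate for the next switch
    return out_port, rotated

header = [3, 0, 1]                          # ports for switches 1, then 2, then 3
for switch in (1, 2, 3):
    port, header = switch_step(header)
    print(f"switch {switch}: exit port {port}, header becomes {header}")
# switch 1: exit port 1, header becomes [1, 3, 0]
# switch 2: exit port 0, header becomes [0, 1, 3]
# switch 3: exit port 3, header becomes [3, 0, 1]
```

A convenient side effect of rotating rather than stripping entries is that the header arrives at host B containing the full port list again, which B could reverse to reply along the same path.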
Bridges and Extended LANs
A class of switches that is used to forward packets between shared-media LANs such as Ethernets. Such
switches are sometimes known by the name of LAN switches; historically they have also been referred to
as bridges. Operate in both physical and data link layers

LANs have physical limitations (e.g., 2500 m). A bridge is used to connect two or more LANs, as shown
below. It uses the ‘store and forward’ technique.

Extended LANs

a collection of LANs connected by one or more bridges is usually said to form an extended LAN.

Learning Bridges/Transparent bridge

A bridge maintains a forwarding table to forward the packets it receives. The forwarding table contains
two fields: a host address field, and a field storing the port number of the bridge on which that host is
connected. For example,
Each packet carries a global address, and the bridge decides which output to send a frame on by looking
up that address in the table. The bridge inspects the source address of every frame it receives and records
that address, with the arrival port, in the table. When a frame is received, the bridge reads the frame’s
destination address and looks it up in the forwarding table. If the destination address is in the table, the
bridge forwards the frame on the output port recorded for that host. If the destination address is not in
the table, the bridge floods the frame out all of its other ports, and it uses the frame’s source address to
update the table. In this way the bridge learns the table entries it uses to decide whether to forward or
discard each frame, and it keeps the table up to date.
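The learning behavior can be sketched with a dictionary as the forwarding table; the FLOOD marker and the function name are invented for this sketch, and table aging is omitted.

```python
# Minimal learning-bridge sketch: the forwarding table maps a host
# address to the bridge port it was last seen on. FLOOD stands in for
# "send out every port except the one the frame arrived on".
FLOOD = "flood"
table = {}

def handle_frame(src, dst, in_port):
    table[src] = in_port                    # learn: src was seen on in_port
    out = table.get(dst, FLOOD)             # known destination? else flood
    if out == in_port:
        return "discard"                    # destination is on the same segment
    return out

print(handle_frame("A", "B", in_port=1))    # flood: B not yet learned
print(handle_frame("B", "A", in_port=2))    # 1: A was learned above
print(handle_frame("A", "B", in_port=1))    # 2: B is now known
```

Real bridges also time out entries so that a host that moves to another port is eventually relearned.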

Learning bridge example


Spanning Trees

• It ensures the topology has no loops
• Spanning tree: a sub-graph that covers all vertices but contains no cycles
• Links not in the spanning tree do not forward frames

Constructing a Spanning Tree


• Need a distributed algorithm
– Switches cooperate to build the spanning tree

Key ingredients of the algorithm


• Switches need to elect a “root”
– The switch with the smallest
identifier
• Each switch identifies if its interface is
on the shortest path from the root
– And it exclude from the tree if
not
• Messages (Y, d, X)
– From node X
– Claiming Y is the root
– And the distance is d
Steps in Spanning Tree Algorithm

• Initially, each switch thinks it is the root


– Switch sends a message out every interface
– … identifying itself as the root with distance 0
– Example: switch X announces (X, 0, X)
• Switches update their view of the root
– Upon receiving a message, check the root id
– If the new id is smaller, start viewing that switch as root
• Switches compute their distance from the root
– Add 1 to the distance received from a neighbor
– Identify interfaces not on a shortest path to the root
– … and exclude them from the spanning tree
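The per-switch update rule above can be sketched as follows. This simplification keeps only the (root, distance) comparison and omits per-port state and sender-ID tie-breaking; the switch IDs match the Switch #4 example that follows.

```python
# Sketch of how one switch updates its view on receiving a
# spanning-tree message (root, distance, sender).

class Switch:
    def __init__(self, ident):
        self.id = ident
        self.root, self.dist = ident, 0     # initially: "I am the root"

    def receive(self, root, dist, sender):
        """Adopt the message if it names a smaller root, or a shorter
        path to the same root; return the message this switch now sends."""
        if (root, dist + 1) < (self.root, self.dist):
            self.root, self.dist = root, dist + 1
        return (self.root, self.dist, self.id)

s4 = Switch(4)
print(s4.receive(2, 0, 2))   # (2, 1, 4): switch 4 now sees 2 as root, 1 hop away
print(s4.receive(2, 1, 7))   # (2, 1, 4): the longer path via 7 is ignored
print(s4.receive(1, 2, 2))   # (1, 3, 4): switch 1 is an even smaller root
```

The tuple comparison captures the two rules above in one step: a smaller root ID always wins, and for the same root a shorter distance wins.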

Example From Switch #4’s Viewpoint

• Switch #4 thinks it is the root


– Sends (4, 0, 4) message to 2
and 7
• Then, switch #4 hears from #2
– Receives (2, 0, 2) message
from 2
– … and thinks that #2 is the
root
– And realizes it is just one hop
away
• Then, switch #4 hears from #7
– Receives (2, 1, 7) from 7
– And realizes this is a longer
path
– So, prefers its own one-hop
path
– And removes 4-7 link from the
tree

• Switch #2 hears about switch #1
– Switch 2 hears (1, 1, 3) from 3
– Switch 2 starts treating 1 as root
– And sends (1, 2, 2) to neighbors
• Switch #4 hears from switch #2
– Switch 4 starts treating 1 as root
– And sends (1, 3, 4) to neighbors
• Switch #4 hears from switch #7
– Switch 4 receives (1, 3, 7) from 7
– And realizes this is a longer path
– So, prefers its own three-hop path
– And removes 4-7 link from the tree

Unit III

Circuit switching vs. packet switching / Packet switched networks – IP – ARP – RARP – DHCP – ICMP –
Queuing discipline – Routing algorithms – RIP – OSPF – Subnetting – CIDR – Interdomain routing – BGP
– Ipv6 – Multicasting – Congestion avoidance in network layer

Circuit switching Vs Packet switching


The phases of circuit switching

• Dedicated communication path between two stations


• Three phases
— Establish
— Transfer
— Disconnect
• Must have switching capacity and channel capacity to establish connection
• Must have intelligence to work out routing

The pros and cons of circuit switching


• Inefficient
— Channel capacity dedicated for duration of connection
— If no data, capacity wasted
• Set up (connection) takes time
• Once connected, transfer is transparent

• Developed for voice traffic (phone)

Types of packet switching techniques

• Datagram
• Virtual circuit

Datagram Packet Switching

• Each packet treated independently


• Packets can take any practical route
• Packets may arrive out of order
• Packets may go missing
• Up to receiver to re-order packets and recover from missing packets
• no call setup at network layer
Virtual Circuit Packet Switching

• Preplanned route established before any packets sent


• Call request and call accept packets establish connection (handshake)
• Each packet contains a virtual circuit identifier instead of destination address
• No routing decisions required for each packet
• Clear request to drop circuit
• Not a dedicated path

• used to setup, maintain teardown VC


• used in ATM, frame-relay, X.25
• not used in today’s Internet

Compare and contrast datagram and virtual circuit approaches for packet switching

• Virtual circuits
— Network can provide sequencing and error control
— Packets are forwarded more quickly
• No routing decisions to make
— Less reliable
• Loss of a node loses all circuits through that node
• Datagram
— No call setup phase
• Better if few packets
— More flexible
• Routing can be used to avoid congested parts of the network
Internetworking

An internetwork is an interconnected collection of networks. An internetwork is often referred to as a


“network of networks” because it is made up of lots of smaller networks. In this figure, we see Ethernets,
an FDDI ring, and a point-to-point link.

Each of these is a single-technology network. The nodes that interconnect the networks are called routers.
They are also sometimes called gateways. The Internet Protocol is the key tool used today to build
scalable, heterogeneous internetworks. The IP datagram is fundamental to the Internet Protocol. A
datagram is a type of packet that happens to be sent in a connectionless manner over a network. Every
datagram carries enough information to let the network forward the packet to its correct destination;
there is no need for any advance setup mechanism to tell the network what to do when the packet
arrives.

Service Model

To build an internetwork, it is better to define its service model, that is, the host-to-host services you
want to provide. The IP service model can be thought of as having two parts:
• an addressing scheme, which provides a way to identify all hosts in the internetwork,
• a datagram (connectionless) model of data delivery.
This service model is sometimes called best effort because, although IP makes every effort to deliver
datagrams, it makes no guarantees.

Datagram model of Delivery

The IP datagram is fundamental to the Internet Protocol. It embodies a connectionless model of data
delivery. Every datagram carries enough information for the network to forward the packet to its correct
destination; there is no need for any advance setup mechanism to tell the network what to do when the
packet arrives. If a packet gets lost, corrupted, or otherwise fails to reach its intended destination, the
network does nothing about it; it has made its best effort. For this reason, IP is sometimes called an
unreliable service.

Packet Format IPv4


Version: 4 bits
The Version field indicates the format of the internet header. This document describes version 4.
HLen: 4 bits
Internet Header Length is the length of the internet header in 32 bit words, and thus points to
the beginning of the data. Note that the minimum value for a correct header is 5.
Type of Service: 8 bits
The Type of Service provides an indication of the abstract parameters of the quality of service
desired.
Total Length: 16 bits
Total Length is the length of the datagram, measured in octets, including internet header and
data
Identification: 16 bits
An identifying value assigned by the sender to aid in assembling the fragments of a datagram.
Flags: 3 bits
Various Control Flags.
Fragment Offset: 13 bits
This field indicates where in the datagram this fragment belongs.
Time to Live: 8 bits
This field indicates the maximum time the datagram is allowed to remain in the internet system.
If this field contains the value zero, then the datagram must be destroyed.
Protocol: 8 bits
This field indicates the next level protocol used in the data portion of the internet datagram
Source Address: 32 bits
The source address
Destination Address: 32 bits
The destination address
Header Checksum: 16 bits
A checksum computed over the header only, used for error detection (not correction).
Options: variable
The options may appear or not in datagrams.
Padding: variable
The internet header padding is used to ensure that the internet header ends on a 32 bit boundary
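The header checksum above is the standard IPv4 one's-complement sum over the header's 16-bit words, computed with the checksum field set to zero. A minimal sketch in Python (the function name is ours):

```python
def ip_checksum(header_bytes):
    """IPv4 header checksum: one's-complement sum of 16-bit words.

    The header's own checksum field is assumed to be zeroed before this
    is computed; the receiver recomputes it to detect corruption.
    """
    total = 0
    for i in range(0, len(header_bytes), 2):
        word = (header_bytes[i] << 8) + header_bytes[i + 1]
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF
```

To verify a received header, the receiver recomputes the sum over the whole header, checksum field included; a correct header then yields zero.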

IP Fragmentation and Reassembly

In internetwork, each hardware technology specifies the maximum amount of data that a frame can carry.
This is called the Maximum Transmission Unit (MTU). IP uses a technique called fragmentation to solve the
problem of heterogeneous MTUs. When a datagram is larger than the MTU of the network over which it
must be sent, it is divided into smaller fragments which are each sent separately. This process is
illustrated in the figure.
Each fragment becomes its own datagram and is routed independently of any other datagrams. At the
final destination, the process of re-constructing the original datagram is called reassembly
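The splitting step can be sketched as follows; `fragment` is a hypothetical helper, and it relies on the fact that the Fragment Offset field counts 8-byte units, so every fragment except the last must carry a multiple of 8 bytes of data:

```python
def fragment(payload_len, header_len, mtu):
    """Split a datagram's payload for a link with the given MTU.

    Returns (offset_in_8_byte_units, data_size, more_fragments) tuples.
    Hypothetical helper: the real Fragment Offset field counts 8-byte
    units, so every fragment but the last carries a multiple of 8 bytes.
    """
    max_data = (mtu - header_len) // 8 * 8   # round down to a multiple of 8
    fragments = []
    offset = 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = (offset + size) < payload_len  # M flag: more fragments follow
        fragments.append((offset // 8, size, more))
        offset += size
    return fragments
```

For example, a 1,400-byte payload sent over a link with MTU 532 and a 20-byte header becomes two 512-byte fragments followed by one 376-byte fragment.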

Datagram Forwarding in IP

The forwarding of IP datagrams are the following:

• Every IP datagram contains the IP address of the destination host.


• The “network part” of an IP address uniquely identifies a single physical network that is part of
the larger Internet.
• All hosts and routers that share the same network part of their address are connected to the
same physical network and can thus communicate with each other by sending frames over that
network.
• Every physical network that is part of the Internet has at least one router that, by definition, is
also connected to at least one other physical network; this router can exchange packets with
hosts or routers on either network.

We describe the datagram forwarding algorithm in the following way:

if (NetworkNum of destination = NetworkNum of one of my interfaces) then
    deliver packet to destination over that interface
else if (NetworkNum of destination is in my forwarding table) then
    deliver packet to NextHop router
else
    deliver packet to default router

For a host with only one interface and only a default router in its forwarding table, this simplifies to

if (NetworkNum of destination = my NetworkNum) then
    deliver packet to destination directly
else
    deliver packet to default router
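The same algorithm can be written as a small Python function (the table layout and return values are illustrative):

```python
def forward(dest_netnum, my_netnums, table, default_router):
    """Datagram forwarding at a node (illustrative data layout).

    my_netnums: set of NetworkNums of this node's own interfaces
    table:      {NetworkNum: NextHop} forwarding table
    """
    if dest_netnum in my_netnums:
        return ("direct", dest_netnum)          # deliver over that interface
    if dest_netnum in table:
        return ("nexthop", table[dest_netnum])  # deliver to NextHop router
    return ("default", default_router)          # fall back to default router
```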

Global addressing

The IP service model, that one of the things that it provides is an addressing scheme. If you want to be
able to send data to any host on any network, there needs to be a way of identifying all the hosts. Thus,
we need a global addressing scheme—in which no two hosts have the same address.

Addressing Scheme

 An address is needed to uniquely and universally identify every device to allow global communication
 Internet address or IP address is used in the network layer of the Internet model
 Consists of a 32-bit binary address

IP Address Representation
 Binary notation – IP address is displayed as 32 bits
 Dotted-decimal notation – more compact and easier to read form of an IP address
o Each number is between 0 and 255

Types

 Class full address


 Classless address

Classful Addressing

In classful addressing, the address space is divided into five classes: A, B, C, D, and E. We can find the
class of an address when given the address in binary notation or dotted-decimal notation. If the address is
given in binary notation, the first few bits can immediately tell us the class of the address. If the address
is given in decimal-dotted notation, the first byte defines the class. Both methods are shown in figures
Example
Find the class of each address:
a. 227.12.14.87
b. 193.14.56.22
c. 14.23.120.8
d. 252.5.15.111
e. 134.11.78.56
Solution
a. The first byte is 227 (between 224 and 239); the class is D.
b. The first byte is 193 (between 192 and 223); the class is C.
c. The first byte is 14 (between 0 and 127); the class is A.
d. The first byte is 252 (between 240 and 255); the class is E.
e. The first byte is 134 (between 128 and 191); the class is B.
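The first-byte test used in the solution can be sketched as a small helper (the function name is ours):

```python
def address_class(addr):
    """Return the classful class (A-E) of a dotted-decimal IPv4 address,
    using the first-byte ranges listed in the solution above."""
    first = int(addr.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"
```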

Classes and Blocks

One problem with classful addressing is that each class is divided into a fixed number
of blocks with each block having a fixed size as shown in Table

Netid and Hostid

In classful addressing, an IP address in class A, B, or C is divided into netid and hostid.


These parts are of varying lengths as shown in below.

Network address

The network address is an address that defines the network itself; it cannot be assigned to a host. A
network address has several properties:
All hostid bytes are 0
The network address is the first address in the set of addresses (the block)
Given the network address, we can find the class of the address.

Example
Given the address 23.56.7.91, find the network address
Solution:
The class is A. Only the first byte defines the netid, and the remaining 3 bytes define the hostid in class A.
We can find the network address by replacing the hostid bytes with 0s. Therefore, the network address is
23.0.0.0

Example
Given the address 132.6.17.85, find the network address.
Solution
The class is B. The first 2 bytes define the netid in class B. We can replace the remaining 2 bytes with 0s.
So the network address is 132.6.0.0

Example
Given the network address 17.0.0.0, find the class.
Solution
The class is A because the netid is only 1 byte

Example
Given the network address 17.0.0.0, find the class, the block, and the range of the addresses.
Solution
The class is A because the first byte is between 0 and 127. The block has a netid of 17. The addresses
range from 17.0.0.0 to 17.255.255.255.

Example
Given the network address 132.21.0.0, find the class, the block, and the range of the addresses.
Solution
The class is B because the first byte is between 128 and 191. The block has a netid of 132.21. The
addresses range from 132.21.0.0 to 132.21.255.255.
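The "replace the hostid bytes with 0s" rule from the examples above can be sketched as follows (classful classes A/B/C only; the helper names are ours):

```python
NETID_BYTES = {"A": 1, "B": 2, "C": 3}  # how many bytes form the netid

def network_address(addr):
    """Classful network address: keep the netid bytes and zero the
    hostid bytes (classes A, B, and C only)."""
    first = int(addr.split(".")[0])
    cls = "A" if first <= 127 else ("B" if first <= 191 else "C")
    keep = NETID_BYTES[cls]
    parts = addr.split(".")
    return ".".join(parts[:keep] + ["0"] * (4 - keep))
```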

Unicast, Multicast, and Reserved Addresses


 Unicast address – identifies a specific device
 Multicast address – identifies a host belongs to a group or groups (used only as a destination
address)
 Reserved addresses – class E addresses; only used in special cases
Class A Addresses
 Numerically the lowest
 Use only one byte to identify the class type and netid
 Three bytes are available for hostid numbers
 126 usable class A networks (netids 0 and 127 are reserved) with a maximum of 16,777,214 computers on each network
 Designed for large organizations with a large number of hosts or routers
 Many addresses are wasted
Class B Addresses
 First two octets are the network number and the last two octets are the host number
 16,382 possible blocks for assignment to organizations with a maximum of 65,534 computers on
each network
 Designed for mid-size organizations that may have tens of thousands of hosts or routers
 Many addresses are wasted
Class C Addresses
 The first three octets are the network number and the last octet is the host number
 2,096,896 blocks for assignment to organizations
 First three bytes (netid) are the same
 Each block only contains 256 addresses, which may be smaller than what many organizations
need
Class D and Class E Addresses
 Class D – reserved for multicast addresses
o Multicasting – transmission method which allows copies of a single packet to be sent to a
selected group of receivers
 Class E – reserved for future use

Subnetting

 IP addressing is hierarchical
 First it reaches network using its netid
 Then it reaches the host itself using the second portion (hostid)
 Since an organization may not have enough addresses, subnetting may be used to divide the
network into smaller networks or subnetworks
 Subnetting creates an intermediate level of hierarchy
 IP datagram routing then involves three steps: delivery to the network, delivery to the
subnetwork, and delivery to the host

A single IP class A, B, or C network is further divided into a group of hosts to form an IP sub-network.
Sub-networks are created for manageability, performance, and security of hosts and networks and to
reduce network congestion. The host ID portion of an IP address is further divided into a sub-network ID
part and a host ID part. The sub-network ID is used to uniquely identify the different sub-networks within
a network.

Mask
A mask is a 32-bit binary number that gives the first address in the block (the network address) when
bitwise ANDed with an address in the block. The masks for classes A, B, and C are shown in table.

The last column of table shows the mask in the form /n where n can be 8, 16, or 24 in classful addressing.
This notation is also called slash notation or Classless Interdomain Routing (CIDR) notation. The notation
is used in classless addressing.

Masking concept

AND Operation
The network address is the beginning address of each block. It can be found by applying the default mask
to any of the addresses in the block (including itself). It retains the netid of the block and sets the hostid
to zero.

Subnet Mask

o A process that extracts the address of the physical network (network/subnetwork portion) from
an IP address

Determining the network ID, sub-network ID, and host ID, given the IP address and the subnet mask

The network class (A, B, or C) of a given IP address can be easily determined by looking at the value of
the first 4 bits of the first byte. From the network class, the number of bytes used to represent the
network can be determined, and hence the network ID can be determined. By performing an "AND" logical
operation of the IP address and the subnet mask, the sub-network ID can be determined: in the value
resulting from the "AND" operation, after removing the bytes used for the network ID, the remaining bits
for which the corresponding bit in the subnet mask is one represent the sub-network ID.

Finding the Subnet Mask Address

o Given an IP address, we can find the subnet address the same way we found the network
address. We apply the mask to the address.
• we use binary notation for both the address and the mask and then apply the AND
operation to find the subnet address.
Example
What is the subnetwork address if the destination address is 200.45.34.56 and the subnet mask is
255.255.240.0?
Solution
Applying the mask byte by byte: 34 AND 240 = 32 and 56 AND 0 = 0, so the subnetwork address is
200.45.32.0.
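The byte-by-byte AND operation can be sketched as follows (the function name is ours):

```python
def subnet_address(addr, mask):
    """Bitwise-AND an IP address with a subnet mask, byte by byte,
    to extract the (sub)network address."""
    return ".".join(str(int(a) & int(m))
                    for a, m in zip(addr.split("."), mask.split(".")))

# For the example above: 34 AND 240 = 32, 56 AND 0 = 0
subnet_address("200.45.34.56", "255.255.240.0")   # -> "200.45.32.0"
```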

Classless Addressing
 Addressing mechanism in which the IP address space is not divided into classes
 IP address block ranges are variable, as long as they are a power of 2 (2, 4, 8...)
 Masking and subnetting are also used

Limitations of IPv4 address classes

The limitations of IPv4 address classes are:

1. A large number of IP addresses are wasted because of using IP address classes.


2. The routing tables will become large. A separate routing table entry is needed for each network
resulting in a large number of routing table entries.

CIDR
Classless Inter Domain Routing (CIDR) is a method for assigning IP addresses without using the standard
IP address classes like Class A, Class B or Class C. In CIDR, depending on the number of hosts present in
a network, IP addresses are assigned.

Representation of IP address in CIDR notation

In CIDR notation, an IP address is represented as A.B.C.D /n, where "/n" is called the IP prefix or network
prefix. The IP prefix identifies the number of significant bits used to identify a network. For example,
192.9.205.22 /18 means, the first 18 bits are used to represent the network and the remaining 14 bits are
used to identify hosts.
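Python's standard ipaddress module can be used to interpret the notation in the example above (this is an illustration, not part of the notes):

```python
import ipaddress

# For 192.9.205.22/18: the first 18 bits identify the network and the
# remaining 14 bits identify hosts on it.
net = ipaddress.ip_network("192.9.205.22/18", strict=False)
prefix_len = net.prefixlen        # 18 network bits
host_bits = 32 - prefix_len       # 14 host bits
num_hosts = 2 ** host_bits - 2    # all-0s and all-1s host parts are reserved
```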

Advantages of CIDR

The advantages of CIDR over the classful IP addressing are:

1. CIDR can be used to effectively manage the available IP address space.


2. CIDR can reduce the number of routing table entries.

The difference between classful IP addressing and classless IP addressing

The difference between classful IP addressing and classless IP addressing is in selecting the number of bits
used for the network ID portion of an IP address. In classful IP addressing, the network ID portion can
take only the predefined number of bits 8, 16, or 24. In classless addressing, any number of bits can be
assigned to the network ID.

ARP

The Address Resolution Protocol (ARP) is used to resolve IP addresses to MAC addresses. This is important
because on a network, devices find each other using the IP address, but communication between devices
requires the MAC address.

When a computer wants to send data to another computer on the network, it must know the MAC address
of the destination system. To discover this information, ARP sends out a discovery packet to obtain the
MAC address. When the destination computer is found, it sends its MAC address to the sending computer.
The ARP-resolved MAC addresses are stored temporarily on a computer system in the ARP cache. Inside
this ARP cache is a list of matching MAC and IP addresses. This ARP cache is checked before a discovery
packet is sent on to the network to determine if there is an existing entry. Entries in the ARP cache are
periodically flushed so that the cache doesn't fill up with unused entries.
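The cache-before-request behavior just described can be sketched as a toy resolver (all names are hypothetical; a real implementation would also age out entries):

```python
arp_cache = {}   # IP address -> MAC address; flushed periodically in practice

def resolve(ip, send_arp_request):
    """Return the MAC address for ip, checking the ARP cache before
    broadcasting a discovery (request) packet."""
    if ip not in arp_cache:
        arp_cache[ip] = send_arp_request(ip)   # broadcast, await the reply
    return arp_cache[ip]
```

A second lookup for the same IP address is answered from the cache without putting another request on the network.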

MAC address

A Media Access Control address is an identifier assigned to most network adapters (Network Interface
Cards) by the manufacturer for the purpose of identification. The MAC address is used in the MAC protocol
sublayer.

The Need for ARP

ARP is a protocol for mapping a network-layer (IP) address to the physical machine address recognized in
the local network. For example, in IP version 4, addresses are 32 bits long and are hardware independent,
but they depend on the network to which a device is connected. In an Ethernet local area network,
addresses of attached devices are 48 bits long. Moreover, the IP address of a device changes when the
device is moved. We therefore need a mapping mechanism to resolve IP addresses to Ethernet
addresses.

ARP Packet format for mapping IP address into Ethernet address


• a HardwareType field, which specifies the type of physical network (e.g., Ethernet)
• a ProtocolType field, which specifies the higher-layer protocol (e.g., IP)
• HLen (“hardware” address length) and PLen (“protocol” address length) fields, which specify the
length of the link-layer address and higher-layer protocol address, respectively
• an Operation field, which specifies whether this is a request or a response
• the source and target hardware (Ethernet) and protocol (IP) addresses

RARP

RARP is used to resolve an Ethernet MAC address to an IP address. All the mappings between the hardware
MAC addresses and the IP addresses of the hosts are stored in a configuration file in a host in the
network. This host is called the RARP server. This host responds to all the RARP requests. Normally, the IP
address of a system is stored in a configuration file in the local disk. When the system is started, it
determines its IP address from this file. In the case of a diskless workstation, its IP address cannot be
stored in the system itself. In this case, RARP can be used to get the IP address from a RARP server. RARP
uses the same packet format as ARP.

The differences between ARP and RARP:

• Address Resolution Protocol is utilized for mapping IP network address to the hardware address
that uses data link protocol.
• Reverse Address Resolution Protocol is a protocol using which a physical machine in a LAN could
request to find its IP address from ARP table or cache from a gateway server.
• IP address of destination to physical address conversion is done by ARP, by broadcasting in LAN.
• Physical address of source to IP address conversion is done by RARP.
• ARP associates 32 bit IP address with 48 bit physical address.

DHCP

As its name indicates, DHCP provides dynamic IP address assignment. The Internet is a vast source of
information that is continuously updated and accessed via computers and other devices. For a device to
connect to the Internet, it is necessary that among other configurations, it must have an Internet Protocol
(IP) address. The IP address is the computer's address on the Internet. The Bootstrap Protocol
(BOOTP) was the first Transmission Control Protocol/Internet Protocol (TCP/IP) network configuration tool
used to avoid the task of manually assigning IP addresses by automating the process. The Dynamic Host
Configuration Protocol (DHCP) is an improvement over BOOTP.

How DHCP Works

DHCP relies on the existence of a DHCP server that is responsible for providing configuration information
to hosts. There is at least one DHCP server for an administrative domain. The DHCP server maintains a
pool of available addresses that it hands out to hosts on demand. When a network device is newly added
to the network, it contacts the DHCP server to obtain an IP address. To contact a DHCP server, a newly attached
host sends a DHCPDISCOVER message to a special IP address (255.255.255.255) that is an IP broadcast
address. This means it will be received by all hosts and routers on that network. There is at least one
DHCP relay agent on each network, and it is configured with the IP address of the DHCP server. When a
relay agent receives a DHCPDISCOVER message, it unicasts it to the DHCP server and awaits the
response, which it will then send back to the newly added host. The process of relaying a message from a
host to a remote DHCP server is shown in figure

ICMP

The Internet Control Message Protocol (ICMP) is a helper protocol that supports IP with facility for
– Error reporting
– Simple queries
ICMP messages are encapsulated as IP datagrams.

ICMP message format

• Type (1 byte): type of ICMP message
• Code (1 byte): subtype of ICMP message
• Checksum (2 bytes): similar to the IP header checksum.

Example of ICMP Queries

Type/Code: Description
8/0 Echo Request
0/0 Echo Reply
13/0 Timestamp Request
14/0 Timestamp Reply
10/0 Router Solicitation
9/0 Router Advertisement

ICMP Error message

o ICMP error messages report error conditions


o Typically sent when a datagram is discarded
o Error message is often passed from ICMP to the application program

ICMP error messages include the complete IP header and the first 8 bytes of the payload

Frequent ICMP Error message

Routing
 Routing tables are used to store information identifying the location of nodes on the network
 Several techniques are used to make the size of the routing table manageable and to handle
issues such as security

Static vs. Dynamic Routing


 Static – contains information entered manually; cannot be updated automatically
 Dynamic – updated periodically using dynamic routing protocols such as RIP, OSPF, or BGP
Default Router
 Router is assigned to receive all packets with no match in the routing table

Dynamic Routing

The success of dynamic routing depends on two basic router functions:

1. Maintenance of a routing table


2. Timely distribution of knowledge, in the form of routing updates, to other routers.

Dynamic routing relies on a routing protocol. Routing protocols can be Distance-Vector or Link-State.

Distance-Vector Routing (RIP)

Each node constructs a vector containing the distances (costs) to all other nodes and distributes that
vector to its immediate neighbors.
1. The starting assumption for distance-vector routing is that each node knows the cost of the link
to each of its directly connected neighbors.
2. A link that is down is assigned an infinite cost

Initial distances stored at each node (global view)

We can represent each node's knowledge about the distances to all other nodes as a table like the one
given in Table 1.

Note that each node only knows the information in one row of the table.

1. Every node sends a message to its directly connected neighbors containing its personal list of
distance. ( for example, A sends its information to its neighbors B,C,E, and F. )
2. If any of the recipients of the information from A find that A is advertising a path shorter than the
one they currently know about, they update their list to give the new path length and note that
they should send packets for that destination through A. ( node B learns from A that node E can
be reached at a cost of 1; B also knows it can reach A at a cost of 1, so it adds these to get the
cost of reaching E by means of A. B records that it can reach E at a cost of 2 by going through
A.)
3. After every node has exchanged a few updates with its directly connected neighbors, all nodes
will know the least-cost path to all the other nodes.
4. In addition to updating their list of distances when they receive updates, the nodes need to keep
track of which node told them about the path that they used to calculate the cost, so that they
can create their forwarding table. ( for example, B knows that it was A who said " I can reach E
in one hop" and so B puts an entry in its table that says " To reach E, use the link to A.)
In practice, each node's forwarding table consists of a set of triples of the form:

( Destination, Cost, NextHop).

For example, Table 3 shows the complete routing table maintained at node B for the network in figure1.
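Steps 1 and 2 above amount to a relaxation step. A toy update routine, with tables mapping Destination to (Cost, NextHop) (this data layout is ours, not from the notes):

```python
def dv_update(me, my_table, neighbor, neighbor_table, link_cost):
    """Apply one distance-vector advertisement from a directly
    connected neighbor: if the neighbor advertises a cheaper path to
    some destination, adopt it and route via that neighbor."""
    changed = False
    for dest, (cost, _) in neighbor_table.items():
        if dest == me:
            continue                      # never route to ourselves
        new_cost = link_cost + cost
        if dest not in my_table or new_cost < my_table[dest][0]:
            my_table[dest] = (new_cost, neighbor)
            changed = True
    return changed
```

Using the narrative above: if A (reachable from B at cost 1) advertises that it can reach E at cost 1, B records that it can reach E at cost 2 via A.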

Link State

 is a class of intradomain routing protocol

 Each node is assumed to be capable of finding out the state of the link to its neighbors (up or
down) and the cost of each link.

Strategy

1. Advertise about neighborhood: instead of sending its entire routing table, a router sends
information about its neighborhood only
2. Flooding: Each router sends this information to every router on the internetwork not just to its
neighbor. It does so process of flooding
3. Each router sends out information about the neighbor when there is a change in the table
4. Find and use the shortest path to reach any point in the network.
Two mechanisms needed

I. Reliable flooding
II. Route calculation using Dijkstra’s algorithm

LSP

Each router creates a packet called link state packet which contains the following information:
 id of the node that created the LSP
 cost of link to each directly connected neighbor
 sequence number (SEQNO)
 time-to-live (TTL) for this packet

Link State Database

Every router receives every LSP and then prepares a database, which represents a complete network
topology. This Database is known as Link State Database

I. Reliable Flooding

As the term “flooding” suggests, the basic idea is for a node to send its link-state information out on all of
its directly connected links; each node that receives this information then forwards it out on all of its own
links. This process continues until the information has reached all the nodes in the network.

Strategy

– store the most recent LSP from each node
– forward an LSP to all nodes but the one that sent it
– generate a new LSP periodically
– increment SEQNO
– start SEQNO at 0 when rebooting
– decrement the TTL of each stored LSP
– discard an LSP when its TTL = 0

Flooding works in the following way. Consider a node X that receives a copy of an LSP that originated at
some other node Y. Note that Y may be any other router in the same routing domain as X. X checks to see
if it has already stored a copy of an LSP from Y. If not, it stores the LSP. If it already has a copy, it
compares the sequence numbers; if the new LSP has a larger sequence number, it is assumed to be the
more recent, and that LSP is stored, replacing the old one. A smaller (or equal) sequence number would
imply an LSP older (or not newer) than the one stored, so it would be discarded and no further action
would be needed. If the received LSP was the newer one, X then sends a copy of that LSP to all of its
neighbors except the neighbor from which the LSP was just received. Since X passes the LSP on to all its
neighbors, who then turn around and do the same thing, the most recent copy of the LSP eventually
reaches all nodes
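The store-and-forward rule just described can be sketched as follows (the LSP fields and function signature are illustrative; TTL handling is omitted):

```python
def receive_lsp(db, lsp, from_neighbor, neighbors, send):
    """Reliable-flooding rule: store an LSP only if it is newer (larger
    SEQNO) than the stored copy from the same origin, then forward it
    to every neighbor except the one it arrived from."""
    stored = db.get(lsp["origin"])
    if stored is not None and lsp["seqno"] <= stored["seqno"]:
        return False                       # older or duplicate: discard
    db[lsp["origin"]] = lsp                # store the most recent LSP
    for n in neighbors:
        if n != from_neighbor:
            send(n, lsp)                   # forward to all but the sender
    return True
```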

II. Route Calculation using Dijkstra’s algorithm

Each node maintains two lists, known as Tentative and Confirmed. Each of these lists contains a set of
entries of the form (Destination, Cost, NextHop). The algorithm works as follows: the node starts with
only itself in Confirmed at cost 0. It then examines the LSP of the node most recently added to
Confirmed, adding each newly reachable destination (or a cheaper route to a known one) to Tentative;
the lowest-cost entry in Tentative is then moved to Confirmed, and the process repeats until Tentative is
empty.
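A sketch of this forward search, assuming a links map from each node to its neighbors' link costs (the representation is ours):

```python
def forward_search(start, links):
    """Dijkstra forward search using the Tentative/Confirmed lists.

    links: {node: {neighbor: cost}}. Returns Confirmed as a mapping
    {destination: (cost, next_hop)}; the start node's next hop is None.
    """
    confirmed = {start: (0, None)}
    tentative = {}
    current = start
    while True:
        cur_cost, cur_hop = confirmed[current]
        for nbr, cost in links[current].items():
            if nbr in confirmed:
                continue
            new_cost = cur_cost + cost
            # next hop: the neighbor itself if we are at the start,
            # otherwise inherit the hop used to reach the current node
            hop = nbr if current == start else cur_hop
            if nbr not in tentative or new_cost < tentative[nbr][0]:
                tentative[nbr] = (new_cost, hop)
        if not tentative:
            return confirmed
        # move the lowest-cost Tentative entry to Confirmed
        current = min(tentative, key=lambda n: tentative[n][0])
        confirmed[current] = tentative.pop(current)
```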
Properties of Link State Routing

 Stabilizes quickly
 Keeps routing control traffic low
 Responds rapidly to topology changes
 The amount of information stored in each node is large.

Difference between DV and LS

In DV:
– Talk only to directly connected neighbors
– Tell them everything you have learned (your whole distance table)
In LS:
– Talk to every node (via flooding)
– Tell them only the state of your directly connected links

OSPF

 This protocol is open, which means that its specification is in the public domain. It means that
anyone can implement it without paying license fees.
 OSPF is based on the Dijkstra’s algorithm
 The basic building block of link-state messages in OSPF is known as the link-state advertisement
(LSA).
 OSPF is a link-state routing protocol that calls for the sending of link-state advertisements (LSAs)
to all other routers within the same hierarchical area.

Figure shows the packet format for a link-state advertisement. LSAs advertise the cost of links between
routers. The LS Age field is the equivalent of a time to live. The Type field tells us that this is a type 1
LSA. In a type 1 LSA, the Link-state ID and the Advertising router fields are identical: each carries a
32-bit identifier for the router that created this LSA. The LS sequence number is used to detect old or
duplicate LSAs. The LS checksum is used to verify that the data has not been corrupted. Length is the
length in bytes of the complete LSA. The Link ID, Link Data, and metric fields identify the link and give
its cost; TOS is type of service information.

 OSPF specifies that all the exchanges between routers must be authenticated.
 OSPF provides Load Balancing. When several equal-cost routes to a destination exist, traffic is
distributed equally among them
 OSPF allows sets of networks to be grouped together. Such a grouping is called an Area. Each
Area is self-contained
 OSPF uses different message formats to distinguish the information acquired from within the
network (internal sources) with that which is acquired from a router outside (external sources).

Network address translation (NAT)

is the process where a network device, assigns a public address to a host inside a private network. To
separate the addresses used inside private network and the ones used for the public network (Internet),
the Internet authorities have reserved three sets of addresses as private addresses, shown in table

The private addressing scheme works well for computers that only have to access resources inside the
network. However, to access resources outside the network, like the Internet, these computers have to
have a public address in order for responses to their requests to return to them. This is where NAT comes
into play.

Address Translation

All the outgoing packets go through the NAT router, which replaces the source address in the packet with
the global NAT address. All incoming packets also pass through the NAT router, which replaces the
destination address in the packet with the appropriate private address
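A highly simplified sketch of this translation (real NATs key their table on address/port pairs; here a bare connection identifier and all addresses are made up for illustration):

```python
GLOBAL_ADDR = "203.0.113.7"   # the router's public (global NAT) address
nat_table = {}                # connection id -> private source address

def translate_out(packet, conn_id):
    """Outgoing packet: replace the private source address with the
    global NAT address and remember which private host was talking."""
    nat_table[conn_id] = packet["src"]
    return {**packet, "src": GLOBAL_ADDR}

def translate_in(packet, conn_id):
    """Incoming packet: replace the global destination address with the
    appropriate private address from the table."""
    return {**packet, "dst": nat_table[conn_id]}
```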
Intradomain versus Interdomain

Intradomain and interdomain routing protocols are commonly known as interior gateway protocols and
exterior gateway protocols, respectively.

Interior gateway protocols:

In a small and slowly changing network, the network administrator can establish or modify routes by
hand, i.e., manually. The administrator keeps a table of networks and updates the table whenever a
network is added to or deleted from the autonomous system. The disadvantage of the manual approach is
obvious: such systems are neither scalable nor adaptable to changes. Automated methods must be used
to improve reliability and response to failure. To automate this task, interior routers (within an
autonomous system) communicate with one another, exchanging network routing information from which
reachability can be deduced. These routing methods are known as Interior Gateway Protocols (IGPs).
Two interior gateway protocols are widely used, namely:
1. Routing Information Protocol (RIP)
2. Open Shortest Path First (OSPF)

Interdomain routing protocols (BGP)

Autonomous system

An autonomous system (AS) is a network or group of networks under a common administration and with
common routing policies. BGP is used to exchange routing information for the Internet and is the protocol
used between Internet service providers (ISPs), which are different ASes. The basic idea behind
autonomous systems is to provide an additional way to hierarchically aggregate routing information in a
large internet, thus improving scalability.

One feature of the autonomous system idea is that it enables some ASs to dramatically
reduce the amount of routing information they need to care about by using default routes. For example, if
a corporate network is connected to the rest of the Internet by a single router (this router is typically
called a border router since it sits at the boundary between the AS and the rest of the Internet), then it is
easy for a host or router inside the autonomous system to figure out where it should send packets that
are headed for a destination outside of this AS—they first go to the AS’s border router. This is the default
route. Similarly, a regional Internet service provider can keep track of how to reach the networks of all its
directly connected customers and can have a default route to some other provider (typically a backbone
provider) for everyone else.

We now divide the routing problem into two parts: routing within a single autonomous system and routing
between autonomous systems. Since another name for autonomous systems in the Internet is routing
domains, we refer to the two parts of the routing problem as interdomain routing and intradomain routing.
The interdomain routing problem is one of having different ASes share information with each other.

There have been two major interdomain routing protocols in the recent history of the Internet:
1. Exterior Gateway Protocol (EGP).
2. Border Gateway Protocol (BGP)

BGP

The Border Gateway Protocol (BGP) is an inter-autonomous system routing protocol and is the
replacement for EGP. Today’s Internet consists of an interconnection of multiple backbone networks
(they are usually called service provider networks). The following figure illustrates the BGP model of the
Internet.

Given this rough sketch of the Internet, if we define


 local traffic as traffic that originates at or terminates on nodes within an AS, and
 transit traffic as traffic that passes through an AS

Classification of AS:

we can classify ASs into three types:


1. Stub AS: Only one connection to another AS.
2. Multihomed AS: An AS with connections to multiple ASs, which refuses to carry external traffic.
3. Transit AS: An AS with connections to multiple AS and carries internal and external traffic.
One of the most important characteristics of BGP is its flexibility.

BGP Routing Challenges:


–Domains are autonomous and can assign arbitrary metrics to their internal paths.
–ASs can only cooperate if they trust one another to publicize accurate routing information and to
carry out their promises.
–Policies should be flexible enough to allow ASs freedom of action. This choice may override optimal
paths and lead to the use of paths that are merely “good enough”.

BGP Example
• Speaker for AS2 advertises reachability to P and Q
o network 128.96, 192.4.153, 192.4.32, and 192.4.3, can be reached directly from AS2
• Speaker for backbone advertises
o –networks 128.96, 192.4.153, 192.4.32, and 192.4.3 can be reached along the path
(AS1, AS2).
• Speaker can cancel previously advertised paths

Few characteristics of BGP

 Inter-Autonomous System Configuration: BGP’s primary role is to provide communication


between two autonomous systems.
 Next-Hop paradigm: Like RIP, BGP supplies next hop information for each destination.
 Coordination among multiple BGP speakers within the autonomous system: If an
Autonomous system has multiple routers each communicating with a peer in other autonomous
system, BGP can be used to coordinate among these routers, in order to ensure that they all
propagate consistent information.
 Path information: BGP advertisements also include path information, along with the reachable
destination and next destination pair, which allows a receiver to learn a series of autonomous
system along the path to the destination.
 Runs over TCP: BGP uses TCP for all communication. So the reliability issues are taken care by
TCP.
 Support for CIDR: BGP supports classless addressing (CIDR). That it supports a way to send
the network mask along with the addresses.
 Security: BGP allows a receiver to authenticate messages, so that the identity of the sender can
be verified.

Integrating Interdomain and Intradomain Routing

• Stub AS: The border router injects a default route into the intradomain routing protocol.
• Multihomed and Transit ASs: The border routers inject routes that they have learned from
outside the AS.
• Transit ASs: The information learned from BGP may be “too much” to inject into the intradomain
protocol: if a large number of prefixes is inserted, large link-state packets will be circulated and
path calculations will get very complex.

IP Version 6 (IPV6)

Features
–128-bit addresses (classless)
–multicast
–real-time service
–authentication and security
–autoconfiguration
–end-to-end fragmentation
–protocol extensions
Header
–40-byte “base” header
–extension headers (fixed order, mostly fixed length)
–fragmentation
–source routing
–authentication and security
–other options

Address Space Allocation

IPv6 addresses do not have classes, but the address space is still subdivided in various ways based on the
leading bits. Rather than specifying different address classes, the leading bits specify different uses of the
IPv6 address. The current assignment of prefixes is listed in Table

“Link local use” addresses are useful for autoconfiguration, “site local use” addresses are intended to
allow valid addresses to be constructed on a site, and the multicast address space is for multicast,
thereby serving the same role as class D addresses in IPv4.

Address Notation

An example would be
47CD:1234:4422:AC02:0022:1234:A456:0124
• When there are many consecutive 0s, omit them:
47CD:0000:0000:0000:0000:0000:A456:0124 becomes
47CD::A456:0124 (the double colon stands for a run of zero groups)
• Two types of IPv6 address can contain embedded IPv4 addresses. For example, the IPv4 host address
128.96.33.81 becomes
::FFFF:128.96.33.81 (the last 32 bits are an IPv4 address)
This notation facilitates the extraction of an IPv4 address from an IPv6 address
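The zero-compression and IPv4-embedding rules above can be checked with Python's standard `ipaddress` module (the address used here is an 8-group variant of the example above):

```python
import ipaddress

# Full form with a run of zero groups.
full = "47CD:0000:0000:0000:0000:0000:A456:0124"
addr = ipaddress.IPv6Address(full)

# Python prints IPv6 addresses in canonical compressed, lowercase form:
# consecutive zero groups collapse into a single "::" and leading zeros
# within a group are dropped.
print(addr.compressed)          # 47cd::a456:124

# An IPv4-mapped IPv6 address embeds the 32-bit IPv4 address in the low bits,
# and the module can extract it directly.
mapped = ipaddress.IPv6Address("::FFFF:128.96.33.81")
print(mapped.ipv4_mapped)       # 128.96.33.81
```

Note that the library normalizes to lowercase and strips leading zeros, so `47CD::A456:0124` and `47cd::a456:124` denote the same address.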

Aggregatable Global Unicast Addresses

 Subscriber is: non-transit AS (stub and multihomed)


 Provider is : transit AS
o Direct: connected directly to subscribers
o Indirect or Backbone network: connected to other providers
 Plan: Aggregate routing information to reduce the burden on intradomain routers. Assign a prefix
to a direct provider; within the provider assign longer prefixes that reach its subscribers.

Packet Format

Congestion control in network layer(Refer: Andrew S. Tanenbaum, “Computer Networks”)

In a packet-switching network, packets are introduced into the nodes (the offered load), and the nodes
in turn forward the packets (the throughput) into the network. When the offered load crosses a certain
limit, there is a sharp fall in the throughput. This phenomenon is known as congestion. In other words,
when too much traffic is offered, congestion sets in and performance degrades sharply.

Congestion affects two vital parameters of the network performance, namely throughput and delay. The
throughput can be defined as the percentage utilization of the network capacity.

Congestion Control Techniques

Congestion control refers to the mechanisms and techniques used to control congestion and keep the
traffic below the capacity of the network. Congestion control techniques can be broadly classified into
two categories:
 Open loop: protocols that prevent or avoid congestion, ensuring that the system never enters a
congested state.
 Closed loop: protocols that allow the system to enter a congested state, then detect and remove it.

1 Admission control is one such closed-loop technique, used in virtual circuit subnets, where action is
taken once congestion is detected in the network. Different approaches can be followed:
 First approach: once congestion has been signaled, do not set up new connections. This approach
is often used in normal telephone networks: when the exchange is overloaded, no new calls are
established.
 Second approach: allow new virtual connections, but route them carefully so that none of the
congested routers (none of the problem area) lies on the route.
 Third approach: negotiate parameters between the host and the network when the connection is
set up. During setup, the host specifies the volume and shape of traffic, quality of service,
maximum delay, and other parameters related to the traffic it will offer to the network. Once the
host specifies its requirements, the needed resources are reserved along the path before the
actual packets flow.

2 Choke Packet Technique a closed loop control technique, can be applied in both virtual circuit and
datagram subnets.
 Each router monitors the utilisation of its outgoing lines
 Whenever the utilization rises above some threshold, a warning flag is set for that link
 When a data packet arrives for routing over that link, the router extracts the packet's source
address and sends a "choke packet" back to the source. This choke packet contains the
destination address
 The original data packet is tagged so that it will not generate any more choke packets, and is
then forwarded
 When the source host gets the choke packet, it is required to reduce the traffic sent to the
particular destination by X%; it ignores other choke packets for the same destination for a fixed
time interval
The following figure depicts the functioning of choke packets.

Figure depicts the functioning of choke packets, (a) Heavy traffic between nodes P and Q, (b) Node Q
sends the Choke packet to P, (c) Choke packet reaches P, (d) P reduces the flow and send a reduced flow
out, (e) Reduced flow reaches node Q
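The per-router choke-packet steps above can be sketched as follows; the `on_packet` function, threshold value, and packet-dict fields are illustrative assumptions, not part of any real router implementation:

```python
# Hypothetical sketch of the choke-packet decision at a router.
THRESHOLD = 0.8   # utilisation level above which the warning flag is set

def on_packet(link_utilisation, packet, send_choke):
    """Forward a packet; emit a choke packet toward the source if the
    outgoing link is above threshold and the packet is not already tagged."""
    if link_utilisation > THRESHOLD and not packet.get("tagged"):
        # The choke packet carries the congested destination's address so
        # the source knows which traffic flow to slow down.
        send_choke(to=packet["src"], congested_dest=packet["dst"])
        packet["tagged"] = True   # do not generate further choke packets
    return packet                 # the data packet is still forwarded
```

A source receiving the choke packet would then reduce its traffic toward that destination by some fraction and ignore further choke packets for the same destination for a fixed interval, as described above.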

3 Hop-by-Hop Choke Packets


This technique is an advancement over the choke packet method. In this approach, the choke packet
affects every intermediate router it passes through: as soon as the choke packet reaches a router on
its path back to the source, that router curtails the traffic between the intermediate routers. The
following figure illustrates this.
Figure depicts the functioning of Hop-by-Hop choke packets, (a) Heavy traffic between nodes P and Q, (b)
Node Q sends the choke packet to P, (c) Choke packet reaches R, and the flow between R and Q is
curtailed, (d) Choke packet reaches P, and P reduces the flow out

4 Load Shedding

Another simple and effective closed-loop technique is load shedding. In this method, whenever a router
finds that there is congestion in the network, it simply starts dropping packets. One such technique is
Random Early Detection.

Random Early Detection (RED): There are different methods by which a router can decide which
packets to drop. The simplest is to choose the packets to be dropped at random. More effective
schemes exist, but they require some cooperation from the sender. For many applications, some
packets are more important than others, so the sender can mark packets with priority classes to
indicate how important they are. If such a priority policy is implemented, intermediate nodes can
drop packets from the lower priority classes and use the available bandwidth for the more
important packets.

5 Jitter is the variation in the arrival times of packets belonging to the same flow. For example, in a
real-time video broadcast, 99% of packets being delivered with a delay in the range of 24.5 msec to
25.5 msec might be acceptable. The chosen range must be feasible.
When a packet arrives at a router, the router checks how far the packet is behind or ahead of its
schedule. If the packet is ahead of schedule, it is held just long enough to get it back on schedule; if
it is behind schedule, the router tries to get it out quickly.

Unit IV

UDP – TCP – Adaptive Flow Control – Adaptive Retransmission - Congestion control – Congestion
avoidance – QoS

TCP
Stands for Transmission Control Protocol
- TCP provides a connection oriented, reliable, byte stream service. The term connection-oriented
means the two applications using TCP must establish a TCP connection with each other before
they can exchange data. It is a full duplex protocol, meaning that each TCP connection supports a
pair of byte streams, one flowing in each direction. TCP includes a flow-control mechanism for
each of these byte streams that allows the receiver to limit how much data the sender can
transmit. TCP also implements a congestion-control mechanism.

TCP frame format

TCP data is encapsulated in an IP datagram. The figure shows the format of the TCP header. Its normal
size is 20 bytes unless options are present. Each of the fields is discussed below:

- Source port (16 bits) – identifies the sending port


- Destination port (16 bits) – identifies the receiving port
- The sequence number identifies the byte in the stream of data from the sending TCP to the
receiving TCP that the first byte of data in this segment represents.
- Acknowledgment number (32 bits) – if the ACK flag is set then the value of this field is the next
sequence number that the receiver is expecting.
- Header length (4 bits) – specifies the size of the TCP header in 32-bit words.
- Reserved (6 bits) – for future use and should be set to zero
- URG (1 bit) – indicates that the Urgent pointer field is significant
- ACK (1 bit) – indicates that the Acknowledgment field is significant. All packets after the initial
SYN packet sent by the client should have this flag set.
- PSH (1 bit) – Push function. Asks to push the buffered data to the receiving application.
- RST (1 bit) – Reset the connection
- SYN (1 bit) – Synchronize sequence numbers. Only the first packet sent from each end should
have this flag set. Some other flags change meaning based on this flag, and some are only valid
for when it is set, and others when it is clear.
- FIN (1 bit) – No more data from sender
- Window (16 bits) – the size of the receive window, which specifies the number of bytes (beyond
the sequence number in the acknowledgment field) that the receiver is currently willing to receive
- Checksum (16 bits) – The 16-bit checksum field is used for error-checking of the header and data
- Urgent pointer (16 bits) – if the URG flag is set, then this 16-bit field is an offset from the
sequence number indicating the last urgent data byte
- Options (Variable 0-320 bits, divisible by 32) – The length of this field is determined by the data
offset field. Options 0 and 1 are a single byte (8 bits) in length
- The data portion of the TCP segment is optional.
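The fixed 20-byte header layout above can be packed and parsed with Python's `struct` module; `build_header` and `parse_header` are illustrative helper names, and options are omitted:

```python
import struct

# Layout: src port, dst port, seq, ack, (data offset + reserved), flags,
# window, checksum, urgent pointer -- network byte order ("!").
HDR = struct.Struct("!HHIIBBHHH")   # 2+2+4+4+1+1+2+2+2 = 20 bytes

def build_header(src, dst, seq, ack, flags, window, checksum=0, urg=0):
    offset_byte = 5 << 4            # header length = 5 words (20 bytes)
    return HDR.pack(src, dst, seq, ack, offset_byte, flags, window, checksum, urg)

def parse_header(raw):
    src, dst, seq, ack, off, flags, window, checksum, urg = HDR.unpack(raw[:20])
    return {
        "src": src, "dst": dst, "seq": seq, "ack": ack,
        "header_len": (off >> 4) * 4,          # header length in bytes
        "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10),
        "FIN": bool(flags & 0x01), "window": window,
    }
```

For example, `parse_header(build_header(1234, 80, 100, 0, 0x02, 65535))` yields a header with the SYN flag set and a header length of 20 bytes.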

TCP vs UDP

TCP:
- Transmission Control Protocol
- Slower than UDP
- Reliable
- Connection oriented
- Provides flow control and error control
- Offers error correction and guaranteed delivery
- Provides or sends larger packets
- Ordered message delivery
- Heavyweight: when a segment arrives in the wrong order, resend requests have to be sent, and
all the out-of-sequence parts have to be put back together
- If any segments are lost, the receiver will send ACKs to report the lost segments

UDP:
- User Datagram Protocol
- Faster than TCP
- Unreliable
- Connectionless
- No flow control
- Does not offer error correction or guaranteed delivery
- Provides or sends smaller packets
- Unordered message delivery
- Lightweight: no ordering of messages, no tracking of connections, etc.
- No acknowledgement

TCP Connection establishment and Termination


The algorithm used by TCP to establish and terminate a connection is called a three-way handshake.
Three way handshake for connection establishment

It involves exchange of three messages between client and server as illustrated by the timeline given in
figure

1. The client sends a SYN segment containing the starting sequence number (x) it plans to use.
2. The server responds with a SYN+ACK segment that acknowledges the client's sequence number
(acknowledgment number x+1) and announces its own starting sequence number (y).
3. Finally, the client responds with a third segment (an ACK with acknowledgment number y+1)
that acknowledges the server's sequence number.
Three way handshake for connection termination
Three-way handshaking for connection termination as shown in figure

1. The client TCP, after receiving a close command from the client process, sends the first segment,
a FIN segment. The FIN segment consumes one sequence number(x)
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends
the second segment, a FIN +ACK segment, to confirm the receipt of the FIN segment from the
client and at the same time to announce the closing of the connection in the other direction. The
FIN +ACK segment consumes one sequence number(y)
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN
segment from the TCP server. This segment contains the acknowledgment number(y+1), which
is 1 plus the sequence number received in the FIN segment from the server.

TCP’s State Transition Diagram


TCP uses a finite state machine to keep track of all the events happening during the connection
establishment, termination and data transfer. The following figure shows that.
State Description
CLOSED No connection
LISTEN The server is waiting for call from client
SYN-SENT A connection request is sent, waiting for ACK
SYN-RECD A connection request is received
ESTABLISHED A connection is established
FIN-WAIT1 The application has requested the closing connection
FIN-WAIT2 The other side has accepted the closing connection request
TIME-WAIT Waiting for retransmitted segment to die
CLOSE-WAIT The server is waiting for the application to close
LAST-ACK The server is waiting for the last ACK

Client program
The client program can be in one of the following states:
CLOSED, SYS-SENT, ESTABLISHED, FIN-WAIT1, FIN-WAIT2 and TIME-WAIT
1. The client TCP starts at CLOSED-STATE

2. The client TCP can receive an active open request from the client application program. It sends
SYN segment to the server TCP and goes to the SYN-SENT state.

3. While in this state, the client TCP can receive a SYN+ACK segment from the other TCP and goes
to the ESTABLISHED state. This is data transfer state. The client remains in this state as long as
sending and receiving data.
4. While in this state, the client can receive a close request from the client application program. It
sends FIN segment to the other TCP and goes to FIN-WAIT1 state.

5. While in this state, the client TCP waits to receive an ACK from the server TCP. When the ACK is
received, it goes to the FIN-WAIT2 state without sending anything. Now the connection is
closed in one direction.

6. The client remains in this state, waiting for the server TCP to close the connection from the other
end. If the client TCP receives FIN segment from the other end, it sends an ACK and goes to
TIME-WAIT state.

7. When the client in this state, it starts time waited timer and waits until this timer goes off. After
the time out the client goes to CLOSED state.

Server program
The server program can be in one of the following states:
CLOSED, LISTEN, SYN-RECD, ESTABLISHED, CLOSE-WAIT, LAST-ACK
1. The server TCP starts in the CLOSED-STATE.

2. While in this state, the server TCP can receive a passive open request from the server application
program. It goes to LISTEN state.

3. While in this state, the server TCP can receive a SYN segment from the client TCP. It sends a
SYN+ACK segment to the client TCP and goes to the SYN-RECD state.

4. While in this state, the server TCP can receive ACK segment from the client TCP. Then it goes to
ESTABLISHED state. This is data transfer state. The sender remains this state as long as it is
receiving and sending data.

5. While in this state, the server TCP can receive a FIN segment from the client, which means that
the client wishes to close the connection. It can send an ACK segment to the client and goes to
the CLOSE-WAIT state.

6. While in this state, the server waits until it receives a close request from the server application
program. It then sends a FIN segment to the client and goes to LAST-ACK state.

7. While in this state, the server waits for the last ACK segment. It then goes to the CLOSED state.
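The client-side transitions described above can be encoded as a small transition table; the event labels are informal paraphrases of the notes, not protocol messages:

```python
# Minimal sketch of the client-side TCP state machine (illustrative only;
# state names follow the notes, event strings are made up for readability).
CLIENT_FSM = {
    ("CLOSED",      "active_open/send SYN"):  "SYN-SENT",
    ("SYN-SENT",    "recv SYN+ACK/send ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close/send FIN"):        "FIN-WAIT1",
    ("FIN-WAIT1",   "recv ACK"):              "FIN-WAIT2",
    ("FIN-WAIT2",   "recv FIN/send ACK"):     "TIME-WAIT",
    ("TIME-WAIT",   "timeout"):               "CLOSED",
}

def run(fsm, start, events):
    """Drive the state machine through a sequence of events."""
    state = start
    for ev in events:
        state = fsm[(state, ev)]
    return state
```

Running the full event sequence walks the client from CLOSED through ESTABLISHED and the FIN-WAIT states back to CLOSED, mirroring steps 1 to 7 in the client program above.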

TCP Flow control

– Flow control defines the amount of data a source can send before receiving an ACK from the
receiver.
– Sliding window protocol is used by TCP for flow control.

Sliding window for flow control by TCP


The sliding window serves several purposes:
(1) It guarantees the reliable delivery of data
(2) It ensures that the data is delivered in order,
(3) It enforces flow control between the sender and the receiver.
For reliable and ordered delivery
The sending and receiving sides of TCP interact in the following manner to implement reliable and ordered
delivery:
Each byte has a sequence number.
ACKs are cumulative.
Sending side
o LastByteAcked <=LastByteSent
o LastByteSent <= LastByteWritten
o bytes between LastByteAcked and LastByteWritten must be buffered.
Receiving side
o LastByteRead < NextByteExpected
o NextByteExpected <= LastByteRcvd + 1
o bytes between NextByteRead and LastByteRcvd must be buffered.
For flow Control
Sender buffer size : MaxSendBuffer
Receive buffer size : MaxRcvBuffer
Receiving side
o LastByteRcvd - NextByteRead <= MaxRcvBuffer
o AdvertisedWindow = MaxRcvBuffer - (LastByteRcvd - NextByteRead)
Sending side
o LastByteSent - LastByteAcked <= AdvertisedWindow
o EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
o LastByteWritten - LastByteAcked <= MaxSendBuffer
o Block sender if (LastByteWritten - LastByteAcked) + y > MaxSendBuffer
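A small sketch of the AdvertisedWindow/EffectiveWindow arithmetic above (all values in bytes; variable names taken from the notes):

```python
def advertised_window(max_rcv_buffer, last_byte_rcvd, next_byte_read):
    """Receiver: free buffer space it is willing to accept."""
    return max_rcv_buffer - (last_byte_rcvd - next_byte_read)

def effective_window(adv_window, last_byte_sent, last_byte_acked):
    """Sender: how much more it may send, given bytes already in flight."""
    return adv_window - (last_byte_sent - last_byte_acked)

# Example: receiver has a 4096-byte buffer with 1024 bytes buffered unread.
adv = advertised_window(4096, last_byte_rcvd=2048, next_byte_read=1024)
print(adv)                                                        # 3072
# Sender has 512 bytes in flight, so it may send 2560 more bytes.
print(effective_window(adv, last_byte_sent=1512, last_byte_acked=1000))  # 2560
```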

Always send ACK in response to an arriving data segment

Silly Window Syndrome:


A problem can occur in the sliding window operation when the sender creates data slowly, the
receiver consumes data slowly, or both. This situation is called the silly window syndrome.

Solution for creating data slowly by sender


1 Nagle’s Algorithm
Step 1: The sender sends the first segment of data even if it is only 1 byte.
Step 2: After sending that segment, the sender accumulates data in the output buffer and waits either
for an ACK from the receiver or for enough accumulated data to fill a maximum-size segment. At that
point TCP can send the next segment.
Step 3: Step 2 is repeated for the rest of the transmission. Segment 3 is sent once an ACK for segment
2 is received or enough data are accumulated to fill a maximum-size segment.
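Nagle's decision rule reduces to a single predicate; `nagle_should_send` and the MSS value here are illustrative assumptions:

```python
MSS = 1460   # maximum segment size in bytes (a typical Ethernet value)

def nagle_should_send(buffered_bytes, unacked_bytes):
    """Decide whether the sender may transmit now under Nagle's algorithm."""
    if buffered_bytes >= MSS:   # enough data for a maximum-size segment
        return True
    if unacked_bytes == 0:      # nothing in flight: send even a single byte
        return True
    return False                # otherwise wait for an ACK or more data
```

So a lone byte goes out immediately on an idle connection, but while data is outstanding the sender accumulates bytes until a full segment is ready.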

Solution for consuming data slowly by the receiver


1 Clark’s algorithm
Send an ACK as soon as data arrive, but announce a window size of zero until either there is enough
space to accommodate a maximum-size segment in the buffer or one half of the buffer is empty.
2 Delayed Acknowledgements
Delay the sending ACK. This means, when a segment arrives, it is not acknowledged immediately. The
receiver waits until there is space in its buffer.

TCP Timers

The TCP uses the following timers during the data transmission
• Retransmission timer
• Persistence timer
• Keep alive timer
• Time waited timer
Retransmission timer: used to calculate the retransmission time of the next segment.
Retransmission time = 2 x RTT, where RTT is the round-trip time.
RTT = α (previous RTT) + (1 - α) (current RTT)
Persistence timer: When the sender receives the ACK with a window size zero, it starts the persistence
timer. When it goes off, the sender sends special segment called probe. The probe is an alert that alerts
the receiver that the ACK was lost and should be resent.
Keep alive timer: is used to prevent a long idle connection between sender and receiver
Time waited timer: is used during the connection termination

Adaptive Retransmission

• TCP uses multiple timers. Most important timer is retransmission timer

• Adaptive retransmission is used for retransmission timer management

• When a segment is sent, a retransmission timer is started. If the segment is acknowledged


before the timer expires, the timer is stopped. On the other hand, if the acknowledgement comes
in after the time out occurs, then retransmission is started.

• How long should be the time out interval?

• Choosing an appropriate timeout value is not easy.

• The length of time we use for retransmission timer is very important. If it is set too low, we may
start retransmitting a segment earlier, if we set the timer too long, we waste time that reduces
overall performance.

• To address this problem, TCP uses an adaptive retransmission mechanism

• The solution is to use a dynamic algorithm that constantly adjusts the timeout interval based on
continuous measurement of n/w performance

1. Original Algorithm

• Adaptive Retransmission Based on Round-Trip Time (RTT) Calculations

• Measure SampleRTT for each segment / ACK pair

• every time TCP sends a data segment, it records the time. When an ACK for that
segment arrives, TCP reads the time again and then takes the difference between these
two times as a SampleRTT

• Compute weighted average of RTT

• EstimatedRTT = α*EstimatedRTT + (1- α )*SampleRTT

Where α between 0.8 and 0.9

• Set timeout based on EstimatedRTT

• TimeOut = 2 * EstimatedRTT
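The original algorithm's EWMA and timeout computation can be written directly; α = 0.875 here is an assumed value within the stated 0.8 to 0.9 range:

```python
ALPHA = 0.875   # smoothing weight on the previous estimate

def update_rtt(estimated_rtt, sample_rtt):
    """Return (new EstimatedRTT, TimeOut) per the original algorithm."""
    estimated_rtt = ALPHA * estimated_rtt + (1 - ALPHA) * sample_rtt
    return estimated_rtt, 2 * estimated_rtt

# A sample RTT of 200 ms pulls a 100 ms estimate up only slightly,
# since most of the weight stays on the history.
print(update_rtt(100.0, 200.0))   # (112.5, 225.0)
```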

2. Karn/Partridge Algorithm

• Used in calculating accurate SampleRTT

• One problem occurs with the dynamic (original) algorithm when estimating the RTT. The
following figure illustrates this.
• Whenever a segment is retransmitted and then an ACK arrives at the sender, it is impossible to
determine if this ACK should be associated with the first or the second transmission of the
segment for the purpose of measuring the sample RTT. To compute accurate SampleRTT, it is
necessary to know, the ACK is associated with which transmission

• if you assume that the ACK is for the original transmission but it was really for the second, then
the SampleRTT is too large as shown in (a)

• if you assume that the ACK is for the second transmission but it was actually for the first, then
the SampleRTT is too small as shown in (b).

• The solution is simple

• Do not measure SampleRTT when retransmitting a segment; instead, measure SampleRTT
only for segments that have been sent exactly once.

• Double the timeout after each retransmission

3. Jacobson/Karel’s Algorithm

• Used to measure accurate timeout value

• According to the original algorithm, the timeout is calculated as

• TimeOut = 2 * EstimatedRTT

where the constant factor (2) proved inflexible: experience showed that it failed to respond
when the variance in the RTT went up.

• In the new approach, the sender measures a new SampleRTT as before. It then

• Calculates the timeout as follows:

• Difference = SampleRTT − EstimatedRTT

• EstimatedRTT = EstimatedRTT + (δ × Difference)

• Deviation = Deviation + δ(|Difference| − Deviation)

where δ is a fraction between 0 and 1. That is, calculate both the mean RTT and the
variation in that mean.
• TCP then computes the timeout value as a function of both EstimatedRTT and Deviation as
follows:

• TimeOut = μ × EstimatedRTT + φ × Deviation

where based on experience, μ is typically set to 1 and φ is set to 4.
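The Jacobson/Karels formulas above translate directly into code; δ = 0.125 here is an assumed value within the stated (0, 1) range, with μ = 1 and φ = 4 as given:

```python
DELTA, MU, PHI = 0.125, 1, 4

class RTTEstimator:
    """Track the mean RTT and its variation, per Jacobson/Karels."""
    def __init__(self, estimated_rtt, deviation=0.0):
        self.estimated_rtt = estimated_rtt
        self.deviation = deviation

    def sample(self, sample_rtt):
        difference = sample_rtt - self.estimated_rtt
        self.estimated_rtt += DELTA * difference
        self.deviation += DELTA * (abs(difference) - self.deviation)
        # TimeOut = mu * EstimatedRTT + phi * Deviation
        return MU * self.estimated_rtt + PHI * self.deviation
```

Because the deviation term grows with RTT variance, the timeout widens when samples become erratic, which is exactly the responsiveness the fixed factor of 2 lacked.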

TCP Congestion Control

Congestion

 Occurs when the load on the network exceeds capacity


 Congestion control refers to mechanisms and techniques used to control congestion and keep
load below capacity
 Techniques and mechanisms that can either prevent congestion or remove congestion
 Open-loop policy – applied to prevent congestion before it happens
 Closed-loop policy – applied to reduce congestion after it happens
 TCP contain four algorithms for congestion control

1. Slow start
2. Additive Increase and Multiplicative Decrease (AMID)
3. Fast retransmit and Fast Recovery
4. Equation based congestion control

1. Slow start (SS) algorithm

o The sender starts by transmitting one segment and waiting for its ACK.
o When that ACK is received, the congestion window is incremented from one to two, and
two segments can be sent.
o When each of those two segments is acknowledged, the congestion window is increased
to four.
o This provides an exponential growth (2^0, 2^1, and so on).
o Slow start cannot continue indefinitely; there must be a threshold to stop this phase.
When the window size reaches this threshold, slow start stops and the next phase starts.
2. Additive Increase and Multiplicative Decrease (AIMD)

To avoid congestion before it happens, it is necessary to stop this exponential growth. TCP
performs another algorithm called additive increase.

Additive Increase (AI)

o When the congestion window reaches the threshold value, the size of the congestion window
is increased by 1.
o TCP increases the congestion window additively until the time out occurs.
Multiplicative Decrease (MD)

o If a timeout occurs, the threshold is set to one half of the last congestion window
size, and the congestion window starts from 1 again.
o Because the threshold is reduced to one half of the previous congestion window size each
time a timeout occurs, the threshold decreases in a multiplicative manner.

The following graph illustrates the TCP congestion control
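The slow-start and AIMD phases above can be combined into a toy simulation, simplified so that each "ack" event stands for one full window's worth of ACKs (one RTT); window sizes are in segments and all names are illustrative:

```python
def simulate(events, threshold=8):
    """Trace the congestion window through a list of 'ack'/'timeout' events."""
    cwnd = 1
    trace = []
    for ev in events:
        if ev == "ack":
            if cwnd < threshold:
                cwnd *= 2                    # slow start: exponential growth
            else:
                cwnd += 1                    # additive increase
        elif ev == "timeout":
            threshold = max(cwnd // 2, 1)    # multiplicative decrease
            cwnd = 1                         # restart from slow start
        trace.append(cwnd)
    return trace

print(simulate(["ack"] * 4))                 # [2, 4, 8, 9]
```

Real TCP grows the window per individual ACK rather than per RTT, but the sawtooth shape (exponential rise, linear climb, collapse on timeout) matches the graph referenced above.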

3. Fast retransmit and Fast Recovery

Fast retransmission

– The idea of fast retransmit is straightforward. Every time a data packet arrives at the
receiving side, the receiver responds with an acknowledgment.
– When a packet arrives out of order (i.e., an earlier packet has not yet been received), the
receiver resends the same acknowledgment that it sent the last time. This second
transmission of the same acknowledgment is called a duplicate ACK.
– When the sender sees a duplicate ACK, it knows that the receiver must have received a
packet out of order, which suggests that an earlier packet might have been lost.
– The sender waits until it sees some number of duplicate ACKs and then retransmits the
missing packet. In practice, TCP waits until it has seen three duplicate ACKs before
retransmitting the packet.
– After the retransmission of the lost segment, the receiver will send a cumulative ACK to the
sender.
The following figure illustrates this.

In this example, the destination receives packets 1 and 2, but packet 3 is lost in the network. Thus,
the destination will send a duplicate ACK for packet 2 when packet 4 arrives, again when packet 5
arrives, and so on. When the sender sees the third duplicate ACK for packet 2—the one sent because
the receiver had gotten packet 6—it retransmits packet 3. When the retransmitted copy of packet 3
arrives at the destination, the receiver then sends a cumulative ACK for everything up to and
including packet 6 back to the source.
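The duplicate-ACK logic of fast retransmit can be sketched as follows; `sender_acks_to_retransmit` is an illustrative helper, and ACKs are modeled as packet numbers rather than byte sequence numbers:

```python
def sender_acks_to_retransmit(acks, dup_threshold=3):
    """Return the packet to retransmit once `dup_threshold` duplicate ACKs
    for the same number have been seen, else None."""
    dup_count, last_ack = 0, None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == dup_threshold:
                return ack + 1      # the missing packet follows the ACKed one
        else:
            last_ack, dup_count = ack, 0
    return None
```

With the ACK stream from the example above (ACK 1, then ACK 2 repeated as packets 4, 5, and 6 arrive), the third duplicate ACK for packet 2 triggers retransmission of packet 3.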

Fast recovery

– When the fast retransmit mechanism signals congestion, rather than dropping the congestion
window and starting the slow start, it is possible to use the ACKs that are still in the pipe to
reduce the sending of packets. This mechanism is called fast recovery.
– It removes the slow start phase and cuts the congestion window in half and resumes additive
increase.

4. The Equation based congestion control

In TCP, the data transmission is modeled by the following equation:

Rate ∝ 1 / (RTT × √ρ)

which says the transmission rate must be inversely proportional to the round-trip time (RTT) and the
square root of the loss rate (ρ).

TCP congestion avoidance

It refers to predicting when congestion is about to happen and reducing the rate of data transmission
before packets are discarded. Congestion avoidance can be either
1. router-centric (Router Based Congestion Avoidance): a) DECbit and b) RED gateways
2. host-centric (Source Based Congestion Avoidance): c) TCP Vegas
a) DECbit

– Destination Experiencing Congestion bit


– The idea is to more evenly split the responsibility for congestion control between the routers and
the end nodes. Each router monitors the load it is experiencing and explicitly notifies the end
nodes when congestion is about to occur. This notification is implemented by setting a binary
congestion bit in the packets that flow through the router; hence the name DECbit.
– The destination host then copies this congestion bit into the ACK and it sends back to the source.
Finally, the source adjusts its sending rate so as to avoid congestion.
– A bit is added to the packet header to signify the congestion.
– A router sets this bit in a packet if its average queue length is greater than or equal to 1 at the
time the packet arrives.
– The router monitors the average queue length over the last busy + idle cycle.
– Figure shows the queue length at a router as a function of time. The router calculates the area
under the curve and divides this value by the time interval to compute the average queue length.

– The router sets the congestion bit if the average queue length > 1.
– The router attempts to balance throughput against delay.
– The algorithm uses a threshold of 50%. If less than 50% of the ACKs for a connection have the
congestion bit set, the CongestionWindow is increased by 1.
– Once more than 50% of the ACKs have the congestion bit set, the CongestionWindow is
decreased to 0.875 times its previous value.

b) Random Early Detection (RED)

• This is a proactive approach in which the router discards one or more packets before the buffer
becomes completely full.
• In RED, First, each time a packet arrives, the RED algorithm computes the average queue length,
AvgLen.
• AvgLen is computed as

AvgLen = (1 −Weight) × AvgLen + Weight × SampleLen

where 0 <Weight < 1 and SampleLen is the length of the queue when a sample measurement is
made.
• Second, RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold. When a packet arrives at the gateway, RED compares the current AvgLen with
these two thresholds, according to the following rules:

if AvgLen ≤ MinThreshold

−→ queue the packet

if MinThreshold < AvgLen < MaxThreshold

−→ calculate probability P

−→ drop the arriving packet with probability P

if MaxThreshold ≤ AvgLen

−→ drop the arriving packet

That is, if the average queue length is smaller than the lower threshold, no action is taken, and if
the average queue length is larger than the upper threshold, then the packet is always dropped.
If the average queue length is between the two thresholds, then the newly arriving packet is
dropped with some probability P. This situation is depicted in following figure

The approximate relationship between P and AvgLen is shown in following figure

To summarize it,
– If AvgLen is lower than some lower threshold, congestion is assumed to be minimal or non-
existent and the packet is queued.
– If AvgLen is greater than some upper threshold, congestion is assumed to be serious and the
packet is discarded.
– If AvgLen is between the two thresholds, this might indicate the onset of congestion. The
probability of congestion is then calculated.
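The averaging and drop rules above can be sketched as follows (a simplified illustration: the temporary drop probability grows linearly from 0 at MinThreshold to an assumed MaxP at MaxThreshold; real RED further scales P by a count of packets queued since the last drop):

```python
import random

def red_update_avg(avg_len, sample_len, weight=0.002):
    """EWMA of the queue length: AvgLen = (1 - Weight)*AvgLen + Weight*SampleLen."""
    return (1 - weight) * avg_len + weight * sample_len

def red_should_drop(avg_len, min_th, max_th, max_p=0.02):
    """Return (drop?, probability) for an arriving packet."""
    if avg_len <= min_th:
        return False, 0.0            # queue the packet
    if avg_len >= max_th:
        return True, 1.0             # always drop
    # between the thresholds: drop with probability P
    p = max_p * (avg_len - min_th) / (max_th - min_th)
    return random.random() < p, p
```

With MinThreshold = 10 and MaxThreshold = 30, an average queue length of 20 gives P = MaxP/2.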

c) TCP Vegas

TCP Vegas was the nickname used for the next version of the TCP/IP UNIX stack. This approach is
host-centric. The basis of the technique is to compare the change in the window size with the change in
the RTT:
(CurrentWindow – OldWindow) × (CurrentRTT – OldRTT)
When this value is greater than zero, assume congestion is approaching and incrementally decrease the
window. When the value is zero or negative, incrementally increase the window size.
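The comparison rule can be written as a small function (a simplified sketch of the rule above with hypothetical names, not the full TCP Vegas algorithm):

```python
def vegas_like_adjust(window, old_window, rtt, old_rtt, step=1):
    """If (CurrentWindow - OldWindow) * (CurrentRTT - OldRTT) is positive,
    congestion is assumed to be approaching and the window is decreased;
    when it is zero or negative, the window is increased."""
    if (window - old_window) * (rtt - old_rtt) > 0:
        return max(1, window - step)
    return window + step
```

For example, a window that grew from 8 to 10 while the RTT rose from 100 to 120 ms is decreased; if the RTT stayed flat, the window is increased.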

QUALITY OF SERVICE (QoS)

There are some techniques that can be used to improve the quality of service. There are four common
methods:
1. scheduling
2. traffic shaping
3. admission control
4. resource reservation

1. Scheduling

Several scheduling techniques are designed to improve the quality of service. Two of them are
1. FIFO queuing
2. Fair Queuing (FQ) (Weighted fair queuing).

a) FIFO queuing

– first-in, first-out (FIFO) queuing


– packets wait in a buffer (queue) until they are processed by the node
– FIFO queuing combines two separable ideas.
o Scheduling discipline—it determines the order in which packets are transmitted.
o Tail drop is a drop policy—it determines which packets get dropped.

If the average arrival rate is > the average processing rate, the queue will fill up and new packets will be
discarded.

b) FAIR queuing (weighted fair queuing)

A better scheduling method:
• The packets are assigned to different flows and admitted to different queues.
• The queues are weighted based on the priority of the queues; higher priority means a higher weight.
• The system processes packets in each queue in a round-robin fashion; the number of packets selected for processing is based on the corresponding weight. For example, if the weights are 3, 2, and 1, three packets are processed from the first queue, two from the second queue, and one from the third queue.
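The weighted round-robin service order described above can be sketched as follows (an illustrative sketch; the function drains the queues in one pass rather than serving an ongoing stream):

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Serve packets from each queue in round-robin order, taking up to
    'weight' packets from a queue on each turn."""
    served = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):       # up to 'w' packets per turn
                if q:
                    served.append(q.popleft())
    return served
```

With weights 3, 2, and 1, the first round serves three packets from the first queue, two from the second, and one from the third, as in the example above.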
2. Traffic Shaping

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two techniques can shape traffic:
1. leaky bucket
2. token bucket

a) Leaky Bucket - can send a regulated rate of data

“If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which the water is input to the bucket unless the bucket is empty. The input rate can vary, but the output rate remains constant.”
The following steps are performed:


• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
• Bursty traffic is converted to a uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.
• Whenever a packet arrives, if there is room in the queue it is queued up and if there is no room
then the packet is discarded.
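The steps above can be sketched as a finite queue drained at a fixed rate (an illustrative sketch; capacity and rate are hypothetical parameters, and one call to tick() stands for one output interval):

```python
from collections import deque

class LeakyBucket:
    """Finite queue that releases packets at a constant rate."""
    def __init__(self, capacity, rate):
        self.queue = deque()
        self.capacity = capacity   # maximum packets the bucket can hold
        self.rate = rate           # packets released per interval

    def arrive(self, packet):
        """Queue the packet if there is room; otherwise discard it."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False               # bucket full: packet discarded

    def tick(self):
        """Release up to 'rate' packets this interval."""
        out = []
        for _ in range(min(self.rate, len(self.queue))):
            out.append(self.queue.popleft())
        return out
```

A burst of five arrivals into a bucket of capacity 3 loses two packets, and the rest drain out at the constant rate, two per interval.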

b) Token bucket - It sends a bursty amount of data

In this algorithm the bucket holds tokens, generated at regular intervals.
Steps:
1. In regular intervals tokens are thrown into the bucket. The bucket has a maximum capacity.
2. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
3. If there is no token in the bucket, the packet cannot be sent.
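The three steps can be sketched as follows (an illustrative sketch; one call to tick() stands for one token-generation interval):

```python
class TokenBucket:
    """Tokens accumulate at a regular rate up to a maximum capacity;
    a packet is sent only if a token is available, so an idle host
    saves up credit and can later send a burst."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = 0

    def tick(self, new_tokens=1):
        """Step 1: tokens are thrown into the bucket, up to its capacity."""
        self.tokens = min(self.capacity, self.tokens + new_tokens)

    def send(self):
        """Steps 2 and 3: consume a token if one exists, else refuse."""
        if self.tokens > 0:
            self.tokens -= 1
            return True    # token removed, packet sent
        return False       # no token: packet cannot be sent
```

After three idle intervals a bucket of capacity 2 holds two tokens, so two packets can go out back to back before the next packet must wait.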
Combining Token Bucket and Leaky Bucket

The two techniques can be combined to credit an idle host and at the same time regulate
the traffic.

3. Admission control

Different approaches can be followed:


 First approach: once congestion has been signaled, do not set up new connections. This type of
approach is often used in normal telephone networks. When the exchange is overloaded, no new
calls are established.
 Second approach: allow new virtual connections, but route them carefully so that none of the
congested routers, and none of the problem area, is a part of the route.
 Third approach is to negotiate different parameters between the host and the network, when the
connection is setup. During the setup time itself, Host specifies the volume and shape of traffic,
quality of service, maximum delay and other parameters, related to the traffic it would be
offering to the network. Once the host specifies its requirement, the resources needed are
reserved along the path, before the actual packet follows.

4. Resource Reservation

Some of the approaches that have been developed to provide a range of qualities of service. These can be
divided into two broad categories:
• fine-grained approaches, which provide QoS to individual applications or flows
• coarse-grained approaches, which provide QoS to large classes of data or aggregated traffic

Integrated Services and the associated RSVP belong to the first category; Differentiated Services lies in the second category.

INTEGRATED SERVICES

Integrated Services, sometimes called IntServ, is a flow-based QoS model, which means that a user needs
to create a flow, a kind of virtual circuit, from the source to the destination and inform all routers of the
resource requirement.

Integrated Services is a flow-based QoS model designed for IP.

Signaling

IP is a connectionless, datagram, packet-switching protocol. How can we implement a flow-based model


over a connectionless protocol? The solution is a signaling protocol to run over IP that provides the
signaling mechanism for making a reservation. This protocol is called resource ReSerVation Protocol
(RSVP)

Flow Specification

When a source makes a reservation, it needs to define a flow specification. A flow specification has two
parts:
Rspec (resource specification) and
Tspec (traffic specification).
Rspec defines the resource that the flow needs to reserve (buffer, bandwidth, etc.).
Tspec defines the traffic characterization of the flow.

Service Classes

Two classes of services have been defined for Integrated Services:


• guaranteed service and
• controlled-load service.

Guaranteed Service Class

This type of service is designed for real-time traffic that needs a guaranteed minimum end-to-end delay.
The end-to-end delay is the sum of the delays in the routers, the propagation delay in the media, and the
setup mechanism. This type of service guarantees that the packets will arrive within a certain delivery
time and are not discarded if flow traffic stays within the boundary of Tspec. The guaranteed services are
quantitative services.

Controlled-Load Service Class

This type of service is designed for applications that can accept some delays, but are sensitive to an
overloaded network and to the danger of losing packets. Good examples of these types of applications are
file transfer, e-mail, and Internet access. The controlled-load service is a qualitative type of service.

RSVP

In the Integrated Services model, an application program needs resource reservation. In IntServ model,
the resource reservation is for a flow.
This means that if we want to use IntServ at the IP level, we need to create a flow, a kind of virtual-circuit
network, out of the IP, which was originally designed as a datagram packet-switched network. A virtual-
circuit network needs a signaling system to set up the virtual circuit before data traffic can start. The
resource reservation protocol (RSVP) is a signaling protocol to help IP create a flow and consequently
make a resource reservation.

RSVP Messages

RSVP has two types of messages: Path and Resv.

Path Messages

In Receiver Based Reservation, the receivers, not the sender, make the reservation. However, the
receivers do not know the path traveled by packets before the reservation is made. The path is needed for
the reservation. To solve the problem, RSVP uses Path messages. A Path message travels from the sender
and reaches all receivers in the multicast path. On the way, a Path message stores the necessary
information for the receivers. A Path message is sent in a multicast environment; a new message is
created when the path diverges. The following figure shows path messages.

Resv Messages

After a receiver has received a Path message, it sends a Resv message. The Resv message
travels toward the sender (upstream) and makes a resource reservation on the routers that support RSVP.
If a router does not support RSVP on the path, it routes the packet based on best-effort delivery
methods. The following figure shows the Resv messages.

Reservation Styles

When there is more than one flow, the router needs to make a reservation to accommodate
all of them. RSVP defines three types of reservation styles, as shown in the following figure.

Wild Card Filter Style
In this style, the router creates a single reservation for all senders. The reservation is based on the
largest request. This type of style is used when the flows from different senders do not occur at the
same time.

Fixed Filter Style
In this style, the router creates a distinct reservation for each flow. This means that if
there are n flows, n different reservations are made. This type of style is used when there is a high
probability that flows from different senders will occur at the same time.

Shared Explicit Style
In this style, the router creates a single reservation which can be shared by a set of
flows.

Soft State

The reservation information (state) stored in every node for a flow needs to be refreshed periodically. This
is referred to as a soft state. The default interval for refreshing is currently 30 s.

Problems with Integrated Services

There are at least two problems with Integrated Services that may prevent its full implementation in the
Internet: scalability and service-type limitation.

1. Scalability

The Integrated Services model requires that each router keep information for each flow. As the
Internet is growing every day, this is a serious problem.

2. Service-Type Limitation

The Integrated Services model provides only two types of services, guaranteed and control-load.
Those opposing this model argue that applications may need more than these two types of
services.

DIFFERENTIATED SERVICES

Differentiated Services (DS or Diffserv) was introduced by the IETF (Internet Engineering Task Force) to
handle the shortcomings of Integrated Services. Two fundamental changes were made:

1. The main processing was moved from the core of the network to the edge of the network. This solves
the scalability problem. The routers do not have to store information about flows. The applications, or
hosts, define the type of service they need each time they send a packet.

2. The per-flow service is changed to per-class service. The router routes the packet based on the class of
service defined in the packet, not the flow. This solves the service-type limitation problem. We can define
different types of classes based on the needs of applications.

Differentiated Services is a class-based QoS model designed for IP.

DS Field

In Diffserv, each packet contains a field called the DS field. The value of this field is set at the boundary of
the network by the host or the first router designated as the boundary router.
The DS field contains two subfields: DSCP and CU.

• The DSCP (Differentiated Services Code Point) is a 6-bit subfield that defines the per-hop
behavior (PHB).
• The 2-bit CU (currently unused) subfield is not currently used.
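In IPv4 the DS field occupies the old type-of-service byte, so the two subfields can be separated with a shift and a mask (a small sketch):

```python
def parse_ds_field(ds_byte):
    """Split the 8-bit DS field into its 6-bit DSCP and 2-bit CU subfields."""
    dscp = (ds_byte >> 2) & 0x3F   # upper 6 bits: per-hop behavior code point
    cu = ds_byte & 0x03            # lower 2 bits: currently unused
    return dscp, cu
```

For example, the DS byte 0xB8 carries DSCP 46, the code point assigned to expedited forwarding (EF).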

Per-Hop Behavior

The Diffserv model defines per-hop behaviors (PHBs) for each node that receives a packet. There are two
PHBs are defined:
a. EF PHB
b. AF PHB.

EF: The EF (expedited forwarding) provides the following services


• Low loss
• Low latency
• Ensured bandwidth:

AF: The AF (assured forwarding) delivers the packet with a high assurance as long as the class traffic does
not exceed the traffic profile of the node.

Unit V

Email (SMTP, MIME, IMAP, POP3) – HTTP – DNS- SNMP – Telnet – FTP – Security – PGP - SSH

E-MAIL

Stands for Electronic Mail


It is a network application for transfer the electronic message from one user to another user.
For the mail transfer, it uses e-mail address, for example, [email protected]
E-mail uses SMTP protocol for mail transfer
MIME is a protocol that defines the format of the e-mail messages being exchanged

First, users interact with a mail reader when they compose, file, search, and read their email.
There are number of mail readers available, just like there are Web browsers.
Second, there is a mail daemon (or process) running on each host. This process plays the role of a post
office.
Mail readers give the daemon messages they want to send to other users, the daemon uses SMTP running
over TCP to transmit the message to a daemon running on another machine, and the daemon puts
incoming messages into the user’s mailbox at the receiver machine from where that user’s mail reader
can later find it.
The mail traverses one or more mail gateways on its route from the sender’s host to the receiver’s host.
The intermediate nodes are called “gateways”, their job is to store and forward email messages.
The reason for storing the message by gateway is that the recipient’s machine may not always be up, in
which case the mail gateway holds the message until it can be delivered.

SMTP
MIME
POP and IMAP
IMAP State Transition Diagram
IMAP is similar to SMTP in many ways. It is a client/server protocol running over TCP, where the client
issues commands and the mail server responds. The exchange begins with the client authenticating him-
or herself, and identifying the mailbox he or she wants to access. This can be represented by the simple
state transition diagram shown in below figure.
In this diagram, LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, and LOGOUT are example commands
that the client can issue, while OK is one possible server response.
Other common commands include FETCH, STORE, DELETE, and EXPUNGE, with the obvious meanings.
Additional server responses include NO and BAD.

HTTP

HTTP stands for Hypertext Transfer Protocol.


This is a protocol used for communication between web browsers and web servers.
It is a TCP/IP-based communication protocol which is used to deliver all files and other data, collectively
called resources, on the World Wide Web. These resources could be HTML files, image files, query results,
or anything else.
A browser works as an HTTP client because it sends requests to an HTTP server, which is called a Web
server. The Web server then sends responses back to the browser.
The standard and default port for HTTP servers to listen on is 80.
HTTP uses URL to search the requested resources.
• URL
– Uniform Resource Locator
• URL consists several parts:
• Protocol name: It is a protocol and it can be either http, ftp, or news
• host name (name.domain name)
• port (usually 80)
• directory path to the resource
• resource name
• for example, http://xxx.myplace.com/www/index.html

HTTP Messages
1. Request message
2. Response Message
HTTP uses the client-server model: An HTTP client opens a connection and sends a request message to an
HTTP server; the server then returns a response message, usually containing the resource that was
requested. After delivering the response, the server closes the connection.

HTTP Request message


A request message consists of a request line, headers and body as shown below

A request line has three parts, separated by spaces:


• An HTTP Method Name
• The local path of the requested resource.
• The version of HTTP being used.
For an example, GET /path/to/file/index.html HTTP/1.0
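The three space-separated parts of the request line, and the colon-separated header pairs described below, can be pulled apart with a couple of lines (an illustrative sketch, not a full HTTP parser):

```python
def parse_request_line(line):
    """Split an HTTP request line into method, path, and version."""
    method, path, version = line.split(" ")
    return method, path, version

def parse_header(line):
    """Headers are 'Name: value' pairs; the name is separated
    from the value by a single colon."""
    name, _, value = line.partition(":")
    return name.strip(), value.strip()
```

For example, "GET /path/to/file/index.html HTTP/1.0" yields the method GET, the local path of the resource, and the HTTP version in use.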

HTTP Methods
HEAD: Used for retrieving meta-information written in response headers
GET: Used for requesting a specified resource.
POST: Used for submitting data to be processed
PUT: Used for uploading the specified resource.
DELETE: Used for deleting the specified resource.

HTTP Headers
Headers are name/value pairs that appear in both request and response messages. The name of the
header is separated from the value by a single colon. For example, this line in a request message:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
provides a header called User-Agent whose value is Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1).
The purpose of this particular header is to supply the web server with information about the type of
browser making the request.

The Message Body


An HTTP message may have a body of data: in a request this is data being sent to the server, and in a
response it is the requested resource being returned to the client. Two headers describe the body:
• The Content-Type: header gives the MIME-type of the data in the body, such as text/html or
image/gif.
• The Content-Length: header gives the number of bytes in the body.
HTTP Response message

A response message consists of a status line, headers and body as shown below
The status line also has three parts separated by spaces:
• The version of HTTP being used.
• A response status code that gives the result of the request.
• An English reason phrase describing the status code.
For an example, HTTP/1.0 200 OK
Or
HTTP/1.0 404 Not Found

HTTP Status Codes and Errors

An HTTP request can fail because of a network error or because of problems encountered while the
request is executing on the web server.
HTTP status codes are returned by web servers to describe if and how a request was processed. The codes
are grouped by the first digit:

1xx - Informational
Any code starting with '1' is an intermediate response and indicates that the server has received
the request but has not finished processing it.
2xx – Successful
These codes are used when a request has been successfully processed.
3xx – Redirection
Codes starting with a '3' indicate that the request was processed, but the browser should get the
resource from another location.
4xx - Client Error
The server returns these codes when there is a problem with the client's request.
5xx - Server Error
A status code starting with the digit 5 indicates that an error occurred on the server while
processing the request.

Some of the HTTP error messages are:

Message: Description:

100 Continue Only a part of the request has been received by the server, but as long as it has not
been rejected, the client should continue with the request

200 OK The request is OK

201 Created The request is complete, and a new resource is created

202 Accepted The request is accepted for processing, but the processing is not complete

302 Found The requested page has moved temporarily to a new url

303 See Other The requested page can be found under a different url

403 Forbidden Access is forbidden to the requested page

404 Not Found The server cannot find the requested page
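The grouping of status codes by their first digit can be expressed as a small helper (an illustrative sketch):

```python
def status_class(code):
    """Map an HTTP status code to its group, determined by the first digit."""
    groups = {1: "Informational", 2: "Successful", 3: "Redirection",
              4: "Client Error", 5: "Server Error"}
    return groups.get(code // 100, "Unknown")
```

For example, 200 is a successful response, 302 a redirection, and 404 a client error.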
SNMP

Simple Network Management Protocol, running over UDP, is used to configure remote devices, monitor
network performance, audit network usage, and detect network faults or inappropriate access.
The SNMP is composed of three major elements
1. Managers are responsible for communicating with network devices that implement SNMP Agents
2. Agents reside in devices such as workstations, switches, routers, microwave radios, printers, and
provide information to Managers.
3. MIBs (Management Information Base) describe data objects to be managed by an Agent within a
device.
SNMP is based on the manager/agent model consisting of an SNMP manager, an SNMP agent, a database
of management information, managed SNMP devices and the network protocol.
The SNMP manager provides the interface between the human network manager and the management
system.
The SNMP agent provides the interface between the manager and the physical devices
The SNMP manager and agent use an SNMP Management Information Base (MIB) and a relatively small
set of commands to exchange information.

The SNMP MIB


A Management Information Base (MIB) describes a set of managed objects.
Each managed object in a MIB has a unique identifier called an object identifier (OID). The identifier
includes the object's type, the object's access level (such as read or read/write), size restrictions, and
range information.
OIDs are very structured, and follow a hierarchical tree pattern – much like a folder structure on your
computer.
The below image shows how OID tree looks like.

All SNMP objects are numbered. So, the top level after the root is ISO, and has the number “1”. The next
level, ORG, has the number “3”, since it is the 3rd object under ISO, and so on. OIDs are always written
in a numerical form, instead of a text form. So the top three object levels are written as 1.3.6 rather than
iso\org\dod.
The MIB is extensible, which means that hardware and software manufacturers can add new objects to
the MIB.
These new MIB definitions must be added both to the network element and to the network management
system.

The image below shows the packet formats.

Protocol Data Units (PDU)


Information is passed in the form of packets, known as protocol data units (PDUs). The packet size and
definition depends on the protocol suite involved in the communications.
The SNMP PDUs are five commands or messages. They are used between the Agent and the Manager to
pass information and make requests. The five SNMP messages are:
1. GetRequest - issued by the Manager to the Agent to request information about a particular
object; fetches a value from a specific variable.
2. GetNextRequest - issued by the Manager to the Agent to request information about the next
object in the MIB; fetches a value without knowing the exact name.
3. GetResponse - issued by the Agent to the Manager in response to a Get command; replies to a
fetch operation; the agent returns the requested information to the Manager with this command.
4. SetRequest - sets a variable in the MIB at the Agent.
5. Traps - issued by an Agent to the Manager to report a significant network event. Traps include:
 enterprise - value of the agent's sysObjectID.
 agent-addr - value of the agent's NetworkAddress.
 specific-trap - identifies the enterpriseSpecific trap.
 time-stamp - value of the agent's sysUpTime MIB object.
 variable-bindings - list of variables containing information about the trap.
 vendor-specific - traps that are added by the device vendor.

Applications
Here are some typical uses of SNMP:
• Monitoring device performance
• Detecting device faults, or recovery from faults
• Collecting long term performance data
• Remote configuration of devices
• Remote device control

TELNET

TELNET is an abbreviation for TErminaL NETwork.


TELNET enables the establishment of a connection (log-in) to a remote system in such a way that the
local terminal appears to be a terminal at the remote system.
TELNET is a general-purpose client/server application program.

Logging

To access the system, the user logs into the system with a user id or log-in name. The system checks the
password to prevent an unauthorized user from accessing the resources. The following figure shows the
logging process.

Local login
When a user logs into a local timesharing system, it is called local log-in. As a user types at a terminal or
at a workstation running a terminal emulator, the keystrokes are accepted by the terminal driver. The
terminal driver passes the characters to the operating system. The operating system, in turn, interprets
the combination of characters and invokes the desired application program or utility.

Remote login

When a user wants to access an application program or utility located on a remote machine, he/she
performs remote log-in. Here the TELNET client and server programs come into use. The user sends the
keystrokes to the terminal driver, where the local operating system accepts the characters but does not
interpret them. The characters are sent to the TELNET client, which transforms the characters to a
universal character set called network virtual terminal (NVT) characters and delivers them to the local
TCP/IP protocol stack.
The commands or text, in NVT form, travel through the Internet and arrive at the TCP/IP stack at the
remote machine. Here the characters are delivered to the operating system and passed to the TELNET
server, which changes the characters to the corresponding characters understandable by the remote
computer. However, the characters cannot be passed directly to the operating system because the remote
operating system is not designed to receive characters from a TELNET server: It is designed to receive
characters from a terminal driver. The solution is to add a piece of software called a pseudoterminal driver
which pretends that the characters are coming from a terminal. The operating system then passes the
characters to the appropriate application program.

Network Virtual Terminal

In a heterogeneous network system, if we want to access any remote computer in the world, we must first
know what type of computer we will be connected to, and we must also install the specific terminal
emulator used by that computer. TELNET solves this problem by defining a universal interface called the
network virtual terminal (NVT) character set. Via this interface, the client TELNET translates characters
(data or commands) that come from the local terminal into NVT form and delivers them to the network.
The server TELNET, on the other hand, translates data and commands from NVT form into the form
acceptable by the remote computer. The following figure explains this concept.

The following table lists some of the NVT character sets


FTP

Stands for File Transfer Protocol


TFTP

Stands for Trivial File Transfer Protocol


The Trivial File Transfer Protocol (TFTP) allows a local host to obtain files from a remote host but does not
provide reliability or security. It uses the fundamental packet delivery services offered by UDP.

Ftp vs. Tftp


 FTP is a complete, session-oriented, general purpose file transfer protocol. TFTP is used as a
special purpose file transfer protocol.
 FTP can be used interactively. TFTP allows only unidirectional transfer of files.
 FTP depends on TCP, is connection oriented, and provides reliable control. TFTP depends on UDP,
requires less overhead, and provides virtually no control.
 FTP provides user authentication. TFTP does not.

Security

Need of security
When systems are connected through the network, attacks are possible during transmission time.
Cryptography
It is the science of writing secret code using mathematical techniques. The many schemes used for
enciphering constitute the area of study known as cryptography. The sender applies an encryption
function to the original plaintext message, the resulting ciphertext message is sent over the network, and
the receiver applies a reverse function (called decryption) to recover the original plaintext.
Encryption: The process of converting from plaintext to cipher text.
Decryption: The process of converting from cipher text to plain text.

Cryptographic Algorithms
There are three types of cryptographic algorithms:
1. secret key algorithms,
2. public key algorithms,
3. Hashing(Message Digest) algorithms.

The roles of public and private key


The two keys used for public-key encryption are referred to as the public key and the private key.
Invariably, the private key is kept secret and the public key is known publicly. Usually the public key is
used for encryption and the private key is used for decryption.

Symmetric/Secret/Private key
Secret key algorithms are symmetric in the sense that both participants in the communication share a
single key. The below figure illustrates the use of secret key encryption to transmit data over a network.
Examples: DES (Data Encryption Standard), IDEA (International Data Encryption Algorithm)

Asymmetric/Public Key
Public key cryptography involves each participant having a private key that is shared with no one else and
a public key that is published so everyone knows it. To send a secure message to this participant, you
encrypt the message using the widely known public key. The participant then decrypts the message using
his or her private key. This scenario is depicted in below figure
Examples: RSA

The third type of cryptographic algorithm is called a hash or message digest function. Unlike the preceding
two types of algorithms, cryptographic hash functions typically don't involve the use of keys; instead they
compute a cryptographic checksum over a message. A cryptographic checksum protects the receiver from
malicious changes to the message, because all cryptographic hash algorithms are carefully selected to be
one-way functions.
Examples: The most widely used cryptographic checksum algorithms are Message Digest version 5 (MD5)
and SHA.

One way function


A one-way function is one that maps a domain into a range such that every function value has a unique
inverse, with the condition that computing the function is easy whereas computing the inverse is
infeasible.
Hash function
Hash function accept a variable size message M as input and produces a fixed size hash code H(M) called
as message digest as output.

Secret Key Encryption: DES


The Data Encryption Standard (DES) was developed in the 1970s by the National Bureau of Standards
(NBS) with the help of the National Security Agency (NSA).

Its purpose is to provide a standard method for protecting sensitive commercial and unclassified data.
DES has three distinct phases:
1 The 64 bits in the block are permuted (shuffled).
2 Sixteen rounds of an identical operation are applied to the resulting data and the key.
3 The inverse of the original permutation is applied to the result.
Initial Permutation
The permutation shuffles the bits.
Final Permutation
The final permutation is the inverse of the initial permutation
Details of Each Round

 The 64-bit block being enciphered is broken into two halves.


 The left half and the right half go through one DES round, and the result becomes the new right
half.
 The old right half becomes the new left half, and will go through one round in the next
round.
 This goes on for 16 rounds, but after the last round the left and right halves are not swapped, so
that the result of the 16th round becomes the final right half, and the result of the 15th round
(which became the left half of the 16th round) is the final left half.
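The left/right swapping structure described above (the Feistel structure) can be sketched with a placeholder round function; the real DES round function, permutations, and 48-bit subkeys are omitted:

```python
def feistel_encrypt(left, right, round_keys, f):
    """Generic Feistel structure as in DES: in each round the new right
    half is (old left) XOR f(old right, key), and the old right half
    becomes the new left half; the halves are not swapped after the
    final round."""
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return right, left   # undo the final swap

def feistel_decrypt(left, right, round_keys, f):
    """Decryption runs the same structure with the round keys reversed."""
    return feistel_encrypt(left, right, list(reversed(round_keys)), f)
```

A useful property of this structure is that f need not be invertible: decryption with the reversed key schedule recovers the plaintext for any round function.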

Public Key Encryption (RSA) algorithm

1. Generate two large random primes, p and q,


2. Compute n = p*q
3. Compute φ(n) = (p-1)(q-1).
4. Choose an integer e, where gcd(e, φ(n)) = 1, and 1 < e < φ(n)
5. Compute d, where e*d =1 (mod φ(n)) and 1 < d < φ(n)
6. The public key is KU={n, e}
7. The private key is KR={n, d}

Encryption
Plain Text m

Cipher Text c = m^e mod n

Decryption

Cipher text c

Plain text m = c^d mod n
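The key-generation steps and the two formulas can be checked with a toy example using tiny primes (p = 61, q = 53 — illustrative only; real keys use primes hundreds of digits long):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 61, 53
n = p * q                    # step 2: n = 3233
phi = (p - 1) * (q - 1)      # step 3: phi(n) = 3120
e = 17                       # step 4: gcd(e, phi) = 1, 1 < e < phi
d = egcd(e, phi)[1] % phi    # step 5: e*d = 1 (mod phi), giving d = 2753

m = 65                       # plaintext block, must satisfy m < n
c = pow(m, e, n)             # encryption: c = m^e mod n
assert pow(c, d, n) == m     # decryption: m = c^d mod n
```

The public key is {n, e} = {3233, 17} and the private key is {n, d} = {3233, 2753}.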

Message digest
Message digest functions, also called hash functions, are used to produce digital summaries of information
called message digests. Message digests are commonly 128 bits to 160 bits in length and provide a digital
identifier for each digital file or document.
Message digest functions are mathematical functions that process information to produce a different
message digest for each unique document
The following figure shows the basic message digest process.

MD5
Stands for Message digest 5
Message digests are commonly used in conjunction with public key technology to create digital signatures
or "digital thumbprints" that are used for authentication, and integrity. Message digests also are
commonly used to provide data integrity for electronic files and documents. Two of the most commonly
used message digest algorithms today are MD5, and SHA-1.
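A message digest can be computed with Python's standard hashlib module; for example, the MD5 digest of the message "abc", which is one of the test vectors published with the MD5 specification:

```python
import hashlib

# MD5 produces a 128-bit digest, rendered here as 32 hex characters.
digest = hashlib.md5(b"abc").hexdigest()
print(digest)   # 900150983cd24fb0d6963f7d28e17f72
```

Changing even one bit of the input produces a completely different digest, which is what makes the digest usable as a "digital thumbprint".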
The following five steps are performed to compute the message digest of the message.

Step 1. Append Padding Bits

Step 2. Append Length

Step 3. Initialize MD Buffer

Step 4. Process Message in 16-Word Blocks

Step 5. Output

Step 1. Append Padding Bits


The message is "padded" (extended) so that its length (in bits) is congruent to 448, modulo 512. That is,
the message is extended so that it is just 64 bits shy of being a multiple of 512 bits long. Padding is
always performed, even if the length of the message is already congruent to 448, modulo 512.
Step 2. Append Length
A 64-bit representation of the length of the message (before the padding bits were added) is
appended to the result of the previous step. In the unlikely event that the length of the message is
greater than 2^64 bits, only the low-order 64 bits of the length are used. At this point the resulting
message (after padding and appending the length) has a length that is an exact multiple of 512 bits.
Step 3. Initialize MD Buffer
A four-word buffer (A, B, C, D) is used to compute the message digest. Each of A, B, C, D is a 32-bit
register. These registers are initialized to the following values (in hexadecimal, low-order bytes first):
word A: 01 23 45 67
word B: 89 ab cd ef
word C: fe dc ba 98
word D: 76 54 32 10
Step 4. Process Message in 16-Word Blocks
We first define four auxiliary functions that each take as input three 32-bit words and produce as output
one 32-bit word.
In the first phase, transformation performed as shown below:

where d0, d1, d2, d3 are the four 32-bit words of the digest buffer, and m0, m1, ..., m15 are the sixteen
32-bit words of the current message block. The function F(a, b, c) is a combination of bitwise operations
(AND, OR, NOT) on its arguments. The Ti values are constants, and the rotation operator circularly shifts
its operand left by n bits.
In the second phase,
• F is replaced by a slightly different function G.
• The constants T1 through T16 are replaced by another set (T17 through T32).
In the third phase,
• G is replaced by yet another function H, which is just the XOR function.
• Another set of constants (T33 through T48) are used.
In the fourth phase,
• H is replaced by the function I, which is a combination of bitwise XOR, OR, and NOT
• Another set of constants (T49 through T64) are used.
The algorithm now proceeds to digest the next 16-word (512-bit) block of the message until there is no
more to be digested; the output of the last stage is the message digest.
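Python's standard hashlib library implements all five steps internally; the sketch below shows the resulting 128-bit digest and checks the padding arithmetic of Steps 1 and 2 by hand (the sample message is an assumption for illustration):

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

# hashlib performs padding, length append, and all four round phases
# internally; the result is a 128-bit digest, shown as 32 hex characters.
digest = hashlib.md5(msg).hexdigest()
print(digest)   # 9e107d9d372bb6826bd81d3542a419d6

# Steps 1-2 by hand: pad to 448 mod 512 bits, then append a 64-bit length.
bit_len = len(msg) * 8
pad_bits = (448 - (bit_len + 1)) % 512 + 1   # at least one padding bit is always added
total = bit_len + pad_bits + 64
assert total % 512 == 0                      # exact multiple of 512 bits
```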

PGP

Stands for Pretty Good Privacy


Pretty Good Privacy is a popular approach to providing encryption and authentication capabilities for
E-mail.
Suppose user A wants to send a message to user B and prove to B that it truly came from A. PGP follows
the sequence of steps shown in below figure.
First, A creates a cryptographic checksum over the message body using MD5 and then encrypts the
checksum using A’s private key. Then it is transmitted to B.
On receipt of the message, B computes the MD5 checksum of the received message body. The received
encrypted checksum is decrypted using A's public key, and the two checksums are compared. If they
agree, B knows that A sent the message and that it was not modified after A signed it.
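The sign-and-verify flow can be sketched as follows. This is a toy illustration, not real PGP: the tiny RSA modulus (n = 3233, from the earlier textbook example) and the reduction of the MD5 checksum modulo n are assumptions made so the example stays self-contained; real PGP signs the full digest with a large key.

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy RSA key pair (illustrative only)

def sign(message: bytes) -> int:
    # A: MD5 checksum of the body, "encrypted" with A's private key d.
    h = int.from_bytes(hashlib.md5(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # B: recompute the checksum, decrypt the signature with A's public key e,
    # and compare the two checksums.
    h = int.from_bytes(hashlib.md5(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"meet at noon"
sig = sign(msg)
assert verify(msg, sig)    # checksums agree: message came from A, unmodified
```

Any change to the message body after signing alters the recomputed checksum, so verification fails.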

SSH

Stands for Secure SHell.


Secure Shell is a program to log into another computer over a network, to execute commands in a remote
machine, and to move files from one machine to another. It provides strong authentication and secure
communications over insecure channels. It is intended as a replacement for telnet, rlogin, rsh, and rcp.
There are two versions of Secure Shell available: SSH1 and SSH2.
SSH2 consists of three major components:
• SSH-TRANS: a transport layer protocol
• SSH-AUTH: an authentication protocol
• SSH-CONN: a connection protocol
The Transport Layer Protocol provides server authentication, confidentiality, and integrity. It may
optionally also provide compression. The transport layer will typically be run over a TCP/IP connection, but
might also be used on top of any other reliable data stream.
The User Authentication Protocol authenticates the client-side user to the server. It runs over the
transport layer protocol.
The Connection Protocol multiplexes the encrypted tunnel into several logical channels. It runs over the
user authentication protocol.
Secure Shell uses the following ciphers for encryption:
Cipher SSH1 SSH2
DES yes no
3DES yes yes
IDEA yes no
Blowfish yes yes
Secure Shell uses the following ciphers for authentication:
Cipher SSH1 SSH2
RSA yes no
DSA no yes
Secure Shell protects against:
• IP spoofing, where a remote host sends out packets which pretend to come from another, trusted
host.
• IP source routing, where a host can pretend that an IP packet comes from another, trusted host.
• DNS spoofing, where an attacker forges name server records
• Interception of clear text passwords and other data by intermediate hosts
• Manipulation of data by people in control of intermediate hosts
The client sends a service request once a secure transport layer connection has been established. A
second service request is sent after user authentication is complete. Once the connection is established, it
provides a channel that can be used for a wide range of purposes.

To run these applications over a secure SSH connection, a technique called port forwarding is used. The idea is
illustrated in the above figure where we see a client on host A indirectly communicating with a server on
host B by forwarding its traffic through an SSH connection. The mechanism is called port forwarding
because when messages arrive at the well-known SSH port on the server, SSH first decrypts the contents,
and then “forwards” the data to the actual port at which the server is listening.
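As a concrete illustration, local port forwarding is requested with the -L option of the OpenSSH client; the hostnames and port numbers below are placeholders:

```shell
# On host A: forward local port 8080 through the SSH connection to
# port 80 on host B (hostnames and ports are placeholders).
ssh -L 8080:localhost:80 user@hostB

# A client on host A now connects to localhost:8080; SSH encrypts the
# traffic, and the sshd on host B forwards it to port 80.
```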

Additional (Students are advised to study this portion also)

Unit – I

Channel access method

In telecommunications and computer networks, a channel access method or multiple access method
allows several terminals connected to the same physical medium to transmit over it and to share its
capacity.

A multiple access method is based on a multiplex method that allows several data streams or signals to
share the same communication channel or physical media.

List of channel access methods

1) Channelization methods: The following are common channelization channel access methods:

– Frequency division multiple access (FDMA)


– Time-division multiple access (TDMA)
– Code division multiple access (CDMA)
– Direct-sequence spread spectrum (DSSS)
– Frequency-hopping spread spectrum (FHSS)

2) Packet mode methods: The following are examples of packet mode channel access methods:
– Carrier sense multiple access (CSMA)
– Carrier sense multiple access with collision detection (CSMA/CD)
– Carrier sense multiple access with collision avoidance (CSMA/CA)
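When CSMA/CD detects a collision, stations typically retry after a random delay that grows with each successive collision (binary exponential backoff). A minimal sketch, assuming Ethernet-style slot counting:

```python
import random

def backoff_slots(collisions: int) -> int:
    # After the n-th successive collision, wait a random number of slot
    # times chosen uniformly from 0 .. 2^min(n, 10) - 1.
    k = min(collisions, 10)
    return random.randrange(2 ** k)

for n in range(1, 5):
    slots = backoff_slots(n)
    assert 0 <= slots < 2 ** n
    print(f"collision {n}: wait {slots} slot times")
```

Doubling the backoff window on each collision spreads contending stations out in time, so repeated collisions become progressively less likely.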

Multiple Access techniques

Multiple Access techniques specify the way signals from different sources are to be combined efficiently for
transmission over a given radio frequency band and then separated at the destination without mutual
interference.

There are three basic multiple access techniques in use in cellular systems:

1) TDMA: Divide radio spectrum into time slots


2) FDMA: Frequency has been divided into sub channels
3) CDMA: Different user separated by different spread code

TDMA

The time division multiple access (TDMA) channel access scheme is based on the time division multiplex
(TDM) scheme

It allows several users to share the same frequency channel by dividing the signal into different time slots.

Time Division Multiple Access (TDMA) is used by several cellular communication systems.

It specifies how signals from different sources can be combined efficiently for transmission over a given
radio frequency band and then separated at the destination without mutual interference. Multiple access
techniques enable many users to share the available spectrum in an efficient way.

FDMA

The frequency division multiple access (FDMA) channel-access scheme is based on the frequency-division
multiplex (FDM) scheme

FDMA is a channel access method that is used by radio systems to share a certain radio spectrum between
multiple users.

FDMA gives users an individual allocation of one or several frequency bands or channels.
The users are individually allocated one or several frequency bands, allowing them to access the radio
system without interfering with each other

CDMA:

The code division multiple access (CDMA) scheme is based on spread spectrum, in which all the users
are permitted to transmit simultaneously, operate at the same nominal frequency, and use the entire
system's spectrum.

Because all the users can transmit simultaneously throughout the entire system frequency spectrum, a
private code must be assigned to each user, so that each transmission can be identified. This privacy is
achieved by the use of spreading codes, or pseudo-noise (PN) codes.

The information from an individual user is modulated by means of the unique PN code assigned to each
user. All the PN-modulated signals from different users are then transmitted over the entire CDMA
frequency channel.

At the receiving end, the desired signal is recovered by despreading the signal with a copy of the PN code
for the individual user. All the other signals (belonging to other users), whose PN-codes do not match that
of the desired signal, are not despread and as a result are perceived as noise by the receiver.
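The spreading and despreading described above can be sketched with two orthogonal +1/-1 chip sequences. The 4-chip codes below are illustrative assumptions (real systems use much longer PN codes):

```python
code_a = [+1, +1, -1, -1]          # user A's spreading code
code_b = [+1, -1, +1, -1]          # user B's code, orthogonal to A's

def spread(bit, code):
    # Multiply the data bit (+1 or -1) by each chip of the code.
    return [bit * c for c in code]

def despread(signal, code):
    # Correlate the received signal with the code; the sign of the sum
    # recovers the bit, while non-matching codes average out as noise.
    s = sum(x * c for x, c in zip(signal, code))
    return 1 if s > 0 else -1

# Both users transmit at once; the shared medium simply adds the chips.
channel = [a + b for a, b in zip(spread(+1, code_a), spread(-1, code_b))]

assert despread(channel, code_a) == +1   # A's bit recovered
assert despread(channel, code_b) == -1   # B's bit recovered
```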

Hybrid channel access scheme application examples

• The GSM cellular system combines the use of FDMA and TDMA
• GPRS packet switched service use FDMA
• Wireless LANs combine FDMA (separate channels) with CSMA/CA within a channel
• HIPERLAN/2 wireless networks combine FDMA with dynamic TDMA
• Bluetooth packet mode communication combines frequency hopping with CSMA/CA
• Most second generation cellular systems are based on TDMA.
• 3G cellular systems are primarily based upon CDMA

Transmission Media - Guided

• There are 2 basic categories of Transmission Media:


o Guided
o Unguided

Guided Transmission Media: uses a "cabling" system that guides the data signals along a specific path.

Unguided: The medium transmits the waves but does not guide

Twisted Pair Cable

• The popularity of twisted pair can be attributed to the fact that it is lighter, more flexible, and
easier to install than coaxial or fiber optic cable.
• It is also cheaper and can achieve greater speeds than its coaxial competition.
• It is an ideal solution for most network environments.
• The two main types of twisted-pair cabling are:
  o Unshielded Twisted Pair (UTP): more commonplace than STP and used for most networks.
  o Shielded Twisted Pair (STP): used in environments in which greater resistance to EMI and
attenuation is required. The greater resistance comes at a price: the extra protection
increases the distances that data signals can travel over STP but also increases the cost of
the cabling.
• UTP: one or more pairs of twisted copper wires, insulated and contained in a plastic cover.
Uses the RJ-45 telephone connector.
• STP: same as UTP but with an aluminium/polyester shield. Connectors are more awkward to
work with.

• The twisted nature of the wiring reduces crosstalk.


• Crosstalk is any interference from a physically adjacent channel that corrupts the signal and
causes transmission errors.
• UTP categories:
  o Categories 1 and 2 (CAT 1 and CAT 2): voice grade; low data rates up to 4 Mbps.
  o Category 3 (CAT 3): suitable for most LANs; up to 16 Mbps.
  o Category 4: up to 20 Mbps.
  o Category 5: supports Fast Ethernet (100 Mbps); more twists per foot and more stringent
standards on connectors.
  o Category 5e: up to 1000 Mbps.
  o Category 6: up to 1000 Mbps and beyond.
• Data grade UTP cable usually consists of either 4 or 8 wires (two or four pairs).

Coaxial Cable

• Commonly referred to as coax.
• Coax found success both in TV signal transmission and in network implementations.
• Constructed with a copper core at the centre that carries the signal, plastic insulation, braided
metal shielding, and an outer plastic covering.
• It is constructed this way to avoid:
  o Attenuation: the loss of signal strength as the signal travels over distance.
  o Crosstalk: the degradation of a signal caused by signals from other cables running close to it.
  o EMI: electromagnetic interference.
• Networks can use two types of coaxial cabling: thin coaxial and thick coaxial.
• Thin coax is only 0.25 inches in diameter, making it fairly easy to install.
• A disadvantage of all thin coax types is that they are prone to cable breaks.

• Sizes of coax:
  o RG-8, RG-11: 50 ohm, Thick Ethernet.
  o RG-58: 50 ohm, Thin Ethernet.
  o RG-59: 75 ohm, cable TV.
Fiber Optic

• Addresses the shortcomings associated with copper-based media.
• Uses light transmissions instead of electronic pulses.
• Advantages:
  o Threats such as EMI, crosstalk, and attenuation become a nonissue.
  o Well suited for the transfer of data, video, and voice transmissions.
  o It is the most secure of all cable media.
• Disadvantages:
  o Difficult installation and maintenance procedures, which often require skilled technicians
with specialized tools.
  o The cost of a fiber-based solution limits the number of organizations that can afford to
implement it.
  o Incompatible with most electronic network equipment; you have to purchase
fiber-compatible network hardware.
• A fiber-optic cable is composed of a core glass fiber surrounded by cladding; an insulated
covering then surrounds both of these within an outer protective cover.
  o Core: innermost section.
  o Cladding: surrounding the core.
  o Jacket: outermost layer, surrounding one or a bundle of cladded fibers.
• Two types of fiber-optic cable are available: single-mode and multimode.
  o In multimode fiber, many beams of light travel through the cable, bouncing off the cable
walls; this weakens the signal, reducing the length and speed at which the data signal can
travel.
  o Single-mode fiber uses a single direct beam of light, allowing greater distances and
increased transfer speeds.
• Common types of fiber-optic cable include the following:
  o 62.5 micron core / 125 micron cladding, multimode.
  o 50 micron core / 125 micron cladding, multimode.
  o 8.3 micron core / 125 micron cladding, single mode.
• The main advantage of optical fiber is the great bandwidth it can carry.

Transmission Media - Unguided

• Provides a means for transmitting electromagnetic signals through the air but does not guide them.
• Also referred to as wireless transmission.
• Wireless communications use specific frequency bands, which separate the ranges.
• Main types: radio waves, microwaves, Bluetooth, and infrared.
• Transmission and reception are achieved by means of antennas.
  o For transmission, an antenna radiates electromagnetic energy into the air.
  o For reception, the antenna picks up electromagnetic waves from the surrounding medium.
  o The antenna plays a key role: its characteristics and the frequency band at which it
operates determine the behaviour of the link.

Unit II

Multicasting
A message can be unicast, multicast, or broadcast. Let us clarify these terms as they
relate to the Internet.
In unicasting, the router forwards the received packet through only one of its interfaces.

In multicasting, the router may forward the received packet through several of its interfaces.

Applications

Multicasting has many applications today such as access to distributed databases, information
broadcasting of News, teleconferencing, and distance learning.

Multicast Routing

When a router receives a multicast packet, the situation is different from when it receives a unicast
packet. A multicast packet may have destinations in more than one network. Forwarding of a single
packet to members of a group requires a shortest path tree. If we have n groups, we may need n shortest
path trees. Two approaches have been used to solve the problem:
• Source-based trees
• group-shared trees

In the source-based tree approach, each router needs to have one shortest path tree for each group.
In the group-shared tree approach, only the core router, which has a shortest path tree for each group, is
involved in multicasting.

Multicast Routing Protocols

a. Link state multicast


b. Distance Vector Multicast
c. Protocol independent Multicast
Unit V

Authentication Protocols

1. Three way handshake protocol

The client and server authenticate each other using a simple three-way handshake
protocol. The following figure illustrates this.

- x, y are random numbers


- CHK is client handshake key
- SHK is server handshake key
- E(x, CHK) refers to encryption of random number x using the client handshake key
- E(y, SHK) refers to encryption of random number y using the server handshake key
- SK is the session key

1. The client first selects a random number x and encrypts it using its secret key, which we denote
as CHK (client handshake key). The client then sends E(x, CHK), along with an identifier
(ClientId), for itself to the server.
2. The server decrypts the random number using CHK, adds 1 to it, and sends the result back to the
client encrypted with SHK (the server handshake key). It also sends back a random number y
that has been encrypted with SHK.
3. The client checks that the first value decrypts to x plus 1, then decrypts the random number y
the server sent, encrypts y plus 1 using CHK, and sends the result to the server.

After the third message, each side has authenticated itself to the other.
The fourth message in figure corresponds to the server sending the client a session key (SK), encrypted
using SHK. The advantage of using a session key is making it harder for an attacker to gather data.
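The message flow can be simulated end to end. This is a toy sketch: XOR with an integer key stands in for real symmetric encryption, and the key values are made-up assumptions.

```python
import random

# CHK and SHK are assumed shared secrets known to both client and server.
CHK, SHK = 0x1F2E3D4C, 0x5A6B7C8D

E = D = lambda value, key: value ^ key       # toy cipher: E and D coincide for XOR

# 1. Client -> Server: E(x, CHK) plus ClientId
x = random.getrandbits(32)
msg1 = E(x, CHK)

# 2. Server decrypts x, replies with E(x+1, SHK) and E(y, SHK)
y = random.getrandbits(32)
msg2 = (E(D(msg1, CHK) + 1, SHK), E(y, SHK))

# 3. Client checks x+1 (server proved it knows the keys), replies E(y+1, CHK)
assert D(msg2[0], SHK) == x + 1
msg3 = E(D(msg2[1], SHK) + 1, CHK)

# Server checks y+1: the client is authenticated too.
assert D(msg3, CHK) == y + 1
```

After these three messages each side has shown it holds the shared keys, which is exactly the mutual-authentication property the handshake provides.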
2. Kerberos

Kerberos is an authentication service developed as a part of Project Athena at MIT. In Greek mythology,
Kerberos is the three-headed watchdog that guards the entrance to the underworld.
Kerberos provides a centralized authentication server whose function is to provide authentication.
Suppose two participants want to communicate and know nothing about each other, but both trust a third
party. This third party is sometimes called an authentication server, and it uses a protocol called Kerberos
to help the two participants authenticate each other. The following figure illustrates this.

A, B -> Identifier of two participants


S->trusted authentication server
KA, KB-> secret keys of A and B
T-> Timestamp
L->Lifetime of the session
K->Session key

1. A first sends a message to server S that it wants to communicate with B


2. The server then generates a timestamp T, a lifetime L, and a new session key K. Server S then
replies to A with a two-part message.
a. The first part encrypts the three values T, L, and K, along with the identifier for
participant B, using the key that the server shares with A (KA)
b. The second part encrypts the three values T, L, and K, along with participant A’s
identifier, but this time using the key that the server shares with B (KB)
3. A receives this message and decrypts the first part but not the second part. A simply passes this
second part on to B, along with the encryption of A and T using the new session key K.
4. Finally, B decrypts the part of the message forwarded from A, recovering T, L, K, and A's
identifier. It uses K to decrypt the half of the message encrypted by A and, upon seeing that the
values of A and T match, adds one to T and sends it back to A encrypted with the session key K.
5. Now A and B can now communicate with each other using the shared secret session key K to
ensure privacy.
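The exchange can be traced in the same toy style (XOR stands in for encryption; KA, KB, and the field layout are illustrative assumptions):

```python
import random, time

# KA and KB are the secret keys A and B share with the trusted server S;
# tuples stand in for concatenated message fields.
KA, KB = 0x11111111, 0x22222222
E = D = lambda value, key: value ^ key       # toy cipher

# 1. A asks S to talk to B (no encryption needed for this request).
# 2. S generates timestamp T, lifetime L, session key K, and builds the
#    two-part reply; B's part is the "ticket" that A forwards unread.
T, L, K = int(time.time()), 3600, random.getrandbits(32)
part_for_a = (E(T, KA), E(L, KA), E(K, KA))
ticket_for_b = (E(T, KB), E(L, KB), E(K, KB))

# 3. A decrypts its part with KA and forwards the ticket plus E(T, K).
K_at_a = D(part_for_a[2], KA)
authenticator = E(T, K_at_a)

# 4. B opens the ticket with KB and checks the authenticator against it.
K_at_b = D(ticket_for_b[2], KB)
assert D(authenticator, K_at_b) == D(ticket_for_b[0], KB)   # timestamps match

# 5. A and B now share the session key K for private communication.
assert K_at_a == K_at_b == K
```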

3. Public Key Distribution (X.509)

X.509 defines framework for authentication services to users.


X.509 defines authentication protocols based on public key certificates.

A Digital certificate

Digital certificates are the equivalent of a driver's license or any other form of identity. The only difference
is that a digital certificate is used in conjunction with a public key encryption system. Digital certificates
are electronic files that simply work as an online passport.

The most common use of a digital certificate is to verify that a user sending a message is who he or she
claims to be, and to provide the receiver with the means to encode a reply.
An individual wishing to send an encrypted message applies for a digital certificate from a Certificate
Authority (CA). The CA issues an encrypted digital certificate containing the applicant's public key and a
variety of other identification information. The CA makes its own public key readily available through print
publicity or perhaps on the Internet.

The recipient of an encrypted message uses the CA's public key to decode the digital certificate attached
to the message, verifies it as issued by the CA and then obtains the sender's public key and identification
information held within the certificate. With this information, the recipient can send an encrypted reply.

The most widely used standard for digital certificates is X.509.

B Digital Signature

A digital signature (not to be confused with a digital certificate) is an electronic signature that can be used
to authenticate the identity of the sender of a message or the signer of a document, and possibly to
ensure that the original content of the message or document that has been sent is unchanged.

To verify the contents of digitally signed data, the recipient generates a new message digest from the data
that was received, decrypts the original message digest with the originator's public key, and compares the
decrypted digest with the newly generated digest. If the two digests match, the integrity of the message
is verified.
