COMPUTER NETWORKS NOTES

UNIT-3

DATA LINK LAYER


 Specific responsibilities of the data link layer include framing, addressing, flow control,
error control, and media access control.

3.1 DESIGN ISSUES:

 Frame synchronization: Data are sent in blocks called frames. The beginning and end of
each frame must be recognizable.
 Flow control: The sending station must not send frames at a rate faster than the receiving
station can absorb them.
 Error control: Bit errors introduced by the transmission system should be corrected.
 Addressing: On a shared link, such as a local area network (LAN), the identity of the
two stations involved in a transmission must be specified.
 Access Control: It is usually not desirable to have a physically separate communications
path for control information. Accordingly, the receiver must be able to distinguish control
information from the data being transmitted.
 Link management: The initiation, maintenance, and termination of a sustained data
exchange require a fair amount of coordination and cooperation among stations.
Procedures for the management of this exchange are required.

3.2 FRAMING

 The data link layer divides the stream of bits received from the network layer into
manageable data units called frames.
 When a frame arrives at the destination, the checksum is recomputed.

 If the newly-computed checksum is different from the one contained in the frame,
the data link layer knows that an error has occurred and takes steps to deal with it
(e.g., discarding the bad frame and possibly also sending back an error report).

 Framing can be done in two ways:


1. Fixed Size Framing
2. Variable Size Framing
 Fixed Size Framing:
 In fixed-size framing, there is no need for defining the boundaries of the frames;
the size itself can be used as a delimiter.

Ex: the ATM cell, which is 53 bytes long.

 Variable-Size Framing:
 In variable-size framing, we need a way to define the end of the frame and the
beginning of the next.
 Two approaches are used for this purpose:
1. Character-Oriented Approach and
2. Bit-Oriented Approach.
 Character-Oriented Framing Protocols:
o In a character-oriented protocol, data to be carried are 8-bit characters
from a coding system such as ASCII.
o The header, which normally carries the source and destination addresses
and other control information, and the trailer, which carries error detection
or error correction redundant bits, are also multiples of 8 bits.
o To separate one frame from the next, an 8-bit (1-byte) flag is added at the
beginning and the end of a frame.
o The flag, composed of protocol-dependent special characters, signals the
start or end of a frame. Figure 11.1 shows the format of a frame in a
character-oriented protocol.

Figure: A frame in a character-oriented protocol


o Character-oriented framing was popular when only text was exchanged by
the data link layers.
o The flag could be selected to be any character not used for text
communication.
o Character Count:
 This method specifies the number of characters present in a particular frame. The count is carried in a special field in the frame header.
 When the data link layer at the destination sees the character count, it knows how many characters follow and hence where the end of the frame is. (A small sketch of this method follows the example below.)
Ex:

 Problem: The trouble with this algorithm is that the count can be
garbled by a transmission error.
Ex:

If the character count of 5 in the second frame becomes a 7, the destination will get out of synchronization and will be unable to locate the start of the next frame.
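Below is a minimal Python sketch of the character-count method. The one-byte count field that includes itself, and the function names, are purely illustrative assumptions, not part of any standard.

```python
def frame_with_count(messages):
    """Build a byte stream where each frame starts with a one-byte count.

    For this sketch the count covers the count byte itself plus the payload,
    so payloads are limited to 254 bytes.
    """
    stream = bytearray()
    for payload in messages:
        stream.append(len(payload) + 1)          # count field
        stream.extend(payload)
    return bytes(stream)

def parse_frames(stream):
    """Recover payloads; a corrupted count desynchronizes everything after it."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(bytes(stream[i + 1:i + count]))
        i += count                               # jump to the start of the next frame
    return frames

stream = frame_with_count([b"HELLO", b"WORLD"])
print(parse_frames(stream))                      # [b'HELLO', b'WORLD']
```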
o Starting and Ending Characters with Character Stuffing:
 In this method, a frame starts and ends with special characters that mark the beginning and end of the frame.
 Each frame begins with the ASCII character sequence DLE STX (Data Link Escape, Start of Text) and ends with the ASCII character sequence DLE ETX (Data Link Escape, End of Text).
Ex:

 Character Stuffing: Character stuffing uses the special start/end characters for framing while still allowing those characters to appear in the message.
 The sender stuffs an extra special character whenever the start or end character occurs naturally in the data, so that within the message the special character always occurs in pairs.
 The receiver recognizes a single special character as the start/end marker and, for the pairs it receives within the message, removes the first special character of each pair.

Figure: Character Stuffing
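A small Python sketch of character stuffing using the DLE/STX/ETX codes mentioned above; the "double every DLE" convention and the helper names are illustrative assumptions for this sketch.

```python
DLE, STX, ETX = 0x10, 0x02, 0x03                 # ASCII control codes used above

def stuff(payload: bytes) -> bytes:
    """Frame as DLE STX ... DLE ETX, doubling any DLE that occurs in the data."""
    body = payload.replace(bytes([DLE]), bytes([DLE, DLE]))
    return bytes([DLE, STX]) + body + bytes([DLE, ETX])

def unstuff(frame: bytes) -> bytes:
    """Strip the start/end sequences and collapse doubled DLEs back to one."""
    body = frame[2:-2]                           # drop DLE STX and DLE ETX
    return body.replace(bytes([DLE, DLE]), bytes([DLE]))

data = bytes([0x41, DLE, 0x42])                  # 'A', DLE, 'B'
assert unstuff(stuff(data)) == data
```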


 Bit Oriented Framing Protocols: A protocol in which the data frame is
interpreted as a sequence of bits.
o Starting and Ending Flags with Bit Stuffing:
 In this method, each frame begins and ends with a special bit pattern, 01111110, called the flag.
 Bit Stuffing: Whenever the sender's data link layer encounters
five consecutive 1s in the data, it automatically stuffs a zero bit
into the outgoing bit stream. This technique is called bit stuffing.
When the receiver sees five consecutive 1s in the incoming data
stream, followed by a zero bit, it automatically destuffs the 0 bit.
The boundary between two frames can be determined by locating
the flag pattern.
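A minimal sketch of bit stuffing and destuffing, operating on bit strings for readability; the function names are illustrative.

```python
FLAG = "01111110"

def bit_stuff(payload_bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, then add the flags."""
    out, run = [], 0
    for b in payload_bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")                      # stuffed bit
            run = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    """Drop the flags, then remove the 0 that follows every five consecutive 1s."""
    bits = frame[len(FLAG):-len(FLAG)]
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                               # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

payload = "0110111111111100"
assert bit_unstuff(bit_stuff(payload)) == payload
```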

3.3 ERROR DETECTION AND CORRECTION

 The bit stream transmitted by the physical layer is not guaranteed to be error free.
 The data link layer is responsible for error detection and correction.
 The most common error control method is to compute and append some form of checksum to each outgoing frame at the sender's data link layer, and to recompute the checksum and verify it against the received checksum at the receiver's side. If the two match, the frame was received correctly; otherwise it is erroneous.
 Error control is both Error Detection and Error Correction.

 Types of Errors:
 Single-Bit error:
o The term single-bit error means that only 1 bit of a given data unit (such
as a byte, character, or packet) is changed from 1 to 0 or from 0 to 1.

 Multiple-Bit error:
o The term multiple-bit error means that more than one nonconsecutive bit in the data unit has changed.

 Burst Error:
o The term burst error means that 2 or more consecutive bits in the data unit have changed from 1 to 0 or from 0 to 1.

 Detection Versus Correction


 Error detection looks only to see whether any error has occurred. The answer is a simple yes or no; for an error-detecting method, a single-bit error is treated the same as a burst error.
 Error correction requires knowing the exact number of corrupted bits and, more importantly, their location in the message. The number of errors and the size of the message are important factors. If we need to correct one single error in an 8-bit data unit, we need to consider eight possible error locations; if we need to correct two errors in a data unit of the same size, we need to consider 28 possibilities. Error control also allows the receiver to inform the sender of any frames lost or damaged in transmission and coordinates the retransmission of those frames by the sender.
 Forward Error Correction Versus Retransmission
 There are two main methods of error correction.
o Forward error correction is the process in which the receiver tries to
guess the message by using redundant bits.
o Correction by retransmission is a technique in which the receiver detects
the occurrence of an error and asks the sender to resend the message.
Resending is repeated until a message arrives that the receiver believes is
error-free (usually, not all errors can be detected).

 ERROR DETECTION
 The Hamming distance between two words (of the same size) is the number of positions in which the corresponding bits differ. It can easily be found by applying the XOR operation to the two words and counting the number of 1s in the result; in other words, the Hamming distance is the number of bit changes needed to turn one word into the other.
 Redundancy: The central concept in detecting or correcting errors is redundancy. To be able to detect or correct errors, we need to send some extra bits with our data. These redundant bits are added by the sender and removed by the receiver.

Figure: The Concept of Redundancy
o Redundancy is achieved through various coding schemes. The sender adds
redundant bits through a process that creates a relationship between the
redundant bits and the actual data bits.
o The receiver checks the relationships between the two sets of bits to detect
or correct the errors.
o The ratio of redundant bits to the data bits and the robustness of the
process are important factors in any coding scheme.
 Simple Parity Check:
o In this code, a k-bit dataword is changed to an n-bit codeword where n = k + 1.
o The extra bit, called the parity bit, is selected to make the total number of 1s in the codeword even (although some implementations specify an odd number of 1s).
o The minimum Hamming distance for this category is dmin = 2, which means that the code is a single-bit error-detecting code.
o At the sender side:
 The encoder uses a generator that takes a copy of a 4-bit dataword (a3, a2, a1, and a0) and generates a parity bit r0. The dataword bits and the parity bit together create the 5-bit codeword. The parity bit that is added makes the number of 1s in the codeword even.
 This is normally done by adding the 4 bits of the dataword
(modulo-2); the result is the parity bit. In other words,
 If the number of 1s is even, the result is 0; if the number of 1s is
odd, the result is 1.
 In both cases, the total number of 1s in the codeword is even.
 The sender sends the codeword which may be corrupted during
transmission
o At The receiver side:
 The receiver receives a 5-bit word. The checker at the receiver
does the same thing as the generator in the sender with one
exception: The addition is done over all 5 bits. The result, which is
called the syndrome, is just 1 bit. The syndrome is 0 when the
number of 1s in the received codeword is even; otherwise, it is 1.
 The syndrome is passed to the decision logic analyzer.
 If the syndrome is 0, there is no error in the received
codeword; the data portion of the received codeword is
accepted as the dataword.
 If the syndrome is 1, the data portion of the received
codeword is discarded. The dataword is not created.
o Simple-parity check code cannot correct errors

Figure: Encoding and Decoding process of simple parity check
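A small sketch of the even-parity encoder and checker described above, working on bit strings; the helper names are ours.

```python
def add_even_parity(dataword: str) -> str:
    """Append one parity bit so the codeword has an even number of 1s."""
    parity = dataword.count("1") % 2             # modulo-2 sum of the data bits
    return dataword + str(parity)

def syndrome(codeword: str) -> int:
    """0 when the number of 1s in the whole codeword is even, 1 otherwise."""
    return codeword.count("1") % 2

code = add_even_parity("1011")                   # -> "10111"
assert syndrome(code) == 0                       # accepted
assert syndrome("10101") == 1                    # a single flipped bit is detected
```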

 Two-dimensional Parity Check Method:
o A better approach is the two-dimensional parity check. In this method, the dataword is organized in a table (rows and columns).
o The data to be sent are arranged in separate rows (five rows in the figure below). For each row and each column, one parity-check bit is calculated.

Figure: Two dimensional Parity check method
o The whole table is then sent to the receiver, which finds the syndrome for
each row and each column.

Figure: Two dimensional parity check at sender and receiver


o The two-dimensional parity check can detect up to three errors that occur
anywhere in the table (arrows point to the locations of the created nonzero
syndromes). However, errors affecting 4 bits may not be detected.

Figure: Effect of 1 bit error

Figure: The effect of 2 bit errors

Figure: The Effect of 3 bit errors
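A sketch of building the two-dimensional parity table, assuming the data are supplied as equal-length rows of bits; the function name is ours.

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then add a column-parity row.

    `rows` is a list of equal-length bit strings; the augmented table is returned.
    """
    table = [r + str(r.count("1") % 2) for r in rows]            # row parities
    width = len(table[0])
    column_row = "".join(
        str(sum(int(r[c]) for r in table) % 2) for c in range(width)
    )
    return table + [column_row]

for row in two_d_parity(["1100111", "1011101", "0111001", "0101001"]):
    print(row)
```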


 CRC ( Cyclic Redundancy Check):
o One of the most common, and one of the most powerful, error-detecting
codes is the cyclic redundancy check (CRC), which can be described as
follows.
o Given a k-bit block of bits, or message, the transmitter generates an (n - k)-bit sequence, known as a frame check sequence (FCS), such that the resulting frame, consisting of n bits, is exactly divisible by some predetermined number. The receiver then divides the incoming frame by that number and, if there is no remainder, assumes there was no error.
o To clarify this, we present the procedure in three equivalent ways: modulo
2 arithmetic, polynomials, and digital logic.
o Modulo 2 Arithmetic: Modulo 2 arithmetic uses binary addition with no
carries, which is just the exclusive-OR (XOR) operation. Binary
subtraction with no carries is also interpreted as the XOR operation.
Ex:

Figure: Modulo 2 Arithmetic: Addition, Subtraction and Multiplication

Figure: Encoder and Decoder in CRC


o At the sender side:
 The encoder takes the dataword and augments it by appending n - k 0s. It then divides the augmented dataword by the divisor.
 The process of modulo-2 binary division is the same as the familiar division process we use for decimal numbers.
 As in decimal division, the process is done step by step. In each step, a copy of the divisor is XORed with the 4 bits of the dividend.

 The result of the XOR operation (remainder) is 3 bits (in this case), which is used for the next step after 1 extra bit is pulled down to make it 4 bits long.
 If the leftmost bit of the dividend (or of the part used in each step) is 0, the step cannot use the regular divisor; we need to use an all-0s divisor instead. When there are no bits left to pull down, we have a result. The 3-bit remainder forms the check bits (r2, r1, and r0), which are appended to the dataword to create the codeword.

Ex:
 Suppose we want to transmit the information string:
1111101.
 The receiver and sender decide to use the polynomial
pattern, 1101.
 The information string is shifted left by one position less
than the number of positions in the divisor.
 The remainder is found through modulo 2 division (at
right) and added to the information string: 1111101000 +
111 = 1111101111.

Figure: Modulo-2 division
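The worked example above can be reproduced with a short modulo-2 division routine; the helper name is ours, and bit strings are used for clarity.

```python
def mod2_div(bits: str, divisor: str) -> str:
    """Modulo-2 long division; returns the (len(divisor) - 1)-bit remainder."""
    bits = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == "1":                        # a leading 0 means "use the all-0s divisor"
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])

divisor = "1101"
dataword = "1111101"
rem = mod2_div(dataword + "000", divisor)         # augment with n - k = 3 zeros
codeword = dataword + rem                         # 1111101 + 111 = 1111101111
assert rem == "111"
assert mod2_div(codeword, divisor) == "000"       # receiver's syndrome: all 0s, no error
```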


Mapping polynomial to a Binary data word:

o At the receiver side:
 The codeword can change during transmission.
 The decoder does the same division process as the encoder.
 The remainder of the division is the syndrome. If the syndrome is
all 0s, there is no error; the dataword is separated from the received
codeword and accepted. Otherwise, everything is discarded.
 When no error has occurred, the syndrome is 000; if the codeword has been corrupted during transmission, the syndrome is not all 0s.

o CRC standard polynomials:

 Check Sum:
o Checksum is an error detection method.
o The checksum is used in the Internet by several protocols although not at
the data link layer.
o The receiver follows these steps:
 The unit is divided into k sections, each of n bits.
 All sections are added using one’s complement to get the sum
 The sum is complemented.
 If the result is zero, the data are accepted; otherwise they are rejected.
o The sender follows these steps:

 The unit is divided into k sections, each of n bits.
 All sections are added using one’s complement to get the sum.
 The sum is complemented and becomes the checksum.
 The checksum is sent with the data.

Figure: Checksum
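A sketch of the one's-complement checksum computation described above, assuming 16-bit sections; the example words are arbitrary and the helper names are ours.

```python
def ones_complement_sum(sections, n_bits=16):
    """Add n-bit sections with end-around carry (one's-complement addition)."""
    mask = (1 << n_bits) - 1
    total = 0
    for s in sections:
        total += s
        total = (total & mask) + (total >> n_bits)   # fold the carry back in
    return total

def make_checksum(sections, n_bits=16):
    """Complement of the one's-complement sum."""
    return ones_complement_sum(sections, n_bits) ^ ((1 << n_bits) - 1)

sections = [0x4500, 0x0073, 0x0000, 0x4000]          # arbitrary 16-bit example words
cksum = make_checksum(sections)
# The receiver sums all sections plus the checksum and complements the result;
# zero means the data are accepted.
assert make_checksum(sections + [cksum]) == 0
```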
 ERROR CORRECTION:
 Hamming Code:
o Hamming codes provide forward error correction using "block parity": instead of one parity bit, a block of parity bits is sent.
o They allow correction of single-bit errors.
o This is accomplished by using more than one parity bit, each computed over a different combination of bits in the data.
o In a Hamming code, the redundant bits are added to the original data at all positions 2^i (i = 0, 1, 2, ...), i.e., positions 1, 2, 4, 8, ..., such that 2^i < n, where n is the number of bits in the original data.
Ex:

o The Redundant bits are calculated as

Figure: Calculation of redundant bits.
Ex:
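The following is a small, illustrative sketch of single-error-correcting Hamming encoding and correction following the placement rule above; the function names and the data-bit ordering are one possible convention, not the only one.

```python
def hamming_encode(data_bits: str) -> str:
    """Put data in the non-power-of-two positions (1-based) and fill each parity
    position 2**i with even parity over the positions whose index has bit i set."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:                    # number of parity bits needed
        r += 1
    n = m + r
    code = [0] * (n + 1)                           # index 0 unused
    data = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                        # not a power of two -> data bit
            code[pos] = int(next(data))
    for i in range(r):
        p = 1 << i
        code[p] = sum(code[k] for k in range(1, n + 1) if k & p) % 2
    return "".join(map(str, code[1:]))

def hamming_correct(code_bits: str) -> str:
    """Recompute the parities; their pattern (the syndrome) is the 1-based
    position of a single flipped bit, or 0 when none is detected."""
    code = [0] + [int(b) for b in code_bits]
    n, syndrome, i = len(code_bits), 0, 0
    while (1 << i) <= n:
        p = 1 << i
        if sum(code[k] for k in range(1, n + 1) if k & p) % 2:
            syndrome += p
        i += 1
    if syndrome:
        code[syndrome] ^= 1                        # flip the erroneous bit back
    return "".join(map(str, code[1:]))

cw = hamming_encode("1011")                        # 7-bit codeword
corrupted = cw[:5] + str(1 - int(cw[5])) + cw[6:]  # flip the bit at position 6
assert hamming_correct(corrupted) == cw
```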

3.4 FLOW CONTROL

 Flow control coordinates the amount of data that can be sent before receiving an
acknowledgment and is one of the most important duties of the data link layer.

 Flow control is a set of procedures that tells the sender how much data it can transmit
before it must wait for an acknowledgment from the receiver.
 The receiving device must be able to inform the sending device before those limits are
reached and to request that the transmitting device send fewer frames or stop temporarily.
 Incoming data must be checked and processed before they can be used.
 The rate of such processing is often slower than the rate of transmission. For this reason,
each receiving device has a block of memory, called a buffer.
 The Buffer is reserved for storing incoming data until they are processed. If the buffer
begins to fill up, the receiver must be able to tell the sender to halt transmission until it is
once again able to receive.

Figure: Model of Frame Transmission


 Elementary Data Link Protocols:
 Stop-and-Wait Flow Control
o The simplest form of flow control is known as stop-and-wait flow control.
o A source entity transmits a frame. After the destination entity receives the frame, it indicates its willingness to accept another frame by sending back an acknowledgment for the frame just received.
o The source must wait until it receives the acknowledgment before sending the next frame.
o The destination can thus stop the flow of data simply by withholding acknowledgments.

Figure: Stop-and-Wait Protocol
o The sender keeps a copy of the last frame until it receives an acknowledgement. For identification, both data frames and acknowledgement (ACK) frames are numbered alternately 0 and 1.
o The sender has a control variable (S) that holds the number of the most recently sent frame (0 or 1).
o The receiver has a control variable (R) that holds the number of the next frame expected (0 or 1).
o The sender starts a timer when it sends a frame. If an ACK is not received within the allotted time period, the sender assumes that the frame was lost or damaged and resends it.
o The receiver sends a (positive) ACK only if the frame arrives intact.
o The ACK number always names the next expected frame. (A toy simulation of this exchange follows the figure below.)

Figure: Flow diagram of Stop-and-Wait Protocol
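The toy simulation below illustrates the alternating 0/1 numbering and timeout-driven retransmission described above; the loss model and the function name are purely illustrative assumptions.

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=1):
    """Toy simulation: the sender alternates sequence numbers 0/1 and resends a
    frame until the matching ACK arrives; an ACK names the next expected frame."""
    random.seed(seed)
    delivered, s, r = [], 0, 0                  # S = sender's seq, R = next expected seq
    for payload in frames:
        while True:
            if random.random() < loss_rate:     # frame lost: timer expires, resend
                continue
            if s == r:                          # receiver accepts only the expected frame
                delivered.append(payload)
                r ^= 1
            ack = r                             # ACK carries the next expected number
            if random.random() < loss_rate:     # ACK lost: timer expires, resend
                continue
            if ack != s:                        # frame acknowledged: move to the next one
                s ^= 1
                break
    return delivered

assert stop_and_wait([b"f0", b"f1", b"f2"]) == [b"f0", b"f1", b"f2"]
```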


o Limitations:
 The buffer size may be limited.
 Only one frame can be in transit at a time, which means low utilization of the available bandwidth.
 Piggy Backing:
o A method to combine a data frame with ACK.
o Station A and B both have data to send.
o Instead of sending separately, station A sends a data frame that includes
an ACK. Station B does the same thing.
o Piggybacking saves bandwidth.

 Sliding Window Protocols


o The stop and wait protocol suffers from a few drawbacks:
1. if the receiver had the capacity to accept more than one frame, its
resources are being underutilized.
2. if the receiver was busy and did not wish to receive any more
packets, it may delay the acknowledgement. However, the timer on
the sender's side may go off and cause an unnecessary
retransmission.
o These drawbacks can be overcome by the sliding window protocols.
o In sliding window protocols the sender's data link layer maintains a
'sending window' which consists of a set of sequence numbers
corresponding to the frames it is permitted to send.

Figure: The Sender's Window

o Similarly, the receiver maintains a 'receiving window' corresponding to
the set of frames it is permitted to accept.

Figure: Receiver's Window


o The window size is dependent on the retransmission policy and it may
differ in values for the receiver's and the sender's window.
o The sequence numbers within the sender's window represent the frames
sent but as yet not acknowledged.
o Whenever a new packet arrives from the network layer, the upper edge of
the window is advanced by one.
o When an acknowledgement arrives from the receiver the lower edge is
advanced by one.
o The receiver's window corresponds to the frames that the receiver's data
link layer may accept.
o When a frame with sequence number equal to the lower edge of the
window is received, it is passed to the network layer, an acknowledgement
is generated and the window is rotated by one.
o If however, a frame falling outside the window is received, the receiver's
data link layer has two options. It may either discard this frame and all
subsequent frames until the desired frame is received or it may accept
these frames and buffer them until the appropriate frame is received and
then pass the frames to the network layer in sequence.

Figure: Example of Sliding-Window Protocol

 Go-back-N ARQ protocol:


o If a frame is lost or received in error, the receiver may simply discard all
subsequent frames, sending no acknowledgments for the discarded frames.
o In this case the receive window is of size 1. Since no acknowledgements are
being received the sender's window will fill up, the sender will eventually time
out and retransmit all the unacknowledged frames in order starting from the
damaged or lost frame.
o The maximum window size for this protocol can be obtained as follows. Assume that the window size of the sender is w, so the window initially contains the frames with sequence numbers 0 to (w-1).
o Suppose the sender transmits all these frames and the receiver's data link layer receives all of them correctly, but the sender receives none of the acknowledgements because all of them are lost. The sender will then retransmit all the frames after its timer goes off, while the receiver window has already advanced to w. Hence, to avoid overlap, the sum of the two windows must not exceed the sequence number space; since the receive window is 1, the sender's window can be at most 2^m - 1 for m-bit sequence numbers.

Figure: Go-back-N ARQ protocol

Figure: Go-back-N with lost frame
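A toy sketch of the Go-Back-N sender's bookkeeping: at most `window` frames are outstanding, and a missing cumulative ACK causes retransmission from the oldest unacknowledged frame. The `ack_of` callback and the illustrative run are assumptions of this sketch, not part of any standard.

```python
def go_back_n_send(frames, window, ack_of):
    """Toy Go-Back-N sender: keep at most `window` unacknowledged frames in flight;
    on a (simulated) timeout, resend everything from the oldest unacknowledged frame.
    `ack_of(seq)` returns True when that frame's cumulative ACK arrives."""
    base, next_seq, sent_log = 0, 0, []
    while base < len(frames):
        while next_seq < base + window and next_seq < len(frames):
            sent_log.append(next_seq)            # (re)transmit
            next_seq += 1
        if ack_of(base):                         # cumulative ACK slides the window forward
            base += 1
        else:                                    # timeout: go back N
            next_seq = base
    return sent_log

# Illustrative run: every frame is acknowledged only on the second attempt.
attempts = {}
def flaky_ack(seq):
    attempts[seq] = attempts.get(seq, 0) + 1
    return attempts[seq] > 1

print(go_back_n_send(["f0", "f1", "f2"], window=2, ack_of=flaky_ack))
```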


 Selective Repeat ARQ:
o In this protocol rather than discard all the subsequent frames following a
damaged or lost frame, the receiver's data link layer simply stores them in
buffers.
o When the sender does not receive an acknowledgement for the first frame, its timer goes off after a certain time interval and it retransmits only the lost frame.
o Assuming error-free transmission this time, the receiver's data link layer will have a sequence of many correct frames which it can hand over to the network layer. Thus there is less retransmission overhead than in the Go-Back-N protocol.

o In the selective repeat protocol the window size may be calculated as follows. Assume that the size of both the sender's and the receiver's window is w, so initially both contain the sequence numbers 0 to (w-1).
o Suppose the sender's data link layer transmits all w frames, the receiver's data link layer receives them correctly and sends acknowledgements for each of them, but all the acknowledgements are lost, so the sender does not advance its window.
o The receiver window at this point contains the values w to (2w-1). To avoid overlap when the sender's data link layer retransmits, the sum of these two windows must not exceed the sequence number space. Hence we get the condition 2w ≤ 2^m, i.e., w ≤ 2^(m-1), for m-bit sequence numbers.

Figure: Selective-Repeat ARQ
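The two window-size conditions derived above can be summarized in a small helper; the function name is ours.

```python
def max_window(m_bits: int, protocol: str) -> int:
    """Largest safe sender window for an m-bit sequence number space (2**m values)."""
    space = 2 ** m_bits
    if protocol == "go-back-n":
        return space - 1           # receive window is 1, so w + 1 <= 2**m
    if protocol == "selective-repeat":
        return space // 2          # both windows are w, so w + w <= 2**m
    raise ValueError(protocol)

assert max_window(3, "go-back-n") == 7            # e.g. 3-bit sequence numbers
assert max_window(3, "selective-repeat") == 4
```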

3.5 DATA LINK LAYER IN INTERNET

 The Internet consists of individual machines (hosts and routers) and the communication
infrastructure that connects them. Within a single building, LANs are widely used for
interconnection, but most of the wide area infrastructure is built up from point-to-point
leased lines.
 In practice, point-to-point communication is primarily used in two situations:

1. Thousands of organizations have one or more LANs, each with some number of hosts (personal computers, user workstations, servers, and so on) along with a router (or a bridge, which is functionally similar). All connections to the outside world go through one or two routers that have point-to-point leased lines to distant routers.
2. Millions of individuals have home connections to the Internet using modems and dial-up telephone lines.

Figure: Internet
 For both the router-router leased line connection and the dial-up host-router connection,
some point-to-point data link protocol is required on the line for framing, error control,
and the other data link layer functions.
 The Protocols used here are SLIP and PPP.

 SLIP (Serial Line IP):

 SLIP is so simple it hardly deserves to be called a protocol. It is designed to transmit signals over a serial connection (which in most cases means a modem and a telephone line) and has very low control overhead, meaning that it doesn't add much information to the network layer data that it is transmitting.
 Compared to the 18 bytes that Ethernet adds to every packet, for example, SLIP
adds only 1 byte. Of course, with only 1 byte of overhead, SLIP can't provide
functions like error detection, network layer protocol identification, security, or
anything else.
 SLIP works by transmitting an IP datagram received from the network layer and following it with a single framing byte called the END delimiter (0xC0). If the END byte occurs inside the IP packet, a form of character stuffing is used and the two-byte escape sequence (0xDB, 0xDC) is sent in its place.
 In some implementations, SLIP sends a flag byte at both the start and the end of the IP packet.
 Recent versions of SLIP do some header compression.
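A sketch of SLIP framing and unframing using the RFC 1055 special bytes (END = 0xC0, ESC = 0xDB, ESC_END = 0xDC, ESC_ESC = 0xDD); the helper names are ours.

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD   # RFC 1055 special bytes

def slip_frame(datagram: bytes) -> bytes:
    """Escape any END/ESC bytes in the datagram, then append the END delimiter."""
    out = bytearray()
    for b in datagram:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_unframe(frame: bytes) -> bytes:
    """Reverse the escaping; assumes exactly one complete frame was received."""
    out, i = bytearray(), 0
    while i < len(frame) and frame[i] != END:
        if frame[i] == ESC:
            out.append(END if frame[i + 1] == ESC_END else ESC)
            i += 2
        else:
            out.append(frame[i])
            i += 1
    return bytes(out)

packet = bytes([0x45, END, 0x10, ESC, 0x99])
assert slip_unframe(slip_frame(packet)) == packet
```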
 SLIP has some serious problems:

o It does not perform any error detection or correction.


o SLIP supports only IP.
o Each side must know other's IP address in advance.
o SLIP does not provide any authentication.
o SLIP is not an approved Internet standard.
 After finishing the data transfer (this teardown sequence belongs to PPP, described next):
o NCP (Network Control Protocol) is used to tear down the network layer connection and free up the IP address.
o Then LCP (Link Control Protocol) is used to shut down the data link layer connection.
o Finally, the computer tells the modem to hang up the phone, releasing the physical layer connection.

 PPP (Point-to-Point Protocol):

 PPP handles error detection, supports multiple protocols, allows IP addresses to be negotiated at connection time, permits authentication, and has many other features.
 PPP provides three features:

o A framing method that unambiguously describes the end of one frame and the start of the next one. The frame format also handles error detection.
o A link control protocol for bringing lines up, testing them, negotiating
options, and bringing them down again gracefully when they are no longer
needed. This protocol is called LCP (Link Control Protocol). It supports
synchronous and asynchronous circuits and byte-oriented and bit-oriented
encodings.
o A way to negotiate network-layer options in a way that is independent of
the network layer protocol to be used. The method chosen is to have a
different NCP (Network Control Protocol) for each network layer
supported.

 Frame Format:

o The PPP frame format closely resembles the HDLC frame format. The major difference between PPP and HDLC is that PPP is character oriented rather than bit oriented.
o PPP uses byte stuffing on dial-up modem lines, so all frames are an integral number of bytes. It is not possible to send a frame consisting of 30.25 bytes, as it is with HDLC.
o PPP is a multiprotocol framing mechanism suitable for use over modems,
HDLC bit-serial lines, SONET, and other physical layers. It supports error
detection, option negotiation, header compression, and, optionally, reliable
transmission using an HDLC-type frame format.

Figure: PPP Frame Format

 Flag: PPP is a character-oriented version of HDLC. It uses the flag byte 0x7E (01111110) to mark the start and end of the frame.
Byte Stuffing: Any occurrence of the flag or the control-escape byte inside the frame is replaced with 0x7D followed by the original octet XORed with 0x20 (00100000).

Figure: Byte stuffing
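A sketch of the PPP byte-stuffing rule just described; real asynchronous PPP may also escape additional control characters, which this sketch omits, and the helper names are ours.

```python
FLAG, ESCAPE = 0x7E, 0x7D

def ppp_stuff(payload: bytes) -> bytes:
    """Replace any flag/escape byte with 0x7D followed by the byte XOR 0x20."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESCAPE):
            out += bytes([ESCAPE, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def ppp_unstuff(data: bytes) -> bytes:
    """Undo the stuffing by XORing the octet after each 0x7D with 0x20."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == ESCAPE:
            out.append(data[i + 1] ^ 0x20)        # restore the original octet
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

payload = bytes([0x7E, 0x41, 0x7D, 0x42])
assert ppp_unstuff(ppp_stuff(payload)) == payload
```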

 Address: The address field is always set to the binary value 11111111 to indicate that all stations are to accept the frame. Using this value avoids the issue of having to assign data link addresses.
 Control: The default value of control field is 00000011. This
value indicates an unnumbered frame. In other words, PPP does
not provide reliable transmission using sequence numbers and
acknowledgements as the default.
 Protocol: It tells the kind of packet in the payload field. PPP was
designed to support multiple network protocols simultaneously.

Ex: LCP, NCP, IP, OSI CLNP, IPX

 Payload: The payload field is of variable length, up to some negotiated maximum. If the length is not negotiated using LCP during line setup, a default length of 1500 bytes is used.
 Checksum: The checksum field is normally 2 bytes, but a 4-byte checksum can be negotiated. It is used for error detection.

 The PPP provides a method for encapsulating Internet protocol packets over
point-to-point links.
 PPP can be used as a data link control to connect two routers or can be used to
connect personal computer to an ISP using telephone line and modem.
 Phase Diagram for PPP:

1. PC calls router via modem


2. PC and router exchange LCP packets to negotiate PPP parameters
3. Check on identities
4. NCP packets exchanged to configure the network layer, e.g. TCP/IP ( requires IP
address assignment)
5. Data transport, e.g. send/receive IP packets
6. NCP used to tear down the network layer connection (free up IP address); LCP used to
shutdown data link layer connection
7. Modem hangs up.

Figure: Phase Diagram of PPP

 LCP: (Link Control Protocols):

o LCP negotiates data link protocol options during the ESTABLISH phase.
o It provides a way for the initiating process to make a proposal and for the
responding process to accept or reject it, in whole or in part. It also
provides a way for the two processes to test the line quality to see if they
consider it good enough to set up a connection. Finally, the LCP protocol
also allows lines to be taken down when they are no longer needed.
o Eleven types of LCP frames are defined in RFC 1661.

 The four Configure- types allow the initiator (I) to propose option
values and the responder (R) to accept or reject them.
 The Terminate- codes shut a line down when it is no longer
needed.
 The Code-reject and Protocol-reject codes indicate that the
responder got something that it does not understand.
 The Echo- types are used to test the line quality.
 Discard-request help debugging.
 The options that can be negotiated include setting the maximum
payload size for data frames, enabling authentication and choosing
a protocol to use, enabling line-quality monitoring during normal
operation, and selecting various header compression options.

Figure: LCP packet types

 SLIP vs PPP:
1. SLIP is the older protocol; PPP is the newer one.
2. SLIP can transport only TCP/IP traffic; PPP supports a multiprotocol transport mechanism.
3. With SLIP, each side must know the other side's IP address in advance; PPP operates on any type of full-duplex point-to-point link.
4. SLIP does not provide authorization; PPP provides and permits authorization.
5. With SLIP, connection configuration is done manually; with PPP it is done automatically.
6. SLIP does not provide error handling; PPP provides error detection.
7. SLIP uses asynchronous transmission only; PPP supports asynchronous as well as synchronous transmission.
8. SLIP is not an approved Internet standard; PPP is an official Internet standard protocol.

3.6 DATA LINK LAYER IN HIGH-LEVEL DATA LINK CONTROL (HDLC)

 The most important data link control protocol is HDLC (ISO 3309, ISO 4335).
 High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over
point-to-point and multipoint links.
 Basic Characteristics
 To satisfy a variety of applications, HDLC defines three types of stations, two
link configurations, and three data transfer modes of operation.
 The three station types are
o Primary station: Responsible for controlling the operation of the link.
Frames issued by the primary are called commands.

o Secondary station: Operates under the control of the primary station.
Frames issued by a secondary are called responses. The primary maintains
a separate logical link with each secondary station on the line.
o Combined station: Combines the features of primary and secondary. A
combined station may issue both commands and responses.
 The two link configurations are
o Unbalanced configuration: Consists of one primary and one or more
secondary stations and supports both full-duplex and half-duplex
transmission.
o Balanced configuration: Consists of two combined stations and supports
both full-duplex and half-duplex transmission.
 The three data transfer modes are
o Normal response mode (NRM): Used with an unbalanced configuration.
The primary may initiate data transfer to a secondary, but a secondary may
only transmit data in response to a command from the primary.
o NRM can be configured using
 Point-to-point links
 Multipoint links

Figure: Normal Response Mode (Point-to-point and Multipoint)

o Asynchronous balanced mode (ABM): Used with a balanced configuration. Either combined station may initiate transmission without receiving permission from the other combined station.

Figure: Asynchronous Balanced Mode

o Asynchronous response mode (ARM): Used with an unbalanced
configuration. The secondary may initiate transmission without explicit
permission of the primary. The primary still retains responsibility for the
line, including initialization, error recovery, and logical disconnection.
 NRM is used on multi-drop lines, in which a number of terminals are connected
to a host computer.
 NRM is also sometimes used on point-to-point links, particularly if the link
connects a terminal or other peripheral to a computer.
 ABM is the most widely used of the three modes; it makes more efficient use of a
full-duplex point-to-point link because there is no polling overhead.
 ARM is rarely used; it is applicable to some special situations in which a
secondary may need to initiate transmission.
 HDLC FRAME STRUCTURE
 HDLC uses synchronous transmission. All transmissions are in the form of
frames, and a single frame format suffices for all types of data and control
exchanges.

Figure: HDLC Frame format

 The above Figure depicts the structure of the HDLC frame. The flag, address,
and control fields that precede the information field are known as a header.
The FCS and flag fields following the data field are referred to as a trailer.
o Flag Fields:
 Flag fields delimit the frame at both ends with the unique pattern
01111110.
 A single flag may be used as the closing flag for one frame and the
opening flag for the next.
 On both sides of the user-network interface, receivers are continuously hunting for the flag sequence to synchronize on the start of a frame. While receiving a frame, a station continues to hunt for that sequence to determine the end of the frame. Because the protocol allows the presence of arbitrary bit patterns (i.e., the link protocol imposes no restrictions on the content of the various fields), the flag pattern could appear inside the frame and destroy frame-level synchronization.
 HDLC uses Bit Stuffing: If the pattern 01111110 appears somewhere inside the frame, a procedure known as bit stuffing is used. For all bits between the starting and ending flags, the transmitter inserts an extra 0 bit after each occurrence of five 1s in the frame.

Figure: Bit Stuffing and Unstuffing


o Address Field:
 The address field identifies the secondary station that transmitted
or is to receive the frame.
 The address field is usually 8 bits long but, by prior agreement, an
extended format may be used in which the actual address length is
a multiple of 7 bits.
 The leftmost bit of each octet is 1 or 0 according as it is or is not
the last octet of the address field.
 The remaining 7 bits of each octet form part of the address.
 The single-octet address of 11111111 is interpreted as the all-
stations address in both basic and extended formats.

 It is used to allow the primary to broadcast a frame for reception
by all secondaries.
o Control Field
 HDLC defines three types of frames, each with a different control
field format.
 Information frames (I-frames) carry the data to be transmitted for
the user. Additionally, flow and error control data, using the ARQ
mechanism, are piggybacked on an information frame.
 Supervisory frames (S-frames) provide the ARQ mechanism
when piggybacking is not used.
 Unnumbered frames (U-frames) provide supplemental link
control functions.
 The first one or two bits of the control field serve to identify the frame type.
 All of the control field formats contain the poll/final (P/F) bit; its use depends on context.
 In command frames it is the P (poll) bit, set to 1 to solicit a response frame from the peer HDLC entity.
 In response frames it is the F (final) bit, set to 1 to indicate the response frame transmitted as a result of a soliciting command.
 Note that the basic control field for S- and I-frames uses 3-bit
sequence numbers.
 An extended control field can be used for S- and I-frames that
employs 7-bit sequence numbers.
 U-frames always contain an 8-bit control field.
o Information Field: The information field is present only in I-frames and
some U-frames. The field can contain any sequence of bits but must
consist of an integral number of octets. The length of the information field
is variable up to some system defined maximum.
o Frame Check Sequence Field: The frame check sequence (FCS) is an
error detecting code calculated from the remaining bits of the frame,
exclusive of flags.
 The normal code is the 16-bit CRC-CCITT.
 An optional 32-bit FCS, using CRC-32, may be employed if the
frame length or the line reliability dictates this choice.

3.7 DATA LINK LAYER IN ATM

 In ATM network, Transmission Convergence (TC) Sub-layer act as Data Link Layer.
 When an application program produces a message to be sent, that message is given to the
lower level layers in the ATM.
 The message is added with headers, trailers and undergoing segmentation in to cells.
 The cells reach the TC sub layer for transmission.
 Each cell contains a 5-byte header consisting of 4 bytes of virtual-circuit and control information followed by a 1-byte checksum.

 The decision was made to checksum the header, but not the payload, to reduce the probability of cells being delivered incorrectly due to a header error while avoiding the cost of checksumming the much larger payload field. This 8-bit checksum field is called the HEC (Header Error Control).
 The HEC corrects all single-bit errors and detects many multibit errors.
 The TC sub-layer must also generate idle cells when no data cells are available, for synchronous transmission media such as SONET, as well as OAM (Operation And Maintenance) cells. In addition, OAM cells can be used to slow down the rate of actual data transfer.
 The TC sub-layer produces SONET frames when a SONET is being used (not a trivial
business since 53 byte packets don't fit integrally in a SONET frame).
 Cell delineation is performed by using the correlation between the header bits to be protected (32 bits) and the HEC control bits (8 bits) introduced into the header, where the HEC uses a shortened cyclic code with generating polynomial x^8 + x^2 + x + 1.
 There are no flags in ATM cells, so recognizing the beginning and end of a cell is a major pain. It is done by searching for a valid HEC byte:
 In the HUNT state, the receiver does a bit-by-bit check until a valid HEC is found.
 It then enters the PRE-SYNCH state, in which it checks cell by cell to make sure the HEC is valid. When a certain number of cells in a row (typically 5 to 10) have valid HECs, the receiver moves into the SYNCH state and begins normal operation.

Figure: Cell Delineation state diagram
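A sketch of the HEC computation over the first four header bytes with the generator polynomial given above; the ITU standard additionally XORs a fixed pattern into the result, which this sketch omits, and the sample header bytes are arbitrary.

```python
def atm_hec(header4: bytes) -> int:
    """CRC-8 of the first four header bytes with generator x^8 + x^2 + x + 1 (0x07).
    The cell delineation logic recomputes this over every candidate 5-byte window
    while hunting for cell boundaries."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

header = bytes([0x00, 0x00, 0x00, 0x28])          # arbitrary VPI/VCI/PT/CLP example
print(f"HEC byte (before the standard's final adjustment): {atm_hec(header):#04x}")
```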
