
Data-Link Layers

Note: The slides are adapted from the materials from Prof. Richard Han at CU Boulder and Profs. Jennifer Rexford and Mike Freedman at Princeton University,
and the networking book (Computer Networking: A Top Down Approach) from Kurose and Ross.
Broadcast Links: Shared Media


Digital adaptors Communicating

• Link layer implemented in adaptor (network interface card)


– Ethernet card, WiFi card or chip
• Sending side:
– Encapsulates datagram in a frame
– Adds error checking bits, reliable data transfer, flow control, etc.
• Receiving side
– Looks for errors, reliable data transfer, flow control, etc.
– Extracts datagram and passes to upper layer at receiving node
Link-Layer Services
• Encoding
– Representing the 0s and 1s
• Framing
– Encapsulating packet into frame, adding header, trailer
– Using MAC addresses, rather than IP addresses
• Error detection
– Errors caused by signal attenuation, noise.
– Receiver detecting presence of errors
• Error correction
– Receiver correcting errors without retransmission
• Flow control
– Pacing between adjacent sending and receiving nodes
Link-Layer Protocols
Outline
• Link-layer protocols
– Encoding, framing, error detection

• Multiple-access links: sharing is caring!


– Strict isolation: division over time or frequency
– Centralized management (e.g., token passing)
– Decentralized management (e.g., CSMA/CD)

Digital -> Analog Encoding
• Signals sent over physical links
– Source node: bits -> signal
– Receiving node: signal -> bits

• Encoding in telegraph
– Morse code: “long” and “short” signals
Digital -> Analog Encoding
• Signals sent over physical links
– Source node: bits -> signal
– Receiving node: signal -> bits
• Simplify some electrical engineering details
– Assume two discrete signals, high and low
– E.g., could correspond to two different voltages
• Simple approach: Non-return to zero
– High for a 1, low for a 0

Problem With NRZ
• Long strings of 0s or 1s introduce problems
– No transitions from low-to-high, or high-to-low

• Receiver keeps average of signal it has received


– Uses the average to distinguish between high and low
– Long flat strings make receiver sensitive to small change

• Transitions also necessary for clock recovery


– Receiver uses transitions to derive its own clock
– Long flat strings do not produce any transitions
– Can lead to clock drift at the receiver
• Alternatives (see Section 2.2)
– Non-return to zero inverted, and Manchester encoding
Protocols with clock-recovery
• Manchester encoding (basic Ethernet)
– clock XOR NRZ; in 802.3, H→L encodes 0 and L→H encodes 1; self-clocking

• Efficiency? (read 4B/5B encoding in Sec 2.2)


– Manchester: 2 clock transitions per bit: 50% efficient
– 1 GbE: 8b/10b: 80% efficient
– 10 GbE: 64b/66b: 96.9% efficient
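The 802.3 Manchester scheme above can be sketched in a few lines of Python (an illustrative model of the encoding, not NIC code): each bit becomes two half-bit signal levels, which is exactly why the encoding is only 50% efficient.

```python
def manchester_encode(bits):
    """802.3 Manchester: 0 -> high,low (H→L); 1 -> low,high (L→H)."""
    signal = []
    for b in bits:
        signal += [1, 0] if b == 0 else [0, 1]
    return signal

def manchester_decode(signal):
    """Decode pairs of half-bit levels back into bits."""
    bits = []
    for i in range(0, len(signal), 2):
        pair = (signal[i], signal[i + 1])
        bits.append(0 if pair == (1, 0) else 1)
    return bits
```

Note the guaranteed mid-bit transition in every pair: that is what lets the receiver recover the clock even from long runs of 0s or 1s.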
Different Encoding Strategies
Framing
• Break sequence of bits into a frame
– Typically implemented by the network adaptor
• Sentinel-based
– Delineate frame with special pattern (e.g., 01111110)

01111110 Frame contents 01111110

– Problem: what if the special pattern occurs within the frame?


– Solution: escaping the special characters (stuffing)
• E.g., sender always inserts a 0 after five 1s
• … and receiver always removes a 0 appearing after five 1s
• Byte stuffing (BiSync, PPP) and bit stuffing (HDLC)
– Similar to escaping special characters in C programs
Framing (Continued)
• Counter-based
– Include the payload length in the header
– … instead of putting a sentinel at the end
– Problem: what if the count field gets corrupted?
• Causes receiver to think the frame ends at a different place
– Solution: catch later when doing error detection
• And wait for the next sentinel for the start of a new frame

• Clock-based
– Make each frame a fixed size
– No ambiguity about start and end of frame
– But, may be wasteful
Character/Byte Stuffing Example
• Sentinel X, at both start and end of packet
• Stuff X in data: replace X with escape character “E” (DLE in textbook)
followed by X, i.e., X -> (E,X); the escape character itself is also escaped: E -> (E,E)
• PPP uses byte stuffing for PPP data payload
Data: AKLWXIKKEZLXKDKLEXYBE

Send: XAKLWEXIKKEEZLEXKDKLEEEXYBEEX

Receiver: AKLWXIKKEZLXKDKLEXYBE

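The stuffing rule above can be sketched as follows (illustrative Python using the slide's X/E characters, not a real PPP implementation, which uses the 0x7E flag and 0x7D escape bytes):

```python
def byte_stuff(data, sentinel="X", escape="E"):
    """Escape sentinel and escape chars in data, then add framing sentinels."""
    stuffed = []
    for ch in data:
        if ch in (sentinel, escape):
            stuffed.append(escape)      # prepend escape character
        stuffed.append(ch)
    return sentinel + "".join(stuffed) + sentinel

def byte_destuff(frame, sentinel="X", escape="E"):
    """Strip the framing sentinels, then undo the escaping."""
    body = frame[1:-1]
    out, i = [], 0
    while i < len(body):
        if body[i] == escape:           # drop escape, keep next char literally
            i += 1
        out.append(body[i])
        i += 1
    return "".join(out)
```

Running it on the slide's data reproduces the Send line exactly: `byte_stuff("AKLWXIKKEZLXKDKLEXYBE")` gives `"XAKLWEXIKKEEZLEXKDKLEEEXYBEEX"`.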
Bit Stuffing Example
• Similar to byte stuffing, except bit stuffing is not confined to byte
boundaries
• HDLC denotes beginning and end of a packet/frame with
“01111110” flag
• Since “01111110” may occur anywhere (across byte boundaries)
in data, then “stuff” it:
– At sender, after 5 consecutive 1s, insert a “0”
– At receiver, “0111110” -> stuffing, so destuff; “01111110” -> end of frame;
“01111111” -> error

Data: 0110111111100111110111111111100000

Send: 01111110 01101111101100111110011111011111000000 01111110

Receiver: 0110111111100111110111111111100000
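The sender and receiver rules above can be sketched as follows (illustrative Python operating on lists of bits; destuffing here assumes the 01111110 flags have already been stripped, so the bit after any five 1s must be a stuffed 0):

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s (HDLC rule)."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            out.append(0)   # stuffed bit
            ones = 0
    return out

def bit_destuff(bits):
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            i += 1          # skip the stuffed 0
            ones = 0
        i += 1
    return out
```

On the slide's data this yields the stuffed payload between the flags, and destuffing recovers the original bits.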
Error Detection
• Errors are unavoidable
– Electrical interference, thermal noise, etc.

• Error detection
– Transmit extra (redundant) information
– Use redundant information to detect errors
– Extreme case: send two copies of the data
– Trade-off: accuracy vs. overhead

Probability of Packet Error
• Send N bits. Probability of bit error = Pb
• Assume independent bit errors. What is the probability of packet error?
– Prob[packet error] = Prob[at least 1 bit is corrupt]
= 1 - Prob[every bit is clean] = 1 - (1-Pb)^N
– If Pb = 10^-6 and N = 10,000 bits, then
Prob[packet error] = 9.95 × 10^-3 ≈ 1%
• Optical links have much lower probabilities of bit error: 10^-12
• Wireless links have much higher probabilities of bit error: 10^-3
– Bit errors correlated, not independent
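The calculation above is a one-liner; a small helper (illustrative Python) reproduces the slide's number:

```python
def packet_error_prob(p_bit, n_bits):
    """P[packet error] = 1 - P[every bit clean] = 1 - (1 - p_bit)^n,
    assuming independent bit errors."""
    return 1.0 - (1.0 - p_bit) ** n_bits

p = packet_error_prob(1e-6, 10_000)   # the slide's example: ~0.00995, i.e. ~1%
```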
Error detection
EDC: error detection and correction bits (e.g., redundancy)
D: data protected by error checking, may include header fields

[Figure: sender appends EDC to the d data bits D; receiver recomputes over the received D' and EDC' from the bit-error-prone link and asks: are all bits in D' OK, or is an error detected?]

▪ Error detection is not 100% reliable! The protocol may miss some errors, but rarely
▪ A larger EDC field yields better detection and correction
Parity checking
▪ single-bit parity: detect single bit errors
▪ two-dimensional bit parity: detect and correct single bit errors
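Single-bit parity is easy to sketch (illustrative Python, even parity): the sender appends one bit so the total number of 1s is even, and the receiver flags an error if the count comes out odd.

```python
def even_parity_bit(bits):
    """Parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

def check_even_parity(bits_with_parity):
    """True if no error detected (an even number of 1s).
    Note: an even number of flipped bits goes undetected."""
    return sum(bits_with_parity) % 2 == 0
```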
Internet checksum (review)

Goal: detect errors (i.e., flipped bits) in transmitted segment

Sender:
▪ treat contents of UDP segment (including UDP header fields and IP addresses) as sequence of 16-bit integers
▪ checksum: addition (one's complement sum) of segment content
▪ checksum value put into UDP checksum field

Receiver:
▪ compute checksum of received segment
▪ check if computed checksum equals checksum field value:
• not equal - error detected
• equal - no error detected. But maybe errors nonetheless? More later ….
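The one's-complement sum above can be sketched as follows (illustrative Python; the test vector is the worked example from RFC 1071, whose checksum is 0x220D):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, complemented at the end."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap carries around
    return ~total & 0xFFFF
```

A handy property for the receiver: summing the data together with its checksum word gives 0xFFFF, so the checksum over data-plus-checksum is 0.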
Cyclic redundancy check
• more powerful error-detection coding
• view data bits, D, as a binary number
• choose r+1 bit pattern (generator), G
• goal: choose r CRC bits, R, such that
– <D,R> exactly divisible by G (modulo 2)
– receiver knows G, divides <D,R> by G. If non-zero remainder:
error detected!
– can detect all burst errors less than r+1 bits
• widely used in practice (Ethernet, 802.11 WiFi, ATM)

CRC example
want:
D·2^r XOR R = nG
equivalently:
D·2^r = nG XOR R
equivalently: if we divide D·2^r by G, we want the remainder R to satisfy:
R = remainder[ D·2^r / G ]
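Modulo-2 division can be sketched as follows (illustrative Python; D = 101110 with G = 1001, r = 3 is the Kurose/Ross worked example, which gives R = 011):

```python
def crc_remainder(data_bits: int, data_len: int, generator: int, r: int) -> int:
    """R = remainder of D * 2^r divided by G, using modulo-2 (XOR) division.
    generator has r+1 bits; data_len is the number of bits in D."""
    dividend = data_bits << r                    # D * 2^r
    for shift in range(data_len - 1, -1, -1):
        if dividend & (1 << (shift + r)):        # leading bit still set?
            dividend ^= generator << shift       # "subtract" G (XOR)
    return dividend                              # what remains is R

def crc_check(frame_bits: int, frame_len: int, generator: int, r: int) -> bool:
    """Receiver: divide the whole frame <D,R> by G; zero remainder = no error."""
    rem = frame_bits
    for shift in range(frame_len - r - 1, -1, -1):
        if rem & (1 << (shift + r)):
            rem ^= generator << shift
    return rem == 0
```

Usage: the sender computes `R = crc_remainder(0b101110, 6, 0b1001, 3)` and transmits `(D << 3) | R`; the receiver calls `crc_check` on the received frame.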


Error Correction
• Correct an error, rather than just detecting an error
• Simple technique: send 2 extra copies along with the data (repetition coding), then do majority-logic decoding at the receiver
– Original data: 0100
– Send: 0100 0100 0100
– One copy corrupted in transit: receiver gets 0100 0100 0110
– Majority vote per bit position decodes: 0100
• Problems
– Inefficient
– Cannot correct 2 errors in the same bit position
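Majority-logic decoding over the three received copies can be sketched as (illustrative Python):

```python
def repetition_decode(copies):
    """Majority vote per bit position across three received copies."""
    n = len(copies[0])
    return [1 if sum(c[i] for c in copies) >= 2 else 0 for i in range(n)]

# Slide's example: third copy has a single bit error, vote recovers 0100
received = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0]]
```

Two errors in the same bit position would outvote the clean copy, which is exactly the weakness noted above.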
Error Correction (Continued)
• Forward Error Correction (FEC)
– Many types, e.g., Reed-Solomon coding
– Add K bits of redundancy to N bits, to form an (N+K)-bit-long packet, or vector
– N dimensions -> N+K dimensions
– 2^N patterns or vectors mapped into 2^(N+K) possibilities
– Spread these vectors as far from their neighbors as possible in (N+K)-dimensional space
– Example with N=2, K=3 (N+K=5):
11 -> 11111
10 -> 01010
01 -> 10101
00 -> 00000
– Receive 01111 – closest codeword is 11111, so decode “11” and correct the one-bit error
Sharing the Medium

Collisions

[Figure: three adapters (71-65-F7-2B-08-53, 1A-2F-BB-76-09-AD, 0C-C4-11-6F-E3-98) attached to one shared wire]

• Single shared broadcast channel
– Avoid having multiple nodes speaking at once
– Otherwise, collisions lead to garbled data
Multiple Access Protocol
• Single shared broadcast channel
– Avoid having multiple nodes speaking at once
– Otherwise, collisions lead to garbled data

• Multiple access protocol


– Distributed algorithm for sharing the channel
– Algorithm determines which node can transmit
• Classes of techniques
– Channel partitioning: divide channel into pieces
– Taking turns: passing a token for the right to transmit
– Random access: allow collisions, and then recover
Channel Partitioning: TDMA
TDMA: time division multiple access
• Access to channel in "rounds"
– Each station gets fixed length slot in each round
• Time-slot length is packet transmission time
– Unused slots go idle
• Example: 6-station LAN with slots 1, 3, and 4

Channel Partitioning: FDMA
FDMA: frequency division multiple access
• Channel spectrum divided into frequency bands
– Each station has a fixed frequency band (WiFi channels 1-11)
• Unused transmission time in bands go idle
• Example: 6-station LAN with bands 1, 3, and 4
WDM: Wavelength division multiplexing


• Multiple wavelengths λ on same optical fiber

“Taking Turns” MAC protocols
Polling
• Primary node “invites” secondary nodes to transmit in turn
• Concerns:
– Polling overhead
– Latency
– Single point of failure (primary)

Token passing
• Control token (a special message) passed from one node to the next sequentially
• Concerns:
– Token overhead
– Latency
– Single point of failure (token)
Random Access Protocols
• When node has packet to send
– Transmit at full channel data rate R.
– No a priori coordination among nodes

• Two or more transmitting nodes ➜ “collision”


• Random access MAC protocol specifies:
– How to detect collisions
– How to recover from collisions (e.g., via delayed
retransmissions)
• Examples of random access MAC protocols:
– ALOHA, slotted ALOHA
– CSMA, CSMA/CD, CSMA/CA
Key Ideas of Random Access
• Carrier Sense (CS)
– Listen before speaking, and don’t interrupt
– Checking if someone else is already sending data
– … and waiting till the other node is done
• Collision Detection (CD)
– If someone else starts talking at the same time, stop
– Realizing when two nodes are transmitting at once
– …by detecting that the data on the wire is garbled
• Randomness
– Don’t start talking again right away
– Waiting for a random time before trying again
Slotted ALOHA

Assumptions:
▪ all frames same size
▪ time divided into equal-size slots (time to transmit 1 frame)
▪ nodes start to transmit only at slot beginning
▪ nodes are synchronized
▪ if 2 or more nodes transmit in a slot, all nodes detect the collision

Operation:
▪ when node obtains fresh frame, transmits in next slot
• if no collision: node can send new frame in next slot
• if collision: node retransmits frame in each subsequent slot with probability p until success
randomization – why?
Slotted ALOHA

[Figure: three nodes' transmissions across slots, each slot labeled C (collision), E (empty), or S (success)]

Pros:
▪ single active node can continuously transmit at full rate of channel
▪ highly decentralized: only slots in nodes need to be in sync
▪ simple

Cons:
▪ collisions, wasting slots
▪ idle slots
▪ nodes may be able to detect collision in less than time to transmit packet
▪ clock synchronization
Slotted ALOHA: efficiency
Efficiency: long-run fraction of successful slots (many
nodes, all with many frames to send)
• suppose: N nodes with many frames to send, each transmits in a slot with probability p
– prob that a given node has a success in a slot = p(1-p)^(N-1)
– prob that any node has a success = Np(1-p)^(N-1)
– max efficiency: find p* that maximizes Np(1-p)^(N-1)
– for many nodes, take the limit of Np*(1-p*)^(N-1) as N goes to infinity, giving:

max efficiency = 1/e ≈ 0.37

• at best: channel used for useful transmissions 37% of time!
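The efficiency expression is easy to evaluate numerically (illustrative Python); setting p* = 1/N maximizes it, and for large N the value approaches 1/e ≈ 0.37, as stated above:

```python
def slotted_aloha_success(n, p):
    """P[exactly one node transmits in a slot] = N * p * (1-p)^(N-1)."""
    return n * p * (1 - p) ** (n - 1)

# At p* = 1/N the efficiency tends to 1/e as N grows large
eff_large_n = slotted_aloha_success(10_000, 1 / 10_000)   # ~0.368
```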
Pure ALOHA
▪ unslotted ALOHA: simpler, no synchronization
• when frame first arrives: transmit immediately
▪ collision probability increases with no synchronization:
• a frame sent at t0 collides with any other frame sent in [t0-1, t0+1], since such a frame overlaps with the start or the end of node i's frame
▪ pure ALOHA efficiency: 18%!
ALOHA vs. Slotted ALOHA

[Figure: throughput vs. offered load G (a measure of the aggregate offered load); slotted ALOHA peaks at 0.37, pure ALOHA at 0.18]
CSMA (Carrier Sense Multiple Access)
• Collisions hurt the efficiency of ALOHA protocol
– At best, channel is useful 37% of the time
• ALOHA: transmit before listen
• CSMA: listen before transmit
– If channel sensed idle: transmit entire frame
– If channel sensed busy, defer transmission
• CSMA/CD: CSMA with collision detection
– collisions detected within short time
– colliding transmissions aborted, reducing channel wastage
– collision detection easy in wired, difficult with wireless
– human analogy: the polite conversationalist
CSMA (Carrier Sense Multiple Access)
CSMA: listen before transmit

[Figure: spatial layout of nodes along a shared wire]

• Collisions can still occur: propagation delay means two nodes may not hear each other’s transmission
• Collision: entire packet transmission time wasted
CSMA/CD Collision Detection
• Detect collision
– Abort transmission
– Jam the link
• Wait random time
– Transmit again
• Hard in wireless
– Must receive data while transmitting
Ethernet CSMA/CD algorithm

1. NIC receives datagram from network layer, creates frame


2. If NIC senses channel:
if idle: start frame transmission.
if busy: wait until channel idle, then transmit
3. If NIC transmits entire frame without collision, NIC is done with frame !
4. If NIC detects another transmission while sending: abort, send jam signal
5. After aborting, NIC enters binary (exponential) backoff:
• after mth collision, NIC chooses K at random from {0, 1, 2, …, 2^m - 1}. NIC waits
K·512 bit times, then returns to Step 2
• more collisions: longer backoff interval
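The backoff step can be sketched as follows (illustrative Python, not driver code; classic Ethernet caps the exponent at 10 and gives up after 16 collisions, and the cap here mirrors that convention):

```python
import random

def backoff_slots(m, max_exponent=10):
    """After the m-th collision, pick K uniformly from {0, ..., 2^m - 1},
    with the exponent capped as in classic Ethernet."""
    exponent = min(m, max_exponent)
    return random.randrange(2 ** exponent)

def backoff_bit_times(m):
    """Wait K * 512 bit times before returning to carrier sense."""
    return backoff_slots(m) * 512
```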
Cable access network: FDM, TDM and random access!

[Figure: cable headend with CMTS (cable modem termination system) connected via a splitter to residential cable modems; Internet frames, TV channels, and control are transmitted downstream at different frequencies]

▪ multiple downstream (broadcast) FDM channels: up to 1.6 Gbps/channel
▪ single CMTS transmits into channels
▪ multiple upstream channels (up to 1 Gbps/channel)
▪ multiple access: all users contend (random access) for certain upstream channel time slots; others assigned via TDM
Cable access network:

[Figure: CMTS at the cable headend sends a MAP frame on downstream channel i assigning minislots in interval [t1, t2] on upstream channel j; minislots carry cable modems' request frames and assigned upstream data frames]

DOCSIS: data over cable service interface specification
▪ FDM over upstream, downstream frequency channels
▪ TDM upstream: some slots assigned, some have contention
• downstream MAP frame: assigns upstream slots
• requests for upstream slots (and data) transmitted via random access (binary backoff) in selected slots
Three Ways to Share the Media
• Channel partitioning MAC protocols:
– Share channel efficiently and fairly at high load
– Inefficient at low load: delay in channel access, 1/N
bandwidth allocated even if only 1 active node!

• “Taking turns” protocols


– Eliminates empty slots without causing collisions
– Vulnerable to failures (e.g., failed node or lost token)

• Random access MAC protocols


– Efficient at low load: single node can fully utilize channel
– High load: collision overhead
Comparing the Three Approaches
• Channel partitioning is
(a) Efficient/fair at high load, inefficient at low load
(b) Inefficient at high load, efficient/fair at low load

• “Taking turns”
(a) Inefficient at high load
(b) Efficient at all loads
(c) Robust to failures
• Random access
(a) Inefficient at low load
(b) Efficient at all loads
(c) Robust to failures
