Datalinklayer Intro DLC

Chapter 9 of Forouzan's 5th edition discusses the data-link layer, its functions, and protocols, including framing, flow control, error control, and media access control. It explains the importance of link-layer addressing for communication between nodes and the use of the Address Resolution Protocol (ARP) to resolve IP addresses to link-layer addresses. The chapter also outlines the types of addresses (unicast, multicast, broadcast) and the role of routers and nodes in data transmission across networks.

Uploaded by Bhoomika

Data-link Layer

Chapter 9, Forouzan's 5th edition


• Data-link Layer: Introduction
• Nodes and Links,
• Services, Categories of link,
• Sublayers, Link Layer addressing
• Types of addresses,
• ARP (repeated here for better understanding)
• Data Link Control (DLC) services: Framing, Flow and Error Control
• Data Link Layer Protocols: Stop and Wait protocol
• Piggybacking
• Media Access Control: Random Access (ALOHA, CSMA, CSMA/CD, CSMA/CA)
• The Internet is a combination of networks glued together by
connecting devices (routers or switches). If a packet is to travel from a
host to another host, it needs to pass through these networks. Figure
9.1 shows the same scenario
• Communication at the data-link layer is made up of five separate
logical connections between the data-link layers in the path.
• The data-link layer at Alice’s computer communicates with the data-
link layer at router R2. The data-link layer at router R2 communicates
with the data-link layer at router R4, and so on.
• Finally, the data-link layer at router R7 communicates with the data-
link layer at Bob’s computer.
• Only one data-link layer is involved at the source or the destination,
but two data-link layers are involved at each router.
• The reason is that Alice’s and Bob’s computers are each connected to
a single network, but each router takes input from one network and
sends output to another network
Nodes and Links
• Communication at the data-link layer is node-to-node.
• A data unit from one point in the Internet needs to pass through
many networks (LANs and WANs) to reach another point.
• These LANs and WANs are connected by routers.
• It is customary to refer to the two end hosts and the routers as nodes
and the networks in between as links.
Services
• The data-link layer is located between the physical and the network layers.
• The datalink layer provides services to the network layer; it receives services
from the physical layer.
• Let us discuss services provided by the data-link layer.
• The duty scope of the data-link layer is node-to-node.
• When a packet is travelling in the Internet, the data-link layer of a node (host or
router) is responsible for delivering a datagram to the next node in the path.
• For this purpose, the data-link layer of the sending node needs to encapsulate
the datagram received from the network in a frame, and the data-link layer of
the receiving node needs to decapsulate the datagram from the frame.
• The datagram received by the data-link layer of the source host is
encapsulated in a frame.
• The frame is logically transported from the source host to the router.
The frame is decapsulated at the data-link layer of the router and
encapsulated in another frame.
• The new frame is logically transported from the router to the
destination host.
• Note that, although we have shown only two data-link layers at the
router, the router actually has three data-link layers because it is
connected to three physical links.
• Framing- The data-link layer at each node needs to encapsulate the
datagram (packet received from the network layer) in a frame before
sending it to the next node. The node also needs to decapsulate the
datagram from the frame received on the logical channel
• Flow Control
• Error Control
• Congestion Control
Flow Control
• Flow Control
• Whenever an entity produces items and another entity consumes
them, there should be a balance between production and
consumption rates.
• If the items are produced faster than they can be consumed, the
consumer can be overwhelmed and may need to discard some items.
• If the items are produced more slowly than they can be consumed,
the consumer must wait, and the system becomes less efficient.
• Flow control is related to the first issue. We need to prevent losing the
data items at the consumer site.
Buffers
• Although flow control can be implemented in several ways, one of the
solutions is normally to use two buffers: one at the sending data-link
layer and the other at the receiving data-link layer.
• A buffer is a set of memory locations that can hold packets at the
sender and receiver.
• The flow control communication can occur by sending signals from
the consumer to the producer.
• When the buffer of the receiving data-link layer is full, it informs the
sending data-link layer to stop pushing frames.
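As a rough illustration, the stop/resume signalling between the two buffers can be sketched like this (the class and method names are made up for the example, not from any real protocol stack):

```python
from collections import deque

class ReceiverBuffer:
    """Illustrative receiver-side buffer with a simple flow-control signal."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = deque()

    def can_accept(self):
        # The "signal" from consumer to producer: stop when full.
        return len(self.slots) < self.capacity

    def push(self, frame):
        if not self.can_accept():
            raise OverflowError("receiver buffer full; sender must stop")
        self.slots.append(frame)

    def consume(self):
        return self.slots.popleft() if self.slots else None

rx = ReceiverBuffer(capacity=2)
sent = []
for frame in ["F1", "F2", "F3"]:
    if rx.can_accept():          # producer checks the flow-control signal
        rx.push(frame)
        sent.append(frame)
print(sent)   # ['F1', 'F2']  -- the third frame must wait
```

The key point the sketch shows: the producer never pushes a frame the consumer has no room for, so nothing is lost at the consumer site.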
Error Control
• Error control at the data-link layer is normally very simple
and implemented using one of the following two methods.
• In both methods, a CRC is added to the frame header by
the sender and checked by the receiver.
• ❑ In the first method, if the frame is corrupted, it is
silently discarded; if it is not corrupted, the packet is
delivered to the network layer. This method is used mostly
in wired LANs such as Ethernet.
• ❑ In the second method, if the frame is corrupted, it is
silently discarded; if it is not corrupted, an
acknowledgment is sent (for the purpose of both flow and
error control) to the sender.
Congestion Control
• Although a link may be congested with frames, which may result in
frame loss, most data-link-layer protocols do not directly use
congestion control to alleviate congestion, although some wide-area
networks do.
• In general, congestion control is considered an issue in the network
layer or the transport layer because of its end-to-end nature
Two Categories of Links
• Although two nodes are physically connected by a transmission medium such as
cable or air, we need to remember that the data-link layer controls how the
medium is used.
• We can have a data-link layer that uses the whole capacity of the medium; we can
also have a data-link layer that uses only part of the capacity of the link.
• In other words, we can have a point-to-point link or a broadcast link. In a point-to-
point link, the link is dedicated to the two devices; in a broadcast link, the link is
shared between several pairs of devices.
• For example, when two friends use the traditional home phones to chat, they are
using a point-to-point link; when the same two friends use their cellular
phones, they are using a broadcast link (the air is shared among many cell
phone users)
Two Sublayers
• To better understand the functionality of and the services provided by
the link layer, we can divide the data-link layer into two sublayers:
data link control (DLC) and media access control (MAC).
LINK-LAYER ADDRESSING

• The source and destination IP addresses define the two ends but
cannot define which links the datagram should pass through.
• The IP addresses in a datagram should not be changed. If the
destination IP address in a datagram changes, the packet never
reaches its destination;
• if the source IP address in a datagram changes, the destination host or
a router can never communicate with the source if a response needs
to be sent back or an error needs to be reported back to the source
• => we need another addressing mechanism in a connectionless
internetwork: the link-layer addresses of the two nodes.
• A link-layer address is sometimes called a link address, sometimes a
physical address, and sometimes a MAC address.
• Since a link is controlled at the data-link layer, the addresses need to belong
to the data-link layer.
• When a datagram passes from the network layer to the data-link layer, the
datagram will be encapsulated in a frame and two data-link addresses are
added to the frame header.
• These two addresses are changed every time the frame moves from one
link to another.
• The example has three links, two routers, and two hosts: Alice (source)
and Bob (destination).
• For each host, we have shown two addresses, the IP addresses (N)
and the link-layer addresses (L).
• Note that a router has as many pairs of addresses as the number of
links the router is connected to. We have shown three frames, one in
each link.
• Each frame carries the same datagram with the same source and
destination addresses (N1 and N8), but the link-layer addresses of the
frame change from link to link. In link 1, the link-layer addresses are
L1 and L2.
• In link 2, they are L4 and L5. In link 3, they are L7 and L8.
• Note that the IP addresses and the link-layer addresses are not in the
same order
• For IP addresses, the source address comes before the destination
address; for link-layer addresses, the destination address comes
before the source.
A few queries …
• If the IP address of a router does not appear in any datagram sent
from a source to a destination, why do we need to assign IP addresses
to routers? The answer is that in some protocols a router may act as a
sender or receiver of a datagram
• Why do we need more than one IP address in a router, one for each
interface?
• The answer is that an interface is a connection of a router to a link.
We will see that an IP address defines a point in the Internet at which
a device is connected.
• A router with n interfaces is connected to the Internet at n points.
This is the situation of a house at the corner of a street with two
gates; each gate has the address related to the corresponding street.
• How are the source and destination IP addresses in a packet
determined?
• The answer is that the host should know its own IP address, which
becomes the source IP address in the packet.
• the application layer uses the services of DNS to find the destination
address of the packet and passes it to the network layer to be
inserted in the packet.
• Again, each hop (router or host) should know its own link-layer
address, as we discuss later in the chapter.
• The destination link-layer address is determined by using the Address
Resolution Protocol, which we discuss shortly.
• What is the size of link-layer addresses?
• The answer is that it depends on the protocol used by the link.
• Although we have only one IP protocol for the whole Internet, we
may be using different data-link protocols in different links.
• This means that we can define the size of the address when we
discuss different link-layer protocols.
Three Types of Addresses
• Unicast Address
• Each host or each interface of a router is assigned a unicast address.
Unicasting means one-to-one communication. A frame with a unicast
destination address is destined for only one entity in the link. In a
unicast address, the least significant bit of the first byte is 0, so
the second hexadecimal digit is even.
A2:34:45:11:92:F1
• Multicast Address
• Some link-layer protocols define multicast addresses. Multicasting
means one-to-many communication. However, the jurisdiction is local
(inside the link). In a multicast address, the least significant bit of
the first byte is 1, so the second hexadecimal digit is odd.
• A3:34:45:11:92:F1
• Broadcast Address
• Some link-layer protocols define a broadcast address.
• Broadcasting means one-to-all communication. A frame with a
destination broadcast address is sent to all entities in the link.
• Broadcast addresses in Ethernet are 48 bits, all 1s, presented as 12
hexadecimal digits separated by colons:
• FF:FF:FF:FF:FF:FF
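A small helper can classify a 48-bit address. Note that this follows the IEEE convention, under which an address is multicast when the least significant bit of its first byte is 1 (i.e., the second hexadecimal digit is odd); the sample addresses are the ones used above:

```python
def mac_kind(mac: str) -> str:
    """Classify a 48-bit MAC address written as colon-separated hex."""
    octets = [int(b, 16) for b in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"          # all 48 bits are 1s
    if octets[0] & 1:               # least significant bit of first byte
        return "multicast"
    return "unicast"

print(mac_kind("A2:34:45:11:92:F1"))  # unicast  (first byte 0xA2, LSB 0)
print(mac_kind("A3:34:45:11:92:F1"))  # multicast (first byte 0xA3, LSB 1)
print(mac_kind("FF:FF:FF:FF:FF:FF"))  # broadcast
```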
Address Resolution Protocol (ARP)
• Anytime a node has an IP datagram to send to another node in a link,
it has the IP address of the receiving node.
• The source host knows the IP address of the default router.
• Each router except the last one in the path gets the IP address of the
next router by using its forwarding table.
• The last router knows the IP address of the destination host.
• However, the IP address of the next node is not helpful in moving a
frame through a link; we need the link-layer address of the next node.
• This is the time when the Address Resolution Protocol (ARP) becomes
helpful.
• The ARP protocol is one of the auxiliary protocols defined in the
network layer.
• Anytime a host or a router needs to find the link-layer address of
another host or router in its network, it sends an ARP request packet.
• The packet includes the link-layer and IP addresses of the sender and
the IP address of the receiver.
• Because the sender does not know the link-layer address of the
receiver, the query is broadcast over the link using the link-layer
broadcast address, which we discuss for each protocol later.
ARP operation
• Every host or router on the network receives and processes the ARP
request packet, but only the intended recipient recognizes its IP
address and sends back an ARP response packet.
• The response packet contains the recipient’s IP and link-layer
addresses. The packet is unicast directly to the node that sent the
request packet.
• In Figure 9.7a, the system on the left (A) has a packet that needs to be
delivered to another system (B) with IP address N2.
• System A needs to pass the packet to its data-link layer for the actual
delivery, but it does not know the physical address of the recipient.
• It uses the services of ARP by asking the ARP protocol to send a
broadcast ARP request packet to ask for the physical address of a
system with an IP address of N2.
• This packet is received by every system on the physical network, but only
system B will answer it, as shown in Figure 9.7b. System B sends an ARP reply
packet that includes its physical address. Now system A can send all the
packets it has for this destination using the physical address it received

Caching
• A question that is often asked is this: If system A can broadcast a
frame to find the link-layer address of system B, why can't system A
send the datagram for system B using a broadcast frame?
• In other words, instead of sending one broadcast frame (ARP request), one
unicast frame (ARP response), and another unicast frame (for sending the
datagram), system A could encapsulate the datagram in a broadcast frame and
send it over the network.
• Let us assume that there are 20 systems connected to the network
(link): system A, system B, and 18 other systems.
• We also assume that system A has 10 datagrams to send to system B
in one second.
• a. Without using ARP, system A needs to send 10 broadcast frames.
• Each of the 18 other systems needs to receive each frame, decapsulate
it, remove the datagram, and pass it to its network layer, only to find
that the datagram does not belong to it.
• This means processing and discarding 180 broadcast frames.
Packet Format
• Figure 9.8 shows the format of an ARP packet. The names of the fields are
self-explanatory.
• The hardware type field defines the type of the link-layer protocol; Ethernet
is given the type 1.
• The protocol type field defines the network-layer protocol: IPv4 protocol is
(0800)₁₆.
• The source hardware and source protocol addresses are variable-length
fields defining the link-layer and network-layer addresses of the sender.
• The destination hardware address and destination protocol address fields
define the receiver link-layer and network-layer addresses.
• An ARP packet is encapsulated directly into a data-link frame. The frame
needs to have a field to show that the payload belongs to the ARP and not
to the network-layer datagram.
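As a sketch of this layout, the fixed-size Ethernet/IPv4 case (hardware type 1, protocol type 0x0800, 6-byte hardware and 4-byte protocol addresses) can be packed with Python's `struct` module; the sample MAC and IP values below are invented for illustration:

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    # Ethernet (htype=1) over IPv4 (ptype=0x0800); MAC is 6 bytes, IP is 4.
    htype, ptype, hlen, plen, oper = 1, 0x0800, 6, 4, 1   # oper=1: request
    tha = b"\x00" * 6          # target MAC unknown: field left as zeros
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, oper,
                       sender_mac, sender_ip, tha, target_ip)

pkt = build_arp_request(bytes.fromhex("a23445119201"),
                        bytes([10, 0, 0, 1]),
                        bytes([10, 0, 0, 2]))
print(len(pkt))   # 28 bytes for an Ethernet/IPv4 ARP packet
```

The `!` prefix selects network (big-endian) byte order, matching how the fields go on the wire.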
• b. Using ARP, system A needs to send only one broadcast frame.
• Each of the 18 other systems needs to receive the frame, decapsulate
it, remove the ARP message, and pass the message to its ARP protocol,
only to find that the frame must be discarded.
• This means processing and discarding only 18 (instead of 180)
broadcast frames. After system B responds with its own data-link
address, system A can store the link-layer address in its cache
memory.
• The datagram frames themselves are all sent as unicast. Since
processing broadcast frames is expensive (time consuming), the second
method is preferable.
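The frame counts in this comparison are easy to verify with quick arithmetic:

```python
others = 18          # systems other than A and B on the link
datagrams = 10       # datagrams A sends to B in one second

# Without ARP: every datagram goes out as a broadcast frame, and each of
# the 18 uninvolved systems must process and discard it.
without_arp = datagrams * others          # 180 discarded frames

# With ARP: one broadcast ARP request, then all datagrams are unicast.
with_arp = 1 * others                     # 18 discarded frames

print(without_arp, with_arp)  # 180 18
```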
• A host with IP address N1 and MAC address L1 has a packet to send to
another host with IP address N2 and physical address L2 (which is
unknown to the first host).
• The two hosts are on the same network. Figure 9.9 shows the ARP
request and response messages.
Framing
• Data transmission in the physical layer means moving bits in the form of
a signal from the source to the destination.
• The physical layer provides bit synchronization to ensure that the sender
and receiver use the same bit durations and timing
• The data-link layer, on the other hand, needs to pack bits into frames, so
that each frame is distinguishable from another.
• Ex: Our postal system practices a type of framing. The simple act of
inserting a letter into an envelope separates one piece of information
from another; the envelope serves as the delimiter.
• In addition, each envelope defines the sender and receiver addresses,
which is necessary since the postal system is a many-to-many carrier
facility
• Framing in the data-link layer separates a message from one source to
a destination by adding a sender address and a destination address.
• The destination address defines where the packet is to go; the
sender address helps the recipient acknowledge the receipt.
• Although the whole message could be packed in one frame, that is not
normally done.
• One reason is that a frame can be very large, making flow and error
control very inefficient.
• When a message is carried in one very large frame, even a single-bit
error would require the retransmission of the whole frame.
• When a message is divided into smaller frames, a single-bit error
affects only that small frame.
• Frame Size: Frames can be of fixed or variable size.
• In fixed-size framing, there is no need for defining the boundaries of
the frames; the size itself can be used as a delimiter.
• An example of this type of framing is the ATM WAN, which uses
frames of fixed size called cells.
• Our main discussion in this chapter concerns variable-size framing,
prevalent in local-area networks.
• In variable-size framing, we need a way to define the end of one
frame and the beginning of the next.
• Historically, two approaches were used for this purpose:
• a character-oriented approach and
• a bit-oriented approach.
Character-Oriented Framing
• In character-oriented (or byte-oriented) framing, data to be carried
are 8-bit characters from a coding system such as ASCII
• The header, which normally carries the source and destination
addresses and other control information, and the trailer, which
carries error detection redundant bits, are also multiples of 8 bits.
• To separate one frame from the next, an 8-bit (1-byte) flag is added at
the beginning and the end of a frame.
• The flag, composed of protocol-dependent special characters, signals
the start or end of a frame.
• Figure 11.1 shows the format of a frame in a character-oriented
protocol.
• Character-oriented framing was popular when only text was exchanged by the
data-link layers.
• The flag could be selected to be any character not used for text
communication.
• Now, however, we send other types of information such as graphs, audio, and
video; any character used for the flag could also be part of the information.
• To fix this problem, a byte-stuffing strategy was added to character-oriented
framing.
• In byte stuffing (or character stuffing), a special byte is added to the data
section of the frame when there is a character with the same pattern as the
flag. The data section is stuffed with an extra byte.
• This byte is usually called the escape character (ESC) and has a predefined bit
pattern.
• Whenever the receiver encounters the ESC character, it removes it from the
data section and treats the next character as data, not as a delimiting flag.
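A minimal sketch of byte stuffing and unstuffing follows; the FLAG and ESC byte values (0x7E, 0x7D) are illustrative, since the actual special characters are protocol-dependent:

```python
FLAG, ESC = b"\x7e", b"\x7d"   # illustrative delimiter/escape values

def byte_stuff(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if bytes([b]) in (FLAG, ESC):
            out += ESC          # escape any byte that looks like FLAG or ESC
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    out, skip = bytearray(), False
    for b in stuffed:
        if not skip and bytes([b]) == ESC:
            skip = True         # next byte is data, not a delimiter
            continue
        out.append(b)
        skip = False
    return bytes(out)

payload = b"AB\x7eCD"                        # payload containing the flag pattern
assert byte_unstuff(byte_stuff(payload)) == payload
```

Note that the ESC byte itself must also be escaped, which is why the stuffing check covers both FLAG and ESC.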
• Character-oriented protocols present another problem in data
communications.
• The universal coding systems in use today, such as Unicode, have 16-
bit and 32-bit characters that conflict with 8-bit characters.

• Move towards bit oriented


Bit-Oriented Framing
• In bit-oriented framing, the data section of a frame is a sequence of
bits to be interpreted by the upper layer as text, graphic, audio, video,
and so on.
• However, in addition to headers (and possible trailers), we still need a
delimiter to separate one frame from the other.
• Most protocols use a special 8-bit pattern flag, 01111110, as the
delimiter to define the beginning and the end of the frame, as shown
in Figure 11.3.
• This flag can create the same type of problem we saw in the
character-oriented protocols.
• That is, if the flag pattern appears in the data, we need to somehow
inform the receiver that this is not the end of the frame.
• We do this by stuffing a single bit (instead of a byte) to prevent the
pattern from looking like a flag. The strategy is called bit stuffing.
• In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an
extra 0 is added.
• This extra stuffed bit is eventually removed from the data by the
receiver.
• Note that the extra bit is added after one 0 followed by five 1s
regardless of the value of the next bit.
• This guarantees that the flag field sequence does not inadvertently
appear in the frame.
• Figure 11.4 shows bit stuffing at the sender and bit removal at the
receiver. Note that even if we have a 0 after five 1s, we still stuff a 0.
The 0 will be removed by the receiver.
• This means that if the flag-like pattern 01111110 appears in the data, it
will change to 011111010 (stuffed) and is not mistaken for a flag by
the receiver.
• The real flag 01111110 is not stuffed
by the sender and is recognized by the
receiver.
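The stuffing rule above can be sketched directly; this is an HDLC-style counter working over a string of '0'/'1' characters for readability:

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert an extra 0 (HDLC-style)."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:
                out.append("0")   # stuff a 0 regardless of the next bit
                ones = 0
        else:
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:
            skip, ones = False, 0
            continue              # drop the stuffed 0
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:
                skip = True       # the bit after five 1s was stuffed
        else:
            ones = 0
    return "".join(out)

# The flag-like pattern 01111110 in the data becomes 011111010.
assert bit_stuff("01111110") == "011111010"
assert bit_unstuff("011111010") == "01111110"
```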
Flow and Error Control
• Flow Control
• Whenever an entity produces items and another entity consumes
them, there should be a balance between production and
consumption rates.
• If the items are produced faster than they can be consumed, the
consumer can be overwhelmed and may need to discard some items.
• If the items are produced more slowly than they can be consumed,
the consumer must wait, and the system becomes less efficient.
• Flow control is related to the first issue. We need to prevent losing the
data items at the consumer site.
Buffers
• Although flow control can be implemented in several ways, one of the
solutions is normally to use two buffers: one at the sending data-link
layer and the other at the receiving data-link layer.
• A buffer is a set of memory locations that can hold packets at the
sender and receiver.
• The flow control communication can occur by sending signals from
the consumer to the producer.
• When the buffer of the receiving data-link layer is full, it informs the
sending data-link layer to stop pushing frames.
Error Control
• Error control at the data-link layer is normally very simple
and implemented using one of the following two methods.
• In both methods, a CRC is added to the frame header by
the sender and checked by the receiver.
• ❑ In the first method, if the frame is corrupted, it is
silently discarded; if it is not corrupted, the packet is
delivered to the network layer. This method is used mostly
in wired LANs such as Ethernet.
• ❑ In the second method, if the frame is corrupted, it is
silently discarded; if it is not corrupted, an
acknowledgment is sent (for the purpose of both flow and
error control) to the sender.
Connectionless and Connection-Oriented
• A DLC protocol can be either connectionless or connection-oriented.
• Connectionless Protocol
• In a connectionless protocol, frames are sent from one node to the
next without any relationship between the frames; each frame is
independent.
• The frames are not numbered and there is no sense of ordering. Most
of the data-link protocols for LANs are connectionless protocols.
• Connection-Oriented Protocol
• In a connection-oriented protocol, a logical connection should first be established between the two
nodes (setup phase).
• After all frames that are somehow related to each other are
transmitted (transfer phase), the logical connection is terminated
(teardown phase).
• In this type of communication, the frames are numbered and sent in
order.
• If they are not received in order, the receiver needs to wait until all
frames belonging to the same set are received and then deliver them
in order to the network layer.
• Connection oriented protocols are rare in wired LANs, but we can see
them in some point-to-point protocols, some wireless LANs, and some
WANs.
DATA-LINK LAYER PROTOCOLS
• Traditionally four protocols have been defined for the data-link layer
to deal with flow and error control: Simple, Stop-and-Wait, Go-Back-
N, and Selective-Repeat.
• The behavior of a data-link-layer protocol can be better shown as a
finite state machine (FSM).
• An FSM is thought of as a machine with a finite number of states.
• The machine is always in one of the states until an event occurs.
• Each event is associated with two reactions: defining the list (possibly
empty) of actions to be performed
• and determining the next state (which can be the same as the
current state).
• In Figure 11.6, we show an example of a machine using FSM. We have
used rounded-corner rectangles to show states, colored text to show
events, and regular black text to show actions.
• A horizontal line is used to separate the event from the actions,
although later we replace the horizontal line with a slash.
• The arrow shows the movement to the next state.
Simple Protocol
• Our first protocol is a simple protocol with neither flow nor error
control.
• The figure shows a machine with three states.
Stop-and-Wait
• Our second protocol is called the Stop-and-Wait protocol, which uses
both flow and error control.
• In this protocol, the sender sends one frame at a time and waits for
an acknowledgment before sending the next one
• To detect corrupted frames, we need to add a CRC to each data frame. When a
frame arrives at the receiver site, it is checked.
• If its CRC is incorrect, the frame is corrupted and silently discarded. The silence
of the receiver is a signal for the sender that a frame was either corrupted or
lost.
• Every time the sender sends a frame, it starts a timer. If an acknowledgment
arrives before the timer expires, the timer is stopped and the sender sends the
next frame (if it has one to send).
• If the timer expires, the sender resends the previous frame, assuming that the
frame was either lost or corrupted.
• This means that the sender needs to keep a copy of the frame until its
acknowledgment arrives. When the corresponding acknowledgment arrives,
the sender discards the copy and sends the next frame if it is ready.
• Note that only one frame and one acknowledgment can be in the channels at
any time.
• Sender States The sender is initially in the ready state, but it can move between the
ready and blocking state.
• Ready State. When the sender is in this state, it is only waiting for a packet from the
network layer.
• If a packet comes from the network layer, the sender creates a frame, saves a copy of
the frame, starts the only timer and sends the frame. The sender then moves to the
blocking state.
• ❑ Blocking State. When the sender is in this state, three events can occur: a. If a time-
out occurs, the sender resends the saved copy of the frame and restarts the timer.
• b. If a corrupted ACK arrives, it is discarded.
• c. If an error-free ACK arrives, the sender stops the timer and discards the saved copy of
the frame. It then moves to the ready state.

• Receiver The receiver is always in the ready state.


• Two events may occur:
• a. If an error-free frame arrives, the message in the frame is delivered to the network
layer and an ACK is sent.
• b. If a corrupted frame arrives, the frame is discarded.
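The sender-side events above can be sketched as a tiny state machine; the class and method names are illustrative, not from any real implementation:

```python
class StopAndWaitSender:
    """Sketch of the Stop-and-Wait sender FSM described in the text."""
    def __init__(self):
        self.state = "ready"
        self.saved_frame = None
        self.timer_running = False

    def packet_arrives(self, packet):
        assert self.state == "ready"
        self.saved_frame = packet        # keep a copy for retransmission
        self.timer_running = True        # start the only timer
        self.state = "blocking"
        return ("send", packet)

    def timeout(self):
        assert self.state == "blocking"
        self.timer_running = True        # restart the timer
        return ("resend", self.saved_frame)

    def ack_arrives(self, corrupted=False):
        assert self.state == "blocking"
        if corrupted:
            return ("discard_ack", None) # stay in the blocking state
        self.timer_running = False       # stop timer, discard saved copy
        self.saved_frame = None
        self.state = "ready"
        return ("next", None)

s = StopAndWaitSender()
print(s.packet_arrives("frame0"))   # ('send', 'frame0')
print(s.timeout())                  # ('resend', 'frame0')
print(s.ack_arrives())              # ('next', None)
print(s.state)                      # ready
```

The assertions encode the rule that only one frame can be outstanding: no new packet is accepted while the sender is blocking.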
Sequence and Acknowledgment Numbers
• We saw a problem in Example 11.3 that needs to be addressed and corrected.
Duplicate packets, as well as corrupted packets, need to be avoided.
• As an example, assume we are ordering some item online. If each packet
defines the specification of an item to be ordered, duplicate packets mean
ordering an item more than once.
• To correct the problem in Example 11.3, we need to add sequence numbers
to the data frames and acknowledgment numbers to the ACK frames.
However, numbering in this case is very simple. Sequence numbers are 0, 1,
0, 1, 0, 1, . . . ; the acknowledgment numbers can also be 1, 0, 1, 0, 1, 0, …
• In other words, the sequence numbers start with 0, the acknowledgment
numbers start with 1
Piggybacking
• The two protocols we discussed in this section are designed for
unidirectional communication, in which data is flowing only in one
direction although the acknowledgment may travel in the other
direction.
• Protocols have been designed in the past to allow data to flow in
both directions.
• However, to make the communication more efficient, the data in one
direction is piggybacked with the acknowledgment in the other
direction.
• In other words, when node A is sending data to node B, Node A also
acknowledges the data received from node B.
• Because piggybacking makes communication at the data-link layer
more complicated, it is not a common practice.
• More details on two-way communication and piggybacking are in Chapter 23.
Medium Access Control (Chapter 12)
Figure 12.1 Data-link layer divided into two functionality-oriented
sublayers

Figure 12.2 Taxonomy of multiple-access protocols discussed in this
chapter
12-1 RANDOM ACCESS
In random access or contention methods, no station is
superior to another station and none is assigned
control over another.
No station permits, or does not permit, another station
to send.
At each instance, a station that has data to send uses
a procedure defined by the protocol to make a
decision on whether or not to send.
Topics discussed in this section:
ALOHA
Carrier Sense Multiple Access
Carrier Sense Multiple Access with Collision Detection
Carrier Sense Multiple Access with Collision Avoidance
Figure 12.3 Frames in a pure ALOHA network

Figure 12.4 Procedure for pure ALOHA protocol
Example 12.1
The stations on a wireless ALOHA network are a
maximum of 600 km apart. If we assume that signals
propagate at 3 × 10^8 m/s, we find
Tp = (600 × 10^3) / (3 × 10^8) = 2 ms.
Now we can find the value of TB for different values of K.

a. For K = 1, the range is {0, 1}. The station needs to
generate a random number with a value of 0 or 1. This
means that TB is either 0 ms (0 × 2) or 2 ms (1 × 2),
based on the outcome of the random variable.

Example 12.1 (continued)
b. For K = 2, the range is {0, 1, 2, 3}. This means that TB
can be 0, 2, 4, or 6 ms, based on the outcome of the
random variable.

c. For K = 3, the range is {0, 1, 2, 3, 4, 5, 6, 7}. This
means that TB can be 0, 2, 4, . . . , 14 ms, based on the
outcome of the random variable.

d. We need to mention that if K > 10, it is normally set to 10.
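A short script reproduces the backoff ranges of Example 12.1 (here TB = R × Tp with Tp = 2 ms, as in the example):

```python
# After the K-th collision the station picks a random R in
# {0, ..., 2^K - 1} and waits T_B = R * T_p.
Tp_ms = (600e3 / 3e8) * 1e3          # 600 km at 3 x 10^8 m/s -> 2.0 ms

for K in (1, 2, 3):
    tb_values = [R * Tp_ms for R in range(2**K)]
    print(f"K={K}: TB in {tb_values} ms")
```

For K = 1 this prints the two values 0.0 and 2.0 ms; for K = 3 it runs from 0 up to 14 ms in 2-ms steps, matching the ranges worked out above.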

Vulnerable Time
• Vulnerable time is the length of time in which there is a possibility of collision.
• We assume that the stations send fixed-length frames, with each frame taking
Tfr seconds to send.
• Figure 12.5 shows the vulnerable time for station B.
• Station B starts to send a frame at time t. Now imagine station A has started
to send its frame after t − Tfr. This leads to a collision between the frames
from station B and station A.
• On the other hand, suppose that station C starts to send a frame before time
t + Tfr. Here, there is also a collision between frames from station B and
station C.
• Looking at Figure 12.5, we see that the vulnerable time during which a
collision may occur in pure ALOHA is 2 times the frame transmission time.
Figure 12.5 Vulnerable time for pure ALOHA
protocol
Tfr=each frame
takes Tfr sec
• Let us call G the average number of frames generated by the system during one frame
transmission time.
• Then it can be proven that the average number of successfully transmitted frames for
pure ALOHA is S = G × e−2G.
• The maximum throughput Smax is 0.184, for G = 1/2. (We can find it by setting the derivative of S with respect to G to 0.)
• In other words, if one-half a frame is generated during one frame transmission time (one
frame during two frame transmission times), then 18.4 percent of these frames reach their
destination successfully.
• We expect G = 1/2 to produce the maximum throughput because the vulnerable time is 2
times the frame transmission time.
• Therefore, if a station generates only one frame in this vulnerable time (and no other
stations generate a frame during this time), the frame will reach its destination successfully
Note
The throughput for pure ALOHA is S = G × e^(−2G).
The maximum throughput Smax = 0.184 when G = 1/2.
Example 12.3
A pure ALOHA network transmits 200-bit frames on a
shared channel of 200 kbps. What is the throughput if the
system (all stations together) produces
a. 1000 frames per second b. 500 frames per second
c. 250 frames per second.
Solution
The frame transmission time is 200/200 kbps or 1 ms.
a. If the system creates 1000 frames per second, this is 1
frame per millisecond. The load is 1. In this case S =
G× e−2 G or S = 0.135 (13.5 percent). This means that
the throughput is 1000 × 0.135 = 135 frames. Only 135
frames out of 1000 will probably survive.
Example 12.3 (continued)
b. If the system creates 500 frames per second, this is
(1/2) frame per millisecond. The load is (1/2). In this
case S = G × e −2G or S = 0.184 (18.4 percent). This
means that the throughput is 500 × 0.184 = 92 and that
only 92 frames out of 500 will probably survive. Note
that this is the maximum throughput case,
percentagewise.
c. If the system creates 250 frames per second, this is (1/4) frame per millisecond. The load is (1/4). In this case S = G × e^(−2G) or S = 0.152 (15.2 percent). This means that the throughput is 250 × 0.152 = 38. Only 38 frames out of 250 will probably survive.
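Example 12.3 can be checked with a short Python sketch. This follows the example's own arithmetic; the function name is an assumption.

```python
import math

def pure_aloha_survivors(frames_per_sec, frame_bits=200, bw_bps=200_000):
    """Pure ALOHA throughput: S = G * e^(-2G), where G is the number
    of frames generated per frame transmission time."""
    tfr = frame_bits / bw_bps          # frame transmission time (s), here 1 ms
    g = frames_per_sec * tfr           # offered load G
    s = g * math.exp(-2 * g)           # throughput S
    return round(frames_per_sec * s)   # frames expected to survive

# As in Example 12.3: 1000 -> 135, 500 -> 92, 250 -> 38 surviving frames
```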
Slotted ALOHA
• Pure ALOHA has a vulnerable time of 2 × Tfr. This is so because there is
no rule that defines when the station can send. A station may send
soon after another station has started or just before another station
has finished.
• Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
• In slotted ALOHA we divide the time into slots of Tfr seconds and force
the station to send only at the beginning of the time slot.
• Figure 12.6 shows an example of frame collisions in slotted ALOHA.
Figure 12.6 Frames in a slotted ALOHA
network
Note
The throughput for slotted ALOHA is S = G × e^(−G).
The maximum throughput Smax = 0.368 when G = 1.
Figure 12.7 Vulnerable time for slotted ALOHA
protocol
Example 12.4
A slotted ALOHA network transmits 200-bit frames on a
shared channel of 200 kbps. What is the throughput if the
system (all stations together) produces
a. 1000 frames per second b. 500 frames per second
c. 250 frames per second.
Solution
The frame transmission time is 200/200 kbps or 1 ms.
a. If the system creates 1000 frames per second, this is 1 frame per millisecond. The load is 1. In this case S = G × e^(−G) or S = 0.368 (36.8 percent). This means that the throughput is 1000 × 0.368 = 368 frames. Only 368 frames out of 1000 will probably survive.
Example 12.4 (continued)
b. If the system creates 500 frames per second, this is (1/2) frame per millisecond. The load is (1/2). In this case S = G × e^(−G) or S = 0.303 (30.3 percent). This means that the throughput is 500 × 0.303 = 151. Only 151 frames out of 500 will probably survive.
c. If the system creates 250 frames per second, this is (1/4) frame per millisecond. The load is (1/4). In this case S = G × e^(−G) or S = 0.195 (19.5 percent). This means that the throughput is 250 × 0.195 = 49. Only 49 frames out of 250 will probably survive.
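A companion sketch for Example 12.4 computes the slotted ALOHA throughput fraction S. Again this is illustrative; the function name is an assumption.

```python
import math

def slotted_aloha_s(frames_per_sec, frame_bits=200, bw_bps=200_000):
    """Slotted ALOHA throughput: S = G * e^(-G)."""
    tfr = frame_bits / bw_bps      # frame transmission time (s), here 1 ms
    g = frames_per_sec * tfr       # offered load G
    return g * math.exp(-g)        # throughput S

# As in Example 12.4: S = 0.368, 0.303, 0.195 for 1000, 500, 250 frames/s
```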
Carrier Sense Multiple Access
• To minimize the chance of collision and, therefore, increase the performance, the CSMA method was developed.
• The chance of collision can be reduced if a station senses the medium
before trying to use it.
• Carrier sense multiple access (CSMA) requires that each station first
listen to the medium (or check the state of the medium) before
sending.
• In other words, CSMA is based on the principle “sense before
transmit” or “listen before talk.”
• CSMA can reduce the possibility of collision, but it cannot eliminate it.
• The reason for this is shown in Figure 12.7, a space and time model of a
CSMA network.
• Stations are connected to a shared channel (usually a dedicated
medium).
• The possibility of collision still exists because of propagation delay;
when a station sends a frame, it still takes time (although very short)
for the first bit to reach every station and for every station to sense it.
• In other words, a station may sense the medium and find it idle, only
because the first bit sent by another station has not yet been received.
Figure 12.7 Space/time model of the collision in
CSMA
• At time t1, station B senses the medium and finds it idle, so it sends a
frame.
• At time t2 (t2 > t1), station C senses the medium and finds it idle
because, at this time, the first bits from station B have not reached
station C.
• Station C also sends a frame. The two signals collide and both frames
are destroyed
• The vulnerable time for CSMA is the propagation time Tp.
• This is the time needed for a signal to propagate from one end of the
medium to the other.
• When a station sends a frame and any other station tries to send a frame
during this time, a collision will result.
• But if the first bit of the frame reaches the end of the medium, every
station will already have heard the bit and will refrain from sending.
• Figure 12.8 shows the worst case. The leftmost station, A, sends a frame
at time t1, which reaches the rightmost station, D, at time t1 + Tp.
• The gray area shows the vulnerable area in time and space.
Figure 12.8 Vulnerable time in CSMA is the propagation time Tp
What should a station do if the channel is busy or idle?
Solution: persistence methods.
Figure 12.9 and 12.10 Behavior of three
persistence methods
Flow diagram for three persistence methods
• The p-persistent method is used if the channel has time slots with a
slot duration equal to or greater than the maximum propagation time.
• The p-persistent approach combines the advantages of the other two
strategies.
• 1. With probability p, the station sends its frame.
• 2. With probability q = 1 − p, the station waits for the beginning of the
next time slot and checks the line again.
• a. If the line is idle, it goes to step 1.
• b. If the line is busy, it acts as though a collision has occurred and uses the
backoff procedure
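The p-persistent steps above can be sketched as a small simulation. This is a hedged sketch under assumed names: `channel_idle` is a stand-in for real carrier sensing, and `max_slots` is an artificial bound to keep the sketch finite.

```python
import random

def p_persistent_send(p, channel_idle, max_slots=100, rng=random):
    """p-persistent CSMA: with probability p send now; with probability
    q = 1 - p wait for the beginning of the next slot and sense again.
    channel_idle(slot) -> bool stands in for carrier sensing.
    Returns the slot at which the station transmits, or None if the
    line is busy (the station then acts as though a collision occurred
    and uses the backoff procedure)."""
    for slot in range(max_slots):
        if not channel_idle(slot):
            return None            # line busy: go to backoff
        if rng.random() < p:
            return slot            # send the frame in this slot
        # otherwise wait for the next time slot and check the line again
    return None

# With p = 1 on an idle channel, the station sends immediately (slot 0).
```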
CSMA/CD
• The CSMA method does not specify the procedure following a
collision. Carrier sense multiple access with collision detection
(CSMA/CD) augments the algorithm to handle the collision.
• In this method, a station monitors the medium after it sends a frame
to see if the transmission was successful.
• If so, the station is finished. If, however, there is a collision, the frame
is sent again
Figure 12.12 Collision of the first bit in
CSMA/CD
• To better understand CSMA/CD, let us look at the first bits transmitted
by the two stations involved in the collision.
• Although each station continues to send bits in the frame until it
detects the collision, we show what happens as the first bits collide.
• In Figure 12.12, stations A and C are involved in the collision.
• At time t1, station A has executed its persistence procedure and starts
sending the bits of its frame. At time t2, station C has not yet sensed
the first bit sent by A.
Figure 12.13 Collision and abortion in CSMA/CD
• Station C executes its persistence procedure and starts sending the
bits in its frame, which propagate both to the left and to the right.
• The collision occurs sometime after time t2. Station C detects a
collision at time t3 when it receives the first bit of A’s frame.
• Station C immediately (or after a short time, but we assume
immediately) aborts transmission.
• Station A detects collision at time t4 when it receives the first bit of
C’s frame; it also immediately aborts transmission.
• Looking at the figure, we see that A transmits for the duration t4 − t1;
C transmits for the duration t3 − t2.
Minimum frame size
• For CSMA/CD to work, we need a restriction on the frame size.
• The minimum frame transmission time is Tfr = 2 × Tp.
• If the two stations involved in a collision are the maximum distance apart, the signal from the first takes time Tp to reach the second, and the effect of the collision takes another time Tp to reach the first.
• For CSMA/CD to work, we need a restriction on the frame size. Before sending the
last bit of the frame, the sending station must detect a collision, if any, and abort
the transmission.
• This is so because the station, once the entire frame is sent, does not keep a copy
of the frame and does not monitor the line for collision detection.
• Therefore, the frame transmission time Tfr must be at least two times the
maximum propagation time Tp. To understand the reason, let us think about the
worst-case scenario.
• If the two stations involved in a collision are the maximum distance apart, the
signal from the first takes time Tp to reach the second, and the effect of the
collision takes another time Tp to reach the first.
• So the requirement is that the first station must still be transmitting after 2Tp.
Example 12.5
A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation time (including the delays in the devices and ignoring the time needed to send a jamming signal, as we see later) is 25.6 μs, what is the minimum size of the frame?
Solution
The frame transmission time is Tfr = 2 × Tp = 51.2 μs.
This means, in the worst case, a station needs to transmit
for a period of 51.2 μs to detect the collision. The
minimum size of the frame is 10 Mbps × 51.2 μs = 512
bits or 64 bytes. This is actually the minimum size of the
frame for Standard Ethernet.
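The arithmetic in Example 12.5 can be written out in a few lines. This is an illustrative check; the function name is an assumption.

```python
def min_frame_bits(bandwidth_bps, tp_seconds):
    """CSMA/CD minimum frame size: the station must still be
    transmitting after Tfr = 2 * Tp, so the frame needs at least
    bandwidth * 2 * Tp bits."""
    return bandwidth_bps * 2 * tp_seconds

# 10 Mbps, Tp = 25.6 us -> about 512 bits, i.e. 64 bytes,
# the minimum frame size of Standard Ethernet.
bits = min_frame_bits(10_000_000, 25.6e-6)
```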
Figure 12.14 Flow diagram for CSMA/CD
Figure 12.15 Energy level during transmission, idleness, or
collision
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
• In a wired network the received signal has almost the same strength/energy as the sent signal: either the cable length is small or repeaters are used.
• In a wireless network the energy of the signal is lost as the distance increases.
• Although a collision adds to the energy level (by 5 to 10 percent), this is not enough for collision detection.
• So CSMA/CA was invented. Collisions are avoided using 3 strategies:
⮚ Interframe space (IFS)
⮚ Contention window
⮚ Acknowledgment
Figure 12.16 Timing in CSMA/CA
IFS allows the front of a signal transmitted by a distant station to reach this station.
The contention window is an amount of time divided into slots.
Note
In CSMA/CA, the IFS can also be used to define the priority of a station or a frame.
Note
In CSMA/CA, if the station finds the channel busy, it does not restart the timer of the contention window; it stops the timer and restarts it when the channel becomes idle.
Figure 12.17 Flow diagram for
CSMA/CA
• Interframe Space (IFS). First, collisions are avoided by deferring
transmission even if the channel is found idle.
• When an idle channel is found, the station does not send
immediately.
• It waits for a period of time called the interframe space or IFS.
• Even though the channel may appear idle when it is sensed, a distant
station may have already started transmitting
• Contention Window. The contention window is an amount of time
divided into slots.
• A station that is ready to send chooses a random number of slots as
its wait time.
• The number of slots in the window changes according to the binary
exponential backoff strategy.
• This means that it is set to one slot the first time and then doubles
each time the station cannot detect an idle channel after the IFS time.
• This is very similar to the p-persistent method except that a random
outcome defines the number of slots taken by the waiting station
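The contention-window doubling described above can be sketched as follows. This is a hedged illustration; the cap of 10 doublings mirrors the earlier backoff rule and is an assumption here, not part of the slide.

```python
import random

def contention_wait_slots(failed_attempts, max_doublings=10):
    """CSMA/CA contention window with binary exponential backoff:
    the window is one slot the first time and doubles each time the
    station cannot detect an idle channel after the IFS time. The
    station then picks a random number of slots from the window."""
    window = 2 ** min(failed_attempts, max_doublings)
    return random.randint(0, window - 1)

# First attempt: window is one slot, so the wait is always 0 slots.
```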
• Acknowledgment. With all these precautions, there still may be a
collision resulting in destroyed data.
• In addition, the data may be corrupted during the transmission.
• The positive acknowledgment and the time-out timer can help
guarantee that the receiver has received the frame.
• Frame Exchange Time Line Figure 12.17 shows the exchange of data
and control frames in time.
• 1. Before sending a frame, the source station senses the medium by
checking the energy level at the carrier frequency.
• a. The channel uses a persistence strategy with backoff until the channel is
idle.
• b. After the channel is found to be idle, the station waits for a period of time
called the DCF interframe space (DIFS); then the station sends a control frame
called the request to send (RTS).
• 2. After receiving the RTS and waiting a period of time called the short
interframe space (SIFS), the destination station sends a control frame,
called the clear to send (CTS), to the source station. This control frame
indicates that the destination station is ready to receive data.
• 3. The source station sends data after waiting an amount of time
equal to SIFS.
• 4. The destination station, after waiting an amount of time equal to
SIFS, sends an acknowledgment to show that the frame has been
received.
• Acknowledgment is needed in this protocol because the station does
not have any means to check for the successful arrival of its data at
the destination.
• On the other hand, the lack of collision in CSMA/CD is a kind of
indication to the source that data have arrived.
Network Allocation Vector
• How do other stations defer sending their data if one station acquires
access? In other words, how is the collision avoidance aspect of this
protocol accomplished? The key is a feature called NAV.
• When a station sends an RTS frame, it includes the duration of time that
it needs to occupy the channel.
• The stations that are affected by this transmission create a timer called a
network allocation vector (NAV) that shows how much time must pass
before these stations are allowed to check the channel for idleness.
• Each time a station accesses the system and sends an RTS frame, other
stations start their NAV
• Collision During Handshaking What happens if there is a collision
during the time when RTS or CTS control frames are in transition,
often called the handshaking period? Two or more stations may try to
send RTS frames at the same time.
• These control frames may collide. However, because there is no
mechanism for collision detection, the sender assumes there has
been a collision if it has not received a CTS frame from the receiver.
• The backoff strategy is employed, and the sender tries again
• Hidden-Station Problem
• The solution to the hidden-station problem is the use of the handshake frames (RTS and CTS). The RTS message from B reaches A, but not C.
• However, because both B and C are within the range of A, the CTS message, which contains the duration of the data transmission from B to A, reaches C.
• Station C knows that some hidden station is using the channel and
refrains from transmitting until that duration is over.
• CSMA/CA and Wireless Networks CSMA/CA was mostly intended for
use in wireless networks.
• The procedure described above, however, is not sophisticated enough
to handle some particular issues related to wireless networks, such as
hidden terminals or exposed terminals.
• We will see how these issues are solved by augmenting the above
protocol with handshaking features.
Difference Between LAN and WLAN

• LAN stands for Local Area Network; WLAN stands for Wireless Local Area Network.
• A LAN may include both wired and wireless connections; a WLAN is completely wireless.
• A LAN is a collection of computers or other network devices in a particular location connected together by communication or network elements; in a WLAN the devices in a location are connected together wirelessly.
• A LAN is relatively free from external attacks such as signal interruption or cybercriminal attacks; a WLAN is vulnerable to external attacks.
• A LAN is more secure; a WLAN is less secure.
• Wired LANs have lost popularity with the arrival of the latest wireless networks; WLANs are popular.
• A wired LAN needs physical work such as connecting wires to switches or routers; a WLAN avoids that wiring work.
• In a LAN, devices are connected locally with Ethernet cable; for a WLAN an Ethernet cable is not necessary.
• LAN mobility is limited; WLAN mobility is outstanding.
• LAN signal transmission varies mainly with the quality of the cables; WLAN signal transmission is affected by external factors such as the environment.
• A LAN is less expensive; a WLAN is more expensive.
• Example of a LAN: computers connected in a college. Example of a WLAN: laptops, cellphones, and tablets connected to a wireless router or hotspot.