Module 2 CN Notes - Edited
INTRODUCTION
The Internet is a combination of networks glued together by connecting devices (routers or
switches). If a packet is to travel from one host to another, it needs to pass through these
networks. The figure shows this scenario: communication at the data-link layer is made up of
five separate logical connections between the data-link layers in the path.
The first node is the source host; the last node is the destination host. The other four nodes are
four routers. The first, the third, and the fifth links represent the three LANs; the second and the
fourth links represent the two WANs.
Services
The data-link layer provides services to the network layer; it receives services from the
physical layer.
The duty scope of the data-link layer is node-to-node.
It is responsible for delivering a datagram to the next node in the path: the sending node
needs to encapsulate the datagram received from the network layer in a frame, and the
receiving node needs to decapsulate the datagram from the frame.
Framing
A packet at the data-link layer is normally called a frame.
Flow Control
The sending data-link layer at the end of a link is a producer of frames; the receiving data-link
layer at the other end of a link is a consumer. If the rate of produced frames is higher than the rate
of consumed frames, frames at the receiving end need to be buffered while waiting to be
consumed (processed). Definitely, we cannot have an unlimited buffer size at the receiving side.
Error Control
A frame at the data-link layer needs to be changed to bits, transformed to electromagnetic
signals, and transmitted through the transmission media, so a frame is susceptible to error. An
error needs first to be detected. After detection, it needs to be either corrected at the receiver
node or discarded and retransmitted by the sending node. Error detection and correction is an
issue at every layer, whether node-to-node or host-to-host.
Congestion Control
Congestion control is considered an issue in the network layer or the transport layer because of its
end-to-end nature.
In a point-to-point link, the link is dedicated to the two devices; in a broadcast link, the link is
shared between several pairs of devices.
Two Sublayers
Data-link layer is divided into two sub-layers: data link control (DLC) and media access control
(MAC). The media access control sublayer deals only with issues specific to broadcast links.
LINK-LAYER ADDRESSING
A node with physical address 10 sends a frame to a node with physical address 87. The two
nodes are connected by a link (bus topology LAN). As the figure shows, the computer with
physical address 10 is the sender, and the computer with physical address 87 is the
receiver.
Unicast Address
Each host or each interface of a router is assigned a unicast address. Unicasting means one-to-one
communication. A frame with a unicast address destination is destined only for one entity in the
link.
Example 1:
The unicast link-layer addresses in the most common LAN, Ethernet, are 48 bits (six bytes),
presented as 12 hexadecimal digits separated by colons; for example, the following is the
link-layer address of a computer.
Multicast Address
Some link-layer protocols define multicast addresses. Multicasting means one-to-many
communication. However, the jurisdiction is local (inside the link).
Example 2:
The multicast link-layer addresses in the most common LAN, Ethernet, are 48 bits (six bytes)
that are presented as 12 hexadecimal digits separated by colons.
The second digit, however, needs to be an odd number in hexadecimal (so that the least
significant bit of the first byte is 1). The following shows a multicast address:
A3:34:45:11:92:F1
Broadcast Address
Some link-layer protocols define a broadcast address. Broadcasting means one-to-all
communication. A frame with a destination broadcast address is sent to all entities in the link.
Example 3:
The broadcast link-layer addresses in the most common LAN, Ethernet, are 48 bits, all 1s, that are
presented as 12 hexadecimal digits separated by colons. The following shows a broadcast address:
FF:FF:FF:FF:FF:FF
The system (A) has a packet that needs to be delivered to another system (B) with IP address N2.
System A needs to pass the packet to its data-link layer for the actual delivery, but it does not
know the physical address of the recipient. It uses the services of ARP by asking the ARP protocol
to send a broadcast ARP request packet to ask for the physical address of a system with an IP
address of N2.
This packet is received by every system on the physical network, but only system B will answer it,
as shown in Figure b. System B sends an ARP reply packet that includes its physical address. Now
system A can send all the packets it has for this destination using the physical address it received.
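The resolution steps above can be sketched as a tiny simulation. The dictionary of systems and the function name are hypothetical illustrations, not part of any real ARP implementation:

```python
# Each system on the link knows its own IP and physical (link-layer) address.
systems = {
    "N1": "10",   # system A: IP address N1, physical address 10
    "N2": "87",   # system B: IP address N2, physical address 87
}

def arp_resolve(target_ip):
    """Broadcast an ARP request: every system on the link sees it,
    but only the owner of the IP address replies with its physical address."""
    for ip, physical in systems.items():
        if ip == target_ip:
            return physical      # the ARP reply carries the physical address
    return None                  # no system with this IP on the link

print(arp_resolve("N2"))  # 87
```

Once system A caches the reply, it can address all further frames for N2 directly.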
Packet Format
Figure shows the format of an ARP packet. The names of the fields are self-explanatory. The
hardware type field defines the type of the link-layer protocol; Ethernet is given type 1. The
protocol type field defines the network-layer protocol: the IPv4 protocol is 0x0800. The source
hardware and source protocol addresses are variable-length fields defining the link-layer and
network-layer addresses of the sender. The destination hardware address and destination
protocol address fields define the receiver's link-layer and network-layer addresses. An ARP
packet is encapsulated directly into a data-link frame. The frame needs to have a field to show
that the payload belongs to ARP and not to the network-layer datagram.
Framing
Data transmission in the physical layer means moving bits in the form of a signal from the source
to the destination. The physical layer provides bit synchronization to ensure that the sender and
receiver use the same bit durations and timing.
Framing in the data-link layer separates a message from one source to a destination by adding a
sender address and a destination address. The destination address defines where the packet is to
go; the sender address helps the recipient acknowledge the receipt.
Frame Size
Frames can be of fixed or variable size.
In fixed-size framing, there is no need for defining the boundaries of the frames; the size itself can
be used as a delimiter.
In variable-size framing, we need a way to define the end of one frame and the beginning of the
next. Historically, two approaches were used for this purpose: a character-oriented approach
and a bit-oriented approach.
Character-Oriented Framing
In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters from a
coding system such as ASCII. The header, which normally carries the source and destination
addresses and other control information, and the trailer, which carries error detection redundant
bits, are also multiples of 8 bits. To separate one frame from the next, an 8-bit (1-byte) flag is
added at the beginning and the end of a frame. The flag, composed of protocol-dependent special
characters, signals the start or end of a frame. Figure shows the format of a frame in a character-
oriented protocol.
Character-oriented framing was popular when only text was exchanged by the data-link layers.
The flag could be selected to be any character not used for text communication. Now, however, we
send other types of information such as graphs, audio, and video; any character used for the flag
could also be part of the information. If this happens, the receiver, when it encounters this pattern
in the middle of the data, thinks it has reached the end of the frame. To fix this problem, a byte-
stuffing strategy was added to character-oriented framing.
In byte stuffing (or character stuffing), a special byte is added to the data section of the frame
when there is a character with the same pattern as the flag. The data section is stuffed with an
extra byte. This byte is usually called the escape character (ESC) and has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not as a delimiting flag. Figure shows the situation.
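The byte-stuffing rule above can be sketched as follows. The flag and ESC byte values (0x7E and 0x7D) are assumed here purely for illustration; the real values are protocol-dependent:

```python
FLAG, ESC = 0x7E, 0x7D   # assumed flag and escape byte values, for illustration

def byte_stuff(data: bytes) -> bytes:
    """Sender side: insert an ESC before any data byte that looks like FLAG or ESC."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)      # stuff an escape byte first
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver side: drop each ESC and treat the byte after it as plain data."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True       # remove the ESC; the next byte is data
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data
print(byte_stuff(data).hex())  # 417d7e427d7d43
```

The round trip shows that flag-like bytes survive transit without being mistaken for delimiters.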
Bit-Oriented Framing
In bit-oriented framing, the data section of a frame is a sequence of bits delimited by the
8-bit flag 01111110. To prevent this pattern from appearing in the data, bit stuffing inserts
an extra 0 after five consecutive 1s. Figure below shows bit stuffing at the sender and bit
removal at the receiver. Note that even if we have a 0 after five 1s, we still stuff a 0; the 0
will be removed by the receiver. This means that if the flaglike pattern 01111110 appears in
the data, it will change to 011111010 (stuffed) and is not mistaken for a flag by the receiver.
The real flag 01111110 is not stuffed by the sender and is recognized by the receiver.
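As a sketch of the rule, the following stuffs a 0 after every run of five 1s in the data section and removes it at the receiver (the bit strings are illustrative):

```python
def bit_stuff(bits: str) -> str:
    """Sender side: insert a 0 after every five consecutive 1s in the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: after five consecutive 1s, discard the stuffed 0."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        i += 1
        if run == 5:
            i += 1            # skip the stuffed 0 that follows
            run = 0
    return "".join(out)

# The flaglike pattern from the text becomes 011111010 after stuffing:
print(bit_stuff("01111110"))  # 011111010
assert bit_unstuff(bit_stuff("01111110")) == "01111110"
```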
Flow Control
The figure shows that the data-link layer at the sending node tries to push frames toward the data-
link layer at the receiving node. If the receiving node cannot process and deliver the packet to its
network at the same rate that the frames arrive, it becomes overwhelmed with frames. Flow
control in this case can be feedback from the receiving node to the sending node to stop or slow
down pushing frames.
Buffers
Although flow control can be implemented in several ways, one of the solutions is normally to use
two buffers; one at the sending data-link layer and the other at the receiving data-link layer. A
buffer is a set of memory locations that can hold packets at the sender and receiver. The flow
control communication can occur by sending signals from the consumer to the producer. When
the buffer of the receiving data-link layer is full, it informs the sending data-link layer to stop
pushing frames.
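The buffer-and-feedback idea above can be sketched with a bounded queue. The buffer size, the signal names, and the function names are assumed for illustration:

```python
from collections import deque

BUFFER_SIZE = 4          # assumed capacity, for illustration only
buffer = deque()
feedback = []            # signals sent from the consumer back to the producer

def receive_frame(frame):
    """Receiving data-link layer: buffer the frame, or signal STOP when full."""
    if len(buffer) == BUFFER_SIZE:
        feedback.append("STOP")      # tell the sending side to stop pushing
        return False
    buffer.append(frame)
    return True

def consume_frame():
    """Deliver one buffered packet to the network layer, freeing a slot."""
    frame = buffer.popleft()
    feedback.append("RESUME")        # the sending side may push again
    return frame

for n in range(5):
    receive_frame(n)     # the fifth frame finds the buffer full
print(feedback)          # ['STOP']
```

Consuming a frame frees a slot and lets the receiving side signal the sender to resume.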
Error Control
Error control at the data-link layer is normally very simple and implemented using one of the
following two methods. In both methods, a CRC is added to the frame trailer by the sender and
checked by the receiver.
In the first method, if the frame is corrupted, it is silently discarded; if it is not corrupted,
the packet is delivered to the network layer. This method is used mostly in wired LANs
such as Ethernet.
In the second method, if the frame is corrupted, it is silently discarded; if it is not corrupted,
an acknowledgment is sent (for the purpose of both flow and error control) to the sender.
Connectionless Protocol
In a connectionless protocol, frames are sent from one node to the next without any relationship
between the frames; each frame is independent. It means that there is no connection between
frames. The frames are not numbered and there is no sense of ordering. Most of the data-link
protocols for LANs are connectionless protocols.
Of the four traditional data-link protocols (Simple, Stop-and-Wait, Go-Back-N, and
Selective-Repeat), the first two are still used at the data-link layer; the last two have disappeared.
The behavior of a data-link-layer protocol can be better shown as a finite state machine (FSM). An
FSM is thought of as a machine with a finite number of states. The machine is always in one of the
states until an event occurs. Each event is associated with two reactions: defining the list of actions
to be performed and determining the next state. One of the states must be defined as the initial
state, the state in which the machine starts when it turns on.
The figure shows a machine with three states. There are only three possible events and three
possible actions. The machine starts in state I. If event 1 occurs, the machine performs actions 1
and 2 and moves to state II. When the machine is in state II, two events may occur. If event 2
occurs, the machine performs action 3 and remains in the same state, state II. If event 3 occurs,
the machine performs no action, but moves to state I.
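This machine can be sketched as a transition table. The state, event, and action names come from the description above, assuming that the two events possible in state II are event 2 (perform action 3, stay in II) and event 3 (no action, return to I):

```python
transitions = {
    # (current state, event): (list of actions, next state)
    ("I",  "event1"): (["action1", "action2"], "II"),
    ("II", "event2"): (["action3"],            "II"),
    ("II", "event3"): ([],                     "I"),
}

def run(events, state="I"):
    """Feed a sequence of events to the FSM, collecting the actions performed."""
    performed = []
    for event in events:
        actions, state = transitions[(state, event)]
        performed.extend(actions)
    return state, performed

print(run(["event1", "event2", "event3"]))
# ('I', ['action1', 'action2', 'action3'])
```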
Simple Protocol
Our first protocol is a simple protocol with neither flow nor error control. Assume that the
receiver can immediately handle any frame it receives. Figure shows the layout for this protocol.
FSMs
Each FSM has only one state, the ready state. The sending machine remains in the ready
state until a request comes from the process in the network layer.
When this event occurs, the sending machine encapsulates the message in a frame and
sends it to the receiving machine.
The receiving machine remains in the ready state until a frame arrives from the sending
machine. When this event occurs, the receiving machine decapsulates the message out of
the frame and delivers it to the process at the network layer.
Figure shows the FSMs for the simple protocol.
Example
Stop-and-Wait Protocol
The Stop-and-Wait protocol uses both flow and error control. In this protocol, the sender sends
one frame at a time and waits for an acknowledgment before sending the next one. To detect
corrupted frames, we need to add a CRC to each data frame. When a frame arrives at the receiver
site, it is checked. If its CRC is incorrect, the frame is corrupted and silently discarded.
The silence of the receiver is a signal for the sender that a frame was either corrupted or lost.
Every time the sender sends a frame, it starts a timer. If an acknowledgment arrives before the
timer expires, the timer is stopped and the sender sends the next frame (if it has one to send). If
the timer expires, the sender resends the previous frame, assuming that the frame was either lost
or corrupted. This means that the sender needs to keep a copy of the frame until its
acknowledgment arrives. When the corresponding acknowledgment arrives, the sender discards
the copy of the frame.
FSMs
Sender States
The sender is initially in the ready state, but it can move between the ready and blocking
state.
Ready State.
When the sender is in this state, it is only waiting for a packet from the network layer. If a
packet comes from the network layer, the sender creates a frame, saves a copy of the frame,
starts the only timer and sends the frame. The sender then moves to the blocking state.
Blocking State. When the sender is in this state, three events can occur:
If a time-out occurs, the sender resends the saved copy of the frame and restarts the timer.
If a corrupted ACK arrives, it is discarded.
If an error-free ACK arrives, the sender stops the timer and discards the saved copy of the
frame. It then moves to the ready state.
Receiver
The receiver is always in the ready state. Two events may occur:
If an error-free frame arrives, the message in the frame is delivered to the network layer
and an ACK is sent.
If a corrupted frame arrives, the frame is discarded.
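The sender's ready/blocking behaviour can be sketched as a small harness. The channel model (a list of per-attempt outcomes) and the function name are hypothetical test scaffolding, not part of the protocol:

```python
def stop_and_wait_send(packet, outcomes):
    """Send one frame, resending on each timeout until an error-free ACK arrives.
    `outcomes` lists the result of each attempt: 'ack', 'corrupt', or 'lost'.
    Returns the number of transmissions needed (the saved copy of `packet`
    is what gets resent each time)."""
    attempts = 0
    for result in outcomes:
        attempts += 1                 # send frame, start timer, keep a copy
        if result == "ack":
            return attempts           # stop timer, discard saved copy
        # frame lost/corrupted or ACK corrupted: the timer expires, resend
    raise RuntimeError("no ACK received")

print(stop_and_wait_send("pkt1", ["lost", "corrupt", "ack"]))  # 3
```

Each non-ACK outcome models a timeout in the blocking state; the saved copy is resent and the timer restarted.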
Piggybacking
The two protocols we discussed in this section are designed for unidirectional communication, in
which data is flowing only in one direction although the acknowledgment may travel in the other
direction. Protocols have been designed in the past to allow data to flow in both directions.
However, to make the communication more efficient, the data in one direction is piggybacked with
the acknowledgment in the other direction. In other words, when node A is sending data to node
B, node A also acknowledges the data received from node B. Because piggybacking makes
communication at the data-link layer more complicated, it is not a common practice.
Introduction:
The data-link layer has two sublayers: the upper sublayer, data link control (DLC), and the
lower sublayer, media access control (MAC).
The upper sublayer is responsible for flow and error control.
The lower sublayer is responsible for multiple-access resolution.
When the nodes are connected using a dedicated link, the lower sublayer is not required,
but when the nodes are connected using a multipoint (broadcast) link, multiple-access
resolution is required.
RANDOM ACCESS
No station is superior to another station and none is assigned control over another.
No station permits, or does not permit, another station to send.
At each instance, a station that has data to send uses a procedure defined by the
protocol to make a decision on whether or not to send.
Transmission is random among the stations.
Each station has the right to access the medium without being controlled by any other station.
All stations compete with one another to access the medium; random access methods
are also called contention methods.
If more than one station tries to send, there is an access conflict – collision, frames will
be either destroyed or modified.
The earliest random access method, ALOHA, was developed at the University of Hawaii in
the early 1970s.
It was designed for a radio (wireless) LAN, but it can be used on any shared medium.
The medium is shared between the stations. When a station sends data, another station
may attempt to do so at the same time. The data from the two stations collide and
become garbled.
PURE ALOHA
The original ALOHA protocol is called pure ALOHA.
Each station sends a frame whenever it has a frame to send.
Since there is only one channel to share, there is possibility of collision between frames
from different stations.
Even if one bit of a frame coexists on the channel with one bit from another frame,
there is a collision and both will be destroyed.
A collision involves two or more stations. If all these stations try to resend their frames
after the time-out, the frames will collide again.
Pure ALOHA dictates that when the time-out period passes, each station waits a
random amount of time before resending its frame. This time is called back-off time TB.
This randomness will help avoid more collisions.
To prevent congesting the channel with retransmitted frames, pure ALOHA dictates
that after a maximum number of retransmission attempts Kmax, a station must give up
and try later.
Problem:
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is
the throughput if the system (all stations together) produces
a. 1000 frames per second
b. 500 frames per second
c. 250 frames per second
Solution:
The frame transmission time is 200 bits / 200 kbps = 1 ms. For pure ALOHA, S = G × e^(−2G).
a. If the system creates 1000 frames per second, G = 1. S = 0.135 (13.5 percent), about 135
successful frames per second.
b. If the system creates 500 frames per second, G = 1/2. S = 0.184 (18.4 percent), about 92
successful frames per second.
c. If the system creates 250 frames per second, G = 1/4. S = 0.152 (15.2 percent), about 38
successful frames per second.
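As a sketch, the pure-ALOHA throughput for each offered rate can be computed with the standard formula S = G × e^(−2G), where G is the number of frames generated per frame-transmission time (the helper name is illustrative):

```python
from math import exp

def pure_aloha_throughput(frames_per_sec, bandwidth_bps, frame_bits):
    """Successful frames per second for pure ALOHA, S = G * e**(-2G)."""
    t_fr = frame_bits / bandwidth_bps      # frame transmission time (1 ms here)
    G = frames_per_sec * t_fr              # load: frames per frame time
    S = G * exp(-2 * G)                    # fraction of frames that succeed
    return round(S * frames_per_sec)       # successful frames per second

for rate in (1000, 500, 250):
    print(rate, "->", pure_aloha_throughput(rate, 200_000, 200))
# 1000 -> 135, 500 -> 92, 250 -> 38
```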
b) SLOTTED ALOHA
Throughput
G = the average number of frames generated by the system during one frame transmission time.
The throughput for slotted ALOHA is
S = G × e^(−G)
The maximum throughput, Smax, is for G = 1, i.e., Smax = 0.368 = 36.8%.
CSMA
CSMA (carrier sense multiple access) can reduce the possibility of collision, but it cannot
eliminate it. The reason for this is shown in the above figure, a space and time model of a CSMA
network. Stations are connected to a shared channel. The possibility of collision still exists
because of the propagation delay.
At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2
> t1), station C senses the medium and finds it idle because, at this time, the first bits from
station B have not reached station C. Station C also sends a frame. The two signals collide and
both frames are destroyed.
Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal
to propagate from one end of the medium to the other.
When a station sends a frame and any other station tries to send a frame during this time, a
collision will result. But if the first bit of the frame reaches the end of the medium, every
station will already have heard the bit and will refrain from sending.
Persistence strategies
Three persistence methods
1-persistent method, the non-persistent method, and the p-persistent method.
1- Persistent
The 1-persistent method is simple and straightforward. In this method, after the station finds
the line idle, it sends its frame immediately (with probability 1). This method has the highest
chance of collision because two or more stations may find the line idle and send their frames
immediately. We will see later that Ethernet uses this method.
Non-persistent
In the non-persistent method, a station that has a frame to send senses the line. If the line is
idle, it sends immediately. If the line is not idle, it waits a random amount of time and then
senses the line again. The non-persistent approach reduces the chance of collision because it is
unlikely that two or more stations will wait the same amount of time and retry to send
simultaneously. However, this method reduces the efficiency of the network because the
medium remains idle when there may be stations with frames to send.
p- Persistent
The p-persistent method is used if the channel has time slots with slot duration equal to or
greater than the maximum propagation time.
The p-persistent approach combines the advantages of the other two strategies. It reduces the
chance of collision and improves efficiency. In this method, after the station finds the line idle
it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 − p, the station waits for the beginning of the next time slot and
checks the line again.
If the line is idle, it goes to step 1.
If the line is busy, it acts as though a collision has occurred and uses the backoff
procedure.
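The steps above can be sketched as a loop. The `line_idle()` callback stands in for carrier sensing at each slot, and the stubbed random sources in the asserts make the run deterministic; all names are illustrative:

```python
import random

def p_persistent(p, line_idle, rng=random.random, max_slots=1000):
    """One p-persistent attempt after the station has found the line idle.
    Returns 'send' when the frame is transmitted, or 'backoff' when the
    line turns busy (treated as if a collision had occurred)."""
    for _ in range(max_slots):
        if not line_idle():
            return "backoff"     # line busy: use the backoff procedure
        if rng() < p:
            return "send"        # step 1: with probability p, send the frame
        # with probability q = 1 - p: wait for the next slot and check again
    return "backoff"

# Deterministic illustration with stubbed randomness:
print(p_persistent(0.3, line_idle=lambda: True, rng=lambda: 0.1))   # send
print(p_persistent(0.3, line_idle=lambda: False))                   # backoff
```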
The CSMA method does not specify the procedure following a collision. Carrier sense multiple
access with collision detection (CSMA/CD) augments the algorithm to handle the collision. In
this method, a station monitors the medium after it sends a frame to see if the transmission
was successful. If so, the station is finished. If, however, there is a collision, the frame is sent
again. One of the LAN protocols that used CSMA/CD is the traditional Ethernet with the data
rate of 10 Mbps.
To better understand CSMA/CD, let us look at the first bits transmitted by the two stations
involved in the collision. In Figure, stations A and C are involved in the collision.
At time t1, station A has executed its persistence procedure and starts sending the bits
of its frame.
At time t2, station C has not yet sensed the first bit sent by A. Station C executes its
persistence procedure and starts sending the bits in its frame, which propagate both to
the left and to the right.
The collision occurs sometime after time t2.
Station C detects a collision at time t3 when it receives the first bit of A’s frame.
Station C immediately (or after a short time) aborts transmission.
Station A detects collision at time t4 when it receives the first bit of C’s frame; it also
immediately aborts transmission.
Looking at the figure, we see that A transmits for the duration t4 − t1; C transmits for
the duration t3 − t2.
Procedure
Now let us look at the flow diagram for CSMA/CD in below Figure. It is similar to the one for
the ALOHA protocol, but there are differences.
Energy Level
A station that has a frame to send or is sending a frame needs to monitor the energy level to
determine if the channel is idle, busy, or in collision mode.
The level of energy in a channel can have three values: zero, normal, and abnormal.
At the zero level, the channel is idle.
At the normal level, a station has successfully captured the channel and is sending its frame.
At the abnormal level, there is a collision and the level of the energy is twice the normal level.
Figure shows the situation.
Throughput
The throughput of CSMA/CD is greater than that of pure or slotted ALOHA. The maximum
throughput occurs at a different value of G and is based on the persistence method.
For the p-persistent method, the maximum throughput depends on the value of p.
For the 1-persistent method, the maximum throughput is around 50 percent when G =
1.
For the non-persistent method, the maximum throughput can go up to 90 percent
when G is between 3 and 8.
CSMA/CA was invented for wireless networks. Collisions are avoided through the use of
CSMA/CA’s three strategies:
The interframe space
The contention window and
Acknowledgments
After waiting an IFS time, if the channel is still idle, the station can send, but it still needs to
wait a time equal to the contention window. The IFS variable can also be used to prioritize
stations or frame types. For example, a station that is assigned a shorter IFS has a higher
priority.
Contention Window
The contention window is an amount of time divided into slots. A station that is ready to
send chooses a random number of slots as its wait time. The number of slots in the window
changes according to the binary exponential backoff strategy. One interesting point about the
contention window is that the station needs to sense the channel after each time slot; if it finds
the channel busy, it does not restart the process, it just stops the timer and restarts it when the
channel is sensed as idle. This gives priority to the station with the longest waiting time.
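The binary exponential backoff growth of the window can be sketched as follows; the window limits and function name are assumed for illustration:

```python
import random

def backoff_slots(collisions, cw_min=1, cw_max=255, rng=random.randint):
    """Pick a random wait (in slots) from a contention window that doubles
    after each collision, capped at an assumed maximum cw_max."""
    window = min(cw_min * 2 ** collisions, cw_max)
    return rng(0, window)        # wait between 0 and `window` slots

# The window grows 1, 2, 4, 8, ... with successive collisions:
print([min(1 * 2 ** k, 255) for k in range(5)])  # [1, 2, 4, 8, 16]
```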
Acknowledgment
In addition, the data may be corrupted during the transmission. The positive acknowledgment
and the time-out timer can help guarantee that the receiver has received the frame.
When a station sends an RTS (request to send) frame, it includes the duration of time that it
needs to occupy the channel. The stations that are affected by this transmission create a timer
called a network allocation vector (NAV) that shows how much time must pass before these
stations are allowed to check the channel for idleness. Each time a station accesses the system
and sends an RTS frame, other stations start their NAV.
CONTROLLED ACCESS
In controlled access, the stations consult one another to find which station has the right to
send. A station can’t send unless it has been authorized by other stations.
a) Reservation:
In the reservation method, a station needs to make a reservation before sending the data.
Time is divided into intervals. In each interval, a reservation frame precedes the data frames
sent in the interval.
b) Polling:
Polling works with topologies in which one device is designated as a primary station and the
other devices are secondary stations. All data exchanges must be made through the primary
device even when the ultimate destination is a secondary device.
The select function is used whenever the primary device has something to send. The poll
function is used by the primary device to solicit transmissions from the secondary device.
c) Token Passing:
In the token-passing method, the stations in a network are organized in a logical ring. Token
management is needed for this access method. Stations must be limited in the time they can
have possession of the token. The token must be monitored to ensure it has not been lost or
destroyed.
Token-passing method:
IEEE STANDARDS
In 1985, the Computer Society of the IEEE started a project, called Project 802, to set
standards to enable intercommunication among equipment from a variety of
manufacturers.
It is a way of specifying functions of the physical layer and the data link layer of major LAN
protocols.
The standard was adopted by the American National Standards Institute (ANSI).
In 1987, the International Organization for Standardization (ISO) also approved it under the
designation ISO 8802.
Framing:
Figure shows one single LLC protocol serving several MAC protocols.
Framing is handled in both the LLC sublayer and the MAC sublayer.
LLC defines a protocol data unit (PDU), similar to that of High-level Data Link Control
(HDLC). The header contains a control field like the one in HDLC; this field is used for flow
and error control.
The two other header fields define the upper-layer protocol at the source and destination
that uses LLC.
These fields are called the destination service access point (DSAP) and the source
service access point (SSAP). The other fields defined in a typical data link control
protocol such as HDLC are moved to the MAC sublayer.
A frame defined in HDLC is divided into a protocol data unit (PDU) at the LLC sublayer and
a frame at the MAC sublayer, as shown in Figure.
LLC is needed to provide flow and error control for the upper-layer protocols.
For example, if a LAN or several LANs are used in an isolated system, LLC may be needed to
provide flow and error control for the application layer protocols.
Physical Layer
The physical layer is dependent on the implementation and type of physical media used.
IEEE defines detailed specifications for each LAN implementation. For example, although
there is only one MAC sublayer for Standard Ethernet, there is a different physical layer
specification for each Ethernet implementation, as we will see later.
STANDARD ETHERNET
It has gone through four generations:
Standard Ethernet (10 Mbps),
Fast Ethernet (100 Mbps),
Gigabit Ethernet (1 Gbps), and
Ten-Gigabit Ethernet (10 Gbps).
MAC Sublayer:
In Standard Ethernet, the MAC sublayer governs the operation of the access method.
It also frames data received from the upper layer and passes them to the physical layer.
Ethernet is unreliable, like IP and UDP. If a frame is corrupted during transmission
and the receiver finds out about the corruption (which has a high probability of
happening because of the CRC-32), the receiver drops the frame silently. It is the duty of
high-level protocols to find out about it.
Frame Format
The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of
protocol data unit (PDU), upper-layer data and padding, and the CRC.
Ethernet does not provide any mechanism for acknowledging received frames
The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating 0s and 1s that
alerts the receiving system to the coming frame and enables it to synchronize its input
timing.
The pattern provides only an alert and a timing pulse.
The 56-bit pattern allows the stations to miss some bits at the beginning of the frame.
The preamble is actually added at the physical layer and is not part of the frame.
Data:
This field carries data encapsulated from the upper-layer protocols.
It is a minimum of 46 and a maximum of 1500 bytes.
CRC: The last field contains error detection information, in this case a CRC-32
Frame Length
Ethernet has imposed restrictions on both the minimum and maximum lengths of a
frame.
The minimum length restriction is required for the correct operation of CSMA/CD.
An Ethernet frame needs to have a minimum length of 512 bits or 64 bytes.
Part of this length is the header and the trailer.
If we count 18 bytes of header and trailer (6 bytes of source address, 6 bytes of destination
address, 2 bytes of length or type, and 4 bytes of CRC), then the minimum length of data from
the upper layer is 64 − 18 = 46 bytes. If the upper-layer packet is less than 46 bytes, padding is
added to make up the difference.
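The padding rule above amounts to a one-line calculation; the function name is illustrative:

```python
def padded_data_length(payload_len):
    """Apply the minimum-frame rule from the text: minimum frame 64 bytes,
    18 bytes of header and trailer, so the data field must be at least
    64 - 18 = 46 bytes; shorter payloads are padded up to 46."""
    MIN_FRAME, OVERHEAD = 64, 18
    min_data = MIN_FRAME - OVERHEAD           # 46 bytes
    return max(payload_len, min_data)

print(padded_data_length(20))    # 46 (26 bytes of padding added)
print(padded_data_length(100))   # 100 (no padding needed)
```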
The standard defines the maximum length of a frame (without preamble and SFD field) as
1518 bytes.
If we subtract the 18 bytes of header and trailer, the maximum length of the payload is 1500
bytes. The maximum length restriction has two historical reasons.
First, memory was very expensive when Ethernet was designed: a maximum length
restriction helped to reduce the size of the buffer.
Second, the maximum length restriction prevents one station from monopolizing the
shared medium, blocking other stations that have data to send.
Addressing
Each station on an Ethernet network (such as a PC, workstation, or printer) has its own
network interface card (NIC).
The NIC fits inside the station and provides the station with a 6-byte physical address.
As shown in Figure, the Ethernet address is 6 bytes (48 bits), normally written in hexadecimal
notation, with a colon between the bytes.
A source address is always a unicast address—the frame comes from only one station.
The destination address, however, can be unicast, multicast, or broadcast.
If the least significant bit of the first byte in a destination address is 0, the address is
unicast; otherwise, it is multicast.
Example 1:
Define the type of the following destination addresses:
a. 4A:30:10:21:10:1A
b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF
Solution:
To find the type of the address, we need to look at the second hexadecimal digit from the left.
If it is even, the address is unicast.
If it is odd, the address is multicast.
If all digits are F’s, the address is broadcast.
Therefore, we have the following:
a. This is a unicast address because A in binary is 1010 (even).
b. This is a multicast address because 7 in binary is 0111 (odd).
c. This is a broadcast address because all digits are F’s.
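The classification rule above can be sketched in a few lines; the function name is illustrative:

```python
def address_type(addr):
    """Classify an Ethernet destination address using the rule above:
    all Fs is broadcast; otherwise the second hexadecimal digit decides
    (even = unicast, odd = multicast)."""
    if addr.upper() == "FF:FF:FF:FF:FF:FF":
        return "broadcast"
    return "multicast" if int(addr[1], 16) % 2 else "unicast"

print(address_type("4A:30:10:21:10:1A"))  # unicast  (A = 1010, even)
print(address_type("47:20:1B:2E:08:EE"))  # multicast (7 = 0111, odd)
print(address_type("FF:FF:FF:FF:FF:FF"))  # broadcast
```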
The way the addresses are sent out on line is different from the way they are written in
hexadecimal notation.
The transmission is left-to-right, byte by byte; however, for each byte, the least
significant bit is sent first and the most significant bit is sent last.
This means that the bit that defines an address as unicast or multicast arrives first at
the receiver.
Example 2:
Show how the address 47:20:1B:2E:08:EE is sent out on line.
Solution:
The address is sent left-to-right, byte by byte; for each byte, it is sent right-to-left, bit by bit, as
shown below:
11100010 00000100 11011000 01110100 00010000 01110111
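The byte-by-byte, LSB-first transmission order can be reproduced with a short function (the name is illustrative):

```python
def wire_bits(addr):
    """Show how an Ethernet address goes out on the line: bytes left to
    right, but within each byte the least significant bit is sent first."""
    out = []
    for byte_hex in addr.split(":"):
        bits = format(int(byte_hex, 16), "08b")   # MSB-first binary
        out.append(bits[::-1])                    # reverse: LSB goes first
    return " ".join(out)

print(wire_bits("47:20:1B:2E:08:EE"))
# 11100010 00000100 11011000 01110100 00010000 01110111
```

Note that the first bit on the wire is the unicast/multicast bit, so the receiver learns the address type immediately.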
In the bus topology, when station A sends a frame to station B, all stations receive it.
In the star topology, when station A sends a frame to station B, the hub receives it. Since
the hub is a passive element, it does not check the destination address of the frame; it
regenerates the bits (if they have been weakened) and sends them to all stations except
station A. In fact, it floods the network with the frame. In both topologies, every station
receives the frame; the difference lies in the way each station keeps or drops frames that
are not addressed to it.
Access Method
Since the network that uses the standard Ethernet protocol is a broadcast network, we need to
use an access method to control access to the shared medium. The Standard Ethernet chose
CSMA/CD with the 1-persistent method. Let us use a scenario to see how this method works for
the Ethernet protocol.
The medium sensing does not stop after station A has started sending the frame. Station A
needs to send and receive continuously.
a. Station A has sent 512 bits and no collision is sensed; the station then is sure that the frame
will go through and stops sensing the medium. If we consider the transmission rate of the
Ethernet as 10 Mbps, this means that it takes the station 512/(10 Mbps) = 51.2 μs to send out
512 bits. With a propagation speed in cable of 2 × 10^8 m/s, the first bit could have
traveled 10,240 meters (one way) or only 5120 meters (round trip), have collided with a bit from
the last station on the cable, and have gone back. In other words, if a collision were to occur, it
should occur by the time the sender has sent out 512 bits (worst case) and the first bit has
made a round trip of 5120 meters. We should know that if the collision happens in the middle
of the cable, not at the end, station A hears the collision earlier and aborts the transmission.
We also need to mention another issue. The above assumption is that the length of the cable is
5120 meters. The designer of the standard Ethernet actually put a restriction of 2500 meters
because we need to consider the delays encountered throughout the journey. It means that
they considered the worst case. The whole idea is that if station A does not sense the collision
before sending 512 bits, there must have been no collision, because during this time, the first
bit has reached the end of the line and all other stations know that a station is sending and
refrain from sending. In other words, the problem occurs when another station (for example,
the last station) starts sending before the first bit of station A has reached it. The other station
mistakenly thinks that the line is free because the first bit has not yet reached it. The reader
should notice that the restriction of 512 bits actually helps the sending station: the sending
station is certain that no collision will occur if none is heard during the first 512 bits, and it
can then discard the copy of the frame in its buffer.
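The numbers in scenario (a) can be checked with a quick computation, assuming the 10 Mbps rate and the 2 × 10^8 m/s propagation speed used above:

```python
RATE = 10e6        # transmission rate, bits per second
SPEED = 2e8        # propagation speed in cable, m/s
MIN_FRAME = 512    # minimum frame size, bits

slot_time = MIN_FRAME / RATE   # time to send 512 bits: ~51.2 us
one_way = SPEED * slot_time    # distance a bit covers in that time: ~10,240 m
round_trip = one_way / 2       # worst-case collision distance: ~5,120 m

print(f"{slot_time * 1e6:.1f} us, {one_way:.0f} m one way, {round_trip:.0f} m round trip")
# 51.2 us, 10240 m one way, 5120 m round trip
```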
b. Station A has sensed a collision before sending 512 bits. This means that one of the previous
bits has collided with a bit sent by another station. In this case both stations should refrain
from sending and keep the frame in their buffer for resending when the line becomes
available. However, to inform other stations that there is a collision in the network, the station
sends a 48-bit jam signal. The jam signal is to create enough signal (even if the collision
happens after a few bits) to alert other stations about the collision. After sending the jam
signal, the stations need to increment the value of K (the number of attempts). If, after the
increment, K is greater than 15 (the maximum number of attempts), the station aborts the
transmission; otherwise, it waits a random number of slot times, chosen between 0 and
2^K − 1, before sensing the line again (binary exponential backoff).
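The backoff step can be sketched as follows (an illustrative sketch of binary exponential backoff; the attempt limit of 15 and the window cap at 2^10 follow the classic CSMA/CD algorithm):

```python
import random

SLOT_TIME = 51.2e-6   # slot time for 10 Mbps Ethernet, seconds
MAX_ATTEMPTS = 15     # station gives up after 15 attempts

def backoff_delay(k: int) -> float:
    """Return a random wait (seconds) after the k-th collision."""
    if k > MAX_ATTEMPTS:
        raise RuntimeError("too many collisions: abort transmission")
    # Pick R uniformly from 0 .. 2^K - 1 (window capped at 2^10).
    r = random.randint(0, 2 ** min(k, 10) - 1)
    return r * SLOT_TIME
```

After each collision the window doubles, so repeated collisions spread the retransmission attempts over a rapidly growing interval.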
Efficiency of Standard Ethernet
The practical efficiency of the Standard Ethernet has been measured to be

Efficiency = 1 / (1 + 6.4 × a)

where the parameter "a" is the number of frames that can fit on the medium; it can be
calculated as a = (propagation delay)/(transmission delay).
Example
In the Standard Ethernet with the transmission rate of 10 Mbps, we assume that the length of
the medium is 2500 m and the size of the frame is 512 bits. The propagation speed of a signal
in a cable is normally 2 × 10^8 m/s.
The propagation delay is 2500/(2 × 10^8) = 12.5 μs and the transmission delay is
512/(10 Mbps) = 51.2 μs, so a = 12.5/51.2 ≈ 0.24, which means only 0.24 of a frame occupies
the whole medium in this case. The efficiency is 1/(1 + 6.4 × 0.24) ≈ 39 percent, which is
considered moderate; it means that 61 percent of the time the medium is occupied but not
used by a station.
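The arithmetic of the example can be reproduced directly (values taken from the example above):

```python
RATE = 10e6      # transmission rate, bps
LENGTH = 2500    # medium length, m
SPEED = 2e8      # propagation speed, m/s
FRAME = 512      # frame size, bits

prop_delay = LENGTH / SPEED   # 12.5 us
trans_delay = FRAME / RATE    # 51.2 us
a = prop_delay / trans_delay
efficiency = 1 / (1 + 6.4 * a)

print(round(a, 2), round(efficiency, 2))  # 0.24 0.39
```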
10Base5 implementation
The transceiver is responsible for transmitting, receiving, and detecting collisions.
The transceiver is connected to the station via a transceiver cable that provides
separate paths for sending and receiving.
The maximum length of the coaxial cable must not exceed 500 m, otherwise, there is
excessive degradation of the signal.
If a length of more than 500 m is needed, up to five segments, each with a maximum
length of 500 meters, can be connected using repeaters.
10Base2 implementation
10Base-T: Twisted-Pair Ethernet
10Base-T implementation
10Base-F: Fiber Ethernet
There are several types of optical fiber 10-Mbps Ethernet; the most common is called
10Base-F.
10Base-F uses a star topology to connect stations to a hub.
The stations are connected to the hub using two fiber-optic cables.
10Base-F implementation
INTRODUCTION
Wireless communication is one of the fastest-growing technologies. The demand for connecting
devices without the use of cables is increasing everywhere. Wireless LANs can be found on college
campuses, in office buildings, and in many public areas.
ARCHITECTURAL COMPARISON
Medium
The first difference between a wired and a wireless LAN is the medium.
In a wired LAN, we use wires to connect hosts (Ethernet moved from multiple access to
point-to-point access through its successive generations).
In a switched LAN, with a link-layer switch, the communication between the hosts is
point-to-point and full-duplex.
In a wireless LAN, the medium is air; the hosts share the same medium (multiple access)
and need MAC protocols to regulate access to it.
Hosts
In a wired LAN,
A host is always physically connected to its network at a point with a fixed link layer
address related to its network interface card (NIC).
A host can move from one point in the Internet to another point. In this case, its link-layer
address remains the same, but its network-layer address will change.
In a wireless LAN,
A host is not physically connected to the network; it can move freely and can use the
services provided by thenetwork.
Therefore, mobility in a wired network and wireless network are totally different issues.
Isolated LANs
A wired isolated LAN
It is a set of hosts connected via a link-layer switch.
A wireless isolated LAN, called an ad hoc network, is a set of hosts that communicate freely
with each other without an access point. To connect a wireless LAN to a wired infrastructure,
an access point (AP) is used.
The role of the access point is completely different from the role of a link-layer switch in
the wired environment:
an access point glues two different environments together, one wired and one wireless.
Communication between the AP and the wireless host occurs in a wireless environment;
communication between the AP and the infrastructure occurs in a wired environment.
Characteristics
There are several characteristics of wireless LANs that either do not apply to wired LANs or the
existence of which is negligible and can be ignored. We discuss some of these characteristics here
to pave the way for discussing wireless LAN protocols.
Attenuation
The strength of electromagnetic signals decreases rapidly because the signal disperses in all
directions; only a small portion of it reaches the receiver. The situation becomes worse with
mobile senders that operate on batteries and normally have small power supplies.
Interference
Another issue is that a receiver may receive signals not only from the intended sender, but also
from other senders if they are using the same frequency band.
Multipath Propagation
A receiver may receive more than one signal from the same sender because electromagnetic
waves can be reflected back from obstacles such as walls, the ground, or objects. The result is that
the receiver receives some signals at different phases (because they travel different paths). This
makes the signal less recognizable.
Error
With the above characteristics of a wireless network, we can expect that errors and error
detection are more serious issues in a wireless network than in a wired network. If we think about
the error level as the measurement of signal-to-noise ratio (SNR), we can better understand why
error detection and error correction and retransmission are more important in a wireless
network. We discussed SNR in more detail in Chapter 3, but it is enough to say that it measures the
ratio of good stuff to bad stuff (signal to noise). If SNR is high, it means that the signal is stronger
than the noise (unwanted signal), so we may be able to convert the signal to actual data. On the
other hand, when SNR is low, it means that the signal is corrupted by the noise and the data
cannot be recovered.
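The SNR idea can be made concrete with a small computation (the power values here are hypothetical, chosen only for illustration):

```python
import math

signal_power = 10e-3   # received signal power in watts (assumed)
noise_power = 1e-6     # noise power in watts (assumed)

snr = signal_power / noise_power   # dimensionless ratio of signal to noise
snr_db = 10 * math.log10(snr)      # the usual decibel form

print(f"SNR = {snr:.0f} ({snr_db:.0f} dB)")  # SNR = 10000 (40 dB)
```

A high value such as 40 dB means the signal dominates the noise and the data can be recovered; as the ratio falls toward 0 dB, recovery becomes impossible.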
Mrs. Chaitra S N., Dept. of ECE, GMIT, Davangere
Computer Communication Networks -18EC61 Module-2– Wireless LANs
Access Control
The most important issue in a wireless LAN is access control: how can a wireless host get
access to the shared medium (air)?
The CSMA/CD algorithm does not work in wireless LANs for three reasons:
1. To detect a collision, a host needs to send and receive at the same time. Wireless hosts do not
have enough power to do so (the power is supplied by batteries). They can only send or receive at
one time.
2. Because of the hidden station problem, in which a station may not be aware of another station’s
transmission due to some obstacles or range problems, collision may occur but not be detected.
Figure shows an example of the hidden station problem. Station B has a transmission range shown
by the left oval (sphere in space); every station in this range can hear any signal transmitted by
station B. Station C has a transmission range shown by the right oval (sphere in space); every
station located in this range can hear any signal transmitted by C. Station C is outside the
transmission range of B; likewise, station B is outside the transmission range of C. Station A,
however, is in the area covered by both B and C; it can hear any signal transmitted by B or C. The
figure also shows that the hidden station problem may also occur due to an obstacle.
Assume that station B is sending data to station A. In the middle of this transmission, station C also
has data to send to station A. However, station C is out of B’s range and transmissions from B
cannot reach C. Therefore C thinks the medium is free. Station C sends its data to A, which results
in a collision at A because this station is receiving data from both B and C. In this case, we say that
stations B and C are hidden from each other with respect to A. Hidden stations can reduce the
capacity of the network because of the possibility of collision.
3. The distance between stations can be great. Signal fading could prevent a station at one end
from hearing a collision at the other end. To overcome the above three problems, Carrier Sense
Multiple Access with Collision Avoidance (CSMA/CA) was invented for wireless LANs.
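The hidden-station geometry described in reason 2 can be illustrated with a tiny range check (the positions and range are hypothetical numbers, chosen so that A sits inside both ranges while B and C cannot hear each other):

```python
# Positions along a line, in metres; A is between B and C.
positions = {"A": 0, "B": -80, "C": 80}
RANGE = 100  # assumed transmission range of every station, metres

def hears(x: str, y: str) -> bool:
    """True if station x is within station y's transmission range."""
    return abs(positions[x] - positions[y]) <= RANGE

print(hears("B", "A"))  # True:  A receives B's frames
print(hears("C", "A"))  # True:  A receives C's frames too -> collision at A
print(hears("B", "C"))  # False: C cannot sense B's transmission
```

Because `hears("B", "C")` is false, carrier sensing at C misses B's ongoing transmission, which is exactly why CSMA/CA (with mechanisms such as RTS/CTS) is needed instead of CSMA/CD.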