Computer Network Module-02 Search Creators
MODULE-2
Types of Errors:
Causes of Errors:
Errors occur during data transmission due to interference, which alters the
signal.
Redundancy:
Error Detection:
Error Correction:
Complexity of Correction:
The number of error possibilities increases with the number of bits and errors:
Redundancy in Coding:
Importance of Ratios:
Block Coding:
Convolution Coding:
More complex.
Not covered in this book.
Block Coding
One-to-One Mapping:
Error Detection:
Limitation:
o If a valid codeword is corrupted but still matches a valid codeword, the error will
go undetected.
Cyclic Codes
Definition:
Cyclic codes are a type of linear block code with the property that cyclic shifts (or
rotations) of a codeword result in another valid codeword.
Importance:
The cyclic property of these codes simplifies error detection and correction
processes, making them robust for various applications.
Definition: CRC is a type of cyclic code used to detect errors in digital data
transmission.
Usage: Commonly employed in networks such as Local Area Networks (LANs) and
Wide Area Networks (WANs).
Properties:
Error Detection: CRC can effectively identify accidental changes to raw data,
making it crucial for reliable data transmission.
Implementation: The CRC algorithm involves polynomial division of the data bits,
where the divisor is a predetermined polynomial.
Example: Table 10.3 illustrates a specific CRC code, highlighting its linear and cyclic
properties.
Dataword Representation:
Divisor Representation:
Augmented Dataword:
o To prepare for encoding, the dataword is left-shifted by the degree of the divisor
(3 bits), resulting in x^6 + x^3.
Polynomial Division:
o The division process starts by dividing the first term of the dividend (x^6) by the
first term of the divisor (x^3), resulting in x^3.
o This quotient term is then multiplied by the divisor and subtracted from the
dividend to produce a new dividend.
o The division continues until the remainder's degree is less than that of the
divisor.
Simplification of Division:
o The polynomial representation simplifies the division process, avoiding the need
for handling all-0s divisors.
o This streamlined approach makes polynomial division more efficient compared to
binary division, where similar steps can also be eliminated.
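The division steps above can be sketched in a few lines of Python. This is an illustrative modulo-2 (XOR) long division, not the table-driven CRC used in real hardware; the dataword 1001 and divisor 1011 (x^3 + x + 1) are assumed example values in the style of the text.

```python
def crc_divide(bits, divisor):
    """Modulo-2 (XOR) long division; returns the remainder bits."""
    data = list(bits)
    steps = len(bits) - len(divisor) + 1
    for i in range(steps):
        if data[i] == '1':  # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                data[i + j] = '0' if data[i + j] == d else '1'
    return ''.join(data[-(len(divisor) - 1):])

dataword, divisor = '1001', '1011'               # divisor = x^3 + x + 1 (degree 3)
augmented = dataword + '0' * (len(divisor) - 1)  # left-shift by the divisor degree
remainder = crc_divide(augmented, divisor)       # CRC bits appended to the dataword
codeword = dataword + remainder
print(codeword)  # 1001110
```

Re-dividing the received codeword by the same divisor yields an all-zeros remainder when no error occurred, which is exactly the receiver's check.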
Error Detection:
Implementation:
Simple Algebra:
o More powerful cyclic codes exist, based on abstract algebra and Galois fields,
though they are beyond this discussion.
o Notable examples include Reed-Solomon codes, which are used for both
error detection and correction.
Hardware Benefits:
Optional Content:
o The section on hardware implementation is optional and does not impact the
overall understanding of the chapter.
Chapter: 2
DLC Services:
Framing:
o Bits transmitted in the physical layer need to be organized into frames in the
data-link layer.
o Frames help in distinguishing different messages and ensure correct
communication with sender and receiver addresses.
Frame Size:
Character-Oriented Framing:
o Frames carry 8-bit characters (e.g., ASCII) with header and trailer containing
control information.
o Special flag characters are used to mark the beginning and end of frames.
o If the flag character appears in the data, the receiver might incorrectly identify it
as the end of the frame.
o To fix the flag issue, an escape character (ESC) is added before any data
matching the flag pattern.
o The receiver removes the ESC and treats the next character as part of the data.
o If an escape character is followed by a byte with the same pattern as the flag,
the receiver could incorrectly interpret it as the end of the frame.
o Solution: If the escape character is part of the text, it is doubled to indicate that
it’s part of the data and not a delimiter.
This process ensures smooth communication even when non-text data, such as
audio or video, are transmitted.
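A minimal sketch of byte stuffing and unstuffing in Python, assuming hypothetical one-byte FLAG (0x7E) and ESC (0x7D) values; the actual special characters depend on the protocol in use.

```python
FLAG, ESC = 0x7E, 0x7D  # assumed example values for the flag and escape bytes

def byte_stuff(payload: bytes) -> bytes:
    """Insert ESC before any data byte that looks like FLAG or ESC."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # escape it so the receiver treats it as data
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove each ESC and keep the byte that follows it as plain data."""
    out, it = bytearray(), iter(stuffed)
    for b in it:
        if b == ESC:
            b = next(it)  # the escaped byte is data, not a delimiter
        out.append(b)
    return bytes(out)

payload = bytes([0x01, FLAG, ESC, 0x02])
assert byte_unstuff(byte_stuff(payload)) == payload  # round-trip is lossless
```

Doubling the ESC before a literal ESC byte is what makes the scheme unambiguous: the receiver always knows the byte after an ESC is data.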
Bit-Oriented Framing:
o To prevent confusion if the flag pattern appears in the data, a technique called
bit stuffing is used.
o After encountering five consecutive 1s, an extra 0 is inserted to avoid creating
the flag pattern by mistake.
o The stuffed bit is removed by the receiver during the decoding process.
o Even if the flag pattern (01111110) appears in the data, it is modified (via
stuffing) to 011111010, so the receiver does not mistake it for the real frame
delimiter.
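The bit-stuffing rule above can be sketched directly (bits are represented as a string for readability):

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert a 0 so the data never mimics the flag."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # the stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:             # this is a stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip, run = True, 0
    return ''.join(out)

print(bit_stuff('01111110'))  # 011111010, matching the example above
```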
Flow Control:
Communication Entities:
o At the data-link layer, four entities are involved: the network and data-link
layers at both the sending and receiving nodes.
o The focus is on flow control between the two data-link layers of the sending and
receiving nodes.
o The sending data-link layer pushes frames toward the receiving node’s data-
link layer.
o If the receiving node cannot process the frames fast enough, flow control signals
slow down or stop the transmission of frames.
Buffers:
o Buffers, used at both the sending and receiving data-link layers, are memory
locations to hold frames.
o When the buffer of the receiving data-link layer becomes full, it signals the
sending data-link layer to pause transmission until space is available in the
buffer again.
Flow control ensures data transmission efficiency by preventing data overflow at the
receiver, maintaining smooth communication between nodes.
Error Control:
o Error control ensures that corrupted frames are not delivered to the network
layer due to the unreliability of the physical layer.
o Two common methods of error control:
1. Silent Discarding: Corrupted frames are discarded, and uncorrupted
frames are delivered without acknowledgment (used in wired LANs like
Ethernet).
2. Acknowledgment-Based: Corrupted frames are discarded, but
uncorrupted frames prompt an acknowledgment sent to the sender (used
for both flow and error control).
o A single acknowledgment (ACK) can be used for both flow and error control.
o An ACK indicates successful and uncorrupted delivery of a frame, while no
acknowledgment implies a problem with the frame.
Connectionless Protocol:
o Frames are not numbered and are sent without any logical connection. This is
typical for data-link protocols in LANs.
Connection-Oriented Protocol:
Flow and error control strategies ensure reliable and efficient communication
between nodes, handling corrupted frames and preventing data overflow while
maintaining orderly communication when needed.
Traditional Protocols:
o Initial state: One state must be defined as the starting state when the machine
is activated.
This FSM model helps visualize the behavior of data-link layer protocols through
state transitions based on events and actions.
Simple Protocol
FSMs
In the Simple Protocol, the communication process between the sender and receiver
can be modeled using Finite State Machines (FSMs).
FSM Behavior:
Both the sender and receiver operate in a single "ready" state, and they
transition based on the arrival of a message (sender side) or a frame
(receiver side).
Sender: Transitions from waiting for a message to encapsulating and
sending a frame.
Receiver: Transitions from waiting for a frame to decapsulating and
delivering the message to the network layer.
This simple FSM structure reflects the minimal functionality of the protocol, with only
essential actions like sending and receiving messages.
Stop-and-Wait Protocol:
Protocol Overview:
Error Detection:
Sender Process:
Frame Resending:
o The sender retains a copy of the sent frame until it receives a valid ACK, at
which point the frame is discarded.
Receiver Process:
Sender States:
o Ready State: Waits for a packet from the network layer, creates a frame,
sends it, and starts the timer.
o Blocking State:
Timeout: The sender resends the frame.
Corrupted ACK: Discarded.
Error-free ACK: Timer is stopped, and the next frame is sent.
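The two sender states can be modeled as a toy FSM. This is only a sketch: timers and frame formats are not modeled, and the method names (packet_arrives, ack_arrives) are invented for illustration.

```python
class StopAndWaitSender:
    """Toy model of the two-state Stop-and-Wait sender (illustrative names)."""
    def __init__(self):
        self.state = 'ready'
        self.copy = None              # copy of the outstanding frame

    def packet_arrives(self, frame):
        assert self.state == 'ready'
        self.copy = frame             # keep a copy until a valid ACK arrives
        self.state = 'blocking'       # (timer would start here)
        return ('send', frame)

    def timeout(self):
        assert self.state == 'blocking'
        return ('resend', self.copy)  # timer expired: resend the saved copy

    def ack_arrives(self, corrupted):
        assert self.state == 'blocking'
        if corrupted:
            return ('discard_ack', None)       # corrupted ACK is discarded
        self.copy, self.state = None, 'ready'  # stop timer, ready for next frame
        return ('delivered', None)
```

Walking one frame through the machine shows the ready → blocking → ready cycle described above.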
Example 11.4:
Piggybacking
Piggybacking Concept:
Efficiency: Acknowledgments from one node (e.g., Node A) are included (or
piggybacked) with the data being sent in the opposite direction (e.g., to Node B).
This avoids sending separate ACK frames, improving the efficiency of
communication.
Challenges:
Piggybacking increases complexity at the data link layer, which can make it
less commonly used in practice.
Further Discussion:
Protocol Overview:
Implementation:
o HDLC implements the Stop-and-Wait protocol, which controls the flow of data
and manages error detection and correction.
o Although HDLC is largely theoretical, many of its concepts are foundational for
other practical protocols.
Applications:
o HDLC concepts serve as the basis for protocols like PPP (Point-to-Point
Protocol), discussed later.
o It also influences Ethernet protocols in wired LANs (discussed in Chapter 13)
and wireless LANs (Chapter 15).
HDLC Configurations:
Framing in HDLC:
Frame Structure:
Address Field: Identifies the secondary station; it may carry either the to or
from address depending on the source.
Control Field: Specifies the type of frame (I-frame, S-frame, or U-frame) and its
functionality (flow/error control).
Information Field: Holds user data from the network layer or management
information (variable length).
Frame Check Sequence (FCS) Field: Used for error detection via a 2- or
4-byte CRC (Cyclic Redundancy Check).
Ending Flag Field: Marks the end of the frame; in some cases, it can also serve
as the beginning flag of the next frame in consecutive transmission.
o The control field differs based on the frame type (I-frame, S-frame, U-frame),
defining the specific operations for flow control, error handling, or system
management
Purpose: I-frames carry user data from the network layer and may also include
flow and error control information through piggybacking.
Subfields:
o First bit (Type): A value of 0 indicates that the frame is an I-frame.
o N(S) (Send Sequence Number): The next 3 bits define the
sequence number of the frame (values between 0 and 7).
o P/F bit (Poll/Final): A single bit between N(S) and N(R) with two
functions:
Poll (P): When the frame is sent by a primary station to a
secondary.
Final (F): When the frame is sent by a secondary station to a
primary.
o N(R) (Next Receive Sequence Number): The last 3 bits represent the
acknowledgment number when piggybacking is used.
Purpose: S-frames are used for flow control and error control when
piggybacking is not feasible. They do not contain an information field.
Subfields:
First two bits (Type): A value of 10 indicates that the frame is an S-frame.
Code (2 bits): Defines the specific type of S-frame, allowing for four possible
values:
1. Receive Ready (RR):
Code: 00
A positive acknowledgment; N(R) contains the number of the
next frame expected.
2. Receive Not Ready (RNR):
Code: 10
A positive acknowledgment that also announces the receiver is
busy and the sender should pause; N(R) contains the
acknowledgment number.
3. Reject (REJ):
Code: 01
A negative acknowledgment (NAK) used in Go-Back-N ARQ
to notify the sender that a frame is lost or corrupted
before the sender's timer expires. N(R) contains the negative
acknowledgment number.
4. Selective Reject (SREJ):
Code: 11
A NAK used in Selective Repeat ARQ, known as Selective
Reject in HDLC. It specifies the negative acknowledgment
number (N(R)).
N(R): The last 3 bits in S-frames represent the acknowledgment number or
negative acknowledgment number depending on the type of S-frame.
Purpose:
P/F Bit:
Similar to I-frames and S-frames, the P/F bit in U-frames indicates whether
the frame is acting as a Poll (P) or Final (F) signal.
o Poll (P): When the frame is sent by the primary station to the
secondary.
o Final (F): When the frame is sent by the secondary station to the
primary.
Example:
Chapter: 3
Equal Access Rights: No station has superiority over another, and any station
can attempt transmission based on the state of the medium.
Random Transmission: Stations transmit data when they wish, leading to
random access, with no predetermined order or schedule.
Collision Handling: When multiple stations transmit simultaneously, collisions
occur, resulting in the need for procedures to resolve these conflicts.
ALOHA Protocol:
The ALOHA protocol was one of the earliest random access methods, originally
designed for wireless communication but applicable to any shared medium.
Pure ALOHA:
How it works: In pure ALOHA, stations transmit whenever they have data to
send, without sensing the medium for activity. This leads to potential collisions
between stations sending data at the same time.
If a collision occurs, all stations involved wait for a time-out and then try
again after a random backoff time.
Pure ALOHA operates with relatively low efficiency due to the high possibility
of collisions. At best, pure ALOHA achieves around 18% throughput in an
ideal scenario, meaning only a small fraction of total transmission time is used
successfully.
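The classical pure-ALOHA analysis gives throughput S = G·e^(−2G), where G is the average number of frames offered per frame time; the maximum, reached at G = 0.5, is 1/(2e) ≈ 0.184, which is the "around 18%" figure quoted above. A quick check:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): fraction of frame times used successfully at load G."""
    return G * math.exp(-2 * G)

peak = pure_aloha_throughput(0.5)   # the maximum occurs at G = 0.5
print(round(peak, 3))               # 0.184, i.e. about 18% of the channel
```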
Improved Versions:
To improve upon pure ALOHA, Carrier Sense Multiple Access (CSMA) was
introduced. It adds the ability to sense the medium before transmitting, reducing the
likelihood of collisions.
CSMA improves upon the ALOHA protocol by introducing a mechanism to sense the
medium before transmitting, aiming to reduce collisions and improve overall
efficiency.
Propagation Delay: Collisions can still occur in CSMA because signals take time
to propagate across the network. If a station transmits a frame and another
station does not detect this transmission due to the delay in signal
propagation, both may transmit simultaneously, leading to a collision.
Example: As illustrated, at time t1, station B senses the medium and finds it
idle, so it starts sending its frame. At a slightly later time t2 (where t2 > t1),
station C also senses the medium and detects it as idle because the first bit
from station B has not yet reached it. Both stations transmit simultaneously,
causing a collision and the destruction of both frames.
CSMA Types:
1-Persistent CSMA:
o After sensing the medium, the station sends data immediately if the medium is
idle. If the medium is busy, it continuously senses the medium until it becomes
idle.
o Drawback: This approach increases the chances of a collision if multiple stations
are waiting for the medium to become idle simultaneously.
Non-Persistent CSMA:
o After sensing the medium, if it is busy, the station waits for a random amount
of time before attempting to sense and transmit again.
o Advantage: It reduces the probability of collisions by preventing stations from
all attempting to send at the exact moment the medium becomes idle.
P-Persistent CSMA:
o Used mainly in slotted channels. After sensing the medium, if it is idle, the station
sends the frame with a probability of p or defers to the next slot with a
probability of 1-p.
Even with sensing, collisions can still occur due to propagation delays, and strategies
like CSMA with Collision Detection (CSMA/CD) and CSMA with Collision
Avoidance (CSMA/CA) have been developed to manage this.
Vulnerable Time
The vulnerable time in Carrier Sense Multiple Access (CSMA) represents the
window of time during which a collision can occur. This time is directly related to the
propagation delay (Tp), which is the time it takes for a signal to travel from one
end of the network medium to the other.
Detailed Explanation:
When a station (e.g., station A) starts sending a frame, it takes time for the
first bit of that frame to propagate to the furthest station (e.g., station D).
This propagation time is referred to as Tp.
During this propagation time, the medium still appears idle to other stations
because the signal hasn't reached them yet. If another station (such as B, C,
or D) starts transmitting during this period, a collision will occur.
Vulnerable time is thus defined as the propagation time, Tp, because that
is the period in which other stations might still attempt to transmit and cause
a collision.
Worst-Case Scenario:
Once the first bit of the frame reaches station D (or the furthest point on the
medium), all stations will be aware that the medium is busy, and no further
collisions will occur for this transmission.
CSMA/CD
1. Sense the Medium: Before sending data, a station first listens to the
communication medium to check if it is idle or busy.
2. Transmit the Frame: If the medium is idle, the station starts transmitting
its frame.
3. Monitor for Collision: While transmitting, the station keeps listening to the
medium to detect any possible collisions.
4. Collision Detection: A collision occurs when another station also begins
transmitting. The signals overlap, and the stations detect that a collision has
occurred.
5. Abort Transmission: Upon detecting a collision, the station immediately
stops transmitting to minimize data loss and avoid network congestion.
6. Random Backoff: After aborting, the station waits for a random backoff
period (using a method like binary exponential backoff) before trying to
retransmit the frame, thus reducing the likelihood of another immediate
collision.
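Step 6 is commonly implemented as binary exponential backoff: after the k-th collision, the station waits a random number of slot times drawn from [0, 2^k − 1], with the exponent capped. A minimal sketch (the cap of 10 follows classic Ethernet practice; treat the numbers as assumptions):

```python
import random

def backoff_slots(collisions: int, max_exponent: int = 10) -> int:
    """Binary exponential backoff: random wait in [0, 2^k - 1] slot times."""
    k = min(collisions, max_exponent)  # the exponent stops growing after the cap
    return random.randrange(2 ** k)
```

After, say, 3 collisions the wait is 0 to 7 slots; because each colliding station draws independently, the chance that the same stations collide again immediately drops with every attempt.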
At time t1: Station A senses that the medium is idle and begins transmitting
its frame.
At time t2: Station C, unaware that Station A has started transmitting (due
to propagation delay), also senses the medium as idle and begins transmitting
its frame.
The first bits of both frames collide somewhere on the network medium.
At time t3: Station C detects the collision upon receiving the first bit of
Station A's frame and immediately stops transmitting.
At time t4: Station A also detects the collision when it receives the first bit of
Station C's frame and stops transmitting as well.
Transmission Durations:
Both stations wait for a random period after the collision to avoid transmitting at the
same time again, then retransmit their frames.
Significance of CSMA/CD:
In this scenario, CSMA/CD is managing a collision that occurs when two stations, A
and C, transmit data at the same time.
1. At time t1: Station A executes its persistence procedure and begins transmitting
its frame. It has successfully sensed the medium as idle, so it starts to send the
data.
2. At time t2: Station C, unaware that station A is transmitting (because of
propagation delay), senses the medium as idle and starts transmitting its own
frame. The signals from both stations begin propagating in both directions.
3. Collision Detection:
o At time t3: Station C detects a collision. This happens when the first bit
of Station A’s frame reaches C. Upon detecting the collision, C
immediately stops transmitting (or after a very short delay).
o At time t4: Station A detects the collision when the first bit of Station C's
frame reaches it. A also aborts transmission at this point.
4. Transmission Durations:
o Station A transmits for the time interval between t1 and t4. So, the
duration of A’s transmission is t4 − t1.
o Station C transmits for the time interval between t2 and t3. The duration
of C’s transmission is t3 − t2.
5. Collision Handling:
o Both stations detect that their transmissions failed, so they stop sending
data and will follow the CSMA/CD backoff algorithm, waiting a random
time before attempting to retransmit.
Key Observations:
Propagation Delay: The delay in signal propagation can cause a station to
believe the medium is idle when it is not, leading to a collision.
Collision Detection: Both stations detect the collision after they have
already begun transmitting, but the time of detection differs based on their
positions and the propagation delay.
The graph illustrates the timeline of events, showing the duration of transmission for
both stations and highlighting when each station detects the collision.
The timeline is essential for visualizing how collisions happen and how CSMA/CD
handles them by aborting transmissions and avoiding further collisions using a
backoff strategy.
In CSMA/CD (Carrier Sense Multiple Access with Collision Detection), the concept
of minimum frame size ensures that a station can detect collisions before finishing
its transmission. This requirement is based on the fact that the station must still be
transmitting when a collision occurs, in order to detect it and stop the transmission.
For collision detection to work effectively, the frame transmission time (Tfr) must be
at least twice the maximum propagation time (2Tp). The propagation time is the
time it takes for a signal to travel from one end of the network to the other.
Worst-Case Scenario:
Example 12.5:
Given: A CSMA/CD network with a bandwidth of 10 Mbps and a maximum
propagation time (Tp) of 25.6 µs.
Since Tfr = 2 × Tp, the frame transmission time must be at least
2 × 25.6 µs = 51.2 µs.
This results in a minimum frame size of 64 bytes, which ensures that the station
can still detect collisions before the frame is fully transmitted. This frame size is also
the minimum frame size for Standard Ethernet.
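The arithmetic behind this result can be checked directly: the frame must outlast one round trip (2 × Tp), so the minimum frame is bandwidth × 2 × Tp. Integer units (bps and nanoseconds) keep the sketch exact:

```python
def min_frame_bits(bandwidth_bps: int, tp_ns: int) -> int:
    """Frame must last at least one round trip (2 * Tp) so the sender is
    still transmitting when a worst-case collision signal returns."""
    round_trip_ns = 2 * tp_ns
    return bandwidth_bps * round_trip_ns // 1_000_000_000

bits = min_frame_bits(10_000_000, 25_600)  # 10 Mbps, Tp = 25.6 µs = 25600 ns
print(bits, bits // 8)  # 512 bits -> 64 bytes, the Standard Ethernet minimum
```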
Procedure
In CSMA/CD (Carrier Sense Multiple Access with Collision Detection), the procedure
for handling frame transmission and collisions is more sophisticated than in simpler
protocols like ALOHA, due to the inclusion of persistence mechanisms, collision
detection, and jamming signals.
1. Persistence Process:
o Before transmission begins, the station senses the channel using one of
the persistence processes:
Nonpersistent: The station waits a random amount of time if
the channel is busy.
1-persistent: The station keeps sensing and transmits as soon
as the channel is idle.
p-persistent: The station transmits with a probability p if the
channel is idle, or waits for the next time slot if busy.
o These persistence processes help minimize collisions by controlling when
a station attempts to access the medium.
2. Frame Transmission with Continuous Collision Detection:
o Unlike in ALOHA where the entire frame is transmitted before checking
for acknowledgment, in CSMA/CD, transmission and collision detection
are continuous.
The flow diagram for CSMA/CD illustrates the procedure in a loop. The
station continuously monitors the medium while transmitting, checking for
one of two outcomes:
o Transmission completes successfully: The entire frame is sent,
and no collision is detected.
o Collision detected: Transmission stops, the jamming signal is sent,
and the process restarts after backoff.
CSMA/CA
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) is primarily used
in wireless networks and is designed to prevent collisions before they happen.
Unlike CSMA/CD, which detects and manages collisions after they occur, CSMA/CA
uses a proactive approach to avoid collisions altogether.
CSMA/CA Strategies
o After a station finds the channel idle, it does not immediately start transmitting.
It waits for a brief period known as the Interframe Space (IFS).
o This waiting period is crucial because there might be a distant station that has
already started transmitting, but the signal hasn't reached the current station yet.
o The IFS time allows for any ongoing transmissions from other stations to be
detected before starting a new transmission.
o Prioritization: Stations or frame types with shorter IFS times are given higher
priority, meaning they can access the channel sooner.
Contention Window:
o After the IFS period, the station waits for an additional time period called the
contention window. This window is divided into slots, and the station selects a
random number of slots as its wait time.
o The size of the contention window is dynamic, growing with each attempt to
access the medium, following the binary exponential backoff strategy. This
helps reduce congestion on the channel.
o During this period, the station continually senses the medium. If it detects that
the channel is busy, it pauses the timer and resumes once the channel is idle
again. This ensures that stations with the longest waiting time have priority
when the channel becomes free.
Acknowledgment:
Operation of CSMA/CA
Sense the Channel: Before transmission, the station checks if the medium
is idle.
Wait for IFS: Even if the channel is idle, the station waits for the IFS period
to allow any ongoing distant transmissions to be detected.
Choose Random Slot in Contention Window: After the IFS, the station
chooses a random slot within the contention window to minimize the
likelihood of multiple stations transmitting simultaneously.
Monitor Channel Continuously: The station checks the channel after each
time slot. If the channel becomes busy, it pauses the countdown and waits
until the channel is idle again.
Transmit and Wait for Acknowledgment: If the channel remains idle, the
station transmits its frame and waits for an acknowledgment to ensure
successful transmission.
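The pause-and-resume behavior of the contention-window timer can be sketched as follows. The interface is invented for illustration: the channel is given as one boolean per elapsed slot, True meaning the channel was sensed busy during that slot.

```python
def contention_countdown(backoff_slots, busy_per_slot):
    """Count down idle slots only; the timer freezes while the channel is busy.
    Returns the total number of slots that elapse before transmission starts,
    or None if the trace ends before the countdown finishes."""
    remaining = backoff_slots
    for elapsed, busy in enumerate(busy_per_slot, start=1):
        if busy:
            continue           # timer paused: the channel is in use
        remaining -= 1         # one idle slot consumed
        if remaining == 0:
            return elapsed     # countdown done: the station may transmit
    return None
```

A station that chose 3 slots but saw the channel busy during one slot transmits after 4 slots in total; because a paused timer keeps its remaining value, the station that has already waited longest wins when the channel frees up.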
Any station that receives this RTS frame will set a timer known as the NAV
(Network Allocation Vector).
Purpose of NAV: The NAV serves as a countdown timer, telling each station
how long it must wait before attempting to access the medium. While the
NAV is active, a station will defer from transmitting, even if it senses that
the physical channel is idle. This ensures that stations don't interfere with an
ongoing transmission, thus preventing collisions.
Process: When a station sends an RTS, other stations in range start their
NAV timers based on the duration mentioned in the RTS frame. Before
checking the channel for idleness, a station first checks whether its NAV has
expired. If it has not expired, the station will not attempt to access the
medium.
The NAV helps with the collision avoidance part of CSMA/CA by ensuring that
other stations remain silent for the required period, reducing the chance of collisions.
The process looks like this:
1. Station sends RTS: The station wanting to transmit data sends an RTS
(Request to Send) frame that includes the time it needs to occupy the
medium.
2. Other stations set NAV: All stations that receive this RTS start their NAV
timer, refraining from transmitting during this period.
3. Transmission proceeds: The transmitting station receives a Clear to Send
(CTS) frame from the receiver, allowing the data transfer to proceed.
4. NAV countdown: During the data transmission, the NAV of other stations
continues counting down. Once the timer expires, stations can check the
channel for idleness and try to access it again.
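The NAV deferral rule can be sketched as a per-station timer; note that a station consults its NAV before it even senses the physical channel. Class and method names here are illustrative, not from any real API.

```python
class NavStation:
    """Illustrative model of the NAV deferral rule in CSMA/CA."""
    def __init__(self):
        self.nav_until = 0.0  # time until which the station must defer

    def hear_rts(self, now: float, duration: float) -> None:
        # the RTS frame carries how long the medium will be occupied
        self.nav_until = max(self.nav_until, now + duration)

    def may_contend(self, now: float, channel_idle: bool) -> bool:
        # the NAV is checked first; carrier sense matters only after it expires
        return now >= self.nav_until and channel_idle
```

A station that overhears an RTS reserving the medium for 5 time units will refuse to contend at t = 3 even if the channel sounds idle, and becomes eligible again at t = 5.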
During the handshaking period (when RTS or CTS frames are exchanged), collisions
can still occur.
If multiple stations send RTS frames simultaneously, these control frames may
collide because there is no mechanism for collision detection in wireless
networks.
Handling collisions: If a station sends an RTS and does not receive a CTS
frame from the receiver, it assumes that a collision occurred. In response, the
station initiates a backoff process, waiting a random period before trying to
send the RTS again. This reduces the chance of repeated collisions.
The hidden station problem occurs when a station is out of range of other
stations communicating with the access point. In such cases, stations may
unknowingly transmit at the same time, leading to collisions.
The combination of the NAV and the RTS/CTS handshake mechanism helps
CSMA/CA reduce collisions, particularly in wireless networks, where issues like
hidden terminals are common.
CONTROLLED ACCESS
In controlled-access methods, stations collaborate to decide which one has the right
to send data. This approach ensures more organized communication and reduces
the chances of collision, as stations take turns or follow specific rules. Below are the
three main controlled-access methods: Reservation, Polling, and Token Passing.
Reservation Method
In the reservation method, time is divided into intervals, and within each interval,
a reservation frame precedes the transmission of data frames. The idea is that a
station must first reserve a time slot to transmit data by signaling its intention in a
"minislot" that belongs to it in the reservation frame.
Example:
In the next interval, only station 1 makes a reservation, so it is the only one
that sends data in that time period.
Polling Method
The polling method is commonly used in systems where one device acts as a
primary station, and the other devices are secondary stations. The primary
station controls the communication process and decides which secondary station is
allowed to transmit data at any given time.
However, the major drawback of polling is that the entire system depends on
the primary station. If the primary station fails, communication stops, causing the
whole system to break down (as seen in Figure 12.19).
Polling Example:
In a network, the primary station first polls secondary station A. If A has data
to send, it sends its data.
Once A finishes, the primary polls station B, and the process continues in this
manner.
A token is passed around the network, and only the station that holds the token is
allowed to transmit data. Once the station finishes, it passes the token to the next
station in the sequence.
This method ensures that only one station sends data at a time, eliminating
the possibility of collisions.
It is efficient in networks with predictable or steady traffic, as stations get
regular chances to transmit.
Example:
In a ring network, Station A holds the token and sends data. Once done, it
passes the token to Station B, which then transmits its data, and so on.
This method is used in protocols like Token Ring and FDDI (Fiber Distributed
Data Interface).
Each method ensures controlled access and reduces the likelihood of collisions, but
they differ in complexity, scalability, and robustness based on the network structure
and requirements.
Select
In controlled access, polling and selecting are key functions used by the primary
device to manage communication with secondary devices.
Select Function
The select (SEL) function is used by the primary device whenever it has data to
send to a secondary device.
Since the primary controls the communication link, it knows that the link is available,
but it needs to ensure that the target secondary device is ready to receive data.
Steps:
1. The primary device sends a select (SEL) frame, which contains the address
of the intended secondary device.
2. The secondary device receives the select frame and responds, indicating
whether it is ready to receive data.
3. If the secondary is ready, the primary device proceeds to send the data.
The select function ensures that data is only sent when the secondary device is
prepared to handle the transmission, preventing wasted communication attempts.
Poll Function
The poll function is used by the primary device to check whether any of the
secondary devices have data to send.
Steps:
1. The primary device sends a poll frame to a secondary device, asking if it has
data to send.
2. The secondary device responds with:
o A NAK (Negative Acknowledgment) frame if it has no data to
send.
o A data frame if it has data ready for transmission.
3. If the secondary device sends a data frame, the primary device receives the
data and sends an ACK (Acknowledgment) frame, confirming the
successful receipt of the data.
4. If the response is a NAK frame, the primary device moves on to poll the next
secondary device in the same manner.
Polling ensures that the primary device systematically checks with each secondary
device to see if any have data to send, maintaining orderly communication without
collisions.
Select: Used when the primary device has data to send. It checks if the
secondary device is ready to receive.
Poll: Used when the primary device is ready to receive data. It checks if
secondary devices have any data to send.
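One round of the poll function above can be sketched as a loop over the secondaries. The frame names and return format are invented for illustration; real poll frames are, of course, structured link-layer frames.

```python
def poll_round(secondaries):
    """secondaries: dict mapping station name -> queued frame, or None if idle.
    Returns the sequence of exchanges as (station, event) pairs."""
    log = []
    for name, frame in secondaries.items():
        # the primary sends a poll frame to `name` (not modeled explicitly)
        if frame is None:
            log.append((name, 'NAK'))   # nothing to send: negative acknowledgment
        else:
            log.append((name, 'DATA'))  # the secondary transmits its frame
            log.append((name, 'ACK'))   # the primary confirms receipt
    return log

print(poll_round({'A': 'frame1', 'B': None}))
# [('A', 'DATA'), ('A', 'ACK'), ('B', 'NAK')]
```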
Token Passing
Each station has a predecessor (the station before it in the ring) and a
successor (the station after it).
The right to access the channel is controlled by a special packet known as a
token.
1. Token Circulation:
o The token circulates through the network. Possession of the token
grants a station the right to send data.
o When a station wants to send data, it waits to receive the token from
its predecessor.
o Once it receives the token, the station can send its data.
2. Passing the Token:
o After the data has been sent, the station releases the token, passing it
to the next logical station (its successor).
o If a station receives the token and has no data to send, it simply
passes the token to the next station.
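The two rules above can be sketched as one circulation of the token around the logical ring. This is a pure simulation; the station names and the has_data map are invented for illustration.

```python
def token_round(ring, has_data, start=0):
    """Pass the token once around the logical ring, beginning at index `start`.
    Returns the stations that transmitted, in token order."""
    transmitted = []
    n = len(ring)
    for i in range(n):
        holder = ring[(start + i) % n]   # current token holder
        if has_data.get(holder):
            transmitted.append(holder)   # holder sends, then releases the token
        # the token passes to the successor whether or not data was sent
    return transmitted

print(token_round(['A', 'B', 'C'], {'A': True, 'C': True}))  # ['A', 'C']
```

Because only the token holder may transmit, the output order always follows the ring order, which is the collision-freedom property discussed below.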
Token Management
Token Limitation: Stations must be limited in how long they can hold the
token to prevent monopolization.
Token Monitoring: Mechanisms are needed to detect if the token has been
lost or destroyed (e.g., if a station holding the token fails).
Priority Assignment: Priorities can be assigned to stations and types of
data, ensuring that high-priority stations can access the token when needed.
Logical Ring
In a token-passing network, the logical arrangement of stations can differ from their
physical connections:
Dual Ring Topology: This design uses a secondary ring that operates in the
reverse direction. If the main ring fails, the two rings can combine to form a
temporary ring, ensuring continued operation. This topology is used in high-
speed networks like FDDI (Fiber Distributed Data Interface) and CDDI
(Copper Distributed Data Interface).
Bus Ring Topology (Token Bus): Stations are connected to a single bus
but create a logical ring by knowing the addresses of their predecessors and
successors. When a station releases the token, it includes the address of its
successor. Only the station with the matching address can use the token.
Star Ring Topology: Stations connect to a central hub, which creates a
logical ring. This design offers resilience against link failures; if one link goes
down, the hub can bypass it, allowing the network to continue functioning.
This topology is used in IBM's Token Ring LAN.
Efficiency: Token passing reduces collisions since only one station can
transmit at a time.
Predictability: Token management allows for orderly data transmission and
can provide guarantees for bandwidth and latency.
Complexity: Maintaining the token and ensuring it is not lost requires
additional overhead and management.