
MODULE 3-DATA LINK CONTROL

1. Data link control

• Sending data in the form of signals over a transmission link within the network
requires much more than just the physical transmission link and the
synchronization and interfacing techniques: the data exchange must also be
controlled and managed to ensure reliable data transfer across the physical link.

• To achieve the necessary data control and management, a layer of logic
(software layer) is added above the physical layer and this logic is referred to as
data link control or a data link control protocol.
• When a data link control protocol is used, the transmission medium between
systems is referred to as a data link.
• In a layered network architecture such as the ISO-OSI reference model, the logic for
data link control is provided by the data link layer, which is responsible for
providing reliable data transfer across the data link within the network.
• The data link layer is sub-divided into 2 sub-layers.
• The upper sub-layer that is responsible for flow and error control is called the
logical link control (LLC) layer and the lower sub-layer that is mostly
responsible for multiple access resolution is called the media access control
(MAC) layer.
• The data link control functionalities include:
o Line discipline
o Flow Control
o Error Control and
o Framing

2. LINE DISCIPLINE

• In network systems, no device should be allowed to transmit until that device
has evidence that the intended receiver is able to receive and is prepared to
accept the transmission.
• With no way to determine the status of the intended receiver, the transmitting
device may waste its time sending data to a non-functioning receiver or may
interfere with signals already on the link.
• The line discipline functions of the data link layer supervise the establishment
of links and the right of a particular device to transmit at a given time.
• Line discipline answers the question who should send now?
• There are two fundamental protocols that are used to accomplish line
discipline in a data communications network:

o Enquiry/Acknowledgement(ENQ/ACK)

o Poll/Select

Enquiry/Acknowledgement (ENQ/ACK)

• This method is used when there is a dedicated link between sender and receiver.
• This method is used in point-to-point communication.
• It uses the half-duplex method.
• Enquiry/Acknowledgment (ENQ/ACK) is a relatively simple data-link-layer line
discipline protocol that works best in simple network environments where there
is no doubt as to which station is the intended receiver.
• ENQ/ACK line discipline procedures determine which device on a network can
initiate a transmission and whether the intended receiver is available and ready
to receive a message.
• Assuming all stations on the network have equal access to the transmission
medium, a data session can be initiated by any station using ENQ/ACK.
• The initiating station begins a session by transmitting a frame of data called
an enquiry (ENQ), which identifies the receiving station.
• The ENQ sequence solicits the receiving station to determine if it is ready to
receive a message.
• If the destination station is ready to receive, it responds with a positive
acknowledgment (ACK), and if it is not ready to receive, it responds with a
negative acknowledgment (NAK).
• If neither an ACK nor a NAK arrives within a specific time period (called the
timeout), the sender retransmits the ENQ frame.
• A NAK transmitted by the destination station in response to an ENQ generally
indicates a temporary unavailability and the initiating station will simply attempt
to establish a session later
• An ACK from the destination station indicates that it is ready to receive data and
tells the initiating station that it is free to transmit its message.
• All transmitted messages end with a special unique terminating sequence called
EOT (end of transmission).
• An ACK transmitted in response to a received message indicates the message
was received without errors, and a NAK indicates that the message was
received containing errors.
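The ENQ/ACK exchange described above can be summarized in a short sketch. This is an illustrative model only: the frame names follow the text, while the link object, its send_frame/receive_frame primitives, the retry limit, and the timeout value are hypothetical assumptions, not part of any real library.

```python
# Illustrative sketch of ENQ/ACK line discipline (not a real API).
# send_frame() and receive_frame() are hypothetical link primitives;
# receive_frame() returning None models a timeout with no reply.

ENQ, ACK, NAK, EOT = "ENQ", "ACK", "NAK", "EOT"

def initiate_session(link, message_frames, max_retries=3, timeout=2.0):
    """Establish a session with ENQ, send the message, terminate with EOT."""
    for _ in range(max_retries):
        link.send_frame(ENQ)                      # solicit the intended receiver
        reply = link.receive_frame(timeout)
        if reply == ACK:                          # receiver ready: transmit
            for frame in message_frames:
                link.send_frame(frame)
            link.send_frame(EOT)                  # end of transmission
            return link.receive_frame(timeout) == ACK  # error-free receipt?
        if reply == NAK:                          # temporarily unavailable
            return False                          # try to establish a session later
        # no reply at all: timeout expired, retransmit the ENQ
    return False
```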
POLL/SELECT

• The poll/select line discipline is best suited to centrally controlled data
communications networks using a multipoint topology.
• One station or device is designated as the primary or host station and all
other stations are designated as secondaries.
• Multipoint data communications networks using a single transmission medium
must coordinate access to the transmission medium to prevent more than one
station from attempting to transmit data at the same time.
• In addition, all exchanges of data must occur through the primary station.
Therefore, if a secondary station wishes to transmit data to another secondary
station, it must do so through the primary station.
• The primary initiates all data transmissions on the network with polls and
selections.
• A poll is a solicitation sent from the primary to a secondary to determine if the
secondary has data to transmit.
• A selection is how the primary designates a secondary as a destination or
recipient of data. A selection is also a query from the primary to determine if the
secondary is ready to receive data.
• All secondary stations receive all polls and selections transmitted from the
primary. With poll/select procedures each secondary station is assigned one or
more addresses for identification.
• It is the secondaries' responsibility to examine the address and determine if the
poll or selection is intended for them.
• A primary can poll only one station at a time; however, it can select more than
one secondary at a time using group (more than one station) or broadcast (all
stations) addresses.
• If the secondary has a message to send, it responds to the poll with the
message. This is called a positive acknowledgment to a poll.
• If the secondary has no message to send, it responds with a negative
acknowledgment (NAK) to the poll.
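The primary's behaviour can be sketched as two routines, one for polling and one for selection. The frame names, the link helpers, and the timeout are hypothetical simplifications used only to illustrate the flow described above.

```python
# Illustrative sketch of poll/select from the primary station's point of view.
# The link helpers and frame tuples here are hypothetical, not a real protocol API.

def poll_secondaries(link, secondary_addresses, timeout=2.0):
    """Poll each secondary in turn; collect any data frames received."""
    received = []
    for addr in secondary_addresses:              # only one station is polled at a time
        link.send_frame(("POLL", addr))
        reply = link.receive_frame(timeout)
        if reply == "NAK":                        # negative acknowledgment: nothing to send
            continue
        received.append(reply)                    # positive response: a data frame
        link.send_frame(("ACK", addr))            # acknowledge receipt of the frame
    return received

def select_secondary(link, addr, data_frame, timeout=2.0):
    """Select a secondary as recipient, then send data once it signals ready."""
    link.send_frame(("SEL", addr))                # selection frame carrying the address
    if link.receive_frame(timeout) == "ACK":      # secondary is ready to receive
        link.send_frame(data_frame)
        return True
    return False
```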

3. FLOW CONTROL

• Whenever an entity produces items and another entity consumes them,
there should be a balance between production and consumption rates.
• Flow control is a technique for ensuring that a sender does not overwhelm a
receiver with data frames at a rate faster than the receiver can
accept them.
• The receiving entity typically allocates a data buffer of some maximum length
for a transfer.
• In the absence of flow control, the receiver's buffer may fill up and overflow
while it is processing old data frames.
• The flow-control protocols commonly employed by the Data Link layer
include:
• Stop-and-Wait Flow Control

• Sliding-Window Flow Control


Stop-and-Wait Flow Control

• The simplest form of flow control, known as stop-and-wait flow control, works
as follows:
1. A sender transmits a frame.

2. After the receiver receives the frame, it indicates its willingness to accept
another frame by sending back an acknowledgment to the frame just
received.

3. After having sent a frame, the sender is required to wait until the
acknowledgement frame arrives before sending the next frame.

• The receiver can thus stop the flow of data simply by withholding
acknowledgment.
• This process of sending a frame and waiting for an acknowledgment
continues as long as the sender has data to send.
• To end the transmission, the sender transmits an end-of-transmission (EOT)
frame.
• As part of error control in the stop-and-wait protocol, when the sender does
not receive an ACK for a previously sent frame after a certain period of time,
the sender times out and retransmits that frame.
• There are two cases in which the sender doesn't receive an ACK: either the
ACK is lost or the frame itself is lost.
• It works in half-duplex mode over a channel capable of bidirectional
information transfer.
• The main advantage of the stop-and-wait protocol is its accuracy.
• To improve efficiency while still making reliable use of the network bandwidth,
the sliding-window protocol has been introduced.
Sliding-window flow control

• Efficiency can be greatly improved by allowing multiple frames to be in transit
at the same time.
• It works in full duplex link.
• In this method, the sender does not wait for an acknowledgment for each
frame. Instead, a “window” of frames gets sent, and acknowledgments can
come for any frame within that window.
• This window represents the maximum number of frames in transit at any
given time. If an acknowledgment for a frame is not received within a specific
timeframe, it is assumed to be lost or corrupted, leading to its retransmission.
• Sliding Window Flow Control balances both efficiency and reliability, making
it suitable for scenarios where high-speed data transmission is required
without compromising on data integrity.
• The sender maintains a list of sequence numbers that it is allowed to send, and
the receiver maintains a list of sequence numbers that it is prepared to receive.
Each of these lists can be thought of as a window of frames. The operation is
referred to as sliding-window flow control.
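A minimal sender-side sketch of the window behaviour is shown below, assuming 3-bit (modulo-8) sequence numbers and a window of 7. The link primitives send_frame and wait_for_ack, and the idea that an acknowledgment returns the highest frame index confirmed, are assumptions made for illustration.

```python
# Minimal sketch of sender-side sliding-window flow control.
# Assumes modulo-8 sequence numbers; link primitives are hypothetical.

WINDOW_SIZE = 7          # with modulo-8 numbering, at most 7 frames outstanding
MODULO = 8

def send_with_window(link, frames):
    base = 0             # index of the oldest unacknowledged frame
    next_seq = 0         # index of the next frame to send
    while base < len(frames):
        # Fill the window: keep sending while fewer than WINDOW_SIZE frames
        # are outstanding and there is still data left to send.
        while next_seq < len(frames) and next_seq - base < WINDOW_SIZE:
            link.send_frame(seq=next_seq % MODULO, payload=frames[next_seq])
            next_seq += 1
        # A single acknowledgment can confirm several frames at once; here the
        # hypothetical wait_for_ack() returns the highest frame index received
        # correctly, which slides the window forward.
        acked = link.wait_for_ack()
        base = max(base, acked + 1)
```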

4. Error Control
• Since the underlying technology at the physical layer is not fully reliable, we
need to implement error control at the data-link layer to prevent the receiving
node from delivering corrupted packets to its network layer
• Error control refers to mechanisms to detect and correct errors that occur in
the transmission of frames.
• During frame transmission, two types of errors may occur:
o Lost frame: A frame (data frame or acknowledgement frame) fails to
arrive at the other side. For example, a noise burst may damage a
frame to the extent that the receiver is not aware that a frame has been
transmitted.
o Damaged frame: A recognizable frame (data frame or
acknowledgement frame) does arrive, but some of the bits are in error
(have been altered during transmission).
• The most common techniques for error control are based on some or all of
the following components:

o Positive acknowledgment: The destination returns a positive
acknowledgment to successfully received, error-free frames.
o Retransmission after timeout: The source retransmits a frame that
has not been acknowledged after a predetermined amount of time.
o Negative acknowledgment and retransmission: The destination
returns a negative acknowledgment to frames in which an error is
detected. The source retransmits such frames.
o Error detection and correction: For a given frame of bits, enough
additional bits (redundant bits) that constitute an error-detecting code
or error correcting code are added by the transmitter.
• An error-detecting code can only identify the presence of an error; once the error
is discovered, the receiver requests that the sender retransmit the entire data unit.
This is known as Backward Error Correction.
• The use of error-correcting codes is often referred to as Forward Error
Correction, as the receiver uses the error-correcting code to correct the
errors automatically.
• These mechanisms are all referred to as Automatic Repeat reQuest (ARQ).
The effect of ARQ is to turn an unreliable data link into a reliable one.
• Three versions of ARQ have been standardized:

• Stop-and-wait ARQ
• Go-back-N ARQ
• Selective-reject ARQ
Stop-and-wait ARQ

• Stop-and-wait ARQ is based on the stop-and-wait flow control technique.
• In Stop-and-wait ARQ, the source station transmits a single frame and
then must await an acknowledgment (ACK).
• No other data frames can be sent until the destination station's reply arrives
at the source station.
• Two sorts of errors could occur:
• The frame that arrives at the destination could be damaged. The
receiver detects this by using the error-detection technique and
simply discards the frame.
• If no acknowledgement arrives within a specific time, the sender
times out and retransmits the frame.
• This method requires that the transmitter maintain a copy of a
transmitted frame until an acknowledgment is received for that
frame.

• The second sort of error is a damaged acknowledgment.

• To avoid this problem, frames are alternately labelled with 0 or 1,
and positive acknowledgments are of the form ACK0 and ACK1. An
ACK0 acknowledges receipt of a frame numbered 1 and indicates
that the receiver is ready for a frame numbered 0.

• The principal advantage of stop-and-wait ARQ is its simplicity. Its principal
disadvantage is that stop-and-wait is an inefficient mechanism.
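The alternating 0/1 numbering can be captured in a short sketch. The link primitives, the timeout value, and the convention that the ACK number names the next frame the receiver expects (so ACK1 confirms frame 0) follow the description above; everything else is an assumption for illustration.

```python
# Sketch of stop-and-wait ARQ with alternating sequence numbers 0 and 1.
# send_frame / wait_for_ack are hypothetical link primitives; wait_for_ack
# returns the ACK number, or None if nothing arrives before the timeout.

def stop_and_wait_send(link, frames, timeout=2.0):
    seq = 0
    for payload in frames:
        while True:
            link.send_frame(seq=seq, payload=payload)   # keep a copy until ACKed
            ack = link.wait_for_ack(timeout)
            # ACK(1 - seq) confirms the frame numbered seq was received and that
            # the receiver is now ready for the frame numbered 1 - seq.
            if ack == 1 - seq:
                break                                   # acknowledged: move on
            # damaged frame, lost frame, or damaged ACK: retransmit after timeout
        seq = 1 - seq                                   # alternate 0, 1, 0, 1, ...
```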

Go-Back-N ARQ
• In Go-Back-N ARQ, a station may send a series of frames sequentially
numbered modulo some maximum value (called window size).
• The number of unacknowledged frames outstanding is determined by
window size.
• While no errors occur, the destination will acknowledge incoming frames as
usual.
• If the destination station detects an error in a frame, it may send a negative
acknowledgment (REJ i) for that frame.
• The destination station will discard that frame and all future incoming
frames until the frame in error is correctly received.
• Thus, the source station, when it receives a REJ i, must retransmit the
frame in error (frame i) plus all succeeding frames (frame i + 1 onwards)
that were transmitted in the interim.
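The retransmission rule can be sketched as follows. The frame object, the buffer of unacknowledged frames, and the link calls are hypothetical simplifications; real implementations also manage timers and window limits not shown here.

```python
# Sketch of the Go-Back-N rules described above (hypothetical primitives).

def resend_from(link, sent_buffer, rej_seq, next_seq):
    """On REJ i, retransmit frame i plus all succeeding frames already sent."""
    for seq in range(rej_seq, next_seq):
        link.send_frame(seq=seq, payload=sent_buffer[seq])

def go_back_n_receiver(link, deliver):
    """Receiver accepts only in-order, error-free frames; others are discarded."""
    expected = 0
    while True:
        frame = link.receive_frame()
        if frame.seq == expected and frame.is_valid():
            deliver(frame.payload)
            link.send_ack(expected)          # acknowledge incoming frames as usual
            expected += 1
        else:
            link.send_rej(expected)          # discard and ask for retransmission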

Selective- Reject ARQ

• With Selective-Reject (or Selective-Repeat) ARQ, the only frames
retransmitted are those that receive a negative acknowledgment (SREJ, for
Selective REJect), or those that time out.
• Selective reject would appear to be more efficient than Go-Back-N, because
it minimizes the amount of retransmission.
• Selective reject is a useful choice for a satellite link because of the long
propagation delay involved.

5. FRAMING

• The bit stream received by the Data Link layer is not guaranteed to be error
free. It is up to the Data Link layer to detect and, if necessary, correct errors.
• The usual approach is for the Data Link layer to break up the bit stream into
discrete frames, compute a short token called a checksum for each frame,
and include the checksum in the frame when it is transmitted.
• When a frame arrives at the destination, the checksum is recomputed.
• If the newly computed checksum is different from the one contained in the
frame, the Data Link layer knows that an error has occurred .
• Framing can be of two types:
o In fixed size framing, the size of the frame is fixed and so the frame
length acts as delimiter of the frame.
o In variable sized framing, the size of each frame to be transmitted may
be different. So additional mechanisms are kept to mark the end of one
frame and the beginning of the next frame.
• The methods used for breaking up a bit stream into variable sized frames
include:

o Byte Count.

o Flag Bytes with Byte Stuffing (or Character Stuffing).


o Flag Bits with Bit Stuffing.

o Physical Layer Coding Violations.

• The Byte Count framing method uses a field in the header to specify the
number of bytes in the frame. When the data link layer at the destination
sees the byte count, it knows how many bytes follow and hence where the
end of the frame is.
o The trouble with this method is that the count itself can be garbled by a
transmission error; the destination will then be unable to locate the
correct start of the next frame.
o Even if the checksum is incorrect and so the destination knows that
the frame is bad, it still has no way of telling where the next frame
starts.
• The Flag Byte method gets around the problem of resynchronization after
an error by having each frame start and end with special bytes.
o Flag byte, is used as both the starting and ending delimiter.
o Two consecutive flag bytes indicate the end of one frame and the start
of the next.
o Thus, if the receiver ever loses synchronization it can just search for
two flag bytes to find the end of the current frame and the start of the
next frame.
o When a flag byte occurs inside the data, the sender's data link layer
inserts a special escape byte (ESC) just before it; a framing flag byte can
thus be distinguished from one in the data by the absence or presence of
an escape byte before it.
o The data link layer on the receiving end removes the escape bytes
before giving the data to the network layer. This technique is called
byte stuffing.
• Flag bits with bit stuffing is an alternative to byte stuffing which does framing at
the bit level, so that frames can contain an arbitrary number of bits made up
of units of any size.
o In bit stuffing, each frame begins and ends with a special flag bit
pattern, 01111110 (0x7E in hexadecimal).
o Whenever the sender's data link layer encounters five consecutive 1s
in the data, it automatically stuffs a 0 bit into the outgoing bit stream
(see the worked example after this list).
• The framing technique using Physical Layer Coding Violations is only
applicable to networks in which the encoding on the physical medium
contains some redundancy.
o Signals that do not occur in normal data (coding violations) are used to delimit frames.
o The beauty of this scheme is that, because they are unused or
reserved signals, it is easy to find the start and end of frames and there
is no need to stuff the data.
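The bit-stuffing rule referenced above can be demonstrated with a small worked example. Representing bits as a string of '0'/'1' characters is purely an illustrative convenience; the flag pattern and the stuff-a-0-after-five-1s rule come from the text.

```python
# Worked example of flag-based bit stuffing: after five consecutive 1s in the
# payload, a 0 bit is inserted so the flag pattern 01111110 never appears in data.

FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:            # five 1s in a row: stuff a 0
            out.append("0")
            ones = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    body = frame[len(FLAG):-len(FLAG)]
    out, ones = [], 0
    for b in body:
        if ones == 5:            # the bit following five 1s is a stuffed 0: drop it
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
    return "".join(out)

print(bit_stuff("0111111111"))
# the stuffed payload between the two flags becomes 01111101111
# (a 0 has been inserted after the first five consecutive 1s)
```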
6. DATA TRANSMISSION MODES

• The transmission of binary data across a data link can be accomplished in
either parallel or serial mode.

Parallel transmission

• Binary data, consisting of 1s and 0s, may be organized into groups of n bits
each.
• By grouping, we can send data n bits at a time instead of 1. This is called
parallel transmission.
• The mechanism for parallel transmission involves use of n wires to send n
bits at one time.
• That way each bit has its own wire, and all n bits of one group can be
transmitted with each clock tick from one device to another.
• The advantage of parallel transmission is speed.
• Parallel transmission can increase the transfer speed by a factor of n over
serial transmission.
• But the cost of parallel transmission is significantly high as compared to
serial transmission.
• Parallel transmission is usually limited to short distances due to its high
expense.

Serial Transmission

• In serial transmission one bit follows another, so it requires only one
communication channel rather than n to transmit data between two
communicating devices.
• Since communication within devices is parallel, conversion devices are
required at the interface between the sender and the line (parallel-to-serial)
and between the line and the receiver (serial-to-parallel).
• Serial transmission occurs in one of two ways:

o Asynchronous (character-oriented protocol)


o Synchronous (bit-oriented protocol)
Asynchronous serial transmission
• The asynchronous transmission protocols are basically byte-oriented
(character-oriented) protocols which interpret each transmitted frame as a
succession of characters, each typically composed of one byte.
• The communicating endpoints' interfaces are not continuously synchronized
by a common clock signal; instead, each character is preceded by a start
signal and followed by a stop signal.
• The start signal prepares the receiver for the arrival of data and the stop signal
resets its state to enable triggering of a new sequence.
• This mechanism is called asynchronous because, at the byte level, the
sender and receiver do not have to be synchronized. But within each byte
(character), the receiver must still be synchronized with the incoming bit
stream.
• When the receiver detects a start bit, it sets a timer and begins counting bits
as they come in.
• After n bits, the receiver looks for a stop bit, as soon as it detects the stop
bit, it waits until it detects the next start bit.
• The basic objective is to avoid the timing problem. Timing or synchronization
must only be maintained within each character; the receiver has the
opportunity to resynchronize at the beginning of each new character.
o The beginning of a character is signaled by a start bit with a value of
binary 0.
o This is followed by the 5 to 8 bits that actually make up the character.
o The bits of the character are transmitted beginning with the least
significant bit.
o The parity bit is set by the transmitter such that the total number of
ones in the character, including the parity bit, is even (even parity) or
odd (odd parity); the receiver uses this bit for error detection.
o The final element is a stop element, which is a binary 1.

• Asynchronous transmission is simple and cheap but requires an overhead
of two to three bits per character.
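The character framing just described (start bit, data bits least-significant first, parity, stop bit) can be shown with a small example. The 7-bit character width and even parity are assumptions chosen from the ranges the text allows.

```python
# Sketch of asynchronous character framing: start bit, data bits (LSB first),
# even parity bit, stop bit. A 7-bit character and even parity are assumed.

def frame_character(char: str, data_bits: int = 7) -> str:
    code = ord(char)
    bits = [(code >> i) & 1 for i in range(data_bits)]   # least significant bit first
    parity = sum(bits) % 2                               # even parity: total 1s even
    return "0" + "".join(map(str, bits)) + str(parity) + "1"
    #       ^ start bit (binary 0)          ^ parity bit   ^ stop bit (binary 1)

print(frame_character("A"))   # 'A' = 1000001 -> frame 0 1000001 0 1 (LSB first)
```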

Asynchronous Data Link Protocols


• A number of Asynchronous Data Link protocols have been developed over
the last several decades. These protocols are primarily used in modems.
• XMODEM: XMODEM is a half-duplex stop-and-wait ARQ protocol.
o The first field in the frame is a one-byte start of header (SOH).
o The second field is a two-byte header.
o The first header byte, sequence number, carries the frame number.
o The second header byte is used to check the validity of the sequence
number.
o The fixed data field holds 128 bytes of data (binary, ASCII, Boolean,
text, etc.).
o The last field, CRC, checks for errors in the data field only.
o In this protocol, transmission begins with the sending of a NAK frame
from the receiver to the sender.
o Each time the sender sends a frame, it must wait for an
acknowledgment (ACK) before the next frame can be sent.
o Besides a NAK or an ACK, the sender can receive a cancel signal
(CAN), which aborts the transmission.
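The frame layout described above (SOH, two-byte header, 128-byte data field, CRC) can be sketched as follows. The complemented sequence byte, the 0x1A padding, and the CRC-16 polynomial 0x1021 are the values conventionally used by XMODEM-CRC implementations; treat these details as assumptions rather than a definitive specification.

```python
# Sketch of packing one XMODEM-style frame as described above.

def crc16_xmodem(data: bytes) -> int:
    """Bit-by-bit CRC-16 with polynomial 0x1021 and initial value 0 (assumed)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_frame(seq: int, payload: bytes) -> bytes:
    SOH = b"\x01"                                      # one-byte start of header
    data = payload.ljust(128, b"\x1a")                 # pad the fixed 128-byte data field
    header = bytes([seq & 0xFF, 0xFF - (seq & 0xFF)])  # sequence number + validity check
    crc = crc16_xmodem(data)                           # CRC covers the data field only
    return SOH + header + data + crc.to_bytes(2, "big")

print(len(build_frame(1, b"hello")))                   # 1 + 2 + 128 + 2 = 133 bytes
```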

• YMODEM: YMODEM is a protocol similar to XMODEM, with the following
major differences:
o The data unit is 1024 bytes
o Two CANs are sent to abort a transmission.
o ITU-T CRC-16 is used for error checking.
o Multiple files can be sent simultaneously.

• ZMODEM: ZMODEM is a newer protocol combining features of both
XMODEM and YMODEM.

• BLAST: Blocked Asynchronous Transmission (BLAST) is more powerful
than XMODEM.
o It is full-duplex with sliding window flow control.
o It allows the transfer of data and binary files.

• Kermit: Kermit, designed at Columbia University, is currently the most widely
used asynchronous protocol.
o This file transfer protocol is similar in operation to XMODEM, with the
sender waiting for a NAK before it starts transmission.
o Kermit sends control characters in printable form, using two steps:
o First, the control character is transformed to a printable character by
adding a fixed number to its ASCII code representation.
o Second, the "#" character is added to the front of the transformed
character.
SYNCHRONOUS TRANSMISSION

• With synchronous transmission, a block of bits is transmitted in a steady stream
without start and stop codes.
• The block may be many bits in length.
• To prevent timing drift between transmitter and receiver, their clocks must
somehow be synchronized.
• This technique works well over short distances, but over longer distances the
clock pulses are subject to the same impairments as the data signal, and timing
errors can occur.
• The other alternative is to embed the clocking information in the data signal. For
digital signals, this can be accomplished with Manchester or differential
Manchester encoding.
• With synchronous transmission, there is another level of synchronization
required, to allow the receiver to determine the beginning and end of a block of
data.
• For that, each block begins with a preamble bit pattern and generally ends with
a postamble bit pattern.
• The data plus preamble, postamble, and control information are called a frame.

• In a typical frame format for synchronous transmission, the frame starts with a
preamble called a flag, which is 8 bits long.
o The same flag is used as a postamble.
o The receiver looks for the occurrence of the flag pattern to signal the
start of a frame.
o This is followed by some number of control fields (containing data link
control protocol information), then a data field (variable length for most
protocols), more control fields, and finally the flag is repeated.
• If the sender wishes to send data in separate bursts, the gaps between bursts
must be filled with a special sequence of 0s and 1s that means idle.
• The receiver counts the bits as they arrive and groups them in 8-bit units.
• The advantage of synchronous transmission is speed.
• With no extra bits or gaps to introduce at the sending end and remove at the
receiving end, synchronous transmission is faster than asynchronous
transmission.
7. Link Access Procedures

• Network links can be divided into two categories: those using point-to-point
connections and those using broadcast channels.
• In a point-to-point connection there exists a dedicated link between an individual pair
of sender and receiver. The capacity of the entire channel is reserved only for the
transmission of packets between the sender and receiver.
• Point-to-point transmission with exactly one sender and one receiver is
sometimes called unicasting.
• Broadcast links are sometimes referred to as multiaccess channels, multipoint
channels, or random-access channels.
• The channel capacity is shared temporarily by every device connecting to the
link.
• The packet transmitted by the sender is received and processed by every device
on the link. But, by the address field in the packet, the receiver determines
whether the packet belongs to it or not, if not, it discards the packet, otherwise it
accepts and responds accordingly.
• It is the Media Access Control (MAC) sublayer of the Data Link layer that
determines who is allowed to access the media at any one time.
• The MAC sublayer uses random-access protocols such as ALOHA, CSMA, CSMA/CD, and CSMA/CA.
• These protocols are mostly used in LANs and WANs.
7.1. Random-Access Protocols
• In random-access or contention methods no station is superior to another station
and none is assigned control over another.
• At each instance, a station that has data to send uses a procedure defined by the
protocol to make a decision on whether or not to send. This decision depends on
the state of the medium (idle or busy). The two features of these protocols are:

o There is no scheduled time for a station to transmit. Transmission is
random among the stations. That is why these methods are called
random access.
o No rules specify which station should send next. Stations compete with
one another to access the medium. That is why these methods are
also called contention methods.
• If more than one station tries to send, there is an access conflict (collision) and
the frames will be either destroyed or modified.
• To avoid access conflict or to resolve it when it happens, each station follows a
specific procedure based on the protocol in force. The following are the random
access protocols:
1. ALOHA
2. CSMA
3. CSMA/CD
4. CSMA/CA

7.1.1. ALOHA

• ALOHA originally stood for Additive Links On-line Hawaii Area.
• It was developed as a networking system for coordinating and arbitrating access to a
shared communication channel.
• It was developed in the 1970s by Norman Abramson and his colleagues at the
University of Hawaii.
• The original system was used for ground-based radio broadcasting, but the system
has since been implemented in satellite communication systems.
• In the ALOHA system a node transmits whenever data is available to send.
• If another node transmits at the same time, a collision occurs, and the frames
that were transmitted are lost.
7.1.1.1. Pure ALOHA
• The original ALOHA protocol is called pure ALOHA.
• In pure ALOHA, each station sends a frame whenever it has a frame to send
(multiple access).
• Pure ALOHA does not check whether the channel is busy before transmitting.
• The pure ALOHA protocol relies on acknowledgments from the receiver.
• When a station sends a frame, it expects the receiver to send an
acknowledgment.
• If the acknowledgment does not arrive after a time-out period, the station
assumes that the frame (or the acknowledgment) has been destroyed and
resends the frame.
• Pure ALOHA dictates that when the time-out period passes, each station waits
a random amount of time, called backoff time, before resending its frame.
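The send, time-out, and backoff behaviour of pure ALOHA can be sketched as below. The link primitives, the attempt limit, and the backoff range are illustrative assumptions, not values defined by the protocol itself.

```python
# Sketch of pure ALOHA: transmit immediately, wait for an ACK, and back off a
# random time before retrying if no acknowledgment arrives.

import random
import time

def aloha_send(link, frame, max_attempts=15, timeout=1.0, max_backoff=0.5):
    """Send whenever data is ready; on a missing ACK, back off randomly and resend."""
    for _ in range(max_attempts):
        link.send_frame(frame)                      # no check whether the channel is busy
        if link.wait_for_ack(timeout):              # acknowledgment arrived in time
            return True
        time.sleep(random.uniform(0, max_backoff))  # random backoff time before resending
    return False                                    # give up after too many attempts
```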

7.1.1.2. Slotted ALOHA

• Slotted ALOHA was invented to improve the efficiency of pure ALOHA as
chances of collision in pure ALOHA are very high.
• In slotted ALOHA, the time of the shared channel is divided into discrete
intervals called slots.
• The stations can send a frame only at the beginning of the slot and only one
frame is sent in each slot.
• Because a station is allowed to send only at the beginning of the synchronized
time slot, if a station misses this moment, it must wait until the beginning of the
next time slot.
• This means that the station which started at the beginning of this slot has already
finished sending its frame.
7.1.2. CSMA
• It is known as Carrier Sense Multiple Access.
• CSMA verifies the absence of other traffic before transmitting on a shared
transmission medium.
• A transmitter attempts to determine whether another transmission is in progress
before initiating a transmission using a carrier-sense mechanism.
• That is, it tries to detect the presence of a carrier signal from another node before
attempting to transmit.
• If a carrier is sensed, the node waits for the transmission in progress to end
before initiating its own transmission.
• CSMA can reduce the possibility of collision, but it cannot eliminate it due to
propagation delay .
• When a station sends a frame, it still takes time (although very short) for the first
bit to reach every station and for every station to sense it.
• Variations of CSMA use different algorithms to determine when to initiate
transmission onto the shared medium.
• A key distinguishing feature of these algorithms is how aggressive or persistent
they are in initiating transmission.
• A more aggressive algorithm may begin transmission more quickly and utilize
a greater percentage of available bandwidth of the medium.

1-persistent:

• 1-persistent CSMA is an aggressive transmission algorithm.


• When the transmitting node is ready to transmit, it senses the
transmission medium for idle or busy.
• If idle, then it transmits immediately.
• If busy, then it senses the transmission medium continuously until it
becomes idle, then transmits the frame unconditionally (i.e. with
probability = 1).
• In case of a collision, the sender waits for a random period of time and
attempts the same procedure again.
Non-persistent:

• Non-persistent CSMA is a non-aggressive transmission algorithm.


• When the transmitting node is ready to transmit data, it senses the
transmission medium for idle or busy.
• If idle, then it transmits immediately. If busy, then it waits for a random
period of time before repeating the whole logic cycle (which started with
sensing the transmission medium for idle or busy) again.
• This approach reduces collision, results in overall higher medium
throughput but with a penalty of longer initial delay compared to 1-
persistent.

P-persistent:

• This is an approach between the 1-persistent and non-persistent CSMA
access modes.
• When the transmitting node is ready to transmit data, it senses the
transmission medium for idle or busy. If idle, then it transmits immediately.
• If busy, then it senses the transmission medium continuously until it
becomes idle, then transmits with probability p.
• If the node does not transmit (the probability of this event is 1-p), it waits
until the next available time slot.
• If the transmission medium is not busy, it transmits again with the same
probability p.
• p-persistent CSMA is used in CSMA/CA systems including Wi-Fi and
other packet radio systems.
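The p-persistent decision rule can be captured in a few lines. The channel object, its sensing and slot-timing helpers, and the value of p are assumptions made only to illustrate the logic described above.

```python
# Sketch of the p-persistent CSMA decision rule (hypothetical channel helpers).

import random

def p_persistent_transmit(channel, frame, p=0.1):
    while True:
        while channel.is_busy():        # sense continuously until the medium is idle
            pass
        if random.random() < p:         # with probability p, transmit now
            channel.send(frame)
            return
        channel.wait_one_slot()         # with probability 1 - p, defer one time slot,
                                        # then sense again and repeat the decision
```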

O-persistent:

• Each node is assigned a transmission order by a supervisory node.


• When the transmission medium goes idle, nodes wait for their time slot in
accordance with their assigned transmission order.
• The node assigned to transmit first transmits immediately.
• The node assigned to transmit second waits one time slot.
• Nodes monitor the medium for transmissions from other nodes and
update their assigned order with each detected transmission.
• Variations on basic CSMA include the addition of collision-avoidance,
collision-detection and collision-resolution techniques.
7.1.3. CSMA/CD

• Persistent and non-persistent CSMA protocols are definitely an improvement
over ALOHA because they ensure that no station begins to transmit while the
channel is busy.
• An improvement over CSMA is for the stations to quickly detect the collision and
abruptly stop transmitting. This strategy saves time and bandwidth.
• Carrier Sense Multiple Access with Collision Detection (CSMA/CD) extends
the CSMA algorithm to handle collisions.
• In this method, a station monitors the medium after it sends a frame to see if the
transmission was successful. If so, the station is finished.
• If, however, there is a collision, the frame is sent again.
• CSMA/CD improves performance by terminating transmission as soon as a
collision is detected.
• CSMA/CD is used by Ethernet
• The collision detection is an analogue process.
• A station can sense a collision by checking the level of energy in a channel,
which can have three values: zero, normal, and abnormal .
o At the zero level, the channel is idle.
o At the normal level, a station has successfully captured the channel
and is sending its frame.
o At the abnormal level, there is a collision and the level of the energy is
twice the normal level.
• A station that has a frame to send or is sending a frame needs to monitor the
energy level to determine if the channel is in idle or collision mode.

7.1.4. CSMA/CA

• In Carrier-sense Multiple Access with Collision Avoidance (CSMA/CA),
collision avoidance is used to improve the performance of CSMA.
• If the transmission medium is sensed busy before transmission, then the
transmission is deferred for a random interval.
• This random interval reduces the likelihood that two or more nodes waiting to
transmit will simultaneously begin transmission upon termination of the
detected transmission, thus reducing the incidence of collision.
• CSMA/CA is used by Wi-Fi.
• CSMA/CA uses the following three strategies to avoid collisions: the inter-frame
space, the contention window, and acknowledgments.

Interframe Space (IFS).


• First, collisions are avoided by delaying transmission even if the channel is
found idle.
• When an idle channel is found, the station does not send immediately. It waits
for a period of time called the inter-frame space or IFS
• Even though the channel may appear idle when it is sensed, a distant station
may have already started transmitting.
• After waiting an IFS time, if the channel is still idle, the station can send, but it
still needs to wait a time equal to the contention window.
• The IFS variable can also be used to prioritize stations or frame types.
• For example, a station that is assigned a shorter IFS has a higher priority.

Contention Window.

• The contention window is an amount of time divided into slots.


• A station that is ready to send chooses a random number of slots as its wait
time.
• The number of slots in the window changes according to the binary exponential
back off strategy.
• This means that it is set to one slot the first time and then doubles each time
the station cannot detect an idle channel after the IFS time.
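The binary exponential backoff of the contention window can be shown with a short calculation. The slot duration and the cap on the window are illustrative values, not taken from any particular standard.

```python
# Sketch of binary exponential backoff for the contention window: the window is
# one slot on the first try and doubles after each failure; the station then
# waits a random number of slots inside that window.

import random

SLOT_TIME = 20e-6      # assumed slot duration in seconds (illustrative)
CW_MAX = 1024          # assumed cap on the contention window (illustrative)

def contention_wait(failed_attempts: int) -> float:
    window = min(2 ** failed_attempts, CW_MAX)   # 1, 2, 4, 8, ... slots
    slots = random.randint(1, window)            # random number of slots to wait
    return slots * SLOT_TIME

print(contention_wait(0), contention_wait(3))    # 1 slot, then anywhere from 1 to 8 slots
```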

Acknowledgment

• The data may be corrupted during the transmission.


• The positive acknowledgment and the time-out timer can help guarantee that
the receiver has received the frame.

7.2. Controlled-Access Protocols

• In controlled access, the stations consult one another to find which station has
the right to send.
• A station cannot send unless it has been authorized by other stations.
• Controlled-access protocols include:

o Reservation

o Polling

o Token Passing


7.2.1 Reservation

• In the reservation method, a station needs to make a reservation before sending
data.
• Time is divided into intervals.
• In each interval, a reservation frame precedes the data frames sent in that
interval.
• If there are N stations in the system, there are exactly N reservation mini-slots in
the reservation frame.
• Each mini-slot belongs to a station. When a station needs to send a data frame,
it makes a reservation in its own mini-slot.
• The stations that have made reservations can send their data frames after the
reservation frame.

7.2.2 Polling

• Polling works with topologies in which one device is designated as a primary
station and the other devices are secondary stations.
• All data exchanges must be made through the primary device even when the
ultimate destination is a secondary device.
• The primary device controls the link; the secondary devices follow its
instructions.
• It is up to the primary device to determine which device is allowed to use the
channel at a given time.
• The primary device, therefore, is always the initiator of a session.
• This method uses poll and select functions to prevent collisions.
• The select function is used whenever the primary device has something to send.
o If the primary is neither sending nor receiving data, it knows the link is
available. If it has something to send, the primary device sends it.

o So the primary must alert the secondary to the upcoming transmission
and wait for an acknowledgment of the secondary's ready status.
o Before sending data, the primary creates and transmits a select (SEL)
frame, one field of which includes the address of the intended
secondary.
• The poll function is used by the primary device to solicit transmissions from the
secondary devices
o When the primary is ready to receive data, it must ask (poll) each
device in turn if it has anything to send.

o When the first secondary is approached, it responds either with a NAK
frame if it has nothing to send or with data.

o When the response is positive (a data frame), the primary reads the
frame and returns an acknowledgment (ACK frame), verifying its
receipt.
• The drawback is that if the primary device fails, entire system fails.

7.2.3 Token Passing Protocol

• In the token-passing protocol, the stations in a network are organized in a logical
ring.
• In other words, for each station, there is a predecessor and a successor.
o The predecessor is the station which is logically before the
station in the ring:
o the successor is the station which is after the station in the ring.
o The current station is the one that is accessing the channel now.
• The right to this access has been passed from the predecessor to the current
station.
• The right will be passed to the successor when the current station has no more
data to send.
• In this method, a special packet called a token circulates through the ring.
• When a station has some data to send, it waits until it receives the token from
its predecessor. It then holds the token and sends its data.
• When the station has no more data to send, it releases the token, passing it to
the next logical station in the ring.
• The station cannot send data until it receives the token again in the next round.
In this process, when a station receives the token and has no data to send, it
just passes the token to the next station.
• Token management is needed for this access method.
• Stations must be limited in the time they can have possession of the token.
• The token must be monitored to ensure it has not been lost or destroyed.
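One round of token passing can be modelled with a short sketch. The list of per-station queues, the single-frame hold limit, and the flat delivery list are simplifying assumptions used only to show how the right to transmit moves around the logical ring.

```python
# Sketch of one round of token passing around a logical ring.

from collections import deque

def token_ring_round(stations, max_frames_per_hold=1):
    """stations: list of deques of frames queued at each station, in ring order."""
    delivered = []
    for index, queue in enumerate(stations):        # the token visits each station in turn
        sent = 0
        while queue and sent < max_frames_per_hold: # hold the token and send data
            delivered.append((index, queue.popleft()))
            sent += 1
        # a station with nothing (more) to send simply passes the token to its successor
    return delivered

ring = [deque(["A1", "A2"]), deque(), deque(["C1"])]
print(token_ring_round(ring))    # [(0, 'A1'), (2, 'C1')]
```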
8. Wired LAN

• A local area network (LAN) is a computer network that interconnects computers
within a limited area such as a residence, school, laboratory, university campus
or office building.
• A LAN may be wired, wireless, or a combination of the two.
• "Wired" is the term that refers to any physical medium consisting of cables.
• The cables can be copper wire, twisted pair or fibre optic.
• A wired LAN uses cables to connect devices, such as laptop or desktop
computers, to the Internet or another network.
• The LAN market has seen several technologies such as Ethernet, Token Ring,
Token Bus, FDDI (Fibre Distributed Data Interface), and ATM (Asynchronous
Transfer Mode) LAN.

8.1. IEEE Standards(22 sub parts)

• In 1985, the Computer Society of the IEEE (Institute of Electrical and Electronics
Engineers) started a project, called Project 802, to set standards to enable
intercommunication among equipment from a variety of manufacturers.
• The objectives of the Project 802 was to provide a way for specifying functions
of the physical layer and the data link layer of major LAN protocols.
• These standards are restricted to networks carrying variable-size packets.

• The IEEE 802 standard splits the OSI Data Link Layer into two sub layers named
Logical Link Control (LLC) and Media Access Control (MAC).

• The LLC provides a single link-layer control protocol for all IEEE LANs.
• This means LLC protocol can provide interconnectivity between
different LANs because it makes the MAC sublayer transparent.
• The LLC layer performs flow control, error control, and part of the
framing duties.
• The Media Access Control (MAC) sublayer defines the specific access method for each
LAN.
• The MAC layer varies for different network types and is defined by different
standards.
• A part of the framing function is also handled by the MAC layer.
• The better known specifications include 802.3 Ethernet, 802.11 Wi-Fi (wireless
LAN), 802.15 Bluetooth (wireless Personal Area Network), and 802.16 Wireless
Metropolitan Area Networks.
IEEE 802.3:ETHERNET

• The services and protocols specified in IEEE 802.3 map to the lower two layers
(Data Link and Physical) of the seven-layer OSI networking reference model.
• The IEEE 802.3 splits the OSI Data Link Layer into two sub-layers named
logical link control (LLC) and media access control (MAC).
• The LLC provides a single link-layer control protocol and can provide
interconnectivity between different LANs because it makes the MAC sublayer
transparent.
• The original 10BASE5 Ethernet used coaxial cable as a shared medium, while
the newer Ethernet variants use twisted pair and fibreoptic links in conjunction
with switches.
• At the sender, data are converted to a digital signal using the Manchester
scheme, at the receiver, the received signal is interpreted as Manchester and
decoded into data.
• The four most common physical layer implementations are

• 10Base5: Thick Ethernet or Thicknet .


• The nickname derives from the size of the cable, which is roughly the
size of a garden hose and too stiff to bend with your hands.
• 10Base5 was the first Ethernet specification to use a bus topology with
an external transceiver (transmitter/receiver).
• 10Base2: Thin Ethernet or Cheapernet.
• 10Base2 also uses a bus topology, but the cable is much thinner and
more flexible.
• In this case, the transceiver is normally part of the network interface
card (NIC), which is installed inside the station.
• 10Base-T: Twisted-Pair Ethernet.
• 10Base-T uses a physical star topology.
• The stations are connected to a hub via two pairs of twisted cable.
• 10Base-F: Fibre Ethernet.
• The most common is called 10Base-F.
• 10Base-F uses a star topology to connect stations to a hub. The
stations are connected to the hub using two fibre-optic cables

• Each Ethernet station is given a 48-bit MAC address.


• The MAC addresses are used to specify both the destination and the source of
each data packet.
• In classic shared-medium Ethernet, a scheme known as Carrier Sense Multiple Access
with Collision Detection (CSMA/CD) governed the way the computers shared the channel.
• In a modern Ethernet, each station communicates with a switch, which in turn
forwards that traffic to the destination station.
• In this topology, collisions are only possible if station and switch attempt to
communicate with each other at the same time, and collisions are limited to this
link.
• In full duplex, switch and station can send and receive simultaneously, and
therefore modern Ethernets are completely collision-free.

IEEE 802.4: Token Bus

• Token bus is a network implementing the token ring protocol over a virtual ring
on a coaxial cable.
• A token is passed around the network nodes and only the node possessing the
token may transmit.
• If a node doesn't have anything to send, the token is passed on to the next
node on the virtual ring.
• Each node must know the address of its neighbour in the ring, so a special
protocol is needed to notify the other nodes of connections to, and
disconnections from, the ring
• When the logical ring is initialized, the highest numbered station may send the
first frame.
• The token is passed from one station to another following the numeric
sequence of the station addresses.

The token bus frame contains the following fields:

1. Preamble - It is used for bit synchronization. It is a 1-byte field.
2. Start Delimiter - These bits mark the beginning of the frame. It is a 1-byte field.
3. Frame Control - This field specifies the type of frame: data frame or
control frame. It is a 1-byte field.
4. Destination Address - This field contains the destination address. It is a 2- to
6-byte field.
5. Source Address - This field contains the source address. It is a 2- to 6-byte
field.
6. Data - If 2-byte addresses are used, the field may be up to 8182 bytes,
and up to 8174 bytes in the case of 6-byte addresses.
7. Checksum - This field contains the checksum bits, which are used to detect
errors in the transmitted data. It is a 4-byte field.
8. End Delimiter - This field marks the end of the frame. It is a 1-byte field.
• In token bus, there is no possibility of collision as only one station possesses
a token at any given time.
• In token bus, each station receives each frame, the station whose address
is specified in the frame processes it and the other stations discard the
frame.

IEEE 802.5: Token Ring

• Token ring (IEEE 802.5) is a communication protocol in a local area network
(LAN) where all stations are connected in a ring topology and pass one or
more tokens for channel acquisition.
• A token is a special frame of 3 bytes that circulates along the ring of stations.
• A station can send data frames only if it holds a token.
• The tokens are released on successful receipt of the data frame.
• If a station has a frame to transmit when it receives a token, it sends the
frame and then passes the token to the next station.
• Otherwise it simply passes the token to the next station.
• Passing the token means receiving the token from the preceding station and
transmitting to the successor station.
• The data flow is unidirectional in the direction of the token passing

The data transmission process in token ring goes as follows:

1. Empty information frames are continuously circulated on the ring.

2. When a computer has a message to send, it seizes the token. The computer
will then be able to send the frame.

3. The frame is then examined by each successive workstation. The workstation
that identifies itself to be the destination for the message copies it from the
frame and changes the token back to 0.

4. When the frame gets back to the originator, it sees that the token has been
changed to 0 and that the message has been copied and received. It
removes the message from the frame.

5. The frame continues to circulate as an "empty" frame, ready to be taken by
a workstation when it has a message to send.
FDDI - Fibre Distributed Data Interface

• FDDI is a set of ANSI and ISO standards for data transmission on fibre-optic
lines in a LAN that can extend in range up to 200 km (124 miles).
• FDDI uses optical fibre as its standard underlying physical medium.
• The physical medium may alternatively be copper cable, in which case it is called CDDI
(Copper Distributed Data Interface), standardized as TP-PMD (Twisted-Pair
Physical Medium-Dependent) and also referred to as TP-DDI (Twisted-Pair
Distributed Data Interface).
• The protocol used is derived from the IEEE 802.4 token bus timed token
protocol.
• In addition to covering large geographical areas, FDDI local area networks
can support thousands of users.
• FDDI offers both a Dual-Attached Station (DAS), counter-rotating token ring
topology and a Single-Attached Station (SAS), token bus passing ring
topology.
• An FDDI network contains two token rings, one for possible backup in case
the primary ring fails.
• The primary ring offers up to 100 Mbps capacity. If the secondary ring is not
needed for backup, it can also carry data, extending capacity to 200 Mbps.
• The single ring can extend the maximum distance; a dual ring can extend
100 km (62 miles).

• In the FDDI frame format, PA is the preamble, SD is the start delimiter, FC is
frame control, DA is the destination address, SA is the source address, PDU is
the protocol data unit (or packet data unit), FCS is the frame check sequence
(or checksum), and ED/FS are the end delimiter and frame status.

SWITCHED COMMUNICATION NETWORKS

• In an internetwork, transmission of data beyond a local area is typically
achieved by transmitting data from source to destination through a network
of intermediate nodes, called switches.
• Switches are devices capable of creating temporary connections between
two or more devices linked to the switch.
• In a switched network, some of these nodes are connected to the end
systems (computers or telephones, for example).
• Others are used only for routing.
• The devices attached to the network may be referred to as stations.
• The switching devices whose purpose is to provide communication are
called as nodes.
• Nodes are connected to one another in some topology by transmission links.
• Each station attaches to at least one node, and the collection of nodes is
referred to as a communications network
• In a switched communication network, data entering the network from a
station are routed to the destination by being switched from node to node.
This process is normally referred to as switching.

The interconnections between the nodes and the stations within a
communications network have the following characteristics:

1. Some nodes connect only to other nodes.
o Their sole task is the internal (to the network) switching of data.
o Other nodes accept data from and deliver data to the attached stations as well.

2. Node-station links are generally dedicated point-to-point links.
o Node-node links are usually multiplexed, using either frequency
division multiplexing (FDM) or time division multiplexing (TDM).

3. Usually, the network is not fully connected; that is, there is not a direct link
between every possible pair of nodes.
o It is always desirable to have more than one possible path through the
network for each pair of stations.
o This enhances the reliability of the network.

Traditionally, there are three common switching techniques:

Circuit Switching

Packet Switching

Message Switching
CIRCUIT SWITCHED NETWORK

• A circuit-switched network consists of a set of switches connected by physical
links.
• Communication via circuit switching implies that there is a dedicated
communication channel between two stations.
• That channel is a connected sequence of links between network nodes. On
each physical link, a logical channel is dedicated to the connection
• The three phases of circuit switching are:

• Circuit Establishment
• Data Transfer
• Circuit Disconnect

1. Circuit Establishment. Before any signals can be transmitted, an end-to-end
(station-to-station) circuit must be established. The end systems are
normally connected through dedicated lines to the switches, so connection
setup means creating dedicated channels between the switches.

2. Data Transfer. Data can now be transmitted from the source station through the
network to the destination station. The transmission may be analogue or digital,
depending on the nature of the network. Generally, the connection is full duplex.

3. Circuit Disconnect. After some period of data transfer, the connection is
terminated, usually by the action of one of the two stations. Signals must
be propagated to the nodes in order to deallocate the dedicated resources.

• Channel capacity must be reserved between each pair of nodes in the path, and
each node must have available internal switching capacity to handle the
requested connection.
• The switches must have the intelligence to make these allocations and to devise
a route through the network.
• In terms of performance, there is a delay prior to signal transfer for call
establishment.
• However, once the circuit is established, the network is effectively transparent
to the users.

The performance of a switching technique is normally measured in terms of the
following three types of delays:
Propagation delay: The time it takes a signal to propagate from one node to
the next. This time is generally negligible.
Transmission time: The time it takes for a transmitter to send out a block of
data. For example, it takes 1 s to transmit a 10,000-bit block of data onto a
10-kbps line.

Node delay: The time it takes for a node to perform the necessary processing
as it switches data
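The delay components can be illustrated with a small calculation. The transmission-time example (10,000 bits on a 10-kbps line) comes from the text; the link distance, propagation speed, and node processing time are illustrative assumptions.

```python
# Worked example of the three delay components listed above.

block_bits = 10_000
line_rate_bps = 10_000                    # 10 kbps
transmission_time = block_bits / line_rate_bps       # = 1.0 second, as in the text

distance_m = 1_000_000                    # assumed 1000 km link
propagation_speed = 2e8                   # assumed ~2/3 of light speed in the medium
propagation_delay = distance_m / propagation_speed   # = 0.005 s (usually negligible)

node_delay = 0.001                        # assumed per-node processing time

print(transmission_time, propagation_delay, node_delay)   # 1.0 0.005 0.001
```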

Advantages of Circuit Switching:

• Once the connection is established between two parties, it will be available till
end of the conversation. This guarantees reliable connection in terms of
constant data rate and availability of resources

• There is no loss of packets or out-of-order delivery, as this is a connection-oriented
network, unlike a packet-switched network.

• The forwarding of information is based on time or frequency slot assignments,
and hence there is no need to examine the header; there is low overhead
in a circuit-switched network.

• Once the circuit is established, data is transmitted without any delay as there is
no waiting time at each switch.

• Since it is not a store-and-forward technique, intermediate switching nodes
along the data path do not require storage buffers.

Disadvantages of Circuit Switching:

• It is inefficient in terms of utilization of system resources. As resources are
allocated for the entire duration of the connection, they are not available to other
connections.

• Dedicated channels require more bandwidth.

• Prior to actual data transfer, the time required to establish a physical link
between the two stations is too long.

• It is more expensive compared to other switching techniques due to the dedicated
path requirement.
• Circuit switching involves state maintenance overhead
• As it is designed for voice traffic, it is not well suited for data transmission.

Packet-Switching Network

• In packet switching, data are transmitted in short packets.


• Packets are made of a header and a payload.
• Data in the header are used by networking hardware to direct the packet to its
destination where the payload is extracted and used by application software.
• A typical upper bound on packet length is 1000 bytes.
• A packet contains a portion (or all, for a short message) of the user's data plus
some control information.
• The control information, at a minimum, includes the information that the network
requires to be able to route the packet through the network and deliver it to the
intended destination.
• At each node along the route, the packet is received, stored briefly, and passed
on to the next node.
• There is no need to set up a dedicated path in advance.
• It is up to routers to use store-and-forward transmission to send each packet
on its way to the destination on its own.
• With packet switching there is no fixed path, so different packets can follow
different paths, depending on network conditions at the time they are sent, and
they may arrive out of order.
• Packet-switching networks place a tight upper limit on the size of packets so
that networks can handle interactive traffic.
• It also reduces delay since the first packet of a long message can be forwarded
before the second one has fully arrived.
• However, the store-and-forward delay of accumulating a packet in the router's
memory before it is sent on to the next router exceeds that of circuit switching.
With circuit switching, the bits just flow through the wire continuously.
• Because bandwidth is not reserved with packet switching, packets may have to
wait to be forwarded.
• This introduces queuing delay and congestion if many packets are sent at the
same time.
• Packet switching does not waste bandwidth and thus is more efficient from a
system perspective.
• With packet switching, packets can be routed around dead switches.
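As a rough illustration of the packet structure and store-and-forward behaviour described in the list above, the following Python sketch splits a message into fixed-size packets, attaches a minimal header, and estimates the per-packet store-and-forward delay. The link rate, hop count and header fields are assumptions made for the example, not values taken from any particular network.

    # Minimal packetisation and store-and-forward sketch (parameters are assumptions).
    MAX_PAYLOAD = 1000      # bytes per packet (the typical upper bound mentioned above)
    LINK_RATE   = 64_000    # bits per second (assumed)
    HOPS        = 3         # links between source and destination (assumed)

    def packetise(message: bytes, destination: str):
        """Split a message into packets, each carrying a small header."""
        n = (len(message) + MAX_PAYLOAD - 1) // MAX_PAYLOAD
        return [{"dst": destination, "seq": i,
                 "payload": message[i * MAX_PAYLOAD:(i + 1) * MAX_PAYLOAD]}
                for i in range(n)]

    def store_and_forward_delay(packet_bits: int, hops: int, rate: int) -> float:
        # Each node must receive the whole packet before forwarding it.
        return hops * packet_bits / rate

    packets = packetise(b"x" * 2500, destination="host-B")
    print(len(packets), "packets")                                        # -> 3 packets
    print(store_and_forward_delay(MAX_PAYLOAD * 8, HOPS, LINK_RATE), "s") # -> 0.375 s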
Advantages of Packet Switching:

• More robust than the circuit switched systems and more suitable for transmitting
the binary data.

• More fault tolerant, as packets can be routed to bypass the malfunctioning
components of the network by following different routes to the destination.
• More efficient as packet switching reduces network bandwidth wastage.
• Destination information is contained in each packet, so numerous messages
can be sent quickly to many different destinations.
• Computers at each node allow dynamic data routing.
• Throughput and efficiency might be maximized.
• Ability to emulate a circuit switched network.
• A damaged packet can be resent. There is no need to resend an entire message.
• It allows multiplexing. Many users can share the same channel simultaneously.
Hence packet switching makes use of available bandwidth efficiently.
• Bill users only on the basis of duration of connectivity
• It uses a digital network. This method enables digital data to be directly
transmitted to a destination, and is therefore appropriate for data
communication systems.
• Delay in delivery of packets is less, since packets are sent as soon as they are
available.
• Switching devices don't require massive storage, since they don't have to store
the entire messages before forwarding them to the next node.

Disadvantages of Packet Switching:

• They are unsuitable for applications that cannot afford delays in communication
like high quality voice calls.
• Packets may be lost on their route, so sequence numbers are required to
identify missing packets.
• Switching nodes require more processing power, as packet switching protocols
are more complex.
• Switching nodes for packet switching require large amount of RAM to handle
large quantities of packets.
• A significant data transmission delay occurs because of the use of the
store-and-forward method.
• Network problems may introduce errors in packets, delays in delivery, or loss
of packets. If not handled properly, this may lead to loss of critical
information.
There are two types of packet-switching networks:
o Datagram networks

o Virtual circuit networks

Datagram network

• In the datagram approach, each packet is treated independently, with no
reference to packets that have gone before.
• Packets in this approach are referred to as datagrams.
• Datagram switching is normally done at the network layer.
• Each node chooses the next node on a packet's path, taking into account
information received from neighbouring nodes on traffic, line failures, and so
on.
• So the packets, each with the same destination address, do not all follow the
same route, and they may arrive out of sequence at the exit point.
• In some datagram networks, it is up to the destination rather than the exit node
to do the reordering.
• Datagram networks are sometimes referred to as connectionless networks,
because the switches keep no information about the connection state.
• There are no setup or disconnection phases.
• The packets are routed to their destination by maintaining a routing table.
• The routing tables are dynamic and are updated periodically.
• The destination addresses and the corresponding forwarding output ports are
recorded in the tables.
• Every packet in a datagram network carries a header that contains, among
other information, the destination address of the packet.
• When the switch receives the packet, this destination address is examined.
• The routing table is consulted to find the corresponding port through which the
packet should be forwarded (a minimal sketch of this lookup follows this list).
• The efficiency of a datagram network is better than that of a circuit-switched
network.
• Resources are allocated only when there are packets to be transferred.
• Although there are no setup and disconnect phases, each packet may
experience a wait at a switch before it is forwarded, so there is a delay at
each hop.
• In addition, since not all packets travel through the same switches, the delay
is not uniform across packets.
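To make the routing-table lookup concrete, here is a minimal Python sketch of datagram forwarding. The table that maps destination addresses to output ports, and the addresses and port numbers in it, are invented for the example.

    # Minimal datagram-forwarding sketch (addresses and ports are assumptions).
    routing_table = {
        "10.0.1.0": 1,   # destination address -> output port
        "10.0.2.0": 2,
        "10.0.3.0": 3,
    }

    def forward(packet: dict) -> int:
        """Examine the destination address in the header and pick an output port."""
        dst = packet["header"]["dst"]
        if dst not in routing_table:
            raise KeyError(f"no route to {dst}")
        return routing_table[dst]

    packet = {"header": {"dst": "10.0.2.0"}, "payload": b"hello"}
    print("forward on port", forward(packet))   # -> forward on port 2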
Virtual Circuit Network

• A virtual-circuit network is a cross between a circuit-switched network and a
datagram network.
• It has some characteristics of both:

• As in a circuit-switched network, there are setup and disconnect phases
in addition to the data transfer phase.
• Resources can be allocated during the setup phase, as in a circuit-
switched network, or on demand, as in a datagram network.
• As in a datagram network, data are packetized and each packet carries
an address in the header.
• As in a circuit-switched network, all packets follow the same path
established during connection setup (see the sketch at the end of this section).
• In the virtual circuit approach, a pre-planned route is established before any
packets are sent.
• Once the route is established, all the packets between a pair of communicating
parties follow this same route through the network.
• In a virtual-circuit network, there is a one-time delay for setup and a one-time
delay for disconnection.
• If resources are allocated during the setup phase there is no wait time for
individual packets.
• If two stations wish to exchange data over an extended period of time, there are
certain advantages such as:
o First, the network may provide services related to the virtual circuit,
including sequencing and error control.
o Sequencing refers to the fact that, because all packets follow the same
route, they arrive in the original order.
o Error control is a service that assures not only that packets arrive in the
proper sequence, but also that all packets arrive correctly.
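The setup-then-forward behaviour can be sketched with a per-switch table that is filled in during the setup phase and consulted for every packet afterwards. The use of a small virtual-circuit identifier (VCI) instead of a full destination address, and the port and VCI numbers below, are assumptions made for this Python illustration rather than details stated in the text.

    # Minimal virtual-circuit switch sketch (VCI and port numbers are assumptions).
    class VCSwitch:
        def __init__(self, name: str):
            self.name = name
            self.table = {}   # (in_port, in_vci) -> (out_port, out_vci)

        def setup(self, in_port, in_vci, out_port, out_vci):
            """Called once, during the connection setup phase."""
            self.table[(in_port, in_vci)] = (out_port, out_vci)

        def forward(self, in_port, in_vci, payload):
            """Data-transfer phase: every packet follows the entry made at setup."""
            out_port, out_vci = self.table[(in_port, in_vci)]
            return out_port, out_vci, payload

    sw = VCSwitch("SW1")
    sw.setup(in_port=1, in_vci=5, out_port=3, out_vci=7)   # setup phase
    print(sw.forward(1, 5, b"data"))                       # -> (3, 7, b'data')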

MESSAGE SWITCHING

• Message Switching was the ancestor of packet switching, where messages
were routed in their entirety, one node at a time.
• Each message is treated as a separate entity.
• Each message contains addressing information, and at each switch, this
information is read and the transfer path to the next switch is decided.
• Each message is stored (usually on a hard drive, due to RAM limitations) before
being transmitted to the next switch. Because of this, it is also known as a
'store-and-forward' network.
• Email is a common application for message switching.
• Messages experience an end-to-end delay that depends on the message length
and the number of intermediate nodes (a delay sketch follows this section).
• Each additional intermediate node introduces a delay that is, at minimum, the
transmission delay into or out of that node.
• In a message-switching centre an incoming message is not lost when the
required outgoing route is busy. It is stored in a queue with any other messages
for the same route and retransmitted when the required circuit becomes free.
• Message switching is thus an example of a queuing system.
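Because each node must receive a message in its entirety before forwarding it, the end-to-end delay grows with both the message length and the hop count. The Python sketch below compares this with splitting the same data into packets; the message size, packet size, link rate and hop count are assumptions chosen only for illustration.

    # Store-and-forward delay: whole message vs. packets (assumed parameters).
    MESSAGE_BITS = 80_000    # one long message (assumed)
    PACKET_BITS  = 8_000     # packet size if the message is split (assumed)
    RATE_BPS     = 64_000    # link rate (assumed)
    HOPS         = 3         # links between source and destination (assumed)

    # Message switching: every node must receive the whole message first.
    message_delay = HOPS * MESSAGE_BITS / RATE_BPS

    # Packet switching (pipelined): the first packet crosses all hops, the rest follow.
    n_packets    = MESSAGE_BITS // PACKET_BITS
    packet_delay = (HOPS + n_packets - 1) * PACKET_BITS / RATE_BPS

    print(f"message switching: {message_delay:.2f} s")   # -> 3.75 s
    print(f"packet switching : {packet_delay:.2f} s")    # -> 1.50 s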

Advantages:

• As more devices share the same channel simultaneously for message transfer,
it has higher channel efficiency compared to circuit switching.
• In this type, messages are stored temporarily en route, and hence congestion
can be reduced to a great extent.
• It is possible to incorporate priorities to the messages as they use store and
forward technique for delivery.
• It supports message lengths of unlimited size.
• It does not require physical connection between source and destination devices
unlike circuit switching.
• Broadcasting of single message can be done to multiple receivers by appending
broadcast address to the message.

Disadvantages:
• The method is costly as store and forward devices are expensive.
• It can lead to security issues if hacked by intruders.
• Message switching does not establish a dedicated path between the
devices. As there is no direct link between sender and receiver, the
communication is less reliable.
• Because messages are stored in their entirety at every intermediate node, the
nodes demand substantial storage capacity.
• Message-switched networks are very slow as the processing takes place in
each and every node, which may result in poor performance.
• This switching type is not suitable for interactive applications such as voice
and video, due to the longer message delivery time.

ISDN

• Integrated Services Digital Network (ISDN) is a set of communication standards
for simultaneous digital transmission of voice, video, data, and other network
services over the traditional circuits of the PSTN.
• The main feature of ISDN is that it can integrate speech and data on the same
lines, which were not available in the classic telephone system.
• ISDN is a circuit-switched telephone network system, which also provides
access to packet switched networks, designed to allow digital transmission of
voice and data over ordinary telephone copper wires.
• ISDN typically provides a maximum of 128 kbit/s bandwidth in both upstream
and downstream directions.
History of ISDN

• ISDN was born out of necessity: analogue phone networks failed constantly and
proved to be unreliable for long-distance connections.
• The system began to change over to a packet-based, digital switching system.
The UN-based International Telecommunications Union, or ITU, started
recommending ISDN in 1988 as a new system for operating companies to
deliver data.
• By the 1990s, the National ISDN 1 was created.
• Today, ISDN has been replaced by broadband internet access connections like
DSL, WAN, and cable modems.
• It is still used as a backup when the main lines fail.
ISDN Channels, Access and Interfaces

Although the ISDN operation is relatively straightforward, it utilises a number of
channels and interfaces. The ISDN standard divides a telephone line into the
following types of channels:
Channel A: It is an analogue channel of 4 kHz.
Channel B: It is a 64 Kbps digital channel that is intended for the transport of user
information.
Channel C: It is an 8 or 16 Kbps digital channel.
Channel D: It is a digital channel of 16 or 64 Kbps intended primarily for the
transmission of user-network signalling information for communication control.
Channel E: It is a 64 Kbps digital channel (used for internal ISDN signals).
Channel H: It is a digital channel of 384, 1,536 or 1,920 kbps that provides the user
with an information transfer capability.
These channels can be combined differently, giving rise to two types of access:
Basic Access: Basic access (also known as 2B + D access, BRA - Basic Rate
Access or BRI - Basic Rate Interface) provides the user with two B channels
and a 16 Kbps D channel.
• It allows the user to establish up to two simultaneous communications at 64 Kbps,
and the capacity of the D channel can be used for low-speed data transmission.
• The main application of this type of access occurs in small local network
facilities that require digital transmission or small capacity digital exchanges.

Primary Access: Primary access (also called 30B + D access, PRA - Primary
Rate Access or PRI - Primary Rate Interface) offers the user 30 B channels and a
64 Kbps D channel, thus providing a bandwidth of up to 2,048 Kbps.
• It allows the user to establish up to thirty simultaneous communications at
64 Kbps; there are currently no plans to use the capacity of the D channel for
data transmission.
• You can also use other combinations of the B, H0, H11 and H12 channels, but
always respecting the speed limit of 2,048 Kbps.
• The main application of this type of access is the connection to ISDN of small
digital exchanges, multi-line systems, and local area networks of medium and
large capacity.

The ISDN has several kinds of access interfaces, such as:
Basic Rate Interface (BRI): The Basic Rate Interface or Basic Rate Access is
simply called the ISDN BRI.
• The connection uses the existing telephone infrastructure.
• The BRI configuration provides two data or bearer channels at 64 Kbits/sec
speed and one control or delta channel at 16 Kbits/sec.
• The ISDN BRI interface is commonly used by smaller organizations or home
users or within a local group, limiting a smaller area.
Primary Rate Interface (PRI): The Primary Rate Interface or Primary Rate
Access, simply called the ISDN PRI, is used by enterprises and offices.
• In the US, Canada and Japan, the PRI configuration is based on the T-carrier or
T1, consisting of 23 data or bearer channels and one control or delta channel,
each at 64 Kbps, for a bandwidth of 1.544 Mbits/sec.
• In Europe, Australia and a few Asian countries, the PRI configuration is based on
the E-carrier or E1, consisting of 30 data or bearer channels and two control or
delta channels, each at 64 Kbps, for a bandwidth of 2.048 Mbits/sec.
• The ISDN PRI interface is used by larger organizations or enterprises and by
Internet Service Providers (the channel arithmetic behind the BRI and PRI rates
is sketched below).
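The BRI and PRI figures above can be checked with a few lines of arithmetic. In this Python sketch the 8 Kbps added to the T1 total is framing overhead, a commonly quoted assumption that is not stated explicitly in the text; the E1 total simply follows the channel counts given above.

    # ISDN access-rate arithmetic (the T1 framing overhead is an assumption).
    B = 64                  # Kbps per bearer channel
    D16, D64 = 16, 64       # Kbps per delta (signalling) channel

    bri    = 2 * B + D16            # 2B + D            -> 144 Kbps
    pri_t1 = 23 * B + D64 + 8       # 23B + D + framing -> 1,544 Kbps
    pri_e1 = 30 * B + 2 * D64       # 30B + 2 channels  -> 2,048 Kbps

    print(f"BRI      : {bri} Kbps")
    print(f"PRI (T1) : {pri_t1} Kbps")
    print(f"PRI (E1) : {pri_e1} Kbps")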

Narrowband ISDN: The Narrowband Integrated Services Digital Network is called
the N-ISDN.
• This is actually an attempt to digitize the analogue voice information.
• This uses 64kbps circuit switching.
• The narrowband ISDN is implemented to carry voice data, which uses lesser
bandwidth, on a limited number of frequencies.
Broadband ISDN: The Broadband Integrated Services Digital Network is called
the B-ISDN.
• The broadband ISDN speed ranges from about 2 Mbps to 1 Gbps, and the
transmission is related to ATM (Asynchronous Transfer Mode).
• The broadband ISDN communication is usually made using the fibre optic
cables.
• As the speed is greater than 1.544 Mbps, the communications based on this
are called Broadband Communications.
• The broadband services provide a continuous flow of information, which is
distributed from a central source to an unlimited number of authorized receivers
connected to the network .

ISDN SERVICES
An ISDN user device obtains a network connection by requesting service over the
D-channel. The requested message over the D channel contains a set of
parameters identifying the desired service. The Services are:
Bearer Services: Bearer services are those which allow the user to send information
from one device on the network to another. They allow information transfer and
involve the lower three layers of the OSI model. Depending on the service, users
may agree to use any of the higher-layer protocols, which are transparent to ISDN.

Teleservices: Teleservices are value-added services provided by the network; they
can provide end-to-end services and are characterised by their lower-layer and
higher-layer attributes. Examples are teletex, videotex, etc.

Supplementary Services: Supplementary services are enhancements to bearer
services or offer facilities before or after bearer services; they are considered call
features. They include capabilities such as call forwarding, call transfer, 3-way
conferencing, etc.
Advantages:
• ISDN is governed by a world-wide set of standards.
• ISDN offers multiple digital services that operate through the same copper wire
• Digital signals are transmitted over the telephone lines.
• ISDN provides a higher data transfer rate.
• ISDN network lines are able to switch manifold devices on a single line, such
as faxes, computers, cash registers, credit card readers, and many other
devices.
• ISDN takes only 2 seconds to launch a connection, while other modems take 30
to 60 seconds for establishment.
Disadvantages:

• ISDN is more costly than the typical telephone system.
• An external power supply is required. The telcos don't supply power for ISDN
lines. If the power fails, the phones won't work.
• Special digital phones, or a Terminal Adapter to talk to existing POTS devices,
are required.
• It is very expensive to upgrade a central office switch to ISDN
• ISDN requires specialized digital devices just like Telephone Company.
