Mod II Data Link Layer
COMPUTER NETWORKS
AIDS
PREPARED BY: MR NILACHAKRA DASH
UNIT - II
• Transmission Delay (Tt) – Time to transmit the packet from the host to
the outgoing link. If B is the Bandwidth of the link and D is the Data Size
to transmit
Tt = D/B
• Propagation Delay (Tp) – It is the time taken by the first bit transferred by
the host onto the outgoing link to reach the destination.
• It depends on the distance d and the wave propagation speed s (depends
on the characteristics of the medium).
Tp = d/s
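To make these two formulas concrete, here is a minimal sketch in Python; the data size, bandwidth, distance, and propagation speed below are assumed example values, not figures from these notes.

```python
# Minimal sketch: transmission delay Tt = D/B and propagation delay Tp = d/s.
# The numbers below are assumed example values.

D = 8_000_000        # data size in bits (1 MB)
B = 10_000_000       # link bandwidth in bits per second (10 Mbps)
d = 2_000_000        # link length in metres (2000 km)
s = 2 * 10**8        # propagation speed in the medium, m/s (~2/3 of c)

Tt = D / B           # time to push all the bits onto the link
Tp = d / s           # time for one bit to travel the full distance

print(f"Tt = {Tt:.3f} s, Tp = {Tp:.3f} s")   # Tt = 0.800 s, Tp = 0.010 s
```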
• Efficiency – It is defined as the ratio of total useful time to the total cycle
time of a packet. For stop and wait protocol,
Total time (TT) = Tt(data) + Tp(data) + Tt(acknowledgement) + Tp(acknowledgement)
               ≈ Tt(data) + Tp(data) + Tp(acknowledgement)
               = Tt + 2*Tp
• Since acknowledgements are very small in size, their transmission delay can be neglected.
• Efficiency = Useful Time / Total Cycle Time
= Tt/(Tt + 2*Tp) (For Stop and Wait)
= 1/(1+2a) [ Using a = Tp/Tt ]
• Effective Bandwidth(EB) or Throughput – Number of bits
sent per second.
• EB = Data Size(D) / Total Cycle time(Tt + 2*Tp)
Multiplying and dividing by Bandwidth (B),
= (1/(1+2a)) * B [ Using a = Tp/Tt ]
= Efficiency * Bandwidth
• Capacity of link – If a channel is full duplex, then bits can be transferred in both directions without any collisions. The maximum number of bits a channel/link can hold at a time is its capacity.
• Capacity = Bandwidth(B) * Propagation delay(Tp)
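Putting the formulas above together, the following minimal Python sketch (with assumed example values for the link) computes a = Tp/Tt, the Stop and Wait efficiency, the effective bandwidth, and the link capacity.

```python
# Minimal sketch of the Stop and Wait formulas above.
# Assumed example values: 10 Mbps link, 1 KB frame, Tp = 10 ms.

B  = 10_000_000          # bandwidth, bits/s
D  = 8_000               # frame size, bits (1 KB)
Tp = 0.010               # propagation delay, s

Tt = D / B               # transmission delay
a  = Tp / Tt

efficiency = 1 / (1 + 2 * a)          # = Tt / (Tt + 2*Tp)
throughput = efficiency * B           # effective bandwidth, bits/s
capacity   = B * Tp                   # bits the link can hold in one direction

print(f"a = {a:.1f}, efficiency = {efficiency:.3%}")
print(f"throughput = {throughput/1e6:.2f} Mbps, capacity = {capacity:.0f} bits")
```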
Stop and Wait protocol vs. Sliding Window protocol (comparison)
• Data Link Layer protocols are responsible for ensuring that the bits and bytes received are identical to the bits and bytes that were transmitted.
Synchronous Data Link Control (SDLC)
• SDLC is a computer communication protocol designed and developed by IBM in 1975.
• It supports multipoint links as well as error recovery and error correction.
• It is mainly used to carry SNA (Systems Network Architecture) traffic and is the precursor to HDLC.
• It is used to connect remote devices to mainframe computers at central locations, in point-to-point (one-to-one) or point-to-multipoint (one-to-many) connections.
• It is also used to make sure that data units arrive correctly and with the right flow from one network point to the next.
High-Level Data Link Control (HDLC)
• HDLC is a protocol that is now regarded as an umbrella under which many Wide Area Network protocols sit.
• It has also been adopted as a part of the X.25 network.
• It was standardized by ISO in 1979.
• This protocol is based on SDLC.
• It provides both best-effort (unreliable) service and reliable service.
• HDLC is a bit-oriented protocol that is applicable to both point-to-point and multipoint communications.
Serial Line Internet Protocol (SLIP)
• SLIP is an older protocol that simply adds a framing byte at the end of each IP packet.
• It is a data link control facility required for transferring IP packets, usually between an Internet Service Provider (ISP) and a home user over a dial-up link.
• It is an encapsulation of TCP/IP designed to work over serial ports and router connections.
• It has some limitations: it does not provide mechanisms such as error detection or error correction.
Point to Point Protocol (PPP)
• PPP is a protocol that provides essentially the same functionality as SLIP.
• It is a more robust protocol that can transport other types of packets in addition to IP packets.
• It can be used on dial-up and leased router-to-router lines.
• It provides a framing method that delimits frames.
• It is a character-oriented protocol that also supports error detection.
• It provides two sub-protocols, LCP and NCP: LCP is used for bringing lines up, negotiating options, and bringing them down, whereas NCP is used for negotiating network-layer options.
• It runs over the same serial interfaces as HDLC.
Link Control Protocol (LCP)
• LCP is one of the component protocols of PPP (PPP itself uses HDLC-style framing on the link).
• LCP is the PPP protocol used for establishing, configuring, testing, maintaining, and terminating links for the transmission of data frames.
Link Access Procedure (LAP) –
• LAP protocols are data link layer protocols used for framing and transferring data across point-to-point links.
• They also include some reliability service features.
• There are basically three types of LAP:
• LAPB (Link Access Procedure Balanced),
• LAPD (Link Access Procedure D-Channel),
• and LAPF (Link Access Procedure for Frame-Mode Bearer Services).
• LAP originated from IBM SDLC, which was submitted by IBM to the ISO for standardization.
Network Control Protocol (NCP)
• The original NCP was an older protocol implemented on the ARPANET.
• It allowed users to access and use computers and devices at remote locations and to transfer files between two or more computers. That NCP was replaced by TCP/IP in the 1980s.
• In PPP, NCP refers to the set of protocols that forms part of PPP: an NCP is available for each higher-layer (network-layer) protocol supported by PPP.
Medium Access Control Sublayer (MAC sublayer)
1. Unicast:
• A Unicast-addressed frame is only sent out to
the interface leading to a specific NIC.
• If the LSB (least significant bit) of the first octet
of an address is set to zero, the frame is meant
to reach only one receiving NIC.
• The MAC Address of the source machine is
always Unicast.
• Multicast:
• The multicast address allows the source to send a
frame to a group of devices.
• In Layer-2 (Ethernet) Multicast address, the LSB
(least significant bit) of the first octet of an
address is set to one.
• IEEE has allocated the address block 01-80-C2-xx-
xx-xx (01-80-C2-00-00-00 to 01-80-C2-FF-FF-FF) for
group addresses for use by standard protocols.
• Broadcast: Similar to the Network Layer, broadcast is also possible at the underlying layer (Data Link Layer).
• Ethernet frames with ones in all bits of the destination address (FF-FF-FF-FF-FF-FF) are referred to as broadcast frames. Frames destined for the MAC address FF-FF-FF-FF-FF-FF reach every computer belonging to that LAN segment.
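As an illustration of the addressing rules above, here is a minimal Python sketch; the helper name classify_mac and the sample addresses are illustrative, not part of any standard API.

```python
# Minimal sketch: classify a MAC address as unicast, multicast, or broadcast
# using the LSB of the first octet (the I/G bit), as described above.

def classify_mac(mac: str) -> str:
    octets = [int(part, 16) for part in mac.split("-")]
    if all(o == 0xFF for o in octets):          # FF-FF-FF-FF-FF-FF
        return "broadcast"
    if octets[0] & 0x01:                        # LSB of first octet set -> group address
        return "multicast"
    return "unicast"

print(classify_mac("00-1A-2B-3C-4D-5E"))   # unicast
print(classify_mac("01-80-C2-00-00-00"))   # multicast (IEEE group address block)
print(classify_mac("FF-FF-FF-FF-FF-FF"))   # broadcast
```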
The channel allocation problem
• Channel allocation is a process in which a single channel is
divided and allotted to multiple users in order to carry user
specific tasks.
• The number of users may vary each time the process takes place.
• If there are N users and the channel is divided into N equal-sized sub-channels, each user is assigned one portion.
• If the number of users is small and does not vary over time, then Frequency Division Multiplexing can be used, as it is a simple and efficient channel bandwidth allocation technique.
• Channel allocation problem can be solved by two schemes:
Static Channel Allocation in LANs and MANs, and Dynamic
Channel Allocation.
• 1. Static Channel Allocation in LANs and MANs:
It is the classical or traditional approach of allocating
a single channel among multiple competing users
using Frequency Division Multiplexing (FDM).
• If there are N users, the frequency band is divided into N equal-sized portions (bandwidth), each user being assigned one portion. Since each user has a private frequency band, there is no interference between users.
• However, it is not suitable in case of a large number
of users with variable bandwidth requirements.
• It is not efficient to divide into fixed number of
chunks.
• Mean delay for the full channel: T = 1/(μC − λ)
• Mean delay with FDM: T(FDM) = 1/(μ(C/N) − λ/N) = N/(μC − λ) = N*T
• Where C = channel capacity (bits/sec), λ = mean frame arrival rate (frames/sec), 1/μ = mean frame length (bits/frame), and N = number of sub-channels. Dividing the channel statically therefore makes the mean delay N times worse than using the single channel for all traffic.
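A quick numeric check of the two delay formulas above, as a minimal Python sketch; the values of C, λ, 1/μ and N are assumed example values.

```python
# Minimal sketch: mean delay on a single shared channel vs. FDM with N sub-channels.
# T = 1/(mu*C - lam), T_FDM = 1/(mu*(C/N) - lam/N) = N*T
# All values below are assumed example values.

C   = 100_000_000     # channel capacity, bits/s
lam = 5_000           # total arrival rate, frames/s
mu  = 1 / 10_000      # 1/mu = mean frame length = 10,000 bits

N = 10                # number of equal FDM sub-channels

T     = 1 / (mu * C - lam)
T_fdm = 1 / (mu * (C / N) - lam / N)

print(f"T = {T*1000:.3f} ms, T_FDM = {T_fdm*1000:.3f} ms (= {T_fdm/T:.0f} x T)")
```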
• Collision detection requires that a station be able to receive while transmitting, so that it can detect a collision caused by other stations. In wired networks, if a collision occurs, the energy of the received signal almost doubles and the station can sense the possibility of a collision. In wireless networks, most of the energy is used for transmission, and the energy of the received signal increases by only 5-10% if a collision occurs, so it cannot be used by the station to sense a collision. Therefore CSMA/CA (collision avoidance) has been specially designed for wireless networks.
• CSMA/CA uses three strategies:
• InterFrame Space (IFS): When a station finds the channel busy, it keeps sensing the channel. When it finds the channel idle, it does not transmit immediately; it first waits for a period of time called the IFS. IFS can also be used to define the priority of a station or a frame: the higher the IFS, the lower the priority.
• Contention Window: It is the amount of time divided into slots. A station that is
ready to send frames chooses a random number of slots as wait time.
• Acknowledgments: The positive acknowledgments and time-out timer can help
guarantee a successful transmission of the frame.
• Characteristics of CSMA/CA
• Carrier Sense: The device listens to the channel before transmitting,
to ensure that it is not currently in use by another device.
• Multiple Access: Multiple devices share the same channel and can
transmit simultaneously.
• Collision Avoidance: If two or more devices attempt to transmit at
the same time, a collision occurs. CSMA/CA uses random backoff
time intervals to avoid collisions.
• Acknowledgment (ACK): After successful transmission, the
receiving device sends an ACK to confirm receipt.
• Fairness: The protocol ensures that all devices have equal access to
the channel and no single device monopolizes it.
• Binary Exponential Backoff: If a collision occurs, the device waits for a random period of time before attempting to retransmit. The backoff time increases exponentially with each retransmission attempt (a small sketch of this backoff follows this list).
• Interframe Spacing: The protocol requires a minimum amount of
time between transmissions to allow the channel to be clear and
reduce the likelihood of collisions.
• RTS/CTS Handshake: In some implementations, a Request-To-
Send (RTS) and Clear-To-Send (CTS) handshake is used to reserve
the channel before transmission. This reduces the chance of
collisions and increases efficiency.
• Wireless Network Quality: The performance of CSMA/CA is
greatly influenced by the quality of the wireless network, such as
the strength of the signal, interference, and network congestion.
• Adaptive Behavior: CSMA/CA can dynamically adjust its behavior
in response to changes in network conditions, ensuring the
efficient use of the channel and avoiding congestion.
CSMA/CA balances the need for efficient use of the shared
channel with the need to avoid collisions, leading to reliable and
fair communication in a wireless network.
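As referenced in the Binary Exponential Backoff point above, here is a minimal Python sketch of that backoff rule; the slot time and the cap on the exponent are assumed example values, not figures from any particular standard.

```python
# Minimal sketch of binary exponential backoff as described above:
# after the k-th failed attempt, wait a random number of slots in [0, 2^k - 1].
# SLOT_TIME and MAX_EXPONENT are assumed example values.

import random

SLOT_TIME = 20e-6        # assumed slot duration, seconds
MAX_EXPONENT = 10        # cap on the exponent growth

def backoff_delay(attempt: int) -> float:
    """Random backoff (seconds) before retransmission number `attempt` (1-based)."""
    k = min(attempt, MAX_EXPONENT)               # contention window doubles each attempt
    slots = random.randint(0, 2**k - 1)
    return slots * SLOT_TIME

for attempt in range(1, 5):
    print(f"attempt {attempt}: wait {backoff_delay(attempt)*1e6:.0f} us")
```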
PROCESS FOR CSMA/CA
• Advantages of CSMA
• Increased Efficiency: CSMA ensures that only one
device communicates on the network at a time,
reducing collisions and improving network efficiency.
• Simplicity: CSMA is a simple protocol that is easy to
implement and does not require complex hardware or
software.
• Flexibility: CSMA is a flexible protocol that can be used
in a wide range of network environments, including
wired and wireless networks.
• Low cost: CSMA does not require expensive hardware
or software, making it a cost-effective solution for
network communication.
• Disadvantages of CSMA
• Limited Scalability: CSMA is not a scalable protocol and
can become inefficient as the number of devices on the
network increases.
• Delay: In busy networks, the requirement to sense the
medium and wait for an available channel can result in
delays and increased latency.
• Limited Reliability: CSMA can be affected by
interference, noise, and other factors, resulting in
unreliable communication.
• Vulnerability to Attacks: CSMA can be vulnerable to
certain types of attacks, such as jamming and denial-of-
service attacks, which can disrupt network
communication.
Collision-Free Protocols; Wireless LANs
• Almost all collisions can be avoided in CSMA/CD, but they can still occur during the contention period. Collisions during the contention period adversely affect system performance; this happens when the cable is long and the packets are short. The problem became more serious as fibre-optic networks came into use. Here we shall discuss some protocols that resolve collisions during the contention period.
• Bit-map Protocol
• Binary Countdown
• Limited Contention Protocols
• The Adaptive Tree Walk Protocol
• Pure and slotted ALOHA
Bit-map Protocol:
• Bit map protocol is collision free Protocol.
• In the bit-map method, each contention period consists of exactly N slots.
• If a station has a frame to send, it transmits a 1 bit in the corresponding slot.
• For example, if station 2 has a frame to send, it transmits a 1 bit in slot 2.
• In general, station j announces that it has a frame to send by inserting a 1 bit into slot j. In this way, each station has complete knowledge of which stations wish to transmit.
• There will never be any collisions because everyone agrees on
who goes next.
• Protocols like this, in which the desire to transmit is broadcast before the actual transmission, are called Reservation Protocols.
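A minimal Python sketch of one bit-map contention period; the number of stations and the set of ready stations are assumed example values.

```python
# Minimal sketch of one bit-map (reservation) contention period.
# Each of the N stations owns one slot; it writes a 1 there if it has a frame.
# Assumed example: N = 8 stations, stations 1, 3 and 6 are ready.

N = 8
ready = {1, 3, 6}                           # stations that have a frame to send

bitmap = [1 if i in ready else 0 for i in range(N)]
print("contention slots:", bitmap)          # every station sees the same bitmap

# After the contention period, ready stations transmit in slot-number order.
order = [i for i, bit in enumerate(bitmap) if bit]
print("transmission order:", order)         # [1, 3, 6] -> no collisions
```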
Binary Count down:
The binary countdown protocol is used to overcome the overhead of one contention bit per station in the bit-map protocol.
• In binary countdown, binary station addresses are used.
• A station wanting to use the channel broadcasts its address as a binary bit string, starting with the high-order bit.
• All addresses are assumed to be of the same length. Here, we will see an example that illustrates the working of binary countdown.
• In this method, the address bits broadcast by the different stations are ORed together, and the result decides which station gets to transmit. Suppose stations 0001, 1001, 1100 and 1011 are all trying to seize the channel for transmission. All the stations first broadcast their most significant address bit, that is 0, 1, 1, 1 respectively.
• The most significant bits are ORed together. Station 0001 sees a 1 in another station's address, knows that a higher-numbered station is competing for the channel, and so gives up for the current round.
• The other three stations 1001, 1100 and 1011 continue. In the next bit position only station 1100 has a 1, so stations 1001 and 1011 give up because their 2nd bit is 0. Station 1100 then transmits a frame, after which another bidding cycle starts.
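The bidding in the example above can be sketched as a bitwise walk over the station addresses; this is a minimal Python illustration using the same four addresses.

```python
# Minimal sketch of binary countdown arbitration for the example above.
# Stations broadcast their address bits MSB-first; the channel behaves like a
# bitwise OR. A station that sent 0 but sees a 1 drops out of the current round.

stations = ["0001", "1001", "1100", "1011"]

active = list(stations)
for bit_pos in range(len(stations[0])):              # MSB first
    channel = max(int(s[bit_pos]) for s in active)   # the wire ORs the transmitted bits
    if channel == 1:
        active = [s for s in active if s[bit_pos] == "1"]   # stations that sent 0 give up

print("winner:", active[0])                          # 1100, the highest address wins
```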
• Limited Contention Protocols:
• Collision based protocols (pure and slotted
ALOHA, CSMA/CD) are good when the
network load is low.
• Collision free protocols (bitmap, binary
Countdown) are good when load is high.
• How about combining their advantages :
• Behave like the ALOHA scheme under light
load
• Behave like the bitmap scheme under heavy
load.
Adaptive Tree Walk Protocol:
• Stations are treated as the leaves of a binary tree. After a collision, the search for ready stations walks down the tree: in each slot, only the stations under one tree node may contend, so under heavy load contention is limited to small groups of stations.
Switching
• A switch decides the port through which a data packet shall pass with the
help of its destination MAC (Media Access Control) Address.
• A switch does this effectively by maintaining a switching table, (also
known as forwarding table).
• A network switch is more efficient than a network Hub or repeater
because it maintains a switching table, which simplifies its task and
reduces congestion on a network, which effectively improves the
performance of the network.
• A switch is a dedicated piece of computer hardware that facilitates the process of switching, i.e., receiving incoming data packets and transferring them to their destination.
• A switch works at the Data Link layer of the OSI Model. A switch
primarily handles the incoming data packets from a source computer or
network and decides the appropriate port through which the data
packets will reach their target computer or network.
Process of Switching:
The switching process involves the following steps:
• Frame Reception: The switch receives a data frame or packet from a computer
connected to its ports.
• MAC Address Extraction: The switch reads the header of the data frame and
collects the destination MAC Address from it.
• MAC Address Table Lookup: Once the switch has retrieved the MAC Address, it
performs a lookup in its Switching table to find a port that leads to the MAC
Address of the data frame.
• Forwarding Decision and Switching Table Update: If the switch finds the destination MAC Address of the frame in its switching table, it forwards the data frame to the corresponding port. If the destination MAC Address does not exist in its forwarding table, it follows the flooding process, in which it sends the data frame out of all its ports except the one it came from. The switch also records the source MAC Address of every frame it receives against the incoming port; this is how it learns new MAC Addresses and updates its forwarding table (a small sketch of this decision follows this list).
• Frame Transition: Once the destination port is found, the switch sends the data
frame to that port and forwards it to its target computer/network.
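As referenced in the forwarding-decision step above, here is a minimal Python sketch of the lookup-or-flood behaviour of a learning switch; the port count, the sample MAC addresses, and the handle_frame helper are illustrative assumptions.

```python
# Minimal sketch of a learning switch's forwarding decision, as described above.
# The MAC table maps a MAC address -> the port it was last seen on.

mac_table: dict[str, int] = {}
NUM_PORTS = 4

def handle_frame(src_mac: str, dst_mac: str, in_port: int) -> list[int]:
    """Return the list of ports the frame is sent out of."""
    mac_table[src_mac] = in_port                      # learn/update the source address
    if dst_mac == "FF-FF-FF-FF-FF-FF":                # broadcast frame
        return [p for p in range(NUM_PORTS) if p != in_port]
    if dst_mac in mac_table:                          # known destination
        return [mac_table[dst_mac]]
    return [p for p in range(NUM_PORTS) if p != in_port]   # unknown -> flood

print(handle_frame("AA-AA-AA-AA-AA-AA", "BB-BB-BB-BB-BB-BB", in_port=0))  # flood
print(handle_frame("BB-BB-BB-BB-BB-BB", "AA-AA-AA-AA-AA-AA", in_port=2))  # [0]
```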
Types of Switching
• Message Switching: This is an older switching
technique that has become obsolete. In message
switching technique, the entire data block/message
is forwarded across the entire network thus, making
it highly inefficient.
• Packet Switching: This technique requires the data to be broken down
into smaller components, data frames, or packets.
• These data frames are then transferred to their destinations according to
the available resources in the network at a particular time. This switching
type is used in modern computers and even the Internet. Here, each data
frame contains additional information about the destination and other
information required for proper transfer through network components.
• Datagram Packet Switching: In Datagram Packet
switching, each data frame is taken as an individual entity
and thus, they are processed separately. Here, no
connection is established before data transmission
occurs. Although this approach provides flexibility in data
transfer, it may cause a loss of data frames or late
delivery of the data frames.
• Virtual-Circuit Packet Switching: In Virtual-Circuit Packet
switching, a logical connection between the source and
destination is made before transmitting any data. These
logical connections are called virtual circuits. Each data
frame follows these logical paths and provides a reliable
way of transmitting data with less chance of data loss.
• Circuit Switching: In this type of switching, a connection is
established between the source and destination beforehand.
This connection receives the complete bandwidth of the
network until the data is transferred completely.
This approach is better than message switching, as it does not involve sending data to the entire network; data is sent only toward its destination.
• In circuit switching network resources (bandwidth) are divided
into pieces and the bit delay is constant during a connection.
• The dedicated path/circuit established between the sender and
receiver provides a guaranteed data rate. Data can be
transmitted without any delays once the circuit is
established.
• Phases of Circuit Switching
• Circuit Establishment: A dedicated circuit between the source and destination is constructed via a number of intermediary switching centers. Once the circuit is established, the sender and receiver can exchange signals over it.
• Data Transfer: Data can be transferred between the source
and destination once the circuit has been established. The
link between the two parties remains as long as they
communicate.
• Circuit Disconnection: Disconnection in the circuit occurs
when one of the users initiates the disconnect. When the
disconnection occurs, all intermediary linkages between
the sender and receiver are terminated.
• What is Circuit Switching Used for?
• Continuous connections: Circuit switching is used for
connections that must be maintained for long periods,
such as long-distance communication. Circuit switching
technology is used in traditional telephone systems.
• Dial-up network connections: When a computer connects
to the internet through a dial-up service, it uses the public
switched network.
• Dial-up transmits Internet Protocol (IP) data packets via a
circuit-switched telephone network.
• Optical circuit switching: Data centre networks also make
use of circuit switching. Optical circuit switching is used to
expand traditional data centres and fulfil increasing
bandwidth demands.
• Advantages of Circuit Switching
• The main advantage of circuit switching is that a committed transmission channel is established between the computers, which gives a guaranteed data rate.
• In circuit switching, there is no delay in data flow because of the dedicated transmission path.
• Reliability: Circuit switching provides a high level of reliability since the dedicated
communication path is reserved for the entire duration of the communication. This
ensures that the data will be transmitted without any loss or corruption.
• Quality of service: Circuit switching provides a guaranteed quality of service, which
means that the network can prioritize certain types of traffic, such as voice and
video, over other types of traffic, such as email and web browsing.
• Security: Circuit switching provides a higher level of security compared to packet
switching since the dedicated communication path is only accessible to the two
communicating parties. This can help prevent unauthorized access and data
breaches
• Ease of management: Circuit switching is relatively easy to manage since the
communication path is pre-established and dedicated to a specific communication.
This can help simplify network management and reduce the risk of errors.
• Compatibility: Circuit switching is compatible with a wide range of devices and
protocols, which means that it can be used with different types of networks and
applications.
• Disadvantages of Circuit Switching
• Limited scalability: Circuit switching is not well-suited for large-scale
networks with many nodes, as it requires a dedicated communication path
between each pair of nodes. This can result in a high degree of complexity
and difficulty in managing the network.
• Vulnerability to failures: Circuit switching relies on a dedicated
communication path, which can make the network vulnerable to failures,
such as cable cuts or switch failures. In the event of a failure, the
communication path must be re-established, which can result in delays or
loss of data.
• Limited Flexibility: Circuit switching is not flexible, as it requires a dedicated circuit between the communicating devices. The circuit cannot be used for any other purpose until the communication is complete, which limits the flexibility of the network.
• Waste of Resources: Circuit switching reserves the bandwidth and
network resources for the duration of the communication, even if there is
no data being transmitted. This results in the wastage of resources and
inefficient use of the network.
• Expensive: Circuit switching is an expensive technology as it requires dedicated
communication paths, which can be costly to set up and maintain. This makes it
less feasible for small-scale networks and applications.
• Susceptible to Failure: Circuit switching is susceptible to failure as it relies on a
dedicated communication path. If the path fails, the entire communication is
disrupted. This makes it less reliable than other networking technologies, such
as packet switching .
• Not suitable for high traffic: Circuit switching is not suitable for high traffic,
where data is transmitted intermittently at irregular intervals. This is because a
dedicated circuit needs to be established for each communication, which can
result in delays and inefficient use of resources.
• Delay and latency: Circuit switching requires the establishment of a dedicated
communication path, which can result in delay and latency in establishing the
path and transmitting data. This can impact the real-time performance of
applications, such as voice and video.
• High cost: Circuit switching requires the reservation of resources, which can
result in a high cost, particularly in large-scale networks. This can make circuit
switching less practical for some applications.
• No prioritization: Circuit switching does not provide any mechanism for
prioritizing certain types of traffic over others.
Circuit Switching vs. Packet Switching
• In circuit switching, each data unit knows the entire path address, which is provided by the source. In packet switching, each data unit knows only the final destination address; the intermediate path is decided by the routers.
• In circuit switching, data is processed at the source system only. In packet switching, data is processed at all intermediate nodes, including the source system.
• The delay between data units in circuit switching is uniform. The delay between data units in packet switching is not uniform.
• Circuit switching is not convenient for handling bilateral traffic. Packet switching is suitable for handling bilateral traffic.
• In circuit switching there is a physical path between the source and the destination. In packet switching there is no physical path between the source and the destination.
Pure ALOHA vs. Slotted ALOHA
• In pure ALOHA, data can be transmitted at any time by any station. In slotted ALOHA, data can be transmitted only at the beginning of a time slot.
• Pure ALOHA was introduced under the leadership of Norman Abramson in 1970 at the University of Hawaii. Slotted ALOHA was introduced by Roberts in 1972 to improve pure ALOHA's capacity.
• Time is not synchronized in pure ALOHA. Time is globally synchronized in slotted ALOHA.
• Pure ALOHA does not decrease the number of collisions to half. Slotted ALOHA decreases the number of collisions to half.
• In pure ALOHA, the vulnerable time is 2 x Tt. In slotted ALOHA, the vulnerable time is Tt.
• In pure ALOHA, the probability of successful transmission of a frame is S = G * e^(-2G). In slotted ALOHA, it is S = G * e^(-G).
• The maximum throughput in pure ALOHA is about 18%. The maximum throughput in slotted ALOHA is about 37%.
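The two throughput formulas in the comparison above can be checked numerically; this is a minimal Python sketch using only the formulas shown.

```python
# Minimal sketch: throughput of pure vs. slotted ALOHA as a function of the
# offered load G (frames per frame time), using the formulas in the table above.

import math

def pure_aloha(G: float) -> float:
    return G * math.exp(-2 * G)        # maximum 1/(2e) ~ 18.4% at G = 0.5

def slotted_aloha(G: float) -> float:
    return G * math.exp(-G)            # maximum 1/e ~ 36.8% at G = 1

for G in (0.25, 0.5, 1.0):
    print(f"G={G}: pure={pure_aloha(G):.3f}, slotted={slotted_aloha(G):.3f}")
```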
• 1. Explain the OSI Model with a neat diagram.
• 2. Describe the characteristics and features of a computer network.
• 3. Describe different types of computer networks.
• 4. Describe different topologies in computer networks.
• 5. Describe different guided transmission media, with examples.