4-MAC Layer

Prepared By: Divya Kaurani & Mayank Yadav Computer Network

Medium Access Control Sublayer


The Medium Access Control (MAC) sublayer is a part of the data link layer responsible for
controlling access to the transmission medium. It deals with the channel allocation problem,
addressing how multiple nodes communicate and share a single communication channel.

Put simply, the MAC layer acts like a traffic cop for data traveling on a network: its main job is to manage how devices share the same communication channel without causing collisions.

Key Features:

● Controlling access: It coordinates how different devices can use the same network at the
same time.

● Avoiding collisions: It helps prevent data transmissions from colliding with each other by implementing various strategies.

● Listening to the medium: It makes sure that devices "listen" to check if the network is
busy before sending data. If the network is busy, they wait for their turn.

● Randomizing access: If devices detect the network is busy, they wait for a random
amount of time before trying again to send their data.

● Handling different types of networks: It works not only for wired networks like Ethernet
but also for wireless networks like Wi-Fi.

Channel Allocation Problem:

Channel allocation is the process of dividing a single channel among multiple users so that each can carry out user-specific tasks. The number of users may vary each time the process takes place. If there are N users, the channel can be divided into N equal-sized subchannels, with each user assigned one portion. If the number of users is small and roughly constant, Frequency Division Multiplexing (FDM) works well, as it is a simple and efficient way to allocate channel bandwidth.

The channel allocation problem refers to the challenge of coordinating the use of a shared
communication medium among multiple users. This problem becomes more complex in
scenarios where multiple users contend for access simultaneously.

Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs
and MANs, and Dynamic Channel Allocation.


The Data Link Layer is responsible for transmission of data between two nodes. Its main
functions are-

● Data Link Control


● Multiple Access Control

When a sender and receiver have a dedicated link, data link control alone is enough to manage the channel. When there is no dedicated path between two devices, multiple stations access the channel and may transmit simultaneously, which can cause collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk on the channel.

For example, suppose there is a classroom full of students. When the teacher asks a question, all the students (the stations) start answering at the same time (transmitting simultaneously), so the answers overlap and information is lost. It is therefore the responsibility of the teacher (the multiple access protocol) to manage the students and let them answer one at a time.

ALOHA

ALOHA was designed for wireless LANs (Local Area Networks) but can be used on any shared medium. Under this method, any station may transmit over the network whenever it has a data frame ready for transmission.

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost when multiple stations transmit at the same time.
4. ALOHA relies on acknowledgment of frames rather than collision detection: a missing acknowledgment is the only sign that a frame was lost.
5. It requires retransmission of data after some random amount of time.

Pure Aloha

Pure ALOHA is used whenever a station has data ready to send. In pure ALOHA, each station transmits on the channel without checking whether the channel is idle, so collisions may occur and frames can be lost. After transmitting a frame, the station waits for the receiver's acknowledgment. If the acknowledgment does not arrive within a specified time, the station assumes the frame was lost or destroyed, waits a random amount of time called the backoff time (Tb), and retransmits. It repeats this until the frame is successfully delivered to the receiver.
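The transmit / wait-for-ACK / back-off loop described above can be sketched in Python. This is an illustrative toy model, not a real MAC implementation: the channel_send callback (which returns True when an ACK arrives) and the lossy channel that drops the first two attempts are both assumptions made up for the example.

```python
import random

def send_with_backoff(frame, channel_send, max_backoff=16, rng=random.Random(42)):
    """Pure-ALOHA-style sender: transmit immediately, wait for an ACK,
    and on timeout wait a random backoff before retrying."""
    attempt = 0
    while True:
        attempt += 1
        if channel_send(frame):            # True means an ACK came back in time
            return attempt                 # number of tries it took
        # No ACK: assume the frame collided or was lost.
        backoff_slots = rng.randint(0, max_backoff - 1)
        # A real station would now sleep for backoff_slots * slot_time.

# Toy channel that "loses" the first two transmissions, then succeeds.
outcomes = iter([False, False, True])
tries = send_with_backoff("frame-1", lambda f: next(outcomes))
print(tries)  # 3
```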

Slotted ALOHA:

Slotted ALOHA was designed to improve on pure ALOHA's efficiency, because pure ALOHA has a very high probability of frame collisions. In slotted ALOHA, the shared channel is divided into fixed time intervals called slots. A station that wants to send a frame may transmit only at the beginning of a slot, and only one frame is allowed per slot. If a station misses the beginning of a slot, it must wait for the beginning of the next one. A collision can still occur, however, when two or more stations try to send a frame at the beginning of the same time slot.
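The efficiency difference between the two schemes follows from the standard throughput formulas S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, where G is the offered load. A small Python check (added here for illustration; the formulas themselves are textbook results, not from these notes):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G); peaks at G = 0.5 with S ~ 18.4%."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G); peaks at G = 1 with S ~ 36.8%."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))    # 0.184
print(round(slotted_aloha_throughput(1.0), 3)) # 0.368
```

Halving the vulnerable period (one slot instead of two frame times) is exactly what doubles the peak throughput.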

Difference between Pure ALOHA and Slotted ALOHA

● Transmission time: in pure ALOHA a station can transmit at any time; in slotted ALOHA transmission may start only at the beginning of a slot.
● Vulnerable period: 2 × Tt for pure ALOHA, Tt for slotted ALOHA.
● Maximum throughput: about 18.4% (at G = 0.5) for pure ALOHA, about 36.8% (at G = 1) for slotted ALOHA.
● Synchronization: slotted ALOHA requires all stations to have synchronized clocks; pure ALOHA does not.

CSMA

CSMA stands for Carrier Sense Multiple Access. It is a network protocol that allows multiple
devices to share the same communication channel, such as an Ethernet cable or a set of radio
frequencies. The primary function of CSMA is to coordinate the transmission of data packets
among multiple nodes in a network to prevent collisions and ensure efficient data transmission.

In CSMA, before transmitting data, a device listens to the communication channel to determine
whether any other device is currently transmitting. If the channel is found to be idle, the device
can then transmit its data. However, if the channel is busy, the device waits for a random period
of time and rechecks the channel's status before attempting to transmit again.

CSMA is a fundamental protocol used in various network technologies, including Ethernet and
Wi-Fi, to manage and control access to the shared communication medium. Different variations
of CSMA, such as CSMA/CD (CSMA with Collision Detection) and CSMA/CA (CSMA with
Collision Avoidance), incorporate additional features to further enhance the efficiency and
reliability of data transmission in network environments.

There are various CSMA protocols:

1. 1-persistent CSMA
2. Non persistent CSMA
3. p-persistent CSMA
4. CSMA/CD (CSMA with Collision Detection)

1. 1-persistent CSMA:

● Before sending data, a station checks if the channel is free.


● If the channel is busy, it waits until it's idle to transmit.
● If a collision happens, the station waits for a random time before trying again.
● It's called 1-persistent because the station transmits with probability 1 as soon as it finds the channel free.
● The time it takes for signals to travel affects the protocol's performance.
● If a station starts transmitting just before another becomes ready, a collision
occurs.
● Even with no signal delay, collisions can still happen.
● If two stations become ready during a transmission, they might collide.
● If they were more patient, there would be fewer collisions.

2. Non-persistent CSMA:

● Before sending, a station checks if the channel is free.


● If the channel is busy, the station waits for a random time before checking again.
● It's more patient than 1-persistent CSMA, leading to improved channel usage.
● Delays are longer compared to 1-persistent CSMA.

3. p-persistent CSMA:

● Used in slotted channels where time is divided into slots.


● When a station is ready to send, it checks if the channel is free and transmits with
a probability p if it is.
● If the channel is busy, it waits until the next slot with a probability q=1−p.
● This process is repeated until the frame is transmitted or another station starts
transmitting.
● If a collision occurs, the station waits for a random time before trying again.
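The per-slot decision of p-persistent CSMA can be sketched as a small function. This is an illustrative sketch of the rule described above, not code from any networking library; the function name and return labels are made up for the example.

```python
import random

def p_persistent_decision(channel_idle, p, rng):
    """One slot of p-persistent CSMA:
    - channel busy          -> wait for the next slot
    - channel idle, prob p  -> transmit
    - channel idle, prob q  -> defer to the next slot (q = 1 - p)
    """
    if not channel_idle:
        return "wait"
    if rng.random() < p:
        return "transmit"
    return "defer"

rng = random.Random(1)
print(p_persistent_decision(False, 0.3, rng))  # wait
print(p_persistent_decision(True, 1.0, rng))   # transmit
```

Note that with p = 1 this reduces to 1-persistent CSMA, which is why small p values cause fewer collisions at the cost of longer delays.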

Figure shows the computed throughput versus offered traffic for all three protocols.

4. CSMA/CD:

● CSMA/CD is an improvement over CSMA. It's used in LANs and allows quick
termination of frames in case of a collision.
● When a collision is detected, the station abruptly stops transmitting, saving time
and bandwidth.
● Stations wait for a random period before attempting to transmit again, assuming
the channel is free.
● It operates in a cycle of contention, transmission, and idle periods, with stations
checking for collision and adjusting their transmission accordingly.
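A standard consequence of collision detection, added here for illustration (it is a textbook result, not stated in the notes above): a station must still be transmitting when the collision signal returns, so a frame must last at least one round-trip propagation time, Tt >= 2·Tp. That fixes a minimum frame size for a given bandwidth and cable length.

```python
def min_frame_bits(bandwidth_bps, prop_delay_s):
    """Minimum frame length for CSMA/CD collision detection:
    transmission time Tt must be at least 2 * Tp (one round trip),
    so the frame needs at least 2 * Tp * bandwidth bits."""
    return 2 * prop_delay_s * bandwidth_bps

# Classic 10 Mbps Ethernet with ~25.6 us worst-case one-way delay:
print(int(min_frame_bits(10_000_000, 25.6e-6)))  # 512 bits = 64 bytes
```

This is where classic Ethernet's 64-byte minimum frame size comes from.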

Controlled Access Protocols:

In controlled access, the stations seek information from one another to find which station has
the right to send. It allows only one node to send at a time, to avoid the collision of messages on
a shared medium. The three controlled-access methods are:

1. Reservation
2. Polling
3. Token Passing

● In the reservation method, a station needs to make a reservation before sending data.

● The timeline has two kinds of periods:


1. Reservation interval of fixed time length
2. Data transmission period of variable frames.

● If there are N stations, the reservation interval is divided into N slots, and each station owns one slot.
● If station 1 has a frame to send, it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.
● In general, the i-th station announces that it has a frame to send by inserting a 1 bit into the i-th slot. After all N slots have been checked, every station knows which stations wish to transmit.
● The stations that reserved slots then transmit their frames in that order.
● After the data transmission period, the next reservation interval begins.
● Since everyone agrees on who goes next, there are never any collisions.
● The following figure shows a situation with five stations and a five-slot reservation frame.
In the first interval, only stations 1, 3, and 4 have made reservations. In the second
interval, only station 1 has made a reservation.
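One reservation interval of this bit-map scheme can be sketched in a few lines. This is an illustrative model of the procedure described above; the function name and input format are assumptions made for the example.

```python
def bitmap_round(wants_to_send):
    """One reservation interval: each station sets a 1 bit in its own
    slot if it has a frame queued; the resulting bitmap fixes the
    collision-free transmission order for the data period."""
    slots = [1 if wants else 0 for wants in wants_to_send]
    order = [i for i, bit in enumerate(slots) if bit == 1]
    return slots, order

# Five stations; stations 0, 2 and 3 (0-indexed) have frames queued.
slots, order = bitmap_round([True, False, True, True, False])
print(slots)  # [1, 0, 1, 1, 0]
print(order)  # [0, 2, 3]  -> they transmit in that order, no collisions
```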

Advantages of Reservation:

1. Predictable network performance: Reservations make data access times and rates easy to predict, ensuring fixed timing for data transmission.
2. Prioritization: It allows setting priorities for faster secondary access, ensuring better
service for high-priority data.
3. Reduced contention: Reservation-based methods minimize contention, enhancing
network efficiency and reducing packet loss.
4. Quality of Service (QoS) support: Different reservation types can be allocated for specific
traffic types, ensuring better handling of high-priority data.
5. Efficient bandwidth use: Reservations enable efficient use of available bandwidth by
multiplexing different reservation requests on the same channel.
6. Support for multimedia: It is beneficial for multimedia applications that require
guaranteed network resources, ensuring high-quality performance.

Disadvantages of Reservation:

1. Dependability concerns: The system relies heavily on the reservation mechanism working correctly, which can raise trust issues.
2. Decreased capacity: Under light loads, the channel's effective data rate and capacity decrease, leading to increased turnaround time.

Polling

● Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
● In this, one acts as a primary station(controller) and the others are secondary stations.
All data exchanges must be made through the controller.
● The message sent by the controller contains the address of the node being selected for
granting access.
● Although all nodes receive the message, only the addressed one responds and sends data, if any. If there is no data, a “poll reject” (NAK) message is usually sent back.
● Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.

Advantages of Polling:

● The maximum and minimum access time and data rates on the channel are fixed and
predictable.
● It offers high efficiency.
● It offers high bandwidth utilization.
● No slot is wasted in polling.

● Priorities can be assigned to guarantee faster access for certain secondary stations.

Disadvantages of Polling:

● It consumes more time, since the polling messages themselves add overhead.
● Link sharing depends entirely on the controller, so a failure of the primary station halts the whole channel.
● Polling turns are spent even on stations that have no data to send.
● An increase in the turnaround time leads to a drop in the channel's data rates under low loads.

Efficiency: Let Tpoll be the time spent polling and Tt be the time required for transmission of data. Then,

Efficiency = Tt/(Tt + Tpoll)
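The efficiency formula above is easy to check numerically; this tiny helper is added purely for illustration:

```python
def polling_efficiency(t_t, t_poll):
    """Efficiency = Tt / (Tt + Tpoll): the fraction of channel time
    spent on useful data rather than polling overhead."""
    return t_t / (t_t + t_poll)

print(round(polling_efficiency(10.0, 1.0), 3))  # 0.909
print(polling_efficiency(1.0, 1.0))             # 0.5
```

As the example shows, efficiency approaches 1 only when data transmissions are long relative to the polling overhead; short frames make polling expensive.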

Token Passing:

● In the token passing scheme, the stations are connected logically to each other in the
form of a ring and access to stations is governed by tokens.
● A token is a special bit pattern or a small message, which circulates from one station to
the next in some predefined order.
● In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in a token bus, each station uses the bus to send the token to the next station in some predefined order.
● In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it may send that frame before passing the token on. If it has no queued frame, it simply passes the token along.
● After sending a frame, each station must wait for all N stations (including itself) to send
the token to their neighbors and the other N – 1 stations to send a frame, if they have
one.
● Problems such as token duplication, token loss, and the insertion or removal of stations must be handled for this scheme to operate correctly and reliably.

● The performance of a token ring can be characterized by two parameters:

● Delay, a measure of the time between when a packet is ready and when it is delivered. The average delay to pass the token to the next station is a/N.
● Throughput, a measure of successful traffic:

S = 1/(1 + a/N) for a < 1
S = 1/{a(1 + 1/N)} for a > 1

where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
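The two throughput cases can be wrapped in one function for a quick sanity check; the numbers below are illustrative, not taken from the notes:

```python
def token_ring_throughput(a, N):
    """Token-ring throughput S from the formulas above,
    with a = Tp/Tt (propagation over transmission delay)
    and N the number of stations."""
    if a < 1:
        return 1 / (1 + a / N)
    return 1 / (a * (1 + 1 / N))

print(round(token_ring_throughput(0.5, 10), 3))  # 0.952
print(round(token_ring_throughput(2.0, 10), 3))  # 0.455
```

The example shows why token rings perform well when frames are long relative to the ring's propagation delay (small a) and degrade when propagation dominates.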

Advantages of Token passing:

1. Modern token-ring installations can use structured cabling and include built-in fault-management features such as bypass relays and automatic reconfiguration.
2. It provides good throughput under high-load conditions.

Disadvantages of Token passing:

1. It is expensive.
2. Topology components cost more than those of other, more widely used standards.
3. Token ring hardware is complex, which in practice ties a network to a single manufacturer's equipment.

Wireless LAN Protocols:

Wireless LANs (WLANs) are wireless local area networks that allow devices to connect to a
network wirelessly, without the need for physical cables. WLANs enable users to access
network resources and the internet without being tethered to a specific location. WLANs are
commonly used in environments where mobility and flexibility are essential, such as offices,
public spaces, and homes.

Configuration of Wireless LANs:

Each station in a Wireless LAN has a wireless network interface controller. A station can be of
two categories −

● Wireless Access Point (WAP) − WAPs or simply access points (AP) are generally
wireless routers that form the base stations or access points. The APs are wired together
using fiber or copper wires, through the distribution system.

● Client − Clients are workstations, computers, laptops, printers, smartphones, etc. They are typically within a few tens of meters of an AP.

Imagine a busy network where multiple devices want to send data at the same time, causing
data collisions and interference. To avoid this chaos, the RTS-CTS protocol steps in, acting like
a virtual traffic controller for the data.

The RTS-CTS (Request to Send - Clear to Send) protocol is used in wireless networks to prevent data collisions and manage the flow of data between sender and receiver. Here's how it works:

● Request to Send (RTS): When a device wants to send data, it first asks for permission
by sending a short message (RTS) to the intended receiver. This message contains
details about the data, such as how much data it plans to send and for how long.

● Clear to Send (CTS): If the receiver is ready to accept the data, it sends back a
message (CTS) to the sender. This message signals that the channel is clear, and the
sender can go ahead and transmit its data without any worries.

● Data Transmission: Once the sender receives the clear signal (CTS), it starts sending
the data to the receiver. During this time, the channel is reserved for the sender,
preventing other devices from interfering and causing data collisions.

● Acknowledgment: After the data transmission, the receiver sends an acknowledgment message (ACK) to the sender, confirming that the data arrived safely. This ensures that the sender knows the data reached the intended destination without any issues.

In this way, the RTS-CTS protocol helps manage the data flow in busy wireless networks,
ensuring that data transmission happens smoothly without interruptions or collisions. It's like
having a coordinated system in place to regulate the movement of data traffic on a busy road.
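The four-step exchange above can be sketched as a toy state sequence. This is purely illustrative; the function, its arguments, and the message labels are assumptions for the example, not an actual 802.11 implementation.

```python
def rts_cts_exchange(receiver_ready, data):
    """Toy model of the RTS -> CTS -> DATA -> ACK handshake.
    Returns the sequence of frames exchanged on the channel."""
    log = ["RTS"]                      # sender asks permission to transmit
    if not receiver_ready:
        return log                     # no CTS arrives: sender must back off
    log.append("CTS")                  # receiver declares the channel clear
    log.append(f"DATA({len(data)}B)")  # channel is reserved for the sender
    log.append("ACK")                  # receiver confirms safe delivery
    return log

print(rts_cts_exchange(True, b"hello"))  # ['RTS', 'CTS', 'DATA(5B)', 'ACK']
print(rts_cts_exchange(False, b"hi"))    # ['RTS']  (sender retries later)
```

The key design point is that every station in range of either the sender (hears RTS) or the receiver (hears CTS) learns how long to stay silent, which is what makes this handshake useful against the hidden terminal problem discussed next.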

Hidden Terminal Problem:

● The hidden terminal problem occurs when two wireless nodes are within range of a
common access point, but they are unable to detect each other.
● This results in simultaneous transmissions by the nodes, causing collisions and data
loss.
● It typically arises in a scenario where a wireless device is within the transmission range
of an access point but not within the transmission range of other wireless devices
communicating with the same access point.

Hidden Terminal Problem Solution using MACA:

● When a station wants to transmit data, it sends an RTS frame to the receiver.
● The RTS frame informs other potential transmitters within range, even if they are
hidden, about the ongoing data transmission.
● The receiver sends a CTS frame to all stations within range, indicating that the
channel will be occupied for a specific duration.
● Stations within range, including hidden terminals, detect the CTS frame and
refrain from transmitting, avoiding potential collisions.

Exposed Terminal Problem:

● The exposed terminal problem occurs when a wireless node refrains from
transmitting data, assuming that another node is already transmitting.
● This situation arises due to the conservative nature of the wireless medium
access control protocol, which leads to decreased network throughput.
● It often happens when a wireless node is within the transmission range of a
specific node but out of the transmission range of other nodes with which it can
communicate.

Exposed Terminal Problem Solution using MACA:

● When a station sends an RTS frame to the receiver, other exposed terminals
within range can receive this RTS frame.
● This process allows other stations to be aware of the ongoing transmission and
prevents unnecessary deferral of their own transmissions.
● Stations that were exposed terminals can now understand the duration of the
ongoing transmission and plan their own transmissions accordingly.

Ethernet

Ethernet is a popular LAN technology known for its simplicity and cost-effectiveness. It works well for wired networks and is based on the IEEE 802.3 standard. Ethernet traditionally uses a bus topology and operates in the physical and data link layers of the OSI model. It handles collisions using the CSMA/CD access control mechanism.

While wireless networks have become more common, Ethernet is still widely used, especially in
environments where security and reliability are critical. It offers a secure and stable connection,
making it preferable for many businesses and organizations.

Switched Ethernet refers to a type of network where devices are connected through network
switches, enabling them to communicate with each other within a local area network (LAN).
Unlike traditional Ethernet, where all devices are connected to the same network segment and
share the same bandwidth, switched Ethernet allows for dedicated communication channels
between devices, leading to improved performance, speed, and efficiency.

There are two main types of Switched Ethernet:

1. Fast Ethernet:
Fast Ethernet is a standard for Ethernet networks that carry traffic at the rate of 100
megabits per second (Mbps). It provides a significant speed improvement over traditional
Ethernet, making it suitable for small to medium-sized networks, supporting various
network topologies.

Features:

1. Operates at 100 Mbps.


2. Offers a tenfold speed increase compared to traditional Ethernet.
3. Suitable for small to medium-sized networks.
4. Commonly used in various network topologies.

Advantages:

1. Provides faster data transfer rates.


2. Improves network performance and efficiency.
3. Enhances reliability and cost-effectiveness.
4. Enables simultaneous data transmission without collisions.

Disadvantages:

1. Limited bandwidth for larger networks with high data traffic.


2. Potential performance limitations for data-intensive applications.
3. Not as suitable for enterprise-level networks with extensive data transfer needs.

2. Gigabit Ethernet:
Gigabit Ethernet is an extension of the Ethernet technology that offers data transmission
rates of 1 gigabit per second (Gbps), providing significantly faster speeds than Fast
Ethernet. It is ideal for enterprise-level networks with high bandwidth requirements,
effectively handling heavy network traffic and data-intensive applications.

Features:

1. Supports data transfer rates up to 1000 Mbps (1 Gbps).


2. Provides significantly faster speeds than Fast Ethernet.
3. Ideal for enterprise-level networks with high bandwidth demands.

Advantages:

1. Offers exceptionally high-speed data transmission capabilities.


2. Efficiently handles heavy network traffic and data-intensive applications.
3. Enhances network performance for large file transfers and video streaming.
4. Suitable for organizations requiring reliable and efficient data communication
solutions.

Disadvantages:

1. Requires compatible hardware to achieve maximum speeds.


2. May involve higher costs for implementation and maintenance.
3. Utilizes more power compared to slower Ethernet standards.
4. Some older devices may not support Gigabit Ethernet speeds.

DQDB:

DQDB (Distributed Queue Dual Bus) is a protocol used for building high-speed networks in metropolitan areas. It works like a two-lane road, with traffic flowing in opposite directions in each lane. The network has two unidirectional buses, A and B, which can carry data, video, and voice traffic over distances of up to 30 miles, operating at speeds between 34-55 Mbps.

Directional Traffic:

Each bus carries traffic in only one direction, opposite to that of the other bus. The start of a bus is represented as a square and its end as a triangle (Fig. 1). Bus A traffic moves from right to left (i.e., from station 1 to station 5), whereas bus B traffic moves from left to right (i.e., from station 5 to station 1).

Upstream and Downstream:

The upstream/downstream relationship between stations in a DQDB network depends on the direction of traffic on each bus. Consider bus A in Fig. 1: stations 1 and 2 are upstream with respect to station 3, and stations 4 and 5 are downstream with respect to station 3. On bus A, station 1 is the head of the bus, as it has no upstream station, and station 5, having no downstream station, is regarded as the end of bus A.

Working:

The head of bus A, station 1, generates empty slots for use on bus A. Similarly, the head of bus B, station 5, generates empty slots for use on bus B. Each empty slot travels down its bus until a transmitting station drops data into it, and the intended destination then reads the data.


Adaptive tree walk protocol in limited contention protocol:

The Adaptive Tree Walk (ATW) protocol is a limited contention protocol that operates on the
principle of efficiently handling contention for shared resources in a network. Specifically, it is
used to manage contention in scenarios where multiple devices need to access a common
communication channel. The ATW protocol implements an adaptive approach, allowing it to
dynamically adjust its behavior based on the current network conditions.

● Hierarchical Grouping: The stations are organized in a binary tree structure, with
internal nodes representing groups and leaf nodes representing individual stations. This
hierarchical arrangement facilitates the controlled contention for channel access.

● Collision Management: In the ATW protocol, the contention period is divided into
discrete time slots, and the contention rights of the stations are restricted. If a collision
occurs, the nodes are split into specific groups that are allowed to compete for the
channel during a particular slot.

● Group-Based Contention: Depending on the success or failure of channel access by a group, the groups may be divided further. Division continues until each group contains only one node, ensuring that the channel is accessed by a single station without contention.

● Depth-First Search Algorithm: The protocol employs a depth-first search to locate the contending stations within the groups, enabling a systematic approach to managing contention during each time slot.

Slot-0 : C*, E*, F*, H* (all ready nodes under node 0 may try), conflict
Slot-1 : C* (all ready nodes under node 1 may try), C sends
Slot-2 : E*, F*, H* (all ready nodes under node 2 may try), conflict
Slot-3 : E*, F* (all ready nodes under node 5 may try), conflict
Slot-4 : E* (all nodes under E may try), E sends
Slot-5 : F* (all nodes under F may try), F sends
Slot-6 : H* (all ready nodes under node 6 may try), H sends
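The slot trace above can be reproduced with a short recursive sketch. The station letters, the binary halving of groups, and the function name are illustrative assumptions matching the trace, not code from the notes:

```python
def adaptive_tree_walk(group, ready):
    """Probe a group of stations; on a collision (more than one ready
    station) split the group in half and probe each half depth-first.
    Returns one (group, contenders) entry per contention slot."""
    slots = []

    def probe(g):
        contenders = [s for s in g if s in ready]
        slots.append((list(g), contenders))
        if len(contenders) > 1:        # collision: split the group and recurse
            mid = len(g) // 2
            probe(g[:mid])
            probe(g[mid:])

    probe(group)
    return slots

# Eight stations under the root; C, E, F and H want to send, as in the trace.
log = adaptive_tree_walk(list("ABCDEFGH"), ready={"C", "E", "F", "H"})
for g, c in log:
    status = "conflict" if len(c) > 1 else (c[0] + " sends" if c else "idle")
    print("".join(g), "->", status)
```

Running this prints seven slots (root conflict, C sends, right-subtree conflicts, then E, F, and H send) in the same order as the Slot-0 through Slot-6 trace above.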

Standard Ethernet is also referred to as Basic Ethernet. It uses 10Base5 coaxial cables for communication. Ethernet provides service up to the data link layer. At the data link layer, Ethernet divides the data stream received from the upper layers and encapsulates it into frames before passing them on to the physical layer.

The main parts of an Ethernet frame are:
● Preamble − It is the starting field that provides alert and timing pulse for
transmission.
● Destination Address − It is a 6-byte field containing the physical address of
destination stations.
● Source Address − It is a 6-byte field containing the physical address of the
sending station.
● Length − It stores the number of bytes in the data field.
● Data and Padding − This carries the data from the upper layers.
● CRC − It contains error detection information.
Standard Ethernet has many physical layer implementations. The four main
physical layer implementations are shown in the following diagram:
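The frame fields listed above can be packed with Python's struct module. This is a simplified sketch for illustration only: it builds just the address/length/data portion (the preamble is generated and the CRC computed by the hardware, so both are omitted here), and the padding rule assumes the classic 46-byte minimum data field.

```python
import struct

def ethernet_frame(dst, src, payload):
    """Build a toy IEEE 802.3 frame body: 6-byte destination address,
    6-byte source address, 2-byte length field, then the data padded
    up to the 46-byte minimum. Preamble and CRC are omitted."""
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("MAC addresses are 6 bytes")
    data = payload + b"\x00" * max(0, 46 - len(payload))  # pad short payloads
    return dst + src + struct.pack("!H", len(payload)) + data

frame = ethernet_frame(b"\xff" * 6, b"\xaa" * 6, b"hi")
print(len(frame))  # 60 bytes; adding the 4-byte CRC gives the 64-byte minimum
```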
