Data Link Layer
The Data Link Layer is responsible for transferring data between two devices on the same network. It ensures that data is sent and received correctly over the physical medium (such as cables or wireless signals) by organizing data into frames and handling error detection and correction.
Advantages of Ethernet
Ethernet has many benefits for users, which is why it has grown so popular.
Disadvantages of Ethernet
Despite its widespread use, Ethernet does have its share of disadvantages.
How Ethernet Works
1. Frames and Media
● Frames: Data is transmitted in units called frames, which contain source and destination MAC (Media Access Control) addresses, as well as error-checking information.
● Medium: Ethernet can use various physical media, including twisted pair cables (like
Cat5e or Cat6), fiber optics, or coaxial cables.
2. Communication Process
● Sending Data: When a device wants to send data, it creates an Ethernet frame containing
the necessary information and data payload.
● Addressing: Each device on the network has a unique MAC address. The sender’s MAC
address is included in the frame, along with the recipient's MAC address.
● Collision Detection: Ethernet uses a protocol called Carrier Sense Multiple Access with
Collision Detection (CSMA/CD). This means that before a device transmits data, it
"listens" to the network to check if it’s clear. If two devices transmit simultaneously, a
collision occurs, and both devices will stop, wait a random amount of time, and try again.
3. Receiving Data
● Frame Reception: When a device receives a frame, it checks the destination MAC
address. If it matches its own, the device processes the data; if not, it ignores the frame.
● Error Checking: Each frame includes a Frame Check Sequence (FCS) for error detection. If the frame is corrupted, it will be discarded (a brief sketch of this check appears after this list).
● Hubs: In a basic setup, a hub broadcasts incoming frames to all devices. This can lead to
collisions and inefficiencies.
● Switches: More commonly, networks use switches that intelligently send frames only to
the intended recipient, reducing collisions and improving efficiency.
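To make the error-checking step above concrete, here is a minimal Python sketch of how a receiver might validate a frame. It uses CRC-32 (the polynomial Ethernet's FCS is based on) purely for illustration; real hardware computes and transmits the FCS with its own bit and byte ordering.

    import zlib

    def add_fcs(frame_body: bytes) -> bytes:
        # Append a 4-byte CRC-32 checksum as a stand-in for the Ethernet FCS.
        fcs = zlib.crc32(frame_body) & 0xFFFFFFFF
        return frame_body + fcs.to_bytes(4, "little")

    def check_fcs(frame_with_fcs: bytes) -> bool:
        # Recompute the CRC over the frame body and compare it with the trailing FCS.
        body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
        return zlib.crc32(body) & 0xFFFFFFFF == int.from_bytes(fcs, "little")

    frame = add_fcs(b"\xaa\xbb\xcc\xdd\xee\xff" + b"payload")
    corrupted = frame[:5] + b"\x00" + frame[6:]   # flip one payload byte in transit
    print(check_fcs(frame))       # True: frame intact, so the receiver accepts it
    print(check_fcs(corrupted))   # False: the corrupted frame is discarded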
Ethernet Frame Format
Preamble (7 bytes):
• A series of alternating 0s and 1s. It signals the start of the frame and helps the sender and receiver establish bit synchronization.
Start of Frame Delimiter (SFD) (1 byte):
• Always set to 10101011. It marks the beginning of the actual frame, right before the
destination address.
Destination Address (6 bytes):
• The MAC address of the intended recipient. It uniquely identifies the receiving device.
Source Address (6 bytes):
• The MAC address of the device that sent the frame.
Length (2 bytes):
• Indicates the number of bytes in the data (payload) field, not the entire frame. Although two bytes could encode values up to 65535, Ethernet limits the payload to 1500 bytes; values of 1536 (0x0600) or higher in this field are instead interpreted as an EtherType (see below).
Data (Payload):
• This is the actual data being transmitted. If Internet Protocol (IP) is used, the IP header and data are inserted here.
Frame Check Sequence (FCS) (4 bytes):
• A CRC-32 checksum computed over the frame, used by the receiver to detect transmission errors (see Error Checking above).
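The fixed header fields above can be illustrated with a short parsing sketch in Python. This is a simplification: it assumes the preamble and SFD have already been stripped by the hardware, ignores the trailing FCS, and the sample bytes are made up.

    import struct

    def parse_header(frame: bytes):
        # First 6 bytes: destination MAC, next 6: source MAC, next 2: length or EtherType.
        dst, src, length_or_type = struct.unpack("!6s6sH", frame[:14])
        mac = lambda b: ":".join(f"{x:02x}" for x in b)
        if length_or_type <= 1500:
            kind = f"payload length {length_or_type} bytes (IEEE 802.3 framing)"
        elif length_or_type >= 0x0600:
            kind = f"EtherType 0x{length_or_type:04x} (Ethernet II framing)"
        else:
            kind = "reserved / undefined value"
        return mac(dst), mac(src), kind, frame[14:]

    raw = bytes.fromhex("ffffffffffff" "0a1b2c3d4e5f" "0800") + b"...payload..."
    print(parse_header(raw))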
VLAN Tagging:
• Ethernet frames can carry a VLAN tag, allowing network administrators to logically
divide a physical network into virtual networks. Each VLAN is identified by its unique
VLAN ID.
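As an illustration, an 802.1Q tag inserts four extra bytes after the source address: a 2-byte tag protocol identifier (0x8100) followed by 2 bytes whose low 12 bits are the VLAN ID. A minimal sketch of reading the VLAN ID from a raw frame (the sample bytes are placeholders):

    import struct

    def vlan_id(frame: bytes):
        # Bytes 12-13 hold the EtherType/TPID; 0x8100 signals an 802.1Q VLAN tag.
        tpid = struct.unpack("!H", frame[12:14])[0]
        if tpid != 0x8100:
            return None                      # untagged frame
        tci = struct.unpack("!H", frame[14:16])[0]
        return tci & 0x0FFF                  # low 12 bits of the tag are the VLAN ID

    tagged = bytes(12) + bytes.fromhex("8100") + bytes.fromhex("0064") + bytes.fromhex("0800")
    print(vlan_id(tagged))   # 100 -> this frame belongs to VLAN 100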
Jumbo Frames:
• Some network devices support Jumbo Frames, which are Ethernet frames with a
payload larger than the standard 1500 bytes. This is typically used for high-performance
networks to reduce overhead.
EtherType Field:
• This field identifies the protocol carried in the Ethernet frame's payload.
o 0x0800: Indicates the payload is an IP packet.
o 0x0806: Indicates the payload is an ARP packet (Address Resolution Protocol).
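In practice, the receiving stack uses this field to hand the payload to the right protocol module. A small, illustrative dispatch sketch in Python; the handler functions are hypothetical stand-ins, not a real API:

    def handle_ipv4(payload: bytes) -> None:
        print("IPv4 packet,", len(payload), "bytes")

    def handle_arp(payload: bytes) -> None:
        print("ARP packet,", len(payload), "bytes")

    # Map EtherType values to the handler for that payload protocol.
    DISPATCH = {0x0800: handle_ipv4, 0x0806: handle_arp}

    def demux(ethertype: int, payload: bytes) -> None:
        handler = DISPATCH.get(ethertype)
        if handler is None:
            print(f"Unknown EtherType 0x{ethertype:04x}, dropping frame")
        else:
            handler(payload)

    demux(0x0806, b"\x00" * 28)   # prints: ARP packet, 28 bytes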
Multicast and Broadcast Frames:
• Unicast: A frame sent to a specific device.
• Multicast: Sent to a group of devices (identified by a multicast MAC address).
• Broadcast: Sent to all devices on the network (identified by a broadcast MAC address).
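The category of a frame can be read directly from the destination MAC address: the broadcast address is all ones, and a multicast address has the least-significant bit of its first octet set. A quick illustrative check in Python:

    def frame_kind(dst_mac: bytes) -> str:
        # Broadcast: all 48 bits set; multicast: low bit of the first octet is 1.
        if dst_mac == b"\xff" * 6:
            return "broadcast"
        if dst_mac[0] & 0x01:
            return "multicast"
        return "unicast"

    print(frame_kind(bytes.fromhex("ffffffffffff")))  # broadcast
    print(frame_kind(bytes.fromhex("01005e0000fb")))  # multicast (IPv4 multicast MAC prefix)
    print(frame_kind(bytes.fromhex("0a1b2c3d4e5f")))  # unicast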
Collision Detection:
• In half-duplex Ethernet (where data can only travel in one direction at a time),
collisions can occur if two devices try to send data simultaneously. Ethernet uses Carrier
Sense Multiple Access with Collision Detection (CSMA/CD) to detect and resolve
these collisions by retransmitting data after a random backoff time.
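The random wait is classically implemented as binary exponential backoff: after the n-th collision the station waits a random number of slot times between 0 and 2^n - 1 (with the exponent capped at 10, and the frame abandoned after 16 attempts). A simplified Python sketch of that logic; the transmit, channel_busy, and collided callbacks are placeholders standing in for the network interface, not a real driver API:

    import random
    import time

    SLOT_TIME = 51.2e-6      # 51.2 microseconds, the classic 10 Mbps slot time
    MAX_ATTEMPTS = 16

    def send_with_csma_cd(transmit, channel_busy, collided):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            while channel_busy():            # carrier sense: wait until the medium is idle
                time.sleep(SLOT_TIME)
            transmit()
            if not collided():               # no collision detected: the frame is out
                return True
            k = min(attempt, 10)             # backoff exponent is capped at 10
            wait_slots = random.randint(0, 2 ** k - 1)
            time.sleep(wait_slots * SLOT_TIME)
        return False                         # give up after 16 failed attempts

    ok = send_with_csma_cd(lambda: None,
                           channel_busy=lambda: False,
                           collided=lambda: random.random() < 0.3)
    print("frame delivered" if ok else "gave up after 16 attempts")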
Switch: Forwarding and Filtering
A switch is a device that operates at the Data Link layer (Layer 2) and is responsible for
forwarding and filtering data frames based on MAC addresses. Let's break down how
forwarding and filtering work:
Forwarding:
Forwarding refers to the process where the switch sends an incoming data frame to the
appropriate output port based on the destination MAC address. Here's how it works:
1. MAC Address Learning:
o When a switch receives a frame, it examines the source MAC address of the
frame.
o The switch then records this source MAC address along with the port it was
received on in a MAC address table (or forwarding table).
o This table is a list of known MAC addresses and their associated ports. The
switch uses this table to make forwarding decisions.
2. Forwarding Frames:
o If the destination MAC address is found in the MAC address table, the switch
will forward the frame only to the corresponding port where the destination
device is connected.
o If the destination MAC address is not found, the switch floods the frame to all
ports (except the port it was received on). This allows the destination device to
respond and helps the switch learn the destination MAC address, which will be
added to the table for future reference.
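The learning and forwarding behaviour described above can be pictured as a dictionary that maps MAC addresses to ports. A minimal Python sketch (port numbers and MAC strings are simplified placeholders); it also shows the filtering case discussed below, where a frame destined for the port it arrived on is simply dropped:

    class LearningSwitch:
        def __init__(self, num_ports: int):
            self.num_ports = num_ports
            self.mac_table = {}                        # MAC address -> port number

        def handle_frame(self, in_port: int, src_mac: str, dst_mac: str):
            self.mac_table[src_mac] = in_port          # learn where the sender is attached
            out_port = self.mac_table.get(dst_mac)
            if out_port is None:
                # Unknown destination: flood to every port except the one it arrived on.
                return [p for p in range(self.num_ports) if p != in_port]
            if out_port == in_port:
                return []                              # filter: destination is on the same segment
            return [out_port]                          # forward only to the known port

    sw = LearningSwitch(4)
    print(sw.handle_frame(0, "aa:aa", "bb:bb"))   # unknown destination -> flood to ports 1, 2, 3
    print(sw.handle_frame(1, "bb:bb", "aa:aa"))   # table now knows aa:aa -> forward to port 0 only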
Filtering:
Filtering refers to the process where the switch blocks or prevents frames from being forwarded
to certain ports, based on specific conditions. Here's how it works:
1. MAC Address Filtering:
o The switch checks the destination MAC address against its MAC address table. If the table shows that the destination is reachable through the same port on which the frame arrived, the frame does not need to be forwarded and is discarded (filtered). If the address is unknown, the frame is flooded instead.
2. Preventing Unnecessary Traffic:
o Filtering helps to reduce network congestion by ensuring that frames are only
forwarded to the appropriate ports. It prevents unnecessary frames from being
sent to ports where they are not needed, thus improving network efficiency.
• Forwarding is the process of sending data frames to the correct destination based on the
MAC address.
• Filtering is the process of preventing frames from being unnecessarily forwarded to
certain ports to avoid congestion and ensure that frames only go to devices that need
them.
In summary:
• Advantages of link-layer switches include transparency, reduced collisions, high
throughput, full-duplex communication, VLAN support, and QoS.
• Drawbacks include broadcast traffic, scalability issues, security concerns, network
management complexity, and physical limitations.
VLAN: A Virtual Local Area Network (VLAN) is a logical grouping of devices within a larger
physical network that allows for better management, security, and efficiency. VLANs enable
network administrators to segment networks into smaller, isolated sections, even if the devices
are physically connected to the same switch.
How VLANs address the drawbacks of link-layer switches:
Reduced Broadcast Traffic:
● Solution: VLANs create separate logical networks within the same physical switch. This segmentation reduces the size of broadcast domains, limiting broadcast traffic and enhancing overall performance.
Improved Scalability:
● Solution: By dividing the network into VLANs, organizations can manage growth more
effectively. Each VLAN can grow independently, reducing the risk of broadcast storms.
Enhanced Security:
● Solution: VLANs can isolate sensitive data and resources. By grouping users based on
roles or departments, organizations can restrict access to only those who need it,
improving security.
Easier Policy Enforcement:
● Solution: VLANs allow for easier implementation of network policies. For example, Quality of Service (QoS) settings can be applied to specific VLANs to prioritize traffic for critical applications.
Greater Flexibility:
● Solution: VLANs enable the dynamic grouping of devices, regardless of their physical location. This flexibility allows for more efficient use of network resources and simplifies the reconfiguration of network segments.
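Extending the earlier learning-switch sketch, VLAN segmentation can be modelled by confining learning and flooding to ports assigned to the same VLAN, which is how broadcasts are kept inside their VLAN. The port-to-VLAN assignments below are invented for illustration:

    class VlanSwitch:
        def __init__(self, port_vlan: dict):
            self.port_vlan = port_vlan        # port number -> VLAN ID
            self.mac_table = {}               # (VLAN ID, MAC) -> port number

        def handle_frame(self, in_port: int, src_mac: str, dst_mac: str):
            vlan = self.port_vlan[in_port]
            self.mac_table[(vlan, src_mac)] = in_port
            out_port = self.mac_table.get((vlan, dst_mac))
            if out_port is None:
                # Flood only within the same VLAN: broadcasts never cross VLAN boundaries.
                return [p for p, v in self.port_vlan.items() if v == vlan and p != in_port]
            return [out_port] if out_port != in_port else []

    sw = VlanSwitch({0: 10, 1: 10, 2: 20, 3: 20})   # ports 0-1 in VLAN 10, ports 2-3 in VLAN 20
    print(sw.handle_frame(0, "aa:aa", "ff:ff"))     # floods to port 1 only, never into VLAN 20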
MPLS: Multiprotocol Label Switching (MPLS) is a forwarding technique that directs packets through a network based on short labels rather than full IP routing lookups.
• Labels: Instead of using the traditional IP routing to determine the next hop, MPLS uses labels attached to packets. Each packet gets a label at the entry point (Label Edge Router, or LER), and based on that label, MPLS routers (Label Switch Routers, or LSRs) forward packets along pre-established paths, without needing to look at the IP address.
• Label Switching: The router uses these labels to make forwarding decisions. Each
MPLS-enabled router reads the label, performs a lookup in a forwarding table, and
switches the packet to the next router or device in the MPLS network.
Key Features of MPLS:
1. Label Switching: As mentioned, the MPLS process uses labels instead of the traditional routing based on IP addresses.
2. Traffic Engineering: MPLS allows for the explicit control of traffic paths, which helps
in avoiding congestion, improving network performance, and managing traffic flows
more efficiently.
3. Quality of Service (QoS): MPLS supports prioritization of different types of traffic,
such as voice, video, and data, ensuring that more important traffic is delivered with low
latency and high reliability.
4. Multiprotocol: It can carry a wide variety of protocols, such as IP, Ethernet, ATM, and
Frame Relay. This makes it a versatile technology that can support different types of
traffic.
5. Scalability: MPLS is highly scalable and can support large and complex networks by
managing how data is routed and ensuring it follows the best possible paths.
How MPLS Works:
1. Label Assignment: When a packet enters an MPLS network, it is assigned a label at the entry point (Label Edge Router). This label is based on the packet's destination and the desired path.
2. Label Forwarding: Each intermediate router (Label Switch Router) forwards the packet
based on its label, without needing to check the IP header. The routers perform label
lookups, and the packet is forwarded to the next hop in the MPLS network.
3. Label Removal: When the packet reaches its destination (Label Edge Router), the label
is removed, and the packet is forwarded to its final destination based on its original IP
address.
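These three steps can be sketched as a chain of small label tables: the ingress LER pushes a label chosen from the destination prefix, each LSR swaps the label according to its own table, and the egress LER pops it and resumes ordinary IP forwarding. The router names, labels, and prefixes below are made up for illustration:

    # Per-router label tables: incoming label -> (action, outgoing label, next hop).
    LSR_TABLES = {
        "LSR-1": {100: ("swap", 200, "LSR-2")},
        "LSR-2": {200: ("pop",  None, "LER-B")},     # egress behaviour simplified
    }
    INGRESS_MAP = {"10.0.2.0/24": (100, "LSR-1")}    # destination prefix -> (label, next hop)

    def forward(dest_prefix: str, payload: str):
        label, hop = INGRESS_MAP[dest_prefix]         # ingress LER pushes the first label
        print(f"LER-A: push label {label}, send to {hop}")
        while label is not None:
            action, new_label, next_hop = LSR_TABLES[hop][label]
            print(f"{hop}: {action} -> label {new_label}, next hop {next_hop}")
            label, hop = new_label, next_hop
        print(f"{hop}: label removed, deliver '{payload}' by normal IP routing")

    forward("10.0.2.0/24", "hello")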
Advantages of MPLS:
1. Improved Performance: MPLS can speed up data forwarding as routers don't need to
examine the entire packet header but only the label.
2. Traffic Engineering: Allows operators to control the path data takes, improving network
efficiency and load balancing.
3. Scalability: MPLS can handle large networks, and its ability to support multiple
protocols makes it adaptable to various configurations.
4. Support for VPNs: MPLS can be used to create Virtual Private Networks (VPNs),
ensuring secure communication between different locations of an organization.
5. Quality of Service (QoS): MPLS allows the prioritization of time-sensitive traffic, such
as voice or video, ensuring high-quality communication.
Disadvantages of MPLS:
2. Cost: MPLS infrastructure can be costly to deploy and maintain, requiring specialized
hardware and expertise.
Data Center: Data centers are specialized facilities designed to house computer systems and associated components, such as telecommunications and storage systems. They are essential for hosting large-scale applications and services. Key components of a data center include:
1. Hosts (Blades): The individual servers, typically blade servers, that run applications and store data.
2. Racks: Organize multiple hosts (blades) in a compact manner, allowing for efficient use of space and easy management of the servers. Each rack typically houses 20 to 40 blades.
3. Top of Rack (TOR) Switch: Connects all hosts within a rack and facilitates
communication between them. It also links the rack to other switches in the data center
and manages network traffic to and from the hosts.
4. Network Interface Card (NIC): Installed in each host, the NIC enables the host to
connect to the TOR switch and facilitates network communication.
5. Border Routers: Connect the data center network to the public Internet. They manage
the traffic flow between external clients and internal hosts, ensuring efficient data
exchange.
6. Data Center Network: The overall interconnection system that includes racks,
switches, and routers, designed to manage and optimize traffic between internal hosts and
external clients.
7. Cooling Systems: Maintain optimal temperature and humidity levels within the data
center to prevent overheating and ensure the reliable operation of all equipment.
8. Power Supply Units (PSUs): Provide uninterrupted power to the data center
equipment, often including backup systems like uninterruptible power supplies (UPS)
and generators to maintain operation during outages.
9. Storage Systems: Offer additional data storage solutions, including disk drives and
storage area networks (SANs), for managing and backing up data efficiently.