Computer Networks


Unit 1: DATA COMMUNICATION COMPONENTS

Introduction: Data Communications, Networks, Network Types. Network Models: Protocol Layering, TCP/IP Protocol Suite, OSI Model. Introduction to Physical Layer: Data and Signals, Digital Transmission. Bandwidth Utilization: Multiplexing and Spectrum Spreading. Switching: Introduction, Circuit Switched Networks, Packet Switching.

Introduction:

Data Communications:

In data communications, data are generally defined as information that is stored in digital
form. Data communications is the process of transferring digital information between two or
more points. Information is defined as knowledge or intelligence. Data communications
can be summarized as the transmission, reception, and processing of digital information. For
data communications to occur, the communicating devices must be part of a communication
system made up of a combination of hardware (physical equipment) and software (programs).
The effectiveness of a data communications system depends on four fundamental
characteristics: delivery, accuracy, timeliness, and jitter.

1. Delivery: The system must deliver data to the correct destination. Data must be received by
the intended device or user and only by that device or user.

2. Accuracy: The system must deliver the data accurately. Data that have been altered in
transmission and left uncorrected are unusable.

3.Timeliness: The system must deliver data in a timely manner. Data delivered late are
useless. In the case of video and audio, timely delivery means delivering data as they are
produced, in the same order that they are produced, and without significant delay. This kind
of delivery is called real-time transmission.

4. Jitter: Jitter refers to the variation in the packet arrival time. It is the uneven delay in the
delivery of audio or video packets. For example, let us assume that video packets are sent
every 30 ms. If some of the packets arrive with 30-ms delay and others with 40-ms delay, an
uneven quality in the video is the result.

A data communications system has five components:

1. Message: The message is the information (data) to be communicated. Popular forms of
information include text, numbers, pictures, audio, and video.

2. Sender: The sender is the device that sends the data message. It can be a computer,
workstation, telephone handset, video camera, and so on.

3. Receiver: The receiver is the device that receives the message. It can be a computer,
workstation, telephone handset, television, and so on.

4. Transmission medium: The transmission medium is the physical path by which a
message travels from sender to receiver. Some examples of transmission media include
twisted-pair wire, coaxial cable, fiber-optic cable, and radio waves.

5. Protocol: A protocol is a set of rules that govern data communications. It represents an
agreement between the communicating devices.

Figure 1.1: Five components of data communication

Types of Data Communication:


1. Point-to-Point Communication:
 Involves communication between two specific devices.
2. Multipoint Communication:
 Involves communication between one sender and multiple receivers or between
multiple senders and one receiver.
3. Simplex, Half-Duplex, and Full-Duplex Communication:
 Simplex: Communication is unidirectional (one-way), with data flowing only from
the sender to the receiver.
 Half-Duplex: Communication is bidirectional, but not simultaneous. Devices take
turns transmitting and receiving.
 Full-Duplex: Communication is bidirectional and simultaneous, allowing for real-
time interaction.

Communication Models:

1. Simplex Model:
 Unidirectional communication from sender to receiver.
2. Half-Duplex Model:
 Bidirectional communication, but not simultaneously.
3. Full-Duplex Model:
 Bidirectional communication with simultaneous data exchange.
Challenges in Data Communication:
1. Noise:
 Interference or disturbances that can affect the quality of the transmitted data.
2. Attenuation:
 The loss of signal strength as it travels through a medium, leading to a decrease in
signal quality.
3. Delay:
 The time taken for data to travel from the source to the destination, which can affect
real-time applications.
Networks, Network Types

What is a Network?

A network is a collection of interconnected devices and systems that are capable of sharing
and exchanging data. Networks enable communication and collaboration among devices,
facilitating the efficient transfer of information.
Types of Networks:

1. LAN (Local Area Network):

 A network limited to a small geographic area, such as within a single building or campus.
 High data transfer rates and low latency.
 Commonly used in homes, offices, and educational institutions.

2. WAN (Wide Area Network):

 Spans a large geographic area, connecting LANs across cities, countries, or continents.
 Utilizes public and private communication links.
 Slower data transfer rates compared to LANs.

3. MAN (Metropolitan Area Network):

 Covers a larger geographic area than a LAN but is smaller than a WAN, typically
within a city.
 Connects multiple LANs within the same metropolitan area.

4. PAN (Personal Area Network):

 A network for personal devices, typically within the range of an individual person.
 Examples include Bluetooth connections between devices.

5. CAN (Campus Area Network):

 Connects multiple LANs within a specific academic or business campus.
 Enables efficient communication and resource sharing within the campus.

6. SAN (Storage Area Network):

 A specialized network designed for high-speed data access to storage devices.
 Commonly used for storage management in large enterprises.

7. VPN (Virtual Private Network):

 A secure network created over the internet, allowing users to access a private
network remotely.
 Ensures encrypted communication for secure data transfer.

Network Topologies:
1. Bus Topology:

 All devices share a single communication line.

2. Star Topology:

 All devices are connected to a central hub or switch.

3. Ring Topology:

 Devices are connected in a circular fashion.

4. Mesh Topology:
 Every device is connected to every other device in the network.

5. Hybrid Topology:
 A combination of two or more different topologies.
Network Models:

This section covers the following topics:


1. Protocol layering
2. TCP/IP protocol suite
3. OSI Model

Protocol Layering

1. Physical Layer (Layer 1):

Function:
 Deals with the physical connection between devices.
 Specifies the characteristics of the hardware, such as cables, connectors, and
transmission rates.
 Examples:
 Ethernet, USB, fiber optics.

2. Data Link Layer (Layer 2):

Function:

 Responsible for the reliable transmission of data frames between devices on the
same network.
 Manages access to the physical medium.
 Examples:
 Ethernet, Wi-Fi, PPP (Point-to-Point Protocol).

3. Network Layer (Layer 3):

 Function:
 Handles routing and forwarding of data packets between different networks.
 Logical addressing, such as IP addresses, occurs at this layer.
 Examples:
 IP (Internet Protocol), ICMP (Internet Control Message Protocol).

4. Transport Layer (Layer 4):

 Function:
 Ensures end-to-end communication, reliability, and error recovery.
 Segmentation and reassembly of data.
 Examples:
 TCP (Transmission Control Protocol), UDP (User Datagram Protocol).

5. Session Layer (Layer 5):

 Function:
 Manages sessions or connections between applications on different devices.
 Dialog control, synchronization, and data exchange.
 Examples:
 NetBIOS (Network Basic Input/Output System).
6. Presentation Layer (Layer 6):

 Function:
 Translates data between the application layer and the lower layers.
 Handles data encryption, compression, and formatting.
 Examples:
 JPEG, GIF, SSL/TLS.

7. Application Layer (Layer 7):

 Function:
 Provides network services directly to end-users or applications.
 Interface between the application and the network.
 Examples:
 HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), DNS (Domain
Name System).

i) Protocol Layering:
A communication subsystem is a complex piece of hardware and software. Early attempts
at implementing the software for such subsystems were based on a single, complex,
unstructured program with many interacting components. The resultant software was very
difficult to test and modify. To overcome this problem, ISO developed a layered
approach. In a layered approach, the networking task is divided into several layers, and each
layer is assigned a particular task. Therefore, we can say that networking tasks depend upon
the layers.

Layered Architecture

o The main aim of the layered architecture is to divide the design into small pieces.
o Each lower layer adds its services to the higher layer to provide a full set of services
to manage communications and run the applications.
o It provides modularity and clear interfaces, i.e., provides interaction between
subsystems.
o It ensures the independence between layers by providing the services from lower to
higher layer without defining how the services are implemented. Therefore, any
modification in a layer will not affect the other layers.
o The number of layers and the functions and contents of each layer vary from network to
network. However, the purpose of each layer is to provide a service from a lower to a
higher layer while hiding from those layers the details of how the services are
implemented.
o The basic elements of layered architecture are services, protocols, and interfaces.
o Service: It is a set of actions that a layer provides to the higher layer.
o Protocol: It defines a set of rules that a layer uses to exchange information
with its peer entity. These rules mainly concern both the content and the order
of the messages exchanged.
o Interface: It is the way through which a message is transferred from one layer
to another layer.
o In a layer-n architecture, layer n on one machine communicates with layer n on
another machine, and the rules used in this conversation are known as the layer-n
protocol.

Let's take an example of the five-layered architecture.

o In a layered architecture, no data is transferred directly from layer n of one machine to
layer n of another machine. Instead, each layer passes the data to the layer
immediately below it, until the lowest layer is reached.
o Below layer 1 is the physical medium through which the actual communication takes
place.
o In a layered architecture, unmanageable tasks are divided into several small and
manageable tasks.
o The data is passed from the upper layer to the lower layer through an interface. A layered
architecture provides a clean-cut interface so that minimum information is shared
among different layers. It also ensures that the implementation of one layer can be
easily replaced by another implementation.
o A set of layers and protocols is known as network architecture.
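To make protocol layering concrete, the following minimal Python sketch (not from the original text) shows one way each layer could wrap the data from the layer above with its own header before handing it down, and how the receiver strips the headers in reverse order. The header strings (TCP_HDR, IP_HDR, ETH_HDR) and the five-layer structure are purely illustrative; the sketch assumes Python 3.9+ for removeprefix/removesuffix.

# Minimal sketch of protocol layering: each layer wraps the payload from the
# layer above with its own (illustrative) header before passing it down.

def encapsulate(message: str) -> str:
    """Pass a message down a hypothetical five-layer stack."""
    segment = "TCP_HDR|" + message               # transport layer adds its header
    packet = "IP_HDR|" + segment                 # network layer adds its header
    frame = "ETH_HDR|" + packet + "|ETH_TRL"     # data link layer adds header and trailer
    bits = frame.encode()                        # physical layer carries the raw bits
    return bits.decode()

def decapsulate(received: str) -> str:
    """Reverse the process at the receiver: each layer strips its own header."""
    frame = received
    packet = frame.removeprefix("ETH_HDR|").removesuffix("|ETH_TRL")
    segment = packet.removeprefix("IP_HDR|")
    message = segment.removeprefix("TCP_HDR|")
    return message

if __name__ == "__main__":
    wire = encapsulate("hello")
    print(wire)                  # ETH_HDR|IP_HDR|TCP_HDR|hello|ETH_TRL
    print(decapsulate(wire))     # hello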

FUNCTIONS OF TCP/IP LAYERS:


NETWORK ACCESS LAYER
o The network access layer is the lowest layer of the TCP/IP model.
o It is the combination of the Physical layer and Data Link layer defined in the OSI
reference model.
o It defines how the data should be sent physically through the network.
o This layer is mainly responsible for the transmission of data between two devices
on the same network.
o The functions carried out by this layer are encapsulating the IP datagram into frames
transmitted by the network and mapping IP addresses onto physical addresses.
o The protocols used by this layer include Ethernet, Token Ring, FDDI, X.25, and Frame Relay.

INTERNET LAYER
o The internet layer is the second layer of the TCP/IP model.
o The internet layer is also known as the network layer.
o The main responsibility of the internet layer is to send packets from any network and
have them arrive at the destination irrespective of the route they take.

THE FOLLOWING PROTOCOLS ARE USED IN THIS LAYER:

IP Protocol: IP protocol is used in this layer, and it is the most significant part of the entire
TCP/IP suite.

Following are the responsibilities of this protocol:

o IP Addressing: This protocol implements logical host addresses known as IP addresses.
The IP addresses are used by the internet and higher layers to identify the device and
to provide internetwork routing.
o Host-to-host communication: It determines the path through which the data is to be
transmitted.
o Data Encapsulation and Formatting: The IP protocol accepts data from the transport
layer protocol and encapsulates it into a message known as an IP datagram for
transmission across the network.
o Fragmentation and Reassembly: The limit imposed on the size of the IP datagram by the data
link layer protocol is known as the Maximum Transmission Unit (MTU). If the size of the IP
datagram is greater than the MTU, the IP protocol splits the datagram into
smaller units so that they can travel over the local network. Fragmentation can be
done by the sender or by an intermediate router. At the receiver side, all the fragments are
reassembled to form the original message.
o Routing: When an IP datagram is delivered within the same local network, such as a LAN,
it is known as direct delivery. When the source and destination are on distant
networks, the IP datagram is sent indirectly. This is accomplished by routing
the IP datagram through various devices such as routers.

ARP Protocol

o ARP stands for Address Resolution Protocol.


o ARP is a network layer protocol which is used to find the physical address from the IP
address.
o Two terms are mainly associated with the ARP protocol:

o ARP request: When a sender wants to know the physical address of a device,
it broadcasts an ARP request to the network.
o ARP reply: Every device attached to the network accepts and processes the ARP
request, but only the intended recipient recognizes its own IP address and sends
back its physical address in the form of an ARP reply. The sender then adds the
physical address both to its cache memory and to the datagram header.

ICMP Protocol

o ICMP stands for Internet Control Message Protocol.

o It is a mechanism used by hosts or routers to send notifications regarding
datagram problems back to the sender.
o A datagram travels from router to router until it reaches its destination. If a router is
unable to route the data because of unusual conditions such as disabled links, a
failed device, or network congestion, then the ICMP protocol is used to inform the
sender that the datagram is undeliverable.
o An ICMP protocol mainly uses two terms:
o ICMP Test: ICMP Test is used to test whether the destination is reachable or not.

o ICMP Reply: ICMP Reply is used to check whether the destination device is
responding or not.
o The core responsibility of the ICMP protocol is to report the problems, not correct
them. The responsibility of the correction lies with the sender.
o ICMP can send the messages only to the source, but not to the intermediate routers
because the IP datagram carries the addresses of the source and destination but not of
the router that it is passed to.

TRANSPORT LAYER

The transport layer is responsible for the reliability, flow control, and error correction of data
being sent over the network.
The two protocols used in the transport layer are the User Datagram Protocol and the
Transmission Control Protocol.

User Datagram Protocol (UDP)

o It provides connectionless service and end-to-end delivery of transmission.
o It is an unreliable protocol, as it detects that an error has occurred but does not specify the error.
o UDP discovers the error, and the ICMP protocol reports to the sender that the user
datagram has been damaged.

UDP consists of the following fields:


Source port address: The source port address is the address of the application program that has
created the message.
Destination port address: The destination port address is the address of the application program
that receives the message.
Total length: It defines the total length of the user datagram in bytes.
Checksum: The checksum is a 16-bit field used in error detection (a sketch of this computation follows below).

o UDP does not specify which packet is lost. UDP contains only checksum; it does not
contain any ID of a data segment.
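Because the checksum field above is described as a 16-bit error-detection value, here is a small, hedged Python sketch of the standard Internet checksum (a one's-complement sum of 16-bit words), which is the algorithm UDP uses; the sample bytes at the end are arbitrary and only for illustration.

def internet_checksum(data: bytes) -> int:
    """Compute the 16-bit one's-complement Internet checksum over data."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        word = (data[i] << 8) | data[i + 1]        # combine two bytes into a 16-bit word
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the low 16 bits
    return ~total & 0xFFFF            # one's complement of the running sum

if __name__ == "__main__":
    sample = bytes([0x45, 0x00, 0x00, 0x3C, 0x1C, 0x46])  # arbitrary example bytes
    print(hex(internet_checksum(sample)))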

Transmission Control Protocol (TCP)

o It provides full transport layer services to applications.
o It creates a virtual circuit between the sender and receiver, and the circuit remains active for the
duration of the transmission.
o TCP is a reliable protocol, as it detects errors and retransmits damaged frames.
It ensures that all segments are received and acknowledged before the
transmission is considered complete and the virtual circuit is discarded.
o At the sending end, TCP divides the whole message into smaller units known as
segments, and each segment contains a sequence number which is required for reordering the
segments to form the original message.
o At the receiving end, TCP collects all the segments and reorders them based on
sequence numbers.
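As a rough illustration of the reordering just described, the following sketch sorts out-of-order (sequence number, data) pairs back into the original message; the segment contents and numbering are invented for the example.

# Sketch of how a receiver could reorder TCP-style segments using sequence numbers.

def reassemble(segments):
    """Sort (sequence_number, data) pairs and join them into the original message."""
    return "".join(data for _, data in sorted(segments))

if __name__ == "__main__":
    # Segments arrive out of order after taking different routes.
    arrived = [(3, "net"), (1, "Da"), (2, "ta "), (4, "works")]
    print(reassemble(arrived))  # "Data networks"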

APPLICATION LAYER
o An application layer is the topmost layer in the TCP/IP model.
o It is responsible for handling high-level protocols, issues of representation.
o This layer allows the user to interact with the application.
o When one application layer protocol wants to communicate with another application
layer, it forwards its data to the transport layer.
o There is some ambiguity at the application layer: not every application can be
placed inside it, only those that interact with the communication
system. For example, a text editor is not considered part of the application layer, whereas a web
browser is, because it uses the HTTP protocol to interact with the network and HTTP is an
application layer protocol.

THE FOLLOWING ARE THE MAIN PROTOCOLS USED IN THE APPLICATION LAYER:

o HTTP: HTTP stands for Hypertext Transfer Protocol. This protocol allows us to
access data over the World Wide Web. It transfers data in the form of plain text,
audio, and video. It is known as the Hypertext Transfer Protocol because of its efficiency
in a hypertext environment where there are rapid jumps from one document to another.
o SNMP: SNMP stands for Simple Network Management Protocol. It is a framework
used for managing the devices on the internet by using the TCP/IP protocol suite.
o SMTP: SMTP stands for Simple mail transfer protocol. The TCP/IP protocol that
supports the e-mail is known as a Simple mail transfer protocol. This protocol is used
to send the data to another e-mail address.
o DNS: DNS stands for Domain Name System. An IP address is used to identify the
connection of a host to the internet uniquely. But, people prefer to use the names
instead of addresses. Therefore, the system that maps the name to the address is
known as Domain Name System.
o TELNET: It is an abbreviation for Terminal Network. It establishes the connection
between the local computer and remote computer in such a way that the local terminal
appears to be a terminal at the remote system.
o FTP: FTP stands for File Transfer Protocol. FTP is a standard internet protocol used
for transmitting the files from one computer to another computer.
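As a concrete illustration of an application layer protocol in use, the sketch below issues a plain HTTP GET request with Python's standard http.client module; example.com is just a placeholder host, and the request assumes network access is available.

import http.client

# Minimal HTTP GET using the application layer protocol described above.
# "example.com" is only an illustrative host.
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("GET", "/")            # the request line and headers are built for us
response = conn.getresponse()       # parse the HTTP status line and headers
print(response.status, response.reason)
print(response.read(200))           # first 200 bytes of the response body
conn.close()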
OSI Model

o OSI stands for Open Systems Interconnection. It is a reference model that describes
how information from a software application in one computer moves through a
physical medium to a software application in another computer.
o OSI consists of seven layers, and each layer performs a particular network function.
o OSI model was developed by the International Organization for Standardization (ISO)
in 1984, and it is now considered as an architectural model for the inter-computer
communications.
o OSI model divides the whole task into seven smaller and manageable tasks. Each
layer is assigned a particular task.
o Each layer is self-contained, so that task assigned to each layer can be performed
independently.

7 Layers of OSI Model

There are seven OSI layers, each with different functions. The seven layers are listed
below:

1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
1) Physical layer

o The main functionality of the physical layer is to transmit the individual bits from one
node to another node.
o It is the lowest layer of the OSI model.
o It establishes, maintains and deactivates the physical connection.
o It specifies the mechanical, electrical and procedural network interface specifications.

Functions of a Physical layer:

o Line Configuration: It defines the way in which two or more devices can be connected
physically.
o Data Transmission: It defines the transmission mode, whether it is simplex, half-
duplex or full-duplex, between the two devices on the network.
o Topology: It defines the way in which network devices are arranged.
o Signals: It determines the type of signal used for transmitting the information.
2) Data-Link Layer

o This layer is responsible for the error-free transfer of data frames.


o It defines the format of the data on the network.
o It provides a reliable and efficient communication between two or more devices.
o It is mainly responsible for the unique identification of each device that resides on a
local network.
o It contains two sub-layers:

Logical Link Control Layer

o It is responsible for transferring packets to the network layer of the receiver.
o It identifies the address of the network layer protocol from the header.
o It also provides flow control.

Media Access Control Layer

o The Media Access Control layer is a link between the Logical Link Control layer and the
network's physical layer.
o It is used for transferring the packets over the network.

Functions of the Data-link layer

o Framing: The data link layer translates the physical layer's raw bit stream into units
known as frames. The data link layer adds a header and trailer to the frame. The
header added to the frame contains the hardware destination and source
addresses.
o Physical Addressing: The data link layer adds a header to the frame that contains a
destination address. The frame is transmitted to the destination address mentioned in
the header.
o Flow Control: Flow control is a main functionality of the data link layer. It is the
technique through which a constant data rate is maintained on both sides so that
no data gets corrupted. It ensures that a transmitting station with a higher processing
speed, such as a server, does not overwhelm a receiving station with a lower processing
speed.
o Error Control: Error control is achieved by adding a calculated value, the CRC (Cyclic
Redundancy Check), which is placed in the data link layer's trailer and added to the
message frame before it is sent to the physical layer. If any error occurs, the
receiver requests retransmission of the corrupted frames (a small CRC sketch follows this list).
o Access Control: When two or more devices are connected to the same
communication channel, then the data link layer protocols are used to determine
which device has control over the link at a given time.
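To illustrate the CRC idea mentioned under error control, here is a minimal sketch of modulo-2 division with a small example generator polynomial (x^3 + x + 1); the data bits are illustrative only, and real links use longer standardized generators such as CRC-32.

def crc_remainder(data_bits: str, generator: str) -> str:
    """Compute the CRC remainder of data_bits (a string of '0'/'1')
    using modulo-2 division by the generator polynomial bits."""
    n = len(generator) - 1
    dividend = list(data_bits + "0" * n)          # append n zero bits
    for i in range(len(data_bits)):
        if dividend[i] == "1":                    # XOR the generator in when the leading bit is 1
            for j in range(len(generator)):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(generator[j]))
    return "".join(dividend[-n:])                 # the last n bits are the CRC

if __name__ == "__main__":
    data = "11010011101100"      # example frame bits (illustrative)
    gen = "1011"                 # generator polynomial x^3 + x + 1
    crc = crc_remainder(data, gen)
    print("CRC:", crc)                            # appended to the frame as the trailer
    # Receiver check: a frame followed by its CRC leaves a zero remainder.
    print(crc_remainder(data + crc, gen))         # "000" means no error detected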

3) Network Layer

o It is layer 3; it manages device addressing and tracks the location of devices on the
network.
o It determines the best path to move data from source to destination based on the
network conditions, the priority of service, and other factors.
o The network layer is responsible for routing and forwarding the packets.
o Routers are layer 3 devices; they are specified at this layer and used to provide
routing services within an internetwork.
o The protocols used to route network traffic are known as network layer protocols.
Examples of such protocols are IPv4 and IPv6.

Functions of Network Layer:

o Internetworking: Internetworking is the main responsibility of the network layer.
It provides a logical connection between different devices.
o Addressing: The network layer adds the source and destination addresses to the header
of the packet. Addressing is used to identify each device on the internet.
o Routing: Routing is the major component of the network layer, and it determines the
optimal path out of the multiple paths from source to destination.
o Packetizing: The network layer receives data from the upper layer and converts
it into packets. This process is known as packetizing. It is achieved by the Internet
Protocol (IP).

4) Transport Layer

o The transport layer (layer 4) ensures that messages are transmitted in the order in
which they are sent and that there is no duplication of data.
o The main responsibility of the transport layer is to transfer the data completely.
o It receives data from the upper layer and converts it into smaller units known
as segments.
o This layer can be termed an end-to-end layer as it provides a point-to-point
connection between source and destination to deliver the data reliably.

The two protocols used in this layer are:

Transmission Control Protocol

o It is a standard protocol that allows the systems to communicate over the internet.
o It establishes and maintains a connection between hosts.
o When data is sent over a TCP connection, the TCP protocol divides the data
into smaller units known as segments. Segments may travel over the internet using different
routes and may arrive out of order at the destination. The Transmission Control Protocol
reorders the segments into the correct order at the receiving end.

User Datagram Protocol

o The User Datagram Protocol is a transport layer protocol.
o It is an unreliable transport protocol: the receiver does not send any
acknowledgment when a packet is received, and the sender does not wait for any
acknowledgment. This makes the protocol unreliable.
Functions of Transport Layer:

o Service-point addressing: Computers run several programs simultaneously; for this
reason, data must be transmitted from source to destination not only from one
computer to another but also from one process to another. The transport
layer adds a header that contains an address known as the service-point address or port
address. The responsibility of the network layer is to transmit the data from one computer to
another computer, and the responsibility of the transport layer is to deliver the message to the
correct process.
o Segmentation and reassembly: When the transport layer receives the message from
the upper layer, it divides the message into multiple segments, and each segment is assigned
with a sequence number that uniquely identifies each segment. When the message has arrived
at the destination, then the transport layer reassembles the message based on their sequence
numbers.
o Connection control: Transport layer provides two services Connection-oriented
service and connectionless service. A connectionless service treats each segment as an
individual packet, and they all travel in different routes to reach the destination. A
connection-oriented service makes a connection with the transport layer at the destination
machine before delivering the packets. In connection-oriented service, all the packets travel
in the single route.
o Flow control: The transport layer is also responsible for flow control, but it is performed
end to end rather than across a single link.
o Error control: The transport layer is also responsible for error control. Error control
is performed end to end rather than across a single link. The sending transport layer ensures
that the message reaches the destination without any error.

5) Session Layer

o It is layer 5 in the OSI model.
o The session layer is used to establish, maintain, and synchronize the interaction
between communicating devices.
Functions of Session layer:

o Dialog control: The session layer acts as a dialog controller that creates a dialog between
two processes; in other words, it allows communication between two processes,
which can be either half-duplex or full-duplex.
o Synchronization: Session layer adds some checkpoints when transmitting the data in
a sequence. If some error occurs in the middle of the transmission of data, then the
transmission will take place again from the checkpoint. This process is known as
Synchronization and recovery.

6) Presentation Layer

o A Presentation layer is mainly concerned with the syntax and semantics of the
information exchanged between the two systems.
o It acts as a data translator for a network.
o This layer is a part of the operating system that converts the data from one
presentation format to another format.
o The Presentation layer is also known as the syntax layer.

Functions of Presentation layer:

o Translation: The processes in two systems exchange information in the form of
character strings, numbers, and so on. Different computers use different encoding
methods, so the presentation layer handles interoperability between the different
encoding methods. It converts the data from the sender-dependent format into a common
format and changes the common format into the receiver-dependent format at the
receiving end.
o Encryption: Encryption is needed to maintain privacy. Encryption is the process of
converting the sender's information into another form and sending the
resulting message over the network.
o Compression: Data compression is a process of compressing the data, i.e., it reduces
the number of bits to be transmitted. Data compression is very important in
multimedia such as text, audio, video.
7) Application Layer

o The application layer serves as a window for users and application processes to access
network services.
o It handles issues such as network transparency, resource allocation, etc.
o An application layer is not an application, but it performs the application layer
functions.
o This layer provides the network services to the end-users.

Functions of Application layer:

o File transfer, access, and management (FTAM): An application layer allows a user
to access the files in a remote computer, to retrieve the files from a computer and to
manage the files in a remote computer.
o Mail services: An application layer provides the facility for email forwarding and
storage.
o Directory services: The application layer provides distributed database sources and
access to global information about various objects and services.

Introduction to physical layer:

Data and Signals, Digital Transmission

Data or information can be stored in two ways: analog and digital. For a computer to use the
data, it must be in discrete digital form. Similar to data, signals can also be in analog or
digital form. To transmit data digitally, it needs to be first converted to digital form.

DIGITAL-TO-DIGITAL CONVERSION

This section explains how to convert digital data into digital signals. It can be done in two
ways, line coding and block coding. For all communications, line coding is necessary
whereas block coding is optional.

LINE CODING

The process of converting digital data into a digital signal is called line coding. Digital
data is found in binary format; it is represented (stored) internally as a series of 1s and 0s.
A digital signal is a discrete signal that represents digital data. There are three
types of line coding schemes available:

UNIPOLAR ENCODING

Unipolar encoding schemes use a single voltage level to represent data. In this case, to
represent binary 1, a high voltage is transmitted, and to represent 0, no voltage is transmitted. It
is also called Unipolar Non-Return-to-Zero, because there is no rest condition, i.e. the signal
always represents either 1 or 0.

POLAR ENCODING

Polar encoding schemes use multiple voltage levels to represent binary values. Polar
encoding is available in four types:

 Polar Non-Return to Zero (Polar NRZ)


It uses two different voltage levels to represent binary values. Generally, a positive
voltage represents 1 and a negative voltage represents 0. It is also NRZ because there is
no rest condition.
The NRZ scheme has two variants: NRZ-L and NRZ-I.
NRZ-L changes the voltage level when a different bit is encountered, whereas NRZ-I
changes the voltage when a 1 is encountered.
 RETURN TO ZERO (RZ)
The problem with NRZ is that the receiver cannot tell when one bit ends and the
next bit begins if the sender's and receiver's clocks are not synchronized.

RZ uses three voltage levels: positive voltage to represent 1, negative voltage to
represent 0, and zero voltage for none. The signal changes during bits, not between bits.
 MANCHESTER
This encoding scheme is a combination of RZ and NRZ-L. The bit time is divided into
two halves. The signal transitions in the middle of the bit and changes phase when a different bit
is encountered.
 DIFFERENTIAL MANCHESTER
This encoding scheme is a combination of RZ and NRZ-I. The signal also transitions in the middle
of the bit but changes phase only when a 1 is encountered.
BIPOLAR ENCODING

Bipolar encoding uses three voltage levels: positive, negative, and zero. Zero voltage
represents binary 0, and bit 1 is represented by alternating positive and negative voltages.
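The following sketch (not from the text) maps a bit string to voltage levels under two of the schemes described above, polar NRZ-L and Manchester; the +1/-1 levels and the particular Manchester transition convention ('1' as low-to-high) are illustrative assumptions, since conventions differ between standards.

def nrz_l(bits: str):
    """Polar NRZ-L: a constant level per bit, +1 for '1' and -1 for '0'."""
    return [1 if b == "1" else -1 for b in bits]

def manchester(bits: str):
    """Manchester: each bit becomes two half-bit levels with a transition in the
    middle of the bit (here '1' = low-to-high, '0' = high-to-low)."""
    out = []
    for b in bits:
        out += [-1, 1] if b == "1" else [1, -1]
    return out

if __name__ == "__main__":
    bits = "10110"
    print("NRZ-L     :", nrz_l(bits))
    print("Manchester:", manchester(bits))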
BLOCK CODING

To ensure the accuracy of the received data frame, redundant bits are used. For example, in even
parity, one parity bit is added to make the count of 1s in the frame even. In this way the original
number of bits is increased. This is called block coding.

Block coding is represented by slash notation, mB/nB, which means an m-bit block is substituted with
an n-bit block, where n > m. Block coding involves three steps:

 Division,
 Substitution
 Combination.

After block coding is done, it is line coded for transmission.
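As a minimal illustration of block coding with redundant bits, the sketch below appends an even-parity bit to a block and checks it at the receiver; the 7-bit block used is arbitrary.

def add_even_parity(block: str) -> str:
    """Append one parity bit so the total number of 1s in the block is even."""
    parity = block.count("1") % 2
    return block + str(parity)

def check_even_parity(block_with_parity: str) -> bool:
    """Return True if the received block still has an even number of 1s."""
    return block_with_parity.count("1") % 2 == 0

if __name__ == "__main__":
    data = "1011001"                        # 7-bit block, illustrative
    coded = add_even_parity(data)           # "10110010": count of 1s is now even
    print(coded, check_even_parity(coded))            # True: no error detected
    corrupted = "10110011"                  # one bit flipped in transit
    print(corrupted, check_even_parity(corrupted))    # False: error detected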

ANALOG-TO-DIGITAL CONVERSION

Microphones create analog voice and cameras create analog video, which are treated as
analog data. To transmit this analog data over digital signals, we need analog-to-digital
conversion.

Analog data is a continuous stream of data in the wave form whereas digital data is discrete.
To convert analog wave into digital data, we use Pulse Code Modulation (PCM).

PCM is one of the most commonly used methods to convert analog data into digital form. It
involves three steps:

 Sampling
 Quantization
 Encoding.
SAMPLING

The analog signal is sampled every T seconds, where T is the sampling interval. The most important
factor in sampling is the rate at which the analog signal is sampled. According to the Nyquist
theorem, the sampling rate must be at least twice the highest frequency of the signal.

QUANTIZATION

Sampling yields a discrete form of the continuous analog signal. Every discrete sample shows the
amplitude of the analog signal at that instant. Quantization is done between the
maximum amplitude value and the minimum amplitude value; it is an approximation
of the instantaneous analog value.

ENCODING

In encoding, each approximated value is then converted into binary format.
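Putting the three PCM steps together, here is a rough sketch that samples a sine wave, quantizes each sample to a fixed number of levels, and encodes each level as a binary code word; the 1 kHz tone, 8 kHz sampling rate, and 8 levels are illustrative choices that satisfy the Nyquist condition.

import math

def pcm(f_signal: float, f_sample: float, levels: int, duration: float):
    """Sketch of PCM: sample a sine wave, quantize each sample to a fixed
    number of levels, and encode each level as a binary code word."""
    assert f_sample >= 2 * f_signal, "Nyquist: sample at >= twice the highest frequency"
    n_samples = int(duration * f_sample)
    bits_per_sample = math.ceil(math.log2(levels))
    codes = []
    for n in range(n_samples):
        sample = math.sin(2 * math.pi * f_signal * n / f_sample)    # sampling
        # Quantization: map the amplitude range [-1, 1] to integer levels 0..levels-1.
        level = min(levels - 1, int((sample + 1) / 2 * levels))
        codes.append(format(level, f"0{bits_per_sample}b"))          # encoding
    return codes

if __name__ == "__main__":
    # A 1 kHz tone sampled at 8 kHz with 8 quantization levels (3 bits per sample).
    print(pcm(f_signal=1000, f_sample=8000, levels=8, duration=0.001))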

TRANSMISSION MODES

The transmission mode decides how data is transmitted between two computers. Binary
data in the form of 1s and 0s can be sent in two different modes: parallel and serial.
PARALLEL TRANSMISSION

The binary bits are organized into groups of fixed length. Both sender and receiver are
connected in parallel with an equal number of data lines. Both computers distinguish
between high-order and low-order data lines. The sender sends all the bits at once on all
lines. Because the number of data lines equals the number of bits in a group or data frame, a
complete group of bits (data frame) is sent in one go. The advantage of parallel transmission is
high speed; the disadvantage is the cost of wires, as it is equal to the number of bits sent in
parallel.

SERIAL TRANSMISSION

In serial transmission, bits are sent one after another in a queue. Serial transmission
requires only one communication channel.

Serial transmission can be either asynchronous or synchronous.

ASYNCHRONOUS SERIAL TRANSMISSION

It is named so because timing is of no importance. Data bits have a specific pattern that
helps the receiver recognize the start and end data bits. For example, a 0 is prefixed to every
data byte and one or more 1s are added at the end.

Two continuous data-frames (bytes) may have a gap between them.

SYNCHRONOUS SERIAL TRANSMISSION

Timing is important in synchronous transmission, as there is no mechanism for
recognizing start and end data bits. There is no pattern or prefix/suffix method. Data bits are
sent in burst mode without maintaining a gap between bytes (8 bits). A single burst of data bits
may contain a number of bytes. Therefore, timing becomes very important.
It is up to the receiver to recognize and separate the bits into bytes. The advantage of synchronous
transmission is high speed, and it has no overhead of extra header and footer bits as in
asynchronous transmission.

Bandwidth Utilization:

Multiplexing:

What is Multiplexing?

Multiplexing is a technique used to combine and send multiple data streams over a single
medium. The process of combining the data streams is known as multiplexing, and the hardware
used for multiplexing is known as a multiplexer.

Multiplexing is achieved by using a device called Multiplexer (MUX) that combines n input
lines to generate a single output line. Multiplexing follows many-to-one, i.e., n input lines
and one output line.

Demultiplexing is achieved by using a device called a Demultiplexer (DEMUX) available at
the receiving end. The DEMUX separates a signal into its component signals (one input and n
outputs). Therefore, we can say that demultiplexing follows the one-to-many approach.

Why Multiplexing?

o The transmission medium is used to send the signal from sender to receiver. The
medium can only have one signal at a time.
o If multiple signals have to share one medium, then the medium must be divided in
such a way that each signal is given some portion of the available bandwidth. For
example, if there are 10 signals and the bandwidth of the medium is 100 units, then
each signal gets 10 units.
o When multiple signals share the common medium, there is a possibility of collision.
Multiplexing concept is used to avoid such collision.
o Transmission services are very expensive.

Concept of Multiplexing

o The n input lines are fed into a multiplexer, and the multiplexer combines the
signals to form a composite signal.
o The composite signal is passed through a demultiplexer, and the demultiplexer separates the
signal into its component signals and transfers them to their respective destinations.

Advantages of Multiplexing:

o More than one signal can be sent over a single medium.


o The bandwidth of a medium can be utilized effectively.

Multiplexing Techniques

Multiplexing techniques can be classified as follows:

Frequency-division Multiplexing (FDM)

o It is an analog technique.
o Frequency Division Multiplexing is a technique in which the available bandwidth of
a single transmission medium is subdivided into several channels.

o In the above diagram, a single transmission medium is subdivided into several
frequency channels, and each frequency channel is given to a different device. Device
1, for example, has a frequency channel ranging from 1 to 5.
o The input signals are translated into frequency bands by using modulation techniques,
and they are combined by a multiplexer to form a composite signal.
o The main aim of FDM is to subdivide the available bandwidth into different
frequency channels and allocate them to different devices.
o Using the modulation technique, the input signals are translated into frequency
bands and then combined to form a composite signal.
o The carriers used for modulating the signals are known as sub-carriers.
They are represented as f1, f2, ..., fn.
o FDM is mainly used in radio broadcasts and TV networks.
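As a simplified illustration of FDM, the sketch below modulates three constant message amplitudes onto separate sub-carrier frequencies and sums them into one composite signal; in a real system each message would itself vary with time, and the receiver would use band-pass filters to separate the channels. All numbers are illustrative.

import math

def fdm_composite(messages, subcarriers, f_sample=10000.0, duration=0.01):
    """Sketch of FDM: amplitude-modulate each (constant) message value onto its
    own sub-carrier frequency and sum the results into one composite signal."""
    n = int(f_sample * duration)
    composite = []
    for k in range(n):
        t = k / f_sample
        sample = sum(m * math.cos(2 * math.pi * fc * t)   # each channel in its own band
                     for m, fc in zip(messages, subcarriers))
        composite.append(sample)
    return composite

if __name__ == "__main__":
    # Three devices share the medium on sub-carriers of 1 kHz, 2 kHz and 3 kHz.
    signal = fdm_composite(messages=[0.5, 1.0, 0.8], subcarriers=[1000, 2000, 3000])
    print(len(signal), signal[:5])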

Advantages of FDM:

o FDM is suitable for analog signals.
o The FDM process is very simple and modulation is easy.
o A large number of signals can be sent through FDM simultaneously.
o It does not require any synchronization between sender and receiver.

Disadvantages Of FDM:

o The FDM technique can be used only when low-speed channels are required.
o It suffers from the problem of crosstalk.
o A large number of modulators are required.
o It requires a high-bandwidth channel.

Applications Of FDM:

o FDM is commonly used in TV networks.


o It is used in FM and AM broadcasting. Each FM radio station has different
frequencies, and they are multiplexed to form a composite signal. The multiplexed
signal is transmitted in the air.

Wavelength Division Multiplexing (WDM)


o Wavelength Division Multiplexing is the same as FDM, except that the optical signals are
transmitted through a fibre optic cable.
o WDM is used on fibre optics to increase the capacity of a single fibre.
o It is used to utilize the high data rate capability of fibre optic cable.
o It is an analog multiplexing technique.
o Optical signals from different sources are combined to form a wider band of light with
the help of a multiplexer.
o At the receiving end, a demultiplexer separates the signals to transmit them to their
respective destinations.
o Multiplexing and demultiplexing can be achieved by using a prism.
o A prism can perform the role of a multiplexer by combining various optical signals to
form a composite signal, and the composite signal is transmitted through a fibre
optic cable.
o The prism also performs the reverse operation, i.e., demultiplexing the signal.

Time Division Multiplexing

o It is a digital technique.
o In the Frequency Division Multiplexing technique, all signals operate at the same time
with different frequencies, but in the Time Division Multiplexing technique, all
signals operate at the same frequency at different times.
o In the Time Division Multiplexing technique, the total time available in the channel is
distributed among different users. Therefore, each user is allocated a different time
interval, known as a time slot, in which data is to be transmitted by the sender.
o A user takes control of the channel for a fixed amount of time.
o In Time Division Multiplexing technique, data is not transmitted simultaneously
rather the data is transmitted one-by-one.
o In TDM, the signal is transmitted in the form of frames. Frames contain a cycle of
time slots in which each frame contains one or more slots dedicated to each user.
o It can be used to multiplex both digital and analog signals but mainly used to
multiplex digital signals.

There are two types of TDM:

o Synchronous TDM
o Asynchronous TDM

Synchronous TDM

o Synchronous TDM is a technique in which a time slot is preassigned to every device.
o In Synchronous TDM, each device is given a time slot irrespective of whether
the device has data to send or not.
o If the device does not have any data, then the slot will remain empty.
o In Synchronous TDM, signals are sent in the form of frames. Time slots are organized
in the form of frames. If a device does not have data for a particular time slot, then the
empty slot will be transmitted.
o The most popular Synchronous TDM are T-1 multiplexing, ISDN multiplexing, and
SONET multiplexing.
o If there are n devices, then there are n slots.

Concept of Synchronous TDM


In the above figure, the Synchronous TDM technique is implemented. Each device is
allocated with some time slot. The time slots are transmitted irrespective of whether the
sender has data to send or not.

Disadvantages of Synchronous TDM:

o The capacity of the channel is not fully utilized, as the empty slots, which carry no data,
are also transmitted. In the above figure, the first frame is completely filled, but in
the last two frames, some slots are empty. Therefore, we can say that the capacity of
the channel is not utilized efficiently.
o The speed of the transmission medium should be greater than the total speed of the
input lines. An alternative approach to the Synchronous TDM is Asynchronous Time
Division Multiplexing.

Asynchronous TDM

o An asynchronous TDM is also known as Statistical TDM.


o An asynchronous TDM is a technique in which time slots are not fixed as in the case
of Synchronous TDM. Time slots are allocated to only those devices which have the
data to send. Therefore, we can say that Asynchronous Time Division multiplexor
transmits only the data from active workstations.
o An asynchronous TDM technique dynamically allocates the time slots to the devices.
o In Asynchronous TDM, total speed of the input lines can be greater than the capacity
of the channel.
o Asynchronous Time Division multiplexor accepts the incoming data streams and
creates a frame that contains only data with no empty slots.
o In Asynchronous TDM, each slot contains an address part that identifies the source of
the data.
o The difference between Asynchronous TDM and Synchronous TDM is that many
slots in Synchronous TDM go unutilized, whereas in Asynchronous TDM the slots are fully
utilized. This leads to smaller transmission times and efficient utilization of the
capacity of the channel.
o In Synchronous TDM, if there are n sending devices, then there are n time slots. In
Asynchronous TDM, if there are n sending devices, then there are m time slots where
m is less than n (m<n).
o The number of slots in a frame depends on the statistical analysis of the number of
input lines.
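To contrast the two TDM variants just described, here is a small sketch that builds frames from per-device queues: the synchronous version emits an empty slot for every device whether or not it has data, while the statistical version emits only occupied slots tagged with the source index. The queue contents are invented for the example.

def synchronous_tdm(queues):
    """Synchronous TDM: every device gets a slot in every frame,
    whether or not it has data ('-' marks an empty slot)."""
    frames = []
    while any(queues):
        frames.append([q.pop(0) if q else "-" for q in queues])
    return frames

def statistical_tdm(queues):
    """Asynchronous (statistical) TDM: only active devices get slots,
    and each slot carries the source address alongside the data."""
    frames = []
    while any(queues):
        frames.append([(i, q.pop(0)) for i, q in enumerate(queues) if q])
    return frames

if __name__ == "__main__":
    inputs = [["A1", "A2"], ["B1"], [], ["D1", "D2", "D3"]]
    print(synchronous_tdm([q[:] for q in inputs]))   # frames contain empty '-' slots
    print(statistical_tdm([q[:] for q in inputs]))   # frames contain only occupied slots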

SPREAD SPECTRUM

Spread spectrum is a technique used for wireless communications in telecommunication and
radio communication. In this technique, the frequency of the transmitted signal, i.e., an
electrical signal, electromagnetic signal, or acoustic signal, is deliberately varied, which
generates a much greater bandwidth than the signal would have if its frequency were not
varied.

In other words, "Spread Spectrum is a technique in which the transmitted signals of specific
frequencies are varied slightly to obtain greater bandwidth as compared to initial bandwidth."

Now, spread spectrum technology is widely used in radio signals transmission because it can
easily reduce noise and other signal issues.

EXAMPLE OF SPREAD SPECTRUM

We know that a conventional wireless signal frequency is usually specified in
megahertz (MHz) or gigahertz (GHz). It does not change with time (except sometimes in
the form of small, rapid fluctuations that generally occur due to modulation). Suppose you
want to listen to FM stereo at frequency 104.8 MHz on your radio; once you set the frequency,
the signal stays at 104.8 MHz. It does not go up to 105.1 MHz or down to 101.1 MHz. The
digits on the radio's frequency dial stay the same at all times. The frequency of a conventional
wireless signal is kept constant to keep the bandwidth within certain limits, and the signal can
be easily located by someone who wants to retrieve the information.

In this conventional wireless communication model, you can face at least two problems:

1. A signal whose frequency is constant is subject to catastrophic interference. This
interference occurs when another signal is transmitted on or near the frequency of the
specified signal.
2. A constant-frequency signal can easily be intercepted. So, it is not suitable for
applications in which information must be kept confidential between the source
(transmitting party) and the receiver.

The spread spectrum model is used to overcome the problems of this conventional communication
model. Here, the transmitted signal frequency is deliberately varied over a comparatively
large segment of the electromagnetic radiation spectrum. This variation is done according to a
specific but complicated mathematical function. If the receiver wants to intercept the signal,
it must be tuned to frequencies that vary precisely according to this function.

REASONS TO USE SPREAD SPECTRUM

o Spread spectrum signals are distributed over a wide range of frequencies and then
collected back at the receiver. Such wide-band signals are noise-like and challenging
to detect.
o Initially, spread spectrum was adopted in military applications because of its
resistance to jamming and the difficulty of intercepting it.
o Now, this is also used in commercial wireless communication.
o It is most preferred because of its useful bandwidth utilization ability.

USAGE OF SPREAD SPECTRUM

There are many reasons to use the spread spectrum technique for wireless communications.
The following are some of them:

o It can successfully establish a secure medium of communication.
o It can increase resistance to natural interference, noise, and jamming, and help
prevent detection.
o It can limit the power flux density (e.g., in satellite downlinks).
o It can enable multiple-access communications.

TYPES OF SPREAD SPECTRUM

Spread spectrum can be categorized into two types:

o Frequency Hopping Spread Spectrum (FHSS)
o Direct Sequence Spread Spectrum (DSSS)

FREQUENCY HOPPING SPREAD SPECTRUM (FHSS)
o Frequency Hopping Spread Spectrum (FHSS) allows us to utilize the bandwidth
properly and to the maximum. In this technique, the whole available bandwidth is divided
into many channels, and the signal is spread across these channels, which are arranged
contiguously.
o The frequency slots are selected randomly, and signals are transmitted on them
according to their occupancy.
o The transmitters and receivers keep hopping between the available channels, staying on
each for a particular amount of time, measured in milliseconds.
o So FHSS effectively implements frequency-division multiplexing and time-
division multiplexing simultaneously.

The Frequency Hopping Spread Spectrum or FHSS can also be classified into two
types:

o Slow Hopping: In slow hopping, multiple bits are transmitted on the same frequency
before the next hop.
o Fast Hopping: In fast hopping, individual bits are split and transmitted on different
frequencies, i.e., there are several hops per bit.
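A minimal sketch of the hopping idea: a transmitter and receiver that share the same seed generate the same pseudo-random channel sequence and therefore hop together. The 79-channel figure is only an illustrative choice.

import random

def hop_sequence(seed: int, n_channels: int, n_hops: int):
    """Generate a pseudo-random channel-hopping sequence. A transmitter and a
    receiver sharing the same seed produce the same sequence and so stay
    synchronized as they hop."""
    rng = random.Random(seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

if __name__ == "__main__":
    # 79 channels, as an illustrative channel count.
    tx_hops = hop_sequence(seed=42, n_channels=79, n_hops=10)
    rx_hops = hop_sequence(seed=42, n_channels=79, n_hops=10)
    print(tx_hops)
    print(tx_hops == rx_hops)   # True: both sides follow the same hop pattern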

ADVANTAGES OF FREQUENCY HOPPING SPREAD SPECTRUM (FHSS)

The following are some advantages of frequency hopping spread spectrum (FHSS):

o The biggest advantage of Frequency Hopping Spread Spectrum or FHSS is its high
efficiency.
o The Frequency Hopping Spread Spectrum or FHSS signals are highly resistant to
narrowband interference because the signal hops to a different frequency band.
o It requires a shorter time for acquisition.
o It is highly secure. Its signals are very difficult to intercept if the frequency-hopping
pattern is not known; that's why it is preferred to use in Military services.
o We can easily program it to avoid some portions of the spectrum.
o Frequency Hopping Spread Spectrum or FHSS transmissions can share a frequency
band with many types of conventional transmissions with minimal mutual
interference. FHSS signals add minimal interference to narrowband communications,
and vice versa.
o It provides a very large bandwidth.
o It is simpler to implement than DSSS.

DISADVANTAGES OF FREQUENCY HOPPING SPREAD SPECTRUM (FHSS)

The following are some disadvantages of Frequency Hopping Spread Spectrum (FHSS):

o FHSS is less robust, so it sometimes requires error correction.
o FHSS needs complex frequency synthesizers.
o FHSS supports a lower data rate of 3 Mbps, as compared to the 11 Mbps data rate
supported by DSSS.
o It is not very useful for range and range-rate measurements.
o It supports a lower coverage range due to the high SNR requirement at the receiver.
o Nowadays, it is not very popular due to the emergence of new wireless technologies in
wireless products.

APPLICATIONS OF FREQUENCY HOPPING SPREAD SPECTRUM (FHSS)

Following is the list of most used applications of Frequency Hopping Spread Spectrum or
FHSS:

o The Frequency Hopping Spread Spectrum or FHSS is used in wireless local area
networks (WLAN) standard for Wi-Fi.
o FHSS is also used in the wireless personal area networks (WPAN) standard for
Bluetooth.

DIRECT SEQUENCE SPREAD SPECTRUM (DSSS)

Direct Sequence Spread Spectrum (DSSS) is a spread-spectrum modulation technique
primarily used to reduce overall signal interference in telecommunication. Direct Sequence
Spread Spectrum modulation makes the transmitted signal wider in bandwidth than
the information bandwidth. In DSSS, the message bits are modulated by a bit sequence
known as a spreading sequence. Each spreading-sequence bit is known as a chip; it
has a much shorter duration (and hence a larger bandwidth) than the original message bits.
The following are the features of Direct Sequence Spread Spectrum (DSSS).

o In Direct Sequence Spread Spectrum or DSSS technique, the data that needs to be
transmitted is split into smaller blocks.
o After that, each data block is attached with a high data rate bit sequence and is
transmitted from the sender end to the receiver end.
o Data blocks are recombined again to generate the original data at the receiver's end,
which was sent by the sender, with the help of the data rate bit sequence.
o If somehow data is lost, then data blocks can also be recovered with those data rate
bits.
o The main advantage of splitting the data into smaller blocks is that it reduces noise
and unintentional interference.
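A toy sketch of DSSS spreading: each data bit is XORed with every chip of a spreading code, so one bit becomes several chips, and the receiver despreads with the same code and majority-votes each chip group back into a bit; the 8-chip code and the single flipped chip are illustrative.

CHIP = [1, 0, 1, 1, 0, 1, 0, 0]   # illustrative 8-chip spreading code

def spread(bits):
    """DSSS spreading: XOR every data bit with each chip of the spreading code,
    so one data bit becomes len(CHIP) chips (a wider-bandwidth signal)."""
    return [b ^ c for b in bits for c in CHIP]

def despread(chips):
    """Receiver side: XOR with the same code and majority-vote each chip group
    back into one data bit."""
    bits = []
    for i in range(0, len(chips), len(CHIP)):
        group = [chips[i + j] ^ CHIP[j] for j in range(len(CHIP))]
        bits.append(1 if sum(group) > len(CHIP) // 2 else 0)
    return bits

if __name__ == "__main__":
    data = [1, 0, 1]
    tx = spread(data)
    tx[2] ^= 1                    # flip one chip to simulate narrowband noise
    print(despread(tx) == data)   # True: the single-chip error is voted out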

Direct Sequence Spread Spectrum (DSSS) can also be classified into two types:

o Wide Band Spread Spectrum
o Narrow Band Spread Spectrum

ADVANTAGES OF DIRECT SEQUENCE SPREAD SPECTRUM (DSSS)

The following are some advantages of Direct Sequence Spread Spectrum or DSSS:

o DSSS is less susceptible to noise, which is why a DSSS system performs better than an
FHSS system in the presence of noise.
o In Direct Sequence Spread Spectrum (DSSS), signals are challenging to detect.
o It provides the best discrimination against multipath signals.
o In Direct Sequence Spread Spectrum, there is very little chance of jamming, because
it effectively avoids intentional interference such as jamming.

DISADVANTAGES OF DIRECT SEQUENCE SPREAD SPECTRUM (DSSS)

The following are some disadvantages of Direct Sequence Spread Spectrum or DSSS:

o The Direct Sequence Spread Spectrum (DSSS) system has a long acquisition time,
which is why its performance is slow.
o It requires wide-band channels with small phase distortion.
o In DSSS, the pseudo-noise generator generates a sequence at high rates.
APPLICATIONS OF DIRECT SEQUENCE SPREAD SPECTRUM (DSSS)

Following is the list of most used applications of Direct Sequence Spread Spectrum or DSSS:

o Direct Sequence Spread Spectrum (DSSS) is used in LAN technology.
o Direct Sequence Spread Spectrum (DSSS) is also used in satellite communication
technology.
o DSSS is used in the military and in many other commercial applications.
o It is used for low-probability-of-intercept signals.
o It supports Code Division Multiple Access (CDMA).

SWITCHING TECHNIQUES:

In large networks, there can be multiple paths from sender to receiver. The switching
technique will decide the best route for data transmission.

Switching technique is used to connect the systems for making one-to-one communication.

Classification Of Switching Techniques

CIRCUIT SWITCHING


o Circuit switching is a switching technique that establishes a dedicated path between
sender and receiver.
o In the circuit switching technique, once the connection is established, the
dedicated path remains in place until the connection is terminated.
o Circuit switching in a network operates in a similar way to the telephone network.
o A complete end-to-end path must exist before the communication takes place.
o In the circuit switching technique, when a user wants to send data, voice, or
video, a request signal is sent to the receiver; the receiver then sends back an
acknowledgment to confirm the availability of the dedicated path. After receiving the
acknowledgment, the data is transferred over the dedicated path.
o Circuit switching is used in the public telephone network. It is used for voice
transmission.
o A fixed amount of data can be transferred at a time in circuit switching technology.

Communication through circuit switching has 3 phases:

o Circuit establishment
o Data transfer
o Circuit Disconnect

Circuit Switching can use either of the two technologies:

SPACE DIVISION SWITCHES:
o Space division switching is a circuit switching technology in which a single
transmission path is accomplished in a switch by using a physically separate set of
crosspoints.
o Space division switching can be achieved by using a crossbar switch. A crossbar
switch is a metallic crosspoint or semiconductor gate that can be enabled or disabled
by a control unit.
o Crossbar switches can be made using semiconductors, for example Xilinx
crossbar switches implemented with FPGAs.
o Space division switching provides high-speed, high-capacity, nonblocking switches.

Space Division Switches can be categorized in two ways:

o Crossbar Switch

o Multistage Switch

CROSSBAR SWITCH

The crossbar switch is a switch that has n input lines and n output lines. The crossbar switch therefore has n × n = n² intersection points, known as crosspoints.

Disadvantage of Crossbar switch:


The number of crosspoints grows as the square of the number of stations, so a crossbar switch becomes very expensive for a large number of lines. The solution to this is to use a multistage switch.

MULTISTAGE SWITCH
o A multistage switch is made by splitting the crossbar switch into smaller units and then interconnecting them.
o It reduces the number of crosspoints (see the sketch below).
o If one path fails, another path is available.
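A quick way to see why multistage designs reduce crosspoints is to compare the counts numerically. The sketch below uses the standard three-stage textbook formula, which is an assumption not spelled out in this text: with N total lines, n lines per first-stage switch, and k middle switches, the crosspoint count is 2Nk + k(N/n)², and the condition k = 2n - 1 keeps the design nonblocking.

    def crossbar_crosspoints(N):
        """A single N x N crossbar needs N^2 crosspoints."""
        return N * N

    def three_stage_crosspoints(N, n, k):
        """Three-stage switch: 2*N*k crosspoints in the outer stages
        plus k*(N/n)^2 in the middle stage (assumed textbook formula)."""
        groups = N // n
        return 2 * N * k + k * groups * groups

    N, n = 1000, 50
    k = 2 * n - 1                              # nonblocking condition (assumed)
    print(crossbar_crosspoints(N))             # 1,000,000 crosspoints
    print(three_stage_crosspoints(N, n, k))    # 237,600 crosspoints for the same N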

Advantages of Circuit Switching:

o In the case of the circuit switching technique, the communication channel is dedicated.
o It has fixed bandwidth.
o Once the dedicated path is established, the only remaining delay is the time needed to transmit the data.

Disadvantages of Circuit Switching:

o It takes a long time to establish a connection (approximately 10 seconds), during which no data can be transmitted.
o It is more expensive than other switching techniques because a dedicated path is required for each connection.
o It is inefficient because once the path is established, its capacity is wasted whenever no data is being transferred.
o Since the connection is dedicated, no other data can be transferred over it even when the channel is free.

PACKET SWITCHING


o Packet switching is a switching technique in which the message is divided into smaller pieces, and the pieces are sent individually.
o The smaller pieces are known as packets, and each packet is given a unique sequence number so that its order can be identified at the receiving end (see the sketch below).
o Every packet contains some information in its header, such as the source address, destination address, and sequence number.
o Packets travel across the network, each taking the shortest available path.
o All the packets are reassembled at the receiving end in the correct order.
o If any packet is missing or corrupted, a message is sent to the sender asking it to resend the data.
o If all the packets arrive in the correct order, an acknowledgment message is sent.
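The sketch below illustrates the splitting and reassembly just described; the header fields and the function names packetize and reassemble are illustrative only.

    def packetize(message: bytes, size: int, src: str, dst: str):
        """Split a message into packets of at most `size` payload bytes."""
        return [
            {"src": src, "dst": dst, "seq": i, "data": message[i * size:(i + 1) * size]}
            for i in range((len(message) + size - 1) // size)
        ]

    def reassemble(packets):
        """Reorder packets by sequence number and join the payloads."""
        return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    pkts = packetize(b"hello, packet switching", 5, src="A", dst="B")
    pkts.reverse()                               # packets may arrive out of order
    assert reassemble(pkts) == b"hello, packet switching"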
APPROACHES OF PACKET SWITCHING:

There are two approaches to Packet Switching:

DATAGRAM PACKET SWITCHING:

o It is a packet switching technique in which each packet, known as a datagram, is considered an independent entity. Each packet contains information about the destination, and the switch uses this information to forward the packet to the correct destination.
o The packets are reassembled at the receiving end in correct order.
o In Datagram Packet Switching technique, the path is not fixed.
o Intermediate nodes take the routing decisions to forward the packets.
o Datagram Packet Switching is also known as connectionless switching.

VIRTUAL CIRCUIT SWITCHING
o Virtual Circuit Switching is also known as connection-oriented switching.
o In the case of Virtual circuit switching, a preplanned route is established before the
messages are sent.
o Call request and call accept packets are used to establish the connection between
sender and receiver.
o In this case, the path is fixed for the duration of a logical connection.

Let's understand the concept of virtual circuit switching through a diagram:


o In the above diagram, A and B are the sender and receiver respectively. 1 and 2 are
the nodes.
o Call request and call accept packets are used to establish a connection between the
sender and receiver.
o When a route is established, data will be transferred.
o After transmission of data, an acknowledgment signal is sent by the receiver that the
message has been received.
o If the user wants to terminate the connection, a clear signal is sent for the termination.

Advantages of Packet Switching:

o Cost-effective: In the packet switching technique, switching devices do not require massive secondary storage to store the packets, so cost is minimized to some extent. Therefore, packet switching is a cost-effective technique.
o Reliable: If any node is busy, the packets can be rerouted, so the packet switching technique provides reliable communication.
o Efficient: Packet switching is an efficient technique. It does not require any established path prior to transmission, and many users can share the same communication channel simultaneously, which makes very efficient use of the available bandwidth.

Disadvantages of Packet Switching:

o The packet switching technique cannot be used in applications that require low delay and high-quality service.
o The protocols used in packet switching are complex and have a high implementation cost.
o If the network is overloaded or corrupted, lost packets must be retransmitted. This can also lead to the loss of critical information if errors are not recovered.
Unit-II DATA LINK LAYER AND MEDIUM ACCESS CONTROL

Introduction to Data Link Layer. Error Detection and Correction: Introduction, Block
Coding, Cyclic Codes, Checksum Data Link Control: DLC Services, Data-Link Layer
Protocols Media Access Control. Wired LANs: Ethernet-Ethernet Protocol,
Standard Ethernet: Characteristics, Addressing.

Introduction to Data-link Layer:

The Data Link Layer is the second layer of the OSI layered model. It is one of the most complicated layers and has complex functionalities and responsibilities. The data link layer hides the details of the underlying hardware and presents itself to the upper layer as the medium to communicate.

The data link layer works between two hosts which are directly connected in some sense. This direct connection could be point-to-point or broadcast. Systems on a broadcast network are said to be on the same link. The work of the data link layer tends to get more complex when it is dealing with multiple hosts on a single collision domain.

The data link layer is responsible for converting the data stream into signals bit by bit and sending it over the underlying hardware. At the receiving end, the data link layer picks up data from the hardware in the form of electrical signals, assembles it into a recognizable frame format, and hands it over to the upper layer.

Data link layer has two sub-layers:

• Logical Link Control: It deals with protocols, flow-control, and error control
• Media Access Control: It deals with actual control of media

Functionality of Data-link Layer

Data link layer does many tasks on behalf of upper layer. These are:

• Framing
The data-link layer takes packets from the Network Layer and encapsulates them into frames. Then, it sends each frame bit-by-bit over the hardware. At the receiver's end, the data link layer picks up signals from the hardware and assembles them into frames.
• Addressing
Data-link layer provides layer-2 hardware addressing mechanism. Hardware
address is assumed to be unique on the link. It is encoded into hardware at
the time of manufacturing.
• Synchronization
When data frames are sent on the link, both machines must be synchronized in order for the transfer to take place.
• Error Control
Sometimes signals encounter problems in transit and bits get flipped. Such errors are detected, and an attempt is made to recover the actual data bits. The layer also provides an error reporting mechanism to the sender.
• Flow Control
Stations on the same link may have different speeds or capacities. The data-link layer provides flow control, which enables both machines to exchange data at the same speed.
• Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of collision. The data-link layer provides mechanisms such as CSMA/CD to give multiple systems the capability of accessing the shared medium.

Error Detection & Correction:


Error Detection

When data is transmitted from one device to another device, the system does not
guarantee whether the data received by the device is identical to the data
transmitted by another device. An Error is a situation when the message received
at the receiver end is not identical to the message transmitted.

Types of Errors

Errors can be classified into two categories:

o Single-Bit Error
o Burst Error

Single-Bit Error:

Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.

For example, a message is corrupted by a single-bit error when one 0 bit is changed to 1.

A single-bit error is the least likely type of error in serial data transmission. For example, if the sender sends data at 10 Mbps, each bit lasts only 0.1 µs; for only a single bit to be corrupted, the noise must last no longer than 0.1 µs, and noise normally lasts much longer than that.

Single-bit errors mainly occur in parallel data transmission. For example, if eight wires are used to send the eight bits of a byte and one of the wires is noisy, then one bit per byte is corrupted.

Burst Error:

A burst error occurs when two or more bits are changed from 0 to 1 or from 1 to 0.

The length of a burst error is measured from the first corrupted bit to the last corrupted bit.

The duration of noise in a burst error is longer than the duration of noise in a single-bit error.

Burst errors are most likely to occur in serial data transmission.

The number of affected bits depends on the duration of the noise and the data rate.

Error Detecting Techniques:

The most popular Error Detecting Techniques are:


o Single parity check
o Two-dimensional parity check
o Checksum
o Cyclic redundancy check

Single Parity Check

o Single parity checking is the simplest and least expensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the end of the data unit so that the total number of 1s becomes even. For an 8-bit data unit, the total number of transmitted bits would therefore be 9.
o If the number of 1s in the data is odd, a parity bit of 1 is appended; if the number of 1s is even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and compared with the received parity bit (see the sketch below).
o Because this technique makes the total number of 1s even, it is known as even-parity checking.
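A minimal even-parity sketch of the sender and receiver steps above (the function names are illustrative):

    def add_even_parity(bits):
        """Return the data bits with an even-parity bit appended."""
        return bits + [sum(bits) % 2]

    def check_even_parity(bits_with_parity):
        """Accept the unit only if the total number of 1s (data + parity) is even."""
        return sum(bits_with_parity) % 2 == 0

    unit = add_even_parity([1, 0, 1, 1, 0, 1, 0, 1])   # 8 data bits -> 9 transmitted bits
    assert check_even_parity(unit)

    unit[2] ^= 1                     # a single flipped bit is detected ...
    assert not check_even_parity(unit)
    unit[3] ^= 1                     # ... but a second flipped bit hides it again
    assert check_even_parity(unit)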
Drawbacks of Single Parity Checking

o It can detect only single-bit errors (more generally, only an odd number of corrupted bits), which are rare.
o If an even number of bits is corrupted, for example if two bits are flipped, it cannot detect the error.

Two-Dimensional Parity Check

o Performance can be improved by using Two-Dimensional Parity


Check which organizes the data in the form of a table.
o Parity check bits are computed for each row, which is equivalent to the
single-parity check.
o In Two-Dimensional Parity check, a block of bits is divided into rows, and the
redundant row of bits is added to the whole block.
o At the receiving end, the parity bits are compared with the parity bits computed from the received data (see the sketch below).
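A small sketch of the two-dimensional check, assuming even parity for both rows and columns (the names and example bits are illustrative):

    def add_2d_parity(rows):
        """Append an even-parity bit to each row, then append a column-parity row."""
        with_row_parity = [row + [sum(row) % 2] for row in rows]
        column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
        return with_row_parity + [column_parity]

    def check_2d_parity(block):
        """Recompute row and column parities over the whole block; all must be even."""
        rows_ok = all(sum(row) % 2 == 0 for row in block)
        cols_ok = all(sum(col) % 2 == 0 for col in zip(*block))
        return rows_ok and cols_ok

    block = add_2d_parity([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]])
    assert check_2d_parity(block)
    block[0][1] ^= 1                 # one corrupted bit breaks a row and a column parity
    assert not check_2d_parity(block)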

Drawbacks of 2D Parity Check

o If two bits in one data unit are corrupted and two bits exactly the same
position in another data unit are also corrupted, then 2D Parity checker will
not be able to detect the error.
o This technique cannot be used to detect the 4-bit errors or more in some
cases.

Checksum

A Checksum is an error detection technique based on the concept of redundancy.

It is divided into two parts:

Checksum Generator

A Checksum is generated at the sending side. Checksum generator subdivides the


data into equal segments of n bits each, and all these segments are added
together by using one's complement arithmetic. The sum is complemented and
appended to the original data, known as checksum field. The extended data is
transmitted across the network.

Suppose L is the total sum of the data segments; then the checksum is the complement of L.

The sender follows the given steps:

1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented, and it becomes the checksum field.
4. The original data and the checksum field are sent across the network.

Checksum Checker

A Checksum is verified at the receiving side. The receiver subdivides the incoming
data into equal segments of n bits each, and all these segments are added
together, and then this sum is complemented. If the complement of the sum is
zero, then the data is accepted otherwise data is rejected.

The receiver follows the given steps:

1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the result is zero, the data is accepted; otherwise, the data is discarded.

A sketch of both sides is given below.
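The following sketch implements both sides with 4-bit segments purely for illustration; the segment values and function names are assumptions, not taken from a specific protocol.

    def ones_complement_sum(segments, n):
        """Add n-bit segments with end-around carry (one's-complement arithmetic)."""
        mask = (1 << n) - 1
        total = 0
        for seg in segments:
            total += seg
            while total >> n:                      # wrap any carry back into the sum
                total = (total & mask) + (total >> n)
        return total

    def make_checksum(segments, n):
        """Sender side: complement of the one's-complement sum."""
        return (~ones_complement_sum(segments, n)) & ((1 << n) - 1)

    def verify(segments, checksum, n):
        """Receiver side: data plus checksum must sum to all 1s (complement == 0)."""
        total = ones_complement_sum(segments + [checksum], n)
        return (~total) & ((1 << n) - 1) == 0

    data = [0b1001, 0b1110, 0b0110]          # three 4-bit segments (example values)
    csum = make_checksum(data, 4)
    assert verify(data, csum, 4)
    assert not verify([0b1001, 0b1111, 0b0110], csum, 4)   # a changed segment is caught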

Cyclic Redundancy Check (CRC)

CRC is a redundancy error technique used to determine the error.

Following are the steps used in CRC for error detection:

o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the number of bits in a predetermined divisor, which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the
original data. This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver
will treat this whole unit as a single unit, and it is divided by the same divisor
that was used to find the CRC remainder.

If the remainder of this division is zero, the data contains no error and is accepted.

If the remainder is not zero, the data contains an error and is therefore discarded.
Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.

CRC Generator

o A CRC generator uses a modulo-2 division. Firstly, three zeroes are


appended at the end of the data as the length of the divisor is 4 and we
know that the length of the string 0s to be appended is always one less than
the length of the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by
the divisor 1001.
o The remainder generated from the binary division is known as CRC
remainder. The generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data
unit, and the final string would be 11100111 which is sent across the
network.
CRC Checker

o The functionality of the CRC checker is similar to that of the CRC generator.
o When the string 11100111 is received at the receiving end, the CRC checker performs the modulo-2 division.
o The string is divided by the same divisor, i.e., 1001.
o In this case, the CRC checker generates a remainder of zero, so the data is accepted. (Both sides of the procedure are sketched below.)
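A small modulo-2 division sketch that reproduces the worked example above (data 11100, divisor 1001, CRC remainder 111, transmitted unit 11100111); the helper name mod2_div is illustrative.

    def mod2_div(dividend: str, divisor: str) -> str:
        """Return the modulo-2 (XOR) remainder; it has len(divisor)-1 bits."""
        work = list(dividend)
        for i in range(len(dividend) - len(divisor) + 1):
            if work[i] == "1":                   # divide only when the leading bit is 1
                for j, d in enumerate(divisor):
                    work[i + j] = str(int(work[i + j]) ^ int(d))
        return "".join(work[-(len(divisor) - 1):])

    data, divisor = "11100", "1001"
    crc = mod2_div(data + "000", divisor)        # sender appends three 0s; remainder is '111'
    codeword = data + crc                        # '11100111' is transmitted

    assert crc == "111"
    assert mod2_div(codeword, divisor) == "000"  # receiver: zero remainder means accept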

Error Correction

Error Correction codes are used to detect and correct the errors when data is
transmitted from the sender to the receiver.

Error Correction can be handled in two ways:

o Backward error correction: Once the error is discovered, the receiver


requests the sender to retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting
code which automatically corrects the errors.

A single additional bit can detect an error, but cannot correct it.

To correct an error, one has to know the exact position of the corrupted bit. For example, to correct a single-bit error in a 7-bit unit, the error correction code must determine which of the seven bits is in error. To achieve this, some additional redundant bits have to be added.

Suppose r is the number of redundant bits and d is the total number of the data
bits. The number of redundant bits r can be calculated by using the formula:

2^r >= d + r + 1

The value of r is calculated by using the above formula. For example, if the value of
d is 4, then the possible smallest value that satisfies the above relation would be 3.

To determine the position of the bit which is in error, R.W. Hamming developed a technique known as the Hamming code, which can be applied to data units of any length and uses the relationship between the data bits and the redundant bits.

Hamming Code

Parity bits: The bit which is appended to the original data of binary bits so that the
total number of 1s is even or odd.

Even parity: To check for even parity, if the total number of 1s is even, then the
value of the parity bit is 0. If the total number of 1s occurrences is odd, then the
value of the parity bit is 1.

Odd Parity: To check for odd parity, if the total number of 1s is even, then the value
of parity bit is 1. If the total number of 1s is odd, then the value of parity bit is 0.
Algorithm of Hamming code:

o A data unit of 'd' bits is combined with 'r' redundant bits to form a unit of d+r bits.
o The location of each of the (d+r) digits is assigned a decimal value.
o The 'r' bits are placed at the positions 1, 2, 4, ..., 2^(k-1), i.e., the powers of 2.
o At the receiving end, the parity bits are recalculated. The decimal value of the recalculated parity bits determines the position of an error.

Relationship b/w Error position & binary number.

Let's understand the concept of Hamming code through an example:

Suppose the original data is 1010 which is to be sent.

Total number of data bits 'd' = 4


Number of redundant bits r: 2^r >= d + r + 1
2^r >= 4 + r + 1
Therefore, the smallest value of r that satisfies the above relation is 3.
Total number of bits = d + r = 4 + 3 = 7.

Determining the position of the redundant bits

The number of redundant bits is 3. The three bits are represented by r1, r2, and r4. The positions of the redundant bits correspond to powers of 2; therefore, their positions are 1 (2^0), 2 (2^1), and 4 (2^2).

1. The position of r1 = 1
2. The position of r2 = 2
3. The position of r4 = 4
Representation of Data on the addition of parity bits:

Determining the Parity bits

Determining the r1 bit

The r1 bit is calculated by performing a parity check on the bit positions whose
binary representation includes 1 in the first position.

We observe from the above figure that the bit positions that include 1 in the first position of their binary representation are 1, 3, 5, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these bit positions corresponding to r1 is even; therefore, the value of the r1 bit is 0.

Determining r2 bit

The r2 bit is calculated by performing a parity check on the bit positions whose
binary representation includes 1 in the second position.

We observe from the above figure that the bit positions that include 1 in the second position of their binary representation are 2, 3, 6, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these bit positions corresponding to r2 is odd; therefore, the value of the r2 bit is 1.

Determining r4 bit

The r4 bit is calculated by performing a parity check on the bit positions whose
binary representation includes 1 in the third position.

We observe from the above figure that the bit positions that include 1 in the third position of their binary representation are 4, 5, 6, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these bit positions corresponding to r4 is even; therefore, the value of the r4 bit is 0.

Data transferred is given below:

Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits are
recalculated.

R1 bit

The bit positions checked by r1 are 1, 3, 5, 7.

The received bits at positions 7, 5, 3, 1 are 1, 1, 0, 0. Performing the even-parity check, the total number of 1s at these positions is even; therefore, the recalculated value of r1 is 0.

R2 bit

The bit positions checked by r2 are 2, 3, 6, 7.

The received bits at positions 7, 6, 3, 2 are 1, 0, 0, 1. Performing the even-parity check, the total number of 1s at these positions is even; therefore, the recalculated value of r2 is 0.

R4 bit

The bit positions checked by r4 are 4, 5, 6, 7.

The received bits at positions 7, 6, 5, 4 are 1, 0, 1, 1. Performing the even-parity check, the total number of 1s at these positions is odd; therefore, the recalculated value of r4 is 1.

Reading the recalculated parity bits as r4 r2 r1 gives 100 in binary, i.e., 4 in decimal. This is exactly the position of the corrupted bit, so the receiver flips bit 4 to correct the error.

Block Coding:

Cyclic Codes,

Checksum

Data Link Controls


Data Link Control is the service provided by the Data Link Layer to provide reliable data transfer over the physical medium. For example, in the half-duplex transmission mode, only one device can transmit data at a time. If both devices at the ends of the link transmit data simultaneously, the transmissions will collide, leading to the loss of the information. The data link layer provides coordination among the devices so that no collision occurs.

The Data link layer provides three functions:

o Line discipline
o Flow Control
o Error Control

Line Discipline
o Line Discipline is a functionality of the Data link layer that provides the
coordination among the link systems. It determines which device can send,
and when it can send the data.

Line Discipline can be achieved in two ways:


o ENQ/ACK
o Poll/select

ENQ/ACK

ENQ/ACK stands for Enquiry/Acknowledgement. It is used when there is a dedicated path between two devices, so that the only device capable of receiving the transmission is the intended one.

ENQ/ACK coordinates which device will start the transmission and checks whether the recipient is ready or not.

Working of ENQ/ACK

The transmitter transmits the frame called an Enquiry (ENQ) asking whether the
receiver is available to receive the data or not.

The receiver responds either with a positive acknowledgement (ACK) or with a negative acknowledgement (NAK), where a positive acknowledgement means that the receiver is ready to receive the transmission and a negative acknowledgement means that the receiver is unable to accept the transmission.

Following are the responses of the receiver:

o If the response to the ENQ is positive, the sender will transmit its data, and
once all of its data has been transmitted, the device finishes its
transmission with an EOT (END-of-Transmission) frame.
o If the response to the ENQ is negative, then the sender disconnects and
restarts the transmission at another time.
o If the response is neither negative nor positive, the sender assumes that the
ENQ frame was lost during the transmission and makes three attempts to
establish a link before giving up.
Poll/Select

The Poll/Select method of line discipline works with those topologies where one
device is designated as a primary station, and other devices are secondary stations.

Working of Poll/Select

o In this method, the primary device and multiple secondary devices share a single transmission line, and all exchanges are made through the primary device even when the ultimate destination is a secondary device.
o The primary device has control over the communication link, and the
secondary device follows the instructions of the primary device.
o The primary device determines which device is allowed to use the
communication channel. Therefore, we can say that it is an initiator of the
session.
o If the primary device wants to receive data from the secondary devices, it asks each secondary device whether it has anything to send; this process is known as polling.
o If the primary device wants to send some data to a secondary device, it tells the target secondary device to get ready to receive the data; this process is known as selecting.

Select

o The select mode is used when the primary device has something to send.
o When the primary device wants to send some data, then it alerts the
secondary device for the upcoming transmission by transmitting a Select
(SEL) frame, one field of the frame includes the address of the intended
secondary device.
o When the secondary device receives the SEL frame, it sends an
acknowledgement that indicates the secondary ready status.
o If the secondary device is ready to accept the data, the primary device sends two or more data frames to the intended secondary device. Once the data has been transmitted, the secondary device sends an acknowledgement specifying that the data has been received.

Poll

o The Poll mode is used when the primary device wants to receive some data
from the secondary device.
o When a primary device wants to receive the data, then it asks each device
whether it has anything to send.
o Firstly, the primary polls the first secondary device; if it responds with a NAK (negative acknowledgement), it has nothing to send. The primary then approaches the second secondary device; if it responds with an ACK, it has data to send. The secondary device can send more than one frame one after another, or it may be required to wait for an ACK before sending each one, depending on the type of protocol being used.

Flow Control
o It is a set of procedures that tells the sender how much data it can transmit
before the data overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data.
Therefore, the receiving device must be able to inform the sending device to
stop the transmission temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing the information until it is processed.

Two methods have been developed to control the flow of data:

o Stop-and-wait
o Sliding window

Stop-and-wait

o In the Stop-and-wait method, the sender waits for an acknowledgement after


every frame it sends.
o Only when the acknowledgement is received is the next frame sent. This process of alternately sending a frame and waiting continues until the sender transmits the EOT (End of Transmission) frame.

Advantage of Stop-and-wait

The Stop-and-wait method is simple as each frame is checked and acknowledged


before the next frame is sent.

Disadvantage of Stop-and-wait

The stop-and-wait technique is inefficient because each frame must travel all the way to the receiver, and the acknowledgement must travel all the way back, before the next frame can be sent. Each frame sent and acknowledged uses the entire time needed to traverse the link.

Sliding Window

o The sliding window is a method of flow control in which the sender can transmit several frames before getting an acknowledgement.
o In sliding window control, multiple frames can be sent one after another, due to which the capacity of the communication channel can be utilized efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver
end.
o The window can hold the frames at either end, and it provides the upper limit
on the number of frames that can be transmitted before the
acknowledgement.
o Frames can be acknowledged even when the window is not completely filled.
o The window has a specific size in which they are numbered as modulo-n
means that they are numbered from 0 to n-1. For example, if n = 8, the
frames are numbered from 0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
o The size of the window is represented as n-1. Therefore, maximum n-1
frames can be sent before acknowledgement.
o When the receiver sends the ACK, it includes the number of the next frame that it wants to receive. For example, to acknowledge the string of frames ending with frame number 4, the receiver will send an ACK containing the number 5. When the sender sees the ACK with the number 5, it knows that frames 0 through 4 have been received (see the sketch below).
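A minimal bookkeeping sketch of the sender side just described, assuming sequence numbers modulo 8 and a window of size 7; the class and method names are illustrative.

    class SenderWindow:
        """Track outstanding frames for a window of size modulo-1."""

        def __init__(self, modulo=8):
            self.modulo = modulo
            self.next_seq = 0        # sequence number of the next frame to send
            self.outstanding = 0     # frames sent but not yet acknowledged

        def send(self):
            """Send one frame; at most modulo-1 frames may be outstanding at once."""
            assert self.outstanding < self.modulo - 1, "window full: wait for an ACK"
            seq = self.next_seq
            self.next_seq = (self.next_seq + 1) % self.modulo
            self.outstanding += 1
            return seq

        def ack(self, next_expected):
            """ACK n acknowledges every outstanding frame before n and reopens the window."""
            base = (self.next_seq - self.outstanding) % self.modulo  # oldest unacked frame
            self.outstanding -= (next_expected - base) % self.modulo

    w = SenderWindow()
    sent = [w.send() for _ in range(5)]    # frames 0..4 go out; only 2 window slots remain
    w.ack(5)                               # ACK 5 => frames 0..4 received; window reopens
    assert sent == [0, 1, 2, 3, 4] and w.outstanding == 0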

Sender Window

o At the beginning of a transmission, the sender window contains n-1 frames, and as they are sent out, the left boundary moves inward, shrinking the size of the window. For example, if the size of the window is w and three frames are sent out, then the number of frames left in the sender window is w-3.
o Once an ACK arrives, the sender window expands by the number of frames acknowledged by that ACK.
o For example, suppose the size of the window is 7 and frames 0 through 4 have been sent out with no acknowledgement arriving; then the sender window contains only two frames, i.e., 5 and 6. Now, if an ACK arrives with the number 4, it means that frames 0 through 3 have arrived undamaged, and the sender window expands to include the next four frames. Therefore, the sender window contains six frames (5, 6, 7, 0, 1, 2).
Receiver Window

o At the beginning of transmission, the receiver window does not contain n


frames, but it contains n-1 spaces for frames.
o When the new frame arrives, the size of the window shrinks.
o The receiver window does not represent the number of frames received, but
it represents the number of frames that can be received before an ACK is
sent. For example, the size of the window is w, if three frames are received
then the number of spaces available in the window is (w-3).
o Once the acknowledgement is sent, the receiver window expands by the
number equal to the number of frames acknowledged.
o Suppose the size of the window is 7, meaning that the receiver window contains seven spaces for seven frames. If one frame is received, the receiver window shrinks, moving the boundary from 0 to 1. In this way the window shrinks one space at a time, so it now contains six spaces. If frames 0 through 4 have been received, the window contains two spaces before an acknowledgement is sent.
Error Control
Error Control is a technique of error detection and retransmission.

Categories of Error Control:

Stop-and-wait ARQ
Stop-and-wait ARQ is a technique used to retransmit the data in case of damaged
or lost frames.

This technique works on the principle that the sender will not transmit the next
frame until it receives the acknowledgement of the last transmitted frame.

Four features are required for the retransmission:

o The sending device keeps a copy of the last transmitted frame until the
acknowledgement is received. Keeping the copy allows the sender to
retransmit the data if the frame is not received correctly.
o Both the data frames and the ACK frames are numbered alternately 0 and 1 so that they can be identified individually. An ACK 1 frame acknowledges the data 0 frame, meaning that the data 0 frame arrived correctly and the receiver now expects the data 1 frame.
o If an error occurs in the last transmitted frame, then the receiver sends the
NAK frame which is not numbered. On receiving the NAK frame, sender
retransmits the data.
o It works with the timer. If the acknowledgement is not received within the
allotted time, then the sender assumes that the frame is lost during the
transmission, so it will retransmit the frame.

Two possibilities of the retransmission:

o Damaged Frame: When the receiver receives a damaged frame, i.e., a frame that contains an error, it returns a NAK frame. For example, the sender transmits the data 0 frame; the receiver returns ACK 1, meaning that data 0 arrived correctly and data 1 is expected next. The sender transmits the next frame, data 1; it arrives undamaged, and the receiver returns ACK 0. The sender transmits the next frame, data 0; this time the receiver detects an error and returns a NAK, so the sender retransmits the data 0 frame.
o Lost Frame: The sender is equipped with a timer that starts when the frame is transmitted. Sometimes the frame does not arrive at the receiving end, so it can be acknowledged neither positively nor negatively. The sender waits for an acknowledgement until the timer goes off; if the timer goes off, it retransmits the last transmitted frame. (A sketch of this behaviour is given below.)
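The behaviour described above can be sketched as follows; the lossy-channel model (20% loss, 20% damage), the fixed seed, and all names are assumptions made purely for the demonstration.

    import random

    def unreliable_send(frame):
        """Toy channel (an assumption for this demo): lose, damage, or deliver the frame."""
        r = random.random()
        if r < 0.2:
            return None       # frame lost: no reply arrives, so the sender will time out
        if r < 0.4:
            return "NAK"      # frame damaged: receiver returns an unnumbered NAK
        return "ACK"          # frame arrived intact

    def stop_and_wait_send(frames, max_tries=10):
        seq = 0
        for payload in frames:
            frame = (seq, payload)                 # keep a copy until it is acknowledged
            for _ in range(max_tries):
                reply = unreliable_send(frame)
                if reply == "ACK":
                    break                          # acknowledged: the next frame may be sent
                # reply is None (timeout) or "NAK": retransmit the stored copy
            else:
                raise RuntimeError("link failed after repeated retransmissions")
            seq ^= 1                               # data frames alternate between 0 and 1
        return "all frames delivered"

    random.seed(1)                                 # deterministic run for the demo
    print(stop_and_wait_send(["frame-A", "frame-B", "frame-C"]))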
Sliding Window ARQ

Sliding Window ARQ is a technique used for continuous transmission error control.

Three Features used for retransmission:

o In this case, the sender keeps copies of all the transmitted frames until they have been acknowledged. Suppose frames 0 through 4 have been transmitted and the last acknowledgement was for frame 2; the sender has to keep copies of frames 3 and 4 until they are received correctly.
o The receiver can send either NAK or ACK depending on the conditions. The
NAK frame tells the sender that the data have been received damaged.
Since the sliding window is a continuous transmission mechanism, both
ACK and NAK must be numbered for the identification of a frame. The ACK
frame consists of a number that represents the next frame which the
receiver expects to receive. The NAK frame consists of a number that
represents the damaged frame.
o Sliding window ARQ is equipped with a timer to handle lost acknowledgements. Suppose that n-1 frames have been sent before any acknowledgement is received. The sender waits for the acknowledgement, so it starts the timer and waits before sending any more frames. If the allotted time runs out, the sender retransmits one or all of the frames, depending upon the protocol used.

Two protocols used in sliding window ARQ:

o Go-Back-N ARQ: In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender retransmits all the frames sent after the last frame for which it received a positive ACK.

Three possibilities can occur for retransmission:

o Damaged Frame: When the frame is damaged, then the receiver sends a
NAK frame.
In the above figure, three frames have been transmitted before an error is discovered in the third frame. In this case, ACK 2 has been returned, indicating that frames 0 and 1 have been received successfully without any error. The receiver discovers the error in the data 2 frame, so it returns the NAK 2 frame. Frame 3 is also discarded because it was transmitted after the damaged frame. Therefore, the sender retransmits frames 2 and 3.

o Lost Data Frame: In sliding window protocols, data frames are sent sequentially. If any frame is lost, the next frame to arrive at the receiver is out of sequence. The receiver checks the sequence number of each frame, discovers the frame that has been skipped, and returns a NAK for the missing frame. The sending device retransmits the frame indicated by the NAK as well as the frames transmitted after the lost frame.
o Lost Acknowledgement: The sender can send as many frames as the window allows before waiting for an acknowledgement. Once the limit of the window is reached, the sender has no more frames to send and must wait for an acknowledgement. If the acknowledgement is lost, the sender could wait forever. To avoid such a situation, the sender is equipped with a timer that starts counting whenever the window capacity is reached. If the acknowledgement has not been received within the time limit, the sender retransmits all the frames sent since the last ACK.

Selective-Reject ARQ

o Selective-Reject ARQ technique is more efficient than Go-Back-n ARQ.


o In this technique, only those frames are retransmitted for which negative
acknowledgement (NAK) has been received.
o The receiver's storage buffer keeps the frames received after a damaged frame on hold until the frame in error has been correctly received.
o The receiver must have appropriate logic for reinserting the frames in the correct order.
o The sender must have a searching mechanism that selects only the requested frame for retransmission.
Data Link Layer Services:

The data link layer generally represents the protocol layer in a program that handles and controls the transmission of data between source and destination machines. It is responsible for the exchange of frames among nodes or machines over the physical network media. This layer is the one closest to the Physical Layer (hardware).

Data Link Layer is basically second layer of seven-layer Open System


Interconnection (OSI) reference model of computer networking and lies just
above Physical Layer.
This layer provides data reliability and various tools to establish, maintain, and release data link connections between network nodes. It is responsible for receiving data bits from the Physical Layer and then converting these bits into groups, known as data link frames, so that they can be transmitted further. It is also responsible for handling errors that might arise during the transmission of bits.

Service Provided to Network Layer :

An important and essential function of the Data Link Layer is to provide an interface to the Network Layer. The Network Layer is the third layer of the seven-layer OSI reference model and is present just above the Data Link Layer.
The main aim of the Data Link Layer is to transmit the data frames it has received to the destination machine so that these frames can be handed over to the network layer of the destination machine. At the network layer, these data frames are addressed and routed.
This process is shown in the diagram:
1. Actual Communication :

In this communication, physical medium is present through which Data Link


Layer simply transmits data frames. The actual path is Network Layer -> Data
link layer -> Physical Layer on sending machine, then to physical media and
after that to Physical Layer -> Data link layer -> Network Layer on receiving
machine.

2. Virtual Communication :

In this communication, no physical medium is present for the Data Link Layer to transmit data. It can only be visualized or imagined that the two Data Link Layers are communicating with each other with the help of a data link protocol.

Types of Services provided by Data Link Layer :

The Data link layer generally provides or offers three types of services as
given below :

1. Unacknowledged Connectionless Service


2. Acknowledged Connectionless Service
3. Acknowledged Connection-Oriented Service

1. Unacknowledged Connectionless Service :


Unacknowledged connectionless service simply provides datagram-style delivery without any error control or flow control. In this service, the source machine transmits independent frames to the destination machine without having the destination machine acknowledge these frames.
This service is called a connectionless service because no connection is established between the source machine and the destination machine before the data transfer, and none is released after the data transfer.
In the Data Link Layer, if a frame is lost due to noise, no attempt is made to detect the loss or to recover from it. This simply means that there is no error or flow control. An example is Ethernet.
2. Acknowledged Connectionless Service :

This service provides acknowledged connectionless delivery, i.e., packet delivery is acknowledged, with the help of the stop-and-wait protocol.
In this service, each frame transmitted by the Data Link Layer is acknowledged individually, so the sender knows whether or not the transmitted data frames were received safely. There is no logical connection established, and each frame that is transmitted is acknowledged individually.
This mode provides a means by which the user of the data link can send data and request the return of data at the same time. It also uses a particular time period: if that period passes without an acknowledgment being received, the data frame is resent.
This service is more reliable than the unacknowledged connectionless service. It is generally useful over unreliable channels, such as wireless systems, Wi-Fi services, etc.
3. Acknowledged Connection-Oriented Service :

In this type of service, connection is established first among sender and


receiver or source and destination before data is transferred.
Then data is transferred over this established connection.
In this service, each frame that is transmitted is given an individual number first, so as to guarantee that each frame is received exactly once and in the correct order.

Data link protocols:

The data link protocols operate in the data link layer of the Open System
Interconnections (OSI) model, just above the physical layer.
The services provided by the data link protocols may be any of the following −
• Framing − The stream of bits from the physical layer is divided into data frames whose size ranges from a few hundred to a few thousand bytes.
These frames are distributed to different systems, by adding a header to the
frame containing the address of the sender and the receiver.
• Flow Control − Through flow control techniques, data is transmitted in such
a way so that a fast sender does not drown a slow receiver.
• Error Detection and/or Correction − These are techniques of detecting and
correcting data frames that have been corrupted or lost during transmission.
• Multipoint transmission − Access to shared channels and multiple points
are regulated in case of broadcasting and LANs.
Common Data Link Protocols

• Synchronous Data Link Protocol (SDLC) − SDLC was developed by IBM in


the 1970s as part of Systems Network Architecture. It was used to connect
remote devices to mainframe computers. It ascertained that data units
arrive correctly and with right flow from one network point to the next.
• High Level Data Link Protocol (HDLC) − HDLC is based upon SDLC and
provides both unreliable service and reliable service. It is a bit – oriented
protocol that is applicable for both point – to – point and multipoint
communications.
• Serial Line Interface Protocol (SLIP) − This is a simple protocol for
transmitting data units between an Internet service provider (ISP) and home
user over a dial-up link. It does not provide error detection / correction
facilities.
• Point - to - Point Protocol (PPP) − This is used to transmit multiprotocol
data between two directly connected (point-to-point) computers. It is a byte
– oriented protocol that is widely used in broadband communications
having heavy loads and high speeds.
• Link Control Protocol (LCP) − It is one of the PPP protocols and is responsible for establishing, configuring, testing, maintaining, and terminating links for transmission. It also handles negotiation for the setup of options and the use of features by the two endpoints of the links.
• Network Control Protocol (NCP) − These protocols are used for negotiating
the parameters and facilities for the network layer. For every higher-layer
protocol supported by PPP, one NCP is there.

Media Access Control:

Media access control is a network data transfer policy that determines how data is transmitted between two computer terminals through a network cable. The media access control policy is implemented in a sub-layer of the data link layer (Layer 2) of the OSI reference model.

The essence of the MAC protocol is to ensure non-collision and eases the transfer of
data packets between two computer terminals. A collision takes place when two or
more terminals transmit data/information simultaneously. This leads to a breakdown of
communication, which can prove costly for organizations that lean heavily on data
transmission.

Media Access Control Methods

The network channel through which data is transmitted between terminal nodes without collision has four ways of accomplishing this purpose. They include:
• Carrier sense multiple access with collision avoidance (CSMA/CA)
• Carrier sense multiple access with collision detection (CSMA/CD)
• Demand priority
• Token passing

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

Carrier sense multiple access with collision avoidance (CSMA/CA) is a media access control policy that regulates how data packets are transmitted between two computer nodes. This method avoids collisions by configuring each computer terminal to send a signal before transmission. The signal is sent by the transmitting computer to avoid a collision.

Multiple access implies that many computers are attempting to transmit data. Collision avoidance means that when a computer node transmitting data states its intention, the others wait a specific length of time before transmitting their data.

CSMA/CA data traffic regulation is slow and adds cost in having each computer node signal its intention before transmitting data. It was used on Apple's LocalTalk networks and is also the access method used in IEEE 802.11 wireless LANs.

Wired local area network:

A wired local area network is a local area network where the connectivity between different components or elements of the LAN is achieved using wires and cables. To implement wired LANs, various technologies were introduced, such as Token Ring, Token Bus, FDDI, ATM LANs, and Ethernet.

Among all these technologies, only Ethernet survived in the market because it has the capability to update itself to meet increasing requirements. So, in this context, we will discuss wired local area networks in brief and how Ethernet technology succeeded in implementing wired LANs.

Ethernet:

Ethernet is a communication protocol created at Xerox PARC in 1973 by Robert Metcalfe and others; it connects computers on a network over a wired connection. It is a widely used LAN protocol, which was originally known as the Alto Aloha Network. It connects computers within a local area network and a wide area network. Numerous devices like printers and laptops can be connected by LAN and WAN within buildings, homes, and even small neighborhoods.

It offers a simple user interface that helps to connect various devices easily, such as switches, routers, and computers. A local area network (LAN) can be created with the help of a single router and a few Ethernet cables, which enable communication between all linked devices. An Ethernet port is included in your laptop; one end of a cable is plugged into it and the other end is connected to a router. Ethernet ports are slightly wider than the telephone jacks they otherwise resemble.

Most Ethernet devices are backward compatible with lower-speed Ethernet cables and devices. However, the speed of the connection will only be as fast as the lowest common denominator. For instance, the computer will only be able to send and receive data at 10 Mbps if you attach a computer with a 10BASE-T NIC to a 100BASE-T network. Likewise, if you have a Gigabit Ethernet router and connect a 100BASE-T device to it, the maximum data transfer rate for that device will be 100 Mbps.

The wireless networks replaced Ethernet in many areas; however, Ethernet is still
more common for wired networking. Wi-Fi reduces the need for cabling as it allows
the users to connect smartphones or laptops to a network without the required
cable. While comparing with Gigabit Ethernet, the faster maximum data transfer
rates are provided by the 802.11ac Wi-Fi standard. Still, as compared to a wireless
network, wired connections are more secure and are less prone to interference.
This is the main reason to still use Ethernet by many businesses and organizations.

Different Types of Ethernet Networks

An Ethernet device with CAT5/CAT6 copper cables is connected to a fiber optic


cable through fiber optic media converters. The distance covered by the network is
significantly increased by this extension for fiber optic cable. There are some kinds
of Ethernet networks, which are discussed below:

o Fast Ethernet: This type of Ethernet is usually supported by twisted pair or CAT5 cable and can transfer or receive data at around 100 Mbps. If a device such as a camera or laptop is connected to the network, it functions at 100Base or 10/100Base Ethernet on the fiber side of the link. Fast Ethernet uses fiber optic cable and twisted pair cable to create communication. 100BASE-TX, 100BASE-FX, and 100BASE-T4 are the three categories of Fast Ethernet.
o Gigabit Ethernet: This type of Ethernet network is an upgrade from Fast
Ethernet, which uses fiber optic cable and twisted pair cable to create
communication. It can transfer data at a rate of 1000 Mbps or 1Gbps. In
modern times, gigabit Ethernet is more common. This network type also
uses CAT5e or other advanced cables, which can transfer data at a rate of
10 Gbps.

The primary intention behind developing Gigabit Ethernet was to fulfill users' requirements, such as faster data transfer, a faster communication network, and more.

o 10-Gigabit Ethernet: This type of network can transmit data at a rate of 10 Gigabits per second and is considered a more advanced, high-speed network. It makes use of CAT6a or CAT7 twisted-pair cables as well as fiber optic cables. Using fiber optic cable, this network can be extended up to nearly 10,000 meters.
o Switch Ethernet: This type of network involves adding switches or hubs, which helps to improve network throughput because each workstation can have its own dedicated 10 Mbps connection instead of sharing the medium. A regular network cable is used instead of a crossover cable when a switch is used in a network. Switches support 10 Mbps to 100 Mbps for Fast Ethernet and 1000 Mbps to 10 Gbps for the latest Ethernet.

Advantages of Ethernet

o It is not much costly to form an Ethernet network. As compared to other


systems of connecting computers, it is relatively inexpensive.
o An Ethernet network provides high security for data, as firewalls can be used for data security.
o Also, a Gigabit network allows users to transmit data at speeds of 1 to 100 Gbps.
o In this network, the quality of the data transfer is maintained.
o In this network, administration and maintenance are easier.
o The latest versions of Gigabit Ethernet and wireless Ethernet have the potential to transmit data at speeds of 1 to 100 Gbps.

Disadvantages of Ethernet

o It does not offer deterministic service; therefore, it is not considered the best for real-time applications.
o A wired Ethernet network restricts you in terms of distance; it is best used over short distances.
o If you create a wired Ethernet network that needs cables, hubs, switches, and routers, they increase the cost of installation.
o Interactive applications need very small amounts of data to be transferred quickly, which Ethernet does not handle well.
o In an Ethernet network, the receiver does not send any acknowledgement after accepting a packet.
o If you are planning to set up a wireless Ethernet network, it can be difficult if you have no experience in the networking field.
o Compared with a wired Ethernet network, a wireless network is less secure.
o The full-duplex data communication mode is not supported by the 100Base-T4 version.
o Additionally, finding a problem in an Ethernet network (if one occurs) is very difficult, as it is not easy to determine which node or cable is causing the problem.

Ethernet Standards

An Ethernet standard describes the properties, functions, and implementation of a


specific media type. There are various types of media. A media type can provide
different speeds of transmission on different types of implementation. An Ethernet
standard specifies a specific implementation of a particular media type. Ethernet
standards are defined by IEEE.

Ethernet Terminology

Ethernet standards are expressed by using the following terminology.

Transmission speed, type of transmission, and length or type of cabling


Let's take an example to understand the above terminology. The
term '100BaseT' describes the following: -

100: - The number indicates that the standard data transmission speed of this
media type is 100Mbps.

Base: - This indicates that the media uses baseband technology for transmission.

T: - The letter indicates that the media uses twisted-pair cabling.

Key points

• The name of an Ethernet standard consists of three parts. The first part
contains a number, the second part contains a word (mostly Base), and the third
part contains a number or letters.
• The first part specifies the data transmission speed of the media.
• The second part indicates the technology or the method the media uses to
transmit data. The word 'Base' signifies a type of network that uses only one carrier
frequency for signaling and requires all network stations to share its use.
• The third part specifies the length or type of the cable that the media uses in its implementation. For example, if the standard contains the letter T in this part, it means the standard uses twisted-pair cabling. If a standard contains the number 5 in this part, it means a cable segment can span 500 meters. (A small parsing sketch follows below.)
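As a small illustration of this naming scheme, the sketch below splits a standard name into its three parts; the function name and the regular expression are illustrative and only cover names of the simple '<speed>Base<medium>' form.

    import re

    def parse_ethernet_standard(name: str):
        """Split a name like '100BaseT' into speed, signaling method, and medium/length."""
        match = re.fullmatch(r"(\d+)(Base)(\w+)", name)
        if not match:
            raise ValueError(f"unrecognized Ethernet standard name: {name}")
        speed, signaling, medium = match.groups()
        return {
            "speed_mbps": int(speed),     # first part: data transmission speed in Mbps
            "signaling": signaling,       # second part: 'Base' = baseband signaling
            "medium": medium,             # third part: cable type (T = twisted pair) or length
        }

    print(parse_ethernet_standard("100BaseT"))  # {'speed_mbps': 100, 'signaling': 'Base', 'medium': 'T'}
    print(parse_ethernet_standard("10Base5"))   # the trailing '5' denotes a 500-meter segment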

Properties and functions of the most common Ethernet standards


The following section describes the properties and functions of the most common Ethernet standards.

10Base2

This standard is also known as ThinNet. It uses coaxial cabling. It provides 10 Mbps speed. It supports a maximum segment length of roughly 200 meters (185 meters in practice). This standard is not used in modern networks.
10Base5

This standard is also known as ThickNet. It also uses coaxial cabling and provides
10Mbps speed. It supports a maximum length of 500 meters. This standard is also
not used in modern networks.

10BaseT

10BaseT is one of the most common Ethernet standards used in Ethernet networks.
It uses UTP (Cat3 or higher) cables and Hubs. Hubs use a physical star topology
and a logical bus topology. Hubs repeat and forward signals to all nodes. Because
of Hubs, the 10BaseT networks are slow and susceptible to collisions.

This standard also specifies a rule about how many Hubs you can use in a network.
This rule specifies that a maximum of four hubs can be placed between
communicating workstations. This rule ensures that all stations on the network
can detect a collision.

Due to the slow data transmission speed and collision, modern networks do not
use the 10BaseT standard.

10BaseF

10BaseF is an implementation of 10BaseT over fiber optic cabling. 10BaseF offers


only 10 Mbps, even though the fiber optic media has the capacity for much faster
data rates. One of the implementations of 10BaseF is to connect two hubs as well
as connecting hubs to workstations.

Due to the slow data transmission speed and expensive cabling, the 10BaseF standard is also not used in modern networks.

100BaseT4

100BaseT4 was created to upgrade 10BaseT networks over Cat3 wiring to 100
Mbps without having to replace the wiring. Using four pairs of twisted pair wiring,
two of the four pairs are configured for half-duplex transmission (data can move in
only one direction at a time). The other two pairs are configured as simplex
transmission, which means data moves only in one direction on a pair all the time.

100BaseTX
100BaseTX is also known as Fast Ethernet. It transmits data at 100 Mbps. Fast
Ethernet works nearly identically to 10BaseT, including that it has a physical star
topology using a logical bus. 100BaseTX requires Cat5 or higher UTP cabling. It
uses two of the four-wire pairs: one to transmit data and the other to receive data.

This is the most commonly used Ethernet standard in modern networks.

100BaseFX

100BaseFX is known as Fast Ethernet. 100BaseFX runs over multimode


fiber cables. Multimode fiber optic cables use LEDs to transmit data and are thick
enough that the light signals bounce off the walls of the fiber. The dispersion of the
signal limits the length of the multimode fiber.

1000BaseT

1000BaseT is also known as Gigabit Ethernet. It uses Cat5 or higher grade UTP
cable. It uses all four pairs of the cable. It uses a physical star topology with a
logical bus. There is also 1000BaseF, which runs over multimode fiber optic
cabling. It supports both the full-duplex and half-duplex modes of data
transmission.

10GBaseT

This standard is also known as 10 Gigabit Ethernet. It uses Cat6 or higher grade
UTP cable. It uses all four pairs of the UTP cable. It provides 10 Gbps speed. It
operates only in full-duplex mode.

Characteristics of Ethernet:

Ethernet is a widely used networking technology that defines the rules for
organizing and formatting data for transmission over a network. Here are some key
characteristics of Ethernet in computer networks:

Physical Layer:

• Ethernet operates at the physical layer (Layer 1) and the data link layer
(Layer 2) of the OSI model.
• It specifies the electrical, mechanical, and functional characteristics of the
hardware, such as cables, connectors, and network interface cards (NICs).

Topology:
• Ethernet supports various topologies, including star, bus, ring, and hybrid
configurations.
• In a star topology, devices are connected to a central hub or switch, while a
bus topology involves a single communication channel shared by all devices.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD):
• Traditional Ethernet uses CSMA/CD as a protocol to manage access to the
network medium.
• Before transmitting, a device listens to the network to check if it is clear. If a
collision is detected during transmission, devices involved in the collision
use a backoff algorithm and reattempt transmission after a random interval.

Frame Format:

• Ethernet frames consist of a preamble, destination and source MAC addresses, EtherType/Length field, data, and a Frame Check Sequence (FCS).
• The preamble helps synchronize clocks between sender and receiver, while the FCS is used for error checking.

MAC Address:

• Each device on an Ethernet network is assigned a unique MAC (Media Access Control) address.
• The MAC address is a 48-bit identifier, typically expressed in hexadecimal format and assigned by the manufacturer.

Data Rate:

• Ethernet supports various data rates, ranging from the original 10 Mbps
(Ethernet) to 100 Mbps (Fast Ethernet), 1 Gbps (Gigabit Ethernet), 10 Gbps,
40 Gbps, and 100 Gbps, with ongoing developments.

Switched Ethernet:

• With the introduction of switches, Ethernet evolved to switched Ethernet.
• Switches create separate collision domains for each connected device, reducing collisions and increasing network efficiency.
• Full-duplex communication is supported, allowing simultaneous data transmission and reception.

Frame Forwarding:

• Switches forward Ethernet frames based on MAC addresses, building a dynamic MAC address table that associates MAC addresses with the corresponding switch ports.
• This improves network efficiency by reducing unnecessary traffic.
Half-Duplex and Full-Duplex:

• Ethernet originally operated in half-duplex mode, where devices could either send or receive at a given time.
• Full-duplex communication allows devices to transmit and receive simultaneously, enhancing network performance.

Standardization:

• Ethernet standards are defined by the Institute of Electrical and Electronics Engineers (IEEE).
• Common standards include IEEE 802.3 for Ethernet and its variants, such as 802.3u (Fast Ethernet) and 802.3ab (Gigabit Ethernet).

Ethernet Over Optical Fiber:

• Ethernet can be transmitted over optical fiber using standards like 1000BASE-SX, 1000BASE-LX, and 10GBASE-SR.

Ubiquity:

• Ethernet is highly prevalent and used in various environments, from local area networks (LANs) in homes and offices to wide area networks (WANs) and the internet.

Ethernet Addressing:

MAC Address or Ethernet Addressing

Every Ethernet frame contains two addresses: source and destination. The source address identifies the device that generated the frame. The destination address identifies the intended recipient(s) of the frame.

An Ethernet address is also known as a hardware address, physical address, burned-in address, universal address, MAC address, or LAN address.

These terms describe the purposes and functions of the Ethernet address. For example, the terms hardware address and physical address indicate that the address belongs to an interface.

The terms MAC address and LAN address indicate that the data link layer uses this address in the LAN environment.

The term burned-in address (BIA) indicates that a permanent MAC address has been encoded (burned) into the ROM chip on the NIC.

The term universal address indicates that the address is unique in the universe.
Globally unique MAC addresses

An administrative process is followed to make each MAC address unique in the universe. A MAC address is a 6-byte (48-bit) binary number. The first 3 bytes are assigned by the IEEE and the last 3 bytes are assigned by the manufacturer.

Before a manufacturer builds Ethernet products, it obtains a universally unique 3-byte code from the IEEE. The IEEE provides a distinct 3-byte code to every Ethernet product manufacturer.

The manufacturer uses the assigned code to generate MAC addresses for its products. In each MAC address, it uses the assigned code as the first 3 bytes and chooses the last 3 bytes to make the address unique. As a result, the MAC address of every device in the universe is unique.

For convenience, devices display MAC addresses as 12-digit hexadecimal numbers, with separators between groups of digits. For example, a Cisco switch might list a MAC address as 0012.AB12.3456.

Examples of MAC addresses

Following are the example MAC addresses

0000.AB12.3456, AA12.AB12.3456, 0012:1234:45CD, CC00:AABB:CC22


Key points

• MAC stands for Media Access Control.
• Each MAC address is globally unique.
• MAC addresses work in the Data Link layer.
• A MAC address is used for delivery only within the local network segment.
• A MAC address is 48 bits long in binary.
• A MAC address is usually written in hexadecimal.
• An Organizationally Unique Identifier (OUI) is a 3-byte code.
• IEEE assigns OUI codes to Ethernet manufacturers.
• OUI codes are unique among manufacturers.
• Manufacturers use OUI codes to generate unique MAC addresses for their products.
• In each MAC address, the first 3 bytes are the OUI.
• The manufacturer uses the last 3 bytes to generate a unique MAC address for every interface.
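To make the OUI structure concrete, here is a small illustrative Python sketch (not part of the original notes; the sample address is hypothetical) that normalizes a MAC address written in any of the common separator styles and splits it into its OUI and device-specific parts.

```python
def split_mac(mac: str):
    """Normalize a MAC address and split it into OUI and device-specific parts.

    Accepts common notations such as 00-12-AB-12-34-56, 00:12:AB:12:34:56
    or the Cisco style 0012.AB12.3456.
    """
    # Keep only the hexadecimal digits.
    digits = "".join(c for c in mac if c.lower() in "0123456789abcdef")
    if len(digits) != 12:
        raise ValueError("a MAC address must contain exactly 12 hex digits")
    octets = [digits[i:i + 2].upper() for i in range(0, 12, 2)]
    oui = ":".join(octets[:3])        # first 3 bytes: assigned by IEEE
    device = ":".join(octets[3:])     # last 3 bytes: assigned by the manufacturer
    return oui, device

# Example (hypothetical address in the Cisco-style notation used above):
print(split_mac("0012.AB12.3456"))   # ('00:12:AB', '12:34:56')
```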
UNIT III: NETWORK LAYER

3.1 Introduction to Network Layer

Layer 3 in the OSI model is called the Network layer. The network layer manages options pertaining to host and network addressing, managing sub-networks, and internetworking.

The network layer is responsible for routing packets from source to destination within or outside a subnet. Two different subnets may have different addressing schemes or incompatible addressing types. Similarly, two different subnets may operate on different protocols that are not compatible with each other. The network layer has the responsibility to route packets from source to destination, mapping between different addressing schemes and protocols.

Layer-3 Functionalities

Devices which work on the Network Layer mainly focus on routing. Routing may include various tasks aimed at achieving a single goal. These can be:

• Addressing devices and networks.
• Populating routing tables or static routes.
• Queuing incoming and outgoing data and then forwarding it according to quality-of-service constraints set for those packets.
• Internetworking between two different subnets.
• Delivering packets to the destination with best effort.
• Providing connection-oriented and connectionless mechanisms.
Network Layer Features

With its standard functionalities, Layer 3 can provide various features such as:

• Quality of service management
• Load balancing and link management
• Security
• Interrelation of different protocols and subnets with different schemas
• Different logical network designs over the same physical network design
• L3 VPNs and tunnels that provide end-to-end dedicated connectivity

The Internet Protocol is the most widely deployed network layer protocol; it enables end-to-end communication between devices over the internet. It comes in two flavors: IPv4, which has ruled the world for decades but is now running out of address space, and IPv6, which was created to replace IPv4 and to mitigate its limitations.



3.2 Network Layer Services
The main task of the network layer is to move packets from the source host to the destination host. It transports packets from sending to receiving hosts via the internet.
Network layer services are packetizing, routing and forwarding, and other services.
Packetizing
Encapsulating the payload in a network layer packet at the source and decapsulating the payload from the network layer packet at the destination is called packetizing.
Forwarding
Forwarding refers to the way a packet is delivered to the next node.
Routing
The network layer is responsible for finding the best route from the source to the destination; this is called routing.
Three important functions of the network layer:
• Path determination
• Switching
• Call setup
Network Layer Functions
Logical addressing: Every device that communicates over a network has associated with it a logical address, which is called an IP address.
Routing: Finding the best route from the source to the destination.
Datagram encapsulation: Attaching a header to the data is called encapsulation.
Fragmentation and reassembly: Dividing data into small units is called fragmentation, and recombining these small units into the original data is called reassembly.
Error handling and diagnostics: Identifying errors and correcting them.
Network Layer Design Issues
The design issues of the network layer are discussed in this section.
Store and Forward Packet Switching
Each packet is stored at a router until it has fully arrived so the checksum can be verified. The packet is then forwarded to the next router along the path until it reaches the destination host.



The above process or mechanism is called store and forward packet switching.
Services Provided to the Transport Layer
The network layer provides services to the transport layer at the network layer / transport layer interface. The service should be independent of the network topology.
Implementation of Connectionless Service
A connectionless network service is also known as a datagram service. A datagram is a self-contained message that contains sufficient information to allow it to be routed from source to destination.
Each packet is treated independently.
Implementation of Connection-oriented Service
A connection-oriented network service is also known as a virtual circuit. A virtual circuit is similar to the telephone system: a route, consisting of a logical connection, is first established between the two users.
The process is completed in three main phases:
• Establishment phase
• Data transfer phase
• Connection release phase

3.3 Packet Switching


Packet switching is a method of transferring data across a network in the form of packets. To transfer a file quickly and efficiently over the network and to minimize transmission latency, the data is broken into small pieces of variable length, called packets. At the destination, all these small parts (packets) belonging to the same file have to be reassembled. A packet consists of a payload and various control information. No pre-setup or reservation of resources is needed.
Packet switching uses the store-and-forward technique while switching the packets: each hop first stores the packet and then forwards it. This technique is beneficial because packets may get discarded at any hop for some reason. More than one path is possible between a pair of source and destination. Each packet contains the source and destination addresses, using which the packets travel independently through the network. In other words, packets belonging to the same file may or may not travel through the same path. If there is congestion on some path, packets are allowed to choose a different path over the existing network.
Packet-switched networks were designed to overcome the weaknesses of circuit-switched networks, since circuit-switched networks were not very effective for small messages. Packet switching is a technique used in computer networks to transmit data in the form of packets, which are small units of data transmitted independently across the network. Each packet contains a header, which includes information about the packet's source and destination, as well as the data payload.
One of the main advantages of packet switching is that it allows multiple packets to be transmitted simultaneously across the network, which makes more efficient use of network resources than circuit switching. However, packet switching can also introduce delays into the transmission process, which can impact the performance of network applications.

Here are some of the types of delays that can occur in packet switching:

1. Transmission delay: This is the time it takes to transmit a packet over a link.
It is affected by the size of the packet and the bandwidth of the link.
2. Propagation delay: This is the time it takes for a packet to travel from the
source to the destination. It is affected by the distance between the two
nodes and the speed of light.
3. Processing delay: This is the time it takes for a packet to be processed
by a node, such as a router or switch. It is affected by the processing
capabilities of the node and the complexity of the routing algorithm.
4. Queuing delay: This is the time a packet spends waiting in a queue before it
can be transmitted. It is affected by the number of packets in the queue and
the priority of the packets.
While packet switching can introduce delays in the transmission process, it is generally more efficient than circuit switching and can support a wider range of applications. To minimize delays, various techniques can be used, such as optimizing routing algorithms, increasing link bandwidth, and using quality of service (QoS) mechanisms to prioritize certain types of traffic.
Advantages of Packet Switching over Circuit Switching:
• More efficient in terms of bandwidth, since no circuit is reserved.
• Minimal transmission latency.
• More reliable, as the destination can detect a missing packet.
• More fault tolerant, because packets may follow a different path if any link is down, unlike circuit switching.
• Cost-effective and comparatively cheaper to implement.
Disadvantages of Packet Switching over Circuit Switching:
• Packet switching does not deliver packets in order, whereas circuit switching provides ordered delivery because all packets follow the same path.
• Since the packets are unordered, each packet needs a sequence number.
• Complexity is higher at each node because of the ability to follow multiple paths.
• Transmission delay is higher because of rerouting.
• Packet switching is beneficial only for small messages; for bursty data (large messages) circuit switching is better.
Modes of Packet Switching:



1. Connection-oriented Packet Switching (Virtual Circuit): Before starting the transmission, a logical path or virtual connection is established between sender and receiver using a signaling protocol, and all packets belonging to this flow follow this predefined route. A Virtual Circuit ID is provided by the switches/routers to uniquely identify the virtual connection. Data is divided into small units, and these units carry sequence numbers. Packets arrive in order at the destination. Overall, three phases take place: the setup, data transfer and tear-down phases.

All address information is transferred only during the setup phase. Once the route to a destination is discovered, an entry is added to the switching table of each intermediate node. During data transfer, the packet header (local header) may contain information such as length, timestamp, sequence number, etc.
Connection-oriented switching is very useful in switched WANs. Some popular protocols which use the Virtual Circuit Switching approach are X.25, Frame Relay, ATM, and MPLS (Multi-Protocol Label Switching).
2. Connectionless Packet Switching (Datagram): Unlike connection-oriented packet switching, in connectionless packet switching each packet contains all necessary addressing information such as source address, destination address, port numbers, etc. In datagram packet switching, each packet is treated independently. Packets belonging to one flow may take different routes because routing decisions are made dynamically, so packets may arrive at the destination out of order. There is no connection setup and teardown phase, unlike virtual circuits.
Packet delivery is not guaranteed in connectionless packet switching, so reliable delivery must be provided by the end systems using additional protocols.

A---R1---R2---B
A is the sender (source)
R1, R2 are two routers that store and forward data
B is the receiver (destination)
To send a packet from A to B there are delays, since this is a store-and-forward network.

Delays in packet switching:

1. Transmission Delay: Time required by a station to push the packet's bits onto the link.
2. Propagation Delay: Time taken by the bits to propagate through the link.
3. Queuing Delay: Time spent by the packet waiting in a router's queue before it can be processed or transmitted.
4. Processing Delay: Time taken by a node (router or destination) to process the packet.
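As a rough illustration of how these delays add up on the A---R1---R2---B path above, the following Python sketch (not part of the original notes; the link parameters are made-up values) computes the end-to-end delay of one packet in a store-and-forward network, ignoring queuing and processing delays.

```python
def store_and_forward_delay(packet_bits, links):
    """End-to-end delay of one packet over store-and-forward links.

    Each link is a (bandwidth_bps, length_m, prop_speed_mps) tuple.
    Queuing and processing delays are ignored for simplicity.
    """
    total = 0.0
    for bandwidth_bps, length_m, prop_speed_mps in links:
        transmission = packet_bits / bandwidth_bps   # time to push all bits onto the link
        propagation = length_m / prop_speed_mps      # time for a bit to cross the link
        total += transmission + propagation          # the whole packet is stored before forwarding
    return total

# Hypothetical example: a 12,000-bit packet over three 1 Mbps links (A-R1, R1-R2, R2-B),
# each 100 km long with a propagation speed of 2 x 10^8 m/s.
links = [(1_000_000, 100_000, 2e8)] * 3
print(f"{store_and_forward_delay(12_000, links) * 1000:.2f} ms")  # about 37.50 ms
```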
3.4 Network Layer Performance

The performance of a network refers to the measure of service quality of a network as perceived by the user. There are different ways to measure the performance of a network, depending upon the nature and design of the network. The performance of a network depends on both the quality and the capacity of the network.

Parameters for Measuring Network Performance
• Bandwidth
• Latency (Delay)
• Bandwidth-Delay Product
• Throughput
• Jitter

BANDWIDTH

Bandwidth determines how rapidly a server is able to deliver the requested information. Bandwidth is defined as the amount of data or information that can be transmitted in a fixed amount of time. Bandwidth is measured in bits per second (bps) or bytes per second. In the case of analog signals, bandwidth is measured in cycles per second, or Hertz (Hz). Loosely, "bandwidth" means "capacity" and "speed" means "transfer rate".

LATENCY

In a network, during the process of data communication, latency (also known as delay) is defined as the total time taken for a complete message to arrive at the destination, starting from the time the first bit of the message is sent out from the source and ending with the time the last bit of the message is delivered at the destination. Network connections with small delays are called "low-latency networks", and network connections that suffer from long delays are known as "high-latency networks".
High latency leads to bottlenecks in network communication. Latency is also loosely referred to as the ping rate and is measured in milliseconds (ms).

Propagation Time
It is the time required for a bit to travel from the source to the destination.
Propagation time can be calculated as the ratio between the link length (distance) and
the propagation speed over the communicating medium. For example, for an electric
signal, propagation time is the time taken for the signal to travel through a wire.

Transmission Time
Transmission time is the time needed to push an entire message onto the transmission line. It depends on the size of the message and the bandwidth of the channel; it also accounts for overhead such as the training/preamble signals that a sender usually puts at the front of a packet to help the receiver synchronize its clock.
Transmission time = Message size / Bandwidth
Queuing Time
Queuing time is the time a packet has to sit in a router's queue. Quite frequently the outgoing link is busy, so the packet cannot be transmitted immediately. Queuing time is not a fixed factor; it changes with the load on the network. The more the traffic, the more likely a packet is stuck in the queue, sitting in memory and waiting.
Processing Delay
Processing delay is the time it takes the router to figure out where to send the packet. As soon as the router works this out, it queues the packet for transmission. This cost depends mainly on the complexity of the protocol: the router must examine enough of the packet to decide which queue to put it in.
Bandwidth-Delay Product
Bandwidth and delay are two performance measurements of a link. However, what is often significant in data communications is the product of the two, the bandwidth-delay product, which gives the number of bits that can fill the link at any instant.
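As a small illustration (not from the original notes; the link values are hypothetical), the sketch below computes the bandwidth-delay product, i.e. how many bits can be in flight on a link at once.

```python
def bandwidth_delay_product(bandwidth_bps: float, delay_s: float) -> float:
    """Number of bits that can be 'in flight' on the link at any instant."""
    return bandwidth_bps * delay_s

# Hypothetical link: 1 Mbps bandwidth, 5 ms one-way delay.
bits_in_flight = bandwidth_delay_product(1_000_000, 0.005)
print(bits_in_flight)  # 5000.0 bits can fill the link at once
```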

3.5 Logical Addressing - IPv4 Addresses

An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a device (for example, a computer or a router) to the Internet. IPv4 addresses are unique.
Address Space
A protocol such as IPv4 that defines addresses has an address space. An address space is the total number of addresses used by the protocol. If a protocol uses N bits to define an address, the address space is 2^N, because each bit can have two different values (0 or 1) and N bits can therefore take 2^N different values.
IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than 4 billion).
Notations
There are two common notations for an IPv4 address: binary notation and dotted-decimal notation.
Binary Notation
In binary notation, the IPv4 address is displayed as 32 bits. The following is an example of an IPv4 address in binary notation:
01110101 10010101 00011101 00000010
Dotted-Decimal Notation
To make the IPv4 address more compact and easier to read, Internet addresses are usually written in decimal form with a decimal point (dot) separating the bytes. The following is the dotted-decimal notation of the above address:
117.149.29.2
Figure 3.4 shows an IPv4 address in both binary and dotted-decimal notation. Note that because each byte (octet) is 8 bits, each number in dotted-decimal notation is a value ranging from 0 to 255.



Fig 3.4 Dotted decimal notation and binary notation for an IPv4 address
Example 3.1
Change the following IPv4 addresses from binary notation to dotted-decimal notation.
a) 10000001 00001011 00001011 11101111
b) 11000001 10000011 00011011 11111111
Solution
We replace each group of 8 bits with its equivalent decimal number and add dots for separation.
a) 129.11.11.239
b) 193.131.27.255
Example 3.2
Change the following IPv4 addresses from dotted-decimal notation to binary notation.
a) 111.56.45.78
b) 221.34.7.82
Solution
We replace each decimal number with its binary equivalent.
a) 01101111 00111000 00101101 01001110
b) 11011101 00100010 00000111 01010010
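The conversions in Examples 3.1 and 3.2 can be checked with a small Python sketch like the following (illustrative only, not part of the original text).

```python
def binary_to_dotted(binary: str) -> str:
    """Convert '10000001 00001011 ...' (four 8-bit groups) to dotted-decimal."""
    return ".".join(str(int(octet, 2)) for octet in binary.split())

def dotted_to_binary(dotted: str) -> str:
    """Convert '129.11.11.239' to four space-separated 8-bit binary groups."""
    return " ".join(format(int(octet), "08b") for octet in dotted.split("."))

print(binary_to_dotted("10000001 00001011 00001011 11101111"))  # 129.11.11.239
print(dotted_to_binary("111.56.45.78"))  # 01101111 00111000 00101101 01001110
```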
Classful Addressing
In classful addressing, the address space is divided into five classes: A, B, C, D, and E.
If the address is given in binary notation, the first few bits can immediately tell us the class of the address. If the address is given in dotted-decimal notation, the first byte defines the class. Both methods are shown in Figure 3.5.



Fig 3.5 Finding the classes in binary and dotted decimal notation
Example 3.4
Find the class of each address.
a) 00000001 00001011 00001011 11101111
b) 11000001 10000011 00011011 11111111
c) 14.23.120.8
d) 252.5.15.111

Solution
a) The first bit is 0. This is a class A address.
b) The first 2 bits are 1; the third bit is 0. This is a class C address.
c) The first byte is 14 (between 0 and 127); the class is A.
d) The first byte is 252 (between 240 and 255); the class is E.
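A quick way to verify these classifications is with a short Python sketch (illustrative, not from the original notes) that classifies an IPv4 address by its first byte.

```python
def ipv4_class(address: str) -> str:
    """Return the classful category (A-E) of a dotted-decimal IPv4 address."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

for addr in ["14.23.120.8", "193.131.27.255", "252.5.15.111"]:
    print(addr, "-> class", ipv4_class(addr))
```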
Classes and Blocks
One problem with classful addressing is that each class is divided into a fixed number of blocks, with each block having a fixed size, as shown in Table 3.1.

Netid and Hostid

In classful addressing, an IP address in class A, B, or C is divided into a netid and a hostid. Note that this concept does not apply to classes D and E. In class A, one byte defines the netid and three bytes define the hostid. In class B, two bytes define the netid and two bytes define the hostid. In class C, three bytes define the netid and one byte defines the hostid.
Subnetting
If an organization was granted a large block in class A or B, it could divide the addresses into several contiguous groups and assign each group to smaller networks (called subnets).
Classless Addressing
To overcome address depletion and give more organizations access to the Internet, classless addressing was designed and implemented. In this scheme, there are no classes, but the addresses are still granted in blocks.
Mask
A mask is a 32-bit number in which the n leftmost bits are 1s and the 32 - n rightmost bits are 0s.
First Address: The first address in the block can be found by setting the 32 - n rightmost bits in the binary notation of the address to 0s.
Example 3.6
A block of addresses is granted to a small organization. We know that one of the addresses is 205.16.37.39/28. What is the first address in the block?
Solution
The binary representation of the given address is 11001101 00010000 00100101 00100111. If we set the 32 - 28 rightmost bits to 0, we get 11001101 00010000 00100101 00100000, or 205.16.37.32.
Last Address: The last address in the block can be found by setting the 32 - n rightmost bits in the binary notation of the address to 1s.
Example 3.7
Find the last address for the block in Example 3.6.
Solution
The binary representation of the given address is 11001101 00010000 00100101 00100111. If we set the 32 - 28 rightmost bits to 1, we get 11001101 00010000 00100101 00101111, or 205.16.37.47.
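Examples 3.6 and 3.7 can be reproduced with Python's standard ipaddress module, as in this short illustrative sketch.

```python
import ipaddress

# The /28 block containing 205.16.37.39 (Examples 3.6 and 3.7).
block = ipaddress.ip_network("205.16.37.39/28", strict=False)

print("First address:", block.network_address)      # 205.16.37.32
print("Last address:", block.broadcast_address)      # 205.16.37.47
print("Number of addresses:", block.num_addresses)   # 16
```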
Network Addresses
A very important concept in IP addressing is the network address.
Hierarchy
IP addresses, like other addresses or identifiers we encounter these days, have levels of hierarchy. For example, a telephone number in North America has three levels of hierarchy: the leftmost three digits define the area code, the next three digits define the exchange, and the last four digits define the connection of the local loop to the central office. Figure 3.7 shows the structure of a hierarchical telephone number.



Fig 3.7: Hierarchy in a telephone network in North America
Two-Level Hierarchy: No Subnetting
Figure 3.8 shows the hierarchical structure of an IPv4 address.

Fig 3.8: Two levels of hierarchy in an IPv4 address
The prefix is common to all addresses in the network; the suffix identifies an individual device within it. Note that the subnet prefix length can differ for the subnets, as shown in Figure 3.9.

Fig 3.9: Three levels of hierarchy in an IPv4 address


Subnetting a Network
If an organization is large or its computers are geographically dispersed, it makes good sense to divide the network into smaller ones connected together by routers. The benefits of doing things this way include:
• Reduced network traffic
• Optimized network performance
• Simplified network management
• Ability to span large geographical distances
Subnet mask code
1 = positions representing network or subnet addresses
0 = positions representing the host address
Subnet mask format
11111111.11111111.11111111.00000000
(Network address positions | Subnet positions | Host positions)

The subnet mask can also be denoted using the decimal equivalents of the binary patterns. The default subnet masks for the different classes of networks are shown in Table 3.3.1.

Class A: Net.Node.Node.Node, default subnet mask 255.0.0.0
Class B: Net.Net.Node.Node, default subnet mask 255.255.0.0
Class C: Net.Net.Net.Node, default subnet mask 255.255.255.0

Table 3.3.1 Default subnet masks for the different address classes

Masking


How many subnets?
The number of subnets is calculated as follows:
Number of subnets = 2^x
where x is the number of masked bits (the 1s).
For example, for 11100000 the number of 1s gives us 2^3 subnets, so there are 8 subnets.
How many hosts per subnet?
Number of hosts per subnet = 2^y - 2
where y is the number of unmasked bits (the 0s).
For example, for 11100000 the number of 0s gives us 2^5 - 2 hosts, so there are 30 hosts per subnet. You need to subtract 2 for the subnet address and the broadcast address.
What are the valid subnets?
Valid subnets: 256 - subnet mask = block size. An example would be 256 - 224 = 32; the block size of a 224 mask is always 32.
Start counting at zero in blocks of 32 until you reach the subnet mask value, and these are your subnets: 0, 32, 64, 96, 128, 160, 192, 224.
What is the broadcast address for each subnet?
If the subnets are 0, 32, 64, 96, 128, 160, 192, 224, the broadcast address is always the number right before the next subnet. For example, subnet 0 has a broadcast address of 31 because the next subnet is 32. Subnet 32 has a broadcast address of 63 because the next subnet is 64.
What are the valid hosts?
Valid hosts are the numbers between the subnet address and the broadcast address, omitting the all-0s and all-1s host patterns. For example, if 32 is the subnet number and 63 is the broadcast address, then 33 through 62 is the valid host range.
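The arithmetic above can be bundled into a small Python sketch (illustrative only, not from the original notes) that, given the "interesting" octet of a subnet mask, prints the number of subnets, hosts per subnet and the block size.

```python
def subnet_summary(mask_octet: int):
    """Summarize subnetting for the 'interesting' octet of a mask, e.g. 224."""
    bits = format(mask_octet, "08b")
    x = bits.count("1")                  # masked (subnet) bits
    y = bits.count("0")                  # unmasked (host) bits
    block_size = 256 - mask_octet        # distance between consecutive subnets
    return {
        "subnets": 2 ** x,
        "hosts_per_subnet": 2 ** y - 2,  # minus network and broadcast addresses
        "block_size": block_size,
        "subnet_values": list(range(0, 256, block_size)),
    }

print(subnet_summary(224))
# {'subnets': 8, 'hosts_per_subnet': 30, 'block_size': 32,
#  'subnet_values': [0, 32, 64, 96, 128, 160, 192, 224]}
```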
Example 3.3.2 What is the sub-network address if the destination address is 200.45.34.56 and the subnet mask is 255.255.240.0?
Solution: Using the AND operation, we can find the sub-network address.
Convert the given destination address into binary format:
200.45.34.56 => 11001000 00101101 00100010 00111000
Convert the given subnet mask into binary format:
255.255.240.0 => 11111111 11111111 11110000 00000000
Do the AND operation using the destination address and the subnet mask:
    11001000 00101101 00100010 00111000
AND 11111111 11111111 11110000 00000000
  = 11001000 00101101 00100000 00000000
The sub-network address is 200.45.32.0.
Example 3.3.2 For the network address 192.168.10.0 and subnet mask 255.255.255.224, calculate:
a) the number of subnets and the number of hosts
b) the valid subnets
Solution: The given network address 192.168.10.0 is a class C address. The subnet mask is 255.255.255.224, so three bits are borrowed for the subnet.
Number of subnets and number of hosts
255.255.255.224 converted into binary => 11111111 11111111 11111111 11100000
Number of subnets = 2^x = 2^3 = 8, so there are 8 subnets.
Number of hosts per subnet = 2^y - 2 = 2^5 - 2 = 30.
Valid subnets
Valid subnets: 256 - subnet mask = block size. Here 256 - 224 = 32; the block size of a 224 mask is always 32.
Start counting at zero in blocks of 32 until you reach the subnet mask value, and these are your subnets: 0, 32, 64, 96, 128, 160, 192, 224.
Example 3.3.3 Find the sub-network address for the following:
Sr. No.  IP address     Mask
a)       140.11.36.22   255.255.255.0
b)       120.14.22.16   255.255.128.0
Solution
a) The mask values (255.255.255.0) fall on byte boundaries, so the sub-network address is obtained by keeping the first three bytes and zeroing the last byte:
140.11.36.22 IP address
255.255.255.0 Mask
140.11.36.0 Sub-network address
b) The third byte of the mask (128) is not at a byte boundary, so a bit-wise AND is needed on byte 3:
22 binary representation 00010110
128 binary representation 10000000
AND result 00000000
120.14.22.16 IP address
255.255.128.0 Mask
120.14.0.0 Sub-network address
Thus the sub-network address for this case is 120.14.0.0.
Example 3.3.4 Find the sub-network address for the following:
Sr. No.  IP address      Mask
a)       141.181.14.16   255.255.224.0
b)       200.34.22.156   255.255.255.240
c)       125.35.12.57    255.255.0.0
Solution
a) 141.181.14.16 IP address, 255.255.224.0 Mask, 141.181.0.0 Sub-network address
b) 200.34.22.156 IP address, 255.255.255.240 Mask, 200.34.22.144 Sub-network address
c) 125.35.12.57 IP address, 255.255.0.0 Mask, 125.35.0.0 Sub-network address
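The bit-wise AND used in Examples 3.3.2 through 3.3.4 can be written directly in Python; the sketch below is illustrative and not part of the original notes.

```python
def subnetwork_address(ip: str, mask: str) -> str:
    """Compute the sub-network address by ANDing each octet of IP and mask."""
    ip_octets = [int(o) for o in ip.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

print(subnetwork_address("200.45.34.56", "255.255.240.0"))    # 200.45.32.0
print(subnetwork_address("120.14.22.16", "255.255.128.0"))    # 120.14.0.0
print(subnetwork_address("200.34.22.156", "255.255.255.240")) # 200.34.22.144
```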
Example 3.3.5 Find the class of the following addresses.
a) 1.22.200.10 b) 241.240.200.2 c) 227.3.6.8 d) 180.170.0.2
Solution:
a) 1.22.200.10 is a class A IP address.
b) 241.240.200.2 is a class E IP address.
c) 227.3.6.8 is a class D IP address.
d) 180.170.0.2 is a class B IP address.
Example 3.3.6 Find the netid and hostid for the following.
a) 19.34.21.5 b) 190.13.70.10 c) 246.3.4.10 d) 201.2.4.2
Solution
a) netid => 19, hostid => 34.21.5
b) netid => 190.13, hostid => 70.10
c) no netid and no hostid, because 246.3.4.10 is a class E address
d) netid => 201.2.4, hostid => 2
Example 3.3.7: Consider sending a 3500-byte datagram that has arrived at a router R1 and needs to be sent over a link with an MTU of 1000 bytes to R2. It then has to traverse a link with an MTU of 600 bytes. Let the identification number of the original datagram be 465.
How many fragments are delivered at the destination? Show the parameters associated with each of these fragments.
Solution: The original datagram carries 3500 - 20 = 3480 bytes of data (there is a 20-byte IP header). On the 1000-byte-MTU link, at most 980 bytes of data fit per fragment; since the data in every fragment except the last must be a multiple of 8 bytes, R1 uses 976 bytes per fragment, producing 4 fragments with data sizes 976, 976, 976 and 552 bytes. On the 600-byte-MTU link, at most 576 bytes of data fit per fragment, so each 976-byte fragment is split again into 576 + 400 bytes, while the 552-byte fragment passes unchanged. Hence 7 fragments are delivered at the destination. Every fragment carries identification number 465; the fragment offsets (in 8-byte units) are 0, 72, 122, 194, 244, 316 and 366; all fragments have the more-fragments flag set to 1 except the last one (offset 366), which has flag = 0.
Example 3.10
An ISP is granted a block of addresses starting with 190.100.0.0/16 (65,536 addresses). The ISP needs to distribute these addresses to three groups of customers as follows:
The first group has 64 customers; each needs 256 addresses.
The second group has 128 customers; each needs 128 addresses.
The third group has 128 customers; each needs 64 addresses.
Design the sub-blocks and find out how many addresses are still available after these allocations.
Solution
Figure 3.11 shows the situation.

Fig 3.11 An example of address allocation and distribution by an ISP

Group 1
For this group, each customer needs 256 addresses. This means that 8 (log2 256) bits are needed to define each host. The prefix length is then 32 - 8 = 24. The addresses are:
1st customer: 190.100.0.0/24 to 190.100.0.255/24
2nd customer: 190.100.1.0/24 to 190.100.1.255/24
64th customer: 190.100.63.0/24 to 190.100.63.255/24
Total = 64 x 256 = 16,384
Group 2
For this group, each customer needs 128 addresses. This means that 7 (log2 128) bits are needed to define each host. The prefix length is then 32 - 7 = 25. The addresses are:
1st customer: 190.100.64.0/25 to 190.100.64.127/25
2nd customer: 190.100.64.128/25 to 190.100.64.255/25
128th customer: 190.100.127.128/25 to 190.100.127.255/25
Total = 128 x 128 = 16,384
Group 3
For this group, each customer needs 64 addresses. This means that 6 (log2 64) bits are needed to define each host. The prefix length is then 32 - 6 = 26. The addresses are:
1st customer: 190.100.128.0/26 to 190.100.128.63/26
2nd customer: 190.100.128.64/26 to 190.100.128.127/26
128th customer: 190.100.159.192/26 to 190.100.159.255/26
Total = 128 x 64 = 8,192
Number of addresses granted to the ISP: 65,536
Number of addresses allocated by the ISP: 40,960
Number of addresses still available: 24,576
Example 3.3.8 Consider sending a 2400-byte datagram onto a link that has an MTU of 700 bytes. Suppose the original datagram is stamped with the identification number 422. How many fragments are generated? What are the values of the fragmentation-related fields in the generated IP datagrams?
Solution: The maximum size of the data field in each fragment is 680 bytes (because there is a 20-byte IP header).
Thus the number of required fragments = (2400 - 20) / 680 = 4 (rounded up).
Each fragment will have identification number 422. Each fragment except the last one will be of size 700 bytes (including the IP header).
The last datagram will be of size 360 bytes (including the IP header). The offsets of the 4 fragments will be 0, 85, 170, 255.
Each of the first 3 fragments will have flag = 1; the last fragment will have flag = 0.
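The fragmentation arithmetic used in Examples 3.3.7 and 3.3.8 can be reproduced with a short illustrative Python sketch (not from the original notes), assuming a 20-byte IP header.

```python
def fragment(total_len: int, mtu: int, ident: int, header: int = 20):
    """Split a datagram into fragments for a link with the given MTU.

    Returns a list of (identification, offset_in_8_byte_units, more_fragments,
    fragment_total_length) tuples.
    """
    data_len = total_len - header
    max_data = (mtu - header) // 8 * 8   # fragment data must be a multiple of 8
    fragments, offset = [], 0
    while data_len > 0:
        chunk = min(max_data, data_len)
        data_len -= chunk
        more = 1 if data_len > 0 else 0
        fragments.append((ident, offset // 8, more, chunk + header))
        offset += chunk
    return fragments

for frag in fragment(2400, 700, 422):
    print(frag)
# (422, 0, 1, 700), (422, 85, 1, 700), (422, 170, 1, 700), (422, 255, 0, 360)
```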
Example 3.3.9 Suppose all the interfaces in each of three subnets are required to have the prefix 223.1.17/24. Also suppose that subnet 1 is required to support at least 60 interfaces, subnet 2 is to support at least 90 interfaces, and subnet 3 is to support at least 12 interfaces. Provide three network addresses that satisfy these constraints.
Solution: The network address cannot be used for an interface (network prefix + all zeros). The broadcast address cannot be used for an interface (network prefix + all ones).
Subnet 2 (90 interfaces)
2^n - 2 >= 90
Notice that we subtract 2 from the total number of available IP addresses because 2 addresses are reserved for the network and broadcast addresses.
2^n >= 92, so n = 7
Number of bits allocated to the host part: n = 7
Number of bits allocated to the network part = prefix length = 32 - n = 32 - 7 = 25
The network address of the first subnet is always the address of the given address space.
Network address of the first subnet = 223.1.17.0/25
To obtain the broadcast address of a subnet, we keep the network part of the subnet's network address as it is and convert all bits in its host part to 1s.
Broadcast address of the first subnet = 223.1.17.01111111/25 = 223.1.17.127/25
Subnet 1 (60 interfaces)
2^n - 2 >= 60
Again we subtract 2 because 2 addresses are reserved for the network and broadcast addresses.
2^n >= 62, so n = 6
Number of bits allocated to the host part: n = 6
Number of bits allocated to the network part = prefix length = 32 - n = 32 - 6 = 26
The network address of any subnet (other than the first) is obtained by adding one to the broadcast address of its preceding subnet.
Network address of the second subnet = 223.1.17.128/26
Broadcast address of the second subnet = 223.1.17.10111111/26 = 223.1.17.191/26
Subnet 3 (12 interfaces)
2^n - 2 >= 12
Again we subtract 2 because 2 addresses are reserved for the network and broadcast addresses.
2^n >= 14, so n = 4
Number of bits allocated to the host part: n = 4
Number of bits allocated to the network part = prefix length = 32 - n = 32 - 4 = 28
Network address of the third subnet = 223.1.17.192/28
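The host-bit calculation used in Example 3.3.9 can be captured by a tiny Python sketch (illustrative only, not part of the original notes) that computes the smallest prefix length able to hold a given number of interfaces.

```python
import math

def prefix_for_hosts(required_hosts: int) -> int:
    """Smallest IPv4 prefix length whose subnet can hold the required interfaces.

    Two addresses per subnet are reserved (network and broadcast).
    """
    host_bits = math.ceil(math.log2(required_hosts + 2))
    return 32 - host_bits

for hosts in (90, 60, 12):
    print(hosts, "interfaces -> /%d" % prefix_for_hosts(hosts))
# 90 -> /25, 60 -> /26, 12 -> /28
```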
3.6 Network Layer Protocols
3.6.1 IPv4:
The Internet Protocol version 4 (IPv4) is the delivery mechanism used by the TCP/IP protocols. IPv4 is an unreliable and connectionless datagram protocol that provides a best-effort delivery service.

1. Datagram
Packets in the IPv4 layer are called datagrams. A datagram is a variable-length packet consisting of two parts: header and data. The header is 20 to 60 bytes in length and contains information essential to routing and delivery. It is customary in TCP/IP to show the header in 4-byte sections.
Version (VER): This 4-bit field defines the version of the IPv4 protocol. The version is 4.
Header length (HLEN): This 4-bit field defines the total length of the datagram header in 4-byte words.
Services: This 8-bit field defines the type of service. With only 1 bit set at a time, we can have five different types of services.
Total length: This 16-bit field defines the total length (header plus data) of the datagram.
Identification: This field identifies a datagram so that all of its fragments can be recognized and reassembled; it is used in fragmentation.
Flags: This field is used in fragmentation.
Fragmentation offset: This field shows the relative position of the fragment's data in the original datagram and is used in reassembly.
Time to live: A datagram has a limited lifetime in its travel through an internet.
Protocol: This 8-bit field defines the higher-level protocol that uses the services of the IPv4 layer. An IPv4 datagram can encapsulate data from several higher-level protocols such as TCP, UDP, ICMP, and IGMP.
Checksum: Used for error detection over the header (the checksum concept and its calculation are described below).
Source address: This 32-bit field defines the IPv4 address of the source.
Destination address: This 32-bit field defines the IPv4 address of the destination.
Example 3.12
An IPv4 packet has arrived with the first 8 bits as shown: 01000010
The receiver discards the packet. Why?
Solution
There is an error in this packet. The 4 leftmost bits (0100) show the version, which is correct. The next 4 bits (0010) show an invalid header length (2 x 4 = 8). The minimum number of bytes in the header must be 20. The packet has been corrupted in transmission.
Example 3.13
In an IPv4 packet, the value of HLEN is 1000 in binary. How many bytes of options are being carried by this packet?
Solution
The HLEN value is 8, which means the total number of bytes in the header is 8 x 4, or 32 bytes. The first 20 bytes are the base header; the next 12 bytes are the options.
Fragmentation
A datagram can travel through different networks. Each router decapsulates the IPv4 datagram from the frame it receives, processes it, and then encapsulates it in another frame.
Checksum
First, the value of the checksum field is set to 0. Then the entire header is divided into 16-bit sections and added together. The result (sum) is complemented and inserted into the checksum field.
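The header checksum procedure just described can be sketched in Python as follows (an illustrative implementation over a list of 16-bit header words, not taken from the original notes; the sample header is hypothetical).

```python
def ipv4_header_checksum(words):
    """Compute the IPv4 header checksum over a list of 16-bit words.

    The checksum field itself must be set to 0 in 'words' before calling.
    """
    total = sum(words)
    while total >> 16:                       # fold any carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF                 # one's complement of the sum

# Hypothetical 20-byte header expressed as ten 16-bit words (checksum word = 0).
header_words = [0x4500, 0x0030, 0x0000, 0x0000, 0x4006,
                0x0000, 0xC0A8, 0x0001, 0xC0A8, 0x00C7]
print(hex(ipv4_header_checksum(header_words)))
```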
Options
The header of the IPv4 datagram is made of two parts: a fixed part and a variable part. The fixed part is 20 bytes long. The variable part comprises the options, which can be a maximum of 40 bytes. Options, as the name implies, are not required.

No Operation
A no-operation option is a 1-byte option used as a filler between options.
End of Option
An end-of-option option is a 1-byte option used for padding at the end of the option field.
Record Route
A record-route option is used to record the Internet routers that handle the datagram.
Strict Source Route
A strict-source-route option is used by the source to predetermine a route for the datagram as it travels through the Internet.
Loose Source Route
A loose-source-route option is similar, but while each router in the list must be visited, the datagram can visit other routers as well.
Timestamp
A timestamp option is used to record the time of datagram processing by a router.
3.7 IPv6:
The next-generation IP, or IPv6, has some advantages over IPv4 that can be summarized as follows:
Larger address space
Better header format
New options
Allowance for extension
Support for resource allocation
Support for more security
IPv6 addresses
A new notation has been devised for writing 16-byte addresses. They are written as eight groups of four hexadecimal digits with colons between the groups, like this:
8000 : 0000 : 0000 : 0000 : 0123 : 4567 : 89AB : CDEF
Optimization
Leading zeros within a group can be omitted, so 0123 can be written as 123. One or more groups of 16 zero bits can be replaced by a pair of colons. The address now becomes:
8000 : : 123 : 4567 : 89AB : CDEF
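Python's standard ipaddress module applies exactly these abbreviation rules; the following short sketch (illustrative only) compresses and expands the address used above.

```python
import ipaddress

addr = ipaddress.IPv6Address("8000:0000:0000:0000:0123:4567:89AB:CDEF")
print(addr.compressed)  # 8000::123:4567:89ab:cdef
print(addr.exploded)    # 8000:0000:0000:0000:0123:4567:89ab:cdef
```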
3.7.1 Address Types
IPv6 allows three types of addresses:
1. Unicast 2. Anycast 3. Multicast
Unicast: An identifier for a single interface. A packet sent to a unicast address is delivered to the interface identified by that address.
Anycast: An identifier for a set of interfaces. A packet sent to an anycast address is delivered to one of the interfaces identified by the address.
Multicast: An identifier for a set of interfaces. A packet sent to a multicast address is delivered to all interfaces identified by that address.
Packet Format
Each packet is composed of a mandatory base header followed by the payload. The payload consists of two parts: optional extension headers and data from an upper layer. The base header occupies 40 bytes, whereas the extension headers and data from the upper layer can contain up to 65,535 bytes of information.

a. Base Header
Its fields are as follows:
Version: This 4-bit field defines the version number of the IP. For IPv6, the value is 6.
Priority: The 4-bit priority field defines the priority of the packet.
Flow label: The flow label is a 3-byte (24-bit) field used to control the flow of data.
Payload length: The 2-byte payload length field defines the length of the IP datagram excluding the base header.
Next header: The next header is an 8-bit field defining the header that follows the base header in the datagram.
Hop limit: This 8-bit field indicates the lifetime of the packet.
Source address: The source address field is a 16-byte (128-bit) Internet address that identifies the original source of the datagram.
Destination address: The destination address field is a 16-byte (128-bit) Internet address that usually identifies the final destination of the datagram.
Comparison between IPv4 and IPv6
Sr. No.  IPv4                                      IPv6
1.       Address size is 32 bits                   Address size is 128 bits
2.       Cannot support auto-configuration         Supports auto-configuration
3.       Cannot support real-time applications     Supports real-time applications
4.       No security at the network layer          Provides security at the network layer
5.       Throughput and delay are higher           Throughput and delay are lower
Transition from IPv4 to IPv6
Three strategies have been devised by the IETF to help the transition:
1. Dual stack 2. Tunneling 3. Header translation
Dual Stack
All hosts must run IPv4 and IPv6 simultaneously until the entire Internet uses IPv6. Fig. 3.27 shows the dual stack.

Fig. 3.27 Dual stack
Tunneling
Tunneling is used when two computers using IPv6 want to communicate with each other and the packet must pass through a region that uses IPv4. The IPv6 packet is encapsulated in an IPv4 packet when it enters the region, and it leaves its capsule when it exits the region. Fig. 3.28 shows tunneling.

Fig. 3.28 Tunneling
Header Translation
Header translation is used when some of the systems use IPv4: the sender wants to use IPv6, but the receiver does not understand IPv6. Fig. 3.29 shows header translation.

Fig. 3.29 Header translation

The header format must be totally changed through header translation: the header of the IPv6 packet is converted to an IPv4 header.
3.7 Internet Control Message Protocol
The Internet Control Message Protocol (ICMP) is the protocol that handles error and other control messages.
ICMP messages are encapsulated in IP packets. Fig. 3.30 shows ICMP encapsulation.

Fig. 3.30
The value of the protocol field in the IP datagram is 1, to indicate that the IP data is an ICMP message.
3.7.1 Message Types
All ICMP messages fall into the following classes:
1. Error reporting 2. Query
The error-reporting messages report problems that a router or a host may encounter when it processes an IP packet.
The query messages, which occur in pairs, help a host or a network manager get specific information from a router or another host.
The main functions associated with ICMP are as follows:
1. Error reporting
2. Reachability testing
3. Congestion control
4. Route change notification
5. Performance measuring
6. Subnet addressing
3.7.2 Message Format
Fig. 3.31 shows the basic error message format. An ICMP message is encapsulated in the data field of an IP packet. An ICMP header is 8 bytes long, followed by a variable-size data section.

Fig. 3.31 Error message format

Type: This 8-bit field identifies the type of the message.
Code: The code field is 8 bits in size. It provides further information or parameters for the message type.
Checksum: This 16-bit field is used to detect errors in the ICMP message.
IP header plus original datagram: This field can be used for diagnostic purposes by matching the information in the ICMP message with the original data in the IP packet.
3.7.3 Error Reporting
ICMP does not correct errors, it simply reports them. ICMP handles five types of errors.
3.7.3.1 Destination unreachable
The ICMP destination-unreachable message is sent by a router in response to a packet that it cannot forward because the destination (or next hop) is unreachable or a service is unavailable. Fig. 3.32 shows the destination-unreachable format.
Type : 3    Code : 0 to 15    Checksum
Unused (All 0s)
Part of the received IP datagram, including the IP header plus the first 8 bytes of datagram data
Fig. 3.32
Code field: The code field is used by the different message formats to indicate specific error conditions. For destination unreachable, the code field is:
0 = Net unreachable
1 = Host unreachable
2 = Protocol unreachable
3 = Port unreachable
4 = Fragmentation needed and DF set
5 = Source route failed
Codes 0, 1, 4 and 5 may be sent from a router. Codes 2 and 3 may be sent from a host.
Checksum: The checksum is the 16-bit one's complement of the one's complement sum of the ICMP message, starting with the ICMP type.
Unused: These 32 bits are not used and are ignored.
3.7.3.2 Source quench
ICMP source-quench messages report congestion to the original source. A source-quench message is a request for the source to reduce its current rate of datagram transmission. Fig. 3.34 shows the source-quench format.
Type : 4    Code : 0    Checksum
Unused (All 0s)
Part of the received IP datagram, including the IP header plus the first 8 bytes of datagram data
Fig. 3.34
Type field: The type field indicates the type of ICMP message. A source-quench message has the number 4 in the type field.
Code field: The code field is used by the different message formats to indicate specific error conditions. For source quench, the code field is always 0.
Checksum: The checksum is the 16-bit one's complement of the one's complement sum of the ICMP message, starting with the ICMP type.
Unused: These 32 bits are not used and are ignored.
3.7.3.3 Time exceeded message
Fig. 3.35 shows the time-exceeded message format.
Type : 11    Code : 0 or 1    Checksum
Unused (All 0s)
Part of the received IP datagram, including the IP header plus the first 8 bytes of datagram data
Fig. 3.35
Type field: The type field indicates the type of ICMP message. A time-exceeded message has the number 11 in the type field.
Code field: The code field is used by the different message formats to indicate specific error conditions. For time exceeded, the code field is:
0 = Time to live exceeded in transit
1 = Fragment reassembly time exceeded
Checksum: The checksum is the 16-bit one's complement of the one's complement sum of the ICMP message, starting with the ICMP type.
Unused: These 32 bits are not used and are ignored.
3.7.3.4 Parameter problem
The parameter-problem message identifies the octet of the original datagram's header where the error was detected. Fig. 3.36 shows the parameter-problem message format.
Type : 12    Code : 0 or 1    Checksum
Pointer    Unused (All 0s)
Part of the received IP datagram, including the IP header plus the first 8 bytes of datagram data
Fig. 3.36
Type field: The type field indicates the type of ICMP message. A parameter-problem message has the number 12 in the type field.
Code field: The code field is used by the different message formats to indicate specific error conditions. For the parameter-problem message, the code field is 0 when the Pointer field indicates the error.
Checksum: The checksum is the 16-bit one's complement of the one's complement sum of the ICMP message, starting with the ICMP type.
Pointer: The pointer identifies the octet of the original datagram's header where the error was detected (it may be in the middle of an option).
Unused: These 24 bits are not used and are ignored.
3.7.3.5 Redirection:
Fig. 3.38 shows the redirection message format.
Type : 5    Code : 0 to 3    Checksum
IP address of the target router
Part of the received IP datagram, including the IP header plus the first 8 bytes of datagram data
Fig. 3.38
Type field: The type field indicates the type of ICMP message. A redirect message has the number 5 in the type field.
Code field: The code field is used by the different message formats to indicate specific error conditions. For the redirect message, the code field is:
0 = Redirect datagrams for the network
1 = Redirect datagrams for the host
2 = Redirect datagrams for the type of service and network
3 = Redirect datagrams for the type of service and host
Checksum: The checksum is the 16-bit one's complement of the one's complement sum of the ICMP message, starting with the ICMP type.
Gateway internet address: This field is used to indicate the router with the shortest path to the destination network.
Query
ICMP query messages are of four types:
• Echo request and reply
• Timestamp request and reply
• Address-mask request and reply
• Router solicitation and advertisement
3.7.4 Echo Request and Reply
The echo request and echo reply messages can be used to determine if there is communication at the IP level, because ICMP messages are encapsulated in IP datagrams.
Fig. 3.39 shows the format of the echo request and echo reply messages.
Fig. 3.39 Format of the echo and echo reply messages
3.7.5 Timestamp Request and Reply
Timestamp-request and timestamp-reply messages can be used to calculate the round-trip time between a source and a destination machine. Fig. 3.40 shows the timestamp request and timestamp reply message format.

Fig. 3.41
Type field (8 bits): The type field indicates the type of ICMP message. A timestamp request message has the number 13 and a timestamp reply message has the number 14.
Code field (8 bits): The code field is used by the different message formats to indicate specific conditions.
Checksum (16 bits): The checksum is the 16-bit one's complement of the one's complement sum.
Identifier and sequence number (16 bits each): The identifier and sequence number may be used by the sender to aid in matching the replies with the requests.
Originate timestamp (32 bits): The time, in milliseconds, at which the request left the source.
Receive timestamp (32 bits): The time, in milliseconds, at which the request arrived at the destination.
Transmit timestamp (32 bits): The time, in milliseconds, at which the timestamp reply was transmitted from the destination.
Example 3.5.1 An ICMP timestamp reply comes back with the following information:
Originate timestamp: 46
Receive timestamp: 59
Transmit timestamp: 60
Return time: 67
Calculate the sending time, receiving time and round-trip time.
Solution
Sending time = Receive timestamp - Originate timestamp = 59 - 46 = 13 milliseconds
Receiving time = Return time - Transmit timestamp = 67 - 60 = 7 milliseconds
Round-trip time = Sending time + Receiving time = 13 + 7 = 20 milliseconds
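The arithmetic in Example 3.5.1 can be expressed as a tiny Python sketch (illustrative only, not from the original notes).

```python
def timestamp_times(originate, receive, transmit, returned):
    """Compute sending, receiving and round-trip times from ICMP timestamps (ms)."""
    sending = receive - originate
    receiving = returned - transmit
    return sending, receiving, sending + receiving

print(timestamp_times(46, 59, 60, 67))  # (13, 7, 20) milliseconds
```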
3.7.6 Address Mask Request and Reply Messages
The address-mask request is used by a host to determine its address mask on a network. The address-mask reply is the response from a router or a host to the source host with the correct address mask for the network.
Fig. 3.42 shows the format of the mask request and reply messages.

Fig. 3.42
Type field: The type field indicates the type of ICMP message. An address-mask request message has the number 17 in the type field and an address-mask reply message has the number 18.
Code field: The code field is used by the different message formats to indicate specific conditions.
Checksum: The checksum is the 16-bit one's complement of the one's complement sum.
Identifier and sequence number: The identifier and sequence number may be used by the sender to aid in matching the replies with the requests.
Address mask: The address mask field contains the 32-bit subnet mask for the network (e.g. 255.255.255.0).
3.7.7 Router Solicitation and Advertisement
Fig. 3.43 shows the router solicitation message format.
Type : 10    Code : 0    Checksum
Identifier    Sequence number
Fig. 3.43
The identifier and sequence number fields are not used.
Fig. 3.44 shows the router advertisement message format. This is the reply that comes back in response to the previous request. The lifetime field shows the number of seconds for which the entries are considered to be valid.
Type : 9    Code : 0    Checksum
Number of addresses    Address entry size    Lifetime
Router address 1
Address preference 1
Router address 2
Address preference 2

Fig. 3.44
3.8 Routing Table
A host or a router has a routing table, with an entry for each destination or a combination of destinations, to route IP packets. A routing table can be either static or dynamic.
Static Routing Table

Fig 3.46 Common fields in a routing table
Mask | Network address | Next-hop address | Interface | Flag

Mask: This field defines the mask applied for the entry.
Network address: This field defines the network address to which the packet is finally delivered.
Reference count: This field gives the number of users of this route at the moment.
Use: This field shows the number of packets transmitted through this router for the corresponding destination.
Advantages of Static Routing
• Minimal CPU/memory overhead.
• Bandwidth is not used for sending routing updates.
Disadvantage of Static Routing
• Impractical on large networks.


Advantages and Disadvantages of Dynamic Routing
• Advantage: Simpler to configure on larger networks.
• Disadvantage: Updates are shared between routers, thus consuming bandwidth.
• Disadvantage: Routing protocols put an additional load on the router CPU/RAM.

Difference between Static and Dynamic Routing
• Static routing (non-adaptive): routes between the source and the destination computers are configured manually and do not change by themselves when the network changes.
• Dynamic routing (adaptive): routes are computed by routing algorithms for routing the data packets, and the routing protocol acts as a controlling mechanism that adapts if any faults occur in the network.

Direct Versus Indirect Delivery

3.4 Forwarding

Forwarding Techniques
• Next-hop method versus route method
• Network-specific method versus host-specific method

Unicast Routing Protocols

1. Intra- and Inter-domain Routing
Several intradomain and interdomain routing protocols are in use. Two intradomain routing protocols are distance vector and link state routing.

Comparison between Intra- and Inter-domain Routing
• Intra-domain routing: routing inside an autonomous system; the protocols are also called Interior Gateway Protocols (e.g. RIP, OSPF).
• Inter-domain routing: routing between AS's, across a collection of interconnected AS's; the protocols are also called Exterior Gateway Protocols. The routing protocol used is BGP.

Distance Vector Routing

The whole idea of distance vector routing is the sharing of information between neighbors. The starting assumption for distance-vector routing is that each node knows the cost of the link to each of its directly connected neighbors.

Final distances stored at each node (global view).

Routing Information Protocol (RIP)

RIP Message Format


Fig. 3.9.6 shows the RIP message format.

Fig. 3.9.6

Network address : The address field defines the address of the destination network.

RIP supports two types of messages: Request and Response.

A response message can be either solicited or unsolicited.


Solicited response
A solicited response contains information about the destination specified in the corresponding request.

Unsolicited response
An unsolicited response is sent periodically, every 30 seconds, and contains information
covering the whole routing table.

Garbage collection (120 sec).


Identifier - Indicates what type of address is specified in this particular entry.

Link State Routing

Learning about the neighbors:


Measuring line cost:

Distributing the link state packets :

The shortest paths can be computed using: i) Dijkstra's algorithm, or ii) the Bellman-Ford algorithm.


Dijkstra's algorithm:

Step-I: Source node is initialized and can be indicated as a filled circle.


Following example illustrates Dijkstra's algorithm.
Since shortest is E, now E is working node.
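As an illustration of the steps above, here is a small self-contained Python sketch of Dijkstra's algorithm using a priority queue. The topology and link costs are purely illustrative; they are not the example from the figure, which is not reproduced here.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> {neighbor: link cost}. Returns the shortest
    distance from source to every node and the predecessor of each node."""
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    pq = [(0, source)]                          # (tentative distance, node)
    while pq:
        d, u = heapq.heappop(pq)                # u becomes the "working node"
        if d > dist[u]:
            continue                            # stale queue entry
        for v, cost in graph[u].items():
            if d + cost < dist[v]:              # relax the edge u -> v
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

# Hypothetical topology; link costs are illustrative only
topology = {
    "A": {"B": 2, "E": 1},
    "B": {"A": 2, "C": 3},
    "C": {"B": 3, "D": 1},
    "D": {"C": 1, "E": 4},
    "E": {"A": 1, "D": 4},
}
print(dijkstra(topology, "A")[0])
```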

Bellman-Ford algorithm
Bellman-Ford algorithm is illustrated in the followingexample.
Solution :Step-1:

d(AE) < d(AC)
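A corresponding sketch of the Bellman-Ford relaxation, again on an illustrative set of links rather than the example in the text:

```python
def bellman_ford(edges, nodes, source):
    """edges: list of (u, v, cost) directed links. Relaxes every edge |V|-1
    times and returns the shortest distance from source to every node."""
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:        # a shorter path to v via u
                dist[v] = dist[u] + cost
    return dist

# Hypothetical links; costs are illustrative only
links = [("A", "B", 2), ("A", "E", 1), ("B", "C", 3),
         ("E", "D", 4), ("C", "D", 1), ("E", "C", 6)]
print(bellman_ford(links, {"A", "B", "C", "D", "E"}, "A"))
```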

Open Shortest Path First (OSPF)

OSPF supports multiple-circuit load balancing.


OSPF can converge very quickly to network topology change.
OSPF supports multiple metrics.
Fig. 3.10.4.

b. Database description

Area ID: Network ID of destination networks.

Authentication: This field includes a value from the authentication type.


OSPF Advantages
Low traffic overhead.
Fast convergence.
OSPF Disadvantages
Memory overhead.
Processor overhead.
Configuration: OSPF can be complex to configure.
Difference between Distance Vector and Link State Routing
Distance vector routing: It is a decentralized routing algorithm. Each router shares its entire
routing table, but only with its neighbouring routers. It needs less memory space and is simple
to implement and support.
Link state routing: Each router sends small updates about its own links everywhere in the
network.

Global Internet (or Path Vector Routing)


A path vector routing protocol provides information about how to reach a network, given the
list of autonomous systems that must be crossed to reach it.

BGP(Border Gateway Protocols)


BGP performs three functional procedures
1. Neighbour acquisition 2. Neighbour reachability 3. Network reachability.
Fig. 3.11.1shows the internal and external BGP.

Fig. 3.11.1

Marker: Marker field is used for authentication.

Type: The type field indicates the type of message. BGP defines four message types:
a) OPEN b) UPDATE c) NOTIFICATION d) KEEPALIVE
BGP easily solves the count-to-infinity problem.

3.11.2 Comparison between RIP and OSPF

RIP is a distance vector routing protocol, whereas OSPF is a link state routing protocol.


Multicast Routing Protocols
Unicast, Multicast, and Broadcast:

In unicast communication, there is one source and one destination.

Multicast Routing
Routing Protocols

PIM-DM mode

Routers explicitly join and leave the group by using "Join" and "Leave" messages.

Two Marks Questions


Identify the class/speciality of the following IP addresses (Dec 15)

d)255.255.255.255

What is the purpose of the Address Resolution Protocol ?(May 11)


Define an internetwork.

Define geographic routing.( May 10)

What are the different kinds of multicast routing ?(May 11)

Compare the Ethernet address with IP address


Ethernet address: a flat 48-bit physical address assigned by the manufacturer (organizationally administered), for example 8-0-20-b-de-3e.
IP address: a 32-bit hierarchical logical address, for example 172.16.16.1.

Efficient and hierarchical addressing and routing infrastructure


IPv6 networks provide auto-configuration capabilities.
Improved security features.

Large address space.


Stateless and stateful address configuration.

What are the metrics used by routing protocols ? (May 15)
Common metrics are hop count, bandwidth, delay, load, reliability and cost.
What is fragmentation and reassembly (Dec 16)

Give the comparison of unicast, multicast and broadcast routing. (Dec 16)
Support for complex address structures

Expand ICMP and write the function. (May 16)
ICMP stands for Internet Control Message Protocol. Its functions include:
1) Error reporting 2) Reachability testing 3) Congestion control 4) Route-change notification

5) Performance measuring 6) Subnet addressing

Highlight the characteristics of datagram networks. (Dec 17)

Each packet is forwarded independently.


::0F53:6382:AB00:67DB:BB27:7332 : Correct
7803:42F2:::88EC:D4BA:B75D:11CD : Incorrect because of too many colons (:)
UNIT IV
TRANSPORT LAYER
4.1 Introduction of Transport Layer
• The transport layer is responsible for source-to-destination (end-to-end) delivery of the entire
message.
• Transport layer functions
1. This layer breaks messages into packets.
2. It performs error recovery if the lower layers are not adequately error free.
3. It performs flow control if it is not done adequately at the network layer.
4. It multiplexes and demultiplexes sessions.
5. This layer can be responsible for setting up and releasing connections across the network.
5. This layer can be responsible for setting up and releasing connections across the network.
4.2 The Transport Services
• The transport entity provides services to transport service users, which might be application
processes.
• The hardware and software within the transport layer that does the work is called the
transport entity. The following categories of service are useful for describing the transport
service.
1. Type of service
2. Quality of service
3. Data transfer
4. User interface
5. Connection management
6. Expedited delivery
7. Status reporting
8. Security
1. Type of service
• It provides two types of services connection-oriented and connectionless or datagram
service.
2. Quality of service
• The transport protocol entity should allow the transport service user to specify the quality
of transmission service to be provided.
• Following are the transport layer quality of service parameters.
a) Error and loss levels.

b) Desired average and maximum delay.
c) Throughput.
d) Priority level.
e) Resilience.
3. Data transfer: It transfers data between two transport entities.
4. User Interface: There is no clear consensus on how the user interface to the transport
protocol should be standardized.
5. Connection management: If connection-oriented service is provided, the transport
entity is responsible for establishing and terminating connections.
6. Status reporting: It gives the following information.
a) Addresses.
b) Performance characteristics of a connection.
c) Class of protocol in use.
d) Current timer values.
7. Security: The transport entity may provide a variety of security services.

4.3 USER DATAGRAM PROTOCOL (UDP) :


UDP is a simple, datagram-oriented transport layer protocol. UDP is a connectionless protocol
that provides no reliability or flow control mechanisms; it also has no error recovery procedures.
The User Datagram Protocol (UDP) is therefore called a connectionless, unreliable transport
protocol. It performs very limited error checking.
1. User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes. Figure 4.9 shows the
format of a user datagram.

The fields are as follows:


Source port number. This is the port number used by the process running on the source host.
It is 16 bits long, which means that the port number can range from 0 to 65,535.
Destination port number. This is the port number used by the process running on the
destination host. It is also 16 bits long.

Length. This is a 16-bit field that defines the total length of the user datagram, header plus data.
Checksum. This field is used to detect errors.
3. Checksum
The UDP checksum calculation is different from the one for IP and ICMP. Here the checksum
includes three sections: a pseudoheader, the UDP header, and the data coming from the
application layer

Figure 4.10 Pseudoheader for checksum calculation


The pseudoheader is the part of the header of the IP packet in which the user datagram is to be
encapsulated, with some fields filled with 0s (see Figure 4.10).
Example 4.1
Figure 4.11 shows the checksum calculation for a very small user datagram with only 7 bytes of
data. Because the number of bytes of data is odd, padding is added for checksum calculation.
The pseudoheader as well as the padding will be dropped when the user datagram is delivered to
IP.
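As a rough sketch of this calculation (not reproducing Figure 4.11 itself), the Python code below builds the pseudoheader of Figure 4.10, appends the UDP header with a zero checksum field and the data, pads odd-length data, and takes the one's complement of the one's complement sum. The addresses, port numbers and payload are arbitrary examples.

```python
import struct, socket

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                          # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) + data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    length = 8 + len(payload)                    # UDP header plus data
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 17, length)          # zero byte, protocol 17 (UDP), UDP length
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum field = 0
    return ~ones_complement_sum(pseudo + header + payload) & 0xFFFF

print(hex(udp_checksum("10.0.0.1", "10.0.0.2", 1087, 13, b"TESTING")))
```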

4. UDP Operation:
UDP uses concepts common to the transport layer.

Connectionless Services
UDP provides a connectionless service. This means that each user datagram sent by UDP is an
independent datagram. There is no relationship between the different user datagrams even if they
are coming from the same source process and going to the same destination program.
Flow and Error Control
UDP is a very simple, unreliable transport protocol. There is no flow control and hence no
window mechanism. The receiver may overflow with incoming messages. There is no error
control mechanism in UDP except for the checksum.
Encapsulation and Decapsulation
To send a message from one process to another, the UDP protocol encapsulates and
decapsulates messages in an IP datagram.
Queuing
In UDP, queues are associated with ports. (Figure 4.12).

At the client site, when a process starts, it requests a port number from the operating system.
Some implementations create both an incoming and an outgoing queue associated with each
process. Other implementations create only an incoming queue associated with each process.
5. Use of UDP
The following lists some uses of the UDP protocol:
UDP is suitable for a process that requires simple request-response communication with little
concern for flow and error control.
UDP is suitable for a process with internal flow and error control mechanisms.
UDP is a suitable transport protocol for multicasting.
UDP is used for some route updating protocols such as Routing Information Protocol (RIP).
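A minimal illustration of this request-response style of use, written against the standard socket API as exposed by Python; the port number and payload are arbitrary, and the echo server is only a stand-in for a real UDP service.

```python
import socket, threading, time

def run_server(port: int = 9999) -> None:
    """Echo server: each received user datagram is handled independently."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP socket
    sock.bind(("127.0.0.1", port))
    data, addr = sock.recvfrom(2048)        # no connection, no flow/error control
    sock.sendto(data.upper(), addr)         # reply straight to the sender's address
    sock.close()

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.2)                             # give the server time to bind

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello udp", ("127.0.0.1", 9999))   # one request datagram
reply, _ = client.recvfrom(2048)                   # one response datagram
print(reply)                                       # b'HELLO UDP'
```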
4.4 TCP
TCP is a process-to-process (program-to-program) protocol. TCP is called a connection-
oriented, reliable transport protocol. TCP uses flow and error control mechanisms at the transport
level.
1. TCP Services

The services offered by TCP to the processes at the application layer.
Process-to-Process Communication
TCP provides process-to-process communication using port numbers.
2. Stream Delivery Service
TCP is a stream-oriented protocol. TCP allows the sending process to deliver data as a stream of
bytes and allows the receiving process to obtain data as a stream of bytes. TCP creates an
environment in which the two processes seem to be connected by an imaginary "tube" that
carries their data across the Internet.

Sending and Receiving Buffers:


Because the sending and the receiving processes may not write or read data at the same speed,
TCP needs buffers for storage. There are two buffers, the sending buffer and the receiving buffer,
one for each direction.
Segments
At the transport Layer, TCP groups a number of bytes together into a packet called a segment.
3. Full-Duplex Communication
TCP offers full-duplex service, in which data can flow in both directions at the same time. Each
TCP then has a sending and receiving buffer, and segments move in both directions.
4. Connection-Oriented Service
TCP is a connection-oriented protocol. When a process at site A wants to send and receive data
from another process at site B, the following occurs:
1. The two TCPs establish a connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.
Reliable Service
TCP is a reliable transport protocol. It uses an acknowledgment mechanism to check the safe
and sound arrival of data.
5. TCP Features
TCP has several features.
Numbering System

There are two fields called the sequence number and the acknowledgment number. These two
fields refer to the byte number and not the segment number.
Byte Number
TCP numbers all data bytes that are transmitted in a connection. The bytes of data being
transferred in each connection are numbered by TCP. The numbering starts with a randomly
generated number.
Sequence Number
After the bytes have been numbered, TCP assigns a sequence number to each segment that is
being sent.
Acknowledgment Number
Communication in TCP is full duplex; when a connection is established, both parties can send
and receive data at the same time.
6. Flow Control
TCP provides flow control. The receiver of the data controls the amount of data that are to be
sent by the sender.
Error Control
To provide reliable service, TCP implements an error control mechanism
Congestion Control
TCP, takes into account congestion in the network. The amount of data sent by a sender is not
only controlled by the receiver (flow control), but is also determined by the level of congestion
in the network.
7. Segment
A packet in TCP is called a segment.
8. Format
The format of a segment is shown in Figure 4.16

The segment consists of a 20- to 60-byte header.
Source port address. This is a 16-bit field that defines the port number of the application
program in the host that is sending the segment.
Destination port address. This is a 16-bit field that defines the port number of the application
program in the host that is receiving the segment.
Sequence number. This 32-bit field defines the number assigned to the first byte of data
contained in this segment.
Acknowledgment number. This 32-bit field defines the byte number that the receiver of the
segment is expecting to receive from the other party.
Header length. This 4-bit field indicates the number of 4-byte words in the TCP header. The
length of the header can be between 20 and 60 bytes. Therefore, the value of this field can be
between 5 (5 x 4 = 20) and 15 (15 x 4 = 60).
Reserved. This is a 6-bit field reserved for future use.
Control. This field defines 6 different control bits or flags as shown in Figure 4.17.One or more
of these bits can be set at a time.

These bits enable flow control, connection establishment and termination, connection abortion,
and the mode of data transfer in TCP. A brief description of each bit is shown in Table 4.3

Window size. This field defines the size of the window, in bytes, that the other party must
maintain.
Checksum. This 16-bit field contains the checksum.
Urgent pointer. This 16-bit field, which is valid only if the urgent flag is set, is used when the
segment contains urgent data.

Options. There can be up to 40 bytes of optional information in the TCP header.
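To tie the field list to concrete byte offsets, here is an illustrative Python sketch that unpacks the fixed 20-byte part of a segment header laid out as in Figure 4.16; the hand-crafted SYN segment at the end is only an example, not captured traffic.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Decode the fixed 20-byte part of a TCP header (Figure 4.16 layout)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4        # 4-bit field counting 4-byte words
    flags = offset_flags & 0x3F                  # URG, ACK, PSH, RST, SYN, FIN bits
    return {"src_port": src_port, "dst_port": dst_port,
            "seq": seq, "ack": ack, "header_len": header_len,
            "flags": flags, "window": window,
            "checksum": checksum, "urgent_ptr": urgent}

# A hand-crafted SYN segment: ports 1234 -> 80, seq 100, header length 20, SYN flag set
syn = struct.pack("!HHIIHHHH", 1234, 80, 100, 0, (5 << 12) | 0x02, 8192, 0, 0)
print(parse_tcp_header(syn))
```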
A TCP Connection
TCP is connection-oriented. A connection-oriented transport protocol establishes a virtual path
between the source and destination. All the segments belonging to a message are then sent over
this virtual path.
a. Connection Establishment
TCP transmits data in full-duplex mode. When two TCPs in two machines are connected,
they are able to send segments to each other simultaneously.
Three-Way Handshaking:
The connection establishment in TCP is called three-way handshaking. The three-way
handshaking process is shown in Figure 4.18.

1. The client sends the first segment, a SYN segment, in which only the SYN flag is set.
This segment is for synchronization of sequence numbers. It consumes one sequence number.
When the data transfer start, the sequence number is incremented by 1.
2. The server sends the second segment, a SYN +ACK segment, with 2 flag bits set:
SYN and ACK.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the
receipt of the second segment with the ACK flag and acknowledgment number field.
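In practice the handshake is performed by the operating system when an application opens a connection. The sketch below, using Python's socket API, shows the client's connect() triggering the SYN / SYN+ACK / ACK exchange and the server's accept() returning once the connection is established; the loopback address and port are arbitrary.

```python
import socket

# connect() performs the client side of the three-way handshake;
# accept() returns on the server once the handshake completes.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))       # SYN sent, SYN+ACK received, ACK sent
conn, addr = server.accept()              # connection is now ESTABLISHED on both ends

client.sendall(b"data after handshake")
print(conn.recv(2048))

conn.close(); client.close(); server.close()
```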
b. Data Transfer
After connection is established, bidirectional data transfer can take place. The client and server
can both send data and acknowledgments. Figure 4.19 shows an example.
In this example, after connection is established (not shown in the figure), the client sends 2000
bytes of data in two segments. The server then sends 2000 bytes in one segment.

The client sends one more segment. The first three segments carry both data and
acknowledgment, but the last segment carries only an acknowledgment because there are no
more data to be sent.
Pushing Data
The sending TCP uses a buffer to store the stream of data coming from the sending application
program. The sending TCP can select the segment size.

The application program at the sending site can request a push operation. This means that the
sending TCP must not wait for the window to be filled. It must create a segment and send it
immediately.
Urgent Data
On occasion an application program needs to send urgent bytes. This means that the sending
application program wants a piece of data to be read out of order by the receiving application
program.
c. Connection Termination
Any of the two parties involved in exchanging data (client or server) can close the connection,
although it is usually initiated by the client. Most implementations today allow two options for
connection termination: three-way handshaking and four-way handshaking with a half-close
option.
Three-Way Handshaking
Most implementations today allow three-way handshaking for connection termination as shown
in Figure 4.20.
1. In a normal situation, the client TCP, after receiving a close command from the client process,
sends the first segment, a FIN segment in which the FIN flag is set.

The FIN segment consumes one sequence number if it does not carry data.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and
sends the second segment, a FIN +ACK segment, to confirm the receipt of the FIN segment from
the client and at the same time to announce the closing of the connection in the other direction.
The FIN +ACK segment consumes one sequence number if it does not carry data.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN
segment from the TCP server. This segment contains the acknowledgment number, which is 1
plus the sequence number received in the FIN segment from the server.

Figure 4.20 : Three-way handshaking for connection termination


Half-Close
In TCP, one end can stop sending data while still receiving data. This is called a half-close.
Figure 4.21 shows an example of a half-close. The client half-closes the connection by sending a
FIN segment. The server accepts the half-close by sending the ACK segment. The data transfer
from the client to the server stops. The server, however, can still send data. When the server has
sent all the processed data, it sends a FIN segment, which is acknowledged by an ACK from the
client.

10. Flow Control
TCP uses a sliding window to handle flow control. The sliding window protocol used by TCP,
however, is something between the Go-Back-N and Selective Repeat sliding window.
The size of the window at one end is determined by the lesser of two values: receiver window
(rwnd) or congestion window (cwnd).

Some points about TCP sliding windows:


 The size of the window is the lesser of rwnd and cwnd.
 The source does not have to send a full window's worth of data.
 The window can be opened or closed by the receiver, but should not be shrunk.
 The destination can send an acknowledgment at any time as long as it does not result in a
shrinking window.
 The receiver can temporarily shut down the window; the sender, however, can always
send a segment of 1 byte after the window is shut down.
11. Error Control
TCP is a reliable transport layer protocol. This means that an application program that delivers a
stream of data to TCP relies on TCP to deliver the entire stream to the application program on
the other end in order, without error, and without any part lost or duplicated.
a. Checksum
Each segment includes a checksum field which is used to check for a corrupted segment. If the
segment is corrupted, it is discarded by the destination TCP and is considered as lost. TCP uses a
16-bit checksum that is mandatory in every segment.
b. Acknowledgment
TCP uses acknowledgments to confirm the receipt of data segments.
c. Retransmission
The heart of the error control mechanism is the retransmission of segments. When a segment is
corrupted, lost, or delayed, it is retransmitted.
i. Retransmission After RTO
A recent implementation of TCP maintains one retransmission time-out (RTO) timer for all
outstanding (sent, but not acknowledged) segments.

ii. Retransmission After Three Duplicate ACK Segments
One segment is lost and the receiver receives so many out-of-order segments that they cannot be
saved (limited buffer size).
a. Normal Operation
The first scenario shows bidirectional data transfer between two systems, as in Figure .
The client TCP sends one segment; the server TCP sends three. The figure shows which rule
applies to each acknowledgment. There are data to be sent, so the segment displays the next byte
expected. When the client receives the first segment from the server, it does not have any more
data to send; it sends only an ACK segment

b. Lost Segment
In this scenario, we show what happens when a segment is lost or corrupted. A lost
segment and a corrupted segment are treated the same way by the receiver. A lost segment is
discarded somewhere in the network; a corrupted segment is discarded by the receiver itself.
Both are considered lost. Figure 4.24 shows a situation in which a segment is lost and discarded
by some router in the network, perhaps due to congestion.

c. Fast Retransmission
In this scenario, we want to show the idea of fast retransmission. Our scenario is the same as the
second except that the RTO has a higher value (see Figure 4.25).

Example 4.5.1 With TCP's slow start and AIMD for congestion control, show how the window
size will vary for a transmission where every 5th packet is lost. Assume an advertised window
size of 50 MSS.
Solution: Since slow start is used, the window size is doubled every round (it grows by the
number of segments successfully sent). This continues until either the threshold value is reached
or a timeout occurs. Once the threshold is reached, AIMD takes over and the window size is
increased linearly. If a timeout occurs, the threshold is set to half of the current window size and
slow start begins again.
Assuming the window size at the start of the slow start phase is 2 MSS and the threshold
at the start of the first transmission is 8 MSS:

Window size for 1st transmission = 2 MSS
Window size for 2nd transmission = 4 MSS
Window size for 3rd transmission = 8 MSS
threshold reached, increase linearly (according to AIMD)
Window size for 4th transmission = 9 MSS
Window size for 5th transmission = 10 MSS
time out occurs; resend the 5th with the window size starting again with slow start.
Window size for 6th transmission = 2 MSS
Window size for 7th transmission = 4 MSS
threshold reached, now increase linearly (according to AIMD)
Additive Increase: 5 MSS (since 8 MSS isn't permissible anymore)
Window size for 8th transmission = 5 MSS
Window size for 9th transmission = 6 MSS
Window size for 10th transmission = 7 MSS
This shows that window size is variable and time out occurs during the fifth transmission.
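The trace above can be reproduced with a small simulation. The sketch below follows the same assumptions as the example (initial window 2 MSS, initial threshold 8 MSS, slow start capped at the threshold, restart from 2 MSS after a timeout on every 5th transmission); it is a toy model, not a full TCP congestion-control implementation, and the 50-MSS advertised window never binds in this trace.

```python
def simulate(num_transmissions=10, cwnd=2, ssthresh=8, loss_every=5):
    """Toy trace of slow start plus additive increase with periodic timeouts."""
    sizes = []
    for t in range(1, num_transmissions + 1):
        sizes.append(cwnd)
        if t % loss_every == 0:                    # every 5th transmission is lost
            ssthresh = max(cwnd // 2, 2)           # threshold = half the current window
            cwnd = 2                               # restart slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)         # slow start, capped at the threshold
        else:
            cwnd += 1                              # additive increase (congestion avoidance)
    return sizes

print(simulate())   # [2, 4, 8, 9, 10, 2, 4, 5, 6, 7] MSS, matching the trace above
```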
Example 4.5.2 Suppose you are hired to design a reliable byte-stream protocol that uses a sliding
window (like TCP). This protocol will run over a 50-Mbps network, the RTT of the network is
80 ms and the maximum segment lifetime is 60 seconds. How many bits would you include in the
AdvertisedWindow and SequenceNum fields of your protocol header?
Solution
The AdvertisedWindow must be able to cover the bandwidth-delay product:
RTT x Bandwidth = 0.08 s x (50 x 10^6 bps) / 8 = 500,000 bytes (about 500 KB).
We therefore need at least 19 bits for the advertised window, since 2^19 = 524,288 bytes.
The SequenceNum field must not wrap within a maximum segment lifetime. The data sent in
60 s is (50 x 10^6 bps x 60 s) / 8 = 375 MB, so at least 29 bits are required (2^29 bytes = 512 MB).
4.6 Adaptive Retransmission
• TCP guarantees the reliable delivery of data; it retransmits each segment if an ACK is not
received in a certain period of time. TCP sets this timeout as a function of the RTT it
expects between the two ends of the connection.
• TCP uses an adaptive retransmission mechanism.
• Every time TCP sends a data segment, it records the time. When an ACK for that
segment arrives, TCP reads the time again and then takes the difference between these
two times as a sample RTT.
• TCP then computes an estimate RTT as a weighted average between the previous
estimate and this new sample.
Estimated RTT = α x Estimated RTT + (1 - α) x Sample RTT

• The parameter α is selected to smooth the estimated RTT.
• TCP then uses Estimated RTT to compute the timeout in a rather conservative way:
Timeout = 2 x Estimated RTT
4.6.1 Karn / Partridge Algorithm
• The problem with the above algorithm is that an ACK does not really acknowledge a
particular transmission; it actually acknowledges the receipt of data. Fig. 4.6.1 shows the
ambiguity of associating the ACK with the original transmission or with the retransmission.
• If you assume that the ACK is for the original transmission but it was really for the
second, then the sample RTT is too large, which is shown in Fig. 4.6.1.
• If you assume that the ACK is for the second transmission but it was actually for the first,
then the sample RTT is too small.

Fig. 4.6.1 Ack with original


Solution to the above problem
• Whenever TCP retransmits a segment, it stops taking samples of the RTT; it only
measures sample RTT for segments that have been sent only once. This solution is
known as the Karn/Partridge algorithm.
• Each time TCP retransmits, it sets the next timeout to be twice the last timeout, rather than
basing it on the last estimated RTT.

Fig.4.6.2 Retransmission with ACK

4.6.2 Jacobson / Karels Algorithm
• This algorithm can be used by any end-to-end protocol.
• In this algorithm, the sender measures a new sample RTT as before. It then folds this new
sample into the timeout calculation as follows :
Difference = Sample RTT - Estimated RTT
Estimated RTT = Estimated RTT + (δ x Difference)
Deviation = Deviation + δ x ( | Difference | - Deviation)
where δ is a fraction between 0 and 1.
• TCP then computes the timeout value as a function of both Estimated RTT and
Deviation as follows :
Timeout = µ x Estimated RTT + φ x Deviation
where typically µ = 1 and φ = 4.
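A compact sketch of this estimator in Python, using illustrative values δ = 0.125, µ = 1 and φ = 4 (common choices, though the text does not fix them); the sample RTTs fed in at the end are arbitrary.

```python
class RttEstimator:
    """Jacobson/Karels-style timeout computation (delta, mu, phi as in the text)."""
    def __init__(self, delta=0.125, mu=1.0, phi=4.0):
        self.delta, self.mu, self.phi = delta, mu, phi
        self.estimated_rtt = None
        self.deviation = 0.0

    def sample(self, sample_rtt: float) -> None:
        if self.estimated_rtt is None:           # first measurement seeds the estimate
            self.estimated_rtt = sample_rtt
            return
        difference = sample_rtt - self.estimated_rtt
        self.estimated_rtt += self.delta * difference
        self.deviation += self.delta * (abs(difference) - self.deviation)

    def timeout(self) -> float:
        return self.mu * self.estimated_rtt + self.phi * self.deviation

est = RttEstimator()
for rtt in (0.100, 0.120, 0.300, 0.110):         # sample RTTs in seconds
    est.sample(rtt)
print(round(est.timeout(), 3))
```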
4.9.1 Comparison of TCP and UDP
Service/Features TCP UDP
Connection-oriented yes no
Full duplex yes yes
Reliable data transfer yes no
Partial-reliable data transfer no no
Ordered data delivery yes no
Unordered data delivery no yes
Flow control yes no
Congestion control yes no
ECN capable yes no
Selective ACKs optional no
Preservation of message boundaries no yes
Path MTU discovery yes no
Application PDU fragmentation yes no
Application PDU bundling yes no
Multistreaming no no
Multihoming no no
Protecting against SYN flooding attacks no Not applicable
Allows half-closed connections yes Not applicable
Reachability check yes no
Pseudo-header for checksum yes yes
Time wait state for 4-tuple yes Not applicable

Questions and Answers
1. What are the advantages of using UDP over TCP? (Dec 10)
Ans: UDP does not include the overhead needed to provide reliability and maintain connection-
oriented semantics, so it is simpler and faster than TCP.
2. Give the approaches to improve the QoS. (May 11, Dec 17)
Ans: Approaches to QoS:
1. Fine-grained approaches, which provide QoS to individual applications or flows.
Integrated services, a QoS architecture developed in the IETF and often associated with
RSVP.
2. Coarse-grained approaches, which provide QoS to large classes of data or aggregated
traffic. Differentiated services is probably the most widely deployed QoS
mechanism.
3. What is TCP ?(Dec 11)
Ans: TCP provides a connection oriented, reliable, byte stream service. The term connection-
oriented means the two applications using TCP must establish a TCP connection with each other
before they can exchange data.
4. Define congestion.( Dec 11)
Ans: When too many packets rushing to a node or a part of network, the network performance
degrades so this situation is called as congestion.
5. What do you mean by slow start in TCP congestion ?(May 16)
Ans: Slow-start is part of the congestion control strategy used by TCP, the data transmission
protocol used by many Internet applications. Slow-start is used in conjunction with other
algorithms to avoid sending more data than the network is capable of transmitting, that is, to
avoid causing network congestion.
6 What do you mean by QoS ?(Dec14,15,16,18)
Ans: Quality of Service is used in some organizations to help provide an optimal end-user
experience for audio and video communications. QoS is most-commonly used on networks
where bandwidth is limited: with a large number of network packets competing: for a relatively
small amount of available bandwidth.
7. Why is UDP pseudo header included in UDP checksum calculation? What is the
effect of an invalid checksum at the receiving UDP?(May 13)
Ans: The pseudoheader is included to verify that the user datagram has reached its correct
destination (the right IP address and protocol). If the receiving UDP finds an invalid checksum,
the user datagram is silently discarded; because UDP is connectionless, no error message is
returned to the sender.

8. Suppose TCP operates over a 1-Gbps link, utilizing the full bandwidth continuously.
How long will it take for sequence numbers to wrap around, completely? Suppose
an added 32-bit timestamp field increments, 1000 times during this wrap around
time, how long will it take for the, timestamp filed to wraparound?(May 13,18)
Ans: The TCP AdvertisedWindow is 16 bits and SequenceNum is 32 bits, so at most 2^32
bytes can be outstanding on this 1-Gbps link. The corresponding transmission time is
2^32 x 8 / (1 x 10^9) = 34.36 s, so it takes 34.36 s to wrap around the sequence number.
Each increment of the timestamp = 34.36 s / 1000 = 34.36 ms,
so the total time that can be expressed by this timestamp = 34.36 x 10^-3 x 2^32 s
= 1.48 x 10^8 s = 4.68 years.
So, with this timestamp added, it will take 4.68 years for the timestamp field to wrap around.
9. Differentiate between delay and jitter.(Dec 13)
Ans: Delay is the time it takes a packet totravel across the network from source todestination.
Jitter is the fluctuation of end-to-end delay from packet to the next packet.
10. What is the difference between congestion control and flow control ?( Dec 15,17)
Ans. : i) Flow control is exercised between a sender and a receiver (end systems), whereas
congestion control is concerned with the load on the network and is handled with the routers.
ii) Flow control does not restrict the bandwidth of the medium, whereas congestion control
limits how much of the bandwidth of the medium is used.
iii) Flow control affects network performance less; congestion control directly affects the
network performance.
iv) Flow control uses buffering, whereas congestion control does not rely on buffering.
11. List some ways to deal with congestion.
Ans. : Several ways to handle congestion
1. Packet elimination 2. Flow control
3. Buffer allocation 4. Choke packets
12. Define a network congestion.
Ans: When two or more nodes would simultaneously try to transmit packets to one node there is
a high probability that the number of packets would exceed the packet handling capacity of the
network and lead to congestion.
13. Define a segment
Ans: The term segment usually refers to an information unit whose source and destination are
transport layer entities.
14. Defineslow start.(May 14)
Ans. :Slow start is congestion control in TCP.

15. When can an application make use of UDP ?( May 14)
Ans. : When fast data transmission with minimal overhead or multicast operation is required.
16. Differentiate UDP and TCP (May 14,16)
Ans. :
Sr.No. UDP TCP
1. Connectionless Connection oriented
2. Connection is message stream. Connection is byte stream
3. Supports broadcasting Does not support broadcasting
17. List some of the quality of service parameters of transport layer.(May 15)
Ans. :ISO specifies eleven QoS parameters for transport layer
1. Connection establishment delay 2. Connection establishment failure probability
3. Throughput 4. Transit delay
5. Residual error rate 6. Transfer failure probability
7. Connection release delay 8. Connection release failure probability
9. Protection 10. Priority
11. Resilience
18. How does transport layer perform duplication control? (May 15)
Ans: TCP uses a sequence number to identify each byte of data. This helps to avoid duplicate data
and disordering during transmission.
19. List the different phases used in TCP connection.(May 16)
Ans. : 1) TCP connection establishment
2) Data transfer
3) TCP connection termination (release)
20. How do fast retransmit mechanism of TCP works ?(May 17)
Ans: With fast retransmit, the sender retransmits the missing TCP segments before their
retransmission timers expire. Because the retransmission timers did not expire for the missing
TCP segments, the missing segments are received at the destination and acknowledged by the
receiver more quickly than they would have been without fast retransmit, and the sender can
more quickly send later segments to the receiver. This process is known as fast recovery.
21. What are the services provided by Transport layer protocol ?(May 18)
Ans: The services provided by Transport layer protocol are
• Reliable communication over an unreliable channel
• It provides connection-oriented and connectionless services
• It provides logical communication between processes running on hosts.

22. Define congestion control(May 18)
Ans: Congestion control refers to the mechanisms and techniques to control thecongestion and
keep the load below the capacity.


APPLICATION LAYER
UNIT - V

Domain Name Space (DNS), DDNS, TELNET, EMAIL, File Transfer Protocol (FTP), WWW, HTTP, SNMP,
Bluetooth, Firewalls, Basic concepts of Cryptography.

Domain Name Space (DNS):

DNS is a host name to IP address translation service. DNS is a distributed database implemented in a
hierarchy of name servers. It is an application layer protocol for message exchange between clients and
servers.

DNS Examples
There are various kinds of domains:
1. Generic domain: .com (commercial), .edu (educational), .mil (military), .org (non-profit
organization), .net (similar to commercial); all these are generic domains.
2. Country domain: .in (India), .us, .uk, which identify a country.
3. Inverse domain: used when we want to find the domain name corresponding to an IP address
(IP-to-name mapping). DNS can therefore provide both mappings; for example, to find the IP
address of geeksforgeeks.org we can type nslookup www.geeksforgeeks.org.
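Both mappings can also be exercised programmatically. The following sketch uses Python's standard socket resolver; it needs network access, and the reverse lookup only succeeds where a PTR record exists.

```python
import socket

# Forward lookup: name -> IP address (what a browser does before connecting)
ip = socket.gethostbyname("www.geeksforgeeks.org")
print(ip)

# Reverse (inverse-domain style) lookup: IP address -> host name
try:
    name, _, _ = socket.gethostbyaddr(ip)
    print(name)
except socket.herror:
    print("no reverse mapping registered for", ip)
```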

Dynamic Domain Name System (DDNS) :


It is a method of automatically updating a name server in the Domain Name System (DNS), often in real
time, with the active DDNS configuration of its configured hostnames, addresses, or other information.

Advantages:
1. It saves the time required to update static addresses manually when the network configuration changes.
2. It saves address space, because addresses are allocated only as required at any one time rather than
reserving one for every possible user of the IP address.
3. It is very convenient from the user's point of view, as any IP address changes will not affect any of their
activities.
4. It does not affect accessibility, as changed IP addresses are configured automatically against the URLs.

Disadvantages:
1. It is less reliable due to the lack of static IP address to domain name mappings.
2. A dynamic DNS service alone cannot guarantee that the device you are attempting to
connect to is actually your own.
Telnet

Telnet is a protocol that allows you to connect to remote computers (called hosts) over a TCP/IP network (such
as the internet). Once your telnet client establishes a connection to the remote host, your client becomes a
virtual terminal, allowing you to communicate with the remote host from your computer.

Modes of Operation:
Most telnet implementation operates in one of the following three modes.
Default Mode:
• If no other mode is invoked, this mode is used.
• Echoing is performed in this mode by the client.
• In this mode, the user types a character and the client echoes the character on the screen, but it does not
send the character until the whole line is completed.
Character Mode:
• Each character typed in this mode is sent by the client to the server.
• The server in this mode normally echoes the character back to be displayed on the client's
screen.
Line Mode:
• Line editing like echoing, character erasing etc is done from the client side.
• Client will send the whole line to the server.
Electronic Mail

Email is one of the most widely used services of the Internet. This service allows an Internet user to send
a message in a formatted manner (mail) to another Internet user in any part of the world. A mail message
can contain not only text but also images, audio and video data. The person who sends the mail
is called the sender and the person who receives it is called the recipient. It is just like the postal mail service.
Components of E-Mail System :
The basic components of an email system are : User Agent (UA), Message Transfer Agent (MTA), Mail
Box, and Spool file. These are explained as following below.
1. User Agent (UA) :
The UA is normally a program which is used to send and receive mail. Sometimes it is called a mail
reader. It accepts a variety of commands for composing, receiving and replying to messages as well as
for manipulation of the mailboxes.

2. Message Transfer Agent (MTA) :


MTA is actually responsible for transfer of mail from one system to another. To send a mail, a system
must have a client MTA and a server MTA. It transfers mail to the mailboxes of recipients if they are
connected to the same machine; it delivers mail to the peer MTA if the destination mailbox is on another
machine. The delivery from one MTA to another MTA is done by the Simple Mail Transfer Protocol.

3. Mailbox :
It is a file on the local hard drive that collects mails. Delivered mails are kept in this file. The user can
read or delete them according to his/her requirement. To use the e-mail system each user must have a
mailbox. Access to the mailbox is restricted to the owner of the mailbox.

4. Spool file :
This file contains mails that are to be sent. The user agent appends outgoing mails to this file using SMTP.
The MTA extracts pending mail from the spool file for delivery. E-mail allows one name, an alias, to
represent several different e-mail addresses; this is known as a mailing list. Whenever a user sends a
message, the system checks the recipient's name against the alias database. If a mailing list exists for the
defined alias, separate messages, one for each entry in the list, must be prepared and handed to the MTA. If
there is no such mailing list for the defined alias, the name itself becomes the destination address and a single
message is delivered to the mail transfer entity.

Services provided by E-mail system:


• Composition –
Composition refers to the process of creating messages and answers. Any kind of text editor
can be used for composition.
• Transfer –
Transfer means the sending procedure of mail, i.e. from the sender to the recipient.
• Reporting –
Reporting refers to the confirmation of delivery of mail. It helps the user to check whether the mail was
delivered, lost or rejected.
• Displaying –
It refers to presenting the mail in a form that is understood by the user.

• Disposition –
This step concerns what the recipient will do after receiving the mail, i.e. save the mail, delete it
before reading or delete it after reading.

File Transfer Protocol(FTP):

It is an application layer protocol which moves files between local and remote file systems. It runs on top
of TCP, like HTTP. To transfer a file, FTP uses two TCP connections in parallel: a control
connection and a data connection.

What is control connection?


For sending control information like user identification, password, commands to change the remote
directory, commands to retrieve and store files, etc., FTP makes use of control connection. The control
connection is initiated on port number 21.
What is data connection?
For sending the actual file, FTP makes use of the data connection. The data connection is initiated on port number 20.
FTP sends the control information out-of-band as it uses a separate control connection. Some protocols
send their request and response header lines and the data in the same TCP connection. For this reason,
they are said to send their control information in-band. HTTP and SMTP are such examples.

FTP Session :
When an FTP session is started between a client and a server, the client initiates a control TCP connection
with the server side and sends control information over it. When the server receives this, it
initiates a data connection to the client side. Only one file can be sent over one data connection, but the
control connection remains active throughout the user session. HTTP is stateless, i.e. it does
not have to keep track of any user state, but FTP needs to maintain state about its user throughout the
session.
Data Structures: FTP allows three types of data structures:
1. File Structure – In file-structure there is no internal structure and the file is considered to be a
continuous sequence of data bytes.
2. Record Structure – In record-structure the file is made up of sequential records.
3. Page Structure – In page-structure the file is made up of independent indexed pages.

FTP Commands – Some of the FTP commands are:


USER – This command sends the user identification to the server.
PASS – This command sends the user password to the server.
CWD – This command allows the user to work with a different directory or dataset for file storage or
retrieval without altering his login or accounting information.
RMD – This command causes the directory specified in the path-name to be removed as a directory.
MKD – This command causes the directory specified in the pathname to be created as a directory.
PWD – This command causes the name of the current working directory to be returned in the reply.
RETR – This command causes the remote host to initiate a data connection and to send the requested file
over the data connection.
STOR – This command causes a file to be stored in the current directory of the remote host.
LIST – Sends a request to display the list of all the files present in the directory.
ABOR – This command tells the server to abort the previous FTP service command and any associated
transfer of data.
QUIT – This command terminates a USER and if file transfer is not in progress, the server closes the
control connection.
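A typical session can be driven from Python's standard ftplib module, which issues these commands over the control connection on port 21 and opens data connections for LIST and RETR. The host name, credentials and file name below are hypothetical examples, not a real server.

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")          # control connection to port 21
ftp.login("anonymous", "guest@example.com")   # USER / PASS
print(ftp.pwd())                      # PWD
ftp.cwd("/pub")                       # CWD
ftp.retrlines("LIST")                 # LIST, sent over a data connection
with open("readme.txt", "wb") as f:
    ftp.retrbinary("RETR readme.txt", f.write)   # RETR over a data connection
ftp.quit()                            # QUIT
```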

World Wide Web (WWW)


The World Wide Web is abbreviated as WWW and commonly known as the web. The WWW was initiated
by CERN (the European Organization for Nuclear Research) in 1989.

History:
It is a project created by Tim Berners-Lee in 1989 so that researchers could work together effectively at
CERN. Later, an organisation named the World Wide Web Consortium (W3C) was formed for further
development of the web. This organisation is directed by Tim Berners-Lee, also known as the father of the web.

Working of WWW:
The World Wide Web is based on several different technologies : Web browsers, Hypertext Markup
Language (HTML) and Hypertext Transfer Protocol (HTTP).
Web browser is used to access webpages. Web browsers can be defined as programs which display text,
data, pictures, animation and video on the Internet.

Some of the commonly used browsers are Internet Explorer, Opera Mini, Google Chrome.

Features of WWW:
• HyperText Information System
• Cross-Platform
• Distributed
• Open Standards and Open Source
• Uses Web Browsers to provide a single interface for many services
• Dynamic, Interactive and Evolving.
• “Web 2.0”
Components of Web
There are 3 components of web:
1. Uniform Resource Locator (URL): serves as the system for identifying and locating resources on the web.
2. Hypertexts Transfer Protocol (HTTP): specifies communication of browser and server.
3. Hyper Text Markup Language (HTML): defines structure, organization and content of
webpage.

HTTP
stands for HyperText Transfer Protocol. It was invented by Tim Berners-Lee. HyperText is text which
is specially coded with the help of a standard coding language called HyperText Markup Language
(HTML).

Characteristics of HTTP :
HTTP is an IP-based communication protocol which is used to deliver data from server to client or vice
versa.
1. The server processes a request raised by the client; the server and client know each other
only during the current request and response period.
2. Any type of content can be exchanged as long as the server and client are compatible with it.
3. Once the data is exchanged, the server and client are no longer connected with each other.
4. It is a request and response protocol based on client and server requirements.
5. It is a connectionless protocol because after the connection is closed, the server does not remember
anything about the client and the client does not remember anything about the server.
6. It is a stateless protocol because the client and server do not expect anything from each other,
but they are still able to communicate. A minimal request-response exchange is sketched below.
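The sketch below shows one such request-response cycle using Python's standard http.client module; example.com is used only as a convenient public test host, and network access is required.

```python
import http.client

# One request-response cycle: the client raises a request, the server processes it,
# and once the exchange ends no state about the client is kept (stateless).
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/")                  # request line plus headers
response = conn.getresponse()             # status line, headers, then the body
print(response.status, response.reason)   # e.g. 200 OK
body = response.read()
print(body[:80])
conn.close()                              # nothing about the exchange is remembered
```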
Advantages :
• Memory usage and CPU usage are low because of fewer simultaneous connections.
• Since there are few TCP connections, network congestion is reduced.
• Since handshaking is done at the initial connection stage, latency is reduced because there is no
further need of handshaking for subsequent requests.
• Errors can be reported without closing the connection.
• HTTP allows pipelining of requests and responses.
Disadvantages :
• HTTP requires more resources to establish communication and transfer data.
• HTTP is less secure, because it does not use any encryption; HTTPS, in contrast, uses TLS to encrypt
requests and responses.
• HTTP is not optimized for cellular phones and it is too verbose.
• HTTP does not offer a genuine exchange of data because it is less secure.
• The client does not close the connection until it receives complete data from the server, so the server
needs to wait for data completion and cannot be available to other clients during this time.
Simple Network Management Protocol (SNMP)
SNMP is an application layer protocol which uses UDP port numbers 161/162. SNMP is used to monitor
the network, detect network faults and sometimes even to configure remote devices.

SNMP components –
There are 3 components of SNMP:
1. SNMP Manager –
It is a centralized system used to monitor network. It is also known as Network Management Station
(NMS)
2. SNMP agent –
It is a software management software module installed on a managed device. Managed devices can be
network devices like PC, router, switches, servers etc.
3. Management Information Base –
The MIB consists of information about the resources that are to be managed. This information is organised
hierarchically. It consists of object instances, which are essentially variables.
SNMP messages –
The different message types are:
1. Get Request –
SNMP manager sends this message to request data from SNMP agent. It is simply used to retrieve data
from SNMP agent. In response to this, SNMP agent responds with requested value through response
message.
2. Get Next Request –
This message can be sent to discover what data is available on a SNMP agent. The SNMP manager
can request for data continuously until no more data is left. In this way, SNMP manager can take
knowledge of all the available data on SNMP agent.
3. Get Bulk Request –
This message is used to retrieve large data at once by the SNMP manager from SNMP agent. It is
introduced in SNMPv2c.
4. Set Request –
It is used by SNMP manager to set the value of an object instance on the SNMP agent.
5. Response –
It is a message sent from the agent upon a request from the manager. When sent in response to a Get message,
it will contain the data requested. When sent in response to a Set message, it will contain the newly set
value as confirmation that the value has been set.
6. Trap –
These are messages sent by the agent without being requested by the manager. A trap is sent when a
fault has occurred.
7. Inform Request –
It was introduced in SNMPv2c, used to identify if the trap message has been received by the manager
or not. The agent can be configured to resend the trap continuously until it receives an Inform message. It is
the same as a trap but adds an acknowledgement that the trap doesn't provide.
SNMP security levels –
It defines the type of security algorithm performed on SNMP packets. These are used in only SNMPv3.
There are 3 security levels namely:
1. No Auth No Priv –
This (no authentication, no privacy) security level uses community string for authentication and no
encryption for privacy.
2. Auth No Priv – This security level (authentication, no privacy) uses HMAC with MD5 for
authentication and no encryption is used for privacy.
3. Auth Priv – This security level (authentication, privacy) uses HMAC with MD5 or SHA for
authentication and encryption uses the DES-56 algorithm.

Bluetooth
It is a Wireless Personal Area Network (WPAN) technology and is used for exchanging data over small
distances. This technology was invented by Ericsson in 1994. It operates in the unlicensed industrial,
scientific and medical (ISM) band from 2.4 GHz to 2.485 GHz. A maximum of 7 devices can be connected at the
same time. Bluetooth has a range of up to 10 meters. It provides data rates of up to 1 Mbps or 3 Mbps depending
upon the version.

Bluetooth Architecture:
The architecture of bluetooth defines two types of networks:

1. Piconet

2. Scatternet

Piconet:
A piconet is a type of bluetooth network that contains one primary node, called the master node, and up to seven
active secondary nodes, called slave nodes. Thus, there can be a total of 8 active nodes, which
are present within a distance of 10 metres. The communication between the primary and secondary nodes can
be one-to-one or one-to-many. Communication is possible only between the master and a slave; slave-to-slave
communication is not possible. A piconet can also have up to 255 parked nodes; these are secondary nodes that
cannot take part in communication unless they are converted to the active state.
Scatternet:
It is formed by combining various piconets. A slave that is present in one piconet can act as the master
(primary) in another piconet. Such a node can receive a message from the master in the piconet where it is a
slave and deliver the message to its slaves in the other piconet where it is acting as master. This type of node is
referred to as a bridge node. A station cannot be the master in two piconets.
Bluetooth protocol stack:
1. Radio (RF) layer:
It performs modulation/demodulation of the data into RF signals. It defines the physical characteristics
of bluetooth transceiver. It defines two types of physical link: connection-less and connection-
oriented.
2. Baseband Link layer:
It performs the connection establishment within a piconet.
3. Link Manager protocol layer:
It performs the management of the already established links. It also includes authentication and
encryption processes.
4. Logical Link Control and Adaption protocol layer:
It is also known as the heart of the bluetooth protocol stack. It allows the communication between
upper and lower layers of the bluetooth protocol stack. It packages the data packets received from
upper layers into the form expected by lower layers. It also performs the segmentation and
multiplexing.
5. SDP layer:
It is short for Service Discovery Protocol. It allows to discover the services available on another
bluetooth enabled device.
6. RFCOMM layer:
It is short for Radio Frequency Communication. It emulates a serial interface and is used by WAP and OBEX.
7. OBEX:
It is short for Object Exchange. It is a communication protocol to exchange objects between 2
devices.

8. WAP:
It is short for Wireless Application Protocol. It is used for internet access.

9. TCS:
It is short for Telephony Control Protocol. It provides telephony service.
10. Application layer:
It enables the user to interact with the application.

Advantages:
• Low cost.
• Easy to use.
• It can also penetrate through walls.
• It creates an ad hoc connection immediately without any wires.
• It is used for voice and data transfer.
Disadvantages:
• It can be hacked and hence, less secure.
• It has slow data transfer rate: 3 Mbps.
• It has small range: 10 meters.

Firewall
A firewall is a network security device, either hardware or software-based, which monitors all incoming
and outgoing traffic and based on a defined set of security rules it accepts, rejects or drops that specific
traffic.
Accept : allow the traffic
Reject : block the traffic but reply with an “unreachable error”
Drop : block the traffic with no reply
A firewall establishes a barrier between secured internal networks and outside untrusted network, such as
the Internet.
Firewalls are generally of two types: Host-based and Network-based.
1. Host-based Firewalls : A host-based firewall is installed on each network node and controls
each incoming and outgoing packet. It is a software application or suite of applications that comes as a
part of the operating system. Host-based firewalls are needed because network firewalls cannot
provide protection inside a trusted network. A host firewall protects each host from attacks and
unauthorized access.
2. Network-based Firewalls : Network firewalls function at the network level. In other words, these
firewalls filter all incoming and outgoing traffic across the network. A network firewall protects the internal network by
filtering the traffic using rules defined on the firewall. A Network firewall might have two or more
network interface cards (NICs). A network-based firewall is usually a dedicated system with
proprietary software installed.

Cryptography

Cryptography is the technique of securing information and communications through the use of codes, so that only
those persons for whom the information is intended can understand and process it, thus preventing
unauthorized access to the information. The prefix "crypt" means "hidden" and the suffix "graphy" means
"writing".

Features of Cryptography are as follows:


1. Confidentiality:
Information can only be accessed by the person for whom it is intended; no other person can
access it.
2. Integrity:
Information cannot be modified in storage or transition between sender and intended receiver without
any addition to information being detected.
3. Authentication:
The identities of sender and receiver are confirmed. As well as destination/origin of information is
confirmed.
Types of Cryptography:
Symmetric Key Cryptography:
It is an encryption system where the sender and receiver of a message use a single common key to encrypt
and decrypt messages. Symmetric key systems are faster and simpler, but the problem is that the sender and
receiver have to somehow exchange the key in a secure manner. The most popular symmetric key
cryptography system is the Data Encryption Standard (DES). The idea is illustrated with a toy cipher below.
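As a toy illustration of the symmetric-key idea only (it is not a real cipher and nothing like as strong as DES or AES), the sketch below uses XOR, which is its own inverse, so the very same key both encrypts and decrypts:

```python
# Toy symmetric-key example: the *same* key is used to encrypt and to decrypt.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"                           # must be exchanged securely beforehand
ciphertext = xor_cipher(b"hello world", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)   # decrypting reuses the same key
print(ciphertext.hex(), plaintext)
```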
Asymmetric Key Cryptography:
Under this system a pair of keys is used to encrypt and decrypt information. A public key is used for
encryption and a private key is used for decryption. The public key and the private key are different. Even if the
public key is known by everyone, only the intended receiver can decode the message, because he alone knows the
private key.

2 MARKS QUESTIONS AND ANSWERS

1. What is the purpose of Domain Name System?


Domain Name System can map a name to an address and conversely an address to name.

2. Discuss the three main division of the domain name space.

Domain name space is divided into three different sections: generic domains, country domains & inverse
domain.

Generic domain: Defines registered hosts according to their generic behavior; uses generic suffixes.
Country domain: Uses two characters to identify a country as the last suffix.
Inverse domain: Finds the domain name given the IP address.

3. Discuss the TCP connections needed in FTP.

FTP establishes two connections between the hosts. One connection is used for data transfer, the other for
control information. The control connection uses very simple rules of communication. The data connection
needs more complex rules due to the variety of data types transferred.

4. Discuss the basic model of FTP.

The client has three components: the user interface, the client control process, and the client data transfer
process. The server has two components: the server control process and the server data transfer process. The
control connection is made between the control processes. The data connection is made between the data
transfer processes.

5. What is the function of SMTP?

The TCP/IP protocol that supports electronic mail on the Internet is called the Simple Mail Transfer
Protocol (SMTP). It is a system for sending messages to other computer users based on e-mail addresses.
SMTP provides mail exchange between users on the same or different computers.

6. What is the difference between a user agent (UA) and a mail transfer agent? (MTA)?

The UA prepares the message, creates the envelope, and puts the message in the envelope. The MTA transfers
the mail across the Internet.

7. How does MIME enhance SMTP?

MIME is a supplementary protocol that allows non-ASCII data to be sent through SMTP. MIME transforms
non-ASCII data at the sender site to NVT ASCII data and delivers it to the client SMTP to be sent through
the Internet. The server SMTP at the receiving side receives the NVT ASCII data and delivers it to MIME to
be transformed back to the original data.

8. Why is an application such as POP needed for electronic messaging?

Workstations interact with the SMTP host, which receives the mail on behalf of every host in the organization,
to retrieve messages by using a client-server protocol such as Post Office Protocol, version 3 (POP3).
Although POP3 is used to download messages from the server, an SMTP client is still needed on the desktop to
forward messages from the workstation user to its SMTP mail server.

9. Write down the three types of WWW documents.

The documents in the WWW can be grouped into three broad categories: static, dynamic, and active.

Static: Fixed-content documents that are created and stored in a server.
Dynamic: Created by the web server whenever a browser requests the document.
Active: A program to be run at the client side.

10. What is the purpose of HTML?

HTML is a computer language for specifying the contents and format of a web document. It allows the text
to include embedded codes (tags) that define fonts, layout, embedded graphics, and hypertext links.

11. Define CGI.

CGI (Common Gateway Interface) is a standard for communication between HTTP servers and executable
programs. It is used in creating dynamic documents.
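A minimal sketch of a CGI program in Python, assuming a web server configured to execute it: the server runs the script and relays whatever it writes to standard output back to the browser.

#!/usr/bin/env python3
import datetime

print("Content-Type: text/html")      # CGI header section
print()                               # a blank line ends the headers
print("<html><body>")
print(f"<p>Dynamic document generated at {datetime.datetime.now()}</p>")
print("</body></html>")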

12. Name four factors needed for a secure network.


Privacy: The sender and the receiver expect confidentiality.
Authentication: The receiver is sure of the sender's identity and that an imposter has not sent the message.
Integrity: The data must arrive at the receiver exactly as it was sent.
Non-Repudiation: The receiver must be able to prove that a received message came from a specific sender.

13. How is a secret key different from public key?

In secret key, the same key is used by both parties. The sender uses this key and an encryption algorithm to
encrypt data; the receiver uses the same key and the corresponding decryption algorithm to decrypt the data. In
public key, there are two keys: a private key and a public key. The private key is kept by the receiver. The
public key is announced to the public.

14. What is a digital signature?

Digital Signature is an electronic signature that can be used to authenticate the identity of the sender of a
message or document and possibly to ensure that the original content of the message or document that has
been sent is unchanged. Digital signature is easily transportable, cannot be imitated by someone else, and can
be automatically time-stamped. The ability to ensure that the original signed message arrived means that the
sender cannot easily repudiate it later.
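A toy sketch of signing and verification built on the tiny RSA key pair from the earlier sketch; real signatures use large keys and padding schemes, so this only illustrates the idea of applying the private key to a hash of the message.

import hashlib

n, e, d = 3233, 17, 2753                      # toy RSA key pair from the earlier sketch

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)                  # only the private-key holder can produce this

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest     # anyone can check with the public key

sig = sign(b"pay Bob 100")
print(verify(b"pay Bob 100", sig))            # True
print(verify(b"pay Bob 1000", sig))           # False here: the digest changed (toy scheme)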

15. What are the advantages & disadvantages of public key encryption?

Advantages:

a) It removes the restriction of a shared secret key between two entities. Each entity can create a pair of
keys, keep the private one, and publicly distribute the other.

b) The number of keys needed is reduced tremendously. For one million users to communicate, only two
million keys (one key pair per user) are needed.

Disadvantage:

The method is effective only if very large numbers (long keys) are used, and computing the ciphertext with
such long keys takes a lot of time. So it is not recommended for encrypting large amounts of text.

16. What are the advantages & disadvantages of secret key encryption?
Advantage:

Secret key algorithms are efficient: encrypting a message takes less time because the key is usually smaller
and the operations are simpler, so secret key encryption is well suited to encrypting or decrypting long messages.

Disadvantages:
a) Each pair of users must have its own secret key. If N people in the world want to use this method,
N(N-1)/2 secret keys are needed. For one million people to communicate, nearly half a trillion secret keys are needed.
b) The distribution of the keys between two parties can be difficult.
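A quick check of the key-count arithmetic in Python; N is just the one-million example used in the answers above.

N = 1_000_000

secret_keys = N * (N - 1) // 2        # one shared key per pair of users
public_key_count = 2 * N              # one (public, private) pair per user

print(f"{secret_keys:,}")             # 499,999,500,000  -> roughly half a trillion
print(f"{public_key_count:,}")        # 2,000,000        -> two million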

17. Define permutation.


Permutation is transposition at the bit level.

Straight permutation: The number of bits in the input and output is preserved.
Compressed permutation: The number of bits is reduced (some of the bits are dropped).
Expanded permutation: The number of bits is increased (some bits are repeated).
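A small sketch of the three permutation types as table-driven bit reordering; the tables are arbitrary examples, not the ones used by DES.

def permute(bits, table):
    # each table entry is the 1-based position of the input bit to copy
    return [bits[i - 1] for i in table]

bits = [1, 0, 1, 1]                            # 4 input bits

straight   = permute(bits, [2, 4, 1, 3])       # 4 in -> 4 out, just reordered
compressed = permute(bits, [4, 1])             # 4 in -> 2 out, bits 2 and 3 dropped
expanded   = permute(bits, [1, 2, 2, 3, 4, 4]) # 4 in -> 6 out, bits 2 and 4 repeated

print(straight, compressed, expanded)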

18. Define substitution & transposition encryption.


Substitution: A character-level encryption in which each character is replaced by another character in the set.

Transposition: A character-level encryption in which the characters retain their plaintext form but the
position of each character changes.
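Tiny sketches of the two ideas in Python: a Caesar shift replaces each letter (substitution), while a columnar rearrangement only moves characters around (transposition); both are illustrative, not secure.

def caesar(text, shift=3):
    # substitution: replace each letter with the letter `shift` places later
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) if c.isalpha() else c
                   for c in text.upper())

def transpose(text, cols=4):
    # transposition: write the text in rows of `cols`, then read it column by column
    padded = text.ljust(-(-len(text) // cols) * cols)
    return "".join(padded[r * cols + c] for c in range(cols)
                   for r in range(len(padded) // cols))

print(caesar("HELLO"))          # KHOOR - different letters, same positions
print(transpose("HELLOWORLD"))  # same letters, different positions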

19. State the difference between fully qualified and partially qualified domain names.

A fully qualified domain name (FQDN) contains all labels from the host up to the root of the domain name
space and is terminated by a null label (written as a trailing dot), for example www.example.com.; it
uniquely identifies the host. A partially qualified domain name (PQDN) starts from a node but does not
reach the root; the resolver appends a configured suffix to complete the name before lookup.