Data Communication and Computer Networks NOTES
DATA COMMUNICATION
AND
COMPUTER
NETWORK
UNIT -1
Data Communication:-
Digital Communication:
In digital communication a digital signal, rather than an analog signal, is used for
communication between the source and the destination. A digital signal consists of
discrete values rather than continuous values. In digital communication the physical
transfer of data occurs in the form of a digital bit stream, i.e., 0s and 1s, over a
point-to-point or point-to-multipoint transmission medium. In digital communication the
transmitted data can be broken into packets as discrete messages, which is not possible
in analog communication.
Analog Communication:
In analog communication the data is transferred with the help of an analog signal
between the transmitter and the receiver. Any type of data can be carried by an analog
signal: the data is first converted into electrical form and then passed through the
communication channel. Analog communication uses a continuous signal which varies in
amplitude, phase, or some other property with time, in proportion to the variable being
represented.
ANALOG COMMUNICATION vs DIGITAL COMMUNICATION
04. In analog communication only a limited number of channels can be broadcast
simultaneously; digital communication can broadcast a large number of channels
simultaneously.
09. An analog communication system has complex hardware and is less flexible; a digital
communication system has less complex hardware and is more flexible.
10. In analog communication, Frequency Division Multiplexing (FDM) is used for
multiplexing; in digital communication, Time Division Multiplexing (TDM) is used.
17. In analog communication synchronization is a problem; in digital communication
synchronization is easier.
Transmission modes
The way in which data is transmitted from one device to another device is known
as transmission mode.
Each communication channel has a direction associated with it, and transmission
media provide the direction. Therefore, the transmission mode is also known as a
directional mode.
Simplex mode
Half-duplex mode
Full-duplex mode
Simplex mode
In Simplex mode, the communication is unidirectional, i.e., the data flows in only one
direction.
A device can only send the data but cannot receive it, or it can only receive the data
but cannot send it.
This transmission mode is not very popular as most communications require a two-way
exchange of data. The simplex mode is used in the business field, for example in sales
that do not require any corresponding reply.
The radio station is a simplex channel as it transmits the signal to the listeners but
never allows them to transmit back.
Keyboard and monitor are examples of the simplex mode, as a keyboard can only accept
data from the user and a monitor can only be used to display data on the screen.
The main advantage of the simplex mode is that the full capacity of the
communication channel can be utilized during transmission.
In simplex mode, the station can utilize the entire bandwidth of the communication
channel, so that more data can be transmitted at a time.
Half-Duplex mode
In a Half-duplex channel, direction can be reversed, i.e., the station can transmit
and receive the data as well.
Messages flow in both the directions, but not at the same time.
In half-duplex mode, it is possible to perform the error detection, and if any error
occurs, then the receiver requests the sender to retransmit the data.
In half-duplex mode, both the devices can send and receive the data and also can
utilize the entire bandwidth of the communication channel during the transmission
of data.
In half-duplex mode, when one device is sending data, the other has to wait; this causes
a delay in sending the data at the right time.
Full-duplex mode
In full-duplex mode, the communication is bi-directional, i.e., the data flows in both
directions.
Both the stations can send and receive the message simultaneously.
Full-duplex mode has two simplex channels. One channel has traffic moving in
one direction, and another channel has traffic flowing in the opposite direction.
The most common example of the full-duplex mode is a telephone network. When
two people are communicating with each other by a telephone line, both can talk
and listen at the same time.
Both the stations can send and receive the data at the same time.
If no dedicated path exists between the devices, the capacity of the communication
channel is divided into two parts.
Serial Communication
In serial communication the data bits are transmitted serially over a common
communication link, one after the other. It does not allow simultaneous transmission of
data because only a single channel is utilized, thereby allowing sequential rather than
simultaneous transfer.
It is highly suitable for long-distance signal transmission as only a single wire or bus
is used, so it can connect two points that are separated by a large distance. But as only
a single data bit is transmitted per clock pulse, the transmission of data is a quite
time-consuming process.
Parallel Communication
The figure below shows the transmission of 8-bit data using the parallel communication
technique:
Here, as we can see, for the transmission of 8 bits of data, 8 separate communication
links are utilized. So, rather than following sequential data transmission, simultaneous
transmission of data is allowed. This leads to faster communication between the sender
and receiver.
But for connecting multiple lines between sender and receiver, multiple connecting units
must be present between each pair of sender and receiver. This is the reason why parallel
communication is not suitable for long-distance transmission: connecting multiple lines
over large distances is very difficult and expensive.
1. Due to the presence of a single communication link, the speed of data transmission in
serial communication is slow, while the multiple links in parallel communication allow
data transmission at a comparatively faster rate.
2. Whenever there is a need for system up-gradation, upgrading a system that uses serial
communication is quite an easy task compared to upgrading a parallel communication
system.
3. In serial communication, all the data bits are transmitted over a common channel, thus
proper spacing is required to be maintained in order to avoid interference.
Conclusion
So it is clear that utilizing multiple lines for data transmission, as in parallel
communication, is advantageous as it offers faster data transmission. But at the same
time it is disadvantageous when considered in terms of cost and transmission distance.
Packet Switching:-
Process
Each packet in a packet switching technique has two parts: a header and a payload.
The header contains the addressing information of the packet and is used by the
intermediate routers to direct it towards its destination. The payload carries the
actual data.
A packet is transmitted as soon as it is available at a node, based upon its header
information. The packets of a message are not necessarily routed via the same path, so
the packets of a message may arrive at the destination out of order. It is the
responsibility of the destination to reorder the packets in order to retrieve the
original message.
The process is diagrammatically represented in the following figure. Here the
message comprises four packets, A, B, C and D, which may follow different
routes from the sender to the receiver.
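To make the reordering step concrete, here is a minimal Python sketch; the Packet class,
the four-packet message and its contents are made-up illustrations, not part of the notes:

from dataclasses import dataclass

@dataclass
class Packet:
    seq: int        # sequence number carried in the header
    dest: str       # destination address carried in the header
    payload: str    # the actual data

def reassemble(received):
    # The destination reorders packets by sequence number and joins the payloads.
    return "".join(p.payload for p in sorted(received, key=lambda p: p.seq))

# Packets of one message may take different routes and arrive out of order.
arrived = [Packet(2, "B", "WITC"), Packet(0, "B", "PACK"),
           Packet(3, "B", "HING"), Packet(1, "B", "ET-S")]
print(reassemble(arrived))   # -> PACKET-SWITCHING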
All address information is only transferred during setup phase. Once the route
to destination is discovered, entry is added to switching table of each
intermediate node. During data transfer, packet header (local header) may
contain information such as length, timestamp, sequence number etc.
Connection-oriented switching is very useful in switched WAN. Some popular
protocols which use Virtual Circuit Switching approach are X.25, Frame-
Relay, ATM and MPLS(Multi-Protocol Label Switching).
A---R1---R2---B
A is the sender (source)
R1, R2 are two routers that store and forward data
B is the receiver (destination)
To send a packet from A to B there are delays since this is a Store and
Forward network.
1. Transmission Delay
2. Propagation Delay
3. Queuing Delay
4. Processing Delay
Transmission Delay :
Time taken to put a packet onto the link. In other words, it is simply the time required
to put the data bits on the wire/communication medium. It depends on the length of the
packet and the bandwidth of the network.
Transmission Delay = Data size / Bandwidth = L/B seconds
Propagation delay :
Time taken by the first bit to travel from the sender to the receiver end of the link.
In other words, it is simply the time required for a bit to reach the destination from
the starting point. Propagation delay depends on the distance and the propagation speed.
Propagation delay = Distance / Propagation speed = d/s seconds
Queuing Delay :
Queuing delay is the time a packet waits in a queue until it can be processed. It depends
on congestion. It is the time difference between when the packet arrives at the
destination and when it is processed. It may be caused mainly by three reasons, i.e.,
originating switches, intermediate
Question: How much time will it take to send a packet of size L bits from A to B in the
given setup, if the bandwidth is R bps, the propagation speed is t meters/sec and the
distance between any two points is d meters (ignore processing and queuing delay)?
A---R1---R2---B
Ans:
N = no. of links = no. of hops = no. of routers +1 = 3
File size = L bits
Bandwidth = R bps
Propagation speed = t meter/sec
Distance = d meters
Transmission delay = (N*L)/R = (3*L)/R sec
Propagation delay = N*(d/t) = (3*d)/t sec
Total time = 3*(L/R + d/t) sec
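The same calculation can be written as a small Python sketch; the numeric values at the
end are made-up examples used only to exercise the formula:

def total_delay(L, R, d, t, hops=3):
    # Total time for one packet over `hops` store-and-forward links,
    # ignoring queuing and processing delay.
    transmission = hops * (L / R)   # each node retransmits the full packet
    propagation = hops * (d / t)    # the first bit travels d metres per link
    return transmission + propagation

# Illustrative values: 12,000-bit packet, 1 Mbps links, 2 km per link, 2*10^8 m/s
print(total_delay(L=12_000, R=1e6, d=2_000, t=2e8))   # 3*(0.012 + 0.00001) = 0.03603 s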
Advantages
Delay in delivery of packets is less, since packets are sent as soon as they are
available.
Switching devices don’t require massive storage, since they don’t have to
store the entire messages before forwarding them to the next node.
Data delivery can continue even if some parts of the network face link failure. Packets
can be routed via other paths.
It allows simultaneous usage of the same channel by multiple users.
It ensures better bandwidth usage as a number of packets from multiple
sources can be transferred via the same link.
Disadvantages
They are unsuitable for applications that cannot afford delays in communication, like
high-quality voice calls.
Packet switching has high installation costs.
They require complex protocols for delivery.
Network problems may introduce errors in packets, delay in delivery of
packets or loss of packets. If not properly handled, this may lead to loss of
critical information.
Circuit Switching:-
Circuit switching is a connection-oriented network switching technique. Here, a
dedicated route is established between the source and the destination and the entire
message is transferred through it.
Phases of Circuit Switch Connection
Circuit Establishment: In this phase, a dedicated circuit is established from
the source to the destination through a number of intermediate switching
centres. The sender and receiver transmit communication signals to request and
acknowledge the establishment of the circuit.
Data Transfer: Once the circuit has been established, data and voice are
transferred from the source to the destination. The dedicated connection
remains as long as the end parties communicate.
It is used in practice in the radio spectrum and in optical fiber to share multiple
independent signals.
Time Division Multiplexing : Divides into frames
Time-division multiplexing (TDM) is a method of transmitting and receiving
independent signals over a common signal path by means of synchronized
switches at each end of the transmission line. TDM is used for long-distance
communication links and bears heavy data traffic loads from end user.
Time division multiplexing (TDM) is also known as digital circuit switching.
Example 1: How long does it take to send a file of ‘x bits’ from host A to host B over a
circuit-switched network that uses TDM with ‘h slots’ and has a bit rate of ‘R Mbps’, if
the circuit establishment time is k seconds? Find the total time.
Explanation:
Transmission rate = Link bit rate / no. of slots = R/h bps
Transmission time = File size / Transmission rate = x / (R/h) = (x*h)/R seconds
Total time = Transmission time + Circuit setup time = (x*h)/R + k seconds
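A small Python sketch of this formula follows; the file size, link rate, slot count and
setup time used in the example call are illustrative values, not taken from the notes:

def tdm_transfer_time(x, R, h, k):
    # Total time = circuit setup time + file size / (per-slot rate R/h) = k + (x*h)/R
    per_slot_rate = R / h           # each connection gets one of the h TDM slots
    return k + x / per_slot_rate

# Illustrative values: 640,000-bit file, 1.536 Mbps link, 24 slots, 0.5 s setup
print(tdm_transfer_time(x=640_000, R=1.536e6, h=24, k=0.5))   # 0.5 + 10 = 10.5 s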
Advantages and Disadvantages of Circuit Switching
Advantages
It is suitable for long continuous transmission, since a continuous
transmission route is established, that remains throughout the conversation.
The dedicated path ensures a steady data rate of communication.
No intermediate delays are found once the circuit is established. So, they are
suitable for real time communication of both voice and data transmission.
Disadvantages
Message Switching –
Message switching was a technique developed as an alternate to circuit switching,
before packet switching was introduced. In message switching, end users
communicate by sending and receiving messages that included the entire data to
be shared. Messages are the smallest individual unit.
Also, the sender and receiver are not directly connected. There are a number of
intermediate nodes that transfer the data and ensure that the message reaches its
destination. Message-switched data networks are hence called hop-by-hop systems.
They provide 2 distinct and important characteristics:
1. Store and forward – The intermediate nodes have the responsibility of
transferring the entire message to the next node. Hence, each node must have
storage capacity. A message will only be delivered if the next hop and the link
connecting it are both available, otherwise it’ll be stored indefinitely. A store-
and-forward switch forwards a message only if sufficient resources are
available and the next hop is accepting data. This is called the store-and-
forward property.
2. Message delivery – This implies wrapping the entire information in a single
message and transferring it from the source to the destination node. Each
message must have a header that contains the message routing information,
including the source and destination.
Message switching network consists of transmission links (channels), store-and-
forward switch nodes and end stations as shown in the following picture:
2. In message switching, the data channels are shared by the network devices.
3. It makes the traffic management efficient by assigning priorities to the
messages.
Disadvantages of Message Switching –
Message switching has the following disadvantages:
1. Message switching cannot be used for real time applications as storing of
messages causes delay.
2. In message switching, the message has to be stored, for which every intermediate
device in the network requires a large storage capacity.
Applications –
The store-and-forward method was implemented in telegraph message switching
centres. Today, although many major networks and systems are packet-switched
or circuit switched networks, their delivery processes can be based on message
switching. For example, in most electronic mail systems the delivery process is
based on message switching, while the network is in fact either circuit-switched
or packet-switched.
NETWORK MODELS:-
OSI Model:-
OSI stands for Open Systems Interconnection. It was developed by ISO – the
‘International Organization for Standardization’ – in the year 1984. It is a 7-layer
architecture, with each layer having specific functionality to perform. All these 7
layers work collaboratively to transmit data from one person to another across the
globe.
The lowest layer of the OSI reference model is the physical layer. It is
responsible for the actual physical connection between the devices. The physical
layer contains information in the form of bits. It is responsible for transmitting
individual bits from one node to the next. When receiving data, this layer will get
the signal received and convert it into 0s and 1s and send them to the Data Link
layer, which will put the frame back together.
The data link layer is responsible for the node to node delivery of the message.
The main function of this layer is to make sure data transfer is error-free from one
node to another, over the physical layer. When a packet arrives in a network, it is
the responsibility of DLL to transmit it to the Host using its MAC address.
Data Link Layer is divided into two sub layers :
1. Logical Link Control (LLC)
2. Media Access Control (MAC)
The packet received from Network layer is further divided into frames depending
on the frame size of NIC(Network Interface Card). DLL also encapsulates Sender
and Receiver’s MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP(Address Resolution
Protocol) request onto the wire asking “Who has that IP address?” and the
destination host will reply with its MAC address.
Network layer works for the transmission of data from one host to the other
located in different networks. It also takes care of packet routing i.e. selection of
the shortest path to transmit the packet, from the number of routes available. The
sender & receiver’s IP address are placed in the header by the network layer.
The functions of the Network layer are :
1. Routing: The network layer protocols determine which route is suitable from
source to destination. This function of network layer is known as routing.
2. Logical Addressing: In order to identify each device on internetwork
uniquely, network layer defines an addressing scheme. The sender &
receiver’s IP address are placed in the header by network layer. Such an
address distinguishes each device uniquely and universally.
* A segment in the Network layer is referred to as a packet.
Transport layer provides services to application layer and takes services from
network layer. The data in the transport layer is referred to as Segments. It is
responsible for the End to End Delivery of the complete message. The transport
layer also provides the acknowledgement of the successful data transmission and
re-transmits the data if an error is found.
• At sender’s side:
Transport layer receives the formatted data from the upper layers,
performs Segmentation and also implements Flow & Error control to ensure
proper data transmission. It also adds Source and Destination port number in its
SCENARIO:
Let’s consider a scenario where a user wants to send a message through some
Messenger application running in his browser. The “Messenger” here acts as the
application layer which provides the user with an interface to create the data. This
message or so-called Data is compressed, encrypted (if any secure data) and
converted into bits (0’s and 1’s) so that it can be transmitted.
At the very top of the OSI Reference Model stack of layers, we find Application
layer which is implemented by the network applications. These applications
produce the data, which has to be transferred over the network. This layer also
serves as a window for the application services to access the network and for
displaying the received information to the user.
Ex: Application – Browsers, Skype Messenger etc.
**The Application Layer is also called the Desktop Layer.
1. Network Access Layer –
This layer corresponds to the combination of the Data Link Layer and the Physical Layer
of the OSI model. It looks after hardware addressing, and the protocols present in this
layer allow for the physical transmission of data.
We just talked about ARP being a protocol of the Internet layer, but there is a conflict
about declaring it a protocol of the Internet Layer or the Network Access layer. It is
described as residing in layer 3 while being encapsulated by layer 2 protocols.
2. Internet Layer –
This layer parallels the functions of OSI’s Network layer. It defines the protocols
which are responsible for logical transmission of data over the entire network. The
main protocols residing at this layer are :
1. IP – stands for Internet Protocol and it is responsible for delivering packets
from the source host to the destination host by looking at the IP addresses in the
packet headers. IP has 2 versions:
IPv4 and IPv6. IPv4 is the one that most websites are using currently. But IPv6 is
growing, as the number of IPv4 addresses is limited when compared to the number of users.
2. ICMP – stands for Internet Control Message Protocol. It is encapsulated within
IP datagrams and is responsible for providing hosts with information about
network problems.
3. ARP – stands for Address Resolution Protocol. Its job is to find the hardware
address of a host from a known IP address. ARP has several types: Reverse
ARP, Proxy ARP, Gratuitous ARP and Inverse ARP.
3. Host-to-Host Layer –
This layer is analogous to the transport layer of the OSI model. It is responsible for
end-to-end communication and error-free delivery of data. It shields the upper-
layer applications from the complexities of data. The two main protocols present in
this layer are :
1. Transmission Control Protocol (TCP) – It is known to provide reliable and
error-free communication between end systems. It performs sequencing and
segmentation of data. It also has acknowledgment feature and controls the flow
of the data through flow control mechanism. It is a very effective protocol but
4. Application Layer –
This layer performs the functions of top three layers of the OSI model:
Application, Presentation and Session Layer. It is responsible for node-to-node
communication and controls user-interface specifications. Some of the protocols
present in this layer are: HTTP, HTTPS, FTP, TFTP, Telnet, SSH, SMTP, SNMP,
NTP, DNS, DHCP, NFS, X Window, LPD. A few of these protocols are described below:
1. HTTP and HTTPS – HTTP stands for Hypertext transfer protocol. It is used
by the World Wide Web to manage communications between web browsers and
servers. HTTPS stands for HTTP-Secure. It is a combination of HTTP with
SSL(Secure Socket Layer). It is efficient in cases where the browser need to fill
out forms, sign in, authenticate and carry out bank transactions.
2. SSH – SSH stands for Secure Shell. It is terminal emulation software similar to
Telnet. The reason SSH is preferred is its ability to maintain an encrypted connection.
It sets up a secure session over a TCP/IP connection.
3. NTP – NTP stands for Network Time Protocol. It is used to synchronize the
clocks on our computer to one standard time source. It is very useful in
situations like bank transactions. Assume the following situation without the
presence of NTP. Suppose you carry out a transaction, where your computer
reads the time at 2:30 PM while the server records it at 2:28 PM. The server can
crash very badly if it’s out of sync.
The diagrammatic comparison of the TCP/IP and OSI models is as follows:
TCP/IP vs OSI:
TCP/IP: TCP refers to Transmission Control Protocol.
OSI: OSI refers to Open Systems Interconnection.
(multiple access protocols) to manage the students and make them answer one at
a time.
Thus, protocols are required for sharing data on non dedicated channels. Multiple
access protocols can be subdivided further as –
1. Random Access Protocol: In this, all stations have the same priority, that is, no
station has more priority than another. Any station can send data depending on the
medium's state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LANs but is also applicable to shared media. In
this, multiple stations can transmit data at the same time, which can lead to collisions
and data being garbled.
Pure Aloha:
When a station sends data it waits for an acknowledgement. If the
acknowledgement doesn’t come within the allotted time then the station waits
for a random amount of time called back-off time (Tb) and re-sends the data.
Since different stations wait for different amount of time, the probability of
further collision decreases.
Vulnerable time = 2 * Frame transmission time
Throughput = G * e^(-2G)
Maximum throughput = 0.184 for G = 0.5
Slotted Aloha:
It is similar to pure aloha, except that we divide time into slots and sending of
data is allowed only at the beginning of these slots. If a station misses out the
allowed time, it must wait for the next slot. This reduces the probability of
collision.
Vulnerable time = Frame transmission time
Throughput = G * e^(-G)
Maximum throughput = 0.368 for G = 1
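The two throughput formulas can be checked with a short Python sketch (G is the offered
load, i.e. the average number of frames attempted per frame-transmission time):

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)      # S = G * e^(-2G), maximum ~0.184 at G = 0.5

def slotted_aloha_throughput(G):
    return G * math.exp(-G)          # S = G * e^(-G),  maximum ~0.368 at G = 1

print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368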
For more information on ALOHA refer – LAN Technologies
(b) CSMA – Carrier Sense Multiple Access ensures fewer collisions as the
station is required to first sense the medium (for idle or busy) before transmitting
data. If it is idle then it sends data, otherwise it waits till the channel becomes
idle. However there is still chance of collision in CSMA due to propagation
delay. For example, if station A wants to send data, it will first sense the
medium. If it finds the channel idle, it will start sending data. However, by the
time the first bit of data is transmitted (delayed due to propagation delay) from
station A, if station B requests to send data and senses the medium it will also
find it idle and will also send data. This will result in collision of data from
station A and B.
P-persistent: The node senses the medium; if idle, it sends the data with probability p.
If the data is not transmitted (probability 1-p), it waits for some time and checks the
medium again; if the medium is then found idle, it again sends with probability p. This
process repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
O-persistent: The priority of the nodes is decided beforehand and transmission occurs in
that order. If the medium is idle, a node waits for its time slot to send data.
(c) CSMA/CD – Carrier sense multiple access with collision detection. Stations
can terminate transmission of data if collision is detected.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network
protocol for carrier transmission that operates in the Medium Access Control
(MAC) layer. It senses or listens whether the shared channel for transmission is
busy or not, and defers transmissions until the channel is free. The collision
detection technology detects collisions by sensing transmissions from other
stations. On detection of a collision, the station stops transmitting, sends a jam
signal, and then waits for a random time interval before retransmission.
Algorithms
The algorithm of CSMA/CD is:
When a frame is ready, the transmitting station checks whether the channel
is idle or busy.
If the channel is busy, the station waits until the channel becomes idle.
If the channel is idle, the station starts transmitting and continually monitors
the channel to detect collision.
If a collision is detected, the station starts the collision resolution algorithm.
If the frame is transmitted without a collision, the station resets the retransmission
counters and completes frame transmission.
The algorithm of Collision Resolution is:
The station continues transmission of the current frame for a specified time
along with a jam signal, to ensure that all the other stations detect collision.
The station increments the retransmission counter.
If the maximum number of retransmission attempts is reached, then the
station aborts transmission.
Otherwise, the station waits for a backoff period which is generally a
function of the number of collisions and restart main algorithm.
The following flowchart summarizes the algorithms:
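Since the flowchart is not reproduced here, the following Python sketch walks through the
same steps; channel_is_idle and collision_detected are hypothetical stand-ins that
randomly simulate carrier sensing and collisions, and the slot time and attempt limit are
typical Ethernet values used only for illustration:

import random, time

SLOT_TIME = 51.2e-6     # classic 10 Mbps Ethernet slot time (illustrative)
MAX_ATTEMPTS = 15       # abort after this many collisions

def channel_is_idle():          # hypothetical stand-in for carrier sensing
    return random.random() > 0.3

def collision_detected():       # hypothetical stand-in: ~20% of transmissions collide
    return random.random() < 0.2

def csma_cd_send(frame):
    attempts = 0
    while True:
        while not channel_is_idle():          # defer while the channel is busy
            time.sleep(SLOT_TIME)
        # transmit and keep monitoring the channel while sending
        if not collision_detected():
            return True                       # frame sent; counters effectively reset
        # collision: a jam signal would be sent here, then back off and retry
        attempts += 1
        if attempts > MAX_ATTEMPTS:
            return False                      # too many collisions, abort transmission
        k = random.randint(0, 2 ** min(attempts, 10) - 1)
        time.sleep(k * SLOT_TIME)             # binary exponential backoff

print(csma_cd_send("frame-1"))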
Though this algorithm detects collisions, it does not reduce the number of
collisions.
It is not appropriate for large networks, as performance degrades exponentially when more
stations are added.
(d) CSMA/CA – Carrier sense multiple access with collision avoidance. The process of
collision detection involves the sender receiving acknowledgement signals. If there is
just one signal (its own), then the data was sent successfully, but if there are two
signals (its own and the one with which it has collided), then it means a collision has
occurred. To distinguish between these two cases, a collision must have a significant
impact on the received signal. However, this is not the case in wireless networks, so
CSMA/CA is used there.
CSMA/CA avoids collision by:
1. Interframe space – Station waits for medium to become idle and if found idle
it does not immediately send data (to avoid collision due to propagation delay)
rather it waits for a period of time called Interframe space or IFS. After this
time it again checks the medium for being idle. The IFS duration depends on
the priority of station.
2. Contention Window – It is an amount of time divided into slots. If the sender is ready
to send data, it chooses a random number of slots as wait time, which doubles every time
the medium is not found idle. If the medium is found busy, it does not restart the entire
process; rather, it resumes the timer once the channel is found idle again.
3. Acknowledgement – The sender re-transmits the data if acknowledgement is
not received before time-out.
2. Controlled Access:
In this, the data is sent by that station which is approved by all other stations. For
further details refer – Controlled Access Protocols
3. Channelization:
In this, the available bandwidth of the link is shared in time, frequency and code
to multiple stations to access channel simultaneously.
Frequency Division Multiple Access (FDMA) – The available bandwidth is
divided into equal bands so that each station can be allocated its own band.
Guard bands are also added so that no two bands overlap, to avoid crosstalk and noise.
Time Division Multiple Access (TDMA) – In this, the bandwidth is shared
between multiple stations. To avoid collisions, time is divided into slots and stations
are allotted these slots to transmit data. However, there is an overhead of
synchronization, as each station needs to know its time slot. This is resolved by adding
synchronization bits to each slot. Another issue with TDMA is propagation delay, which is
resolved by the addition of guard times.
For more details refer – Circuit Switching
Code Division Multiple Access (CDMA) – One channel carries all
transmissions simultaneously. There is neither division of bandwidth nor
division of time. For example, if there are many people in a room all speaking at the
same time, perfect reception of data is still possible if only the two communicating
persons speak the same language. Similarly, data from different stations can be
transmitted simultaneously in different code languages.
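A minimal Python sketch of this idea follows, using two orthogonal chip sequences; the
codes and data bits are made-up illustrative values:

CODE_A = [+1, +1, +1, +1]       # station A's chip sequence
CODE_B = [+1, -1, +1, -1]       # station B's chip sequence, orthogonal to A's

def encode(bit, code):
    d = +1 if bit == 1 else -1
    return [d * c for c in code]

def decode(channel, code):
    # The inner product with a station's own code recovers that station's bit,
    # while the orthogonal station's contribution cancels out.
    score = sum(x * c for x, c in zip(channel, code)) / len(code)
    return 1 if score > 0 else 0

# Both stations transmit at the same time; the shared medium simply adds the signals.
channel = [a + b for a, b in zip(encode(1, CODE_A), encode(0, CODE_B))]
print(decode(channel, CODE_A), decode(channel, CODE_B))   # -> 1 0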
UNIT:-2
NETWORK LAYER:-
Types of ARP
There are four types of Address Resolution Protocol, which are given below:
o Proxy ARP
o Gratuitous ARP
o Reverse ARP (RARP)
o Inverse ARP
Proxy ARP - Proxy ARP is a method through which a Layer 3 device may respond to ARP
requests for a target that is in a different network from the sender. The router
configured for Proxy ARP responds to the ARP request, maps its own MAC address to the
target IP address, and fools the sender into believing that it has reached its
destination.
At the back end, the proxy router sends its packets to the appropriate destination
because the packets contain the necessary information.
Example - If Host A wants to transmit data to Host B, which is on a different network,
Host A sends an ARP request message to obtain a MAC address for Host B. The router
responds to Host A with its own MAC address, pretending to be the destination. When the
data is transmitted to the destination by Host A, it is sent to the gateway, which
forwards it to Host B. This is known as Proxy ARP.
Gratuitous ARP - Gratuitous ARP is an ARP request made by a host for its own IP address,
which helps to identify a duplicate IP address. It is a broadcast request for the
device's own IP address. If a switch or router sends an ARP request for its own IP
address and no ARP response is received, then no other node is using the IP address
allocated to that switch or router. However, if a router or switch sends an ARP request
for its IP address and receives an ARP response, another node is already using the IP
address allocated to it.
There are some primary use cases of gratuitous ARP that are given below:
o The gratuitous ARP is used to update the ARP table of other devices.
o It also checks whether the host is using the original IP address or a duplicate
one.
Reverse ARP (RARP) - It is a networking protocol used by a client machine in a local area
network (LAN) to request its IPv4 address from the gateway router's table. A table is
created by the network administrator in the gateway router and is used to map MAC
addresses to the corresponding IP addresses.
When a new machine is set up, or a machine has no memory to store its IP address, the
device needs to find its own IP address. The device sends a RARP broadcast packet,
including its own MAC address in the address field of both the sender and the receiver.
A host installed inside the local network, called the RARP server, is prepared to respond
to such broadcast packets. The RARP server then tries to locate a matching entry in the
MAC-to-IP address mapping table. If an entry matches, the RARP server sends the response
packet, along with the IP address, to the requesting computer.
Inverse ARP (InARP) - Inverse ARP is the inverse of ARP; it is used to find the IP
addresses of nodes from their data link layer addresses. It is mainly used for Frame
Relay and ATM networks, where Layer 2 virtual circuit addresses are obtained from Layer 2
signaling. When using these virtual circuits, the relevant Layer 3 addresses are not
directly available, so InARP is used to discover them.
CASE-1: The sender is a host and wants to send a packet to another host on the
same network.
Use ARP to find another host’s physical address
CASE-2: The sender is a host and wants to send a packet to another host on
another network.
Sender looks at its routing table.
Find the IP address of the next hop (router) for this destination.
Use ARP to find the router’s physical address
CASE-3: The sender is a router that has received a datagram destined for a host on
another network.
The router checks its routing table.
Find the IP address of the next router.
Use ARP to find the next router’s physical address.
CASE-4: The sender is a router that has received a datagram destined for a host
in the same network.
Use ARP to find this host’s physical address.
NOTE: An ARP request is a broadcast, and an ARP response is a Unicast.
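The four cases can be summarised in a small Python sketch that decides which IP address
to ARP for; the routing-table entries and addresses are hypothetical examples:

import ipaddress

def arp_target(dest_ip, own_network, routing_table):
    # Return the IP address whose physical (MAC) address must be resolved with ARP.
    dest = ipaddress.ip_address(dest_ip)
    if dest in ipaddress.ip_network(own_network):
        return dest_ip                        # same network: ARP for the host itself
    for network, next_hop in routing_table.items():
        if dest in ipaddress.ip_network(network):
            return next_hop                   # other network: ARP for the next-hop router
    raise ValueError("no route to destination")

routes = {"10.0.2.0/24": "192.168.1.254"}     # made-up routing-table entry
print(arp_target("192.168.1.20", "192.168.1.0/24", routes))   # -> 192.168.1.20
print(arp_target("10.0.2.7", "192.168.1.0/24", routes))       # -> 192.168.1.254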
History of RARP :
RARP was proposed in 1984 by a university network group. This protocol provided an IP
address to the workstation. These diskless workstations were also the platform for the
first workstations from Sun Microsystems.
Working of RARP :
The RARP is on the Network Access Layer and is employed to send data between two points
in a network.
Each network participant has two unique addresses: an IP address (a logical address) and
a MAC address (the physical address).
The IP address is assigned by software, whereas the MAC address is built into the
hardware.
The RARP server that responds to RARP requests can be any normal computer within the
network. However, it must hold the data of all the MAC
RARP vs ARP:
RARP stands for Reverse Address Resolution Protocol; ARP stands for Address Resolution
Protocol.
In RARP, the MAC address is known and the IP address is requested; in ARP, the IP address
is known and the MAC address is requested.
RARP uses the value 3 for requests and 4 for responses; ARP uses the value 1 for requests
and 2 for responses.
Uses of RARP :
RARP is used to convert the Ethernet address to an IP address.
It is available for the LAN technologies like FDDI, token ring LANs, etc.
Disadvantages of RARP :
The Reverse Address Resolution Protocol had a few disadvantages which eventually led to
its replacement by BOOTP and DHCP. Some of the disadvantages are listed below:
The RARP server must be located within the same physical network.
The computer sends the RARP request on the lowest layer of the network. Thus, it is
impossible for a router to forward the packet, because the request never leaves the
physical network.
The RARP cannot handle the subnetting process because no subnet masks are sent. If the
network is split into multiple subnets, a RARP server must be available within each of
them.
It isn't possible to configure the PC in a modern network.
It doesn't fully utilize the potential of a network like Ethernet.
RARP has now become an obsolete protocol since it operates at a low level. Due to this,
it requires direct access to the network, which makes it difficult to build a server.
ICMP:- IP does not have an inbuilt mechanism for sending error and control messages; it
depends on the Internet Control Message Protocol (ICMP) to provide error control. ICMP is
used for reporting errors and management queries. It is a supporting protocol used by
network devices like routers for sending error messages and operational information,
e.g. that the requested service is not available or that a host or router could not be
reached.
Source quench message :
A source quench message is a request to decrease the traffic rate of messages being sent
to a host (destination). In other words, when the receiving host detects that the rate at
which packets arrive (the traffic rate) is too fast, it sends a source quench message to
the source to slow the pace down so that packets are not lost.
ICMP takes the source IP from the discarded packet and informs the source by sending a
source quench message.
The source then reduces the speed of transmission so that the router is relieved from
congestion.
When the congested router is far away from the source, ICMP sends hop-by-hop source
quench messages so that every router along the path reduces the speed of transmission.
Parameter problem :
Whenever a packet arrives at a router, the calculated header checksum should be equal to
the received header checksum; only then is the packet accepted by the router.
ICMP takes the source IP from the discarded packet and informs the source by sending a
parameter problem message.
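As an illustration of the checksum test a router performs, here is a sketch of the
standard 16-bit one's-complement Internet checksum; the all-zero 20-byte "header" is a
made-up example, and the checksum field is assumed to sit at bytes 10-11 as in an IPv4
header:

def internet_checksum(data: bytes) -> int:
    # One's complement of the one's-complement sum of all 16-bit words (RFC 1071 style).
    if len(data) % 2:
        data += b"\x00"                          # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16) # fold the carry back in
    return ~total & 0xFFFF

header = bytearray(20)                           # all-zero header, checksum field = 0
header[10:12] = internet_checksum(header).to_bytes(2, "big")   # fill the checksum field
print(internet_checksum(header) == 0)            # a valid header re-checksums to 0 -> True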
Time exceeded message :
When some fragments of a datagram are lost in the network, the fragments held by the
router are dropped. ICMP then takes the source IP from the discarded packet and informs
the source, by sending a time exceeded message, that the datagram was discarded because
the time-to-live field reached zero.
Destination un-reachable :
Destination unreachable is generated by the host or its inbound gateway to inform
the client that the destination is unreachable for some reason.
It is not only routers that send ICMP error messages; sometimes the destination host
sends an ICMP error message when some type of failure (link failure, hardware failure,
port failure, etc.) happens in the network.
Redirection message :
Redirect requests that data packets be sent on an alternate route. The message informs a
host to update its routing information (to send packets on an alternate route).
Example: If a host tries to send data through router R1, and R1 sends the data on to
router R2 while a direct path from the host to R2 exists, then R1 will send a redirect
message to inform the host that a better route to the destination is available directly
through R2. The host then sends data packets for that destination directly to R2.
Router R2 will send the original datagram on to the intended destination.
But if the datagram contains routing information, this message will not be sent even if a
better route is available, as redirects should only be sent by gateways and should not be
sent by Internet hosts.
Gaming –
The Internet Group Management Protocol is often used in simulation games which have
multiple users over the network, such as online games.
Web conferencing tools –
Video conferencing is a new method to meet people at your own convenience, and IGMP
connects the users for conferencing and transfers the messages/data packets efficiently.
Types:
There are 3 versions of IGMP. These versions are backward compatible.
Following are the versions of IGMP:
1. IGMPv1 :
This version of the IGMP communication protocol allows all supporting hosts to join
multicast groups using membership requests and includes some basic features. However,
hosts cannot leave a group on their own and have to wait for a timeout to leave the
group.
The message packet format in IGMPv1:
Version –
Set to 1.
Type –
1 for Host Membership Query and Host Membership Report.
Unused –
8-bits of zero which are of no use.
Checksum –
It is the one's complement of the one's-complement sum of the whole IGMP message.
Group Address –
The group address field is zeroed when sent and ignored when received in a Host
Membership Query message.
2. IGMPv2 :
IGMPv2 is the revised version of the IGMPv1 communication protocol. It adds the
functionality of leaving the multicast group using a Leave Group message.
The message packet format in IGMPv2:
Type –
0x11 for Membership Query
0x12 for IGMPv1 Membership Report
0x16 for IGMPv2 Membership Report
0x22 for IGMPv3 Membership Report
0x17 for Leave Group
Group Address –
It is set as 0 when sending a general query. Otherwise, multicast address for
group-specific or source-specific queries.
3. IGMPv3 :
IGMPv2 was revised to IGMPv3, which added source-specific multicast and membership report
aggregation. These reports are sent to 224.0.0.22.
The message packet format in IGMPv3:
QRV –
It represents the Querier's Robustness Variable. Routers adopt the QRV value from the
most recently received query as their own value, until the most recently received QRV is
zero.
QQIC –
It represents Querier’s Query Interval Code.
Number of sources –
It represents the number of source addresses present in the query. For general
query or group-specific query, this field is zero and for group-and-source-
specific query, this field is non-zero.
Source Address[i] –
It represents the IP unicast address for N fields.
Working:
IGMP works on devices that are capable of handling multicast groups and dynamic
multicasting. These devices allow hosts to join or leave membership in a multicast group,
and also allow adding and removing clients from the group. This communication protocol
operates between a host and a local multicast router. When a multicast group is created,
the multicast group address is in the range of class D (224-239) IP addresses and is used
as the destination IP address in the packets.
L2 or Level-2 devices such as switches are used between the host and the multicast router
for IGMP snooping. IGMP snooping is a process of listening to the IGMP network traffic in
a controlled manner. The switch receives the message from the host and forwards the
membership report to the local multicast router. The multicast traffic is further
forwarded to remote routers from the local multicast router using PIM (Protocol
Independent Multicast) so that clients can receive the messages/data packets. Clients
wishing to join a group send a join message in the query, and the switch intercepts the
message and adds the ports of the clients to its multicast routing table.
Advantages:
The IGMP communication protocol efficiently transmits multicast data only to the
receivers, so no junk packets are delivered to other hosts, which gives optimized
performance.
Bandwidth is fully utilized as all the shared links are connected.
Hosts can leave one multicast group and join another.
Disadvantages:
It does not provide good efficiency in filtering and security.
Due to lack of TCP, network congestion can occur.
IGMP is vulnerable to some attacks such as DOS attack (Denial-Of-Service).
Internet Protocol is one of the major protocols in the TCP/IP protocols suite. This
protocol works at the network layer of the OSI model and at the Internet layer of
the TCP/IP model. Thus this protocol has the responsibility of identifying hosts
based upon their logical addresses and to route data among them over the
underlying network.
IP provides a mechanism to uniquely identify hosts by an IP addressing scheme.
IP uses best effort delivery, i.e. it does not guarantee that packets would be
delivered to the destined host, but it will do its best to reach the destination.
Internet Protocol version 4 uses 32-bit logical address.
IPv4 - Packet Structure
Internet Protocol, being a layer-3 (OSI) protocol, takes data segments from layer 4
(Transport) and divides them into packets. An IP packet encapsulates the data unit
received from the layer above and adds its own header information.
Unicast Addressing Mode:
In this mode, data is sent only to one destined host. The Destination Address field
contains the 32-bit IP address of the destination host. Here the client sends data to the
targeted server −
Broadcast Addressing Mode:
In this mode, the packet is addressed to all the hosts in a network segment. The
Destination Address field contains the special broadcast address, i.e. 255.255.255.255.
When a host sees this packet on the network, it is bound to process it. Here the client
sends a packet, which is entertained by all the servers −
Multicast Addressing Mode:
This mode is a mix of the previous two modes, i.e. the packet sent is neither destined to
a single host nor to all the hosts on the segment. In this packet, the Destination
Address contains a special address which starts with 224.x.x.x and can be entertained by
more than one host.
Here a server sends packets which are entertained by more than one server.
Every network has one IP address reserved for the Network Number which
represents the network and one IP address reserved for the Broadcast Address,
which represents all the hosts in that network.
A single IP address can contain information about the network and its sub-
network and ultimately the host. This scheme enables the IP Address to be
hierarchical where a network can have many sub-networks which in turn can have
many hosts.
Subnet Mask
The 32-bit IP address contains information about the host and its network. It is very
necessary to distinguish between the two. For this, routers use a Subnet Mask, which is
as long as the network address portion of the IP address; the Subnet Mask is also 32 bits
long. If the IP address in binary is ANDed with its Subnet Mask, the result yields the
network address. For example, say the IP address is 192.168.1.152 and the Subnet Mask is
255.255.255.0, then −
This way the Subnet Mask helps extract the Network ID and the Host ID from an IP address.
It can now be identified that 192.168.1.0 is the network number and 192.168.1.152 is the
host on that network.
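The AND operation on the example address can be sketched in a few lines of Python:

def network_address(ip, mask):
    # Bitwise-AND each octet of the IP address with the corresponding mask octet.
    ip_octets = [int(o) for o in ip.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

print(network_address("192.168.1.152", "255.255.255.0"))   # -> 192.168.1.0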
Binary Representation
The positional value method is the simplest way of converting a binary value to decimal.
An IP address is a 32-bit value which is divided into 4 octets. A binary octet contains 8
bits, and the value of each bit is determined by the position of the bit value '1' in the
octet.
IPV6:- Internet Protocol version 6 (IPv6) is the latest revision of the Internet Protocol
(IP) and the first version of the protocol to be widely deployed. IPv6 was developed by
the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of
IPv4 address exhaustion.
IPv6 - Features
The successor of IPv4 is not designed to be backward compatible. Trying to keep
the basic functionalities of IP addressing, IPv6 is redesigned entirely. It offers the
following features:
Larger Address Space
In contrast to IPv4, IPv6 uses 4 times more bits to address a device on the Internet.
These extra bits can provide approximately 3.4×10^38 different combinations of addresses.
This address space can accommodate the aggressive requirement of address allotment for
almost everything in this world. According to an estimate, 1564 addresses can be
allocated to every square metre of this earth.
Simplified Header
The IPv6 header has been simplified by moving all unnecessary information and options
(which are present in the IPv4 header) to the end of the IPv6 header. The IPv6 header is
only twice as big as the IPv4 header, even though an IPv6 address is four times longer.
End-to-end Connectivity
Every system now has unique IP address and can traverse through the
Internet without using NAT or other translating components. After IPv6 is
fully implemented, every host can directly reach other hosts on the Internet,
with some limitations involved like Firewall, organization policies, etc.
Auto-configuration
IPv6 supports both stateful and stateless auto configuration mode of its host
devices. This way, absence of a DHCP server does not put a halt on inter
segment communication.
Faster Forwarding/Routing
Simplified header puts all unnecessary information at the end of the header.
The information contained in the first part of the header is adequate for a
Router to take routing decisions, thus making routing decision as quickly as
looking at the mandatory header.
IPSec
Initially it was decided that IPv6 must have IPSec security, making it more
secure than IPv4. This feature has now been made optional.
No Broadcast
Though Ethernet/Token Ring are considered broadcast networks because they support
broadcasting, IPv6 no longer has any broadcast support. It uses multicast to communicate
with multiple hosts.
Anycast Support
This is another characteristic of IPv6. IPv6 has introduced Anycast mode of
packet routing. In this mode, multiple interfaces over the Internet are
assigned same Anycast IP address. Routers, while routing, send the packet
to the nearest destination.
Mobility
IPv6 was designed keeping mobility in mind. This feature enables hosts
(such as mobile phone) to roam around in different geographical area and
remain connected with the same IP address. The mobility feature of IPv6
takes advantage of auto IP configuration and Extension headers.
Enhanced Priority Support
IPv4 used 6 bits DSCP (Differential Service Code Point) and 2 bits ECN
(Explicit Congestion Notification) to provide Quality of Service but it could
only be used if the end-to-end devices support it, that is, the source and
destination device and underlying network must support it.
In IPv6, Traffic class and Flow label are used to tell the underlying routers
how to efficiently process the packet and route it.
Smooth Transition
The large IP address scheme in IPv6 enables devices to be allocated globally unique IP
addresses. This mechanism saves IP addresses and NAT is not required, so devices can
send/receive data to and from each other; for example, VoIP and/or any streaming media
can be used much more efficiently.
The other fact is that the header is less loaded, so routers can make forwarding
decisions and forward packets as quickly as they arrive.
Extensibility
One of the major advantages of the IPv6 header is that it is extensible to add more
information in the options part. IPv4 provides only 40 bytes for options, whereas options
in IPv6 can be as much as the size of the IPv6 packet itself.
IPv6 - Addressing Modes
In computer networking, the addressing mode refers to the mechanism by which a host is
addressed on the network. IPv6 offers several types of modes by which a single host can
be addressed, more than one host can be addressed at once, or the host at the closest
distance can be addressed.
Unicast
Multicast
The IPv6 multicast mode is the same as that of IPv4. A packet destined to multiple hosts
is sent to a special multicast address. All the hosts interested in that multicast
information need to join that multicast group first. All the interfaces that joined the
group receive the multicast packet and process it, while other hosts not interested in
the multicast packets ignore the multicast information.
Anycast
IPv6 has introduced a new type of addressing, which is called Anycast addressing.
In this addressing mode, multiple interfaces (hosts) are assigned same Anycast IP
address. When a host wishes to communicate with a host equipped with an
Anycast IP address, it sends a Unicast message. With the help of complex routing
mechanism, that Unicast message is delivered to the host closest to the Sender in
terms of Routing cost.
Fixed Header
[Image:
IPv6 Fixed Header]
IPv6 fixed header is 40 bytes long and contains the following information.
2 Traffic Class (8-bits): These 8 bits are divided into two parts. The most significant 6
bits are used for the Type of Service, to let the routers know what services should be
provided to this packet. The least significant 2 bits are used for Explicit Congestion
Notification (ECN).
3 Flow Label (20-bits): This label is used to maintain the sequential flow of the
packets belonging to a communication. The source labels the sequence to help
the router identify that a particular packet belongs to a specific flow of
information. This field helps avoid re-ordering of data packets. It is designed
for streaming/real-time media.
4 Payload Length (16-bits): This field is used to tell the routers how much
information a particular packet contains in its payload. Payload is composed of
Extension Headers and Upper Layer data. With 16 bits, up to 65535 bytes can
be indicated; but if the Extension Headers contain Hop-by-Hop Extension
Header, then the payload may exceed 65535 bytes and this field is set to 0.
6 Hop Limit (8-bits): This field is used to stop the packet from looping in the network
infinitely. This is the same as TTL in IPv4. The value of the Hop Limit field is
decremented by 1 as the packet passes a link (router/hop). When the field reaches 0, the
packet is discarded.
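Putting the fixed-header fields together, here is a small Python sketch that unpacks a
40-byte IPv6 fixed header; the sample packet at the end is hand-built purely for
illustration:

import struct

def parse_ipv6_fixed_header(header: bytes):
    first_word, payload_len, next_header, hop_limit = struct.unpack("!IHBB", header[:8])
    return {
        "version":        first_word >> 28,            # 4 bits
        "traffic_class":  (first_word >> 20) & 0xFF,   # 8 bits
        "flow_label":     first_word & 0xFFFFF,        # 20 bits
        "payload_length": payload_len,                 # 16 bits
        "next_header":    next_header,                 # 8 bits (59 = no header follows)
        "hop_limit":      hop_limit,                   # 8 bits, like TTL in IPv4
        "source":         header[8:24].hex(),          # 128-bit source address
        "destination":    header[24:40].hex(),         # 128-bit destination address
    }

# version 6, traffic class 0, flow label 0, empty payload, next header 59, hop limit 64
sample = struct.pack("!IHBB", 6 << 28, 0, 59, 64) + bytes(32)
print(parse_ipv6_fixed_header(sample)["hop_limit"])    # -> 64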
Extension Headers
In IPv6, the Fixed Header contains only that much information which is
necessary, avoiding those information which is either not required or is rarely
used. All such information is put between the Fixed Header and the Upper layer
header in the form of Extension Headers. Each Extension Header is identified by a
distinct value.
When Extension Headers are used, IPv6 Fixed Header’s Next Header field points
to the first Extension Header. If there is one more Extension Header, then the first
Extension Header’s ‘Next-Header’ field points to the second one, and so on. The
last Extension Header’s ‘Next-Header’ field points to the Upper Layer Header.
Thus, all the headers point to the next one in a linked-list manner.
If the Next Header field contains the value 59, it indicates that there are no
headers after this header, not even Upper Layer Header.
The following Extension Headers must be supported as per RFC 2460:
These headers:
1. should be processed by First and subsequent destinations.
2. should be processed by Final Destination.
Extension Headers are arranged one after another in a linked list manner, as
depicted in the following diagram:
[Image:
Extension Headers Connected Format]
IPv4 vs IPv6:
IPv4 has a 32-bit address length; IPv6 has a 128-bit address length.
In IPv4 a checksum field is available; in IPv6 a checksum field is not available.
IPv4 has a header of 20-60 bytes; IPv6 has a fixed header of 40 bytes.
Classful Addressing:- The 32 bit IP address is divided into five sub-classes. These
are:
Class A
Class B
Class C
Class D
Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for
multicast and experimental purposes respectively. The order of bits in the first octet
determines the class of the IP address.
IPv4 address is divided into two parts:
Network ID
Host ID
The class of IP address is used to determine the bits used for network ID and host
ID and the number of total networks and hosts possible in that particular class.
Each ISP or network administrator assigns IP address to each device that is
connected to its network.
Class A:
2^7 - 2 = 126 network IDs (here 2 addresses are subtracted because 0.0.0.0 and 127.x.y.z
are special addresses)
2^24 - 2 = 16,777,214 host IDs
IP addresses belonging to class A range from 1.x.x.x to 126.x.x.x
Class B:
IP addresses belonging to class B are assigned to networks that range from medium-sized
to large-sized networks.
The network ID is 16 bits long.
The host ID is 16 bits long.
The higher-order bits of the first octet of class B IP addresses are always set to 10.
The remaining 14 bits are used to determine the network ID. The 16 bits of host ID are
used to determine the host in any network. The default subnet mask for class B is
255.255.x.x. Class B has a total of:
2^14 = 16384 network addresses
2^16 - 2 = 65534 host addresses
Class C:
IP addresses belonging to class C are assigned to small-sized networks.
The network ID is 24 bits long.
The host ID is 8 bits long.
The higher-order bits of the first octet of class C IP addresses are always set to 110.
The remaining 21 bits are used to determine the network ID. The 8 bits of host ID are
used to determine the host in any network. The default subnet mask for class C is
255.255.255.x. Class C has a total of:
2^21 = 2097152 network addresses
2^8 - 2 = 254 host addresses
IP addresses belonging to class C ranges from 192.0.0.x – 223.255.255.x.
Class D:
IP addresses belonging to class D are reserved for multicasting. The higher order bits of the first octet of class D IP addresses are always set to 1110. The remaining bits are for the address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range from 224.0.0.0 to 239.255.255.255.
Class E:
IP addresses belonging to class E are reserved for experimental and research purposes. IP addresses of class E range from 240.0.0.0 to 255.255.255.254. This class doesn't have any subnet mask. The higher order bits of the first octet of class E are always set to 1111.
The network ID cannot start with 127 because 127 belongs to class A address and
is reserved for internal loop-back functions.
All bits of network ID set to 1 are reserved for use as an IP broadcast address and
therefore, cannot be used.
All bits of network ID set to 0 are used to denote a specific host on the local
network and are not routed and therefore, aren’t used.
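As an illustration of how the class follows from the first octet (the rules listed above), here is a small Python sketch; the address used at the end is only an example:

def ipv4_class(address):
    """Return the classful category (A-E) of a dotted-decimal IPv4 address."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:        # leading bit 0
        return "A"
    elif first_octet < 192:      # leading bits 10
        return "B"
    elif first_octet < 224:      # leading bits 110
        return "C"
    elif first_octet < 240:      # leading bits 1110 (multicast)
        return "D"
    else:                        # leading bits 1111 (experimental)
        return "E"

print(ipv4_class("192.0.2.1"))   # prints: C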
Summary of Classful addressing :
Each organization is responsible for determining the number and size of the
subnets it creates, within the limits of the address space available for its use.
Additionally, the details of subnet segmentation within an organization remain
local to that organization.
An IP address is divided into two fields: a Network Prefix (also called the Network
ID) and a Host ID. What separates the Network Prefix and the Host ID depends on
whether the address is a Class A, B or C address. Figure 1 shows an IPv4 Class B
address, 172.16.37.5. Its Network Prefix is 172.16.0.0, and the Host ID is 37.5.
[Image: Class B IP address]
The subnet mechanism uses a portion of the Host ID field to identify individual
subnets. Figure 2, for example, shows the third group of the 172.16.0.0 network
being used as a Subnet ID. A subnet mask is used to identify the part of the address
that should be used as the Subnet ID. The subnet mask is applied to the full
network address using a binary AND operation. A binary AND produces an output of 1 ("true") only when both input bits are 1; otherwise, the output is 0 ("false"). Applying the mask to the address in this way yields the Subnet ID.
Figure 2 shows the AND of the IP address, as well as the mask producing the
Subnet ID. Any remaining address bits identify the Host ID. The subnet in Figure 2
is identified as 172.16.2.0, and the Host ID is 5. In practice, network staff will
typically refer to a subnet by just the Subnet ID. It would be common to hear
someone say, "Subnet 2 is having a problem today," or, "There is a problem with
the dot-two subnet."
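A minimal Python sketch of the AND operation described above; the address 172.16.2.5 and the mask 255.255.255.0 are assumed purely to mirror the "dot-two" subnet example:

import ipaddress

ip   = int(ipaddress.IPv4Address("172.16.2.5"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))

subnet_id = ip & mask                  # bitwise AND of address and mask
host_id   = ip & ~mask & 0xFFFFFFFF    # the remaining bits identify the host

print(ipaddress.IPv4Address(subnet_id))   # 172.16.2.0 (the Subnet ID)
print(host_id)                            # 5 (the Host ID)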
[Image: Subnet ID]
The Subnet ID is used by routers to determine the best route between subnetworks.
Figure 3 shows the 172.16.0.0 network, with the third grouping as the Subnet ID.
Four of the 256 possible subnets are shown connected to one router. Each subnet is
identified either by its Subnet ID or the subnet address with the Host ID set to .0.
The router interfaces are assigned the Host ID of .1 -- e.g., 172.16.2.1.
When the router receives a packet addressed to a host on a different subnet than the
sender -- host A to host C, for example -- it knows the subnet mask and uses it to
determine the Subnet ID of host C. It examines its routing table to find the
interface connected to host C's subnet and forwards the packet on that interface.
Subnet segmentation
A subnet itself also may be segmented into smaller subnets, giving organizations
the flexibility to create smaller subnets for things like point-to-point links or for
subnetworks that support a few devices. The example below uses an 8-bit Subnet
ID. The number of bits in the subnet mask depends on the organization's
requirements for subnet size and the number of subnets. Other subnet mask lengths
are common. While this adds some complexity to network addressing, it
significantly improves the efficiency of network address utilization.
[Image: Subnet segmentation]
In modern routing architectures, routing protocols distribute the subnet mask with
routes and provide mechanisms to summarize groups of subnets as a single routing
table entry. Older routing architectures relied on the default Class A, B and C IP
address classification to determine the mask to use. CIDR notation is used to identify the Network Prefix and Mask, where the prefix length is a number that indicates the number of ones in the mask (e.g., 172.16.2.0/24). This is also known as Variable-Length Subnet Masking (VLSM) and CIDR. Subnets and subnetting
are used in both IPv4 and IPv6 networks, based on the same principles.
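The relationship between a CIDR prefix length and the dotted-decimal mask can be checked with Python's standard ipaddress module, using the 172.16.2.0/24 subnet from the text:

import ipaddress

net = ipaddress.ip_network("172.16.2.0/24")
print(net.prefixlen)       # 24, the number of one bits in the mask
print(net.netmask)         # 255.255.255.0
print(net.num_addresses)   # 256 addresses in this subnet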
Before introducing IPv6 Address format, we shall look into Hexadecimal Number
System. Hexadecimal is a positional number system that uses radix (base) of 16.
To represent the values in readable format, this system uses 0-9 symbols to
represent values from zero to nine and A-F to represent values from ten to fifteen.
Every digit in Hexadecimal can represent values from 0 to 15.
Address Structure
An IPv6 address is made of 128 bits divided into eight 16-bits blocks. Each block
is then converted into 4-digit Hexadecimal numbers separated by colon symbols.
For example, given below is a 128 bit IPv6 address represented in binary format
and divided into eight 16-bits blocks:
0010000000000001 0000000000000000 0011001000111000 1101111111100001
0000000001100011 0000000000000000 0000000000000000 1111111011111011
Each block is then converted into Hexadecimal and separated by ‘:’ symbol:
2001:0000:3238:DFE1:0063:0000:0000:FEFB
Even after converting into Hexadecimal format, IPv6 address remains long. IPv6
provides some rules to shorten the address. The rules are as follows:
Rule.1: Discard leading Zero(es):
In Block 5, 0063, the leading two 0s can be omitted, such as (5th block):
2001:0000:3238:DFE1:63:0000:0000:FEFB
Rule.2: If two or more blocks contain consecutive zeroes, omit them all and replace them with a double colon sign (::), such as (6th and 7th block):
2001:0000:3238:DFE1:63::FEFB
Consecutive blocks of zeroes can be replaced only once by :: so if there are still
blocks of zeroes in the address, they can be shrunk down to a single zero, such as
(2nd block):
2001:0:3238:DFE1:63::FEFB
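Python's ipaddress module applies both shortening rules automatically, which can be used to verify the example above:

import ipaddress

addr = ipaddress.IPv6Address("2001:0000:3238:DFE1:0063:0000:0000:FEFB")
print(addr.compressed)   # 2001:0:3238:dfe1:63::fefb (rules 1 and 2 applied)
print(addr.exploded)     # 2001:0000:3238:dfe1:0063:0000:0000:fefb (full form)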
Interface ID
IPv6 has three different types of Unicast Address scheme. The second half of the
address (last 64 bits) is always used for Interface ID. The MAC address of a
system is composed of 48-bits and represented in Hexadecimal. MAC addresses
are considered to be uniquely assigned worldwide. Interface ID takes advantage of
this uniqueness of MAC addresses. A host can auto-configure its Interface ID by
using IEEE’s Extended Unique Identifier (EUI-64) format. First, a host divides its
own MAC address into two 24-bits halves. Then 16-bit Hex value 0xFFFE is
sandwiched into those two halves of MAC address, resulting in EUI-64 Interface
ID.
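A rough sketch of the FFFE "sandwich" described above, using a made-up MAC address. Note that the full modified EUI-64 procedure also flips the universal/local (7th) bit of the first byte; that step is included here for completeness even though the text does not discuss it:

def eui64_interface_id(mac):
    # Build a (modified) EUI-64 Interface ID from a 48-bit MAC address.
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                                 # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]    # insert FFFE between the two halves
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("00:1A:2B:3C:4D:5E"))   # 021a:2bff:fe3c:4d5e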
Global Unicast Address
This address type is equivalent to IPv4’s public address. Global Unicast addresses
in IPv6 are globally identifiable and uniquely addressable.
Link-Local Address
[Image:
Link-Local Address]
Link-local addresses are used for communication among IPv6 hosts on a link
(broadcast segment) only. These addresses are not routable, so a Router never
forwards these addresses outside the link.
Unique-Local Address
This type of IPv6 address is globally unique, but it should be used in local
communication. The second half of this address contain Interface ID and the first
half is divided among Prefix, Local Bit, Global ID and Subnet ID.
[Image:
Unique-Local Address]
The Prefix is always set to 1111 110. The L bit is set to 1 if the address is locally assigned. So far, the meaning of the L bit set to 0 is not defined. Therefore, a Unique Local IPv6 address always starts with ‘FD’.
[Image: IPv6
Unicast Address Scope]
The scope of a Link-Local address is limited to the segment. Unique Local Addresses are locally global, but are not routed over the Internet, limiting their scope to an organization’s boundary. Global Unicast addresses are globally unique and recognizable. They shall form the essence of Internet v2 addressing.
Unicast Addresses
Figure 4-6 diagrams the three types of addresses: unicast, multicast, and anycast.
We begin by looking at unicast addresses. Don’t be intimidated by all the different
types of unicast addresses. The most significant types are global unicast addresses,
which are equivalent to IPv4 public addresses, and link-local addresses. These
address types are discussed in detail in Chapters 5 and 6.
NOTE
Notice that there is no broadcast address shown in Figure 4-6. Remember that IPv6
does not include a broadcast address.
This section covers the different types of unicast addresses, as illustrated in Figure
4-6. The following is a quick preview of each type of unicast address discussed in
this section:
IPv4 embedded: An IPv6 address that carries an IPv4 address in the low-
order 32 bits of the address.
Figure 4-7 shows the generic structure of a GUA, which has three fields:
Figure 4-7 illustrates the more general structure, without the specific sizes for any
of the three parts. The first 3 bits of a GUA address begin with the binary value
001, which results in the first hexadecimal digit becoming a 2 or a 3. (We look at
the structure of the GUA address more closely in Chapter 5.)
There are several ways a device can be configured with a global unicast address:
Manually configured.
Example 4-1 demonstrates how to view the global unicast address on Windows
and Mac OS operating systems, using the ipconfig and ifconfig commands,
respectively. The ifconfig command is also used with the Linux operating system
and provides similar output.
NOTE
You may see multiple IPv6 global unicast addresses including one or more
temporary addresses. You’ll learn more about this in Chapter 9.
This section has provided just a brief introduction to global unicast addresses.
Remember that IPv6 introduced a lot of changes to IP. Devices may obtain more
than one GUA address for reasons such as privacy. For a network administrator
needing to manage and control access within a network, having these additional
addresses that are not administered through stateful DHCPv6 may be undesirable.
Chapter 11 discusses devices obtaining or creating multiple global unicast
addresses and various options to ensure that devices only obtain a GUA address
from a stateful DHCPv6 server.
ROUTING ALGORITHMS:-
Distance Vector Routing:- A distance-vector routing (DVR) protocol requires
that a router inform its neighbors of topology changes periodically. Historically
known as the old ARPANET routing algorithm (or known as Bellman-Ford
algorithm).
Bellman Ford Basics – Each router maintains a Distance Vector table containing
the distance between itself and ALL possible destination nodes. Distances,based on
a chosen metric, are computed using information from the neighbors’ distance
vectors.
Information kept by DV router -
Each router has an ID
Associated with each link connected to a router,
there is a link cost (static or dynamic).
Intermediate hops
From time-to-time, each node sends its own distance vector estimate to
neighbors.
When a node x receives new DV estimate from any neighbor v, it saves v’s
distance vector and it updates its own DV using B-F equation:
Dx(y) = min { C(x,v) + Dv(y), Dx(y) } for each node y ∈ N
Example – Consider 3 routers X, Y and Z as shown in the figure. Each router has its own routing table, and every routing table contains the distance to the destination nodes.
Consider router X. X will share its routing table with its neighbours, and the neighbours will share their routing tables with X. The distance from node X to each destination is then calculated using the Bellman-Ford equation:
Dx(y) = min { C(x,v) + Dv(y) } for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is the intermediate node (hop), so this route is updated in X's routing table.
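A minimal sketch of this Bellman-Ford update as router X might perform it; the link costs and the neighbours' distance vectors below are made-up values chosen so that X reaches Z more cheaply through Y:

# Link costs c(x,v) from X to its direct neighbours (assumed example values)
cost = {"Y": 2, "Z": 7}

# Distance vectors most recently received from the neighbours
neighbour_dv = {
    "Y": {"X": 2, "Y": 0, "Z": 1},
    "Z": {"X": 7, "Y": 1, "Z": 0},
}

# Dx(y) = min over neighbours v of { c(x,v) + Dv(y) }
dx = {"X": 0}
for dest in ["Y", "Z"]:
    dx[dest] = min(cost[v] + neighbour_dv[v][dest] for v in neighbour_dv)

print(dx)   # {'X': 0, 'Y': 2, 'Z': 3}; Z is reached via Y (2 + 1) rather than directly (7)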
Link state routing is a technique in which each router shares the knowledge of its
neighborhood with every other router in the internetwork.
Reliable Flooding
Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to
all nodes.
o The Link state routing algorithm is also known as Dijkstra's algorithm which
is used to find the shortest path from one node to every other node in the
network.
o Dijkstra's algorithm is iterative, and it has the property that after the kth iteration of the algorithm, the least-cost paths are known for k destination nodes.
o c(i, j): Link cost from node i to node j. If nodes i and j are not directly linked, then c(i, j) = ∞.
o D(v): It defines the cost of the path from the source node to destination v that currently has the least cost.
o P(v): It defines the previous node (neighbour of v) along the current least-cost path from the source to v.
o N: In the algorithm below, N is the set of nodes whose least-cost path from the source is definitively known.
Algorithm
Initialization
N = {A} // A is a root node.
for all nodes v
if v adjacent to A
then D(v) = c(A,v)
else D(v) = infinity
loop
find w not in N such that D(w) is a minimum.
Add w to N
Update D(v) for all v adjacent to w and not in N:
D(v) = min(D(v) , D(w) + c(w,v))
Until all nodes in N
In the above algorithm, an initialization step is followed by the loop. The number
of times the loop is executed is equal to the total number of nodes available in the
network.
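A short Python sketch of the loop above, run on a small made-up graph with A as the root node:

# Adjacency map with link costs c(i, j); this graph is purely an example.
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

INF = float("inf")
source = "A"
N = {source}                                        # nodes with finalised least-cost paths
D = {v: graph[source].get(v, INF) for v in graph}   # initialization step
D[source] = 0

while len(N) < len(graph):
    # find w not in N such that D(w) is a minimum, then add it to N
    w = min((v for v in graph if v not in N), key=lambda v: D[v])
    N.add(w)
    # update D(v) for all v adjacent to w and not in N
    for v, c in graph[w].items():
        if v not in N:
            D[v] = min(D[v], D[w] + c)

print(D)   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}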
Disadvantage:
Heavy traffic is created in link state routing due to flooding. Flooding can cause infinite looping; this problem can be solved by using the Time-to-Live (TTL) field.
Initiation
Sharing
Updating
Hierarchical Routing Protocol :
Hierarchical Routing is a method of routing in networks that is based on hierarchical addressing. Most Transmission Control Protocol/Internet Protocol (TCP/IP) routing is based on a two-level hierarchy in which an IP address is divided into a network portion and a host portion. Gateways use only the network portion of the address to route an IP datagram until it reaches a gateway that can deliver it directly.
It addresses the growth of routing tables. Routers are further divided into regions, and they know the routes of their own region only. It works like telephone routing.
Example –
City, State, Country, Continent.
RIP versions differ in how they advertise updates: RIPv1 broadcasts updates to 255.255.255.255, RIPv2 multicasts updates to 224.0.0.9, and RIPng multicasts updates to FF02::9 (RIPng can only run on IPv6 networks).
Consider the above given topology which has 3-routers R1, R2, R3. R1 has IP
address 172.16.10.6/30 on s0/0/1, 192.168.20.1/24 on fa0/0. R2 has IP address
172.16.10.2/30 on s0/0/0, 192.168.10.1/24 on fa0/0. R3 has IP address
172.16.10.5/30 on s0/1, 172.16.10.1/30 on s0/0, 10.10.10.1/24 on fa0/0.
Configure RIP for R1 :
R1(config)# router rip
R1(config-router)# network 192.168.20.0
R1(config-router)# network 172.16.10.4
R1(config-router)# version 2
R1(config-router)# no auto-summary
Note : the no auto-summary command disables auto-summarisation. If we don’t configure no auto-summary, then the subnet mask will be considered classful, as in Version 1.
Configure RIP for R2 :
R2(config)# router rip
R2(config-router)# network 192.168.10.0
R2(config-router)# network 172.16.10.0
R2(config-router)# version 2
R2(config-router)# no auto-summary
Similarly, Configure RIP for R3 :
R3(config)# router rip
R3(config-router)# network 10.10.10.0
R3(config-router)# network 172.16.10.4
R3(config-router)# network 172.16.10.0
R3(config-router)# version 2
R3(config-router)# no auto-summary
RIP timers :
Update timer : The default timing for routing information being exchanged
by the routers operating RIP is 30 seconds. Using Update timer, the routers
exchange their routing table periodically.
Invalid timer: If no update arrives within 180 seconds, then the destination router considers the route invalid. In this scenario, the destination router marks the hop count as 16 for that route.
Hold down timer : This is the time for which the router waits for neighbour
router to respond. If the router isn’t able to respond within a given time then it
is declared dead. It is 180 seconds by default.
Flush timer : It is the time after which the route entry will be flushed from the routing table if no further update is received. The flush occurs 60 seconds after the route has been declared invalid, i.e., 180 + 60 = 240 seconds after the last update.
Note that all these timers are adjustable. Use the timers basic command to change them (the values are the update, invalid, holddown and flush timers, in seconds):
R1(config-router)# timers basic 20 80 80 90
OSPF:- Open Shortest Path First (OSPF) is a link-state routing protocol that is used to find the best path between the source and the destination router using its own Shortest Path First (SPF) algorithm. OSPF was developed by the Internet Engineering Task Force (IETF) as one of the Interior Gateway Protocols (IGPs), i.e., protocols which aim at moving packets within a large autonomous system or routing domain.
It is a network layer protocol which works on the protocol number 89 and uses
AD value 110. OSPF uses multicast address 224.0.0.5 for normal communication
and 224.0.0.6 for update to designated router(DR)/Backup Designated Router
(BDR).
OSPF terms –
1. Router ID – It is the highest active IP address present on the router. First, the highest loopback address is considered. If no loopback is configured, then the highest active IP address on an interface of the router is considered.
2. Router priority – It is an 8-bit value assigned to a router operating OSPF, used
to elect DR and BDR in a broadcast network.
3. Designated Router (DR) – It is elected to minimize the number of adjacencies formed. The DR distributes the LSAs to all the other routers. The DR is elected in a broadcast network, to which all the other routers share their DBDs. In a broadcast network, a router requests an update from the DR, and the DR responds to that request with an update.
4. Backup Designated Router (BDR) – BDR is backup to DR in a broadcast
network. When DR goes down, BDR becomes DR and performs its functions.
DR and BDR election – DR and BDR election takes place in broadcast network
or multi-access network. Here are the criteria for the election:
1. The router having the highest router priority will be declared the DR.
2. If there is a tie in router priority, then the highest router ID will be considered.
First, the highest loopback address is considered. If no loopback is configured
then the highest active IP address on the interface of the router is considered.
OSPF states – The device operating OSPF goes through certain states. These
states are:
1. Down – In this state, no hello packets have been received on the interface.
Note – The Down state doesn’t mean that the interface is physically down. Here, it means that the OSPF adjacency process has not started yet.
2. INIT – In this state, a hello packet has been received from the other router.
3. 2WAY – In the 2WAY state, both the routers have received the hello packets
from other routers. Bidirectional connectivity has been established.
Note – In between the 2WAY state and Exstart state, the DR and BDR
election takes place.
4. Exstart – In this state, NULL DBDs are exchanged. In this state, the master and slave election takes place. The router having the higher router ID becomes the master while the other becomes the slave. This election decides which router will send its DBD first (routers which have formed neighbourship take part in this election).
5. Exchange – In this state, the actual DBDs are exchanged.
6. Loading – In this state, LSR, LSU and LSA (Link State Acknowledgement)
are exchanged.
Important – When a router receives a DBD from another router, it compares its own DBD with the other router's DBD. If the received DBD is more up to date than its own DBD, then the router sends an LSR to the other router stating what links are needed. The other router replies with an LSU containing the updates that are needed. In return, the router replies with a Link State Acknowledgement.
7. Full – In this state, synchronization of all the information takes place. OSPF
routing can begin only after the Full state.
UNIT:-3
TRANSPORT LAYER:-
Transport Layer Services:- Transport Layer is the second layer of the TCP/IP
model. It is an end-to-end layer used to deliver messages to a host. It is termed as
an end-to-end layer because it provides a point-to-point connection, rather than a hop-to-hop one, between the source host and destination host to deliver the services reliably. The unit of data encapsulation in the Transport Layer is a segment.
The standard protocols used by Transport Layer to enhance its functionalities are
TCP(Transmission Control Protocol), UDP( User Datagram Protocol),
DCCP( Datagram Congestion Control Protocol) etc.
Various responsibilities of a Transport Layer –
Process to process delivery –
While Data Link Layer requires the MAC address (48 bits address contained
inside the Network Interface Card of every host machine) of source-
destination hosts to correctly deliver a frame and Network layer requires the
IP address for appropriate routing of packets , in a similar way Transport
Layer requires a Port number to correctly deliver the segments of data to the
correct process amongst the multiple processes running on a particular host.
A port number is a 16 bit address used to identify any client-server program
uniquely.
End-to-end Connection between hosts –
The transport layer is also responsible for creating the end-to-end Connection
between hosts, for which it mainly uses TCP and UDP. TCP is a reliable, connection-oriented protocol which uses a handshake protocol to establish a robust connection between two end hosts. TCP ensures reliable delivery of messages and is used in various applications. UDP, on the other hand, is a stateless and unreliable protocol which provides best-effort delivery. It is suitable for applications which have little concern with flow or error control and which need to send bulk data, such as video conferencing. It is often used in multicasting protocols.
Multiplexing and Demultiplexing –
Multiplexing allows simultaneous use of different applications over a network
which is running on a host. The transport layer provides this mechanism which
enables us to send packet streams from various applications simultaneously
over a network. The transport layer accepts these packets from different
processes differentiated by their port numbers and passes them to the network
layer after adding proper headers. Similarly, Demultiplexing is required at the
receiver side to obtain the data coming from various processes. Transport
receives the segments of data from the network layer and delivers it to the
appropriate process running on the receiver’s machine.
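Port-based multiplexing and demultiplexing is visible directly in the sockets API. The sketch below binds two UDP sockets on one host, standing in for two different processes; the loopback address and port numbers are arbitrary examples:

import socket

# Two independent "processes" on the same host, distinguished only by port number.
app_one = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_one.bind(("127.0.0.1", 5300))

app_two = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_two.bind(("127.0.0.1", 5123))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for app one", ("127.0.0.1", 5300))   # demultiplexed to the first socket
sender.sendto(b"for app two", ("127.0.0.1", 5123))   # demultiplexed to the second socket

print(app_one.recvfrom(1024))   # (b'for app one', ('127.0.0.1', <sender port>))
print(app_two.recvfrom(1024))   # (b'for app two', ('127.0.0.1', <sender port>))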
Congestion Control –
Congestion is a situation in which too many sources over a network attempt to
send data and the router buffers start overflowing due to which loss of packets
occur. As a result retransmission of packets from the sources increases the
congestion further. In this situation, the Transport layer provides Congestion
Control in different ways. It uses open loop congestion control to prevent the
congestion and closed loop congestion control to remove the congestion in a
network once it has occurred. TCP provides AIMD (additive increase, multiplicative decrease) and the leaky bucket technique for congestion control.
Data integrity and Error correction –
The transport layer checks for errors in the messages coming from the application layer by using error detection codes and computing checksums; it checks whether the received data is corrupted, and it uses the ACK and NACK services to inform the sender whether or not the data has arrived, thereby checking the integrity of the data.
Flow control –
The transport layer provides a flow control mechanism between the adjacent
layers of the TCP/IP model. TCP also prevents data loss due to a fast sender
and slow receiver by imposing some flow control techniques. It uses the
method of sliding window protocol which is accomplished by the receiver by
sending a window back to the sender informing the size of data it can receive.
UDP Header – The UDP header is a simple, fixed 8-byte header; the remaining part of the datagram consists of data. UDP port number fields are each 16 bits long; therefore, the range of port numbers is defined from 0 to 65535 (port number 0 is reserved). Port numbers help to distinguish different user requests or processes.
1. Source Port : Source Port is 2 Byte long field used to identify port number of
source.
2. Destination Port : It is 2 Byte long field, used to identify the port of destined
packet.
3. Length : Length is the length of UDP including header and the data. It is 16-
bits field.
4. Checksum : Checksum is 2 Bytes long field. It is the 16-bit one’s
complement of the one’s complement sum of the UDP header, pseudo header
of information from the IP header and the data, padded with zero octets at the
end (if necessary) to make a multiple of two octets.
Notes – Unlike TCP, Checksum calculation is not mandatory in UDP. No Error
control or flow control is provided by UDP. Hence UDP depends on IP and
ICMP for error reporting.
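The 16-bit one's-complement sum described in field 4 above can be sketched as follows; the byte string at the end is a made-up example rather than a real UDP segment plus pseudo header:

def ones_complement_checksum(data):
    # 16-bit one's complement of the one's complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"                            # pad with a zero octet if needed
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # combine two octets into a 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the sum
    return ~total & 0xFFFF

print(hex(ones_complement_checksum(b"\x12\x34\x56\x78\x9a")))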
Applications of UDP:
Used for simple request-response communication when the size of the data is small and hence there is less concern about flow and error control.
It is a suitable protocol for multicasting, as UDP supports packet switching.
UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
Normally used for real time applications which can not tolerate uneven delays
between sections of a received message.
Following implementations uses UDP as a transport layer protocol:
NTP (Network Time Protocol)
DNS (Domain Name Service)
BOOTP, DHCP.
NNP (Network News Protocol)
Quote of the day protocol
TFTP, RTSP, RIP.
Application layer can do some of the tasks through UDP-
Trace Route
Record Route
Time stamp
UDP takes the datagram from the network layer, attaches its header and sends it to the user, so it works fast.
In fact, UDP is practically a null protocol if you remove the checksum field.
1. Reduce the requirement of computer resources.
2. When using the Multicast or Broadcast to transfer.
3. The transmission of Real-time packets, mainly in multimedia applications.
TCP Protocol:- The Transmission Control Protocol (TCP) is one of the most important protocols of the Internet Protocol suite. It is the most widely used protocol for data transmission in communication networks such as the internet.
Features
TCP is a reliable protocol. That is, the receiver always sends either a positive or a negative acknowledgement about the data packet to the sender, so that the sender always has a clear indication of whether the data packet reached the destination or needs to be resent.
TCP ensures that the data reaches intended destination in the same order it
was sent.
TCP is connection oriented. TCP requires that connection between two
remote points be established before sending actual data.
TCP provides error-checking and recovery mechanism.
TCP provides end-to-end communication.
TCP provides flow control and quality of service.
TCP operates in Client/Server point-to-point mode.
TCP provides full-duplex service, i.e., it can perform the roles of both receiver and sender.
Header
The length of TCP header is minimum 20 bytes long and maximum 60 bytes.
Addressing
TCP communication between two remote hosts is done by means of port numbers (TSAPs). Port numbers can range from 0 to 65535 and are divided as:
Connection Management
Establishment
Client initiates the connection and sends the segment with a Sequence number.
Server acknowledges it back with its own Sequence number and ACK of client’s
segment which is one more than client’s Sequence number. Client after receiving
ACK of its segment sends an acknowledgement of Server’s response.
Release
Either of server and client can send TCP segment with FIN flag set to 1. When the
receiving end responds it back by ACKnowledging FIN, that direction of TCP
communication is closed and connection is released.
Bandwidth Management
TCP uses the concept of window size to accommodate the need of Bandwidth
management. Window size tells the sender at the remote end, the number of data
byte segments the receiver at this end can receive. TCP uses slow start phase by
using window size 1 and increases the window size exponentially after each
successful communication.
For example, the client uses a window size of 2 and sends 2 bytes of data. When the acknowledgement of this segment is received, the window size is doubled to 4 and the next segment sent will be 4 data bytes long. When the acknowledgement of the 4-byte data segment is received, the client sets the window size to 8, and so on.
If an acknowledgement is missed, i.e., the data is lost in the transit network or a NACK is received, then the window size is reduced to half and the slow start phase begins again.
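A tiny simulation of the behaviour described above, assuming the window doubles on every acknowledged round and is halved when an acknowledgement is missed (a deliberate simplification of real TCP congestion control):

window = 1
events = ["ack", "ack", "ack", "loss", "ack"]   # an assumed sequence of outcomes

for event in events:
    if event == "ack":
        window *= 2                      # exponential growth during slow start
    else:
        window = max(1, window // 2)     # window reduced to half when data is lost
    print(event, "-> window =", window)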
TCP uses port numbers to know what application process it needs to handover the
data segment. Along with that, it uses sequence numbers to synchronize itself with
the remote host. All data segments are sent and received with sequence numbers.
The Sender knows which last data segment was received by the Receiver when it
gets ACK. The Receiver knows about the last segment sent by the Sender by
referring to the sequence number of recently received packet.
If the sequence number of a segment recently received does not match with the
sequence number the receiver was expecting, then it is discarded and NACK is
sent back. If two segments arrive with the same sequence number, the TCP
timestamp value is compared to make a decision.
Multiplexing
The technique to combine two or more data streams in one session is called
Multiplexing. When a TCP client initializes a connection with Server, it always
refers to a well-defined port number which indicates the application process. The
client itself uses a randomly generated port number from private port number
pools.
Using TCP multiplexing, a client can communicate with a number of different application processes in a single session. For example, when a client requests a web page which in turn contains different types of data (HTTP, SMTP, FTP, etc.), the TCP session timeout is increased and the session is kept open for a longer time so that the three-way handshake overhead can be avoided.
This enables the client system to receive multiple connections over a single virtual connection. These virtual connections are not good for servers if the timeout is too long.
Congestion Control
When large amount of data is fed to system which is not capable of handling it,
congestion occurs. TCP controls congestion by means of Window mechanism.
TCP sets a window size telling the other end how much data segment to send.
TCP may use three algorithms for congestion control:
Additive increase, Multiplicative Decrease
Slow Start
Timeout React
Timer Management
TCP uses different types of timers to control and manage various tasks:
Keep-alive timer:
Retransmission timer:
Persist timer:
Timed-Wait:
After releasing a connection, either of the hosts waits for a Timed-Wait time
to terminate the connection completely.
This is in order to make sure that the other end has received the
acknowledgement of its connection termination request.
Timed-out can be a maximum of 240 seconds (4 minutes).
Crash Recovery
TCP is a very reliable protocol. It provides a sequence number for each byte sent in a segment. It provides a feedback mechanism, i.e., when a host receives a packet, it is bound to ACK that packet with the next sequence number expected (if it is not the last segment).
When a TCP server crashes mid-way through communication and restarts its process, it sends a TPDU broadcast to all its hosts. The hosts can then resend the last data segment which was never acknowledged and carry on.
TCP Services:- The Transmission Control Protocol is the most common transport
layer protocol. It works together with IP and provides a reliable transport service
between processes using the network layer service provided by the IP protocol.
The various services provided by the TCP to the application layer are as follows:
1. Process-to-Process Communication –
TCP provides process to process communication, i.e, the transfer of data takes
place between individual processes executing on end systems. This is done
using port numbers or port addresses. Port numbers are 16 bits long and help identify which process is sending or receiving data on a host.
2. Stream oriented –
This means that the data is sent and received as a stream of bytes(unlike UDP
or IP that divides the bits into datagrams or packets). However, the network
layer, that provides service for the TCP, sends packets of information not
streams of bytes. Hence, TCP groups a number of bytes together into
a segment and adds a header to each of these segments and then delivers these
segments to the network layer. At the network layer, each of these segments
are encapsulated in an IP packet for transmission. The TCP header has
information that is required for control purpose which will be discussed along
with the segment structure.
3. Full duplex service –
This means that the communication can take place in both directions at the
same time.
Characteristics of SCTP :
1. Unicast with Multiple properties –
It is a point-to-point protocol which can use different paths to reach end host.
2. Reliable Transmission –
It uses SACK and checksums to detect damaged, corrupted, discarded,
duplicate and reordered data. It is similar to TCP but SCTP is more efficient
when it comes to reordering of data.
3. Message oriented –
Each message can be framed, and we can keep track of the order of the data stream and its structure. In TCP, a different layer of abstraction is needed for this.
4. Multi-homing –
It can establish multiple connection paths between two end points and does not
need to rely on IP layer for resilience.
Advantages of SCTP :
1. It is a full- duplex connection i.e. users can send and receive data
simultaneously.
2. It allows half- closed connections.
3. The message’s boundaries are maintained and application doesn’t have to split
messages.
4. It has properties of both TCP and UDP protocol.
5. It doesn’t rely on IP layer for resilience of paths.
Disadvantages of SCTP :
1. One of the key challenges is that it requires changes in the transport stack on the node.
2. Applications need to be modified to use SCTP instead of TCP/UDP.
3. Applications need to be modified to handle multiple simultaneous streams.
SCTP Services:-
Process-to-Process Communication: SCTP provides process-to-process communication and uses all the well-known ports in the TCP space, along with some extra port numbers.
Multiple Streams: TCP is a stream-oriented protocol. ...
Multi homing: ...
Full-Duplex Communication: ...
Connection-Oriented Service:
SCTP Features:-
Unicast with Multicast properties. This means it is a point-to-point protocol but
with the ability to use several addresses at the same end host. ...
Reliable transmission. ...
Message oriented. ...
Rate adaptive. ...
Multi-homing. ...
Multi-streaming. ...
Initiation.
Association setup begins when the client sends an INIT chunk to the server. The server replies with an INIT-ACK chunk containing its own list of IP
addresses, initial sequence number, verification tag (that must appear in every
packet it sends for this association), the number of outbound streams the server is
requesting, the number of inbound streams it can support, and a state cookie that
ensures the association is valid. The client then replies with a COOKIE-ECHO
chunk and the server validates the cookie and replies with a COOKIE-ACK chunk.
The COOKIE-ECHO and COOKIE-ACK messages can include user data (chunks)
for more efficiency.
When you Configure SCTP Security, you can set an SCTP INIT timeout to control
the maximum length of time after receiving an INIT chunk before the firewall
receives the INIT-ACK chunk. If that time is exceeded, then the firewall stops the
association initiation. You can also configure an SCTP COOKIE timeout to control
the maximum length of time after receiving an INIT-ACK chunk with the STATE
COOKIE before the firewall receives the COOKIE-ECHO chunk; if that time is
exceeded, that also causes the firewall to stop the association initiation.
SCTP timeout
—Maximum length of time that can elapse without SCTP traffic on an association
before the firewall closes the association.
—Maximum length of time that an SCTP association remains open after the
firewall denies the session based on Security policy rules.
—Maximum length of time that the firewall waits after a SHUTDOWN chunk to
receive a SHUTDOWN-ACK chunk before the firewall disregards the
SHUTDOWN chunk.
APPLICATION LAYER:-
SMTP:- Email is emerging as one of the most valuable services on the internet
today. Most of the internet systems use SMTP as a method to transfer mail from
one user to another. SMTP is a push protocol and is used to send the mail
whereas POP (post office protocol) or IMAP (internet message access protocol)
are used to retrieve those mails at the receiver’s side.
SMTP Fundamentals
SMTP is an application layer protocol. The client who wants to send the mail
opens a TCP connection to the SMTP server and then sends the mail across the
connection. The SMTP server is always on listening mode. As soon as it listens
for a TCP connection from any client, the SMTP process initiates a connection on
that port (25). After successfully establishing the TCP connection the client
process sends the mail instantly.
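A minimal, hypothetical SMTP client using Python's smtplib; the server name and mail addresses are placeholders, not values taken from the text:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("This mail was pushed to the server over SMTP.")

# Open a TCP connection to the (hypothetical) SMTP server on port 25 and push the mail.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)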
SMTP Protocol
The SMTP model is of two types:
1. End-to-end method
2. Store-and-forward method
The end-to-end model is used to communicate between different organizations, whereas the store-and-forward method is used within an organization. An SMTP client that wants to send mail will contact the destination host's SMTP server directly in order to send the mail to the destination. The SMTP server will keep the mail with itself until it is successfully copied to the receiver's SMTP.
The client SMTP is the one which initiates the session, so let us call it the client-SMTP, and the server SMTP is the one which responds to the session request, so let us call it the receiver-SMTP. The client-SMTP starts the session and the receiver-SMTP responds to the request.
POP Protocol
The POP protocol stands for Post Office Protocol. As we know, SMTP is used as the message transfer agent: when a message is sent, SMTP is used to deliver it from the client to the sender's server and then on to the recipient's server. The message is then delivered from the recipient's server to the actual recipient with the help of a Message Access Agent. The Message Access Agent supports two protocols, i.e., POP3 and IMAP.
Suppose sender wants to send the mail to receiver. First mail is transmitted to the
sender's mail server. Then, the mail is transmitted from the sender's mail server to
the receiver's mail server over the internet. On receiving the mail at the receiver's
mail server, the mail is then sent to the user. The whole process is done with the
help of Email protocols. The transmission of mail from the sender to the sender's
mail server and then to the receiver's mail server is done with the help of the SMTP
protocol. At the receiver's mail server, the POP or IMAP protocol takes the data
and transmits to the actual user.
Since SMTP is a push protocol, it pushes the message from the client to the server. As we can observe in the above figure, SMTP pushes the message from the client to the recipient's mail server. The third stage of email communication requires a pull protocol, and POP is a pull protocol: the mail is transmitted from the recipient's mail server to the client, which means that the client is pulling the mail from the server.
What is POP3?
POP3 is a simple protocol with very limited functionality. In the case of the POP3 protocol, the POP3 client is installed on the recipient's system while the POP3 server is installed on the recipient's mail server.
The first version of post office protocol was first introduced in 1984 as RFC 918
by the internet engineering task force. The developers developed a simple and
effective email protocol known as the POP3 protocol, which is used for retrieving
the emails from the server. This provides the facility of accessing the mails offline rather than having to access the mailbox online.
In 1985, post office protocol version 2 was introduced in RFC 937, but it was replaced with post office protocol version 3 in 1988 with the publication of RFC 1081. POP3 was then revised over the next 10 years; once it was refined completely, it was published in 1996.
Although the POP3 protocol has undergone various enhancements, the developers
maintained a basic principle that it follows a three-stage process at the time of mail
retrieval between the client and the server. They tried to make this protocol very
simple, and this simplicity makes this protocol very popular today.
To establish the connection between the POP3 server and the POP3 client, the POP3 server asks the POP3 client for the user name. If the username is found on the POP3 server, it sends an OK message. It then asks the POP3 client for the password; the POP3 client sends the password to the POP3 server. If
the password is matched, then the POP3 server sends the OK message, and the
connection gets established. After the establishment of a connection, the client can
see the list of mails on the POP3 mail server. In the list of mails, the user will get
the email numbers and sizes from the server. Out of this list, the user can start the
retrieval of mail.
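The username/password exchange and retrieval sequence described above maps directly onto Python's poplib; the server name and credentials below are placeholders:

import poplib

mailbox = poplib.POP3("pop.example.com", 110)   # plain POP3; POP3_SSL would use port 995
mailbox.user("alice")                           # server answers +OK if the user exists
mailbox.pass_("secret")                         # connection established on +OK

count, total_size = mailbox.stat()              # number of mails and total size
print(count, "messages,", total_size, "bytes")

response, listings, octets = mailbox.list()     # email numbers and sizes
response, lines, octets = mailbox.retr(1)       # retrieve the first message
print(b"\n".join(lines).decode(errors="replace"))

mailbox.quit()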
Once the client retrieves all the emails from the server, all the emails from the
server are deleted. Therefore, we can say that the emails are restricted to a
particular machine, so it would not be possible to access the same mails on another
machine. This situation can be overcome by configuring the email settings to leave
a copy of mail on the mail server.
o It allows the users to read the email offline. It requires an internet connection
only at the time of downloading emails from the server. Once the mails are
downloaded from the server, then all the downloaded mails reside on our PC
or hard disk of our computer, which can be accessed without the internet.
Therefore, we can say that the POP3 protocol does not require permanent
internet connectivity.
o It provides easy and fast access to the emails as they are already stored on
our PC.
o There is no limit on the size of the email which we receive or send.
o It requires less server storage space as all the mails are stored on the local
machine.
o There is no fixed maximum size on the mailbox; it is limited only by the size of the hard disk.
o It is a simple protocol so it is one of the most popular protocols used today.
o It is easy to configure and use.
o If the emails are downloaded from the server, then all the mails are deleted
from the server by default. So, mails cannot be accessed from other
machines unless they are configured to leave a copy of the mail on the
server.
o Transferring the mail folder from the local machine to another machine can
be difficult.
o Since all the attachments are stored on your local machine, there is a high
risk of a virus attack if the virus scanner does not scan them. The virus
attack can harm the computer.
o The email folder which is downloaded from the mail server can also become
corrupted.
o The mails are stored on the local machine, so anyone who sits on your
machine can access the email folder.
It also follows the client/server model. On one side, we have an IMAP client,
which is a process running on a computer. On the other side, we have an IMAP
server, which is also a process running on another computer. Both computers are
connected through a network.
o Port 993: This port is used when IMAP client wants to connect through
IMAP securely.
POP3 is becoming the most popular protocol for accessing the TCP/IP mailboxes.
It implements the offline mail access model, which means that the mails are
retrieved from the mail server on the local machine, and then deleted from the mail
server. Nowadays, millions of users use the POP3 protocol to access the incoming
mails. However, due to the offline mail access model, it cannot always be used. In an ideal world we would prefer the online model, but in the online model we need to be connected to the internet at all times. The biggest problem with the offline access using
POP3 is that the mails are permanently removed from the server, so multiple
computers cannot access the mails. The solution to this problem is to store the
mails at the remote server rather than on the local server. The POP3 also faces
another issue, i.e., data security and safety. The solution to this problem is to use
the disconnected access model, which provides the benefits of both online and
offline access. In the disconnected access model, the user can retrieve the mail for
local use as in the POP3 protocol, and the user does not need to be connected to the
internet continuously. However, the changes made to the mailboxes are
synchronized between the client and the server. The mail remains on the server so
different applications in the future can access it. When developers recognized these
benefits, they made some attempts to implement the disconnected access model.
This is implemented by using the POP3 commands that provide the option to leave
the mails on the server. This works, but only to a limited extent, for example,
keeping track of which messages are new or old become an issue when both are
retrieved and left on the server. So, the POP3 lacks some features which are
required for the proper disconnected access model.
The first version of IMAP to be formally documented as an internet standard was IMAP version 2, in RFC 1064, published in July 1988. It was updated in RFC 1176 in August 1990, retaining the same version number. A new document describing version 3, known as IMAP3, was then published as RFC 1203 in February 1991. However, IMAP3 was never accepted by the marketplace, so
people kept using IMAP2. An extension to the protocol, called IMAPbis, was later created; it added support for Multipurpose Internet Mail Extensions (MIME) to IMAP. This was a very important development due to the usefulness of MIME. Despite this, IMAPbis was never published as an RFC. This may be due to the
problems associated with the IMAP3. In December 1994, IMAP version 4, i.e.,
IMAP4 was published in two RFCs, i.e., RFC 1730 describing the main protocol
and RFC 1731 describing the authentication mechanism for IMAP 4. IMAP 4 is
the current version of IMAP, which is widely used today. It continues to be
refined, and its latest version is actually known as IMAP4rev1 and is defined in
RFC 2060. It is most recently updated in RFC 3501.
IMAP Features
IMAP was designed for a specific purpose that provides a more flexible way of
how the user accesses the mailbox. It can operate in any of the three modes, i.e.,
online, offline, and disconnected mode. Out of these, offline and disconnected
modes are of interest to most users of the protocol.
o Access and retrieve mail from remote server: The user can access the mail
from the remote server while retaining the mails in the remote server.
o Set message flags: The message flag is set so that the user can keep track of
which message he has already seen.
o Manage multiple mailboxes: The user can manage multiple mailboxes and
transfer messages from one mailbox to another. The user can organize them
into various categories for those who are working on various projects.
o Determine information prior to downloading: It decides whether to retrieve
or not before downloading the mail from the mail server.
o Downloads a portion of a message: It allows you to download the portion of
a message, such as one body part from the mime-multi part. This can be
useful when there are large multimedia files in a short-text element of a
message.
o Organize mails on the server: In case of POP3, the user is not allowed to
manage the mails on the server. On the other hand, the users can organize
the mails on the server according to their requirements like they can create,
delete or rename the mailbox on the server.
o Search: Users can search for the contents of the emails.
o Check email-header: Users can also check the email-header prior to
downloading.
o Create hierarchy: Users can also create the folders to organize the mails in a
hierarchy.
1. IMAP is a client-server protocol like POP3 and most other TCP/IP application protocols. The IMAP4 server must reside on the server where the user mailboxes are located. In contrast, POP3 does not necessarily require the same physical server that provides the SMTP services. Therefore, in the case of the IMAP protocol, the mailbox must be accessible to both SMTP for incoming mails and IMAP for retrieval and modifications.
The IMAP protocol synchronizes all the devices with the main server. Let's
suppose we have three devices desktop, mobile, and laptop as shown in the above
figure. If all these devices are accessing the same mailbox, then it will be
synchronized with all the devices. Here, synchronization means that when mail is
opened by one device, then it will be marked as opened in all the other devices, if
we delete the mail, then the mail will also be deleted from all the other devices. So,
we have synchronization between all the devices. In IMAP, we can see all the
folders like spam, inbox, sent, etc. We can also create our own folder known as a
custom folder that will be visible in all the other devices.
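A small sketch using Python's imaplib that reflects the behaviour described above: the mailbox stays on the server, and the client only selects folders and fetches what it needs. The server name and credentials are placeholders:

import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com", 993)   # port 993 is the secure IMAP port
conn.login("alice", "secret")

conn.select("INBOX")                          # mails remain on the server
status, data = conn.search(None, "UNSEEN")    # server-side search for unread mail
for num in data[0].split():
    # fetch only the header first, before deciding whether to download the body
    status, header = conn.fetch(num, "(BODY.PEEK[HEADER])")
    print(header[0][1].decode(errors="replace"))

conn.logout()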
MIME (Multipurpose Internet Mail Extensions) is a supplementary protocol that allows non-ASCII data to be sent through SMTP. It allows the users to exchange different kinds of data files on the Internet: audio, video, images, and application programs as well.
Why do we need MIME?
Limitations of Simple Mail Transfer Protocol (SMTP):
1. SMTP has a very simple structure.
2. Its simplicity, however, comes with a price, as it can only send messages in NVT 7-bit ASCII format.
3. It cannot be used for languages that are not supported by 7-bit ASCII, such as French, German, Russian, Chinese, Japanese, etc., so text in these languages cannot be transmitted using SMTP. So, in order to make SMTP broader, we use MIME.
4. It cannot be used to send binary files or video or audio data.
Purpose and Functionality of MIME –
There is a growing demand for email messages in which people can also express themselves in terms of multimedia. So MIME, another email application, was introduced, as it is not restricted to textual data.
MIME transforms non-ASCII data at sender side to NVT 7-bit data and delivers it
to the client SMTP. The message at receiver side is transferred back to the
original data. As well as we can send video and audio data using MIME as it
transfers them also in 7-bit ASCII data.
Features of MIME –
MIME is added to the original e-mail header and provides additional information, while POP, being the message access agent, organizes the mails from the mail server onto the receiver's computer. POP allows the user agent to connect with the message transfer agent.
MIME Header:
It is added to the original e-mail header section to define transformation. There
are five headers which we add to the original header:
1. MIME Version – Defines version of MIME protocol. It must have the
parameter Value 1.0, which indicates that message is formatted using MIME.
2. Content Type – Type of data used in the body of message. They are of
different types like text data (plain, HTML), audio content or video content.
3. Content Transfer Encoding – It defines the method used for encoding the message, like 7-bit encoding, 8-bit encoding, etc.
4. Content Id – It is used for uniquely identifying the message.
5. Content description – It defines whether the body is actually image, video or
audio.
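A brief sketch using Python's standard email package, showing the MIME headers described above being generated automatically; the addresses and attachment bytes are placeholders:

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "MIME demo"
msg.set_content("Plain text body")               # text/plain part
msg.add_attachment(b"\x89PNG...", maintype="image",
                   subtype="png", filename="logo.png")

# The serialized message now carries MIME-Version, Content-Type and
# Content-Transfer-Encoding headers on each part.
print(msg)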
DHCP:- Dynamic Host Configuration Protocol(DHCP) is an application layer
protocol which is used to provide:
1. Subnet Mask (Option 1 – e.g., 255.255.255.0)
2. Router Address (Option 3 – e.g., 192.168.1.1)
3. DNS Address (Option 6 – e.g., 8.8.8.8)
4. Vendor Class Identifier (Option 43 – e.g., ‘unifi’ = 192.168.1.9 ##where unifi
= controller)
DHCP is based on a client-server model and based on discovery, offer, request,
and ACK.
DHCP port number for server is 67 and for the client is 68. It is a Client server
protocol which uses UDP services. IP address is assigned from a pool of
addresses. In DHCP, the client and the server exchange mainly 4 DHCP messages
in order to make a connection, also called DORA process, but there are 8 DHCP
messages in the process.
These messages are given as below:
1. DHCP discover message –
This is the first message generated in the communication process between the server and the client. This message is generated by the client host in order to discover whether any DHCP server (or servers) is present in the network. The message is broadcast to all devices present in the network to find the DHCP server. This message is 342 or 576 bytes long.
Also the server has provided the offered IP address 192.16.32.51 and lease
time of 72 hours(after this time the entry of host will be erased from the server
automatically) . Also the client identifier is PC MAC address
(08002B2EAF2A) for all the messages.
3. DHCP request message –
When a client receives an offer message, it responds by broadcasting a DHCP request message. The client will produce a gratuitous ARP in order to find whether there is any other host present in the network with the same IP address. If there is no reply from another host, then there is no host with the same TCP/IP configuration in the network, and the message is broadcast to the server showing the acceptance of the IP address. A Client ID is also added in this message.
4. DHCP acknowledgement message –
Now the server will make an entry for the client host with the offered IP address and lease time. This IP address will not be provided by the server to any other host. The destination MAC address is FFFFFFFFFFFF, the destination IP address is 255.255.255.255, the source IP address is 172.16.32.12 and the source MAC address is 00AA00123456 (the server MAC address).
5. DHCP negative acknowledgement message –
Whenever a DHCP server receives a request for an IP address that is invalid according to the scopes it is configured with, it sends a DHCP Nak message to the client, e.g., when the server has no unused IP address or the pool is empty.
6. DHCP decline –
If the DHCP client determines that the offered configuration parameters are different or invalid, it sends a DHCP decline message to the server. When any host replies to the client's gratuitous ARP, the client sends a DHCP decline message to the server showing that the offered IP address is already in use.
7. DHCP release –
A DHCP client sends DHCP release packet to server to release IP address and
cancel any remaining lease time.
8. DHCP inform –
If a client has obtained an IP address manually, then the client uses DHCP inform to obtain other local configuration parameters, such as the domain name. In reply to the DHCP inform message, the DHCP server generates a DHCP ack message with the local configuration suitable for the client, without allocating a new IP address. This DHCP ack message is unicast to the client.
Note – All the messages can be unicast also by dhcp relay agent if the server is
present in different network.
Advantages – The advantages of using DHCP include:
SSH:- SSH, or Secure Shell, is now the only major protocol used to access network devices and servers over the internet. SSH was developed by SSH Communications Security Ltd.; it is a program to log into another computer over a network, to execute commands on a remote machine, and to move files from one machine to another.
It provides strong authentication and secure communications over insecure
channels.
SSH runs on port 22 by default; however it can be easily changed. SSH is a
very secure protocol because it shares and sends the information in encrypted
form which provides confidentiality and security of the data over an un-
secured network such as internet.
Once the data for communication is encrypted using SSH, it is extremely
difficult to decrypt and read that data, so our passwords also become secure to
travel on a public network.
SSH also uses a public key for the authentication of users accessing a server
and it is a great practice providing us extreme security. SSH is mostly used in
all popular operating systems like Unix, Solaris, Red-Hat Linux, CentOS,
Ubuntu etc.
SSH protects a network from attacks such as IP spoofing, IP source routing,
and DNS spoofing. An attacker who has managed to take over a network can
only force ssh to disconnect. He or she cannot play back the traffic or hijack
the connection when encryption is enabled.
When using ssh’s slogin (instead of rlogin) the entire login session, including
transmission of password, is encrypted; therefore it is almost impossible for an
outsider to collect passwords.
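A minimal sketch of scripted SSH access using the third-party paramiko library; the host name and credentials are placeholders, and in practice public-key authentication and strict host-key checking are preferred:

import paramiko   # third-party library: pip install paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # accept unknown host keys (lab use only)
client.connect("router.example.com", port=22, username="admin", password="secret")

# Execute a command on the remote machine over the encrypted channel.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()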