CN Notes in 24 Hours
Computer Networks in 24 Hours
Prepared By:
Dr. Ajay N. Upadhyaya, HOD CE & IT Dept., AIT &
Prof. Parinda Prajapati, Asst. Prof. CE Dept, AIT
1. Explain network topologies with advantages and disadvantages.
Network topology defines the structure of the network.
Bus Topology:-
Ring topology:-
Connects one node to the next, and the last node to the first.
Uses a token.
The token is passed from one node to another.
Only the node with the token is allowed to send data.
Advantage:- If a link is broken, the affected node is still reachable from the other side of the ring.
Disadvantage:- Unidirectional traffic.
Star topology:-
Each node is directly connected to the central device.
We can use a hub, switch, or router as the central device.
Advantage:-
If one link is broken, then only that particular node is unreachable.
Fewer collisions.
Most efficient.
Simple, and easy to identify faults.
Simple and easy to identify Fault.
Disadvantage:-
Traffic is heaviest at the central point.
All nodes depend on the central device.
Tree Topology:-
It’s also known as hierarchical topology.
All the nodes are connected in a tree structure.
Advantage:-
Simple and easy to identify Fault.
Disadvantage:-
If one link is broken, then the nodes of that sub-branch cannot send data to the other side.
Mesh topology:-
Two types of Mesh Topology:-
Fully Connected
Partially Connected
As seen in the graphic, each host has its own connections to all other hosts.
Each node has multiple paths to reach any particular location.
Advantage:-
Reachability is high.
Provides much protection.
Used at critical sites such as nuclear power plants.
Disadvantage:-
Cost is high.
Very Complex to Build.
Maintenance is also high.
2. Write a note on guided and unguided transmission media.
Guided transmission is where the signal (information or data) is sent through some
sort of cable, usually copper or optical fiber.
There are many different types of cabling:
Twisted Pair
Coaxial Cable (coax)
Fiber Optic Cable
3.3.1 Twisted Pair:
This consists of two or more insulated wires twisted together in a shape similar
to a helix.
Uses metallic conductors.
The wires are twisted around each other to reduce the amount of external
interference.
It consists of two copper conductors, each with its own colored plastic insulation.
This cable can be used at speeds of several Mb/s for a few kilometers.
Used for telephone line and lab network.
STP & UTP
UTP connector
Advantages of UTP
o Cost is less
o Easy to use
o Easy to install
o Flexible
UTP is used in Ethernet and Token Ring.
STP has a metal foil or cover.
Crosstalk (the effect of one channel on another) is less in STP.
STP has the same considerations as UTP.
The shield must be connected to ground.
Disadvantage of STP: cost is high.
3.3.2 Coaxial Cable (coax)
Categories of coaxial cables
The RG number denotes a unique set of physical specifications, including the wire gauge of the inner conductor.
Impedance
Advantages:-
o Easy to Install.
o Inexpensive installation.
o It is better for Higher Distance at Higher speed than twisted pair.
o Excellent noise immunity.
Disadvantage:-
o High cost
o Harder to work with
3.3.3 Fiber Optic Cable
o Data is transmitted along the fiber as light pulses
o The detector at the far end reconverts the light pulses into an electrical signal to be
interpreted as a 1 or a 0.
o This limits the data rate to 1 Gb/sec (1×10^9 bits/sec)
Propagation Modes:-
Two different light sources – both emit light when voltage applied
o LED – Light Emitting Diode – less costly, longer life
o ILD - Injection Laser Diode – greater data rate
Advantages of Fiber Optic over Copper Cable
o Fiber can handle much higher data rates than copper(More information can be
sent in one second using fiber)
o Fiber has low loss of signal power (attenuation), so repeaters are needed every
100km rather than every 5km for copper
o Fiber is not affected by power surges, electromagnetic interference or power
failure, or corrosive chemicals in the air
o Fibers are difficult to tap and therefore excellent for security
o Fibers are thin and lightweight, allowing more cables to fit into a given area.
o Noise Resistance.
o 1000 twisted-pair cables 1 km long weigh about 8000 kg, while 2 optical fiber
cables of the same length weigh only about 100 kg yet allow transfer of more data
Disadvantages of Fiber Optic:
o Cost is high
o Installation and maintenance are difficult
Fiber cable Connector
3.4.1 Radio Transmission
Introduction
o Radio waves are easy to generate and can travel long distances and penetrate
buildings.
o Radio waves are omnidirectional, meaning they travel in all directions from the
source.
o The transmitter and receiver therefore do not have to be in direct line of sight
Bands
Radio Transmission Properties
o At low frequencies (<100MHz) radio waves pass through obstacles well but the signal
power attenuates (falls off) sharply in air
o At higher frequencies (>100MHz) radio waves tend to travel in straight lines,
bounce off obstacles, and can be absorbed by rain (e.g., in the 8 GHz range)
o At all frequencies, radio waves are subject to interference from motors and other
electrical equipment
In the very low frequency (VLF), low frequency (LF), and medium frequency
(MF) bands (<1 MHz), radio waves follow the ground. (The maximum
possible distance that these waves can travel is approximately 1000 km.)
3.4.2 Microwave Transmission
o Unlike radio waves, microwaves typically do not pass through solid objects
o Some waves can be refracted due to atmospheric conditions and may take
longer to arrive than direct waves. These delayed waves can arrive out of
phase with the direct wave, causing destructive interference and
corrupting the received signal. This effect is called multipath fading
o Because of increased demand for more spectrum (range of frequencies used
to transmit), transmitters are using higher and higher frequencies
o Microwave communication is widely used for long distance telephone
communication and cell phones.
o Microwave signals propagate in one direction at a time, which means two
different frequencies are necessary for two-way communication.
o A transmitter is used to transmit the signal
o A receiver is used to receive the signal
o A transceiver is equipment that works as both a transmitter and a
receiver.
Terrestrial Microwave
Satellite Microwave
• Operates on a number of frequency bands known as transponders
– Point to Point (Ground station to satellite to ground station)
– Multipoint(Ground station to satellite to multiple receiving stations)
• Satellite orbit
– 35,786 km, to match Earth's rotation
– Stays fixed above the transmitter/receiver station as earth rotates
• Satellites need to be separated by distance
– Avoid interference
• Applications
– TV, long distance telephone(satellite phone), private business networks
• Optimum frequency range
– 1 – 10 GHz
– Below 1GHz results in noise, above 10GHz results in severe attenuation
Satellites in geosynchronous orbit
3.4.3 Infrared:
Infrared signals can be used for short-range communication in a closed area using
line-of-sight propagation.
Transceivers must be within line of sight of each other, or communicate via reflection
Unlike microwaves, infrared does not penetrate walls
No frequency allocation or licensing is required
3.4.4 Bluetooth
o Bluetooth is a wireless technology standard for exchanging data over short distances
(2.4 to 2.485 GHz) from fixed and mobile devices and building personal area
networks (PANs).
o Invented by telecom vendor Ericsson in 1994, it was originally conceived as a wireless
alternative to RS-232 data cables.
o It can connect several devices, overcoming problems of synchronization.
o It penetrates walls and other objects.
o No line of sight is required.
3. Explain Different types of Switching methods with examples.
There are three methods of switching.
Switching methods
A) Circuit Switching
A circuit switch is a device with n inputs and m outputs that creates a temporary
connection between an input link and an output link. The number of inputs does
not have to match the number of outputs.
Used in the telephone system: network resources in the telephone system are
reserved from your phone to the phone you call when you place the call; they're
released when you hang up.
In short, circuit switching:
• Dedicated communication path between two stations
• Three phases
— Establish
— Transfer
— Disconnect
• Must have switching capacity and channel capacity to establish connection
• Must have intelligence to work out routing
• Inefficient
— Channel capacity dedicated for duration of connection
— If no data, capacity wasted because of dedicated link.
• Set up (connection) takes time
• Once connected, transfer is transparent
• Developed for voice communication (phone)
• Resources dedicated to a particular call
• Much of the time a data connection (line) is idle and facilities are wasted.
• Circuit switching is inflexible. Once a circuit has been established, that circuit is the
path taken by all parts of the transmission whether or not it remains the most efficient
or available.
• Circuit switching sees all transmissions as equal.
• Data rate is fixed-Both ends must operate at the same rate
B) Packet Switching
Basic Operation:
Routing (addressing) info
• Packets are received, stored briefly (buffered), and passed on to the next node
Store and forward
Advantages
• Line efficiency
Single node to node link can be shared by many packets over time
Packets queued and transmitted as fast as possible
• Data rate conversion
Each station connects to the local node at its own speed
Nodes buffer data if required to equalize rates
• Packets are accepted even when network is busy
Delivery may slow down
• Priorities can be used
There are two popular approaches to packet switching:
(i) Datagram approach (Connectionless service)
(ii) Virtual circuit approach (Connection-oriented service)
C) Message Switching
• Processing Delay (Nodal Delay) [Dproc]: The time required to examine the packet header
and determine where to direct the packet.
• Propagation Delay [Dprop]:
This is simply the time it takes for a packet to travel from one place to another at
the speed of light. It is a simple measurement of how long it takes for a signal to
travel along the cable being tested.
• Transmission time [Dtrans]( Transmission time Delay): It is the time between the first
bit leaving the sender and the last bit arriving at the receiver.
Transmission time = Message size/Bandwidth
5. 1) Consider a packet of length L which begins at end system A and travels over three
links to a destination end system. These three links are connected by two packet
switches. Let di, si, and Ri denote the length, propagation speed, and the
transmission rate of link i, for i = 1, 2, 3. The packet switch delays each packet by
dproc. Assuming no queuing delays, in terms of di, si, Ri, (i = 1, 2, 3), and L, what is
the total end-to-end delay for the packet? Suppose now the packet is 1,500 bytes,
the propagation speed on all three links is 2.5 × 10^8 m/s, the transmission rates of all
three links are 2 Mbps, the packet switch processing delay is 3 msec, the length of
the first link is 5,000 km, the length of the second link is 4,000 km, and the length
of the last link is 1,000 km. For these values, what is the end-to-end delay?
Answer:
packet length = L
link i Length = di
propagation speed = si
transmission rate = Ri
the time the first end system requires to transmit the packet onto the first link = L/R1
the time the first packet switch requires to transmit the packet onto the second link = L/R2
the time the second packet switch requires to transmit the packet onto the third link = L/R3
the packet propagates over the first link = d1/s1
the packet propagates over the second link = d2/s2
the packet propagates over the third link = d3/s3
end to end delay = L/R1 + L/R2 + L/R3 + d1/s1 + d2/s2 + d3/s3 + 2·dproc
packet size = 1500 bytes = 1500*8 bits
propagation speed on all three links = 2.5 × 10^8 m/s
transmission rate of all three links = 2 Mbps
packet switch processing delay = 3 msec
length of the first link = 5000 km
length of the second link = 4000km
length of the last link = 1000 km
the first end system requires L/R1 = .006 sec to transmit the packet onto the first link
the packet propagates over the first link in d1/s1 = .02 sec
the first packet switch requires L/R2 = .006 sec to transmit the packet onto the second link
the packet propagates over the second link in d2/s2 = .016 sec
the second packet switch requires L/R3 = .006 sec to transmit the packet onto the third link
the packet propagates over the third link in d3/s3 = .004 sec
end to end delay = .006+.006+.006+.02+.016+.004+.003+.003
= 0.064 sec=64 msec
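The arithmetic above can also be scripted; a minimal sketch (the function and variable names are illustrative, the values are the ones given in the problem):

```python
def end_to_end_delay(L, links, d_proc):
    """Total delay: one transmission per hop, one propagation per link,
    plus processing at each intermediate packet switch."""
    trans = sum(L / R for (_, _, R) in links)   # L/R1 + L/R2 + L/R3
    prop = sum(d / s for (d, s, _) in links)    # d1/s1 + d2/s2 + d3/s3
    proc = d_proc * (len(links) - 1)            # one delay per packet switch
    return trans + prop + proc

L = 1500 * 8                  # packet size in bits
s = 2.5e8                     # propagation speed (m/s)
R = 2e6                       # 2 Mbps on every link
links = [(5000e3, s, R), (4000e3, s, R), (1000e3, s, R)]   # (length, speed, rate)
print(end_to_end_delay(L, links, 0.003))   # ≈ 0.064 s = 64 ms
```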
2) Suppose two hosts, A and B, are separated by 20,000 kilometers and are connected
by a direct link of R = 2 Mbps. Suppose the propagation speed over the link is
2.5 × 10^8 meters/sec.
a. Calculate the bandwidth-delay product, R* dprop.
b. Consider sending a file of 800,000 bits from Host A to Host B. Suppose the
file is sent continuously as one large message. What is the maximum number
of bits that will be in the link at any given time?
c. Provide an interpretation of the bandwidth-delay product.
d. What is the width (in meters) of a bit in the link? Is it longer than a football
field?
e. Derive a general expression for the width of a bit in terms of the propagation
speed s, the transmission rate R, and the length of the link m.
f. For what value of R is the width of a bit as long as the length of the link?
Answer:
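The notes leave this answer blank. A sketch of the computation, using only the values given in the problem:

```python
m = 2e7               # link length: 20,000 km in metres
R = 2e6               # transmission rate (bps)
s = 2.5e8             # propagation speed (m/s)

d_prop = m / s        # propagation delay: 0.08 s
bdp = R * d_prop      # (a) bandwidth-delay product: 160,000 bits
# (b) the 800,000-bit file exceeds the BDP, so at most `bdp` bits are in the
#     link at any given time
# (c) interpretation: the BDP is the maximum number of bits the link can hold
bit_width = m / bdp   # (d) ~125 m per bit, longer than a football field
# (e) in general, bit width = s / R, independent of the link length
# (f) the bit is as long as the link when s / R = m, i.e. R = s / m
R_full = s / m        # 12.5 bps
print(bdp, bit_width, R_full)
```

So the bandwidth-delay product is 160,000 bits, each bit is 125 m "wide" in the link, and the bit width equals the link length only at the very low rate R = 12.5 bps.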
4) A packet switch receives a packet and determines the outbound link to which the
packet should be forwarded. When the packet arrives, one other packet is halfway
done being transmitted on this outbound link and four other packets are waiting to
be transmitted. Packets are transmitted in order of arrival. Suppose all packets are
1,500 bytes and the link rate is 2 Mbps. What is the queuing delay for the packet?
More generally, what is the queuing delay when all packets have length L, the
transmission rate is R, x bits of the currently-being-transmitted packet have been
transmitted, and n packets are already in the queue?
Answer:
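The notes omit the worked answer. A sketch of the general formula, d_queue = (L − x + nL)/R, applied to the given numbers (the function name is illustrative):

```python
def queuing_delay(L, R, x, n):
    """Remaining bits of the in-progress packet plus n full queued packets,
    all sent at rate R."""
    return (L - x + n * L) / R

L = 1500 * 8            # packet size in bits
R = 2e6                 # link rate in bps
# "halfway done" means x = L/2; four packets are already queued
print(queuing_delay(L, R, x=L // 2, n=4))   # 0.027 s = 27 ms
```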
Responsibilities of the physical layer:
Physical characteristic of interfaces and media
o Transmission media between Source and Destination
o It may be wired or wireless
Representation of bits
o Bits must be encoded into signals (electrical or optical).
o It defines the type of encoding used.
Data rate
o Rate of data transmission / No. of bits transmitted per second
Synchronization
o Sender and receiver must be synchronized at the bit level.
o Clocks must be synchronized.
Line configuration
o Point-to-Point configuration & multipoint configuration
Physical topology
o Type of topology:-Star, Mesh, Ring, Bus.
Transmission mode
o Simplex, half-duplex or full-duplex
Responsibilities of the data link layer:
Framing:
o the data link layer divides the stream of bits received from the network layer
into manageable data units called frames.
Physical addressing:
o Add MAC Address (Layer 2 address/physical address) of Source and Receiver.
Flow control:
o Flow of data must be controlled.
o If the sending rate of the sender is higher than the receiving rate of the receiver,
then flow control is required.
Error Control:
o Adds reliability
o Detect and Retransmit damaged or lost frames.
o Prevent duplication of frames.
o Trailer is added to the end of the frame for error controlling.
Access Control:
o Determines which node has control over the link at any given time.
o Switches and bridges operate at Layer 2; hubs operate at Layer 1.
Network Layer:
Responsibilities of the transport layer
Service-point Addressing:
o Computers often run several programs at the same time.
o The transport layer header therefore must include a type of address called a
service-point address (or port address).
Segmentation and reassembly:
o A message is divided into transmittable segments.
o Each segment contains a sequence number.
o These numbers enable the transport layer to reassemble the message correctly
upon arrival at the destination, and to identify and replace packets that were
lost in transmission.
Connection Control:
o The transport layer can be either connectionless or connection-oriented.
o A connectionless transport layer treats each segment as an independent packet
and delivers it to the transport layer at the destination machine.
o A connection-oriented transport layer makes a connection with the transport
layer at the destination machine first before delivering the packets. After all the
data are transferred, the connection is terminated.
Flow control:
o It is responsible for flow control for end to end rather than across a single link.
Error control:
o It is responsible for error control end to end, rather than across a single link.
o The sending transport layer makes sure that the entire message arrives at the
receiving transport layer without error (damage, loss, or duplication).
o Error correction is usually achieved through retransmission.
4.3.4 Session Layer
The session layer is the network dialog controller.
It establishes, maintains, and synchronizes the interaction between communicating
systems.
Specific responsibilities of the session layer include the following:
Dialog control:
o The session layer allows two systems to enter into a dialog.
o It allows the communication between two processes to take place either in half-
duplex (one way at a time) or full-duplex (two ways at a time) mode. For example,
the dialog between a terminal and a mainframe can be half-duplex.
Synchronization:
o It allows a process to add checkpoints (synchronization points) into a stream of
data.
o For example, if a system is sending a file of 2000 pages, it is advisable to insert
checkpoints after every 100 pages to ensure that each 100-page unit is received
and acknowledged independently. In this case, if a crash happens during the
transmission of page 523, retransmission begins at page 501: pages 1 to 500
need not be transmitted.
4.3.5 Presentation Layer
The presentation layer is concerned with the syntax and semantics of the information
exchanged between two systems.
4.3.6 Application Layer
The application layer enables the user, whether human or software, to access the
network.
It provides user interfaces and support for services such as electronic mail, remote file
access and transfer, shared database management, and other types of distributed
information services.
2) Draw the layered architecture of TCP/IP model and write at least two services
provided by each layer of the model.
TCP/IP model
Application Layer
Simple Network Management Protocol (SNMP)
Domain Name System (DNS)
Transport Layer
3) What are the five layers in the Internet protocol stack? What are the principal
responsibilities of each of these layers?
Five Layer Internet Protocol Stack.
Students have to explain the same content as given above, but organized into the five layers.
Note: As per the book Computer Networking – A Top-Down Approach by Kurose & Ross
7. Enlist various framing techniques used in data link layer. Explain any one in detail with
suitable example.
Framing
In the OSI model of computer networking, a frame is the protocol data unit at the data link
layer. Frames are the result of the final layer of encapsulation before the data is transmitted
over the physical layer.
• Each frame starts and ends with special bytes: flag bytes.
• Two consecutive flag bytes indicate the end of one frame and the beginning of a new frame.
• Problem?
• What if flag bit pattern occurs in data?
• (a) A frame delimited by flag bytes.
• (b) Four examples of byte sequences before and after stuffing.
• Single ESC: part of the escape sequence.
• Doubled ESC: single ESC is part of data.
• De-stuffing.
• Problem: what if the character encoding does not use 8-bit characters?
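Byte stuffing and de-stuffing as described above can be sketched as follows; the FLAG and ESC values (0x7E, 0x7D) are illustrative PPP-style choices, not values fixed by the notes:

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative framing bytes

def byte_stuff(payload: bytes) -> bytes:
    """Precede every FLAG or ESC in the data with an ESC, then frame with FLAGs."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # escaped FLAG / doubled ESC
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """De-stuffing: drop the framing FLAGs and each escape byte."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                   # skip the escape, keep the next byte
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, ESC, 0x42])
assert byte_unstuff(byte_stuff(data)) == data
```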
Bit Stuffing
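The notes' figure for bit stuffing is not reproduced here; the rule itself (insert a 0 after five consecutive 1s so the data can never mimic the 01111110 flag) can be sketched:

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert a 0 so the data never looks
    like the 01111110 flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")          # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                     # this is the stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

assert bit_stuff("0111111") == "01111101"
assert bit_unstuff(bit_stuff("11111111")) == "11111111"
```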
1) Error detection methods at the Data Link layer (generally one method may be asked)
[1] VRC –Vertical Redundancy Check
It is the least expensive method.
It is also called parity checking, and the added bit is called the parity bit.
The parity bit is appended to every data unit so that the total number of 1s (including
the parity bit) becomes either even or odd.
parity bit) becomes either Even or Odd.
Example:-
If the data is 10101010,
and the user wants data with even parity, then the data becomes 101010100 (the total
number of 1s is four);
and if the user wants data with odd parity, then the data becomes 101010101 (the total
number of 1s is five).
VRC can detect all single-bit errors.
It can also detect burst errors, as long as the total number of bits changed is odd (1, 3, 5, …).
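The parity computation can be sketched (the function name is illustrative):

```python
def add_parity(data: str, even: bool = True) -> str:
    """Append a parity bit so the total count of 1s is even (or odd)."""
    ones = data.count("1")
    bit = ones % 2 if even else (ones + 1) % 2
    return data + str(bit)

# The worked example: 10101010 already has four 1s
assert add_parity("10101010", even=True) == "101010100"
assert add_parity("10101010", even=False) == "101010101"
```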
[2] LRC –Longitudinal Redundancy Check
Blocks of bits are organized in a table format (in rows and columns).
Example:
We have data
01100111000111010001100100101001
32 bit block is divided into 4 rows and 8 columns.
01100111 00011101 00011001 00101001
Arrange it in table Format.
01100111
00011101
00011001
00101001
------------
01001010 LRC
Sender sends Original Data plus LRC to the receiver.
01100111 00011101 00011001 00101001 01001010
It is generally used to detect burst errors.
Sometimes the LRC checker cannot detect an error.
Example: if the data blocks are 11110000 and 11000011,
and the first and last bits in each of them are changed,
then the data becomes 01110001 and 01000010, and the LRC does not find the error.
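The column-wise parity computation can be sketched, reproducing the worked example:

```python
def lrc(blocks):
    """Column-wise even parity over equal-length binary strings."""
    return "".join(
        str(sum(int(b[i]) for b in blocks) % 2) for i in range(len(blocks[0]))
    )

data = ["01100111", "00011101", "00011001", "00101001"]
assert lrc(data) == "01001010"            # the LRC from the worked example
# The undetectable case from the notes: the changed bits cancel column-wise
assert lrc(["11110000", "11000011"]) == lrc(["01110001", "01000010"])
```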
[3] CRC – Cyclic Redundancy Check
The CRC (CRC remainder) is appended to the end of the data unit.
The CRC remainder has exactly one bit less than the divisor.
The data is also called the frame.
The divisor is also called the generator.
It is the most powerful redundancy checking system.
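The modulo-2 division behind CRC can be sketched; the generator 1101 is an illustrative choice, not one fixed by the notes:

```python
def mod2div(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    bits = list(dividend)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])

def crc_remainder(data: str, divisor: str) -> str:
    """Sender: append len(divisor)-1 zeros, keep the remainder."""
    return mod2div(data + "0" * (len(divisor) - 1), divisor)

rem = crc_remainder("100100", "1101")
assert rem == "001"                          # one bit less than the divisor
# Receiver: dividing data + remainder by the same generator leaves all zeros
assert mod2div("100100" + rem, "1101") == "000"
```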
[4] Checksum:
It is an error detection method used by higher-layer protocols.
The sender follows these steps:
1) Divide the whole data unit into k sections, each containing n bits.
2) Add all sections together.
3) Complement the sum; this becomes the checksum.
4) Send the checksum with the data.
The receiver follows these steps:
1) Divide the whole data unit into k sections, each containing n bits.
2) Add all sections together.
3) Complement the sum.
4) If the result is zero then the data are accepted; otherwise the data are rejected.
Example:
At sender side
Data of 16 bits:
10101001 00111001
The addition of these two numbers (each containing 8 bits) is:
10101001
00111001
---------------------
SUM- 11100010
CHECKSUM: 0 0 0 1 1 1 0 1
At Receiver side
Data of 24 bit
10101001 00111001 00011101
10101001
00111001
00011101
---------------------
SUM-1 1 1 1 1 1 1 1
And its complement 0 0 0 0 0 0 0 0
If the result is all zeros, then there is no error and the data are accepted.
The receiver can detect an error, but it cannot locate the error within a particular data unit.
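The sender and receiver steps can be sketched. This version folds any carry back into the sum (end-around carry, as in the Internet checksum); the worked example produces no carry, so its result matches either way:

```python
def checksum8(sections):
    """Add 8-bit sections with end-around carry, then take the complement."""
    total = 0
    for s in sections:
        total += s
        total = (total & 0xFF) + (total >> 8)   # fold any carry back in
    return ~total & 0xFF

sender = [0b10101001, 0b00111001]
cs = checksum8(sender)
assert cs == 0b00011101                 # matches the worked example
# Receiver sums data + checksum; the complement of the sum must be zero
assert checksum8(sender + [cs]) == 0
```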
2) Error correction method at Data link layer
Error correction can be done in two ways:
1) Whenever an error is detected, retransmit the entire data unit.
2) Whenever an error is detected, correct the error using a technique such as
Forward Error Correction or Burst Error Correction.
Number of redundancy bits r needed for m data bits:
2^r >= m + r + 1
Hamming code
Position of redundant bit
Example:
Redundancy bit calculation
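The redundancy-bit formula can be solved for r by a simple search; a sketch:

```python
def redundancy_bits(m: int) -> int:
    """Smallest r satisfying 2**r >= m + r + 1."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

assert redundancy_bits(4) == 3    # Hamming(7,4): 4 data bits need 3 check bits
assert redundancy_bits(7) == 4    # 7 data bits need 4 check bits
```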
10. Write a short note on:
1) ALOHA
2) slotted ALOHA
3) CSMA
4) CSMA/CD Protocol
5) Bitmap protocol
B.1 Slotted ALOHA
Assumptions:
All frames same size
Time is divided into equal size slots, time to transmit 1 frame
Nodes start to transmit frames only at beginning of slots
Nodes are synchronized
If 2 or more nodes transmit in slot, all nodes detect collision
Operation:
When node obtains fresh frame, it transmits in next slot
No collision, node can send new frame in next slot
If collision, node retransmits frame in each subsequent slot with prob. p until success
Pros:
Single active node can continuously transmit at full rate of channel
Highly decentralized: only slots in nodes need to be in sync
Simple
Cons:
Collisions, wasting slots
Idle slots
Nodes may be able to detect collision in less than time to transmit packet
Clock synchronization
Slotted Aloha efficiency:
Efficiency is the long-run fraction of successful slots when there are many nodes, each
with many frames to send
Suppose N nodes with many frames to send, each transmits in slot with probability p
Prob. that node 1 has success in a slot = p(1-p)^(N-1)
Prob. that any node has a success = Np(1-p)^(N-1)
For max efficiency with N nodes, find p* that maximizes Np(1-p)^(N-1)
For many nodes, take the limit of Np*(1-p*)^(N-1) as N goes to infinity; this gives 1/e = .37
At best: channel used for useful transmissions 37% of time!
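The limit can be checked numerically; a sketch using the known maximizer p* = 1/N:

```python
import math

def slotted_aloha_efficiency(N: int) -> float:
    """N*p*(1-p)**(N-1) evaluated at the maximizing p* = 1/N."""
    p = 1 / N
    return N * p * (1 - p) ** (N - 1)

# As N grows, the efficiency approaches 1/e ~ 0.37
assert abs(slotted_aloha_efficiency(10**6) - 1 / math.e) < 1e-4
```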
B.2 Pure ALOHA
In pure ALOHA, a station can send at any time.
Pure ALOHA has the feedback property that enables a station to listen to
the channel and find out whether its frame was destroyed.
Pure Aloha efficiency:
P(success by given node) = P(node transmits) ·
P(no other node transmits in [t0-1, t0]) ·
P(no other node transmits in [t0, t0+1])
= p · (1-p)^(N-1) · (1-p)^(N-1)
= p · (1-p)^(2(N-1))
Choosing the optimum p and then letting N -> infinity gives an even worse result: 1/(2e) = .18
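The same numerical check for pure ALOHA, using the maximizer p* = 1/(2N−1):

```python
import math

def pure_aloha_efficiency(N: int) -> float:
    """N*p*(1-p)**(2*(N-1)) evaluated at the maximizing p* = 1/(2N-1)."""
    p = 1 / (2 * N - 1)
    return N * p * (1 - p) ** (2 * (N - 1))

# Half the slotted-ALOHA result: the efficiency approaches 1/(2e) ~ 0.18
assert abs(pure_aloha_efficiency(10**6) - 1 / (2 * math.e)) < 1e-4
```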
B.3 CSMA (Carrier Sense Multiple Access)
Introduction:
A station senses the channel before it starts transmission
o If busy, either wait or schedule backoff (different options)
o If idle, start transmission
o Vulnerable period is reduced to tprop (due to channel capture effect)
o When collisions occur they involve entire frame transmission times
o Human analogy: don’t interrupt others!
CSMA Options:
Transmitter behavior when busy channel is sensed
o 1-persistent CSMA (most greedy)
Start transmission as soon as the channel becomes idle
Low delay and low efficiency
o Non-persistent CSMA (least greedy)
If busy, wait a backoff period, then sense carrier again
High delay and high efficiency
o p-persistent CSMA (adjustable greedy)
Wait till channel becomes idle, transmit with prob. p; or
wait one mini-slot time & re-sense with probability 1-p
Delay and efficiency can be balanced
B.4 CSMA/CD (CSMA with Collision Detection)
Assumptions:
Collisions can be detected and resolved in 2tprop
Time slotted in 2tprop slots during contention periods
Assume n busy stations, and each may transmit with probability p in each contention
time slot
Once the contention period is over (a station successfully occupies the channel), it
takes X seconds for a frame to be transmitted
It takes tprop before the next contention period starts.
Reservation System Options
Centralized or distributed system
o Centralized systems: A central controller listens to reservation information,
decides order of transmission, issues grants
o Distributed systems: Each station determines its slot for transmission from the
reservation information
Single or Multiple Frames
o Single frame reservation: Only one frame transmission can be reserved within a
reservation cycle
o Multiple frame reservation: More than one frame transmission can be reserved
within a frame.
11. Explain Ethernet Frame structure.
OR
Draw and explain Ethernet header.
The basic frame consists of seven elements split between three main areas:-
Header
o Preamble (PRE) - This is seven bytes long and it consists of a pattern of
alternating ones and zeros, and this informs the receiving stations that a frame
is starting as well as enabling synchronisation.
o Start Of Frame delimiter (SOF) - This consists of one byte and contains an
alternating pattern of ones and zeros but ending in two ones.
o Destination Address (DA) – This consists of the six-byte destination address.
o Source Address (SA) - The source address consists of six bytes, and it is used to
identify the sending station
o Length / Type - This field is two bytes in length. It provides MAC information
and indicates the number of client data bytes contained in the data field of
the frame. It may also indicate the frame ID type if the frame is assembled
using an optional format (IEEE 802.3 only).
Payload
o Data - This block contains the payload data and it may be up to 1500 bytes
long. If the length of the field is less than 46 bytes, then padding data is added
to bring its length up to the required minimum of 46 bytes.
Trailer
o Frame Check Sequence (FCS) - This field is four bytes long. It contains a 32 bit
Cyclic Redundancy Check (CRC) which is generated over the DA, SA, Length /
Type and Data fields.
The original Ethernet IEEE 802.3 standard defined the minimum Ethernet frame size as 64
bytes and the maximum as 1518 bytes.
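The layout above can be sketched as a parser; the sample frame is hypothetical, and the FCS is left zero-filled rather than computed:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split off DA (6 bytes), SA (6 bytes) and Length/Type (2 bytes);
    the preamble/SFD are stripped by hardware and not usually seen here."""
    da, sa, length_type = struct.unpack("!6s6sH", frame[:14])
    payload, fcs = frame[14:-4], frame[-4:]
    return da, sa, length_type, payload, fcs

# Hypothetical minimum-size frame: zeroed addresses, 46-byte padded payload,
# zero-filled FCS (a real FCS would be a CRC-32 over the DA..Data fields)
frame = bytes(6) + bytes(6) + struct.pack("!H", 46) + bytes(46) + bytes(4)
da, sa, lt, payload, fcs = parse_ethernet_header(frame)
assert len(frame) == 64 and lt == 46 and len(payload) == 46
```

Note how 6 + 6 + 2 header bytes, the 46-byte minimum payload, and the 4-byte FCS add up to exactly the 64-byte minimum frame size.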
TCP/IP supports four other Protocols in the network Layer: ARP, RARP, ICMP, and IGMP.
destination host to send all packets.
ARP packet
Encapsulation of RARP packet
IGMP message type
VER (4 BITS)
o The version field is set to the value '4' in decimal or '0100' in binary.
o The value indicates the version of IP (4 or 6, there is no version 5).
HLEN (4 BITS)
o Defines the length of the header.
o The length is in multiples of four bytes.
o The four bits can represent a number between 0 and 15, which multiplied by 4
gives a maximum of 60 bytes.
Service types (8 Bits)
o Defines how the datagram should be handled.
o Defines the priority.
TOTAL LENGTH (16 BITS)
o This informs the receiver of the datagram where the end of the data in this
datagram is.
o This is why an IP datagram can be up to 65,535 bytes long, as that is the
maximum value of this 16-bit field.
IDENTIFICATION (16 bits)
o It is used for Fragmentation.
o Sometimes, a device in the middle of the network path cannot handle the
datagram at the size it was originally transmitted, and must break it into
fragments.
o If an intermediate system needs to break up the datagram, it uses this field to
aid in identifying the fragments.
FLAGS (3 BITS)
o The flags field contains single-bit flags that indicate whether the datagram is a
fragment, whether it is permitted to be fragmented, and whether the datagram
is the last fragment, or there are more fragments.
o The first bit in this field is always zero.
FRAGMENT OFFSET (13 BITS)
o When a datagram is fragmented, it is necessary to reassemble the fragments in
the correct order.
o The fragment offset numbers the fragments in such a way that they can be
reassembled correctly.
TIME TO LIVE (8 BITS)
o This field determines how long a datagram will exist.
o At each hop along a network path, the datagram is opened and its time to live
field is decremented by one (or more than one in some cases).
o When the time to live field reaches zero, the datagram is said to have 'expired'
and is discarded.
o This prevents the network congestion that is created when a datagram
cannot be forwarded to its destination.
o Most applications set the time to live field to 30 or 32 by default.
PROTOCOL (8 BITS)
o This indicates what type of protocol is encapsulated within the IP datagram.
Some of the common values seen in this field include:
Protocol   Number (Decimal)
ICMP       1
IGMP       2
TCP        6
UDP        17
HEADER CHECKSUM (16 BITS)
o The checksum allows IP to detect datagram with corrupted headers and discard
them.
o Since the time to live field changes at each hop, the checksum must be re-calculated
at each hop.
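The per-hop recalculation uses the standard one's-complement sum over the header's 16-bit words; a sketch with a hypothetical 20-byte header whose checksum field (bytes 10-11) starts out zeroed:

```python
def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, then complement the result."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

# Hypothetical header for illustration, checksum field zeroed
hdr = bytearray.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
cs = ipv4_checksum(bytes(hdr))
hdr[10:12] = cs.to_bytes(2, "big")
# Summing the completed header (checksum included) now yields zero,
# which is exactly the receiver's validity test
assert ipv4_checksum(bytes(hdr)) == 0
```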
IPv6 packets have their own frame Ethertype value, 0x86dd, making it easy for receivers that
must handle both IPv4 and IPv6 to distinguish the frame content on the same interface. The
IPv6 header comprises the following fields:
Version: A four-bit field for the IP version number (6).
Traffic Class: An 8-bit field that identifies the major class of the packet content (for
example, voice or video packets). The default value is 0, meaning it is ordinary bulk
data (such as FTP) and requires no special handling.
Flow Label: A 20-bit field used to label packets belonging to the same flow (those
with the same values in several TCP/IP header parameters). The flow label is
normally 0 (flows are detected in other ways).
Payload Length: A 16-bit field giving the length of the packet in bytes, excluding the
IPv6 header.
Next Header: An 8-bit field giving the type of header immediately following the IPv6
header (this serves the same function as the Protocol field in IPv4).
Hop Limit: An 8-bit field set by the source host and decremented by 1 at each router.
Packets are discarded if the Hop Limit is decremented to zero (this replaces the IPv4
Time To Live field). Generally, implementers choose the default to use, but values such as
64 or 128 are common.
IPv4 vs IPv6:

IPv4: 32-bit (4-byte) address supporting 4,294,967,296 addresses (although many
were lost to special purposes, like 10.0.0.0 and 127.0.0.0).
IPv6: 128-bit (16-byte) address supporting 2^128 (about 3.4 x 10^38) addresses.

IPv4: Address shortages: IPv4 supports 4.3 x 10^9 (4.3 billion) addresses, which
is inadequate to give one (or more, if they possess more than one device) to
every living person.
IPv6: Larger address space: IPv6 supports 3.4 x 10^38 addresses, or 5 x 10^28
(50 octillion) for each of the roughly 6.5 billion people alive today.

IPv4: NAT can be used to extend address limitations.
IPv6: No NAT support (by design).

IPv4: IP addresses assigned to hosts by DHCP or static configuration.
IPv6: IP addresses self-assigned to hosts with stateless address
auto-configuration or DHCPv6.

IPv4: IPSec support optional.
IPv6: IPSec support required.

IPv4: Options integrated in header fields.
IPv6: Options supported with extension headers (simpler header format).

IPv4: Has broadcast addresses for all devices.
IPv6: No such concept in IPv6 (uses multicast groups).

IPv4: Uses 127.0.0.1 as loopback address.
IPv6: Uses ::1 as loopback address.

IPv4: Subdivided into classes (A-E).
IPv6: Classless; uses a prefix and an interface identifier.

IPv4: Header has 20 bytes.
IPv6: Header is double that; it has 40 bytes.

IPv4: Header has many fields (13 fields).
IPv6: Header has fewer fields (8 fields).

IPv4: Uses a subnet mask.
IPv6: Uses a prefix length.

IPv4: Lacks security; IPv4 was never designed to be secure (originally designed
for an isolated military network, then adapted for a public educational and
research network).
IPv6: Has built-in strong security (encryption and authentication).
16. Subnetting Examples:
1) One of the addresses in a block is 17.63.110.114/24. Find the first address, and the last
address in the block.
Answer:
With a /24 prefix the first 24 bits identify the network, so only the last octet
is the host part.
First address (host bits all 0s): 17.63.110.0
Last address (host bits all 1s): 17.63.110.255
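The result can be cross-checked with Python's standard `ipaddress` module (`strict=False` accepts a host address and returns its enclosing block):

```python
import ipaddress

net = ipaddress.ip_network('17.63.110.114/24', strict=False)
print(net.network_address)    # 17.63.110.0
print(net.broadcast_address)  # 17.63.110.255
```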
2) Consider a router that interconnects three subnets: Subnet 1, Subnet 2, and Subnet 3.
Suppose all of the interfaces in each of these three subnets are required to have the
prefix 223.1.17/24. Also suppose that Subnet 1 is required to support at least 60
interfaces, Subnet 2 is to support at least 90 interfaces, and Subnet 3 is to support at
least 12 interfaces. Provide three network addresses (of the form a.b.c.d/x) that satisfy
these constraints.
Answer:
Recall:
• The network address cannot be used for an interface (Network prefix + all zeros).
• The broadcast address cannot be used for an interface (Network prefix + all ones)
Then:
Subnet 1 requires 60 interfaces + 2 (network + broadcast) =>
so 64 addresses will suffice
Subnet 2 requires 90 interfaces + 2 (network + broadcast) =>
so 128 addresses will suffice
Subnet 3 requires 12 interfaces + 2 (network + broadcast) =>
so 16 addresses will suffice
One Solution that Works:
Sub 2 223.1.17.0/25   223.1.17.00000000/25 223.1.17.0 to 223.1.17.127 = 128
Sub 1 223.1.17.128/26 223.1.17.10000000/26 223.1.17.128 to 223.1.17.191 = 64
Sub 3 223.1.17.192/28 223.1.17.11000000/28 223.1.17.192 to 223.1.17.207 = 16
3) An ISP is granted a block of addresses starting with 120.60.4.0/20. The ISP wants to
distribute these blocks to 100 organizations with each organization receiving 8
addresses only. Design the subblocks and give the slash notation for each subblock.
Find out how many addresses are still available after these allocations.
The site has 2^(32−20) = 2^12 = 4096 addresses.
Each group needs 8 addresses only. That means the prefix length is 32 − 3 = 29, so the slash
notation will be /29.
1st subnet: 120.60.4.0/29 to 120.60.4.7/29
... ... ...
32nd subnet: 120.60.4.248/29 to 120.60.4.255/29
33rd subnet: 120.60.5.0/29 to 120.60.5.7/29
... ... ...
64th subnet: 120.60.5.248/29 to 120.60.5.255/29
... ... ...
99th subnet: 120.60.7.16/29 to 120.60.7.23/29
100th subnet: 120.60.7.24/29 to 120.60.7.31/29
After allocating the 100 subnets:
4096 − 100 × 8 = 4096 − 800 = 3296 addresses left
Subnet Mask
The subnet mask is therefore given by,
255.255.255.248
Step 2 : Number of addresses in each subnet :
The structure of a class C address is as shown in Fig. 23(b).
Step 3 : First and the last address in subnet 1 :
First address in subnet-I = 211.17.180.0
Last address in subnet-I = 211.17.180.7
Step 4 : First and the last address in subnet 32 :
First address in subnet-32 = 211.17.180.248
Last address in subnet-32 = 211.17.180.255
5) Consider a router that interconnects three subnets: Subnet 1, Subnet 2, and Subnet 3.
Suppose all of the interfaces in each of these three subnets are required to have the
prefix 223.1.17/24. Also suppose that Subnet 1 is required to support at least 60
interfaces, Subnet 2 is to support at least 90 interfaces, and Subnet 3 is to support at
least 12 interfaces. Provide three network addresses (of the form a.b.c.d/x) that satisfy
these constraints. (Dec- 2015, May- 2015)
Answer:
1. Subnet 1:- 223.1.17.0/26
2. Subnet 2:- 223.1.17.128/25
3. Subnet 3:- 223.1.17.64/28
Method:
Prefix of network for all:223.1.17/24
The above prefix shows that total network bits here are 24 and so we have remaining 8 bits
(32-24) of host at our disposal.
Subnet mask (Decimal Equivalent) =255.255.255.192
Subnet mask (CIDR notation- /N): /26
To find network addresses:-
As the prefix must be : 223.1.17
Range can be defined as follows:
(To find the range, make a table as below containing starting addresses and ending
addresses. To compute starting addresses: place starting address as all 0’s after given
network address i.e. 223.1.17.0 as initial address, keep on adding block size to initial starting
address until you reach the last starting address. The last starting address is the
network address followed by the subnet mask's last octet, i.e. 223.1.17.192 here.
The ending address will always be the next starting address − 1. The last ending address
will always be the network address followed by 255, i.e. 223.1.17.255 here.)
Starting addresses Ending Addresses
223.1.17.0 223.1.17.63
223.1.17.64 223.1.17.127
223.1.17.128 223.1.17.191
223.1.17.192 223.1.17.255
(Note: To write the CIDR notation you can use any address from the ranges defined
above; the starting address of the chosen range is used to denote the subnet. For
convenience, the 1st range of addresses is used here. But remember that the 64
addresses of that range (64 − 2 = 62 valid hosts, covering the 60 required
interfaces) will not be usable by the remaining subnets, since all three subnets
are part of the same network 223.1.17/24. For ease, assume the addresses are taken
contiguously, i.e. the next subnet cannot use the range 223.1.17.0 to 223.1.17.63.
Place the CIDR notation of the subnet mask after the network address.)
So we have,
N = 25
H = 7
Block size / IP addresses in each subnet = 2^H = 2^7 = 128
Total networks = 2^N = 2^25
Valid hosts = Block size − 2 = 128 − 2 = 126
Subnet mask: write N (25) ones followed by H (7) zeros, and convert to decimal as we
convert an IP address.
Subnet mask (Binary) 11111111.11111111.11111111.10000000
Subnet mask (Decimal Equivalent) =255.255.255.128
Subnet mask (CIDR notation- /N): /25
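The mask construction just shown (N ones followed by H zeros) can be sketched as a small helper (hypothetical, not from the notes):

```python
def mask_from_prefix(n: int) -> str:
    """Dotted-decimal mask for a /n prefix: n ones followed by (32-n) zeros."""
    bits = (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF
    return '.'.join(str((bits >> s) & 0xFF) for s in (24, 16, 8, 0))

print(mask_from_prefix(25))   # 255.255.255.128
print(mask_from_prefix(26))   # 255.255.255.192
print(mask_from_prefix(28))   # 255.255.255.240
```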
To find network addresses:-
As the prefix must be : 223.1.17
Range can be defined as follows:
Starting addresses Ending Addresses
223.1.17.0 223.1.17.127
223.1.17.128 223.1.17.255
(Note: To write the CIDR notation, the starting address of the chosen range is used
to denote the subnet. But for this subnet you cannot use the range containing
addresses 223.1.17.0 to 223.1.17.63, as they are used in creating subnet 1. So we are
left with only the second option, the range 223.1.17.128 to 223.1.17.255. Remember
that these addresses will not be usable by the next subnet.)
Subnet mask: write N (28) ones followed by H (4) zeros, and convert to decimal as we
convert an IP address.
Subnet mask (Binary) 11111111.11111111.11111111.11110000
Subnet mask (Decimal Equivalent) =255.255.255.240
Subnet mask (CIDR notation- /N): /28
To find network addresses:-
As the prefix must be : 223.1.17
Range can be defined as follows:
Starting addresses Ending Addresses
223.1.17.0 223.1.17.15
223.1.17.16 223.1.17.31
... ...
223.1.17.64 223.1.17.79
... ...
223.1.17.240 223.1.17.255
[Extra: Some alternate correct answers:
1. Subnet 1:- 223.1.17.0/26, Subnet 2:- 223.1.17.128/25, Subnet 3:- 223.1.17.80/28
2. Subnet 1:- 223.1.17.0/26, Subnet 2:- 223.1.17.128/25, Subnet 3:- 223.1.17.96/28
Etc.]
17. Explain multiplexing and demultiplexing at the transport layer.
Each transport-layer segment has a set of fields in the segment for directing an
incoming transport-layer segment to the appropriate socket at the receiving host
At the receiving end, the transport layer examines these fields to identify the
receiving socket and then directs the segment to that socket. This job of delivering the
data in a transport-layer segment to the correct socket is called demultiplexing.
The job of gathering data chunks at the source host from different sockets,
encapsulating each data chunk with header information (that will later be used in
demultiplexing) to create segments, and passing the segments to the network layer is
called multiplexing.
Note that the transport layer in the middle host in Figure must demultiplex segments
arriving from the network layer below to either process P1 or P2 above; this is done
by directing the arriving segment’s data to the corresponding process’s socket. The
transport layer in the middle host must also gather outgoing data from these sockets,
form transport-layer segments, and pass these segments down to the network layer.
Transport-layer multiplexing requires (1) that sockets have unique identifiers, and (2)
that each segment have special fields that indicate the socket to which the segment is
to be delivered. These special fields, illustrated in Figure, are the source port number
field and the destination port number field.
Each port number is a 16-bit number, ranging from 0 to 65535. The port numbers
ranging from 0 to 1023 are called well-known port numbers and are restricted, which
means that they are reserved for use by well-known application protocols such as
HTTP (which uses port number 80) and FTP (which uses port number 21).
It should now be clear how the transport layer could implement the demultiplexing
service: Each socket in the host could be assigned a port number, and when a segment
arrives at the host, the transport layer examines the destination port number in the
segment and directs the segment to the corresponding socket. The segment’s data
then passes through the socket into the attached process.
This is basically how UDP does it.
However, we’ll also see that multiplexing/demultiplexing in TCP is yet more subtle.
Suppose a process in Host A, with UDP port 19157, wants to send a chunk of
application data to a process with UDP port 46428 in Host B. The transport layer in
Host A creates a transport-layer segment that includes the application data, the
source port number (19157), the destination port number (46428), and two other
values (which will be discussed later, but are unimportant for the current discussion).
The transport layer then passes the resulting segment to the network layer. The
network layer encapsulates the segment in an IP datagram and makes a best-effort
attempt to deliver the segment to the receiving host. If the segment arrives at the
receiving Host B, the transport layer at the receiving host examines the destination
port number in the segment (46428) and delivers the segment to its socket identified
by port 46428. Note that Host B could be running multiple processes, each with its
own UDP socket and associated port number. As UDP segments arrive from the
network, Host B directs (demultiplexes) each segment to the appropriate socket by
examining the segment’s destination port number.
It is important to note that a UDP socket is fully identified by a two-tuple consisting
of a destination IP address and a destination port number. As a consequence, if two
UDP segments have different source IP addresses and/or source port numbers, but
have the same destination IP address and destination port number, then the two
segments will be directed to the same destination process via the same destination
socket.
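A minimal local sketch of UDP demultiplexing, reusing the port numbers from the example (this is an illustration, not part of the notes; any free ports would do):

```python
import socket

# "Host B": a process bound to UDP port 46428
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 46428))

# "Host A": a process sending from UDP port 19157
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.bind(('127.0.0.1', 19157))
send_sock.sendto(b'hello', ('127.0.0.1', 46428))

# The OS demultiplexes purely on the destination port (the two-tuple);
# the source port is reported alongside the data.
data, (src_ip, src_port) = recv_sock.recvfrom(1024)
print(data, src_port)    # b'hello' 19157
recv_sock.close()
send_sock.close()
```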
2) Connection-Oriented Multiplexing and Demultiplexing
In order to understand TCP demultiplexing, we have to take a close look at TCP
sockets and TCP connection establishment. One subtle difference between a TCP
socket and a UDP socket is that a TCP socket is identified by a four-tuple: (source IP
address, source port number, destination IP address, destination port number).
Thus, when a TCP segment arrives from the network to a host, the host uses all four
values to direct (demultiplex) the segment to the appropriate socket.
In particular, and in contrast with UDP, two arriving TCP segments with different
source IP addresses or source port numbers will (with the exception of a TCP
segment carrying the original connection-establishment request) be directed to two
different sockets. To gain further insight, let’s reconsider the TCP client-server
programming
The TCP server application has a "welcoming socket" that waits for
connection-establishment requests from TCP clients (see Figure) on port number
12000.
The TCP client creates a socket and sends a connection establishment request segment
with the lines:
clientSocket = socket(AF_INET, SOCK_STREAM)
clientSocket.connect((serverName, 12000))
A connection-establishment request is nothing more than a TCP segment with
destination port number 12000 and a special connection-establishment bit set in the
TCP header.
The segment also includes a source port number that was chosen by the client.
When the host operating system of the computer running the server process receives
the incoming connection-request segment with destination port 12000, it locates the
server process that is waiting to accept a connection on port number 12000. The server
process then creates a new socket:
connectionSocket, addr = serverSocket.accept()
Also, the transport layer at the server notes the following four values in the
connection- request segment:
(1) the source port number in the segment, (2) the IP address of the source host, (3) the
destination port number in the segment, and (4) its own IP address. The newly
created connection socket is identified by these four values; all subsequently arriving
segments whose source port, source IP address, destination port, and destination IP
address match these four values will be demultiplexed to this socket. With the TCP
connection now in place, the client and server can now send data to each other. The
server host may support many simultaneous TCP connection sockets, with each
socket attached to a process, and with each socket identified by its own four-tuple.
When a TCP segment arrives at the host, all four fields (source IP address, source
port, destination IP address, destination port) are used to direct (demultiplex) the
segment to the appropriate socket.
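A small sketch showing that two connections to the same welcoming port land on distinct connection sockets, distinguished by their source ports (a local illustration; the OS picks the port here rather than the example's 12000):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))      # port 0: let the OS choose a free port
server.listen(2)
port = server.getsockname()[1]

c1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c1.connect(('127.0.0.1', port))
c2.connect(('127.0.0.1', port))

conn1, addr1 = server.accept()     # one connection socket per four-tuple
conn2, addr2 = server.accept()

# Same destination IP and port, but different source ports ->
# two distinct connection sockets.
print(addr1[1] != addr2[1])        # True
for s in (c1, c2, conn1, conn2, server):
    s.close()
```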
18. Explain classful IP addressing scheme.
OR
Explain IP Addressing.
An IP address is used to uniquely identify a node in the network.
IPv4 uses 32-bit addresses.
Divided into main two parts : Net ID and Host ID
Originally, all IP addresses were classful – they belonged to Class A, B, C, D or E. Class D is for
multicast and is rarely used. Class E is reserved and is not currently used.
Class   Net ID bits   Host ID bits
A       8             24
B       16            16
C       24            8
D       Used for multicasting
E       Reserved for future use
Addresses with and without Subnetting
Masking: Masking is a process that extracts the address of the physical network from an IP
address.
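Masking is just a bitwise AND of the address with the mask. A brief sketch (hypothetical helper `network_of`, with an address borrowed from the 211.17.180.x example above):

```python
import ipaddress

def network_of(ip: str, mask: str) -> str:
    """AND each address bit with the mask bit to extract the network address."""
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    return str(ipaddress.IPv4Address(ip_int & mask_int))

print(network_of('211.17.180.114', '255.255.255.248'))   # 211.17.180.112
```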
20. Explain Distance Vector Routing with the count-to-infinity problem.
In distance vector routing, each router maintains a routing table indexed by, and containing
one entry for, each router in the subnet. This entry contains two parts: the preferred
outgoing line to use for that destination and an estimate of the time or distance to that
destination. The metric used might be number of hops, time delay in milliseconds, total
number of packets queued along the path, or something similar.
The router is assumed to know the ''distance'' to each of its neighbors. If the metric is hops,
the distance is just one hop. If the metric is queue length, the router simply examines each
queue. If the metric is delay, the router can measure it directly with special ECHO packets
that the receiver just timestamps and sends back as fast as it can.
As an example, assume that delay is used as a metric and that the router knows the delay to
each of its neighbors. Once every T msec each router sends to each neighbor a list of its
estimated delays to each destination. It also receives a similar list from each neighbor.
Imagine that one of these tables has just come in from neighbor X, with Xi being X's estimate
of how long it takes to get to router i. If the router knows that the delay to X is m msec, it
also knows that it can reach router i via X in Xi + m msec. By performing this calculation for
each neighbor, a router can find out which estimate seems the best and use that estimate and
the corresponding line in its new routing table. Note that the old routing table is not used in
the calculation.
This updating process is illustrated in Fig. 13-1. Part (a) shows a subnet. The first four
columns of part (b) show the delay vectors received from the neighbors of router J. A claims
to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc. Suppose that J
has measured or estimated its delay to its neighbors, A, I, H, and K as 8, 10, 12, and 6 msec,
respectively.
Figure 13-1. (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.
Consider how J computes its new route to router G. It knows that it can get to A in 8 msec,
and A claims to be able to get to G in 18 msec, so J knows it can count on a delay of 26 msec
to G if it forwards packets bound for G to A. Similarly, it computes the delay to G via I, H,
and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec, respectively. The best of these values is
18, so it makes an entry in its routing table that the delay to G is 18 msec and that the route
to use is via H. The same calculation is performed for all the other destinations, with the new
routing table shown in the last column of the figure.
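J's computation for destination G can be reproduced directly from the numbers above; it is a single Bellman-Ford relaxation step:

```python
# J's measured delays to its neighbours, and each neighbour's
# advertised delay to destination G (values from the example).
delay_to_neighbour = {'A': 8, 'I': 10, 'H': 12, 'K': 6}
neighbour_claims_G = {'A': 18, 'I': 31, 'H': 6, 'K': 31}

# cost via X = delay(J, X) + X's own estimate to G
via = {x: delay_to_neighbour[x] + neighbour_claims_G[x]
       for x in delay_to_neighbour}
best = min(via, key=via.get)
print(via)               # {'A': 26, 'I': 41, 'H': 18, 'K': 37}
print(best, via[best])   # H 18 -- the entry J writes into its table
```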
The Count-to-Infinity Problem
Distance vector routing works in theory but has a serious drawback in practice: although it
converges to the correct answer, it may do so slowly. In particular, it reacts rapidly to good
news, but leisurely to bad news. Consider a router whose best route to destination X is large.
If on the next exchange neighbor A suddenly reports a short delay to X, the router just
switches over to using the line to A to send traffic to X. In one vector exchange, the good
news is processed.
To see how fast good news propagates, consider the five-node (linear) subnet of Fig. 13-2,
where the delay metric is the number of hops. Suppose A is down initially and all the other
routers know this. In other words, they have all recorded the delay to A as infinity.
Good news spreads at the rate of one hop per exchange. In a subnet whose longest path
is of length N hops, within N exchanges everyone will know about newly revived lines and
routers.
Now let us consider the situation of Fig. 13-3(b), in which all the lines and routers are
initially up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4, respectively.
Suddenly A goes down, or alternatively, the line between A and B is cut, which is effectively
the same thing from B's point of view.
At the first packet exchange, B does not hear anything from A. Fortunately, C says: Do not
worry; I have a path to A of length 2. Little does B know that C's path runs through B itself.
For all B knows, C might have ten lines all with separate paths to A of length 2. As a result, B
thinks it can reach A via C, with a path length of 3. D and E do not update their entries for A
on the first exchange.
On the second exchange, C notices that each of its neighbors claims to have a path to A of
length 3. It picks one of them at random and makes its new distance to A 4, as shown in
the third row of Fig. 13-3(b). Subsequent exchanges produce the history shown in the rest of
Fig. 13-3(b).
From this figure, it should be clear why bad news travels slowly: no router ever has a value
more than one higher than the minimum of all its neighbors. Gradually, all routers work
their way up to infinity, but the number of exchanges required depends on the numerical
value used for infinity. For this reason, it is wise to set infinity to the longest path plus 1. If
the metric is time delay, there is no well-defined upper bound, so a high value is needed to
prevent a path with a long delay from being treated as down. Not entirely surprisingly, this
problem is known as the count-to-infinity problem.
21. Explain Link State Routing
Distance vector routing was used in the ARPANET until 1979, when it was replaced by link
state routing. Two primary problems caused its demise. First, since the delay metric was
queue length, it did not take line bandwidth into account when choosing routes. Initially, all
the lines were 56 kbps, so line bandwidth was not an issue, but after some lines had been
upgraded to 230 kbps and others to 1.544 Mbps, not taking bandwidth into account was a
major problem. Of course, it would have been possible to change the delay metric to factor in
line bandwidth, but a second problem also existed, namely, the algorithm often took too
long to converge (the count-to-infinity problem). For these reasons, it was replaced by an
entirely new algorithm, now called link state routing. Variants of link state routing are now
widely used.
The idea behind link state routing is simple and can be stated as five steps. Each router must
do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.
Link-state routing protocols were developed to alleviate the convergence and loop issues of
distance-vector protocols. Link-state protocols maintain three separate tables:
Neighbor table – contains a list of all neighbors, and the interface each neighbor is
connected to. Neighbors are formed by sending Hello packets.
Topology table – otherwise known as the “link-state” table, contains a map of all
links within an area, including each link’s status.
Shortest-Path table – contains the best routes to each particular destination (otherwise
known as the "routing table").
22. Explain Shortest Path Routing
Let us begin our study of feasible routing algorithms with a technique that is widely used in
many forms because it is simple and easy to understand. The idea is to build a graph of the
subnet, with each node of the graph representing a router and each arc of the graph
representing a communication line (often called a link). To choose a route between a given
pair of routers, the algorithm just finds the shortest path between them on the graph.
The concept of a shortest path deserves some explanation. One way of measuring path
length is the number of hops. Using this metric, the paths ABC and ABE in Fig. 13-5 are
equally long. Another metric is the geographic distance in kilometers, in which case ABC is
clearly much longer than ABE (assuming the figure is drawn to scale).
However, many other metrics besides hops and physical distance are also possible. For
example, each arc could be labelled with the mean queueing and transmission delay for
some standard test packet as determined by hourly test runs. With this graph labeling, the
shortest path is the fastest path rather than the path with the fewest arcs or kilometers.
In the general case, the labels on the arcs could be computed as a function of the distance,
bandwidth, average traffic, communication cost, mean queue length, measured delay, and
other factors. By changing the weighting function, the algorithm would then compute the
''shortest'' path measured according to any one of a number of criteria or to a combination of
criteria.
Figure 13-5. The first five steps used in computing the shortest path from A to D. The
arrows indicate the working node.
Several algorithms for computing the shortest path between two nodes of a graph are
known. This one is due to Dijkstra (1959). Each node is labeled (in parentheses) with its
distance from the source node along the best known path. Initially, no paths are known, so
all nodes are labeled with infinity. As the algorithm proceeds and paths are found, the labels
may change, reflecting better paths. A label may be either tentative or permanent. Initially,
all labels are tentative. When it is discovered that a label represents the shortest possible
path from the source to that node, it is made permanent and never changed thereafter.
To illustrate how the labeling algorithm works, look at the weighted, undirected graph of
Fig. 13-5(a), where the weights represent, for example, distance. We want to find the shortest
path from A to D. We start out by marking node A as permanent, indicated by a filled-in
circle. Then we examine, in turn, each of the nodes adjacent to A (the working node),
relabeling each one with the distance to A. Whenever a node is relabeled, we also label it
with the node from which the probe was made so that we can reconstruct the final path
later. Having examined each of the nodes adjacent to A, we examine all the tentatively
labeled nodes in the whole graph and make the one with the smallest label permanent, as
shown in Fig. 13-5(b). This one becomes the new working node.
We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the
distance from B to the node being considered is less than the label on that node, we have a
shorter path, so the node is relabeled.
After all the nodes adjacent to the working node have been inspected and the tentative labels
changed if possible, the entire graph is searched for the tentatively-labeled node with the
smallest value. This node is made permanent and becomes the working node for the next
round. Figure 13-5 shows the first five steps of the algorithm.
To see why the algorithm works, look at Fig. 13-5(c). At that point we have just made E
permanent. Suppose that there were a shorter path than ABE, say AXYZE. There are two
possibilities: either node Z has already been made permanent, or it has not been. If it has,
then E has already been probed (on the round following the one when Z was made
permanent), so the AXYZE path has not escaped our attention and thus cannot be a shorter
path.
Now consider the case where Z is still tentatively labeled. Either the label at Z is greater than
or equal to that at E, in which case AXYZE cannot be a shorter path than ABE, or it is less
than that of E, in which case Z and not E will become permanent first, allowing E to be
probed from Z.
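The labeling algorithm as described (tentative labels that become permanent when selected, with back-pointers for path reconstruction) can be sketched as follows. The graph weights here are hypothetical, not the figure's exact values:

```python
import heapq

def dijkstra(graph, src, dst):
    """Dijkstra's labeling algorithm: tentative labels live in the heap;
    a node's label becomes permanent the moment it is popped."""
    dist = {src: 0}
    prev = {}
    permanent = set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in permanent:
            continue              # stale tentative label, already permanent
        permanent.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            if v not in permanent and d + w < dist.get(v, float('inf')):
                dist[v] = d + w   # better tentative label
                prev[v] = v and u # remember the probing node for the path
                heapq.heappush(heap, (dist[v], v))
    # reconstruct the path by walking the back-pointers from dst
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Hypothetical weighted, undirected graph:
g = {'A': {'B': 2, 'G': 6}, 'B': {'A': 2, 'E': 2, 'C': 7},
     'C': {'B': 7, 'D': 3}, 'D': {'C': 3, 'E': 4},
     'E': {'B': 2, 'D': 4, 'G': 1}, 'G': {'A': 6, 'E': 1}}
print(dijkstra(g, 'A', 'D'))   # (8, ['A', 'B', 'E', 'D'])
```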
23. Compare different routing protocols.

Protocol        Administrative Distance (AD)   Algorithm      Updates
RIPv1           120                            Bellman-Ford   Full Table
RIPv2           120                            Bellman-Ford   Full Table
IGRP            100                            Bellman-Ford   Full Table
EIGRP           90                             DUAL           Only Changes
OSPF            110                            Dijkstra       Only Changes
IS-IS           115                            Dijkstra       Only Changes
BGP (Exterior)  200 (internal BGP)             Best Path      Only Changes
24. List out different Sliding Window Protocols and explain any one in details.
OR
Explain the following protocols: Stop and Wait Protocol.
Sliding window protocol
By placing limits on the number of packets that can be transmitted or received at any given
time, a sliding window protocol allows an unlimited number of packets to be communicated
using fixed-size sequence numbers. The term "window" on the transmitter side represents
the logical boundary of the total number of packets yet to be acknowledged by the receiver.
The receiver informs the transmitter in each acknowledgment packet the current maximum
receiver buffer size (window boundary). The TCP header uses a 16-bit field to report the
receive window size to the sender. Therefore, the largest window that can be used is 2^16 =
65,536 bytes = 64 kilobytes.
The sliding window method ensures that traffic congestion on the network is avoided. The
application layer will still be offering data for transmission to TCP without worrying about
the network traffic congestion issues as the TCP on sender and receiver side implement
sliding windows of packet buffer. The window size may vary dynamically depending on
network traffic.
In any communication protocol based on automatic repeat request for error control, the
receiver must acknowledge received packets. If the transmitter does not receive an
acknowledgment within a reasonable time, it re-sends the data.
Data in the sender buffer are sent in chunks instead of the entire data in the buffer. Why?
Suppose the sender buffer has a capacity of 1 MB and the receiver buffer has a capacity of
512 KB; then 50% of the data would be lost at the receiver end, and this would unnecessarily
cause retransmission of packets from the sender end. Therefore, the sender will send data in
chunks smaller than 512 KB. This is decided with the help of the window size. The window
size reflects the capacity of the receiver. Flow control is a receiver-related problem: we do
not want the receiver to be overwhelmed, so to avoid that situation we control the flow
using a window size "N".
The value of n can be arbitrary. Sliding window refers to imaginary boxes at the
transmitter and receiver. This window provides the upper limit on the number of frames
that can be transmitted before an acknowledgment is required. The frames being
transmitted fall in the sending window; similarly, frames to be accepted are stored in the
receiving window.
Significance of Sender's and Receiver's Windows:
The sequence numbers within the sender's window give the number of frames sent but not
yet acknowledged. The frames in the sender's window are stored so that they can be
retransmitted in case of damage while travelling to the receiver.
The receiver window represents not the number of frames received but the number of
frames that may still be received before an ACK is sent. The sliding window of the receiver
shrinks from the left when frames of data are received and expands to the right when an
ACK is sent. The receiver window contains (n-1) spaces for frames.
Sliding Window
• Sliding window refers to imaginary boxes that hold the frames on both the sender and
receiver side.
• It provides the upper limit on the number of frames that can be transmitted before
requiring an acknowledgment.
• Frames may be acknowledged by receiver at any point even when window is not full on
receiver side.
• Frames may be transmitted by source even when window is not yet full on sender side.
• The windows have a specific size in which the frames are numbered modulo n, which
means they are numbered from 0 to n-1. For e.g. if n = 8, the frames are numbered 0,
1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, ....
• The size of the window is n-1. For e.g. in this case it is 7. Therefore, a maximum of n-1
frames may be sent before an acknowledgment.
• When the receiver sends an ACK, it includes the number of next frame it expects to
receive. For example in order to acknowledge the group of frames ending in frame 4, the
receiver sends an ACK containing the number 5. When sender sees an ACK with number 5,
it comes to know that all the frames up to number 4 have been received.
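The modulo-n numbering and the ACK convention can be sketched as (n = 8, as in the example):

```python
MODULO = 8                      # frames numbered 0..7, window size 7

def sequence(count):
    """Frame numbers assigned modulo 8: 0, 1, ..., 7, 0, 1, ..."""
    return [i % MODULO for i in range(count)]

print(sequence(10))   # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]

# An ACK carries the number of the NEXT expected frame:
received_through = 4
ack = (received_through + 1) % MODULO
print(ack)            # ACK 5 tells the sender frames up to 4 arrived
```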
In this case n=1, and the stop-and-wait technique is used. The sender waits for an ACK after
each frame transmission. The operation of this protocol is based on the ARQ (automatic
repeat request) principle: the next frame is transmitted when a positive ACK is received, and
when a negative ACK is received, the same frame is retransmitted.
Stop and wait ARQ becomes inefficient when the propagation delay is much greater than the
time tool retransmit for example let us assume that frame of 800 bits is transmitted over
channel with speed 1mbps and let time for transmission if from end ACK is 30 ms. The
number of bits that can be transmitted over this channel is 30,000 bits. But in stop and wait
ARQ only 800 bits can be transmitted as it waits for ACK. The product of bit rate and delay
is called delay bandwidth product. It helps in measuring last opportunity in transmitted bits.
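The arithmetic in this example can be checked directly (values taken from the text):

```python
# Stop-and-wait numbers from the text: 800-bit frames over a 1 Mbps
# channel, with 30 ms from the start of transmission to the ACK arriving.

bit_rate = 1_000_000          # bits per second
delay = 0.030                 # seconds (frame out + ACK back)
frame_size = 800              # bits

delay_bw_product = bit_rate * delay          # bits the channel could carry
utilization = frame_size / delay_bw_product  # fraction actually used

print(delay_bw_product)                  # 30000.0
print(round(utilization * 100, 2), "%")  # about 2.67 %
```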
o If the acknowledgement does not arrive in time, the sender assumes that either the frame
or its acknowledgement was lost in transit. The sender retransmits the frame and restarts the
timeout counter.
o If a negative acknowledgement is received, the sender retransmits the frame.
25. Explain Go Back N Protocol.
The sender in this case does not wait for the ACK before transmitting the next frame. The
sender transmits frames continuously so that the channel is kept busy, rather than wasting
time waiting for an ACK (in stop-and-wait, the system transmits nothing while it waits, so
the channel remains idle for a considerable time). In this case the system depends on the
NACK (negative acknowledgement), which signals an error in a particular frame. Because the
NACK takes some time to reach the sender, the sender continues to transmit in the meantime.
Example: suppose frames are being transmitted and an error occurs in frame 3, so the receiver
transmits a NACK. By the time the NACK reaches the transmitter, frames up to 7 have already
been transmitted. On reception of the NACK, the transmitter retransmits all frames from 3
onwards, and the receiver discards all the frames it received after 3.
An error situation arises only when a transmitted frame or an acknowledgement is lost. In the
case of damaged or lost frames, the receiver transmits a NACK to the transmitter, and the
transmitter retransmits all the frames sent since the last acknowledged frame. The
disadvantage of the Go-Back-N ARQ protocol is that its efficiency decreases on a noisy
channel, because it retransmits many frames that may already have been received correctly.
Stop-and-wait ARQ does not utilize the resources at their best: until the acknowledgement is
received, the sender sits idle and does nothing. In the Go-Back-N ARQ method, both sender and
receiver maintain a window.
The sending-window size enables the sender to send multiple frames without receiving the
acknowledgement of the previous ones. The receiving-window enables the receiver to
receive multiple frames and acknowledge them. The receiver keeps track of incoming
frame’s sequence number.
When the sender has sent all the frames in the window, it checks up to what sequence number it
has received positive acknowledgements. If all frames are positively acknowledged, the
sender sends the next set of frames. If the sender finds that it has received a NACK, or has not
received any ACK for a particular frame, it retransmits all the frames for which it has not
received a positive ACK.
Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in
which the sending process continues to send a number of frames specified by a window
size even without receiving an acknowledgement (ACK) packet from the receiver. It is a
special case of the general sliding window protocol with the transmit window size of N and
receive window size of 1. It can transmit N frames to the peer before requiring an ACK.
The receiver process keeps track of the sequence number of the next frame it expects to
receive, and sends that number with every ACK it sends. The receiver will discard any
frame that does not have the exact sequence number it expects (either a duplicate frame it
already acknowledged, or an out-of-order frame it expects to receive later) and will resend
an ACK for the last correct in-order frame. Once the sender has sent all of the frames in
its window, it will detect that all of the frames since the first lost frame are outstanding,
will go back to the sequence number of the last ACK it received from the receiver process,
and will fill its window starting with that frame, continuing the process over again.
Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since
unlike waiting for an acknowledgement for each packet, the connection is still being utilized
as packets are being sent. In other words, during the time that would otherwise be spent
waiting, more packets are being sent. However, this method also results in sending frames
multiple times – if any frame was lost or damaged, or the ACK acknowledging them was
lost or damaged, then that frame and all following frames in the window (even if they were
received without error) will be re-sent. To avoid this, Selective Repeat ARQ can be used.
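The go-back behaviour can be sketched as a small simulation (a simplified batch model with made-up helper names, not a real timer-driven implementation):

```python
def go_back_n(n_frames, window, lose_once):
    """Simulate which sequence numbers hit the wire when the frame
    numbered `lose_once` is lost on its first transmission."""
    wire = []                # every (re)transmission, in order
    base = 0                 # first frame not yet cumulatively ACKed
    lost_pending = True
    while base < n_frames:
        end = min(base + window, n_frames)
        wire.extend(range(base, end))        # send (or resend) the window
        delivered = end                      # receiver accepts in order...
        if lost_pending and base <= lose_once < end:
            delivered = lose_once            # ...but discards everything
            lost_pending = False             #    after the lost frame
        base = delivered                     # cumulative ACK = next expected
    return wire

# Frame 3 is lost, so frames 3-6 go out twice even though 4-6 arrived fine.
print(go_back_n(8, 4, lose_once=3))   # [0, 1, 2, 3, 3, 4, 5, 6, 7]
```

The duplicate transmissions of frames 4-6 in the output are exactly the inefficiency the text describes.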
26. Explain Selective Repeat ARQ.
In Go-back-N ARQ, it is assumed that the receiver does not have any buffer space for its
window size and has to process each frame as it comes. This enforces the sender to
retransmit all the frames which are not acknowledged.
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the
frames in memory and sends a NACK only for the frame which is missing or damaged.
The sender, in this case, sends only the packet for which the NACK is received.
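A similarly simplified sketch of Selective Repeat, showing that only the lost frame is retransmitted while the receiver buffers the rest (helper names are made up):

```python
def selective_repeat(n_frames, lose_once):
    """Return (transmissions, delivered order) when the frame numbered
    `lose_once` is lost once: only that single frame goes out again."""
    wire = list(range(n_frames))                    # first pass: each frame once
    buffered = [f for f in wire if f != lose_once]  # receiver buffers the rest
    wire.append(lose_once)                          # NACK -> one retransmission
    delivered = sorted(buffered + [lose_once])      # reassembled in order
    return wire, delivered

wire, delivered = selective_repeat(8, lose_once=3)
print(wire)        # [0, 1, 2, 3, 4, 5, 6, 7, 3] -- only frame 3 is resent
print(delivered)   # [0, 1, 2, 3, 4, 5, 6, 7]
```

Compared with the Go-Back-N trace, only one extra transmission is needed, at the cost of buffer space at the receiver.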
27. Draw and explain TCP packet header.
1. Source port number (16 bits) – The address of the application program that created the
message on the sending host.
2. Destination port number (16 bits) – The address of the application program that will
receive the message.
3. Sequence number (32 bits) – The number of the first byte of data in this segment.
4. Acknowledgement number (32 bits) – Acknowledges the receipt of data; contains the next
sequence number that the sender of the acknowledgement expects to receive (the received
sequence number plus the number of bytes received). This number is used only if the ACK
flag is on.
5. Header length (4 bits) – It shows the length of the header.
6. Reserved (6 bits) –Reserved for future use.
7. Control bits
a) URG (1 bit) - The urgent pointer is valid.
b) ACK (1 bit) - Makes the acknowledgement number valid.
c) PSH (1 bit) - High priority data for the application.
d) RST (1 bit) - Reset the connection.
e) SYN (1 bit) - Turned on when a connection is being established and the
sequence number field will contain the initial sequence number chosen by this
host for this connection.
f) FIN (1 bit) - The sender is done sending data.
8. Window size (16 bits) - The maximum number of bytes that the receiver is willing to
accept (the maximum size of the sliding window).
9. TCP checksum (16 bits) – Used to detect errors.
10. Urgent pointer (16 bits) - It is only valid if the URG bit is set. The urgent mode is a
way to transmit emergency data to the other side of the connection.
11. Options and Padding (variable length) - Convey additional information; padding is used
for alignment purposes.
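The fixed 20-byte header above can be unpacked with Python's struct module (a sketch; the sample segment is hand-built, not a real capture):

```python
import struct

def parse_tcp_header(data):
    """Unpack the fixed 20-byte TCP header described above."""
    (src, dst, seq, ack,
     off_res, flags, window, checksum, urg) = struct.unpack("!HHIIBBHHH", data[:20])
    return {
        "source_port": src,
        "dest_port": dst,
        "sequence": seq,
        "ack_number": ack,
        "header_len": (off_res >> 4) * 4,   # data offset is in 32-bit words
        "flags": {name: bool(flags & bit) for name, bit in
                  [("FIN", 0x01), ("SYN", 0x02), ("RST", 0x04),
                   ("PSH", 0x08), ("ACK", 0x10), ("URG", 0x20)]},
        "window": window,
    }

# A hand-built SYN segment: port 12345 -> 80, seq 1000, offset 5 words.
raw = struct.pack("!HHIIBBHHH", 12345, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["dest_port"], hdr["header_len"], hdr["flags"]["SYN"])   # 80 20 True
```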
28. Explain Congestion Control.
Congestion:
An important issue in a packet-switched network is congestion. Congestion in a network
may occur if the load on the network (the number of packets sent to the network) is greater
than the capacity of the network (the number of packets a network can handle). Congestion
control refers to the mechanisms and techniques used to control congestion and keep the load
below the capacity.
We may ask why there is congestion in a network. Congestion happens in any system that
involves waiting. For example, congestion happens on a freeway because any abnormality in
the flow, such as an accident during rush hour, creates blockage.
Congestion in a network or internetwork occurs because routers and switches have queues
(buffers) that hold the packets before and after processing. A router, for example, has an input
queue and an output queue for each interface.
Congestion Control:
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion before it
happens. In these mechanisms, congestion control is handled by either the source or the
destination. We give a brief list of policies that can prevent congestion.
Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat window
is better than the Go-Back-N window for congestion control. In the Go-Back-N window,
when the timer for a packet times out, several packets may be resent, although some may
have arrived safe and sound at the receiver. This duplication may make the congestion
worse. The Selective Repeat window, on the other hand, tries to send the specific packets
that have been lost or corrupted.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the
receiver does not acknowledge every packet it receives, it may slow down the sender and
help prevent congestion. Several approaches are used in this case. A receiver may send an
acknowledgment only if it has a packet to be sent or a special timer expires. A receiver may
decide to acknowledge only N packets at a time. We need to know that the
acknowledgments are also part of the load in a network. Sending fewer acknowledgments
means imposing less load on the network.
Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time may
not harm the integrity of the transmission. For example, in audio transmission, if the policy
is to discard less sensitive packets when congestion is likely to happen, the quality of sound
is still preserved and congestion is prevented or alleviated.
Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion
in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow
before admitting it to the network. A router can deny establishing a virtual-circuit connection
if there is congestion in the network or if there is a possibility of future congestion.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
Several mechanisms have been used by different protocols. We describe a few of them here.
Backpressure
The technique of backpressure refers to a congestion control mechanism in which a congested
node stops receiving data from the immediate upstream node or nodes. This may cause the
upstream node or nodes to become congested, and they, in turn, reject data from their
upstream nodes or nodes. And so on. Backpressure is a node-to-node congestion control that
starts with a node and propagates, in the opposite direction of data flow, to the source. The
backpressure technique can be applied only to virtual-circuit networks, in which each node
knows the upstream node from which a flow of data is coming. The figure shows the idea of
backpressure.
Backpressure method for alleviating congestion
Node III in the figure has more input data than it can handle. It drops some packets in its
input buffer and informs node II to slow down. Node II, in turn, may be congested because
it is slowing down the output flow of data. If node II is congested, it informs node I to slow
down, which in turn may create congestion. If so, node I informs the source of data to slow
down. This, in time, alleviates the congestion. Note that the pressure on node III is moved
backward to the source to remove the congestion.
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note the
difference between the backpressure and choke packet methods. In backpressure, the
warning is from one node to its upstream node, although the warning may eventually reach
the source station. In the choke packet method, the warning is from the router, which has
encountered congestion, to the source station directly. The intermediate nodes through
which the packet has traveled are not warned. We have seen an example of this type of
control in ICMP. When a router in the Internet is overwhelmed with IP datagrams, it may
discard some of them, but it informs the source host using a source-quench ICMP message.
The warning message goes directly to the source station; the intermediate routers do not
take any action. The figure shows the idea of a choke packet.
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and
the source. The source guesses that there is a congestion somewhere in the network from
other symptoms. For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested. The delay in
receiving an acknowledgment is interpreted as congestion in the network; the source should
slow down.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or
destination. The explicit signaling method, however, is different from the choke packet
method. In the choke packet method, a separate packet is used for this purpose; in the
explicit signaling method, the signal is included in the packets that carry data. Explicit
signaling, as we will see in Frame Relay congestion control, can occur in either the forward
or the backward direction.
Backward Signaling
A bit can be set in a packet moving in the direction opposite to the congestion. This bit can
warn the source that there is congestion and that it needs to slow down to avoid the
discarding of packets.
Forward Signaling
A bit can be set in a packet moving in the direction of the congestion. This bit can warn the
destination that there is congestion. The receiver in this case can use policies, such as
slowing down the acknowledgments, to alleviate the congestion.
29. Give differences between TCP and UDP.
Ordering of data packets:
- TCP rearranges data packets in the order specified.
- UDP has no inherent order; all packets are independent of each other. If ordering is
required, it has to be managed by the application layer.

Speed of transfer:
- TCP is slower than UDP.
- UDP is faster because there is no error recovery for packets.

Reliability:
- TCP gives an absolute guarantee that the data transferred remains intact and arrives in
the same order in which it was sent.
- UDP gives no guarantee that the messages or packets sent will arrive at all.

Header size:
- TCP header size is 20 bytes.
- UDP header size is 8 bytes.

Common header fields:
- Both: Source port, Destination port, Checksum.

Streaming of data:
- TCP: data is read as a byte stream; no distinguishing indications are transmitted to
signal message (segment) boundaries.
- UDP: packets are sent individually and are checked for integrity only if they arrive.
Packets have definite boundaries which are honored upon receipt, meaning a read operation
at the receiver socket will yield an entire message as it was originally sent.

Weight:
- TCP is heavyweight. TCP requires three packets to set up a socket connection before any
user data can be sent, and it handles reliability and congestion control.
- UDP is lightweight. There is no ordering of messages and no tracking of connections. It
is a small transport layer designed on top of IP.

Data flow control:
- TCP does flow control.
- UDP does not have an option for flow control.

Error checking:
- TCP does error checking and error recovery.
- UDP does error checking, but has no recovery options.

Fields:
- TCP: 1. Sequence number, 2. ACK number, 3. Data offset, 4. Reserved, 5. Control bits,
6. Window, 7. Urgent pointer, 8. Options, 9. Padding, 10. Checksum, 11. Source port,
12. Destination port.
- UDP: 1. Length, 2. Source port, 3. Destination port, 4. Checksum.

Acknowledgement:
- TCP sends acknowledgement segments.
- UDP sends no acknowledgement.

Handshake:
- TCP: SYN, SYN-ACK, ACK.
- UDP: no handshake (connectionless protocol).

Checksum:
- Both use a checksum to detect errors.
30. Explain UDP with its header format, Features and Applications. (Any Two can be ask)
1. Source port number (16 bits) - An optional field; the address of the application
program that created the message.
2. Destination port number (16 bits) – The address of the application program that will
receive the message.
3. UDP length (16 bits) – The total length of the user datagram (header plus data).
4. UDP checksum (16 bits) - Used in error detection.
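The checksum used by UDP (and TCP) is the 16-bit one's-complement sum in the style of RFC 1071; a minimal sketch:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

csum = internet_checksum(b"\x45\x00\x00\x1c")
# Verification trick: re-summing the data together with its checksum yields 0.
check = internet_checksum(b"\x45\x00\x00\x1c" + csum.to_bytes(2, "big"))
print(check)   # 0
```

Note that a real UDP checksum also covers a pseudo-header (source and destination IP addresses, protocol, length), which this sketch omits.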
Use of UDP
UDP is used when acknowledgement of data does not hold any significance.
UDP is a good protocol for data flowing in one direction.
UDP is simple and suitable for query-based communications.
UDP is not connection oriented.
UDP does not provide congestion control mechanism.
UDP does not guarantee ordered delivery of data.
UDP is stateless.
UDP is suitable protocol for streaming applications such as VoIP, multimedia
streaming.
UDP applications: DNS, DHCP, TFTP, SNMP, RIP, VoIP, and multimedia streaming.
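The connectionless, no-handshake behaviour can be seen with two UDP sockets on the loopback interface (a sketch; the ports are chosen by the OS and the payloads are made up):

```python
import socket

# A minimal UDP exchange over loopback: no connection, no handshake,
# each datagram stands alone.

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # 0 = let the OS choose a free port
server.settimeout(5)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"query", addr)            # one datagram, no setup needed

data, peer = server.recvfrom(1024)       # whole message arrives in one read
server.sendto(b"reply:" + data, peer)

resp, _ = client.recvfrom(1024)
print(resp)                              # b'reply:query'

client.close()
server.close()
```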
Congestion Window
Earlier we said that the sender window size is determined by the available buffer space in the
receiver (rwnd).
The sender thus has two pieces of information: the receiver-advertised window size (rwnd) and
the congestion window size (cwnd). The actual size of the window is the minimum of these two.
Congestion Policy
In the slow-start phase, the sender starts with a very slow rate of transmission, but increases
the rate rapidly to reach a threshold. When the threshold is reached, the data rate is reduced
to avoid congestion. Finally if congestion is detected, the sender goes back to the slow-start
or congestion avoidance phase based on how the congestion is detected.
Exponential Increase One of the algorithms used in TCP congestion control is called slow
start. This algorithm is based on the idea that the size of the congestion window (cwnd) starts
with one maximum segment size (MSS). The MSS is determined during connection
establishment by using an option of the same name. The size of the window increases one
MSS each time an acknowledgment is received. As the name implies, the window starts
slowly, but grows exponentially. To show the idea, let us look at Figure
We have used segment numbers instead of byte numbers (as though each segment contains
only 1 byte). We have assumed that rwnd is much higher than cwnd, so that the sender
window size always equals cwnd. We have assumed that each segment is acknowledged
individually.
Slow start, exponential increase
The sender starts with cwnd =1 MSS. This means that the sender can send only one segment.
After receipt of the acknowledgment for segment 1, the size of the congestion window is
increased by 1, which means that cwnd is now 2. Now two more segments can be sent. When
each acknowledgment is received, the size of the window is increased by 1 MSS. When all
seven segments are acknowledged, cwnd = 8.
In the slow-start algorithm, the size of the congestion window increases exponentially
until it reaches a threshold.
If we start with the slow-start algorithm, the size of the congestion window increases
exponentially. To avoid congestion before it happens, one must slow down this exponential
growth. TCP defines another algorithm called congestion avoidance, which undergoes an
additive increase instead of an exponential one. When the size of the congestion window
reaches the slow-start threshold, the slow-start phase stops and the additive phase begins. In
this algorithm, each time the whole window of segments is acknowledged (one round), the
size of the congestion window is increased by 1. To show the idea, we apply this algorithm
to the same scenario as slow start, although we will see that the congestion avoidance
algorithm usually starts when the size of the window is much greater than 1. Figure shows
the idea.
Congestion avoidance, additive increase
In the congestion avoidance algorithm, the size of the congestion window increases
additively until congestion is detected.
If congestion occurs, the congestion window size must be decreased. The only way the
sender can guess that congestion has occurred is by the need to retransmit a segment.
However, retransmission can occur in one of two cases: when a timer times out or when
three duplicate ACKs are received. In both cases, the size of the threshold is dropped to
one-half, a multiplicative decrease. Most TCP implementations have two reactions:
1. If detection is by time-out, there is a stronger possibility of congestion. In this case
TCP reacts strongly: a. It sets the value of the threshold to one-half of the current window
size. b. It sets cwnd to the size of one segment. c. It starts a new slow-start phase.
2. If three duplicate ACKs are received, there is a weaker possibility of congestion; a segment may
have been dropped, but some segments after that may have arrived safely since three ACKs
are received. This is called fast retransmission and fast recovery. In this case, TCP has a weaker
reaction: a. It sets the value of the threshold to one-half of the current window size. b. It sets
cwnd to the value of the threshold (some implementations add three segment sizes to the
threshold). c. It starts the congestion avoidance phase.
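The three phases (slow start, congestion avoidance, multiplicative decrease) can be traced with a small sketch (an assumed initial threshold of 8 MSS; real implementations differ in details):

```python
# Illustrative cwnd trace, in units of MSS. One list entry per
# transmission round; parameters are assumptions for the sketch.

def tcp_cwnd_trace(rounds, ssthresh=8, loss_round=None, loss_kind="timeout"):
    cwnd, trace = 1, []
    for r in range(rounds):
        trace.append(cwnd)
        if r == loss_round:
            ssthresh = max(cwnd // 2, 1)              # multiplicative decrease
            cwnd = 1 if loss_kind == "timeout" else ssthresh
            continue
        # exponential growth below the threshold, additive above it
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(tcp_cwnd_trace(6))                # [1, 2, 4, 8, 9, 10]  no loss
print(tcp_cwnd_trace(6, loss_round=3))  # [1, 2, 4, 8, 1, 2]   timeout at round 3
```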
Instead of just one MTA at the sender site and one at the receiving site, other MTAs,
acting either as client or server, can relay the mail.
Specific format of E-mail Address:
Local Part: The local part defines the name of a special file, called the user mailbox, where all
the mail received for a user is stored for retrieval by the message access agent.
Domain Name: The second part of the address is the domain name. An organization usually
selects one or more hosts to receive and send e-mail; the hosts are sometimes called mail
servers or exchangers. The domain name assigned to each mail exchanger either comes from
the DNS database or is a logical name (for example, the name of the organization).
Mailing List
Electronic mail allows one name, an alias, to represent several different e-mail addresses;
this is called a mailing list. Every time a message is to be sent, the system checks the
recipient's name against the alias database; if there is a mailing list for the defined alias,
separate messages, one for each entry in the list, must be prepared and handed to the MTA.
If there is no mailing list for the alias, the name itself is the receiving address and a single
message is delivered to the mail transfer entity.
SMTP expects the destination host, the mail server receiving the mail, to be on-line
all the time; otherwise, a TCP connection cannot be established.
For this reason, it is not practical to establish an SMTP session with a desktop
computer because desktop computers are usually powered down at the end of the
day.
In many organizations, mail is received by an SMTP server that is always on-line.
This SMTP server provides a mail-drop service.
The server receives the mail on behalf of every host in the organization.
Workstations interact with the SMTP host to retrieve messages by using a client-
server protocol such as Post office protocol version 3 (POP3).
Although POP3 is used to download messages from the server, the SMTP client is
still needed on the desktop to forward messages from the workstation user to its
SMTP mail server.
Post Office Protocol, version 3 (POP3) is simple and limited in functionality. The
client POP3 software is installed on the recipient computer; the server POP3
software is installed on the mail server.
Figure shows an example of downloading using POP3
POP3 has two modes: the delete mode and the keep mode. In the delete mode, the
mail is deleted from the mailbox after each retrieval. In the keep mode, the mail
remains in the mailbox after retrieval. The delete mode is normally used when the
user is working at her permanent computer and can save and organize the received
mail after reading or replying. The keep mode is normally used when the user
accesses her mail away from her primary computer (e.g., a laptop). The mail is read
but kept in the system for later retrieval and organizing.
33. Explain IMAP4 and MIME.
IMAP4
Another mail access protocol is Internet Mail Access Protocol, version 4 (IMAP4).
IMAP4 is similar to POP3, but it has more features; IMAP4 is more powerful and
more complex.
POP3 is deficient in several ways. It does not allow the user to organize her mail on
the server; the user cannot have different folders on the server. (Of course, the user
can create folders on her own computer.) In addition, POP3 does not allow the user to
partially check the contents of the mail before downloading.
IMAP4 provides the following extra functions:
o A user can check the e-mail header prior to downloading.
o A user can search the contents of the e-mail for a specific string of characters
prior to downloading.
o A user can partially download e-mail. This is especially useful if bandwidth is
limited and the e-mail contains multimedia with high bandwidth
requirements.
o A user can create, delete, or rename mailboxes on the mail server.
o A user can create a hierarchy of mailboxes in a folder for e-mail storage.
MIME
Electronic mail has a simple structure. Its simplicity, however, comes at a price. It can send
messages only in NVT 7-bit ASCII format. In other words, it has some limitations.
For example, it cannot be used for languages that are not supported by 7-bit ASCII
characters (such as French, German, Hebrew, Russian, Chinese, and Japanese). Also, it
cannot be used to send binary files or video or audio data.
Multipurpose Internet Mail Extensions (MIME) is a supplementary protocol that allows non-
ASCII data to be sent through e-mail. MIME transforms non-ASCII data at the sender site to
NVT ASCII data and delivers them to the client MTA to be sent through the Internet. The
message at the receiving side is transformed back to the original data
We can think of MIME as a set of software functions that transforms non-ASCII data (stream
of bits) to ASCII data and vice versa, as shown in Figure
MIME defines five headers that can be added to the original e-mail header section
to define the transformation parameters:
1. MIME-Version
2. Content-Type
3. Content-Transfer-Encoding
4. Content-Id
5. Content-Description
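Python's standard email package applies these MIME headers automatically; a small sketch with made-up addresses:

```python
from email.mime.text import MIMEText

# A non-ASCII body forces a transfer encoding, as described above.
msg = MIMEText("Grüße aus dem Netzwerk-Kurs", _charset="utf-8")
msg["From"] = "student@example.edu"
msg["To"] = "professor@example.edu"
msg["Subject"] = "MIME demo"

text = msg.as_string()
# The MIME transformation headers appear automatically:
print("MIME-Version: 1.0" in text)          # True
print("Content-Type: text/plain" in text)   # True
print("Content-Transfer-Encoding" in text)  # True
```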
34. What is HTTP? Explain with respect to persistent and non-persistent connections.
1. The HTTP client process initiates a TCP connection to the server www.someSchool.edu on
port number 80, which is the default port number for HTTP. Associated with the TCP
connection, there will be a socket at the client and a socket at the server.
2. The HTTP client sends an HTTP request message to the server via its socket. The request
message includes the path name /someDepartment/home.index.
3. The HTTP server process receives the request message via its socket, retrieves the object
/someDepartment/home.index from its storage (RAM or disk), encapsulates the object in an
HTTP response message, and sends the response message to the client via its socket.
4. The HTTP server process tells TCP to close the TCP connection. (But TCP doesn’t actually
terminate the connection until it knows for sure that the client has received the response
message intact.)
5. The HTTP client receives the response message. The TCP connection terminates. The
message indicates that the encapsulated object is an HTML file. The client extracts the file
from the response message, examines the HTML file, and finds references to the 10 JPEG
objects.
6. The first four steps are then repeated for each of the referenced JPEG objects.
Persistent HTTP
With persistent connections, the server leaves the TCP connection open after sending
a response.
Subsequent requests and responses between the same client and server can be sent
over the same connection.
In particular, an entire Web page (in the example above, the base HTML file and the
10 images) can be sent over a single persistent TCP connection.
Moreover, multiple Web pages residing on the same server can be sent from the
server to the same client over a single persistent TCP connection.
These requests for objects can be made back-to-back, without waiting for replies to
pending requests (pipelining).
Typically, the HTTP server closes a connection when it isn’t used for a certain time (a
configurable timeout interval).
When the server receives the back-to-back requests, it sends the objects back-to-back.
The default mode of HTTP uses persistent connections with pipelining.
WWW
In the early 1990s, a major new application arrived on the scene—the World Wide
Web [Berners-Lee 1994].
The Web was the first Internet application that caught the general public’s eye. It
dramatically changed, and continues to change, how people interact inside and
outside their work environments. It elevated the Internet from just one of many data
networks to essentially the one and only data network.
web page consists of objects
object can be HTML file, JPEG image, Java applet, audio file,…
web page consists of base HTML-file which includes several referenced objects
each object is addressable by a URL, e.g.,
http://www.someSchool.edu/someDepartment/home.index
HTTP
The Hypertext Transfer Protocol (HTTP) is a protocol used mainly to access data on the
World Wide Web. HTTP functions as a combination of FTP and SMTP. It is similar to FTP
because it transfers files and uses the services of TCP. However, it is much simpler
than FTP because it uses only one TCP connection. There is no separate control connection;
only data are transferred between the client and the server. HTTP is like SMTP because the
data transferred between the client and the server look like SMTP messages. In addition, the
format of the messages is controlled by MIME-like headers. Unlike SMTP, the HTTP
messages are not destined to be read by humans; they are read and interpreted by the HTTP
server and HTTP client (browser). SMTP messages are stored and forwarded, but HTTP
messages are delivered immediately. The commands from the client to the server are
embedded in a request message. The contents of the requested file or other information are
embedded in a response message. HTTP uses the services of TCP on well-known port 80.
HTTP Transaction
Figure illustrates the HTTP transaction between the client and server. Although
HTTP uses the services of TCP, HTTP itself is a stateless protocol. The client initializes
the transaction by sending a request message. The server replies by sending a response.
Messages
The formats of the request and response messages are similar; both are shown in Figure A
request message consists of a request line, a header, and sometimes a body. A response
message consists of a status line, a header, and sometimes a body.
Request and Status Lines: The first line in a request message is called the request line;
the first line in the response message is called the status line.
Request type: This field is used in the request message. In version 1.1 of HTTP, several
request types are defined. The request type is categorized into methods, as defined in the table.
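The request-line and status-line formats can be illustrated with hand-crafted messages (the host and path are the textbook example; the headers shown are a minimal assumption):

```python
# A hand-crafted HTTP/1.1 request: request line, headers, blank line.
request = (
    "GET /someDepartment/home.index HTTP/1.1\r\n"
    "Host: www.someSchool.edu\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)
# The request line has three parts: method, path, version.
method, path, version = request.split("\r\n")[0].split(" ")
print(method, path)        # GET /someDepartment/home.index

# The status line of a response also has three parts.
status_line = "HTTP/1.1 200 OK"
http_version, code, phrase = status_line.split(" ", 2)
print(code, phrase)        # 200 OK
```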
36. Consider the following HTTP message and answer the following questions:
ii. Does browser request a non-persistent or a persistent connection?
Persistent
iii. Which is the (complete) URL of the document requested by the user?
www.gtu.ac.in/home.asp
iv. Which HTTP method is used to retrieve the requested URL?
GET
37. Explain the high-level view of Internet e-mail system and its major components.
Message Handling System (MHS):
MHS is the OSI protocol that underlines electronic mail and store-and-forward handling.
It is derived from the ITU-T X.400 series.
MHS is the system used to send any message (including copies of data and files) that can
be delivered in a store-and-forward manner.
o Store-and-forward delivery means that, instead of opening an active channel
between the sender and receiver, the protocol provides a delivery service that
forwards the message when a link becomes available.
o In most information-sharing protocols, both sender and the receiver must be
able to participate in the exchange concurrently.
o In a store-and-forward system, the sender passes the message to delivery
system.
o The delivery system may not be able to transmit the message immediately, in
which case it stores the message until conditions change.
o When the message is delivered, it is stored in the recipient’s mailbox until
called for. For example, the regular postal system.
The structure of the MHS:
o The structure of the OSI message handling system is shown in figure.
o Each user communicates with a program or process called a user agent (UA).
o The UA is unique for each user (each user receives a copy of the program or
process).
o An example of a UA is an electronic mail program associated with a specific
operating system that allows a user to type and edit messages.
o Each user has message storage (MS), which consists of disk space in a mail
storage system and is usually referred to as a mailbox.
o Message storage can be used for storing, sending, or receiving messages.
o The message storage communicates with a series of processes called message
transfer agents (MTAs).
o MTAs are like the different departments of a post office.
o The combined MTAs make up the message transfer system (MTS).
[Figure: Structure of the MHS]
Message Format
o The MHS standard defines the format of a message.
o The body of the message corresponds to the material (like a letter) that goes inside
the envelope of a conventional mailing.
o Every message can include the address (name) of the sender, the address (name)
of the recipient, the subject of the message, and a list of anyone other than the
primary recipient who is to receive a copy.
The domain name space is divided into three sections:
1) Generic domains
2) Country domains
3) Inverse domains.
1) Generic Domains:
The generic domains define registered hosts according to their behavior.
Label Description
com Commercial organizations
edu Educational institutions
gov Government institutions
int International organizations
mil Military groups
net Network support centers
org Nonprofit organizations
2) Country Domains:
The country domain section follows the same format as the generic domains but uses
two-character country abbreviations (e.g., "in" for India).
3) Inverse Domains:
The inverse domain is used to map an address to a name.
Hierarchy of DNS servers
DNS Message
Fully Qualified Domain Name (FQDN): An FQDN is a domain name that contains the full
name of a host. It contains all labels, from the most specific to the most general,
that uniquely define the name of the host. For example, the domain name
challenger.atc.fhda.edu.
is the FQDN of a computer named challenger installed at the Advanced Technology
Center (ATC) at De Anza College. A DNS server can only match an FQDN to an
address. Note that the name must end with a null label, but because null means
nothing, the label ends with a dot (.).
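The label structure of an FQDN can be shown with a short sketch, using a sample FQDN in the style of the example above. The trailing dot stands for the null (root) label.

```python
# Sketch: splitting an FQDN into its labels, from the most specific
# (the host name) to the most general (the top-level domain).
fqdn = "challenger.atc.fhda.edu."     # trailing dot = the null root label
labels = fqdn.rstrip(".").split(".")  # drop the root label, split on dots
print(labels)  # ['challenger', 'atc', 'fhda', 'edu']
```

Reading the list left to right walks from the specific host down to the most general label, exactly the ordering the definition above describes.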
Types of networks:
1) LAN
2) MAN
3) WAN
4) Internet
1. LAN:
A Local Area Network (LAN) is a collection of networking equipment located
geographically close together, e.g., a single room or a campus.
Data is transferred at high speed, ranging from 100 Mbps to gigabit rates, and a LAN
has a low implementation cost.
Upper limit: 1 km
Twisted pair cable or co-axial cable connects the plug-in cards to form a network.
Designed to share resources, such as hardware or data, between PCs and workstations.
DEMERITS:
• Special security measures are needed to stop users from using programs and data that
they should not have access to.
• Networks are difficult to set up and need to be maintained by skilled technicians.
• If the file server develops a serious fault, all the users are affected, rather than just
one user in the case of a stand-alone machine.
2. MAN:
It is in between LAN and WAN technology and covers an entire city.
It uses technology similar to a LAN.
It can be a single network, such as a cable TV network, or a means of connecting a
number of LANs into a larger network so that resources can be shared LAN to LAN as
well as device to device.
It uses a DUAL BUS for data transmission.
Range: within 5 to 50 km (a city).
MERITS:
The dual bus used in a MAN supports the transmission of data in both directions
simultaneously.
A MAN usually encompasses several blocks of a city or an entire city.
DEMERITS:
More cable is required for a MAN connection from one place to another.
It is difficult to make the system secure from hackers.
3. WAN
When a network spans a large distance, or when the computers to be connected are
at widely separated locations, a local area network cannot be used; a wide area
network (WAN) is installed instead.
Communication between different users of a WAN is established using leased
telephone lines, satellite links, and similar channels.
It is cheaper and more efficient to use the phone network for the link.
Most WANs are used to transfer large blocks of data between their users.
MERITS:
• Covers a large geographical area, so long-distance businesses can connect on the one
network.
• Shares software and resources with connecting workstations.
• Messages can be sent very quickly to anyone else on the network. These messages can
have pictures, sounds, or data included with them (called attachments).
DEMERITS:
• A good firewall is needed to restrict outsiders from entering and disrupting the network.
• Setting up a network can be expensive, slow, and complicated; the bigger the
network, the more expensive it is.
Comparison of LAN, WAN, and MAN:

Parameter                  | LAN     | WAN               | MAN
Ownership of network       | Private | Private or public | Private or public
Geographical area covered  | Small   | Very large        | Moderate
Design and maintenance     | Easy    | Not easy          | Not easy
1) Jitter
Jitter is defined as a variation in the delay of received packets. The sending side transmits
packets in a continuous stream and spaces them evenly apart. Because of network
congestion, improper queuing, or configuration errors, the delay between packets can vary
instead of remaining constant.
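Jitter can be made concrete with a short sketch. The delay values below are hypothetical, and the spread measure used is a simple one chosen for illustration, not a standardized jitter formula.

```python
# Sketch: jitter as variation in the delays of received packets
# (hypothetical per-packet one-way delays, in milliseconds).
delays = [20, 22, 19, 25, 21]

mean_delay = sum(delays) / len(delays)   # average delay
jitter = max(delays) - min(delays)       # simple measure of the variation

print(mean_delay, jitter)  # 21.4 6
```

If the network introduced no congestion or queuing effects, all the delays would be equal and the jitter would be zero.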
2) Bandwidth
In networking, we use the term bandwidth in two contexts.
1. First, for analog signals, bandwidth in hertz refers to the range of frequencies in a
composite signal or the range of frequencies that a channel can pass.
The bandwidth is normally the difference between two numbers. For example, if a
composite signal contains frequencies between 1000 Hz and 5000 Hz, its bandwidth is
5000 - 1000, or 4000 Hz.
2. Second, for digital signals, bandwidth in bits per second refers to the speed of bit
transmission in a channel or link; it is often referred to as capacity.
Another term, bit rate (instead of frequency), is used to describe digital signals. The bit
rate is the number of bits sent in 1 second, expressed in bits per second (bps).
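The two senses of bandwidth can be written out as simple arithmetic, using the frequency example from the text and a hypothetical bit count for the digital sense.

```python
# Sketch: the two contexts for "bandwidth" described above.

# 1) Analog sense: bandwidth in hertz = range of frequencies passed.
f_low, f_high = 1000, 5000        # Hz (the example from the text)
bandwidth_hz = f_high - f_low
print(bandwidth_hz)               # 4000

# 2) Digital sense: bandwidth in bits per second = bit rate (capacity).
bits_sent, seconds = 1_000_000, 1  # hypothetical: 1 Mb sent in 1 second
bit_rate_bps = bits_sent // seconds
print(bit_rate_bps)               # 1000000
```

The first number is a property of frequencies; the second is a property of bits per unit time. They describe different things and only share the word "bandwidth".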
3) Throughput
In data transmission, network throughput is the amount of data moved successfully from
one place to another in a given time period. It is typically measured in bits per second (bps),
as in megabits per second (Mbps) or gigabits per second (Gbps).
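Throughput is just data divided by time, as the arithmetic below shows. The transfer size and duration are hypothetical values chosen for illustration.

```python
# Sketch: throughput = data moved successfully / time taken
# (hypothetical transfer: 50 megabits in 10 seconds).
bits_moved = 50_000_000
seconds = 10

throughput_bps = bits_moved / seconds
print(throughput_bps / 1e6, "Mbps")  # 5.0 Mbps
```

Note that throughput measures what actually arrived, so it is usually lower than the link's nominal bandwidth (capacity).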
4) Bridge
A bridge operates at both the physical layer and the data link layer.
A bridge divides a larger network into smaller segments.
It isolates the traffic of each segment.
It maintains the physical address of each node of each segment.
Addresses are stored in a look-up table.
Types of bridge
1. Simple Bridge
o Used to connect two segments
o Least expensive
o The address of each node is entered manually
o Installation and maintenance are costly and time consuming
2. Multiport Bridge
o Used to connect more than two LANs
o Maintains the physical address of each station
o If three segments are connected, it maintains three tables
o The address of each node is entered manually
o Installation and maintenance are costly and time consuming
3. Transparent Bridge
o Also known as a learning bridge
o Maintains the physical address of each station
o Manual entry of each node is not required
o It maintains the address table on its own; at the initial level the table is empty
o After each frame it processes, it stores the details of each node
o It is self-updating
5) Repeater
A repeater is also known as a regenerator.
It is an electronic device.
It is used at the physical layer of the OSI model.
It is used to carry information over longer distances.
It does not work as an amplifier; it does not amplify the signal.
It regenerates the original bit patterns from weak signal patterns.
So a repeater is a regenerator, not an amplifier.
It is not an intelligent device.
6) Hub and Switch
Hub
Used at the physical layer
A dumb device
Does not maintain any node details
Broadcasts the packets every time
Cheaper
Used to connect two or more computers
Switch
Two types of switch: layer-two and layer-three switches
A layer-two switch operates at the second layer; a layer-three switch operates at the
third layer
A layer-three switch can be configured
A layer-two switch maintains the physical address of each node
A layer-three switch handles logical addresses
No manual handling of the tables is required
The first time, the message is broadcast if the destination address is not available in the
table.
Two different switching strategies:
o Store-and-forward switch: stores the frame in the input buffer until the whole
frame has arrived, then forwards it.
o Cut-through switch: does not wait for the whole frame; it forwards the frame
toward the destination as soon as the destination address has been read.
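The two strategies differ only in how much of the frame is read before forwarding, which the toy sketch below makes explicit. The frame layout and function names are hypothetical simplifications.

```python
# Sketch contrasting the two switching strategies above.
# A frame is modelled as a list of fields, destination address first.
frame = ["DST", "SRC", "payload-1", "payload-2", "FCS"]

def store_and_forward(frame):
    buffer = list(frame)   # buffer the WHOLE frame first (can check FCS)
    return buffer[0]       # only then look up the destination

def cut_through(frame):
    return frame[0]        # forward as soon as the destination is read

print(store_and_forward(frame), cut_through(frame))  # DST DST
```

Both reach the same forwarding decision; store-and-forward trades extra latency for the chance to check the whole frame, while cut-through starts forwarding before the rest of the frame has even arrived.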
7) Router
It is used at the network layer of the OSI model
Maintains the logical address of each node
It is an intelligent device
It has its own software
Determines the best path among the available paths
Used to connect two different networks
Manual handling is not required
We can configure the router according to our requirements
It maintains addresses on its own
At the initial level the table is empty
After each packet it processes, it stores the details of each node
It is self-updating
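"Determines the best path among the available paths" can be illustrated with a toy path table. The router names and costs below are hypothetical, and real routers compute such paths with routing algorithms rather than a flat table.

```python
# Sketch: a router choosing the lowest-cost path to a destination
# (hypothetical paths and costs).
paths = {
    ("R1", "R2", "Dest"): 5,        # path via R2, total cost 5
    ("R1", "R3", "Dest"): 3,        # path via R3, total cost 3
    ("R1", "R3", "R4", "Dest"): 7,  # longer path via R3 and R4, cost 7
}

best_path = min(paths, key=paths.get)  # pick the path with the least cost
print(best_path)  # ('R1', 'R3', 'Dest')
```

The shortest path is not necessarily the one with the fewest hops; it is the one with the lowest total cost, which is exactly what the `min` over costs selects here.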
8) Gateway
Operates at all seven layers of the OSI model
A gateway is a protocol converter
When two networks work on different protocols, it accepts a packet from one network
and transmits the packet to the other network.
It converts the packet into a suitable form.
A gateway is software installed within a router.
A gateway adjusts the data rate, size, and format of each packet.
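Protocol conversion can be sketched as a translation between two packet formats. Both formats and the field names below are hypothetical, chosen only to show the idea of accepting a packet in one form and re-emitting it in another.

```python
# Sketch: a gateway as a protocol converter between two networks
# with different (hypothetical) packet formats.
def to_network_b(packet_a):
    # Accept a packet from network A and convert it into the form
    # expected on network B (different field names / layout).
    return {
        "dest": packet_a["to"],
        "src": packet_a["from"],
        "data": packet_a["payload"],
    }

pkt = {"to": "hostB", "from": "hostA", "payload": "hello"}
print(to_network_b(pkt))
```

The payload is unchanged; only the packaging is rewritten, which is why a gateway must understand both protocols end to end.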