CN CompleteNotes
INTRODUCTION TO NETWORKING
1.1.Introduction to Computer Network
Computer Network is a group of computers connected with each other through wired or wireless transmission media (such as copper wires and optical fibres) so that the devices can interact and exchange data with each other over the network.
The aim of the computer network is the sharing of resources among various devices.
1.2.Network Application
Resource sharing: Resource sharing is the sharing of resources such as programs,
printers, and data among the users on the network without the requirement of the
physical location of the resource and user.
Server-Client model: Computer networking is used in the server-client model. A
server is a central computer used to store the information and maintained by the system
administrator. Clients are the machines used to access the information stored in the
server remotely.
Communication medium: A computer network behaves as a communication medium among the users. For example, a company with more than one computer can have an email system that the employees use for daily communication.
E-commerce: Computer networks are also important in business. We can do business over the internet; for example, amazon.com runs its entire business over the internet.
1. Bus Topology
The bus topology is designed in such a way that all the stations are connected through
a single cable known as a backbone cable.
Each node is either connected to the backbone cable by drop line or directly connected
to the backbone cable.
When a node wants to send a message over the network, it puts the message on the backbone cable. All the stations on the network receive the message, whether it is addressed to them or not.
The bus topology is mainly used in 802.3 (Ethernet) and 802.4 standard networks.
The configuration of a bus topology is quite simple as compared to other topologies.
The backbone cable is considered as a "single lane" through which the message is
broadcast to all the stations.
2. Ring Topology
Ring topology is like a bus topology, but with connected ends.
The node that receives the message from the previous computer will retransmit to the
next node.
The data flows in one direction, i.e., it is unidirectional.
The data flows continuously in a single loop, forming an endless loop.
It has no terminated ends, i.e., each node is connected to the next node and there is no termination point.
The data in a ring topology flows in a clockwise direction.
The most common access method of the ring topology is token passing. Token is a
frame that circulates around the network.
A token moves around the network, and it is passed from computer to computer until it reaches the destination.
The sender modifies the token by putting the address along with the data.
The data is passed from one device to another device until the destination address matches. Once the token is received by the destination device, it sends an acknowledgment to the sender.
In a ring topology, a token is used as a carrier.
3. Star Topology
Star topology is an arrangement of the network in which every node is connected to the
central hub, switch or a central computer.
The central computer is known as a server, and the peripheral devices attached to the
server are known as clients.
Coaxial cable or RJ-45 cables are used to connect the computers.
Hubs or Switches are mainly used as connection devices in a physical star topology.
Star topology is the most popular topology in network implementation.
Advantages:
Each device needs only one link and one I/O port, which makes star topology less
expensive, easy to install and easy to configure.
4. Mesh Topology
5. Hybrid Topology
The combination of various different topologies is known as hybrid topology.
A hybrid topology is a connection between different links and nodes to transfer the data.
When two or more different topologies are combined together, the result is termed a hybrid topology; connecting similar topologies to each other does not result in a hybrid topology. For example, if there is a ring topology in one branch of HDFC bank and a bus topology in another branch of HDFC bank, connecting these two topologies results in a hybrid topology.
1.9.TCP/IP Model
TCP/IP means Transmission Control Protocol and Internet Protocol.
It is the network model used in the current Internet architecture as well.
Protocols are sets of rules which govern every possible communication over a network. These protocols describe the movement of data between the source and the destination over the internet.
They also offer simple naming and addressing schemes.
TCP/IP was developed by the Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA) as part of a research project on network interconnection to connect remote machines.
The protocols and networks in the TCP/IP model are shown in figure 27.
Following are some similarities between OSI Reference Model and TCP/IP Reference Model.
Following are some major differences between OSI Reference Model and TCP/IP Reference
Model.
OSI (Open System Interconnection) vs TCP/IP (Transmission Control Protocol / Internet Protocol):
1. OSI is a generic, protocol-independent standard, acting as a communication gateway between the network and the end user.
1. The TCP/IP model is based on standard protocols around which the Internet has developed; it is a communication protocol that allows connection of hosts over a network.
Q1. Determine the data rate for a noiseless channel having a bandwidth of 3 kHz and two signal levels used for signal transmission.
Solution: For a noiseless channel, the maximum data rate is given by the Nyquist bit rate:
Maximum Bit Rate = 2 × BW × log₂ L
Maximum Bit Rate = 2 × (3 × 10³) × log₂ 2
Maximum Bit Rate = 6000 bps
Q2. Calculate the bandwidth of a noiseless channel having a maximum bit rate of 12 kbps and four signal levels.
Solution: For a noiseless channel, the maximum data rate is given by the Nyquist bit rate:
Maximum Bit Rate = 2 × BW × log₂ L
12 × 10³ = 2 × BW × log₂ 4
BW = (12 × 10³) / 4 = 3000 Hz = 3 kHz
Q3. Calculate the capacity of a telephone channel. The channel bandwidth is 3000 Hz and S/N is 3162.
Solution: The telephone channel is a noisy channel.
C = BW × log₂(1 + S/N)
C = 3000 × log₂(1 + 3162) = 34881 bps
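These three results can be checked with a short Python sketch; the helper names nyquist_bit_rate and shannon_capacity below are just illustrative, not standard library functions.

import math

def nyquist_bit_rate(bandwidth_hz, levels):
    # Noiseless channel (Nyquist): 2 * BW * log2(L)
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr):
    # Noisy channel (Shannon): BW * log2(1 + S/N); snr is the linear ratio
    return bandwidth_hz * math.log2(1 + snr)

print(nyquist_bit_rate(3000, 2))            # Q1: 6000.0 bps
print(12e3 / (2 * math.log2(4)))            # Q2: bandwidth = 3000.0 Hz
print(round(shannon_capacity(3000, 3162)))  # Q3: 34881 bps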
1. Magnetic Media
Data is written on magnetic tape or floppy disk or CD ROM.
Bandwidth is excellent, i.e., up to 19 Gbps.
It is a cost-effective way to transmit large amounts of data.
There is a high delay in accessing the data: it can take minutes, hours, or even days to physically transport a cassette from one location to another.
2. Twisted Pair
Twisted pair cable is the most common transmission medium for LANs.
It consists of copper wires, each surrounded by a PVC insulating layer and twisted around the other in a spiral.
The wires are twisted to improve the transmission characteristics by reducing the
interference.
It can be used for either analog or digital transmission.
Bandwidth depends on the thickness of the wire and the distance travelled.
Twisted pair is relatively inexpensive and easy to install and terminate.
There are two types of twisted pair cable: Unshielded twisted pair (UTP) and
shielded twisted pair (STP).
3. Coaxial Cable
Coaxial cable gets its name because it contains two conductors that share a common axis.
Copper is used as the centre conductor, which can be a solid wire or a stranded one.
It is surrounded by a PVC insulating sheath, which is encased in an outer conductor of metal foil, braid, or both.
Outer metallic wrapping is used as a shield against noise and as the second
conductor which completes the circuit.
The outer conductor is also encased in an insulating sheath.
The outermost part is the plastic cover which protects the whole cable.
To connect coaxial cable to devices, we need coaxial connectors. The most common
type of connector used today is the Bayonet Neill-Concelman (BNC) connector.
2. Multimode fiber
Has multiple strands of glass fibre.
Has a larger core with a diameter of 62.5 microns, i.e., about the thickness of a human hair.
Transmits infrared light with wavelengths of 850 to 1300 nm.
Bandwidth is 2 GHz.
1. Radio Waves
These are easy to generate and can penetrate through buildings.
The sending and receiving antennas need not be aligned.
Frequency Range: 3 kHz – 1 GHz.
AM and FM radios and cordless phones use radio waves for transmission.
Further Categorized as (i) Terrestrial and (ii) Satellite.
2. Microwave
It is a line of sight transmission i.e. the sending and receiving antennas need to be
properly aligned with each other.
The distance covered by the signal is directly proportional to the height of the
antenna.
Frequency Range: 1 GHz – 300 GHz.
Scatternet
It is formed by using various piconets.
A slave that is present in one piconet can act as the master (primary) in another piconet.
This kind of node can receive a message from the master in one piconet (where it acts as a slave) and deliver the message to its slaves in the other piconet (where it acts as the master).
This type of node is called a bridge node.
A station cannot be master in two piconets.
Advantages of Bluetooth:
Low cost.
Easy to use.
It can also penetrate through walls.
It creates an adhoc connection immediately without any wires.
It is used for voice and data transfer.
Disadvantages of Bluetooth:
It can be hacked and hence, less secure.
It has slow data transfer rate: 3 Mbps.
It has small range: 10 meters.
2.5. Switching Techniques
In large networks, there can be multiple paths from sender to receiver. The switching
technique will decide the best route for data transmission.
Switching technique is used to connect the systems for making one-to-one
communication.
2.5.1. Circuit Switching
Circuit switching is a switching technique that establishes a dedicated path between
sender and receiver.
In the circuit switching technique, once the connection is established, the dedicated path remains in place until the connection is terminated.
Circuit switching in a network operates in a similar way as the telephone works.
A complete end-to-end path must exist before the communication takes place.
In the circuit switching technique, when a user wants to send data, voice, or video, a request signal is sent to the receiver, and the receiver sends back an acknowledgment to confirm the availability of the dedicated path. After the acknowledgment is received, the data is transferred over the dedicated path.
Circuit switching is used in the public telephone network. It is used for voice transmission.
Only a fixed amount of data can be transferred at a time in circuit switching technology.
Communication through circuit switching has 3 phases:
a. Connection establishment
b. Data transfer
c. Connection teardown (disconnection)
Techniques of Framing
a. Character Count
The first field in the header specifies the number of characters in the frame.
When the data link layer at the destination sees the character count, it knows how
many characters follow and hence where the end of frame is.
This technique is shown in figure 2 below for four frames of sizes 5, 5, 8, and 8
characters respectively.
Figure 2. A Character Stream. (a) Without errors (b) With one error
The trouble with this algorithm is that the count can be garbled by a transmission
error. For example, if the character count of 5 in the second frame of figure (b)
becomes a 7, the destination will get out of synchronization and will be unable to
locate the start of the next frame.
b. Character Stuffing
Each frame starts and ends with special start and end bytes (flag bytes). Here, we will assume both are the same byte, FLAG, as shown in figure 3.
c. Bit Stuffing
Byte stuffing specifies character format (i.e. 8 bits per character).
To allow an arbitrary number of bits per character, stuffing is done at the bit level rather than at the byte level.
Each frame begins and ends with the bit pattern 01111110 (six 1's between two 0's).
If five 1's appear in a row in the data, a 0 bit is stuffed so that there are never six 1's in a row; the 0 is always stuffed, irrespective of whether the next bit is a 1 or a 0.
The de-stuffer at the receiver removes the 0 that follows any five consecutive 1's (see the sketch below).
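The bit-stuffing rule can be illustrated with a minimal Python sketch; the function names and the sample payload are invented for this example.

FLAG = "01111110"

def bit_stuff(data: str) -> str:
    # Insert a 0 after every run of five consecutive 1's in the payload, then add flags.
    out, ones = [], 0
    for bit in data:
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:          # five 1's in a row -> stuff a 0
            out.append("0")
            ones = 0
    return FLAG + "".join(out) + FLAG

def bit_destuff(frame: str) -> str:
    # Strip the flags, then drop the 0 that follows every run of five 1's.
    body = frame[len(FLAG):-len(FLAG)]
    out, ones, skip = [], 0, False
    for bit in body:
        if skip:               # this bit is the stuffed 0 -> discard it
            skip = False
            ones = 0
            continue
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)

payload = "0111111111100"
framed = bit_stuff(payload)
print(framed)                          # flags plus the stuffed payload
assert bit_destuff(framed) == payload  # de-stuffing recovers the original bits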
Advantage: Simplicity. Each frame is checked and acknowledged before the next frame is
sent.
Disadvantage: Inefficiency. Stop-and-wait is slow. Each frame must travel all the way to the
receiver and the acknowledgement must travel all the way back to the sender before the next
frame can be sent.
2. Sliding window flow control
In sliding window method of flow control, the sender can transmit several frames before
getting the acknowledgement.
The link can carry several frames at one time and its capacity can be used efficiently.
The sliding window refers to imaginary boxes at both the sender and the receiver end.
The window can hold frames at either end and these may be acknowledged at any point
without waiting for the window to fill up.
To keep track of which frames have been transmitted and received, sliding window
introduces an identification scheme based on size of the window.
The frames are numbered from 0 to n-1 and the size of window is also n-1.
Example: If n=8, the frames are numbered 0, 1, 2, 3, 4, 5, 6, 7 and size of window =7.
Thus, the receiver sends an acknowledgment which includes the number of next frame
it expects to receive.
Sender’s Window
At the beginning of transmission, the sender’s window contains n-1 frames.
As the frames are sent out, the left boundary of the window moves inwards shrinking
the size of the window.
Once an acknowledgement arrives, the window expands to allow in a number of new
frames equal to number of frames acknowledged by the receiver.
Q2. A bit stream 10011101 is transmitted using the standard CRC method. The generator polynomial is x³ + 1.
1. What is the actual bit string transmitted?
2. Suppose the third bit from the left is inverted during transmission. How will receiver
detect this error?
Solution:
Part 1:
The generator polynomial G(x) = x³ + 1 is encoded as 1001. Clearly, the generator polynomial consists of 4 bits, so a string of 3 zeroes is appended to the bit stream to be transmitted. The resulting bit stream is 10011101000. The binary (mod-2) division of 10011101000 by 1001 is then performed; the remainder is the CRC, which is appended to the data bits to give the actual bit string transmitted.
Part 2:
According to the question, the third bit from the left gets inverted during transmission, so the bit stream received by the receiver = 10111101100. The receiver performs the binary division of 10111101100 by the same generator polynomial; because the remainder is non-zero, the receiver detects that an error has occurred.
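The division can be verified with a small Python sketch (the helper name mod2_div is made up for this illustration). It shows that the CRC is 100, so the actual bit string transmitted is 10011101100, and that dividing the corrupted string 10111101100 by 1001 leaves a non-zero remainder, which is how the receiver detects the error.

def mod2_div(dividend: str, generator: str) -> str:
    # Mod-2 (XOR) long division; returns the remainder of dividend / generator.
    work = list(dividend)
    for i in range(len(dividend) - len(generator) + 1):
        if work[i] == "1":
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-(len(generator) - 1):])

data, gen = "10011101", "1001"                 # G(x) = x^3 + 1, 4 bits
crc = mod2_div(data + "000", gen)              # append 3 zero bits, then divide
print(data + crc)                              # 10011101100 (transmitted string, CRC = 100)

received = "10111101100"                       # third bit from the left inverted
print(mod2_div(received, gen))                 # 100 -> non-zero remainder => error detected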
Figure 11. 7-bit Hamming Code Structure (D Data bits, P Parity bits)
Parity bits are in positions 2ᵐ, where m = 0, 1, 2, ….
Computing the values of parity bits:
7 6 5 4 3 2 1 Position
D7 D6 D5 P4 D3 P2 P1
111 110 101 100 011 010 001 3-bit binary of position no.
To find P1, select the positions that have the first bit (from the LSB) as 1, i.e., positions 1, 3, 5, 7.
To find P2, select the positions that have the second bit (from the LSB) as 1, i.e., positions 2, 3, 6, 7.
To find P4, select the positions that have the third bit (from the LSB) as 1, i.e., positions 4, 5, 6, 7.
Parity can be even or odd.
If we want even parity, the total number of 1's (including the parity bit) must be even: if the number of 1's excluding the parity bit is already even, the parity bit becomes 0; otherwise the parity bit becomes 1.
If we want odd parity, the total number of 1's (including the parity bit) must be odd: if the number of 1's excluding the parity bit is already odd, the parity bit becomes 0; otherwise the parity bit becomes 1.
Q1. A bit word 1011 is to be transmitted. Construct the even parity 7-bit Hamming code for
the data.
Solution:
7 6 5 4 3 2 1 Position
D7 D6 D5 P4 D3 P2 P1
1 0 1 P4 1 P2 P1 Bits
To find P1, take bits in the position 1, 3, 5, 7. They are P1, 1, 1, 1 respectively.
Given even parity. Therefore, number of 1’s need to be even.
Hence, P1 = 1.
To find P2, take bits in the position 2, 3, 6, 7. They are P2, 1, 0, 1 respectively.
Given even parity. Therefore, number of 1’s need to be even.
Hence, P2 = 0.
To find P4, take bits in the position 4, 5, 6, 7. They are P4, 1, 0, 1 respectively.
Given even parity. Therefore, number of 1’s need to be even.
Hence, P4 = 0.
So, the even parity 7-bit Hamming code for data 1011 is 1010101.
Q2. Determine which bit is in error in the even-parity Hamming code character 1100111.
Solution:
7 6 5 4 3 2 1 Position
D7 D6 D5 P4 D3 P2 P1
1 1 0 0 1 1 1 Bits
To find P1, take bits in the position 1, 3, 5, 7. They are 1, 1, 0, 1 respectively.
Given even parity. Therefore, number of 1’s need to be even.
Hence, P1 = 0. But here P1 =1 is given. Therefore, P1 bit is in error.
To find P2, take bits in the position 2, 3, 6, 7. They are 1, 1, 1, 1 respectively.
Given even parity. Therefore, number of 1’s need to be even.
Hence, P2 = 1. Therefore, P2 is not in error.
To find P4, take bits in the position 4, 5, 6, 7. They are 0, 0, 1, 1 respectively.
Given even parity. Therefore, number of 1’s need to be even.
Hence, P4 = 0. Therefore, P4 is not in error.
So only P1 bit is in error.
Corrected Hamming code character is 1100110.
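A short Python sketch can reproduce both Hamming-code answers; the position-to-bit dictionaries are just a convenient illustration, not a standard API.

def even_parities(bits):
    # bits maps positions 3,5,6,7 (D3,D5,D6,D7) to their values.
    p1 = (bits[3] + bits[5] + bits[7]) % 2   # covers positions 1,3,5,7
    p2 = (bits[3] + bits[6] + bits[7]) % 2   # covers positions 2,3,6,7
    p4 = (bits[5] + bits[6] + bits[7]) % 2   # covers positions 4,5,6,7
    return p1, p2, p4

# Q1: data 1011 -> D7=1, D6=0, D5=1, D3=1
print(even_parities({3: 1, 5: 1, 6: 0, 7: 1}))   # (1, 0, 0) -> codeword (pos 7..1) = 1010101

# Q2: received codeword 1100111 (positions 7..1)
recv = {1: 1, 2: 1, 3: 1, 4: 0, 5: 0, 6: 1, 7: 1}
c1 = (recv[1] + recv[3] + recv[5] + recv[7]) % 2   # 1 -> check fails
c2 = (recv[2] + recv[3] + recv[6] + recv[7]) % 2   # 0 -> check passes
c4 = (recv[4] + recv[5] + recv[6] + recv[7]) % 2   # 0 -> check passes
print(c4, c2, c1)                                  # syndrome 001 -> error in position 1 (P1)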
Q2. A channel has a bit rate of 4 kbps and a propagation delay of 20 msec. For what range of frame sizes does stop-and-wait give an efficiency of at least 50%?
Solution:
Given, Bit rate R = 4kbps
Propagation delay 𝑡𝑝 = 20 ms
Efficiency 𝜂 ≥ 50 % i.e. 0.5 ≤ 𝜂 ≤ 1
𝜂 = 𝑡𝑓 / (𝑡𝑓 + 2𝑡𝑝)
For 𝜂 = 0.5 we get, 0.5 = 𝑡𝑓 / (𝑡𝑓 + 2 × 20 × 10⁻³)
∴ 0.5𝑡𝑓 + 20 × 10⁻³ = 𝑡𝑓
∴ 𝑡𝑓 = 40 × 10⁻³ sec
We have, N = R × 𝑡𝑓
∴ N = 4 × 10³ × 40 × 10⁻³ = 160 bits
Therefore, the frame size must be at least 160 bits (larger frames give an efficiency greater than 50%).
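The same calculation, written as a small Python sketch (the function name is illustrative):

def stop_and_wait_efficiency(frame_bits, bit_rate_bps, prop_delay_s):
    # eta = tf / (tf + 2*tp), where tf = frame size / bit rate
    tf = frame_bits / bit_rate_bps
    return tf / (tf + 2 * prop_delay_s)

R, tp = 4e3, 20e-3                              # 4 kbps, 20 ms
print(R * 2 * tp)                               # smallest frame for eta = 0.5 -> 160.0 bits
print(stop_and_wait_efficiency(160, R, tp))     # 0.5
print(stop_and_wait_efficiency(320, R, tp))     # larger frames give higher efficiency (~0.67)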
3.7. High Level Data Link Control (HDLC) Protocol
HDLC (High-Level Data Link Control) is a bit-oriented code-transparent synchronous
data link layer protocol developed by the International Organization for Standardization
(ISO).
HDLC provides both connection-oriented and connectionless service.
In HDLC, data is organized into a unit (called a frame) and sent across a network to a
destination that verifies its successful arrival.
It supports half-duplex, full-duplex transmission, point-to-point, and multi-point
configuration and switched or non-switched channels.
Types of stations for HDLC Protocol:
Primary station:
It acts as a master and controls the operation.
Handles error recovery.
Frames issued by the primary station are called commands.
Secondary station:
It acts as a slave and operates under the control of the primary station.
Frames issued by a secondary station are called responses.
The primary station maintains a separate logical link with each secondary station.
Combined station:
Acts as both primary and secondary stations.
It does not rely on others for sending data.
In static channel allocation scheme, a fixed portion of the frequency channel is allotted
to each user.
For N competing users, the bandwidth is divided into N channels using frequency
division multiplexing (FDM), and each portion is assigned to one user.
This scheme is also referred to as fixed channel allocation or fixed channel assignment.
In this allocation scheme, there is no interference between the users since each user is
assigned a fixed channel.
However, it is not suitable in case of a large number of users with variable bandwidth
requirements.
Dynamic Channel Allocation
In dynamic channel allocation scheme, frequency bands are not permanently assigned
to the users.
Instead channels are allotted to users dynamically as needed, from a central pool.
The allocation is done considering a number of parameters so that transmission
interference is minimized.
This allocation scheme optimises bandwidth usage and results in faster transmissions.
Dynamic channel allocation is further divided into centralised and distributed
allocation.
Possible assumptions include:
Station Model: Assumes that each of the N stations independently produces frames. Once a frame is generated at a station, the station does nothing until the frame has been successfully transmitted.
Single Channel Assumption: In this allocation all stations are equivalent and can send
and receive on that channel.
Collision Assumption: If two frames overlap in time, a collision occurs. Any collision is an error, and both frames must be retransmitted. Collisions are the only possible errors.
Time can be divided into Slotted or Continuous.
Stations can sense a channel is busy before they try it.
3.10. ALOHA
ALOHA is a multiple access protocol for transmission of data via a shared network
channel.
It operates in the medium access control sublayer (MAC sublayer).
In ALOHA, each node or station transmits a frame without trying to detect whether the
transmission channel is idle or busy.
If the channel is idle, then the frames will be successfully transmitted.
If two frames attempt to occupy the channel simultaneously, collision of frames will
occur and the frames will be discarded.
These stations may choose to retransmit the corrupted frames repeatedly until
successful transmission occurs.
Pure ALOHA
Slotted ALOHA
Slotted ALOHA reduces the number of collisions and doubles the capacity of pure
ALOHA.
The shared channel is divided into a number of discrete time intervals called slots.
A station can transmit only at the beginning of each slot.
However, there can still be collisions if more than one station tries to transmit at the
beginning of the same time slot.
Efficiency of ALOHA
Terms Used
Frame Time: Time required to transmit a frame
G: Average number of new + old frames generated per frame time.
(Old frames are the frames which have to be retransmitted due to collision)
S: Average number of new frames generated per frame time
P0: Probability that the frame does not suffer collision.
VP (Vulnerable Period): Time for which a station should not transmit anything to avoid
collision with the shaded frame.
For pure ALOHA, vulnerable period = 2 × frame time.
For slotted ALOHA, vulnerable period = 1 × frame time.
(This is because in pure ALOHA there is no rule that defines when a station can send: a station may send soon after another station has started or soon before another station has finished. In slotted ALOHA, a frame can start only at slot boundaries, i.e., at times t − Tfr or t. If frame B starts at time t − Tfr, it will not collide with frame A; if frame B starts at time t, it will collide with frame A. Therefore, a collision can take place only within one frame time.)
Derivation:
At low load, there are almost no collisions, so S ≈ G.
In general, S = G · P₀ ………………………..(I)
According to the Poisson distribution formula,
P(k) = (e^(−m) · m^k) / k!, where m is the mean and k is the random variable.
For k = 0, P₀ = e^(−m)
∴ S = G · e^(−m) ………………………. (II)
For pure ALOHA the mean number of frames generated in the vulnerable period (2 frame times) is m = 2G, giving S = G · e^(−2G); for slotted ALOHA the vulnerable period is 1 frame time, so m = G and S = G · e^(−G).
Q2. A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What
is the throughput if the system (all stations together) produces:
a. 1000 frames per second
b. 500 frames per second
c. 250 frames per second.
Solution:
The frame transmission time is 200/200 kbps or 1 ms.
a. If the system creates 1000 frames per second, this is 1 frame per millisecond. The load G is 1. In this case S = G × e^(−2G) or S = 0.135 (13.5 percent). This means that the throughput is 1000 × 0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
b. If the system creates 500 frames per second, this is 1/2 frame per millisecond. The load G is 1/2. In this case S = G × e^(−2G) or S = 0.184 (18.4 percent). This means that the throughput is 500 × 0.184 = 92 frames. Only 92 frames out of 500 will probably survive.
c. If the system creates 250 frames per second, this is 1/4 frame per millisecond. The load G is 1/4. In this case S = G × e^(−2G) or S = 0.152 (15.2 percent). This means that the throughput is 250 × 0.152 = 38 frames. Only 38 frames out of 250 will probably survive.
Q3. A slotted ALOHA network transmits 200-bit frames on a shared channel of 200 kbps.
What is the throughput if the system (all stations together) produces:
a. 1000 frames per second
b. 500 frames per second
c. 250 frames per second.
Solution:
The frame transmission time is 200/200 kbps or 1 ms.
a. If the system creates 1000 frames per second, this is 1 frame per millisecond. The load G is 1. In this case S = G × e^(−G) or S = 0.368 (36.8 percent). This means that the throughput is 1000 × 0.368 = 368 frames. Only 368 frames out of 1000 will probably survive.
b. If the system creates 500 frames per second, this is 1/2 frame per millisecond. The load G is 1/2. In this case S = G × e^(−G) or S = 0.303 (30.3 percent). This means that the throughput is 500 × 0.303 = 151 frames. Only 151 frames out of 500 will probably survive.
c. If the system creates 250 frames per second, this is 1/4 frame per millisecond. The load G is 1/4. In this case S = G × e^(−G) or S = 0.195 (19.5 percent). This means that the throughput is 250 × 0.195 = 49 frames. Only 49 frames out of 250 will probably survive.
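Both worked examples follow directly from S = G·e^(−2G) (pure) and S = G·e^(−G) (slotted); the short Python sketch below reproduces the S values, and multiplying them by the offered frames gives the surviving-frame counts above.

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)       # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)           # S = G * e^(-G)

frame_time = 200 / 200e3              # 200 bits / 200 kbps = 1 ms
for frames_per_sec in (1000, 500, 250):
    G = frames_per_sec * frame_time   # offered load in frames per frame time
    print(f"G={G:.2f}  pure S={pure_aloha_throughput(G):.3f}  slotted S={slotted_aloha_throughput(G):.3f}")
# G=1.00  pure S=0.135  slotted S=0.368
# G=0.50  pure S=0.184  slotted S=0.303
# G=0.25  pure S=0.152  slotted S=0.195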
Q1. A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation time (including the delays in the devices and ignoring the time needed to send a jamming signal) is 25.6 μs, what is the minimum size of the frame?
Solution:
Propagation delay Tp = 25.6 μs
Bandwidth = 10 Mbps
Frame transmission time Tfr = 2 x Tp = 2 x 25.6 μs = 51.2 μs
Minimum frame size = Bandwidth x Tfr = 10 Mbps x 51.2 μs
= 512 bits = 64 bytes
Q2. Consider a CSMA/CD network that transmits data at a rate of 100 Mbps over a 1km cable
with no repeaters. If the minimum frame size required for this network is 1250 bytes, what is
the signal speed (km/sec) in the cable?
Solution:
Bandwidth = 100 Mbps
Distance = 1 km
Minimum frame size = 1250 bytes
Minimum frame size = Bandwidth x Tfr = 100 Mbps x Tfr
∴ 1250 × 8 = 100 × 10⁶ × Tfr
∴ Tfr = (1250 × 8) / (100 × 10⁶) = 1 × 10⁻⁴ sec
Frame transmission time Tfr = 2 × Tp
∴ Tp = Tfr / 2 = (1 × 10⁻⁴) / 2 = 5 × 10⁻⁵ sec
Speed = Distance / Time = 1 km / (5 × 10⁻⁵ sec) = 20000 km/sec
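Both CSMA/CD problems reduce to the relation minimum frame size = bandwidth × 2 × Tp; here is a quick Python check (variable names are illustrative):

def min_frame_size(bandwidth_bps, prop_delay_s):
    # The frame must last at least one round-trip time: L_min = B * 2 * Tp
    return bandwidth_bps * 2 * prop_delay_s

# Q1: 10 Mbps, maximum propagation time 25.6 microseconds
print(min_frame_size(10e6, 25.6e-6))       # 512.0 bits = 64 bytes

# Q2: 100 Mbps, 1 km cable, minimum frame 1250 bytes -> signal speed
Tfr = (1250 * 8) / 100e6                   # transmission time of the minimum frame
Tp = Tfr / 2                               # one-way propagation time
print(1 / Tp)                              # speed = 1 km / Tp = 20000.0 km/sec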
CHAPTER 4
NETWORK LAYER
4.1 Network Layer Design Issues
Network layer is majorly focused on getting packets from the source to the destination, routing,
error handling and congestion control. It is the lowest layer that deals with end-to-end
transmission.
The network layer comes with some design issues described as follows:
1. Store and Forward packet switching
The host sends the packet to the nearest router. This packet is stored there until it has
fully arrived.
Once the packet has been fully processed and its checksum verified, it is forwarded to the next router until it reaches the destination. This mechanism is called "Store and Forward packet switching."
2. Services provided to Transport Layer
Through the network/transport layer interface, the network layer transfers its services
to the transport layer. But before providing these services to the transport layer,
following goals must be kept in mind:
a. The services offered must not depend on the router technology.
b. The transport layer needs to be shielded from the type, number and topology of the routers present.
c. The network addresses made available to the transport layer should follow a uniform numbering plan, even across LANs and WANs.
These services are described below:
a. Connectionless – The routing and insertion of packets into subnet is done
individually. No added setup is required.
b. Connection-Oriented – Subnet must offer reliable service and all the packets must
be transmitted over a single route.
3. Implementation of Connectionless Service
Packets are termed as "datagrams" and the corresponding subnet as a "datagram subnet".
When the message to be transmitted is 4 times the size of a packet, the network layer divides it into 4 packets and transmits each packet to a router using a routing protocol.
Each data packet has a destination address and is routed independently of the other packets.
4. Implementation of Connection Oriented service
To use a connection-oriented service, first we establish a connection, use it and then
release it.
In connection-oriented services, the data packets are delivered to the receiver in the
same order in which they have been sent by the sender.
Because of these problems, classful addressing was replaced by Classless Inter-Domain Routing (CIDR) in 1993.
Subnetting:
Dividing a large block of addresses into several contiguous sub-blocks and assigning
these sub-blocks to different smaller networks is called subnetting.
It is also called as subnet routing or subnet addressing.
It is a practice that is widely used when classless addressing is done.
Benefits of Subnetting
Reduced network traffic
Optimized network performance
Simplified network management
Masking:
A process that extracts the address of the physical network from an IP address is called
masking.
If we do the subnetting, then masking extracts the subnetwork address from an IP
address.
To find the subnetwork address, two methods are used. They are boundary level
masking and non-boundary level masking.
In boundary-level masking, the mask values are only 0 or 255. In non-boundary-level masking, values other than 0 and 255 are also used.
Class A – 255.0.0.0
Class B – 255.255.0.0
Class C – 255.255.255.0
Example 1: Given IP address 132.6.17.85 and default class B mask, find the beginning address
(network address).
Solution: The default mask is 255.255.0.0, which means that only the first 2 bytes are preserved and the other 2 bytes are set to 0. Therefore, the network address is 132.6.0.0.
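The same boundary-level masking can be reproduced with Python's standard ipaddress module; the addresses are the ones from Example 1.

import ipaddress

# Example 1: IP 132.6.17.85 with the default class B mask 255.255.0.0
net = ipaddress.ip_network("132.6.17.85/255.255.0.0", strict=False)
print(net.network_address)                  # 132.6.0.0

# The same masking done by hand with a bitwise AND
ip   = int(ipaddress.ip_address("132.6.17.85"))
mask = int(ipaddress.ip_address("255.255.0.0"))
print(ipaddress.ip_address(ip & mask))      # 132.6.0.0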
Example:
When a host on the internal network with an internal IP address needs to communicate outside its private network, it uses the public IP address on the network's gateway to identify itself to the rest of the world, and this translation of a private IP address to a public one is done by NAT.
For example, if a computer with an internal address of 10.0.0.1 wants to communicate with a web server somewhere on the internet, NAT translates the address 10.0.0.1 to the company's public address, let's call this 150.150.0.1, so that the internal address is identified as the public address when communicating with the outside world.
This has to be done because when the web server somewhere on the internet was to
reply to this internal computer, it needs to send this to a unique and routable address on
the internet, the public address.
It cannot use the original address of 10.0.0.1, as this is private, non-routable and hidden from the outside world. The address 150.150.0.1 is the public address for that company and can be seen by everyone.
Now the web server would reply to that public address, 150.150.0.1.
NAT then uses its records to translate the packets received from the web server, which were destined to 150.150.0.1, back to the internal network address 10.0.0.1, and the computer that requested the original information receives the packets.
NAT Types
There are three different types of NATs. People use them for different reasons, but they all still
work as a NAT.
1. Static NAT
In this, a single private IP address is mapped with single Public IP address, i.e., a private IP
address is translated to a public IP address. It is used in Web hosting.
3. PAT
PAT stands for Port Address Translation. It is a type of dynamic NAT, but it maps several local IP addresses to a single public one. Organizations that want all their employees' activity to appear under a single IP address use PAT, often under the supervision of a network administrator.
Dijkstra’s Algorithm:
Dijkstra's Algorithm allows you to calculate the shortest path between one node (you
pick which one) and every other node in the graph.
Here's a description of the algorithm:
1. Mark your selected initial node with a current distance of 0 and the rest with infinity.
2. Set the non-visited node with the smallest current distance as the current node C.
3. For each neighbour N of your current node C: add the current distance of C with
the weight of the edge connecting C-N. If it's smaller than the current distance of
N, set it as the new current distance of N.
4. Mark the current node C as visited.
5. If there are non-visited nodes, go to step 2.
Example 1:
Let's calculate the shortest path between node C and the other nodes in our graph:
We'll also have a current node. Initially, we set it to C (our selected node). In the image, we
mark the current node with a red dot.
Now, we check the neighbours of our current node (A, B and D) in no specific order. Let's
begin with B. We add the minimum distance of the current node (in this case, 0) with the weight
of the edge that connects our current node with B (in this case, 7), and we obtain 0 + 7 = 7. We
compare that value with the minimum distance of B (infinity); the lowest value is the one that
remains as the minimum distance of B (in this case, 7 is less than infinity):
Now, let's check neighbour A. We add 0 (the minimum distance of C, our current node) with
1 (the weight of the edge connecting our current node with A) to obtain 1. We compare that 1
with the minimum distance of A (infinity), and leave the smallest value:
We now need to pick a new current node. That node must be the unvisited node with the
smallest minimum distance (so, the node with the smallest number and no check mark). That's
A. Let's mark it with the red dot:
And now we repeat the algorithm. We check the neighbours of our current node, ignoring the
visited nodes. This means we only check B.
For B, we add 1 (the minimum distance of A, our current node) with 3 (the weight of the edge
connecting A and B) to obtain 4. We compare that 4 with the minimum distance of B (7) and
leave the smallest value: 4.
Afterwards, we mark A as visited and pick a new current node: D, which is the non-visited
node with the smallest current distance.
E doesn't have any non-visited neighbours, so we don't need to check anything. We mark it as
visited.
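The steps above translate directly into code. The sketch below is a generic Dijkstra implementation in Python; the edge weights C-A = 1, C-B = 7 and A-B = 3 come from the example, while the remaining weights (and the C-D, D-E links) are made up only to complete a runnable graph.

import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbour: edge_weight}}; returns shortest distances from source
    dist = {node: float("inf") for node in graph}   # step 1: every distance is infinity...
    dist[source] = 0                                # ...except the selected initial node
    pq = [(0, source)]                              # priority queue of (distance, node)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)                    # step 2: unvisited node with smallest distance
        if u in visited:
            continue
        visited.add(u)                              # step 4: mark the current node as visited
        for v, w in graph[u].items():               # step 3: relax every neighbour
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    "A": {"C": 1, "B": 3},
    "B": {"A": 3, "C": 7, "D": 2},
    "C": {"A": 1, "B": 7, "D": 2},
    "D": {"B": 2, "C": 2, "E": 7},
    "E": {"D": 7},
}
print(dijkstra(graph, "C"))   # {'A': 1, 'B': 4, 'C': 0, 'D': 2, 'E': 9}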
4.8.2 Flooding
Flooding is a static routing algorithm. In this algorithm, every incoming packet is sent
on all outgoing lines except the line on which it has arrived.
One major problem of this algorithm is that it generates a large number of duplicate
packets on the network.
Several measures are taken to stop the duplication of packets. These are:
[1] One solution is to include a hop counter in the header of each packet. This counter is decremented at each hop along the path. When the counter reaches zero, the packet is discarded. Ideally, the hop counter should become zero exactly at the destination hop.
4.8.3 Link State Routing
Link state routing is based on the following three ideas:
[1] Knowledge about the neighborhood: Instead of sending its entire routing table, a router sends information about its neighborhood only. A router broadcasts its identity and the cost of the directly attached links to other routers.
[2] Flooding: Each router sends this information to every other router on the internetwork. Every router that receives the packet sends copies to all its neighbors, so finally each and every router receives a copy of the same information. This process is known as flooding.
[3] Information sharing: A router sends the information to every other router only when
the change occurs in the information.
Link State Routing has two phases:
[1] Reliable Flooding
Initial state: Each node knows the cost of its neighbors.
Final state: Each node knows the entire graph.
[2] Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all
nodes.
Heavy traffic is created in link state routing due to flooding.
Flooding can cause infinite looping; this problem can be solved by using a Time-to-Live (TTL) field.
4.9 Protocols
4.9.1 ARP Protocol
Address Resolution Protocol (ARP) is an important protocol of the network layer in
the OSI model, which helps find the MAC (Media Access Control) address given the
system's IP address.
Table lookup – Bindings are stored in memory with the protocol address as the key. The data link layer looks up the protocol address to find the hardware address.
Dynamic – This type of network messaging is used for "just-in-time" resolution. The data link layer broadcasts a request message containing the protocol address, and the destination responds with its hardware address.
Closed-form computation–In this method, a protocol address is based on a hardware
address. Data link layer derives the hardware address from the protocol address.
ARP Header
RARP vs ARP:
RARP stands for Reverse Address Resolution Protocol; ARP stands for Address Resolution Protocol.
In RARP, we find our own IP address; in ARP, we find the IP address of a remote machine.
4.9.3. ICMP
The ICMP stands for Internet Control Message Protocol.
It is a network layer protocol.
It is used for error handling in the network layer, and it is primarily used on network
devices such as routers.
As different types of errors can exist in the network layer, so ICMP can be used to
report these errors and to debug those errors.
For example, a sender wants to send a message to some destination, but the router cannot deliver the message to that destination. In this case, the router sends a message back to the sender saying that it could not deliver the message to that destination.
The IP protocol does not have any error-reporting or error-correcting mechanism, so it
uses a message to convey the information.
For example, if a message sent to the destination is lost somewhere between the sender and the destination and no one reports the error, the sender might think that the message has reached the destination. If someone in between reports the error, the sender can resend the message very quickly.
Messages
The ICMP messages are usually divided into two categories:
Error-reporting messages: The error-reporting message means that the router encounters a
problem when it processes an IP packet then it reports a message.
Query messages: The query messages are those messages that help the host to get the specific
information of another host. For example, suppose there are a client and a server, and the client
wants to know whether the server is live or not, then it sends the ICMP message to the server.
o Type: It is an 8-bit field. It defines the ICMP message type. In ICMPv6, the values from 0 to 127 are defined as error messages, and the values from 128 to 255 are informational messages.
o Code: It is an 8-bit field that defines the subtype of the ICMP message
o Checksum: It is a 16-bit field to detect whether the error exists in the message or not.
4.9.4 IGMP
The Internet Group Management Protocol (IGMP) is a protocol that allows several
devices to share one IP address so they can all receive the same data.
IGMP is a network layer protocol used to set up multicasting on networks that use
the Internet Protocol version 4 (IPv4).
Specifically, IGMP allows devices to join a multicasting group.
Multicasting is when a group of devices all receive the same messages or packets.
Multicasting works by sharing an IP address between multiple devices.
IGMPv1 Header
The IGMP header has a total length of 64 bits.
The first 8 bits always specify the protocol version IGMPv1 and the type of message.
There are two options for the field (type): “1” (for membership requests) and “2” (for
notifications about multicast data streams).
Bits 8 to 15 follow, but they have no function and only consist of zeros.
The first 32-bit block ends with a checksum.
If it is an IGMP notification package, the 32 bit-long group address will follow.
IGMPv2 Header
The header starts similarly to the first protocol version, but without specifying the version number.
The possible type codes are “0x11” (for requests), “0x16” (for notifications), and
“0x17” (for leave messages). For backwards compatibility, there is also the code
“0x12” for IGMPv1 notifications.
Bits 8 to 15 receive a concrete function in IGMPv2 – at least for membership requests
– and define the maximum response time allowed.
This is followed by the checksum (16 bits) and the group address (32 bits), which in
turn has the protocol-typical form 0.0.0.0 for general requests.
When the number of packets hosts send into the network is well within its carrying
capacity, the number delivered is proportional to the number sent. If twice as many are
sent, twice as many are delivered.
However, as the offered load approaches the carrying capacity, bursts of traffic
occasionally fill up the buffers inside routers and some packets are lost.
These lost packets consume some of the capacity, so the number of delivered packets
falls below the ideal curve. The network is now congested.
Unless the network is well designed, it may experience a congestion collapse, in which
performance plummets as the offered load increases beyond the capacity.
This can happen because packets can be sufficiently delayed inside the network that
they are no longer useful when they leave the network.
1. Backpressure Technique:
In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may get congested because its output data flow slows down. Similarly, the 1st node may get congested and inform the source to slow down.
2. Choke Packet Technique:
Choke packet technique is applicable to both virtual circuit networks as well as datagram subnets.
A choke packet is a packet sent by a node to the source to inform it of
congestion.
Each router monitors its resources and the utilization at each of its output lines.
Whenever the resource utilization exceeds the threshold value which is set by
the administrator, the router directly sends a choke packet to the source giving
it a feedback to reduce the traffic.
The intermediate nodes through which the packets have travelled are not warned
about congestion.
Getting back to our client-server example, the client’s CONNECT call causes a
CONNECTION REQUEST segment to be sent to the server.
When it arrives, the transport entity checks to see that the server is blocked on a
LISTEN (i.e., is interested in handling requests). If so, it then unblocks the server and
sends a CONNECTION ACCEPTED segment back to the client. When this segment
arrives, the client is unblocked and the connection is established.
Data can now be exchanged using the SEND and RECEIVE primitives.
When a connection is no longer needed, it must be released to free up table space within
the two transport entities.
Disconnection has two variants: asymmetric and symmetric.
In the asymmetric variant, either transport user can issue a DISCONNECT primitive,
which results in a DISCONNECT segment being sent to the remote transport entity.
Upon its arrival, the connection is released.
In the symmetric variant, each direction is closed separately, independently of the other
one. When one side does a DISCONNECT, that means it has no more data to send but
it is still willing to accept data from its partner. In this model, a connection is released
when both sides have done a DISCONNECT.
A state diagram for connection establishment and release for these simple primitives is
given in figure below.
Socket Programming
Server Side:
Server startup executes SOCKET, BIND and LISTEN primitives.
The LISTEN primitive allocates a queue for multiple simultaneous clients.
Then it uses ACCEPT to suspend the server until a request arrives.
When a client request arrives, ACCEPT returns.
Client side:
It uses the SOCKET primitive to create a socket.
Then use CONNECT to initiate connection process.
When this returns, the socket is open.
Both sides can now SEND, RECEIVE.
Connection not released until both sides do CLOSE.
Typically, client does it, server acknowledges.
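Mapping these primitives onto Python's standard socket API gives the following minimal sketch; the loopback address, port 9090 and the one-shot echo behaviour are arbitrary choices made for the example.

# Run "python sock_demo.py server" in one terminal, then "python sock_demo.py client" in another.
import socket
import sys

HOST, PORT = "127.0.0.1", 9090

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET
    srv.bind((HOST, PORT))                                     # BIND
    srv.listen(5)                                              # LISTEN: queue of 5 pending clients
    conn, addr = srv.accept()                                  # ACCEPT blocks until a client connects
    data = conn.recv(1024)                                     # RECEIVE
    conn.sendall(b"ACK: " + data)                              # SEND
    conn.close()                                               # CLOSE (server acknowledges the release)
    srv.close()

def client():
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)    # SOCKET
    cli.connect((HOST, PORT))                                  # CONNECT (connection is established here)
    cli.sendall(b"hello")                                      # SEND
    print(cli.recv(1024))                                      # RECEIVE -> b'ACK: hello'
    cli.close()                                                # CLOSE (client typically releases first)

if __name__ == "__main__":
    server() if sys.argv[1:] == ["server"] else client()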
5.5 TCP
TCP (Transmission Control Protocol) was specifically designed to provide a reliable
end-to-end byte stream over an unreliable internetwork.
An internetwork differs from a single network because different parts may have wildly
different topologies, bandwidth, delays, packet sizes, and other parameters.
TCP was designed to dynamically adapt to properties of the internetwork and to be
robust in the face of many kinds of failures.
TCP service is obtained by both the sender and receiver creating end points, called
sockets.
Each socket has a socket number (address) consisting of the IP address of the host and
a 16-bit number local to that host, called a port.
A port is the TCP name for a TSAP (Transport Service Access Point).
For TCP service to be obtained, a connection must be explicitly established between a
socket on the sending machine and a socket on the receiving machine.
All TCP connections are full duplex and point-to-point. Full duplex means that traffic
can go in both directions at the same time. Point-to-point means that each connection
has exactly two end points.
TCP does not support multicasting or broadcasting.
The basic protocol used by TCP entities is the sliding window protocol.
When a sender transmits a segment, it also starts a timer. When the segment arrives at
the destination, the receiving TCP entity sends back a segment (with data if any exist,
otherwise without data) bearing an acknowledgement number equal to the next
sequence number it expects to receive.
If the sender's timer goes off before the acknowledgement is received, the sender
transmits the segment again.
TCP Connection Establishment
Connections are established in TCP by means of the three-way handshake process.
To establish a connection, one side, say, the server, passively waits for an incoming
connection by executing the LISTEN and ACCEPT primitives, either specifying a
specific source or nobody in particular.
Note that a SYN segment consumes 1 byte of sequence space so that it can be
acknowledged unambiguously.
In the event that two hosts simultaneously attempt to establish a connection between
the same two sockets, the sequence of events is as illustrated in figure (b).
The result of these events is that just one connection is established, not two because
connections are identified by their end points.
TCP Connection Release
TCP connections are full duplex.
Each simplex connection is released independently of its sibling.
To release a connection, either party can send a TCP segment with the FIN bit set,
which means that it has no more data to transmit.
When the FIN is acknowledged, that direction is shut down for new data.
Data may continue to flow indefinitely in the other direction, however.
When both directions have been shut down, the connection is released.
Normally, four TCP segments are needed to release a connection, one FIN and
one ACK for each direction.
However, it is possible for the first ACK and the second FIN to be contained in the same
segment, reducing the total count to three.
1. Source Port: Source Port is a 2-byte field used to identify the port number of the source.
2. Destination Port: It is a 2-byte field used to identify the port of the destined packet.
3. Length: Length is the total length of the UDP datagram, including header and data. It is a 16-bit field.
4. Persistent Timer
TCP uses a persistent timer to deal with a zero-window-size deadlock situation.
It keeps the window size information flowing even if the other end closes its receiver
window.
Sender starts the persistent timer on receiving an ACK from the receiver with a zero
window size.
When persistent timer goes off, sender sends a special segment to the receiver.
This special segment is called as probe segment and contains only 1 byte of new data.
Response sent by the receiver to the probe segment gives the updated window size.
If the updated window size is non-zero, it means data can be sent now.
If the updated window size is still zero, the persistent timer is set again and the cycle
repeats.
The window spans a portion of the buffer containing bytes received from the process.
The bytes inside the window are the bytes that can be in transit; they can be sent without
worrying about acknowledgment.
The imaginary window has two walls: one left and one right.
The window is opened, closed, or shrunk.
The sender has sent bytes up to 202. We assume that cwnd is 20 (in reality this value is
thousands of bytes).
The receiver has sent an acknowledgment number of 200 with an rwnd of 9 bytes (in
reality this value is thousands of bytes).
The size of the sender window is the minimum of rwnd and cwnd, or 9 bytes.
Bytes 200 to 202 are sent, but not acknowledged.
Bytes 203 to 208 can be sent without worrying about acknowledgment.
Bytes 209 and above cannot be sent.
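With the numbers from this example, the sender window computation looks like this (the helper name is invented for the illustration):

def usable_window(next_expected, last_byte_sent, rwnd, cwnd):
    window = min(rwnd, cwnd)                        # 9 bytes in the example
    in_flight = last_byte_sent - next_expected + 1  # bytes 200-202 -> 3 bytes unacknowledged
    return window - in_flight                       # bytes 203-208 -> 6 bytes still sendable

print(min(9, 20))                                   # sender window size = 9
print(usable_window(200, 202, rwnd=9, cwnd=20))     # 6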
Features of TCP sliding window are as follows:
The size of the window is the lesser of rwnd and cwnd.
The source does not have to send a full window's worth of data.
The window can be opened or closed by the receiver, but should not be shrunk.
The destination can send an acknowledgment at any time as long as it does not result
in a shrinking window.
The receiver can temporarily shut down the window; the sender, however, can always
send a segment of 1 byte after the window is shut down.
It is very difficult to find the IP address associated with a website because there are millions of websites, and we must be able to obtain the IP address for any of them immediately, without noticeable delay; therefore, the organization of the DNS database is very important.
The host requests the DNS name server to resolve the domain name, and the name server returns the IP address corresponding to that domain name so that the host can then connect to that IP address.
Hierarchy of Name Servers
Root name servers: A root server is contacted by name servers that cannot resolve the name. It contacts the authoritative name server if the name mapping is not known. It then gets the mapping and returns the IP address to the host.
Top level server: It is responsible for com, org, edu, etc. and all top level country
domains like uk, fr, ca, in etc. They have info about authoritative domain servers and
know names and IP addresses of each authoritative name server for the second level
domains.
Authoritative name servers: This is the organization's DNS server, providing the authoritative hostname-to-IP mapping for the organization's servers. It can be maintained by the organization or a service provider. In order to resolve cse.dtu.in, we first ask a root DNS server, which points us to the top-level domain server, and then to the authoritative domain name server, which actually holds the IP address. The authoritative name server then returns the associated IP address.
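From an application's point of view, the whole hierarchy is hidden behind a single call to the local resolver; for instance, in Python (example.com is just a placeholder hostname):

import socket

# The stub resolver hands the query to the configured DNS server, which walks the
# root -> top-level -> authoritative servers and returns the address.
print(socket.gethostbyname("example.com"))   # prints whatever IPv4 address the resolver returns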
6.2 HTTP
HTTP stands for HyperText Transfer Protocol.
It is a protocol used to access the data on the World Wide Web (www).
The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
6.3 SMTP
SMTP is short for Simple Mail Transfer Protocol.
It is an application layer protocol.
It is used for sending the emails efficiently and reliably over the internet.
SMTP is a push protocol.
SMTP uses TCP at the transport layer.
SMTP uses port number 25.
SMTP uses persistent TCP connections, so it can send multiple emails at once.
SMTP is a connection oriented protocol.
SMTP is a stateless protocol.
Working
SMTP server is always on a listening mode.
Client initiates a TCP connection with the SMTP server.
SMTP server listens for a connection and initiates a connection on that port.
The connection is established.
6.4 Telnet
Telnet is short for Terminal Network.
Telnet is a client-server application that allows a user to log onto remote machine and
lets the user to access any application on the remote computer.
Telnet uses NVT (Network Virtual Terminal) system to encode characters on the local
system.
On the server (remote) machine, NVT decodes the characters to a form acceptable to
the remote machine.
Telnet is a protocol that provides a general, bi-directional, 8-bit byte oriented
communications facility.
Many application protocols are built upon the Telnet protocol.
Telnet services are used on Port 23.
6.5 FTP
FTP is short for File Transfer Protocol.
It is an application layer protocol.
It is used for exchanging files over the internet.
It enables the users to upload and download the files from the internet.
FTP uses TCP at the transport layer.
FTP uses port number 21 for control connection.
FTP uses port number 20 for data connection.
FTP uses persistent TCP connections for control connection.
FTP uses non-persistent connections for data connection.
FTP is a connection oriented protocol.
6.6 DHCP
Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to dynamically assign an IP address to any device, or node, on a network so that it can communicate using IP (Internet Protocol).
DHCP automates and centrally manages these configurations.
There is no need to manually assign IP addresses to new devices. Therefore, there is no
requirement for any user configuration to connect to a DHCP based network.
DHCP can be implemented on local networks as well as large enterprise networks.
DHCP is the default protocol used by most routers and networking equipment.
DHCP manages the provision of all the nodes or devices added or dropped from the
network.
DHCP maintains the unique IP address of the host using a DHCP server.
It sends a request to the DHCP server whenever a client/node/device, which is
configured to work with DHCP, connects to a network. The server acknowledges by
providing an IP address to the client/node/device.
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically assign
IP addresses to DHCP clients/nodes and to allocate TCP/IP configuration information
to the DHCP clients. Information includes subnet mask information, default gateway,
IP addresses and domain name system addresses.
DHCP is based on client-server protocol in which servers manage a pool of unique IP
addresses, as well as information about client configuration parameters, and assign
addresses out of those address pools.
Components of DHCP
When working with DHCP, it is important to understand all of the components. Following are
the list of components:
DHCP Server: A DHCP server is a networked device running the DHCP service that
holds IP addresses and related configuration information. This is typically a server or a
router but could be anything that acts as a host, such as an SD-WAN appliance.