
COMPUTER NETWORKS NOTES

Datagram Network

In a connectionless communication system, a datagram is the smallest unit in which data is transmitted. Datagrams are data packets that carry enough header information to be routed individually by all intermediate network switching devices to the destination. Such networks are called datagram networks because communication occurs via datagrams, and they are found in packet-switching networks.

Features of Datagram Networks

 Datagram switching is done at the network layer of the communication system.


 In datagram networks, each data packet or datagram is routed independently from
the source to the destination even if they belong to the same message. The network
treats the packet as if it exists alone.
 Since the datagrams are treated as independent units, no dedicated path is fixed for
data transfer. Each datagram is routed by the intermediate routers using dynamically
changing routing tables. So two successive packets from the source may follow
completely separate routes to reach the destination.
 In these networks, no prior resource allocation is done for the individual packets.
This implies that no resources like buffers, processors, bandwidth, etc. are reserved
before the communication commences.
 In datagram networks, resources are allocated on demand on a First−Come
First−Serve (FCFS) basis. When a packet arrives at a router, the packet must wait if
there are other packets being processed, irrespective of its source or destination.
 Datagram communication is generally guided by User Datagram Protocol or UDP.
As an example, consider four datagram packets, labelled A, B, C and D, sent by host H1 to host H2. Although all four belong to the same message, they are routed separately via separate routes. The packets may therefore arrive at the destination out of order, and it is the responsibility of H2 to reorder the packets in order to retrieve the original message.
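As a rough illustration of the reordering H2 must perform, the following Python sketch sorts independently routed datagrams by a sequence number assumed to be carried in each packet's header (the (seq, payload) format is an assumption for this example, not a real datagram layout):

# Illustrative sketch: reordering out-of-order datagrams by sequence number.
def reassemble(datagrams):
    """Sort independently routed datagrams back into the original message."""
    ordered = sorted(datagrams, key=lambda pkt: pkt[0])   # sort on the sequence number
    return b"".join(payload for _, payload in ordered)

# Datagrams A, B, C and D arrive out of order after taking different routes.
arrived = [(2, b"C"), (0, b"A"), (3, b"D"), (1, b"B")]
print(reassemble(arrived))   # b'ABCD'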

Congestion Control in Datagram Subnets

Some congestion control approaches that can be used in datagram subnets (and also in virtual-circuit subnets) are given below.
1. Choke packets
2. Load shedding
3. Jitter control.
Approach-1: Choke Packets
 This approach can be used in virtual-circuit subnets as well as in datagram subnets. In this technique, each router associates a real variable with each of its output lines.
 This real variable, say u, has a value between 0 and 1, and it indicates the utilization of that line. If the value of u rises above a threshold, the output line enters a warning state.
 The router checks each newly arriving packet to see whether its output line is in the warning state. If it is, the router sends a choke packet back to the packet's source. Several variations on this congestion control algorithm have been proposed, depending on the value of the thresholds.
 Depending upon the threshold crossed, the choke packet can carry a mild warning, a stern warning, or an ultimatum. Another variation uses queue lengths or buffer utilization instead of line utilization as the deciding factor.

Drawback –
The problem with the choke packet technique is that the action to be taken by the source host
on receiving a choke packet is voluntary and not compulsory.
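The following Python sketch illustrates the idea of the utilization variable and the warning state; the smoothing factor, threshold, and function names are assumptions for illustration, not part of any specific router implementation.

ALPHA = 0.9        # weight given to the previous estimate of u (assumed value)
THRESHOLD = 0.8    # utilization above which the output line enters the warning state

def update_utilization(u, f):
    """Smooth the instantaneous line utilization f (a fraction between 0 and 1)."""
    return ALPHA * u + (1 - ALPHA) * f

def handle_arrival(u, source):
    """If the packet's output line is in the warning state, send a choke packet back."""
    if u > THRESHOLD:
        return ("CHOKE", source)    # the source is asked to reduce its sending rate
    return None

u = 0.0
for f in [1.0] * 25:                # a sustained burst keeps the line busy
    u = update_utilization(u, f)
print(round(u, 3), handle_arrival(u, "H1"))   # utilization has crossed the threshold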

Approach-2: Load Shedding :


 Admission control, choke packets, and fair queuing are the techniques suitable for
congestion control. But if these techniques cannot make the congestion disappear, then
the load-shedding technique is to be used.
 The principle of load shedding states that when the router is inundated by packets that it
cannot handle, it should simply throw packets away.
 A router flooded with packets due to congestion can drop any packet at random. But there
are better ways of doing this.
 The policy for dropping a packet depends on the type of packet. For file transfer, an old packet is more important than a new packet. In contrast, for multimedia, a new packet is more important than an old one. So the policy for file transfer is called wine (old is better than new), and the policy for multimedia is called milk (new is better than old).
 An intelligent discard policy can be decided depending on the applications. To implement
such an intelligent discard policy, cooperation from the sender is essential.
 The application should mark their packets in priority classes to indicate how important they
are.
 If this is done, then when packets must be discarded the routers first drop packets from the lowest class (i.e. the least important packets), then from the next lower class, and so on. One or more header bits are required to mark the priority class of a packet. In every ATM cell, 1 bit of the header is reserved for this purpose, so every ATM cell is labeled as either low priority or high priority.
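A minimal sketch of such a discard policy, assuming numeric priority classes and a small buffer limit (both invented for the example):

BUFFER_LIMIT = 4
buffer = []                                   # (priority, payload) pairs; higher value = more important

def enqueue(packet):
    """Accept the packet, shedding the least important buffered packet if the buffer is full."""
    if len(buffer) < BUFFER_LIMIT:
        buffer.append(packet)
        return True
    lowest = min(range(len(buffer)), key=lambda i: buffer[i][0])
    if packet[0] > buffer[lowest][0]:
        del buffer[lowest]                    # drop from the lowest class first
        buffer.append(packet)
        return True
    return False                              # the arriving packet itself is the least important

for pkt in [(1, "a"), (3, "b"), (1, "c"), (2, "d"), (3, "e"), (1, "f")]:
    enqueue(pkt)
print(buffer)                                 # [(3, 'b'), (1, 'c'), (2, 'd'), (3, 'e')]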

Approach-3: Jitter control :


 Jitter may be defined as the variation in delay for packets belonging to the same flow. Real-time audio and video cannot tolerate jitter; on the other hand, jitter does not matter if the packets are carrying information contained in a file.
 For audio and video transmission, it does not matter whether packets take 20 ms or 30 ms to reach the destination, provided that the delay remains constant.
 The quality of sound and video is hampered when different packets experience different delays. In practice, a requirement such as "99% of packets must be delivered with a delay between 24.5 ms and 25.5 ms" may therefore be imposed.
 When a packet arrives at a router, the router checks whether the packet is behind or ahead of its schedule, and by how much.
 This information is stored in the packet and updated at every hop. If a packet is ahead of schedule, the router holds it for a slightly longer time; if the packet is behind schedule, the router tries to send it out as quickly as possible. This helps keep the delay per packet nearly constant and avoids jitter.
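A small sketch of the per-hop decision, assuming each packet carries its expected arrival time (the field and the 25 ms schedule are assumptions based on the example above):

def hold_time_ms(expected_arrival_ms, actual_arrival_ms):
    """Return how long the router should hold the packet before forwarding it."""
    ahead_by = expected_arrival_ms - actual_arrival_ms
    return max(0.0, ahead_by)     # ahead of schedule -> hold; behind schedule -> send at once

print(hold_time_ms(25.0, 23.5))   # arrived 1.5 ms early -> hold for 1.5 ms
print(hold_time_ms(25.0, 26.0))   # arrived 1.0 ms late  -> hold 0 ms, forward immediately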

Classification of Routing Algorithms

Routing is the process of establishing the routes that data packets must follow to reach the
destination. In this process, a routing table is created which contains information regarding
routes that data packets follow. Various routing algorithms are used for the purpose of
deciding which route an incoming data packet needs to be transmitted on to reach the
destination efficiently.
The routing algorithms can be classified as follows:
1. Adaptive Algorithms
2. Non-Adaptive Algorithms
3. Hybrid Algorithms

A routing protocol uses a routing algorithm that provides the best path from the source to the destination, where the best path is the least-cost path from the source to the destination.

1. Adaptive Algorithms
These are the algorithms that change their routing decisions whenever the network topology or traffic load changes; the routing decisions reflect changes in the topology as well as in the traffic of the network. Also known as dynamic routing, these algorithms make use of dynamic information such as the current topology, load, and delay to select routes. Typical optimization parameters are distance, number of hops, and estimated transit time.
Further, these are classified as follows:
 Isolated: In this method, each node makes its routing decisions using only the information it has, without seeking information from other nodes. The sending node does not have information about the status of a particular link. The disadvantage is that packets may be sent through a congested part of the network, which may result in delay. Examples: hot potato routing and backward learning.

 Centralized: In this method, a centralized node has entire information about the network and makes all the routing decisions. The advantage is that only one node needs to keep the information of the entire network; the disadvantage is that if the central node goes down, the entire routing system fails. The link state algorithm is referred to as a centralized algorithm since it is aware of the cost of each link in the network.

 Distributed: In this method, each node receives information from its neighbors and then decides how to route the packets. A disadvantage is that packets may be delayed if the topology changes between the intervals at which routing information is received. It is also known as a decentralized algorithm, as the least-cost path between source and destination is computed in a distributed manner.

2. Non-Adaptive Algorithms
These are the algorithms that do not change their routing decisions once they have been
selected. This is also known as static routing as a route to be taken is computed in advance
and downloaded to routers when a router is booted.
Further, these are classified as follows:
 Flooding: In this technique, every incoming packet is sent out on every outgoing line except the one it arrived on. One problem is that packets may travel in loops, and as a result a node may receive duplicate packets. These problems can be overcome with the help of sequence numbers, hop counts, and spanning trees (a small sketch of flooding with duplicate suppression follows this list).
 Random walk: In this method, a packet is sent from node to node, each time to one of its neighbors chosen at random. This is a highly robust method that is usually implemented by sending the packet onto the least-queued link.
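A minimal sketch of flooding with duplicate suppression, using an assumed (source, sequence number) pair and hop-count limit over an invented three-node topology:

seen = set()            # (source, seq) pairs this node has already forwarded
MAX_HOPS = 5            # assumed hop-count limit

def flood(node, packet, links):
    """Forward the packet on every outgoing line except the one it arrived on."""
    key = (packet["src"], packet["seq"])
    if key in seen or packet["hops"] >= MAX_HOPS:
        return []                       # duplicate or too-travelled packet: discard
    seen.add(key)
    copy = dict(packet, hops=packet["hops"] + 1)
    return [(nbr, copy) for nbr in links[node] if nbr != packet["arrived_from"]]

links = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
pkt = {"src": "A", "seq": 1, "hops": 0, "arrived_from": None}
print(flood("A", pkt, links))           # forwarded to B and C; a re-arrival would be dropped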
3. Hybrid Algorithms

As the name suggests, these algorithms are a combination of both adaptive and non-adaptive
algorithms. In this approach, the network is divided into several regions, and each region uses
a different algorithm.
Further, these are classified as follows:
 Link-state: In this method, each router creates a detailed and complete map of the network
which is then shared with all other routers. This allows for more accurate and efficient
routing decisions to be made.
 Distance vector: In this method, each router maintains a table that contains information
about the distance and direction to every other node in the network. This table is then
shared with other routers in the network. The disadvantage of this method is that it may
lead to routing loops.

Differences between Bellman Ford’s and Dijkstra’s algorithms:

 Bellman Ford’s algorithm works when there is a negative weight edge and it also detects negative weight cycles. Dijkstra’s algorithm may or may not work when there is a negative weight edge, and it will definitely not work when there is a negative weight cycle.
 In Bellman Ford’s algorithm, the result at each vertex contains information only about the vertices it is connected to. In Dijkstra’s algorithm, the result contains whole information about the network, not only the vertices a node is connected to.
 Bellman Ford’s algorithm can easily be implemented in a distributed way. Dijkstra’s algorithm cannot be implemented easily in a distributed way.
 Bellman Ford’s algorithm is more time consuming than Dijkstra’s algorithm; its time complexity is O(VE). Dijkstra’s algorithm is less time consuming; its time complexity is O(E log V).
 Bellman Ford’s algorithm is implemented using a dynamic programming approach. Dijkstra’s algorithm is implemented using a greedy approach.
 Bellman Ford’s algorithm has more overheads than Dijkstra’s algorithm. Dijkstra’s algorithm has fewer overheads.
 Bellman Ford’s algorithm has less scalability than Dijkstra’s algorithm. Dijkstra’s algorithm has more scalability.
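To make the comparison concrete, here is a compact Python sketch of both algorithms over a small graph whose nodes and edge costs are invented for the example:

import heapq

def dijkstra(graph, src):
    """Greedy approach: always settle the closest unsettled vertex (O(E log V))."""
    dist = {v: float("inf") for v in graph}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                          # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def bellman_ford(graph, src):
    """Dynamic-programming approach: relax every edge |V| - 1 times (O(VE))."""
    dist = {v: float("inf") for v in graph}
    dist[src] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    # one extra pass: any further improvement reveals a negative weight cycle
    for u in graph:
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:
                raise ValueError("negative weight cycle detected")
    return dist

g = {"A": {"B": 4, "C": 1}, "B": {"D": 1}, "C": {"B": 2, "D": 5}, "D": {}}
print(dijkstra(g, "A"))       # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
print(bellman_ford(g, "A"))   # same distances, computed the slower but distributable way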

Transmission Control Protocol (TCP)

TCP (Transmission Control Protocol) is one of the main protocols of the Internet protocol suite. It lies between the application layer and the network layer and provides reliable delivery services. It is a connection-oriented communication protocol that helps in the exchange of messages between different devices over a network. TCP works together with the Internet Protocol (IP), which defines how data packets are sent between computers.

Features of TCP

 TCP keeps track of the segments being transmitted or received by assigning numbers to
every single one of them.
 Flow control limits the rate at which a sender transfers data. This is done to ensure reliable
delivery.
 TCP implements an error control mechanism for reliable data transfer.
 TCP takes into account the level of congestion in the network.
Advantages of TCP

 It is reliable for maintaining a connection between Sender and Receiver.


 It is responsible for sending data in a particular sequence.
 Its operations are not dependent on OS.
 It allows and supports many routing protocols.
 It can reduce the speed of data based on the speed of the receiver.
Disadvantages of TCP

 It is slower than UDP and it takes more bandwidth.


 It is slower when starting the transfer of a file.
 It is not well suited for LAN and PAN networks.
 It does not support multicast or broadcast.
 A page does not load completely if a single piece of its data is missing.
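As a concrete illustration of connection-oriented, reliable transfer, here is a self-contained Python sketch of a TCP echo exchange over the loopback interface (the address and port number are arbitrary assumptions):

import socket
import threading

HOST, PORT = "127.0.0.1", 50007
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # listening; a client may now connect
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)            # reliable, in-order delivery back to the client

threading.Thread(target=server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))             # the three-way handshake happens here
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024))                 # b'hello over TCP'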

User Datagram Protocol (UDP)

User Datagram Protocol (UDP) is a transport layer protocol. UDP is a part of the Internet protocol suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable and connectionless protocol, so there is no need to establish a connection before data transfer. UDP helps to establish low-latency and loss-tolerating connections over the network and enables process-to-process communication.
Features of UDP

 Used for simple request-response communication when the amount of data is small and hence there is less concern about flow and error control.
 It is a suitable protocol for multicasting as UDP supports packet switching.
 UDP is used for some routing update protocols like RIP(Routing Information Protocol) .
 Normally used for real-time applications which can not tolerate uneven delays between
sections of a received message.
Advantages of UDP

 It does not require any connection for sending or receiving data.


 Broadcast and Multicast are available in UDP.
 UDP can operate on a large range of networks.
 UDP is well suited for live and real-time data.
 UDP can deliver data even if some components of the data are incomplete.
Disadvantages of UDP

 There is no way to acknowledge the successful transfer of data.
 UDP has no mechanism to track the sequence of data.
 UDP is connectionless, and because of this it is unreliable for transferring data.
 In the case of congestion, routers drop UDP packets more readily than TCP packets.
 UDP can drop packets when errors are detected.
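For contrast with the TCP sketch above, here is a minimal connectionless exchange using UDP sockets on the loopback interface (the port number is an arbitrary assumption; delivery is not guaranteed in general, although it normally succeeds on loopback):

import socket

HOST, PORT = "127.0.0.1", 50008

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))
receiver.settimeout(2.0)                         # no connection: just wait for datagrams

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", (HOST, PORT))   # no handshake, no acknowledgment

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close()
receiver.close()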

Differences between TCP and UDP


The main differences between TCP (Transmission Control Protocol) and UDP (User Datagram
Protocol) are:

 Type of Service: TCP is a connection-oriented protocol, meaning the communicating devices must establish a connection before transmitting data and must close the connection after transmitting the data. UDP is a datagram-oriented protocol: there is no overhead for opening, maintaining, or terminating a connection, which makes UDP efficient for broadcast and multicast types of network transmission.
 Reliability: TCP is reliable, as it guarantees the delivery of data to the destination. In UDP, the delivery of data to the destination cannot be guaranteed.
 Error-checking mechanism: TCP provides extensive error-checking mechanisms because it provides flow control and acknowledgment of data. UDP has only a basic error-checking mechanism using checksums.
 Acknowledgment: In TCP, an acknowledgment segment is present. UDP has no acknowledgment segment.
 Sequence: Sequencing of data is a feature of TCP, which means packets arrive in order at the receiver. There is no sequencing of data in UDP; if ordering is required, it has to be managed by the application layer.
 Speed: TCP is comparatively slower than UDP. UDP is faster, simpler, and more efficient than TCP.
 Retransmission: Retransmission of lost packets is possible in TCP, but there is no retransmission of lost packets in UDP.
 Header length: TCP has a variable-length header of 20 to 60 bytes. UDP has a fixed-length 8-byte header.
 Weight: TCP is heavy-weight; UDP is lightweight.
 Handshaking techniques: TCP uses handshakes such as SYN, ACK, and SYN-ACK. UDP is a connectionless protocol, so there is no handshake.
 Broadcasting: TCP does not support broadcasting; UDP supports broadcasting.
 Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet. UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.
 Stream type: A TCP connection is a byte stream; a UDP connection is a message stream.
 Overhead: TCP overhead is low but higher than UDP’s; UDP overhead is very low.
 Applications: TCP is primarily used where safe and trustworthy communication is necessary, such as e-mail, web browsing, and military services. UDP is used where quick communication is necessary but dependability is not a concern, such as VoIP, game streaming, and video and music streaming.

Techniques that can be used to improve the quality of service include scheduling, traffic shaping, admission control, and resource reservation.
• Scheduling :
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service. Three of them are discussed here: FIFO queuing,
priority queuing, and weighted fair queuing.
1) FIFO Queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the
node (router or switch) is ready to process them. If the average arrival rate is higher than the
average processing rate, the queue fills up and new packets are discarded.
2) Priority Queuing: In priority queuing, packets are first assigned to a priority class. Each
priority class has its own queue. The packets in the highest-priority queue are processed first.
Packets in the lowest-priority queue are processed last. Note that the system does not stop
serving a queue until it is empty. A priority queue can provide better QoS than the FIFO queue
because higher priority traffic, such as multimedia, can reach the destination with less delay.
3) Weighted Fair Queuing: A better scheduling method is weighted fair queuing. In this
technique, the packets are still assigned to different classes and admitted to different queues.
The queues, however, are weighted based on the priority of the queues; higher priority means a
higher weight. The system processes packets in each queue in a round-robin fashion with the
number of packets selected from each queue based on the corresponding weight.
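A small sketch of weighted round-robin service across per-class queues; the class names, weights, and packet labels are assumptions for illustration:

from collections import deque

queues = {
    "high":   deque(["H1", "H2", "H3", "H4"]),
    "medium": deque(["M1", "M2"]),
    "low":    deque(["L1", "L2", "L3"]),
}
weights = {"high": 3, "medium": 2, "low": 1}   # higher priority -> higher weight

def wfq_round(queues, weights):
    """One round-robin pass: take up to `weight` packets from each class queue."""
    sent = []
    for cls, q in queues.items():
        for _ in range(weights[cls]):
            if q:
                sent.append(q.popleft())
    return sent

while any(queues.values()):
    print(wfq_round(queues, weights))
# ['H1', 'H2', 'H3', 'M1', 'M2', 'L1'], then ['H4', 'L2'], then ['L3']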
• Traffic Shaping : Traffic shaping is a mechanism to control the amount and the rate of the
traffic sent to the network. Two techniques can shape traffic: leaky bucket and token bucket.
1) Leaky Bucket: A technique called leaky bucket can smooth out bursty traffic. Bursty chunks
are stored in the bucket and sent out at an average rate.
A FIFO queue holds the packets. If the traffic consists of fixed-size packets, the process
removes a fixed number of packets from the queue at each tick of the clock. If the traffic
consists of variable-length packets, the fixed output rate must be based on the number of bytes
or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the counter by
the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
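A direct sketch of that variable-length algorithm in Python; the packet sizes and the per-tick budget n are made up for the example:

from collections import deque

def leaky_bucket_tick(queue, n):
    """Send packets from the FIFO queue until the next one no longer fits the budget."""
    budget = n                          # step 1: initialize the counter to n at the tick
    sent = []
    while queue and queue[0] <= budget: # step 2: send while the counter covers the packet
        size = queue.popleft()
        budget -= size
        sent.append(size)
    return sent                         # step 3: the counter is reset on the next tick

bucket = deque([200, 450, 120, 600, 50])   # packet sizes in bytes (illustrative)
for tick in range(3):
    print("tick", tick, "sent", leaky_bucket_tick(bucket, 700))
# tick 0 sent [200, 450]; tick 1 sent [120]; tick 2 sent [600, 50]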

2) Token Bucket: The token bucket algorithm allows idle hosts to accumulate credit for the
future in the form of tokens. For each tick of the clock, the system sends n tokens to the bucket.
The system removes one token for every cell (or byte) of data sent. For example, if n is 100 and
the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all
these tokens in one tick with 10,000 cells, or the host takes 1,000 ticks with 10 cells per tick. In
other words, the host can send bursty data as long as the bucket is not empty.
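A small class-based sketch of the token bucket, reproducing the numbers from the example above (the class name and tick interface are assumptions):

class TokenBucket:
    def __init__(self, rate_per_tick, capacity=float("inf")):
        self.rate = rate_per_tick       # n tokens added to the bucket per clock tick
        self.capacity = capacity
        self.tokens = 0

    def tick(self, cells_to_send=0):
        """Add this tick's tokens, then send as many queued cells as the tokens allow."""
        self.tokens = min(self.capacity, self.tokens + self.rate)
        sent = min(cells_to_send, self.tokens)
        self.tokens -= sent
        return sent

tb = TokenBucket(rate_per_tick=100)
for _ in range(100):                    # host idle for 100 ticks -> tokens accumulate
    tb.tick()
print(tb.tokens)                        # 10000 saved tokens
print(tb.tick(12000))                   # burst: 10100 cells go out (10000 saved + this tick's 100)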
• Resource Reservation :
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The quality
of service is improved if these resources are reserved beforehand.
• Admission Control :
Admission control refers to the mechanism used by a router, or a switch, to accept or reject a
flow based on predefined parameters called flow specifications. Before a router accepts a flow
for processing, it checks the flow specifications to see if its capacity (in terms of bandwidth,
buffer size, CPU speed, etc.) and its previous commitments to other flows can handle the new
flow.
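A toy sketch of that check, with invented capacity figures and flow-specification fields:

capacity  = {"bandwidth_kbps": 10_000, "buffer_bytes": 512_000}   # router resources (assumed)
committed = {"bandwidth_kbps": 7_500,  "buffer_bytes": 300_000}   # already promised to other flows

def admit(flow_spec):
    """Accept the new flow only if it fits within the uncommitted capacity."""
    return all(committed[k] + flow_spec[k] <= capacity[k] for k in capacity)

print(admit({"bandwidth_kbps": 2_000, "buffer_bytes": 100_000}))   # True: the flow fits
print(admit({"bandwidth_kbps": 4_000, "buffer_bytes": 100_000}))   # False: bandwidth would be exceeded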

Differentiated Services (DiffServ) is defined as a class of service (CoS) model that is used to describe and control IP network traffic by class. The main aim of differentiated services is to give priority to specific traffic that needs an uninterrupted flow of data. In differentiated services, the traffic is divided into multiple classes, and each class is treated and prioritized differently. This classification technique is useful when resources are limited. Differentiated services work at the network layer of the Open Systems Interconnection (OSI) model. An example of traffic handled by differentiated services is voice traffic.

What is the Application Layer?


The application layer is the topmost layer of the OSI and TCP/IP models. In the TCP/IP model, the application layer is created by combining the top three layers of the OSI model: the application layer, the presentation layer, and the session layer.

The application layer is an abstraction layer that specifies the shared communication protocols and interface methods used by hosts in a network. It is the layer closest to the end user, which means that both the application layer and the end user interact directly with the software application.

Application layer protocols:

1.Simple Mail Transfer Protocol(SMTP)

One of the most popular application layer protocols for network services is electronic mail (e-
mail). The TCP/IP protocol that supports electronic mail on the Internet is called Simple Mail
Transfer Protocol (SMTP).

SMTP transfers messages from the sender's mail server to the recipient's mail server using TCP connections. In SMTP, users are identified by their e-mail addresses. SMTP provides services for mail exchange between users on the same or different computers (a minimal client-side sketch follows the list below).

Following the client/server model:

 SMTP has two sides: a client-side, which executes on a sender’s mail server, and a
server-side, which executes on the recipient’s mail server.
 Both the client and server sides of SMTP run on every mail server.
 When a mail server sends mail (to other mail servers), it acts as an SMTP client.
 When a mail server receives mail (from other mail servers), it acts as an SMTP server.
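A minimal client-side sketch using Python's standard smtplib; the server name, port, addresses, and credentials below are placeholders, not real values, so this illustrates the API rather than a working account:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Test over SMTP"
msg.set_content("Hello from the application layer.")

# The SMTP client pushes the message to the mail server over a TCP connection.
with smtplib.SMTP("smtp.example.com", 587) as smtp:
    smtp.starttls()                              # upgrade to TLS before authenticating
    smtp.login("sender@example.com", "app-password")
    smtp.send_message(msg)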

2.Terminal Network(TELNET)
TELNET is an application layer protocol in which a client-server application allows a user to log
onto a remote machine and lets the user access any application program on a remote
computer. TELNET uses the NVT (Network Virtual Terminal) system to encode characters on
the local system.

On the server (remote) machine, NVT decodes the characters to a form acceptable to the
remote machine. TELNET is a protocol that provides a general, bi-directional, eight-bit byte-
oriented communications facility. Many application protocols are built upon the TELNET
protocol. Telnet services are used on PORT 23.

3.File Transfer Protocol(FTP)


FTP is the standard mechanism provided by TCP/IP for copying a file from one host to another.
FTP differs from other client-server applications because it establishes two connections between the hosts: a data connection and a control connection.

The data connection uses port 20, and the control connection uses port 21. FTP is built on a client-server architecture and uses separate control and data connections between the client and the server: one connection is used for data transfer and the other for control information (commands and responses). FTP transfers data reliably and efficiently.
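A hedged sketch of the client side using Python's standard ftplib; the host, credentials, and file name are placeholders:

from ftplib import FTP

with FTP("ftp.example.com") as ftp:              # control connection on port 21
    ftp.login("user", "password")
    ftp.cwd("/pub")
    with open("notes.txt", "wb") as fh:
        # RETR transfers the file over a separate data connection
        ftp.retrbinary("RETR notes.txt", fh.write)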

4.Multipurpose Internet Mail Extensions (MIME)


The Multipurpose Internet Mail Extensions (MIME) standard is an extension of SMTP that allows the transfer of multimedia messages. If binary data is included in a message, MIME headers are used to inform the receiving mail agent about it, as follows:

 Content-Transfer-Encoding: The header alerts the receiving user agent that the
message body has been ASCII encoded and the type of encoding used.
 Content-Type: The header informs the receiving mail agent about the type of data in the
message.
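The sketch below uses Python's standard email package to build a MIME message and print the headers each part carries; the attachment bytes are invented for illustration:

from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Report with attachment"
msg.set_content("The report is attached.")                  # text/plain part
msg.add_attachment(b"\x89PNG...not real image data...",     # binary payload -> base64 encoding
                   maintype="image", subtype="png",
                   filename="chart.png")

for part in msg.walk():                                     # inspect the generated MIME headers
    print(part.get_content_type(), part.get("Content-Transfer-Encoding"))
# multipart/mixed None, then text/plain 7bit, then image/png base64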

5.Post Office Protocol(POP)


POP (Post Office Protocol), also called POP3, is a protocol used by a mail server in conjunction with SMTP to receive and hold mail for hosts. A POP3 mail server receives e-mails and filters them into the appropriate user folders.

When a user connects to the mail server to retrieve his mail, the messages are downloaded
from the mail server to the user’s hard disk.
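A hedged sketch of mail retrieval with Python's standard poplib; the server name and credentials are placeholders:

import poplib

mailbox = poplib.POP3_SSL("pop.example.com")     # POP3 over SSL (port 995 by default)
mailbox.user("user@example.com")
mailbox.pass_("app-password")

count, total_bytes = mailbox.stat()              # messages currently held by the server
print(count, "messages,", total_bytes, "bytes")

if count:
    response, lines, octets = mailbox.retr(1)    # download message 1 to the local machine
    print(b"\n".join(lines)[:200])

mailbox.quit()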

6.Hypertext Transfer Protocol(HTTP)


Hypertext Transfer Protocol (HTTP) is used mainly to access World Wide Web (WWW) data. HTTP is the Web's main application-layer protocol, although current browsers can access other types of servers as well. The Web itself is a repository of information spread all over the world and linked together.

The HTTP protocol transfers data as plain text, hypertext, audio, video, and other formats. HTTP uses TCP connections to send client requests and server replies. It is a synchronous request-response protocol that works over both persistent and non-persistent connections.
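A small sketch of one request-response exchange using Python's standard http.client; example.org is used purely as a placeholder host:

import http.client

conn = http.client.HTTPConnection("example.org", 80)   # HTTP rides on a TCP connection
conn.request("GET", "/")                                # client request line and headers
resp = conn.getresponse()                               # server reply
print(resp.status, resp.reason)                         # e.g. 200 OK
print(resp.read(120))                                   # first bytes of the hypertext body
conn.close()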

7.Domain Name System(DNS)


To identify an entity, the TCP/IP protocols use an IP address, which uniquely identifies a host's connection to the Internet. The Domain Name System (DNS) is a hierarchical system based on a distributed database that uses a hierarchy of name servers to resolve Internet host names into the corresponding IP addresses required for packet routing; a host obtains a mapping by issuing a DNS query to a name server (a small lookup sketch follows the list of domain categories below).

DNS in the Internet: DNS is a protocol that can be used on different platforms.

Domain name space is divided into three categories.

 Generic Domain: The generic domain defines registered hosts according to their generic
behavior. Each node in the tree defines a domain which is an index to the domain name
space database
 Country Domain: The country domain section follows the same format as the generic domain but uses two-character country abbreviations (e.g., "us" for the United States) instead of three-character codes.
 Inverse Domain: The inverse domain maps an address to a name.
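A small lookup sketch using Python's standard socket module; the hostname is a placeholder, and the reverse lookup only succeeds if a PTR record exists:

import socket

# Forward lookup: the local resolver issues DNS queries to the configured name servers.
infos = socket.getaddrinfo("example.org", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)

# Inverse domain: map an address back to a name, if a PTR record exists.
try:
    name, aliases, addrs = socket.gethostbyaddr(addresses[0])
    print(name)
except socket.herror:
    print("no PTR record for", addresses[0])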
