CN Assignment 1
Veer Shah
19125052
Q.01) What do you mean by congestion control & QoS ?
A.01) Congestion control refers to the techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened. Quality of Service
(QoS) refers to the set of techniques a network uses to guarantee a measurable level of
performance, such as bounds on packet loss, delay, jitter, and bandwidth, for selected traffic
or applications.
Q.02) What are the parameters of QoS ? Explain.
A.02) There are five major Quality of Service (QoS) parameters. They are as follows:
• Packet Loss :- Packet loss happens when network links become congested and routers
and switches start dropping packets. When packets are dropped during real-time
communication, such as a voice or video call, the session can experience jitter and
gaps in speech.
• Jitter :- Jitter is the result of network congestion, timing drift and route changes. Too
much jitter can degrade the quality of voice and video communication.
• Latency :- Latency is the time it takes a packet to travel from its source to its
destination. Latency should be as close to zero as possible. If a voice over IP call has
a high amount of latency, it can experience echo and overlapping audio.
• Bandwidth :- Bandwidth is the capacity of a network communications link to transmit
the maximum amount of data from one point to another in a given amount of time.
QoS optimizes the network by managing bandwidth and setting priorities for
applications that require more resources than others.
• Mean Opinion Score :- Mean opinion score (MOS) is a metric to rate voice quality
that uses a five-point scale, with a five indicating the highest quality.
Q.03) Define:
• Socket
• Transit delay
A.03)
• Socket :- A Socket or a Network Socket is a software structure within a network node
of a computer network that serves as an endpoint for sending and receiving data across
the network. The structure and properties of a socket are defined by an application
programming interface (API) for the networking architecture. Sockets exist only during the
lifetime of the application process running in the node. Because of the standardization of
the TCP/IP protocols in the development of the Internet, the term network socket is most
commonly used in the context of the Internet protocol suite, and a socket is therefore often
also referred to as an Internet socket. In this context, a socket is externally identified to
other hosts by its socket address: the triad of transport protocol, IP address, and port
number.
• Transit Delay :- Network Delay or Transit Delay is a design and performance
characteristic of a telecommunications network. It specifies the latency for a bit of data
to travel across the network from one communication endpoint to another. It is typically
measured in multiples or fractions of a second. Delay may differ slightly, depending on
the location of the specific pair of communicating endpoints. Engineers usually report
both the maximum and average delay, and they divide the delay into several parts:
• Processing Delay – time it takes a router to process the packet header
• Queuing Delay – time the packet spends in routing queues
• Transmission Delay – time it takes to push the packet's bits onto the link
• Propagation Delay – time for a signal to reach its destination
A certain minimum level of delay is experienced by signals due to the time it takes to
transmit a packet serially through a link. This delay is extended by more variable levels
of delay due to network congestion. IP network delays can range from a few
milliseconds to several hundred milliseconds.
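These four components simply add up per hop. Below is a back-of-the-envelope sketch in Python; all link parameters are assumed example values, not measurements.

```python
# Back-of-the-envelope estimate of one-hop transit delay.
# All parameters below are assumed example values, not measurements.

PACKET_SIZE_BITS = 1500 * 8      # a 1500-byte Ethernet frame
LINK_RATE_BPS = 100e6            # 100 Mbps link
DISTANCE_M = 1000e3              # 1000 km of fiber
SIGNAL_SPEED_MPS = 2e8           # roughly 2/3 the speed of light in fiber
PROCESSING_S = 50e-6             # assumed router processing time
QUEUING_S = 2e-3                 # assumed average queuing time

transmission = PACKET_SIZE_BITS / LINK_RATE_BPS   # time to push the bits onto the link
propagation = DISTANCE_M / SIGNAL_SPEED_MPS       # time for the signal to travel
total = PROCESSING_S + QUEUING_S + transmission + propagation

print(f"transmission = {transmission * 1e3:.3f} ms")
print(f"propagation  = {propagation * 1e3:.3f} ms")
print(f"total delay  = {total * 1e3:.3f} ms")
```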
Q.04) List the Types of Socket ?
A.04) There are four types of sockets. They are as follows:
• Stream Sockets
• Raw Sockets
• Sequenced Packet Sockets
• Datagram Sockets
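These four types map directly onto socket-type constants in the Berkeley sockets API. A minimal Python sketch follows; note that creating a raw socket normally requires administrator privileges, and SOCK_SEQPACKET support is platform-dependent, so those two lines are left commented out.

```python
import socket

# The four socket types map to constants in the Berkeley sockets API.
stream_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # stream socket (TCP)
dgram_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # datagram socket (UDP)

# Raw sockets bypass the transport layer; creating one usually requires
# administrator/root privileges, so this line may raise PermissionError:
# raw_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)

# Sequenced-packet sockets (e.g. over SCTP) are platform-dependent:
# seq_sock = socket.socket(socket.AF_INET, socket.SOCK_SEQPACKET)

stream_sock.close()
dgram_sock.close()
```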
Q.05) Difference between IP address and port number.
A.05)
• An IP address identifies a host (strictly, a network interface) on a network, whereas a
port number identifies a specific process or application running on that host.
• The IP address is a network-layer address used by routers to deliver a packet to the
right machine; the port number is a transport-layer identifier used by TCP or UDP to
deliver the data to the right application.
• An IPv4 address is 32 bits long (128 bits for IPv6), while a port number is 16 bits
long, giving the range 0 to 65,535.
• Together, an IP address and a port number form a socket address, which uniquely
identifies one end of a communication.
Q.06) What is meant by TCP & UDP ?
A.06)
1. Definition :- TCP (Transmission Control Protocol) is a communications protocol using
which data is transmitted between systems over the network. The data is transmitted in
the form of packets, and TCP includes error checking, guarantees delivery, and preserves
the order of the data packets. UDP (User Datagram Protocol) is the same as TCP except
that it does not guarantee error checking and data recovery; if you use this protocol, the
data will be sent continuously, irrespective of issues at the receiving end.
2. Design :- TCP is a connection-oriented protocol, while UDP is a connectionless protocol.
3. Reliability :- TCP provides error-checking support and also guarantees delivery of data
to the destination, which makes it more reliable than UDP. UDP provides only basic error
checking using a checksum, so the delivery of data to the destination cannot be guaranteed.
4. Data transmission :- In TCP the data is transmitted in a particular sequence, which means
that packets arrive in order at the receiver. In UDP there is no sequencing of data; if
ordering is required, it has to be managed by the application layer.
5. Performance :- TCP is slower and less efficient in performance as compared to UDP, and
it is also heavy-weight compared to UDP. UDP is faster and more efficient than TCP.
6. Retransmission :- Retransmission of data packets is possible in TCP in case a packet gets
lost or needs to be resent. Retransmission of packets is not possible in UDP.
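The contrast is easy to see in code. Below is a minimal Python sketch; the host and port are made-up example values, and it assumes some server is already listening at that address.

```python
import socket

HOST, PORT = "127.0.0.1", 9000   # assumed example endpoint

# TCP: connection-oriented -- a connection is established first,
# then data flows as a reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((HOST, PORT))        # three-way handshake happens here
tcp.sendall(b"hello over TCP")   # delivery and ordering are guaranteed
tcp.close()

# UDP: connectionless -- no handshake, each datagram stands on its own
# and may be lost, duplicated, or reordered without notice.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", (HOST, PORT))
udp.close()
```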
Q.07) State the threshold condition in congestion.
A.07) Thresholds dictate the conditions under which congestion control is enabled and
establish the limits that define the state of the system (congested or clear). Congestion
policies then dictate how services respond when the system detects that a congestion
threshold has been crossed.
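As a purely hypothetical illustration, the Python sketch below models such a policy with two made-up thresholds; real systems differ in the metric they watch and the actions they take.

```python
# Hypothetical illustration of congestion thresholds: the onset and
# abatement values below are made-up, not from any real system.
ONSET_THRESHOLD = 0.80      # utilisation above this marks the system congested
ABATEMENT_THRESHOLD = 0.60  # utilisation must fall below this to clear

def next_state(current_state: str, utilisation: float) -> str:
    """Two thresholds with hysteresis avoid flapping between states."""
    if current_state == "clear" and utilisation > ONSET_THRESHOLD:
        return "congested"   # policy response: e.g. shed low-priority traffic
    if current_state == "congested" and utilisation < ABATEMENT_THRESHOLD:
        return "clear"       # policy response: resume normal service
    return current_state

state = "clear"
for u in [0.50, 0.85, 0.75, 0.55]:
    state = next_state(state, u)
    print(f"utilisation={u:.2f} -> {state}")
```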
Q.08) Explain in detail TCP provide flow control.
A.08)
• TCP Flow Control
TCP is the protocol that guarantees we can have a reliable communication channel over an
unreliable network. When we send data from one node to another, packets can be lost, they
can arrive out of order, the network can be congested, or the receiver node can be
overloaded. When we are writing an application, though, we usually don't need to deal with
this complexity; we just write some data to a socket and TCP makes sure the packets are
delivered correctly to the receiver node. Another important service that TCP provides is
what is called Flow Control. Let's talk about what that means and how TCP does its magic.
• What is Flow Control (and what it’s not)
Flow Control basically means that TCP will ensure that a sender is not overwhelming a
receiver by sending packets faster than the receiver can consume them. It's pretty similar
to what's normally called backpressure in the distributed-systems literature. The idea is
that a node receiving data will send some kind of feedback to the node sending the data to
let it know about its current condition.
It’s important to understand that this is not the same as Congestion Control. Although there’s
some overlap between the mechanisms TCP uses to provide both services, they are distinct
features. Congestion control is about preventing a node from overwhelming the network (i.e.
the links between two nodes), while Flow Control is about the end-node.
• How it works
When we need to send data over a network, this is normally what happens.
The sender application writes data to a socket, the transport layer (in our case, TCP) will wrap
this data in a segment and hand it to the network layer (e.g. IP), that will somehow route this
packet to the receiving node.
On the other side of this communication, the network layer will deliver this piece of data
to TCP, which will make it available to the receiver application as an exact copy of the
data sent, meaning it will not deliver packets out of order, and will wait for a
retransmission in case it notices a gap in the byte stream.
TCP stores the data it needs to send in the send buffer, and the data it receives in the receive
buffer. When the application is ready, it will then read data from the receive buffer.
Flow Control is all about making sure we don’t send more packets when the receive buffer is
already full, as the receiver wouldn’t be able to handle them and would need to drop these
packets.
To control the amount of data that TCP can send, the receiver will advertise its Receive
Window (rwnd), that is, the spare room in the receive buffer.
Every time TCP receives a packet, it needs to send an ack message to the sender,
acknowledging it received that packet correctly, and with this ack message it sends the value
of the current receive window, so the sender knows if it can keep sending data.
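A toy model of this accounting is sketched below. It is not real TCP code; the buffer and segment sizes are assumed example values, and ACKs are simulated as arriving instantly.

```python
# Toy model of TCP flow control: the sender never keeps more
# unacknowledged data in flight than the advertised window (rwnd).
# Buffer and segment sizes are assumed example values.

RECV_BUFFER = 4_000          # receiver buffer capacity in bytes
SEGMENT = 1_000              # bytes per segment

buffered = 0                 # bytes sitting unread in the receive buffer
in_flight = 0                # bytes sent but not yet acknowledged

def rwnd() -> int:
    """Spare room in the receive buffer, advertised with every ACK."""
    return RECV_BUFFER - buffered

for tick in range(12):
    # Sender side: transmit only if the advertised window allows it.
    if in_flight + SEGMENT <= rwnd():
        in_flight += SEGMENT
        print(f"tick {tick}: sent {SEGMENT} bytes, advertised rwnd={rwnd()}")
    else:
        print(f"tick {tick}: window full (rwnd={rwnd()}), sender must wait")
    # Receiver side: the ACK moves the data into the receive buffer;
    # the application reads from the buffer slowly (every other tick).
    buffered += in_flight
    in_flight = 0
    if tick % 2 == 1:
        buffered = max(0, buffered - SEGMENT)
```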
Q.09) Define silly window syndrome & possible solutions to overcome its effects.
A.09)
Silly Window Syndrome is a problem in computer networking caused by poorly implemented
TCP flow control. A serious problem can arise in the sliding window operation when the
sending application program creates data slowly, the receiving application program consumes
data slowly, or both. If a server with this problem is unable to process all incoming data, it
requests that its clients reduce the amount of data they send at a time (the window setting on a
TCP packet). If the server continues to be unable to process all incoming data, the window
becomes smaller and smaller, sometimes to the point that the data transmitted is smaller than
the packet header, making data transmission extremely inefficient. The name of this problem
is due to the window size shrinking to a "silly" value.
• SOLUTION :- The silly window syndrome arises when there is no synchronization between
the sender and receiver regarding the capacity of the flow of data or the size of the
packets. When the syndrome is created by the sender, Nagle's algorithm is used. Nagle's
solution requires that the sender transmit the first segment even if it is a small one,
and then wait until an ACK is received or a maximum segment size (MSS) worth of data has
accumulated before sending further small segments. When the syndrome is created by the
receiver, David D. Clark's solution is used. Clark's solution closes the window until
another segment of maximum segment size (MSS) can be received or the buffer is half empty.
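A simplified sketch of Nagle's decision rule follows; the MSS value is an assumed example, and real TCP implementations are considerably more involved.

```python
# Simplified sketch of Nagle's algorithm (sender side only).
# MSS is an assumed example value; real TCP learns it during the handshake.
MSS = 1460  # bytes

class NagleSender:
    def __init__(self):
        self.pending = b""      # small data waiting to be coalesced
        self.unacked = False    # is any sent data still unacknowledged?

    def write(self, data: bytes) -> None:
        self.pending += data
        # Full segments are always sent immediately.
        while len(self.pending) >= MSS:
            self._send(self.pending[:MSS])
            self.pending = self.pending[MSS:]
        # A small segment may go out only if nothing is in flight
        # (the "first small segment" is always allowed).
        if self.pending and not self.unacked:
            self._send(self.pending)
            self.pending = b""

    def on_ack(self) -> None:
        self.unacked = False
        if self.pending:        # ACK arrived: flush the coalesced small data
            self._send(self.pending)
            self.pending = b""

    def _send(self, segment: bytes) -> None:
        self.unacked = True
        print(f"sending {len(segment)} bytes")
```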
Q.10) What are the transport service primitives ?
A.10) There are five types of service primitives:
• LISTEN :- When a server is ready to accept an incoming connection, it executes the
LISTEN primitive. It blocks, waiting for an incoming connection.
• CONNECT :- The client executes the CONNECT primitive to establish a connection with
the server, then awaits the response.
• RECEIVE :- The RECEIVE primitive blocks until a message arrives; for example, the
server executes RECEIVE to wait for the client's request.
• SEND :- The SEND primitive transmits a message; for example, the client executes SEND
to transmit its request, followed by RECEIVE to get the reply.
• DISCONNECT :- This primitive is used for terminating the connection; after it, no more
messages can be sent. When the client sends a DISCONNECT packet, the server sends a
DISCONNECT packet back to acknowledge the client. When this acknowledgement is received
by the client, the connection is released.
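These primitives correspond closely to the Berkeley sockets API. A minimal Python sketch is shown below; the port number is an assumed example value, and the client side is shown in comments because it would run as a separate process.

```python
import socket

PORT = 6000  # assumed example port

# Server side: LISTEN, then RECEIVE/SEND, then DISCONNECT.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", PORT))
server.listen()                       # LISTEN: wait for incoming connections
conn, addr = server.accept()          # connection established
request = conn.recv(1024)             # RECEIVE: block until the request arrives
conn.sendall(b"reply to " + request)  # SEND: transmit the reply
conn.close()                          # DISCONNECT
server.close()

# Client side (run in another process): CONNECT, SEND, RECEIVE, DISCONNECT.
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(("127.0.0.1", PORT))   # CONNECT
# client.sendall(b"request")            # SEND
# reply = client.recv(1024)             # RECEIVE
# client.close()                        # DISCONNECT
```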
Q.11) Explain about connection oriented concurrent server
A.11)
• What is Connection Oriented Service?
Connection oriented service is a data transfer method between two computers on different
networks, modeled after the telephone system. In a telephone system, if we want to talk to
someone, we pick up the phone, dial the number of the person we want to talk with; after
the connection is established, we talk and, finally, we hang up.
Similarly, a connection-oriented service first establishes a virtual connection between the
source and the destination, then transfers all data packets belonging to the same message
through the same dedicated established connection, and after all packets of a message are
transferred, it releases the connection.
To establish a connection, the source sends a request packet to the destination. In
response, the destination sends an acknowledgement packet back to the source, confirming
that it is ready to accept data from the source.
Meanwhile, the routers involved in the exchange of the request and acknowledgement packets
between source and destination define the virtual path that will be followed by all packets
belonging to the same message. So we say that the resources involved in the data transfer
are reserved before the packets of a message are transferred.
A concurrent server takes this one step further: instead of serving clients one at a time
(an iterative server), it accepts a connection and immediately hands the connected client
off to a new process or thread, so that many established connections can be served at the
same time while the main loop goes back to accepting new ones, as in the sketch below.
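A minimal thread-per-connection echo server in Python follows; the port is an assumed example value, and a production server would add error handling and a shutdown path.

```python
import socket
import threading

PORT = 7000  # assumed example port

def handle_client(conn: socket.socket, addr) -> None:
    """Each client gets its own thread, so slow clients don't block others."""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break            # client disconnected
            conn.sendall(data)   # echo the data back over the same connection

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", PORT))
server.listen()
while True:
    conn, addr = server.accept()   # establish the connection...
    threading.Thread(target=handle_client, args=(conn, addr),
                     daemon=True).start()  # ...and serve it concurrently
```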
Q.12) Short Technical Note on:
• Flow Control
• Buffering
A.12)
1. Flow Control :-
In a network, the sender sends the data and the receiver receives the data. But suppose a
situation where the sender is sending data at a speed higher than the receiver is able to
receive and process it; then the data will get lost. Flow-control methods help prevent
this: they keep a check that the sender sends data only at a speed that the receiver is
able to receive and process.
• Flow Control
Flow control tells the sender how much data should be sent to the receiver so that it is not
lost. This mechanism makes the sender wait for an acknowledgment before sending the
next data. There are two ways to control the flow of data:
• Stop and Wait Protocol
• Sliding Window Protocol
Stop and Wait Protocol
It is the simplest flow control method. In this, the sender sends one frame at a time to
the receiver, then stops and waits for the acknowledgment from the receiver. When the
sender gets the acknowledgment, it sends the next data packet to the receiver and waits
for the acknowledgment again, and this process continues, as simulated in the sketch below.
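Below is a toy Python simulation of this behaviour; the loss probability is an assumed example value, and a lost frame and a lost ACK are treated the same way (both trigger a timeout and retransmission).

```python
import random

random.seed(1)           # deterministic run for illustration
LOSS_PROBABILITY = 0.3   # assumed chance that a frame or its ACK is lost
TOTAL_FRAMES = 4

frame = 0
while frame < TOTAL_FRAMES:
    print(f"sender: transmitting frame {frame}")
    if random.random() < LOSS_PROBABILITY:
        print("        frame (or its ACK) lost -- timeout, retransmit")
        continue                      # stop-and-wait: resend the same frame
    print(f"receiver: ACK {frame}")
    frame += 1                        # only now move on to the next frame
```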
Sliding Window Protocol
As we saw, the disadvantage of the stop and wait protocol is that the sender waits for
the acknowledgment and during that time the sender is idle. The sliding window protocol
utilizes this time, turning the waiting time into transmission time.
A window is a buffer where we store the frames. Each frame in a window is numbered. If
the window size is n then the frames are numbered from the number 0 to n-1. A sender can
send n frames at a time. When the receiver sends the acknowledgment of the frame then we
need not store that frame in our window as it has already been received by the receiver. So,
the window in the sender side slides to the next frame and this window will now contain a
new frame along with all the previous unacknowledged frames of the window. At any
instant of time, the window contains only the unacknowledged frames.
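The sender-side bookkeeping can be sketched as below; the window size is an assumed example value, and ACKs are simulated as arriving in order.

```python
from collections import deque

WINDOW_SIZE = 4          # assumed window of n = 4 frames
TOTAL_FRAMES = 8

window = deque()         # unacknowledged frames currently in the window
next_frame = 0

def try_send() -> None:
    """Fill the window: up to WINDOW_SIZE frames may be in flight at once."""
    global next_frame
    while len(window) < WINDOW_SIZE and next_frame < TOTAL_FRAMES:
        window.append(next_frame)
        print(f"sent frame {next_frame}, window = {list(window)}")
        next_frame += 1

try_send()
# ACKs arrive in order; each one slides the window forward by one frame.
for _ in range(TOTAL_FRAMES):
    acked = window.popleft()
    print(f"ACK {acked} received, window slides")
    try_send()
```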
2. Buffering :-
A buffer contains data that is stored for a short amount of time, typically in the
computer's memory (RAM). The purpose of a buffer is to hold data right before it is
used. For example, when you download an audio or video file from the Internet, it may
load the first 20% of it into a buffer and then begin to play. While the clip plays back,
the computer continually downloads the rest of the clip and stores it in the buffer.
Because the clip is being played from the buffer, not directly from the Internet, there is
less of a chance that the audio or video will stall or skip when there is network
congestion.
Buffering is used to improve several other areas of computer performance as well. Most
hard disks use a buffer to enable more efficient access to the data on the disk. Video
cards send images to a buffer before they are displayed on the screen (known as a screen
buffer). Computer programs use buffers to store data while they are running. If it were
not for buffers, computers would run a lot less efficiently and we would be waiting
around a lot more.
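The streaming example above can be modelled with a few lines of Python; all sizes and rates are assumed example values.

```python
# Toy model of the streaming example above: pre-fill a playback buffer,
# then play from the buffer while the download continues.
# All sizes and rates below are assumed example values.

CLIP_SIZE = 100          # total units in the clip
PREFILL = 20             # start playback once 20% has been buffered
DOWNLOAD_RATE = 3        # units fetched per tick (varies in real networks)
PLAY_RATE = 2            # units consumed per tick during playback

downloaded = buffered = played = 0
while played < CLIP_SIZE:
    fetched = min(DOWNLOAD_RATE, CLIP_SIZE - downloaded)
    downloaded += fetched
    buffered += fetched
    if downloaded >= PREFILL:              # playback starts after the pre-fill
        consumed = min(PLAY_RATE, buffered)
        if consumed == 0:
            print("buffer empty -- playback stalls")
        buffered -= consumed
        played += consumed
print(f"clip finished: downloaded={downloaded}, played={played}")
```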
Q.13) Explain about Multiplexing & De-Multiplexing.
A.13)
• What Is multiplexing?
Multiplexing (Muxing) is a term used in the field of communications and computer networking.
It generally refers to the process and technique of transmitting multiple analog or digital input
signals or data streams over a single channel. Since multiplexing can integrate multiple
low-speed channels into one high-speed channel for transmission, the high-speed channel is
utilized effectively. By using multiplexing, communication carriers can avoid maintaining
multiple lines, which effectively saves operating costs.
Multiplexer (Mux) is a device which performs the multiplexing process. It is a hardware
component that combines multiple analog or digital input signals into a single line of
transmission.
• What is De-Multiplexing
Demultiplexing (Demuxing) is a term relative to multiplexing: it is the reverse of the
multiplexing process. Demultiplexing is the process of reconverting a signal containing
multiple analog or digital signal streams back into the original separate and unrelated
signals.
Although demultiplexing is the reverse of the multiplexing process, it is not the opposite
of multiplexing. The opposite of multiplexing is inverse multiplexing, which breaks one
data stream into several related data streams. Thus, the difference between demultiplexing
and inverse multiplexing is that the output streams of demultiplexing are unrelated, while
the output streams of inverse multiplexing are related.
Demultiplexer (Demux) is a device that performs the reverse process of multiplexer.
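At the transport layer, demultiplexing is exactly what the operating system does with port numbers: each arriving segment or datagram is handed to the socket bound to its destination port. A minimal Python sketch follows; the loopback address and port numbers are assumed example values.

```python
import socket
import select

# Demultiplexing at the transport layer: the OS delivers each arriving
# datagram to the socket bound to the datagram's destination port.
# Ports 5001 and 5002 are assumed example values.
sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_a.bind(("127.0.0.1", 5001))
sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_b.bind(("127.0.0.1", 5002))

# One sender multiplexes two logical streams towards the same host...
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for app A", ("127.0.0.1", 5001))
sender.sendto(b"for app B", ("127.0.0.1", 5002))

# ...and the destination ports demultiplex them back to separate sockets.
ready, _, _ = select.select([sock_a, sock_b], [], [], 1.0)
for s in ready:
    data, addr = s.recvfrom(1024)
    print(f"port {s.getsockname()[1]} received: {data!r}")
```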
Q.14) Explain the structure of UDP header.
A.14) UDP wraps datagrams with a UDP header, which contains four fields totaling eight
bytes.
The fields in a UDP header are:
• Source port :- The port of the device sending the data. This field can be set to zero if
the destination computer doesn't need to reply to the sender.
• Destination port :- The port of the device receiving the data. UDP port numbers can be
between 0 and 65,535.
• Length :- Specifies the number of bytes comprising the UDP header and the UDP payload
data. The limit for the length field is determined by the underlying IP protocol used to
transmit the data.
• Checksum :- The checksum allows the receiving device to verify the integrity of the
packet header and payload. It is optional in IPv4 but was made mandatory in IPv6.
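The eight-byte layout is easy to see by packing a header by hand. In the Python sketch below the port numbers and payload are assumed example values, and the checksum is left as zero, which in IPv4 means "no checksum computed".

```python
import struct

# Pack the four UDP header fields (2 bytes each, network byte order).
# Port numbers and payload below are assumed example values.
src_port = 12345
dst_port = 53
payload = b"example"
length = 8 + len(payload)       # header (8 bytes) + payload
checksum = 0                    # 0 means "no checksum" (allowed in IPv4 only)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(header.hex())             # 8 bytes: 3039 0035 000f 0000
```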
Q.15) Explain about the duties of presentation layer.
A.15)
• What is presentation layer?
The presentation layer is located at the sixth level of the OSI model. It is responsible
for the delivery and formatting of information to the application layer for further
processing or display. This type of service is needed because different computer
architectures use different data representations. In contrast to the transparent data
transport provided at the fifth (session) level, the presentation layer handles all issues
related to data presentation and transport, including translation, encryption, and
compression.
Presentation Layer Functions
The actual functions of the presentation layer include the following aspects:
• General functions :- network security and confidentiality management, text compression
and packaging, and the virtual terminal protocol (VTP).
• Syntax Conversion :- The abstract syntax is converted to the transfer syntax on the
sending side, and the opposite conversion (transfer syntax back to abstract syntax) is
performed on the receiving side. This involves code conversion, character conversion,
modification of data formats, adaptation of data-structure operations, data compression,
encryption, and so on.
• Syntax Negotiation :- According to the requirements of the application layer, the two
sides negotiate an appropriate context, that is, they determine the transfer syntax to be
used for transmission.
• Connection Management :- This includes using the session-layer service to establish a
connection, managing data transport and synchronization control over this connection
(using the corresponding session-layer services), and terminating the connection either
normally or abnormally.
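A small sketch of these duties on the sending side, using only the Python standard library: the dictionary plays the role of the abstract (local) syntax, JSON the transfer syntax, and zlib the compression step. Real systems would also encrypt here (e.g. via TLS); that step is deliberately omitted to keep the sketch self-contained.

```python
import json
import zlib

# Presentation-layer duties on the sending side: syntax conversion
# (Python dict -> JSON transfer syntax) followed by compression.
record = {"user": "veer", "id": 19125052}          # abstract (local) syntax

encoded = json.dumps(record).encode("utf-8")       # syntax conversion
compressed = zlib.compress(encoded)                # compression
# ... transmitted via the session/transport layers ...
decompressed = zlib.decompress(compressed)         # the receiver reverses it
decoded = json.loads(decompressed.decode("utf-8")) # back to abstract syntax
assert decoded == record
```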