Lab Viva 2024

1. What is Transmission Control Protocol (TCP)?

Transmission Control Protocol (TCP) is a standard that defines how to establish
and maintain a network conversation by which applications can exchange data.
TCP works with the Internet Protocol (IP), which defines how computers send
packets of data to each other. Together, TCP and IP are the basic rules that
define the internet.
How Transmission Control Protocol works
TCP is a connection-oriented protocol, which means a connection is
established and maintained until the applications at each end have finished
exchanging messages.
TCP performs the following actions:
 determines how to break application data into packets that networks can
deliver;
 sends packets to, and accepts packets from, the network layer;
 manages flow control;
 handles retransmission of dropped or garbled packets, as it is meant to
provide error-free data transmission; and
 acknowledges all packets that arrive.

2. Differentiate TCP and UDP


This process of error detection, in which TCP retransmits and reorders packets
after they arrive, can introduce latency in a TCP stream. Highly time-sensitive
applications, such as voice over IP (VoIP), streaming video and gaming,
generally rely on a transport process such as the User Datagram Protocol (UDP),
because it reduces latency and jitter by not reordering packets or
retransmitting missing data.

UDP is classified as a datagram protocol, or connectionless protocol, because
it has no way of detecting whether both applications have finished their
back-and-forth communication. Instead of correcting invalid data packets, as
TCP does, UDP discards those packets and defers to the application layer for
more detailed error detection.
The header of a UDP datagram contains far less information than a TCP segment
header. The UDP header also goes through much less processing at the transport
layer in the interest of reduced latency.

TCP is the preferred protocol where reliability of data weighs more than
transmission speed. UDP is connectionless and its header is lightweight, which
is why it is fast, but not as reliable as TCP.
3. What is a Socket?
A socket is an endpoint of communication: the point where a connection starts
or ends. For any communication to exist there must be a socket at each end of
the connection, one on the sending device or server and another on the
receiving device or client. A socket is made up of an IP address and a port
number.
A socket is one endpoint of a two-way communication link between two programs
running on the network. A socket is bound to a port number so that the TCP
layer can identify the application that data is destined to be sent to.
Socket types and associated data
SOCK_STREAM - Transmission Control Protocol (TCP)- The stream socket
(SOCK_STREAM) interface defines a reliable connection-oriented service. Data
is sent without errors or duplication and is received in the same order as it is
sent.
SOCK_DGRAM- User Datagram Protocol (UDP)- The datagram socket
(SOCK_DGRAM) interface defines a connectionless service for datagrams, or
messages. Datagrams are sent as independent packets. The reliability is not
guaranteed, data can be lost or duplicated, and datagrams can arrive out of
order. However, datagram sockets have improved performance capability over
stream sockets and are easier to use.
SOCK_RAW - IP, ICMP, RAW- The raw socket (SOCK_RAW) interface
allows direct access to lower-layer protocols such as Internet Protocol (IP).
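As a quick illustration, the first two socket types can be created with the standard socket() call. This is a minimal sketch; the helper names are ours, not part of any API, and SOCK_RAW is omitted because it normally requires superuser privileges.

```c
#include <sys/socket.h>

/* Create an IPv4 stream (TCP) socket; returns a descriptor or -1 on error. */
int open_stream_socket(void) {
    return socket(AF_INET, SOCK_STREAM, 0);
}

/* Create an IPv4 datagram (UDP) socket; returns a descriptor or -1 on error. */
int open_dgram_socket(void) {
    return socket(AF_INET, SOCK_DGRAM, 0);
}
```

The third argument selects a protocol within the family; 0 picks the default (TCP for SOCK_STREAM, UDP for SOCK_DGRAM).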

The client-server model

The client-server model is one of the most used communication paradigms in
networked systems. A client normally communicates with one server at a time,
while from a server's perspective it is not unusual, at any point in time, to
be communicating with multiple clients. The client needs to know of the
existence of and the address of the server, but the server does not need to
know the address of (or even the existence of) the client prior to the
connection being established.

Clients and servers communicate by means of multiple layers of network
protocols.

What is TCP/IP and what does it stand for?


TCP/IP is a suite of communication protocols used on the internet to let
computers and other devices send and receive data. TCP/IP stands for
Transmission Control Protocol/Internet Protocol and makes it possible for
devices connected to the internet to communicate with one another across
networks.
Originally developed in the 1970s by DARPA (the Defense Advanced Research
Projects Agency in the US), TCP/IP started out as just one of many internet
protocols. The TCP/IP model later became the standard protocol for ARPAnet,
the modern internet’s predecessor. Today, TCP/IP is the global standard for
internet communications.
How does the TCP/IP model work?
Whenever you send something over the internet — a message, a photo, a file
— the TCP/IP model divides that data into packets according to a four-layer
procedure. The data first goes through these layers in one order, and then in
reverse order as the data is reassembled on the receiving end.

The TCP/IP model covers many internet protocols, which define how data is
addressed and sent over the internet. Common internet protocols include
HTTP, FTP, and SMTP, and all three are often used in conjunction with the
TCP/IP model.
 HTTP (Hypertext Transfer Protocol) governs the workings of web
browsers and websites.
 FTP (File Transfer Protocol) defines how files are sent over a network.
 SMTP (Simple Mail Transfer Protocol) is used to send and receive email.
What is the difference between TCP and IP?
TCP and IP are separate computer network protocols. The difference between
TCP (Transmission Control Protocol) and IP (Internet Protocol) is their role in
the data transmission process. IP obtains the address where data is sent (your
computer has an IP address). TCP ensures accurate data delivery once that IP
address has been found. Together, the two form the TCP/IP protocol suite.
In other words, IP sorts the mail, and TCP sends and receives the mail. While
the two protocols are usually considered together, other protocols, such as
UDP (User Datagram Protocol), can send data within the IP system without the
use of TCP. But TCP requires an IP address to send data. That’s another
difference between IP and TCP.
What are the layers of the TCP/IP model?
There are four layers of the TCP/IP model: network access, internet, transport,
and application. Used together, these layers are a suite of protocols. The
TCP/IP model passes data through these layers in a particular order when a
user sends information, and then again in reverse order when the data is
received.
The open systems interconnection model:
The OSI model is important in IoT model operation. It consists of seven
layers [18], namely:
1.Physical layer
2.Data link layer
3.Network layer
4.Transport layer
5.Session layer
6.Presentation layer
7.Application layer

Transmission Control Protocol (TCP)

TCP provides a connection-oriented service, since it is based on connections
between clients and servers.

TCP provides reliability. When a TCP client sends data to the server, it
requires an acknowledgement in return. If an acknowledgement is not received,
TCP automatically retransmits the data and waits for a longer period of time.

TCP is a byte-stream protocol, without any boundaries at all.

The commands, from client to server, and replies, from server to client, are
sent across the control TCP connection in 7-bit ASCII format.

Socket addresses

The IPv4 socket address structure is named sockaddr_in and is defined by
including the <netinet/in.h> header.

The POSIX definition is the following:

struct in_addr{
in_addr_t s_addr; /*32 bit IPv4 network byte ordered address*/
};

struct sockaddr_in {
uint8_t sin_len; /* length of structure (16)*/
sa_family_t sin_family; /* AF_INET*/
in_port_t sin_port; /* 16 bit TCP or UDP port number */
struct in_addr sin_addr; /* 32 bit IPv4 address*/
char sin_zero[8]; /* not used but always set to zero */
};

Host Byte Order to Network Byte Order Conversion

There are two ways to store two bytes in memory: with the low-order byte
at the starting address (little-endian byte order) or with the high-order byte
at the starting address (big-endian byte order). We call these collectively
host byte order. For example, an Intel processor stores a 32-bit integer as
four consecutive bytes in memory in the order 1-2-3-4, where 1 is the least
significant byte. IBM PowerPC processors would store the integer in the byte
order 4-3-2-1.

Networking protocols such as TCP are based on a specific network byte order.
The Internet protocols use big-endian byte ordering.
The htons(), htonl(), ntohs(), and ntohl() Functions
The following functions are used for the conversion:

#include <netinet/in.h>
uint16_t htons(uint16_t host16bitvalue);
uint32_t htonl(uint32_t host32bitvalue);
uint16_t ntohs(uint16_t net16bitvalue);
uint32_t ntohl(uint32_t net32bitvalue);
The first two return the value in network byte order (16 and 32 bit,
respectively). The last two return the value in host byte order (16 and 32
bit, respectively).
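A short sketch of what these conversions guarantee (the helper name here is illustrative, not standard): after htons(), the high-order byte always sits at the lowest memory address, regardless of host byte order.

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Returns 1 if, after htons(), the byte at the starting address is the
 * high-order byte of the value, i.e. network byte order is big-endian. */
int high_byte_first_after_htons(uint16_t v) {
    uint16_t net = htons(v);
    unsigned char first;
    memcpy(&first, &net, 1);                 /* byte at the starting address */
    return first == (unsigned char)(v >> 8); /* compare with high-order byte */
}
```

Round-tripping a value through htons()/ntohs() or htonl()/ntohl() always gives back the original value.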
The typical sequence of function calls for a client and a server participating
in a TCP connection is:
Server: socket() -> bind() -> listen() -> accept() -> recv()/send() -> close()
Client: socket() -> connect() -> send()/recv() -> close()
The bind() Function

The bind() function assigns a local protocol address to a socket. With the
Internet protocols, the address is the combination of an IPv4 or IPv6 address
(32-bit or 128-bit) along with a 16-bit TCP or UDP port number.

The function is defined as follows:

#include <sys/socket.h>
int bind(int sockfd, const struct sockaddr *servaddr, socklen_t addrlen);

where sockfd is the socket descriptor, servaddr is a pointer to a
protocol-specific address and addrlen is the size of the address structure.

bind() returns 0 if it succeeds, -1 on error.
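A minimal sketch of filling in a sockaddr_in and calling bind(). Binding to port 0 is an assumption made here so the example does not clash with ports already in use; it asks the kernel to pick any free port.

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a new TCP socket to the IPv4 loopback address.
 * Returns the bound socket descriptor, or -1 on error. */
int bind_loopback(in_port_t port) {
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0)
        return -1;

    struct sockaddr_in servaddr;
    memset(&servaddr, 0, sizeof(servaddr));      /* zeroes sin_zero too */
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons(port);             /* host -> network order */
    servaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (bind(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr)) < 0) {
        close(sockfd);
        return -1;
    }
    return sockfd;
}
```

Note that both the port and the address go through the byte-order conversion functions before being stored in the structure.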

The listen() Function

The listen() function converts an unconnected socket into a passive socket,
indicating that the kernel should accept incoming connection requests
directed to this socket. It is defined as follows:

#include <sys/socket.h>
int listen(int sockfd, int backlog);

where sockfd is the socket descriptor and backlog is the maximum number of
connections the kernel should queue for this socket. The backlog argument
provides a hint to the system of the number of outstanding connect requests
that it should enqueue on behalf of the process. Once the queue is full, the
system will reject additional connection requests. The backlog value must be
chosen based on the expected load of the server.

The listen() function returns 0 if it succeeds, -1 on error.

The accept() Function

The accept() function is used to retrieve a connect request and convert it
into a connection. It is defined as follows:

#include <sys/socket.h>
int accept(int sockfd, struct sockaddr *cliaddr,
socklen_t *addrlen);
where sockfd is the listening socket descriptor. accept() returns a new
descriptor that is connected to the client that called connect(). The cliaddr
and addrlen arguments are used to return the protocol address of the client.
The new socket descriptor has the same socket type and address family as the
original socket. The original socket passed to accept() is not associated
with the connection, but instead remains available to receive additional
connect requests. The kernel creates one connected socket for each client
connection that is accepted.

If we don't care about the client's identity, we can set cliaddr and addrlen
to NULL. Otherwise, before calling accept(), the cliaddr parameter has to
point to a buffer large enough to hold the address, and the integer pointed
to by addrlen has to be set to the size of that buffer.
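Putting bind() and listen() together, a minimal passive open looks like the sketch below. accept() would then block until a client connects, so we stop short of calling it here to keep the example runnable without a peer.

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Passive open: socket() -> bind() -> listen().  The returned descriptor
 * is ready to be passed to accept(), which blocks until a client calls
 * connect().  Returns -1 on error. */
int make_listener(int backlog) {
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(0);                    /* let the kernel pick a port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(sockfd, backlog) < 0) {
        close(sockfd);
        return -1;
    }
    return sockfd;
}
```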

The send() Function

Since a socket endpoint is represented as a file descriptor, we can use
read() and write() to communicate with a socket as long as it is connected.
However, if we want to specify options, we need another set of functions.

For example, send() is similar to write() but allows us to specify some
options. send() is defined as follows:

#include <sys/socket.h>
ssize_t send(int sockfd, const void *buf, size_t nbytes, int flags);

where buf and nbytes have the same meaning as they have with write(). The
additional argument flags is used to specify how we want the data to be
transmitted. We will not consider the possible options in this course; we
will assume it is equal to 0.

The function returns the number of bytes sent if it succeeds, -1 on error.

The recv() Function

The recv() function is similar to read(), but allows us to specify some
options to control how the data are received. We will not consider the
possible options in this course; we will assume the flags argument is equal
to 0.

recv() is defined as follows:

#include <sys/socket.h>
ssize_t recv(int sockfd, void *buf, size_t nbytes, int flags);
The function returns the length of the message in bytes, 0 if no messages are
available and the peer has performed an orderly shutdown, or -1 on error.
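The two calls fit together as sketched below. A socketpair() stands in for a real TCP connection here (an assumption made so the example is self-contained), and flags is 0 on both sides, as assumed in the text.

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Send msg on one end of a connected stream-socket pair and receive it on
 * the other.  Returns the number of bytes received, or -1 on error. */
ssize_t send_and_recv(const char *msg, char *buf, size_t buflen) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
        return -1;

    ssize_t n = -1;
    if (send(fds[0], msg, strlen(msg), 0) == (ssize_t)strlen(msg))
        n = recv(fds[1], buf, buflen, 0);   /* flags = 0 on both calls */

    close(fds[0]);
    close(fds[1]);
    return n;
}
```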

4. Error Control Mechanism


The error control mechanism ensures that the data received is exactly the
same as the data the sender sent. Error control mechanisms fall into two
categories: Stop-and-Wait ARQ and sliding window. The sliding window approach
is further divided into two categories: Go-Back-N and Selective Repeat. Which
error control mechanism to use, stop-and-wait or sliding window, is chosen
based on the usage.
5. What is Stop and Wait protocol?
In stop-and-wait, the sender sends a unit of data to the receiver, then stops
and waits until it receives an acknowledgment from the receiver before
sending the next one. The stop-and-wait protocol is a flow control protocol,
where flow control is one of the services of the data link layer.

It is a data-link layer protocol used for transmitting data over noiseless
channels. It provides unidirectional data transmission, which means that
either sending or receiving of data takes place at a time. It provides a
flow-control mechanism but does not provide any error control mechanism.

The idea behind this protocol is that after the sender sends a frame, it
waits for the acknowledgment before sending the next frame.
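The send-stop-wait cycle can be sketched as a toy model. The loss array is an assumption of the simulation, marking which transmission attempts (the frame or its ACK) are lost in transit.

```c
/* Toy stop-and-wait sender.  Exactly one frame is outstanding at a time;
 * if a transmission attempt is lost (the frame or its ACK), a timeout
 * fires and the same frame is sent again.  attempt_lost[i] == 1 marks
 * attempt i as lost.  Returns the total number of transmissions needed to
 * deliver nframes, or -1 if the attempts run out first. */
int stop_and_wait_transmissions(int nframes, const int *attempt_lost,
                                int nattempts) {
    int delivered = 0;
    for (int i = 0; i < nattempts; i++) {
        if (!attempt_lost[i])
            delivered++;        /* frame arrived and the ACK came back */
        if (delivered == nframes)
            return i + 1;       /* attempts used so far */
    }
    return -1;
}
```

With no losses the transmission count equals the frame count; every loss adds exactly one retransmission.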

Disadvantages of Stop and Wait protocol

1. Problems due to lost data

Suppose the sender sends the data and the data is lost. The receiver waits
for the data for a long time. Since the data is not received, the receiver
does not send any acknowledgment, and since the sender does not receive any
acknowledgment, it will not send the next packet. The consequences of lost
data are:

o The sender waits an infinite amount of time for an acknowledgment.

o The receiver waits an infinite amount of time for the data.

2. Problems due to a lost acknowledgment

Suppose the sender sends the data and it is received by the receiver. On
receiving the packet, the receiver sends the acknowledgment, but the
acknowledgment is lost in the network, so the sender never receives it. The
sender then cannot send the next packet, because in the stop-and-wait
protocol the next packet cannot be sent until the acknowledgment of the
previous packet is received.

3. Problems due to delayed data or acknowledgment

Suppose the sender sends the data and it is received by the receiver. The
receiver then sends the acknowledgment, but the acknowledgment arrives after
the timeout period on the sender's side. Because the acknowledgment is
received late, it can be wrongly taken as the acknowledgment of some other
data packet.

6. What is Sliding Window Protocol?

The sliding window is a technique for sending multiple frames at a time. It
controls the data packets between two devices where reliable and in-order
delivery of data frames is needed. It is also used in TCP (Transmission
Control Protocol). In this technique, each frame is assigned a sequence
number. The sequence numbers are used to find the missing data at the
receiver end and to avoid duplicate data.

Types of Sliding Window Protocol

Sliding window protocol has two types:

1. Go-Back-N ARQ

2. Selective Repeat ARQ

7. What is Go-Back-N ARQ?

Go-Back-N ARQ stands for Go-Back-N Automatic Repeat Request. It is a data
link layer protocol that uses a sliding window method. In this protocol, if
any frame is corrupted or lost, that frame and all subsequent frames have to
be sent again.
The size of the sender window is N in this protocol. For example, in
Go-Back-8 the size of the sender window is 8. The receiver window size is
always 1.

If the receiver receives a corrupted frame, it discards it; the receiver does
not accept a corrupted frame. When the timer expires, the sender sends the
correct frame again.

In Go-Back-N ARQ (Automatic Repeat Request), the maximum send window size is
determined by the number of bits used for the sequence number. With 5 bits we
get 2^5, or 32, possible sequence numbers. To make sure there is no confusion
when the sequence numbers wrap around, Go-Back-N can use all but one of them,
so the maximum size of the send window is 2^5 - 1 = 31. (The rule of using
only half the sequence number space, 2^5 / 2 = 16, applies to Selective
Repeat ARQ, where the sender and receiver windows are the same size.)
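The window-size rules for both ARQ variants reduce to simple arithmetic on the number of sequence-number bits m, as this small sketch shows:

```c
/* Maximum send-window sizes for an m-bit sequence number:
 * Go-Back-N can use all but one sequence number: 2^m - 1.
 * Selective Repeat can use only half the space:  2^(m-1). */
int gbn_max_window(int m) { return (1 << m) - 1; }
int sr_max_window(int m)  { return 1 << (m - 1); }
```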

8. What is Selective Repeat ARQ?

Selective Repeat ARQ stands for Selective Repeat Automatic Repeat Request. It
is a data link layer protocol that uses a sliding window method. The
Go-Back-N ARQ protocol works well if errors are rare, but if frames are
frequently corrupted, a lot of bandwidth is lost in sending the frames again.
In that case we use the Selective Repeat ARQ protocol, in which only the
damaged or lost frame is retransmitted. In this protocol, the size of the
sender window is always equal to the size of the receiver window, and the
window size is always greater than 1.

If the receiver receives a corrupt frame, it does not simply discard it; it
sends a negative acknowledgment to the sender. The sender retransmits that
frame as soon as the negative acknowledgment is received, without waiting for
any timeout.

9. Differentiate Go-Back-N ARQ and Selective Repeat ARQ

Go-Back-N ARQ                              Selective Repeat ARQ

If a frame is corrupted or lost, all       Only the frame that is corrupted
subsequent frames have to be sent          or lost is sent again.
again.

If it has a high error rate, it wastes     It loses much less bandwidth.
a lot of bandwidth.

It is less complex.                        It is more complex, because it has
                                           to do sorting and searching as
                                           well, and it also requires more
                                           storage.

It does not require sorting.               Sorting is done to get the frames
                                           in the correct order.

It does not require searching.             A search operation is performed.

It is used more.                           It is used less because it is more
                                           complex.

Distance Vector Routing Protocol:

Let dx(y) be the cost of the least-cost path from node x to node y. The least
costs are related by the Bellman-Ford equation:

dx(y) = minv{ c(x,v) + dv(y) }

where minv is taken over all neighbors v of x. After traveling from x to a
neighbor v, if we follow the least-cost path from v to y, the total path cost
is c(x,v) + dv(y). The least cost from x to y is the minimum of
c(x,v) + dv(y) taken over all neighbors.

With the Distance Vector Routing algorithm, the node x contains the following
routing information:

o For each neighbor v, the cost c(x,v) of the path from x to its directly
attached neighbor v.
o Node x's own distance vector, Dx = [ Dx(y) : y in N ], containing its cost
to all destinations y in N.
o The distance vector of each of its neighbors, Dv = [ Dv(y) : y in N ] for
each neighbor v of x.

Distance vector routing is an asynchronous algorithm in which node x sends a
copy of its distance vector to all its neighbors. When node x receives a new
distance vector from one of its neighbors, v, it saves v's distance vector
and uses the Bellman-Ford equation to update its own distance vector:

dx(y) = minv{ c(x,v) + dv(y) } for each node y in N

After node x has updated its own distance vector table using the above
equation, it sends the updated table to all its neighbors so that they can
update their own distance vectors.
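One Bellman-Ford update at node x, for a single destination y, is a straightforward minimum over the neighbors. In this sketch INT_MAX stands in for an unreachable cost, an assumption of the simulation.

```c
#include <limits.h>

/* dx(y) = min over neighbors v of c(x,v) + dv(y).
 * cost[v] is c(x,v), the link cost from x to neighbor v;
 * dv_to_y[v] is Dv(y), neighbor v's advertised distance to y.
 * INT_MAX means "unreachable". */
int dv_update(const int *cost, const int *dv_to_y, int nneighbors) {
    int best = INT_MAX;
    for (int v = 0; v < nneighbors; v++) {
        if (cost[v] == INT_MAX || dv_to_y[v] == INT_MAX)
            continue;                       /* skip unreachable paths */
        if (cost[v] + dv_to_y[v] < best)
            best = cost[v] + dv_to_y[v];
    }
    return best;                            /* INT_MAX if y is unreachable */
}
```

For example, with two neighbors at link costs 1 and 5 advertising distances 10 and 2 to y, the update yields min(1+10, 5+2) = 7.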

Three keys to understanding the working of the Distance Vector Routing
algorithm:

Knowledge about the whole network: Each router shares its knowledge of the
entire network. The router sends its collected knowledge about the network to
its neighbors. In distance vector routing, the updating packet conveys the
router's knowledge of the whole internetwork, and the updating packets are
sent periodically.
Routing only to neighbors: The router sends its knowledge about the network
only to those routers to which it has direct links. It sends whatever it
knows about the network through its ports; each receiving router uses that
information to update its own routing table.
Information sharing at regular intervals: Every 30 seconds, the router sends
its information to the neighboring routers.
Leaky Bucket Algorithm
The leaky bucket algorithm is a "traffic shaping" algorithm to reduce the load
the transport layer places on the network layer and reduce congestion in the
network. Commonly used in asynchronous transfer mode (ATM) networks, the
algorithm provides a way to temporarily store a variable number of requests
and then organize them into a set-rate output of packets.
Network congestion and traffic shaping
Traffic congestion is a common problem in all networks. When too many
packets flow through a network, packet delays or packet losses can occur, both
of which degrade the network's performance. This situation is known as
congestion.
Congestion in a network often happens when the traffic is bursty. In the
seven-layer OSI model, the network and transport layers are layer 3 and
layer 4, respectively. The two layers are jointly responsible for handling
network congestion, which they do by "shaping" the traffic.
How the leaky bucket algorithm works, with an example
The leaky bucket algorithm is ideal for smoothing out bursty traffic. Just like a
hole at the bottom of a water bucket leaks water out at a fixed rate, the leaky
bucket algorithm does the same with network traffic. Bursty chunks of traffic
are stored in a "bucket" with a "hole" and sent out at a controlled, average
rate.
The hole represents the network's commitment to a particular bandwidth. The
leaky bucket shapes the incoming traffic to ensure it conforms to the
commitment. Thus, regardless of how much data traffic enters the bucket, it
always leaves at a constant output rate (the commitment). This mechanism
regulates the packet flow in the network and helps to prevent congestion that
leads to performance deterioration and traffic delays.
Leaky bucket example. Suppose data enters the network from various sources at
different speeds. Consider one bursty source that sends data at 20 Mbps for
2 seconds, for a total of 40 Mb. Then it is silent, sending no data for
5 seconds. Then it again transmits data at a rate of 10 Mbps for 3 seconds,
sending another 30 Mb. So, in a time span of 10 seconds, the source sends
70 Mb of data.
However, the network has only committed a bandwidth of 5 Mbps to this source.
Therefore, it uses the leaky bucket algorithm to output traffic at a constant
rate of 5 Mbps, which smooths out the network traffic. Over the 10-second
span, 50 Mb leaves the bucket; the remaining 20 Mb stays queued in the bucket
and drains over the following 4 seconds.
Without the leaky bucket algorithm in place, the initial burst of 20 Mbps would
have consumed a lot more bandwidth than the network had reserved
(committed) for the source, which would have caused congestion and a
slowdown in the network.
A leaky bucket algorithm is primarily used to control the rate at which traffic
enters the network. It provides a mechanism for smoothing bursty input traffic
in a flow to present a steady stream into the network.
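The example above can be replayed with a toy leaky bucket. The per-second arrival array in the usage below is an assumption matching the numbers in the text.

```c
/* Toy leaky bucket.  arrivals[t] is the traffic (Mb) entering the bucket
 * during second t; the bucket drains at most rate Mb per second.  out[t]
 * receives the amount actually sent in second t, which never exceeds rate.
 * Returns the backlog (Mb) still queued after the n seconds. */
int leaky_bucket(const int *arrivals, int n, int rate, int *out) {
    int backlog = 0;
    for (int t = 0; t < n; t++) {
        backlog += arrivals[t];
        out[t] = backlog < rate ? backlog : rate;  /* smooth, capped output */
        backlog -= out[t];
    }
    return backlog;
}
```

Feeding it the bursty pattern from the example (20+20 Mb, silence, then 10+10+10 Mb) at a 5 Mbps commitment produces a constant 5 Mb output every second, with 20 Mb still queued after 10 seconds.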

FTP (File Transfer Protocol):


FTP (File Transfer Protocol) is a standard network protocol used for the
transfer of files from one host to another over a TCP-based network, such as
the Internet.
FTP works by opening two connections that link the computers trying to
communicate with each other. One connection is designated for the commands
and replies that get sent between the client and the server, and the other
channel handles the transfer of data. During an FTP transmission, four kinds
of commands are used by the computers, servers, or proxy servers that are
communicating: "send," "get," "change directory," and "transfer."
While transferring files, FTP uses three different modes: block, stream, and
compressed. The stream mode enables FTP to manage information in a string
of data without any boundaries between them. The block mode separates the
data into blocks, and in the compress mode, FTP uses an algorithm called the
Lempel-Ziv to compress the data. FTP sends its control information out-of-
band, separate from the data connection, over the control connection.
If 5 files are transferred from server A to client B in the same session, the
client first initiates the TCP control connection through port 21. Then, for
every file transfer, a separate data connection is made through port 20.
Since there are five files to transfer, that is 1 control connection + 5 data
connections = 6 TCP connections in total.
What is FTP useful for?
One of the main reasons why modern businesses and individuals need FTP is
its ability to perform large file size transfers. When sending a relatively small
file, like a Word document, most methods will do, but with FTP, you can send
hundreds of gigabytes at once and still get a smooth transmission.
The ability to send larger amounts of data, in turn, improves workflow.
Because FTP allows you to send multiple files at once, you can select several
and then send them all at the same time. Without FTP services, you may have
to send them one by one, when you could be accomplishing other work.
For example, if you have to transfer a large collection of important documents
from headquarters to a satellite office but have a meeting to attend in five
minutes, you can use FTP to send them all at once. Even if it takes 15 minutes
for the transfer to complete, FTP can handle it, freeing you up to attend the
meeting.
What is an FTP Port?
An FTP port is a communication endpoint and allows data transfer between a
computer and a server. A computer's operating system only uses a specific
number of ports, which are necessary for software to connect through a
network. An FTP port is required for the client and server to quickly exchange
files.
FTP vs HTTP
Even though Hyper Text Transfer Protocol (HTTP) and FTP are similar in that
they are application-layer protocols that enable you to send files between
systems, there are some key differences. HTTP can support multiple sessions at
the same time because it is a stateless protocol. This means it does not save
the data used in a session to employ it in the next one.
FTP, on the other hand, is stateful, which means it collects data about the
client and uses it in the next request the client makes. Because FTP performs
this function, it is limited in the number of sessions it can support
simultaneously. Regardless of the bandwidth of a network, HTTP has the
potential to be a much more efficient method of data transmission.
Another key difference is that with FTP, there needs to be client authentication
before information is transferred. With HTTP, no client authentication is
needed. HTTP uses a well-known, common port, making it easy for firewalls to
work with. In some cases, FTP can be more difficult for a firewall to manage.
