Unit-5 Notes

The document discusses the transport layer, covering transport layer primitives, sockets, and the TCP and UDP protocols. UDP is a connectionless protocol that provides fast but unreliable data transmission. It has low overhead and is used for applications like DNS, streaming media, and online gaming that prioritize speed over reliability.


R-20 III-I IT-CN Unit-5


Part-1: Transport Layer

1. The network layer provides end-to-end packet delivery using datagrams or virtual
circuits. After this, the transport layer provides data transport from a process on a
source machine to a process on a destination machine.

The transport layer provides reliability that is independent of the physical networks in between, and it provides the abstractions that applications need to use the network.

The ultimate goal of the transport layer is to provide efficient, reliable, and cost-
effective data transmission service to its users, normally processes in the application
layer. The relationship of the network, transport, and application layers is illustrated
in Fig. 6-1.

2. Transport Layer Primitives:



3. NOTE: The term segment is used for messages sent from transport entity to
transport entity. Internet protocols use this term.

4. Berkeley Sockets: Sockets were first released as part of the Berkeley UNIX 4.2BSD
software distribution in 1983. They quickly became popular. The primitives are now
widely used for Internet programming on many UNIX operating systems.

Primitives:
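The socket primitives can be illustrated with Python's standard socket module, which exposes the Berkeley API almost directly. This is a minimal sketch, not part of the original notes; the echo behaviour and the use of an ephemeral port are illustrative choices.

```python
import socket
import threading

# Server side: SOCKET, BIND, LISTEN, then ACCEPT in a thread.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET
srv.bind(("127.0.0.1", 0))                                # BIND (port 0 = any free port)
srv.listen(1)                                             # LISTEN
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()                                # ACCEPT (blocks for a client)
    data = conn.recv(1024)                                # RECEIVE
    conn.sendall(data.upper())                            # SEND (echo back, uppercased)
    conn.close()                                          # CLOSE

t = threading.Thread(target=serve)
t.start()

# Client side: SOCKET, CONNECT, SEND, RECEIVE, CLOSE.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET
cli.connect(("127.0.0.1", port))                          # CONNECT
cli.sendall(b"hello")                                     # SEND
reply = cli.recv(1024)                                    # RECEIVE
cli.close()                                               # CLOSE
t.join()
srv.close()
print(reply)  # b'HELLO'
```

Each commented primitive corresponds to one of the transport service primitives discussed above; the operating system's TCP entity does the actual segment exchange.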

5. Transport Layer Protocols: The transport service is implemented by a transport protocol used between the two transport entities. In some ways, transport protocols resemble data link protocols: both have to deal with error control, sequencing, flow control, and other issues.

However, differences also exist. These differences are due to dissimilarities between
the environments in which the two protocols operate, as shown in Fig. 6-7.

At the data link layer, two routers communicate directly via a physical channel. At the
transport layer, this physical channel is replaced by the entire network.

At the data link layer, establishing a connection over the wire is simple: the other end is always there, so just sending a message is sufficient. If the message is not acknowledged due to an error, it can be resent.

In the transport layer, initial connection establishment is more complicated, as we will see later.

Another difference between the data link layer and the transport layer is the existence of storage capacity in the network. When a router sends a packet over a link, it may arrive or be lost, but it cannot go into hiding and suddenly emerge later. The network as a whole, however, can store packets and deliver them much later, and the transport layer must be prepared for that.

A final difference is that buffering and flow control are needed in both layers, but the transport layer deals with them in a different way than the data link layer does.

6. Port Number: TCP service is obtained by both the sender and the receiver creating
end points, called sockets. Each socket has a socket number (address) consisting
of the IP address of the host and a 16-bit number local to that host, called a
port.

A socket may be used for multiple connections at the same time. In other words, two or more connections may terminate at the same socket. Connections are identified by the socket identifiers at both ends, that is, (socket1, socket2).

Port numbers below 1024 are reserved for standard services of privileged users
(e.g., root in UNIX systems). They are called well-known ports.

7. User Datagram Protocol: The Internet protocol suite supports a connectionless transport protocol called UDP (User Datagram Protocol). Through UDP, applications can send encapsulated IP datagrams without having to establish a connection.

NOTE: A datagram is a basic transfer unit associated with a network and provides
connectionless communications. Datagrams are structured as header and payload
sections. The delivery, arrival time, and order of arrival of datagrams are not
guaranteed by the network.

UDP transmits segments consisting of an 8-byte header followed by the payload.



### User Datagram Protocol (UDP)

#### Introduction

The User Datagram Protocol (UDP) is a core protocol in the Internet Protocol (IP)
suite. Defined in RFC 768, UDP provides a lightweight, connectionless
communication method that is ideal for applications needing fast, efficient data
transmission with minimal overhead.

#### Key Characteristics of UDP

1. **Connectionless**: UDP does not establish a connection before sending data, making it faster but less reliable than connection-oriented protocols like TCP.

2. **Unreliable**: There is no guarantee of data delivery, ordering, or protection from duplication. Applications that require reliability must implement their own error-checking and retransmission mechanisms.

3. **Low Overhead**: The UDP header is only 8 bytes, making it suitable for
applications where minimizing overhead is crucial.

4. **Broadcast and Multicast**: UDP supports broadcasting and multicasting, making it ideal for applications like streaming where data is sent to multiple recipients.

#### UDP Packet Structure

A UDP packet, or datagram, consists of a simple header and data:

- **Source Port (16 bits)**: The port number of the sender.

- **Destination Port (16 bits)**: The port number of the receiver.

- **Length (16 bits)**: The length of the UDP header and data.

- **Checksum (16 bits)**: Used for error-checking the header and data.

- **Data**: The actual payload being transmitted.
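The four header fields above can be packed and parsed with Python's struct module. This is an illustrative sketch; the port numbers and payload are arbitrary example values.

```python
import struct

# Pack an 8-byte UDP header: source port, destination port,
# length (header + payload), checksum. "!HHHH" = four big-endian
# 16-bit fields, matching the header layout above.
payload = b"hello"
src_port, dst_port = 53, 40000
length = 8 + len(payload)       # length covers header AND data
checksum = 0                    # 0 means "no checksum computed" in IPv4

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload
assert len(header) == 8

# Parse it back out of the datagram.
s, d, l, c = struct.unpack("!HHHH", datagram[:8])
print(s, d, l, c)  # 53 40000 13 0
```

The 8-byte fixed size is exactly why UDP's overhead is so low compared with TCP's 20-byte minimum header.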

#### UDP Services

1. **Minimal Error Checking**: While UDP includes a checksum for error detection, it
does not guarantee delivery. If an error is detected, the packet is discarded without
any notification.

2. **No Flow Control**: UDP does not manage the rate of data transmission between
sender and receiver, which can lead to potential data loss if the receiver is
overwhelmed.

3. **No Congestion Control**: Unlike TCP, UDP does not reduce data transmission
rates in response to network congestion.

#### Applications of UDP

1. **DNS (Domain Name System)**: Uses UDP for quick query and response
exchanges, where low latency is more critical than reliability.

2. **VoIP (Voice over IP)**: Transmits voice data where occasional loss is tolerable,
but low latency is essential for maintaining conversation quality.

3. **Streaming Media**: Delivers audio and video content efficiently, where slight
data loss is acceptable in favor of timely delivery.

4. **Online Gaming**: Provides real-time updates and commands where speed is


crucial, and minor packet loss does not significantly impact gameplay.

5. **TFTP (Trivial File Transfer Protocol)**: Uses UDP for simple, low-overhead file
transfers.

#### Conclusion

UDP is a simple but powerful protocol suitable for applications that prioritize speed
and efficiency over reliability. It is widely used in scenarios where timely delivery is
more critical than perfect accuracy, making it a fundamental component of many
real-time and streaming services.

When a UDP packet arrives, its payload is handed to the process attached to the
destination port. This attachment occurs through the BIND primitive.
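The BIND-then-deliver behaviour can be demonstrated with two UDP sockets on the loopback interface. A minimal sketch, assuming localhost delivery; no connection setup happens at any point.

```python
import socket

# Receiver: BIND attaches this process to a port so that arriving
# datagrams addressed to the port are handed to it.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # BIND to any free port
port = rx.getsockname()[1]

# Sender: no CONNECT, no handshake; just address a datagram and send.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ping", ("127.0.0.1", port))

# The payload is delivered to the process bound to the destination port.
data, addr = rx.recvfrom(1024)
print(data)  # b'ping'
tx.close()
rx.close()
```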

The advantage of UDP over raw IP is the use of source and destination ports. Without the port fields, the transport layer would not know what to do with incoming packets.

The UDP length field includes the 8-byte header and the data. The minimum length is 8 bytes (a header with no payload). The maximum is 65,515 bytes, lower than the largest number that fits in 16 bits (65,535) because the 20-byte IP header must also fit in the same maximum-size IP packet.

An optional Checksum is also provided for extra reliability. It checksums the header, the data, and an IP pseudo-header. The pseudo-header contains the 32-bit IPv4 addresses of the source and destination machines, the protocol number for UDP (17), and the byte count for the UDP segment. It is used to detect misdelivered packets.
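The checksum over the pseudo-header plus UDP segment uses the standard Internet checksum (the one's-complement sum of 16-bit words, RFC 1071). A sketch with arbitrary example addresses and ports:

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum: one's-complement of the one's-complement
    sum of the data taken as big-endian 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                    # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                  # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# IPv4 pseudo-header: src addr, dst addr, zero byte, protocol (17 for
# UDP), and the UDP length. Addresses here are documentation examples.
src = socket.inet_aton("192.0.2.1")
dst = socket.inet_aton("192.0.2.2")
udp = struct.pack("!HHHH", 53, 40000, 13, 0) + b"hello"  # checksum field 0
pseudo = src + dst + struct.pack("!BBH", 0, 17, len(udp))
csum = inet_checksum(pseudo + udp)

# Verification property: re-summing with the checksum filled in gives 0.
udp_ok = struct.pack("!HHHH", 53, 40000, 13, csum) + b"hello"
assert inet_checksum(pseudo + udp_ok) == 0
```

Because the pseudo-header includes both IP addresses, a datagram delivered to the wrong host fails the checksum even if its own bytes are intact.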

8. UDP does not do flow control, congestion control, or retransmission upon receipt of a
bad segment. It just provides an interface to the IP protocol with multiple processes
using the ports and optional end-to-end error detection.

9. Remote Procedure Call: When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is suspended and execution of the called procedure takes place on 2. Information can be transported from the client to the server in the parameters and can come back in the procedure result.

UDP APPLICATIONS

 Live Streaming (Audio/Video): Streaming live video or audio is all about smooth
playback. Occasional packet loss with UDP is less disruptive than the delays caused
by retransmission requests in TCP.
 Online Gaming: Fast reflexes are king in online gaming. UDP ensures quick
response times by prioritizing speed over order and error correction. Even with a
dropped packet here and there, the gaming experience remains fluid.
 Voice over IP (VoIP): Similar to live streaming, real-time voice calls benefit from
UDP's speed. While a slight delay or missed packet might cause a stutter, it's
generally preferable to the choppiness caused by retransmissions in TCP.
 DNS (Domain Name System): DNS lookups involve translating website names into
IP addresses. These are typically small data packets, and UDP's speed and efficiency
make it a perfect fit for these quick exchanges.
 Network Management Protocols (NMPs): Protocols like SNMP (Simple Network
Management Protocol) often utilize UDP for quick exchanges of network monitoring
data. Speed is crucial for real-time network management, and some data loss can be
tolerated.
 Video Conferencing: Similar to live streaming and VoIP, real-time video
conferencing benefits from UDP's speed. A slight glitch due to a dropped packet is
less disruptive than the delays caused by data retransmission.

 Real-time Stock Tickers: Getting stock quotes in real-time is essential for financial
professionals. UDP delivers these updates quickly, even if a few data packets are lost
along the way.
 Online Multiplayer Gaming: Just like online gaming in general, real-time
multiplayer games rely on UDP for fast response times. A bit of data loss is less
disruptive than the lag caused by TCP's error correction mechanisms.
 TFTP (Trivial File Transfer Protocol): While not commonly used today, TFTP is a
simple file transfer protocol that utilizes UDP for quick transfers of small files.

Remember: When speed is paramount and some data loss is acceptable, UDP is your
champion. However, for applications where data integrity is critical (like downloading
important files), TCP remains the reliable choice.

10. Transmission Control Protocol: TCP was specifically designed to provide a reliable end-to-end byte stream over an unreliable internetwork. An internetwork differs from a single network since different parts may have different topologies, bandwidths, delays, packet sizes, and other parameters.

TCP was designed to adapt the properties of the internetwork dynamically and to be
robust if failures occur (fault tolerance).

Each machine supporting TCP has a TCP transport entity (ex: a library procedure, a
user process, or part of the kernel). It manages TCP streams and interfaces to the IP
layer.

A TCP entity accepts user data streams from local processes, breaks them up into
pieces and sends each piece as a separate IP datagram. When datagrams
containing TCP data arrive at destination, they are given to local TCP entity, which
reconstructs the original byte streams.

11. NOTE: In computing, a daemon is a program that runs continuously as a background process and wakes up to handle periodic service requests coming from remote processes. The daemon program is alerted to the request by the operating system (OS); it either responds to the request or forwards it to another program.

The following are some examples of daemons:

 init. This is the first daemon to start up when Unix boots, and it spawns all other
processes.
 inetd. This listens for internet requests on a designated port number and assigns
a server program to handle them. Services handled by inetd include rlogin,
telnet, ftp etc.
 crond. This daemon executes scheduled commands.
 dhcpd. This daemon provides Dynamic Host Configuration Protocol services.
 ftpd. This daemon is often started by inetd to handle File Transfer Protocol
requests.
 httpd. This daemon acts as a web server.

All TCP connections are full duplex and point-to-point.

NOTE:
 Simplex: Only one of the two devices on a link can transmit, the other can only
receive. This mode uses the channel to send data in one direction. Example:
Keyboard and traditional monitors. The keyboard can only introduce input, the
monitor can only give the output.
 Half-Duplex: Each station can both transmit and receive, but not at the same
time. When one device is sending, the other can only receive. Example: Walkie-
talkie in which message is sent one at a time and messages are sent in both
directions.
 Full Duplex: Both stations can transmit and receive simultaneously. Ex:
Telephone.
 Point-to-point means that each connection has exactly two end points.
 TCP does not support multicasting or broadcasting.

12. A TCP connection is a byte stream, not a message stream. Message boundaries
are not preserved end to end.

For example, if the sending process does four 512-byte writes to a TCP stream, these data may be delivered to the receiving process as four 512-byte chunks, two 1024-byte chunks, or one 2048-byte chunk.

When an application passes data to TCP, TCP may send it immediately or buffer it
(in order to collect a larger amount to send at once), at its discretion.

If some data is to be sent immediately (ex: interactive game), TCP has the notion of
a PUSH flag that is carried on packets. Through this, applications tell TCP
implementations not to delay the transmission.

NOTE: Urgent data is an interesting feature of TCP service that remains in the
protocol but is rarely used. When an application has high priority data, the sending
application passes it to TCP with the URGENT flag. This causes TCP to stop
accumulating data and transmit everything it has immediately.

13. The TCP Protocol: Every byte on a TCP connection has its own 32-bit sequence
number. Separate 32-bit sequence numbers are carried on packets for the sliding
window position in one direction and for acknowledgements in the reverse direction.
The sending and receiving TCP entities exchange data in the form of segments. A
TCP segment consists of a 20-byte header followed by data bytes. The TCP
software decides how big segments should be.
The protocol used by TCP entities is the sliding window protocol with a dynamic
window size.
When a sender transmits a segment, it starts a timer. When the segment arrives at
the destination, the receiving TCP entity sends back a segment bearing an
acknowledgement number equal to the next sequence number it expects to receive
and the remaining window size.
If the sender’s timer goes off before the acknowledgement is received, the sender
transmits the segment again.
14. Problems the protocol must deal with:
 Segments can arrive out of order. Ex: Bytes 3072–4095 can arrive but cannot be
acknowledged because bytes 2048–3071 have not turned up yet.
 Segments can be delayed so long in transit that the sender times out and
retransmits them.
 Retransmissions may include different ranges than the original transmission.

15. The TCP Segment Header: The figure below shows the layout of a TCP segment.

Every segment begins with a 20-byte header. Note that segments without any data are legal and are used for acknowledgements and control messages.

 The Source port and Destination port fields identify the local end points of the
connection.
 The source and destination end points together identify the connection, which is called a 5-tuple: protocol (TCP), source IP and source port, and destination IP and destination port.
 Eight 1-bit flags: CWR and ECE are used to signal congestion when ECN
(Explicit Congestion Notification) is used, as specified in RFC 3168. ECE is set
to signal an ECN-Echo to a TCP sender to tell it to slow down when the TCP
receiver gets a congestion indication from the network. CWR is set to signal
Congestion Window Reduced from the TCP sender to the TCP receiver so that
it knows the sender has slowed down and can stop sending the ECN-Echo.
 URG -> Urgent
 ACK -> Acknowledgement
 PSH -> Pushed data
 RST -> Reset a connection
 SYN -> Synchronization
 FIN -> Finish
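The flag bits live in the low byte of the 16-bit field that also carries the data offset, so parsing the fixed 20-byte header is a single struct unpack. A sketch with hand-picked example field values; the bit positions shown are the standard TCP flag assignments.

```python
import struct

# TCP flag bit positions within the low byte of the offset/flags field.
FLAGS = {"CWR": 0x80, "ECE": 0x40, "URG": 0x20, "ACK": 0x10,
         "PSH": 0x08, "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

# Build a minimal header: ports, seq/ack numbers, data offset (5 words
# = 20 bytes, no options) plus SYN+ACK flags, window, checksum, urgent.
offset_and_flags = (5 << 12) | FLAGS["SYN"] | FLAGS["ACK"]
hdr = struct.pack("!HHIIHHHH", 1234, 80, 1000, 2000,
                  offset_and_flags, 65535, 0, 0)
assert len(hdr) == 20

# Parse it back and list which flags are set.
sport, dport, seq, ack, of, win, csum, urg = struct.unpack("!HHIIHHHH", hdr)
set_flags = sorted(name for name, bit in FLAGS.items() if of & bit)
print(set_flags)  # ['ACK', 'SYN']
```

A SYN+ACK like this one is exactly the second segment of the three-way handshake described below.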

TCP Connection Explained: A 3-Phase Journey

TCP, or Transmission Control Protocol, is a connection-oriented transport layer protocol. This means it establishes a reliable connection between two applications before sending any data. Imagine it as a dedicated communication channel, ensuring your messages get delivered accurately and in order. Here's a breakdown of the three phases involved in a TCP connection:
1. Connection Establishment (Three-Way Handshake):
This phase acts like a handshake between the sender (client) and receiver (server)
to agree on the terms of communication. It involves three steps:
 SYN (Synchronize): The client sends a SYN segment to initiate the
connection. This segment carries a sequence number used for data tracking.
 SYN+ACK (Synchronize + Acknowledge): The server responds with a
SYN+ACK segment. This acknowledges the client's SYN and sends its own
sequence number.
 ACK (Acknowledge): The client sends a final ACK segment to acknowledge
the server's SYN+ACK. Now, both parties are synchronized and ready for
data transfer.
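The three handshake steps can be replayed as a toy message trace. This is a simulation for illustration only; the initial sequence numbers (ISNs) 100 and 300 are arbitrary, and a real TCP picks them pseudo-randomly.

```python
# Toy trace of the three-way handshake. Note that a SYN consumes one
# sequence number, which is why each ACK is the peer's ISN + 1.
client_isn, server_isn = 100, 300
trace = []

# Step 1: client -> server, SYN with the client's ISN.
trace.append(("SYN", {"seq": client_isn}))
# Step 2: server -> client, SYN+ACK acknowledging the client's SYN
# and carrying the server's own ISN.
trace.append(("SYN+ACK", {"seq": server_isn, "ack": client_isn + 1}))
# Step 3: client -> server, ACK acknowledging the server's SYN.
trace.append(("ACK", {"seq": client_isn + 1, "ack": server_isn + 1}))

for msg, fields in trace:
    print(msg, fields)
```

After the third message both sides agree on each other's sequence numbers and the connection is ESTABLISHED.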
2. Data Transfer:

Once the connection is established, data can flow bidirectionally (full-duplex) between the client and server. Here are some key points to remember:
 Data Segments: Data is broken down into segments, each with sequence
and acknowledgment numbers for tracking.
 Piggybacking: Acknowledgments can be piggybacked onto data segments
for efficiency.
 Flow Control: Data transfer is regulated to prevent overwhelming the
receiver with too much data at once.
 Error Checking: TCP includes mechanisms to detect and retransmit lost or
corrupted data segments, ensuring reliable delivery.
3. Connection Termination:
Either the client or server can initiate connection termination. There are two main
methods:
 Three-Way Handshake: Similar to connection establishment, but with FIN
(Finish) flags instead of SYN flags.
o Client sends a FIN segment to indicate it's done sending data.
o Server acknowledges with a FIN+ACK segment and sends its own FIN
if finished.
o Client sends a final ACK to acknowledge the server's FIN.
 Half-Close (Optional): Allows one side to stop sending data while still
receiving data from the other side.
o Client sends a FIN to stop sending data.
o Server acknowledges and continues sending data until finished.
o Server sends a FIN to indicate it's done, and the client acknowledges.
By understanding these phases, you'll gain a solid grasp of how TCP establishes
reliable communication channels for data transfer between applications on different
networks.
16. TCP Connection: Connections are established in TCP by means of the three-way
handshake.

To establish a connection, the server waits for an incoming connection by executing the LISTEN and ACCEPT primitives in that order, either specifying a specific source or nobody in particular.
The client executes a CONNECT primitive, specifying:
 the IP address and port to which it wants to connect,
 the maximum TCP segment size, and
 some user data (e.g., a password).

The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit off
and waits for a response.

When this segment arrives at the destination, the TCP entity checks to see if there is
a process that has done a LISTEN on the port field. If not, it sends a reply with the
RST bit on to reject the connection.
If some process is listening to the port, that process is given the incoming TCP
segment. If it accepts the connection, an acknowledgement is sent back.
Note that a SYN segment consumes 1 byte of sequence space so that it can be
acknowledged.

If two hosts simultaneously try to establish a connection, the sequence of events is as illustrated in Fig. 6-37(b). The result of these events is that just one connection is established, because connections are identified by their end points. If the first and second setups result in a connection identified by (x, y), only one table entry is made for (x, y).

### Windows in TCP

#### Introduction
In TCP (Transmission Control Protocol), the concept of "windows" is essential for
managing the flow of data and ensuring efficient, reliable communication between
sender and receiver. TCP windows are part of the flow control and congestion
control mechanisms that help optimize network performance and prevent packet loss
and congestion.

#### Sliding Window Protocol



The sliding window protocol is a method used by TCP to control the amount of data
that can be sent before receiving an acknowledgment. It allows multiple packets to
be sent before requiring an acknowledgment for the earliest packet. The key
components are:

1. **Sender Window**: The range of sequence numbers that the sender is allowed to
send.
2. **Receiver Window**: The range of sequence numbers that the receiver is
prepared to accept.
3. **Window Size**: The number of bytes that can be sent without receiving an
acknowledgment. This size can change dynamically based on network conditions.

#### Flow Control


TCP uses flow control to ensure that a sender does not overwhelm a receiver with
too much data too quickly. This is achieved using the sliding window mechanism,
which adjusts the sender's transmission rate based on the receiver's capacity.

- **Receiver's Advertised Window**: The receiver advertises a window size, indicating the amount of buffer space available for incoming data. The sender must respect this limit to avoid overwhelming the receiver.
- **Sender's Transmission Window**: The sender maintains a window that includes unacknowledged data already sent and the new data it is allowed to send.

#### Congestion Control


Congestion control aims to prevent network congestion by adjusting the rate at which
data is sent based on network conditions. TCP employs several algorithms to
manage congestion:

1. **Slow Start**: Initially, the congestion window (cwnd) grows exponentially with
each acknowledgment received, doubling each round-trip time (RTT).
2. **Congestion Avoidance**: Once a threshold (ssthresh) is reached, the window
grows linearly to avoid sudden congestion.
3. **Fast Retransmit and Fast Recovery**: Upon detecting packet loss (e.g., via
duplicate ACKs), TCP reduces the window size and retransmits the lost packet
without waiting for a timeout.
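The slow-start and congestion-avoidance growth rules can be sketched as a toy per-RTT simulation. This counts window size in segments rather than bytes, and the ssthresh value and number of rounds are arbitrary example choices.

```python
# Toy congestion-window evolution: exponential growth (slow start)
# below ssthresh, linear growth (congestion avoidance) above it.
cwnd, ssthresh = 1, 16   # in segments; example values
history = []

for rtt in range(10):
    history.append(cwnd)
    if cwnd < ssthresh:
        cwnd *= 2        # slow start: doubles every RTT
    else:
        cwnd += 1        # congestion avoidance: +1 segment per RTT

print(history)  # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]
```

On packet loss a real TCP would cut ssthresh and shrink cwnd before resuming this growth pattern, which is what Fast Retransmit and Fast Recovery manage.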

#### Dynamic Window Management



TCP windows are dynamic and can change based on network conditions. Key
parameters include:

- **Initial Window Size**: The starting size of the congestion window.


- **Congestion Window (cwnd)**: Dictates the number of bytes that can be sent.
Adjusts based on perceived network congestion.
- **Advertised Window (rwnd)**: The available buffer space advertised by the
receiver. Limits the amount of data the sender can transmit.
- **Effective Window Size**: The minimum of cwnd and rwnd, determining the actual
window size for data transmission.

#### Example Scenario


Consider a sender transmitting data to a receiver:

1. **Initial Transmission**: The sender starts with a small congestion window (cwnd)
and sends a few packets.
2. **Acknowledgments Received**: The receiver acknowledges the packets, and the
sender increments the cwnd based on the acknowledgments.
3. **Window Adjustment**: The receiver's advertised window (rwnd) may change
based on buffer availability, and the sender adjusts its transmission rate accordingly.
4. **Congestion Detection**: If packet loss occurs, the sender reduces the cwnd and
retransmits the lost packet, then gradually increases the window size as conditions
improve.

#### Conclusion
Windows in TCP play a crucial role in managing data flow and network congestion,
ensuring efficient and reliable communication. By dynamically adjusting window
sizes based on real-time conditions, TCP optimizes data transfer rates and maintains
network stability.


### Flow Control in TCP

Flow control is a critical mechanism in TCP (Transmission Control Protocol) that ensures a sender does not overwhelm a receiver with too much data at once. This mechanism prevents buffer overflow at the receiver's end, ensuring smooth and efficient data transmission. TCP employs a sliding window protocol for flow control, which adjusts the rate of data transmission based on the receiver's ability to process the incoming data.

#### How Flow Control Works in TCP

1. **Sliding Window Protocol**


- TCP uses a sliding window mechanism where the sender can send multiple
packets before needing an acknowledgment for the first one.
- The size of this window can vary and is adjusted dynamically based on network
conditions and receiver capacity.

2. **Receiver's Advertised Window (rwnd)**


- The receiver specifies a window size (rwnd) in the TCP header, indicating the
available buffer space for incoming data.
- This value tells the sender the amount of data it can send without overwhelming
the receiver.
- The receiver continuously updates and sends this value to the sender as part of
the acknowledgment process.

3. **Sender's Transmission Window**


- The sender maintains its own transmission window, which is the amount of data it
is allowed to send but has not yet been acknowledged.
- This window is determined by the minimum of the sender’s congestion window
(cwnd) and the receiver’s advertised window (rwnd).

4. **Acknowledgments and Window Adjustment**



- The receiver acknowledges the receipt of data by sending ACK packets, which
include the updated window size.
- As the sender receives these ACKs, it adjusts its transmission window
accordingly.
- If the receiver’s buffer is full, it can advertise a window size of zero, signaling the
sender to stop sending more data until buffer space becomes available.

#### Detailed Steps of Flow Control


1. **Initial Transmission**
- The sender starts by sending data segments up to the limit defined by the
receiver’s initial advertised window size.
2. **Data Reception and Buffering**
- The receiver buffers incoming data and processes it.
- As it processes the data, it frees up buffer space and updates the advertised
window size accordingly.
3. **Acknowledgments with Window Size**
- The receiver sends ACK packets that include the current size of the available
buffer space (updated rwnd).
- These ACKs serve two purposes: they confirm receipt of data and inform the
sender of the current window size.
4. **Window Adjustment by Sender**
- The sender adjusts its transmission window based on the updated rwnd value.
- If the window size decreases, the sender slows down or stops sending new data
until space is available.
- If the window size increases, the sender can resume sending more data.

#### Example Scenario


- **Initial Condition**: The receiver advertises a window size of 5000 bytes.
- **Data Transfer**: The sender sends 4000 bytes of data in segments.
- **ACK Reception**: The receiver acknowledges the data and updates the
advertised window size to 3000 bytes (buffered data processed).
- **Continued Transmission**: The sender adjusts its window and sends additional
data up to the new window limit.
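The scenario above can be replayed in a few lines: the sender's usable window is always the advertised window minus the bytes sent but not yet acknowledged. This is a toy model, not a protocol implementation; the 5000/4000/3000 figures come from the scenario itself.

```python
# Toy replay of the flow-control scenario above.
rwnd = 5000          # receiver advertises 5000 bytes
in_flight = 0        # bytes sent but not yet acknowledged

# Data transfer: the sender transmits 4000 bytes in segments.
send = min(4000, rwnd - in_flight)
in_flight += send
assert send == 4000

# ACK reception: everything is acknowledged, but the receiver has
# only freed part of its buffer, so it now advertises 3000 bytes.
in_flight = 0
rwnd = 3000

# Continued transmission: the sender may fill the new window.
usable = rwnd - in_flight
print(usable)  # 3000
```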

#### Importance of Flow Control



- **Prevents Buffer Overflow**: By matching the sender’s rate with the receiver’s
processing capability, flow control prevents buffer overflow at the receiver.
- **Efficient Data Transfer**: Ensures that data is transmitted smoothly without
excessive delays or retransmissions.
- **Network Stability**: Contributes to overall network stability by preventing
congestion and data loss.

#### Conclusion
Flow control in TCP, primarily implemented through the sliding window protocol and
the receiver’s advertised window (rwnd), is essential for ensuring efficient and
reliable data transmission. It prevents the sender from overwhelming the receiver,
manages buffer space effectively, and maintains smooth network performance.
17. TCP Connection Management: The steps required to establish and release connections can be represented in a finite state machine with the 11 states listed in Fig. 6-38.

Each connection starts in the CLOSED state. It leaves that state when it turns into a
passive open (LISTEN) or an active open (CONNECT). If the other side agrees, a
connection is ESTABLISHED.
Connection release can be initiated by either side. When it is complete, the state
returns to CLOSED.
18. TCP Sliding Window: Window management in TCP decouples the issues of acknowledgement of the correct receipt of segments and receiver buffer allocation.

Ex: Suppose the receiver has a 4096-byte buffer. If the sender transmits a 2048-byte segment, the receiver will acknowledge it. Now only 2048 bytes of buffer space are left, so the receiver will advertise a window of 2048 starting at the next byte expected.
Now the sender transmits another 2048 bytes, which are acknowledged, but the
advertised window is of size 0. The sender must stop until the application on the
receiving host has removed some data from the buffer.
NOTE: When the window is 0, the sender may not normally send segments, with two
exceptions.
 First, urgent data may be sent, for example, to allow the user to kill the
process running on the remote machine.
 Second, the sender may send a 1-byte segment to force the receiver to re-
announce the next byte expected and the window size. This packet is called a
window probe.
The TCP standard provides this option to prevent deadlock (if a window update ever
gets lost).
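The 4096-byte buffer example above can be sketched as a toy model in which the advertised window is simply the free space left in the receiver's buffer. This is an illustration, not TCP's actual bookkeeping.

```python
# Toy model of the receiver's advertised window: buffer size minus
# the data still sitting unread in the buffer.
BUF = 4096
buffered = 0

def advertise():
    """Window the receiver advertises in its next ACK."""
    return BUF - buffered

assert advertise() == 4096
buffered += 2048             # sender transmits 2048 bytes; receiver buffers them
assert advertise() == 2048   # window shrinks, advertised with the ACK
buffered += 2048             # another 2048 bytes arrive
assert advertise() == 0      # window closed: sender must stop
buffered -= 1024             # application reads 1024 bytes from the buffer
print(advertise())  # 1024
```

When the window reaches 0 the sender waits, probing occasionally with 1-byte window probes so a lost window update cannot deadlock the connection.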

19. Delayed Acknowledgements: The idea is to delay acknowledgements and window updates for up to 500 msec in the hope of acquiring some data on which to hitch a free ride.

20. Nagle’s Algorithm: When data comes into the sender in small pieces, just send the
first piece and buffer all the rest until the first piece is acknowledged. Then send all
the buffered data in one TCP segment and start buffering again until the next
segment is acknowledged.

Nagle’s algorithm will put the many pieces in one segment, greatly reducing the
bandwidth used.
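Applications that cannot tolerate Nagle's buffering (interactive games, remote terminals) can disable it per socket. The sketch below uses the standard TCP_NODELAY socket option via Python's socket module.

```python
import socket

# Nagle's algorithm is enabled by default on TCP sockets. Setting
# TCP_NODELAY disables it, so small writes are transmitted
# immediately instead of being coalesced into larger segments.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
nodelay = bool(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
print(nodelay)  # True
```

Disabling Nagle trades bandwidth efficiency for latency, which is exactly the trade-off the algorithm was designed to manage.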

21. Silly Window Syndrome: This problem occurs when data is passed to the sending
TCP entity in large blocks, but the application on the receiving side reads data only 1
byte at a time.

Initially, the TCP buffer on the receiving side is full (i.e., it has a window of size 0)
and the sender knows this. Then the application reads one character from the TCP
stream.
The receiving TCP now sends a window update to the sender saying that it is all
right to send 1 byte. The sender agrees and sends 1 byte. The buffer is now full, so
the receiver acknowledges the 1-byte segment and sets the window to 0. This
behaviour can go on forever.
22. Clark’s Solution: Clark’s solution is to prevent the receiver from sending a window update for 1 byte. Instead, the receiver is forced to wait until it has a decent amount of space available and advertise that instead. Specifically, it should not send a window update until it can handle the maximum segment size it advertised when the connection was established, or until its buffer is half empty, whichever is smaller.

Also, the sender can help by not sending tiny segments. It should wait until it can send a full segment, or at least one containing half of the receiver's buffer size.
23. NOTE: Nagle’s algorithm and Clark’s solution to the silly window syndrome are
complementary. Nagle was trying to solve the problem caused by the sending
application delivering data to TCP a byte at a time. Clark tried to solve the problem of
the receiving application accepting the data from TCP one byte at a time.

Both solutions are valid and can work together. The goal is for the sender not to
send small segments and the receiver not to ask for them.

24. Cumulative Acknowledgement: Acknowledgements can be sent only when all the
acknowledged data bytes have been received. This is called a cumulative
acknowledgement. If the receiver gets segments 0, 1, 2, 4, 5, 6, and 7, it can
acknowledge everything up to and including the last byte in segment 2. If the sender
times out, it then retransmits segment 3. Since the receiver has buffered segments
4 to 7, upon receipt of segment 3 it can acknowledge all bytes up to the end of
segment 7.
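The bookkeeping in this example can be sketched with a small helper (hypothetical; real TCP acknowledges byte offsets rather than segment numbers):

```python
def cumulative_ack(received):
    """Highest n such that segments 0..n have all arrived (-1 if none)."""
    n = -1
    while n + 1 in received:
        n += 1
    return n

# Segments 0, 1, 2, 4, 5, 6, 7 arrive; segment 3 is missing.
assert cumulative_ack({0, 1, 2, 4, 5, 6, 7}) == 2
# After the retransmitted segment 3 arrives, everything up to 7 is covered.
assert cumulative_ack({0, 1, 2, 3, 4, 5, 6, 7}) == 7
```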

25. Error Control and Flow Control: Error control is ensuring that the data is delivered
with the desired level of reliability, i.e. all data is delivered without any errors. Flow
control is keeping a fast transmitter from overrunning a slow receiver.

26. Concepts used in TCP are almost the same as those in the DLL (Data Link Layer).


 A frame carries an error-detecting code (e.g., a CRC or checksum) that is used
to check if the information was correctly received.
 A frame carries a sequence number to identify itself and is retransmitted by the
sender until it receives an acknowledgement of successful receipt from the
receiver. This is called ARQ (Automatic Repeat reQuest).
 There is a maximum number of frames that the sender will allow to be
outstanding at any time, pausing if the receiver is not acknowledging quickly. If
this maximum is one packet, the protocol is called stop-and-wait.
 The sliding window protocol combines these features and is also used to support
bidirectional data transfer.
 Even though the same mechanisms are used in DLL and TL, there are
differences in function and degree.

27. TCP Congestion Control: When the load offered to any network is more than it can
handle, congestion builds up. The network layer detects congestion when queues
grow large at routers and tries to manage it by dropping packets. In the transport layer, the goal is to avoid congestion rather than handle it after it occurs.

NOTE: Goodput is the rate at which useful packets are delivered by the network.

28. Efficiency and Power: An efficient allocation of bandwidth across transport entities
will use all of the network capacity that is available.

As the load increases in Fig. 6-19(a) goodput initially increases at the same rate, but
as the load approaches the capacity, goodput rises more gradually. If the transport
protocol is poorly designed and retransmits the delayed packets, congestion collapse
can occur.
The corresponding delay is given in Fig. 6-19(b). Initially the delay is small and nearly constant. As the load approaches the capacity, the delay rises sharply. Note that packets will be lost after experiencing the maximum buffering delay.
For both goodput and delay, performance begins to degrade at the onset of
congestion. We will obtain the best performance if we allocate bandwidth before the
delay starts to climb rapidly.
Metric of Power: power = load / delay
Power will initially rise as delay remains constant, but will reach a maximum and fall as delay grows rapidly. The load with the highest power is called an efficient load for the transport entity.
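With made-up (load, delay) samples, the power metric can be evaluated directly; the numbers below are illustrative, not from the text:

```python
def power(load, delay):
    return load / delay

# Hypothetical samples: delay is flat at light load and climbs steeply
# as the load approaches the network capacity.
samples = [(0.2, 1.0), (0.4, 1.0), (0.6, 1.2), (0.8, 2.0), (0.95, 8.0)]
best_load, _ = max(samples, key=lambda s: power(*s))
assert best_load == 0.6   # power peaks before the delay knee, not at full load
```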
29. Max-min Fairness: This deals with the concept of how to allocate bandwidth for the
existing senders. Dividing bandwidth into equal fractions is not correct – a sender
might be idle at that time or sending heavy payload.
In max-min fairness, the bandwidth given to one flow (path or connection) cannot be increased without decreasing the bandwidth given to another flow whose allocation is no larger.
Ex: A max-min fair allocation is shown for a network with four flows, A, B, C, and D,
in Fig. 6-20.

Each of the links between routers has the same capacity, taken to be 1 unit.
Three flows compete for the bottom-left link between routers R4 and R5. Each of
these flows therefore gets 1/3 of the link.
The remaining flow, A, competes with B on the link from R2 to R3. Since B has an
allocation of 1/3, A gets the remaining 2/3 of the link.
If more of the bandwidth on the link between R2 and R3 were given to flow B, there would be less for flow A. This may seem reasonable, since A already has more bandwidth than B. However, to give more bandwidth to B, the capacity allocated to C or D (or both) on the bottleneck link must be decreased, and those flows would end up with less bandwidth than B. Thus, the allocation is max-min fair.
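A max-min fair allocation can be computed by progressive filling (a standard algorithm, not spelled out in the text): raise every flow's rate by the same amount until some link saturates, freeze the flows crossing it, and repeat. A sketch reproducing the Fig. 6-20 allocation:

```python
def max_min_fair(links, flows):
    """links: {link: capacity}; flows: {flow: set of links it crosses}."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    while len(frozen) < len(flows):
        unfrozen = [f for f in flows if f not in frozen]
        # Smallest equal increment that saturates some link.
        inc = min(
            (cap - sum(rate[f] for f in flows if link in flows[f]))
            / sum(1 for f in unfrozen if link in flows[f])
            for link, cap in links.items()
            if any(link in flows[f] for f in unfrozen)
        )
        for f in unfrozen:
            rate[f] += inc
        for link, cap in links.items():   # freeze flows on saturated links
            used = sum(rate[f] for f in flows if link in flows[f])
            if used >= cap - 1e-9:
                frozen.update(f for f in unfrozen if link in flows[f])
    return rate

# The topology of Fig. 6-20: B, C, D share the R4-R5 link; A and B share R2-R3.
links = {"R2-R3": 1.0, "R4-R5": 1.0}
flows = {"A": {"R2-R3"}, "B": {"R2-R3", "R4-R5"},
         "C": {"R4-R5"}, "D": {"R4-R5"}}
alloc = max_min_fair(links, flows)
assert abs(alloc["A"] - 2/3) < 1e-9
assert all(abs(alloc[f] - 1/3) < 1e-9 for f in "BCD")
```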
30. Regulating the Sending Rate: This is another concept to avoid congestion in the
flow between transport entities.

In Fig. 6-22(a), we see a thick pipe leading to a small-capacity receiver. This is a flow-control limited situation. As long as the sender does not send more water than the bucket can contain, no water will be lost.
In Fig. 6-22(b), the limiting factor is not the bucket capacity, but the internal carrying capacity of the network. If too much water comes in, it will back up and some will be lost by overflowing the tunnel (not the bucket).
31. TCP Congestion Control: This is carried out in the transport layer using the AIMD rule.
 Additive Increase Multiplicative Decrease (AIMD): In this proposal, the users
additively increase their bandwidth allocations and then multiplicatively
decrease them when congestion is signalled. This is shown in Fig. 6-25.

TCP congestion control is based on implementing the AIMD approach using a window, with packet loss as the binary congestion signal. TCP maintains a congestion window whose size is the number of bytes the sender may have in the network at any time. The corresponding rate is the window size divided by the round-trip time of the connection.
 Congestion Collapse: A prolonged period during which goodput drops sharply because of congestion in the network.
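The sawtooth behaviour of AIMD can be sketched with illustrative numbers (an additive increase of one segment per RTT and halving on loss are the classic TCP choices):

```python
def aimd(cwnd, loss):
    """One RTT of AIMD: add one segment, or halve on a loss signal."""
    return cwnd / 2 if loss else cwnd + 1

cwnd, trace = 8.0, []
for loss in [False] * 4 + [True] + [False] * 3:
    cwnd = aimd(cwnd, loss)
    trace.append(cwnd)
# Additive climb, multiplicative drop at the loss, then climbing again.
assert trace == [9.0, 10.0, 11.0, 12.0, 6.0, 7.0, 8.0, 9.0]
```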

32. Slow Start Algorithm: When a connection is established, the sender initializes the
congestion window to a small initial value of four segments (maximum). The sender
then sends the initial window. The packets will take a round-trip time to be
acknowledged.
For each segment that is acknowledged, the sender adds one segment to the congestion window. Since that segment has been acknowledged, there is now one less segment in the network. The result is that every acknowledged segment allows two more segments to be sent, so the congestion window doubles every round-trip time.
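The doubling described above can be sketched as follows (this ignores the threshold at which real TCP leaves slow start):

```python
def slow_start(cwnd, rtts):
    """Every acknowledged segment allows two more, so the congestion
    window doubles each round-trip time."""
    trace = [cwnd]
    for _ in range(rtts):
        cwnd *= 2
        trace.append(cwnd)
    return trace

# An initial window of four segments doubling every round trip.
assert slow_start(4, 4) == [4, 8, 16, 32, 64]
```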

Part-2 Application Layer


1. World Wide Web: The Web is an architectural framework for accessing linked
content spread out over millions of machines all over the Internet.
The initial idea was to help teams, with members in many countries and time zones,
to collaborate using a collection of reports, blueprints, drawings, photos, and other
documents produced by experiments in particle physics.
The proposal for a web of linked documents came from CERN physicist Tim
Berners-Lee.
Marc Andreessen at the University of Illinois developed the first graphical browser, Mosaic, in February 1993. Others include Netscape Navigator, Internet Explorer, etc.
2. Through the 1990s and 2000s, Web content grew exponentially until there were
millions of sites and billions of pages. A small number of these sites became
tremendously popular. Ex: Amazon, Google, Facebook (Meta), ChatGPT, etc.

3. Architecture of the WWW: The Web consists of a vast, worldwide collection of content in the form of Web pages, often just called pages for short. Each page may contain links to other pages anywhere in the world.

Users can follow a link by clicking on it, which then takes them to the page pointed
to. This process can be repeated indefinitely. The idea of having one page point to
another is called hypertext.
Pages are generally viewed with a program called a browser. Firefox, Internet
Explorer, and Chrome are examples of popular browsers.

4. HTTP—The HyperText Transfer Protocol: HTTP is a simple request-response protocol that normally runs over TCP. It specifies what messages clients may send to servers and what responses they get back in return.
The content may simply be a document that is read off a disk, or the result of a
database query and program execution. The page is a static page if it is a document
that is the same every time it is displayed.
In contrast, if it was generated on demand by a program or contains a program it is a
dynamic page.
5. Client Side: A browser is a program that can display a Web page and catch mouse
clicks to items on the displayed page. When an item is selected, the browser follows
the hyperlink and fetches the page selected.
The web needed mechanisms for naming and locating pages. Each page is
assigned a URL (Uniform Resource Locator) that effectively serves as the page’s
worldwide name.
URLs have three parts:
 Protocol (also known as the scheme),
 DNS (Domain Name System) name of the machine on which the page is located, and
 Path uniquely indicating the specific page (a file to read or program to run on the machine).
Ex: http://www.gvpcew.ac.in/it.php
This URL consists of three parts: the protocol (http), the DNS name of the host (www.gvpcew.ac.in), and the path name (it.php).
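Python's standard library can split a URL into exactly these three parts:

```python
from urllib.parse import urlparse

u = urlparse("http://www.gvpcew.ac.in/it.php")
assert u.scheme == "http"               # the protocol
assert u.netloc == "www.gvpcew.ac.in"   # the DNS name of the host
assert u.path == "/it.php"              # the path to the page
```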

When a user clicks on a hyperlink, the browser carries out a series of steps in order
to fetch the page pointed to. Let us trace the steps that occur when our example link
is selected:
1. The browser determines the URL.
2. The browser asks DNS for the IP address of the server www.gvpcew.ac.in.
3. DNS replies with 128.208.3.88.
4. The browser makes a TCP connection to 128.208.3.88 on port 80, for the HTTP
protocol.
5. It sends over an HTTP request asking for the page /it.php.
6. The www.gvpcew.ac.in server sends the page as an HTTP response, by sending
the file /it.php.
7. If the page includes URLs that are needed for display, the browser fetches the
other URLs using the same process. In this case, the URLs include multiple
embedded images, an embedded video, and a script from google-analytics.com.
8. The browser displays the page /it.php.
9. The TCP connections are released if there are no other requests.
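Steps 4-6 of this trace can be sketched with raw sockets; `build_get` and `fetch` are hypothetical helper names, and the network call itself is shown but not run:

```python
import socket

def build_get(host, path):
    """A minimal HTTP/1.1 GET request (step 5)."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n").encode("ascii")

def fetch(host, path, port=80):
    # Steps 2-4: resolve the name and open a TCP connection to port 80.
    with socket.create_connection((host, port)) as s:
        s.sendall(build_get(host, path))       # step 5: send the request
        chunks = []
        while (data := s.recv(4096)):          # step 6: read the response
            chunks.append(data)
    return b"".join(chunks)

# fetch("www.gvpcew.ac.in", "/it.php")  # not run here: requires network access
```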

6. Some simplified URL forms are listed below:

7. A plug-in is a third-party code module that is installed as an extension to the browser, as illustrated in Fig. 7-20(a). Common examples are plug-ins for PDF, Flash, and QuickTime to render documents and play audio and video. Because plug-ins run inside the browser, they have access to the current page and can modify its appearance.

The other way to extend a browser is to make use of a helper application. This is a
complete program, running as a separate process. It is illustrated in Fig. 7-20(b).
Since the helper is a separate program, the interface accepts the name of a scratch
file where the content file has been stored, opens the file, and displays the contents.
Ex: Microsoft Word or PowerPoint.
8. Server Side: The browser parses the URL and interprets the part between http://
and the next slash as a DNS name to look up. Then, the browser establishes a TCP
connection to port 80 on that server and sends a command containing the rest of the
URL. The server then returns the page for the browser to display.
The steps that the server performs are:
 Accept a TCP connection from a client (a browser).
 Get the path to the page, which is the name of the file requested.
 Get the file (from disk).
 Send the contents of the file to the client.
 Release the TCP connection.
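The five steps map almost line-for-line onto a toy server; `make_listener` and `serve_once` are hypothetical names, and there is no error handling:

```python
import socket

def make_listener(port=0):
    """A listening TCP socket (port 0 picks a free ephemeral port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    return srv

def serve_once(srv, root="."):
    conn, _ = srv.accept()                        # 1. accept a connection
    with conn:
        request = conn.recv(4096).decode("ascii")
        path = request.split()[1].lstrip("/")     # 2. get the requested path
        with open(root + "/" + path, "rb") as f:  # 3. get the file from disk
            body = f.read()
        conn.sendall(b"HTTP/1.1 200 OK\r\n\r\n" + body)  # 4. send the contents
    # 5. the TCP connection is released when the 'with' block exits
```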

9. To tackle the problem of serving a single request at a time, we make the server
multithreaded. In one design, the server consists of a front-end module that accepts
all incoming requests and k processing modules, as shown in Fig. 7-21.

The k + 1 threads all belong to the same process, so the processing modules all
have access to the cache within the process’ address space. When a request comes
in, the front end accepts it and builds a short record describing it. It then hands the
record to one of the processing modules.
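The front-end/worker design can be sketched with a shared queue; the record format and in-process cache below are stand-ins for the real structures:

```python
import queue
import threading

cache = {}                 # shared: all workers live in one process
requests = queue.Queue()   # front end hands short records to the workers

def processing_module():
    while True:
        record = requests.get()
        if record is None:                  # shutdown signal
            break
        path, reply = record
        body = cache.setdefault(path, f"contents of {path}")
        reply.append(body)                  # stand-in for sending the response
        requests.task_done()

k = 3
workers = [threading.Thread(target=processing_module) for _ in range(k)]
for w in workers:
    w.start()

reply = []
requests.put(("/it.php", reply))            # front end hands off a record
requests.join()
for _ in workers:
    requests.put(None)
for w in workers:
    w.join()
assert reply == ["contents of /it.php"]
```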

10. HTTP Connections: The browser contacts a server to establish a TCP connection
to port 80 on the server’s machine. The advantage of using TCP is that neither
browsers nor servers have to worry about lost data, reliability, or congestion control – all are handled by TCP.

11. Persistent Connections: Through these types of connections, it is possible to establish a TCP connection, send a request and get a response, and then send additional requests and get additional responses. This strategy is also called connection reuse.

In Fig. 7-36(b), the page is fetched with a persistent connection. That is, the TCP
connection is opened at the beginning, three requests are sent, one after the other,
and then the connection is closed.

Observe that the procedure completes more quickly. There are two reasons for the
speedup.
 Time is not wasted setting up additional connections.
 The transfer of the same images proceeds more quickly, because of TCP
congestion control.
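`http.client` in Python's standard library reuses one TCP connection for consecutive requests, which makes the Fig. 7-36(b) pattern easy to sketch (`fetch_many` is a hypothetical helper; it is not run against a live site here):

```python
import http.client

def fetch_many(host, port, paths):
    """Fetch several pages over a single TCP connection (connection
    reuse): open once, send the requests in turn, close once at the end."""
    conn = http.client.HTTPConnection(host, port)   # one TCP connection
    bodies = []
    for path in paths:
        conn.request("GET", path)
        bodies.append(conn.getresponse().read())
    conn.close()                                    # closed once, at the end
    return bodies
```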

12. Parallel Connection Method: It is possible to send one request per TCP connection, but run multiple TCP connections in parallel. This has the disadvantage of extra overhead but offers better performance.

13. Methods: HTTP is designed for use in the Web, but was made more general to
support object-oriented uses. For this reason, methods are supported.

 The GET method requests the server to send the page.


 The HEAD method just asks for the message header, without the actual page.
This method can be used to collect information for indexing, or to test a URL.
 The POST method is used when forms are submitted; it uploads data to the
server.
 The PUT method is the reverse of GET: instead of reading the page, it writes the
page.
 DELETE removes the page.
 The TRACE method is for debugging. It instructs the server to send back the
request that it has received. This method is useful when requests are not being
processed correctly.
 The CONNECT method lets a user make a connection to a Web server through
an intermediate device, such as a Web cache.
 The OPTIONS method is used by a client to query the server for a page and
obtain the methods and headers that can be used on the page.

14. Electronic Mail:


(a) Architecture and Services:

The architecture of the email system consists of two kinds of subsystems:


 user agents -> allow people to read and send email, and
 message transfer agents -> move the messages from the source to the
destination.
The user agent is a program that provides a GUI that lets users interact with the
email system. It includes a means to compose messages and replies to messages,
display incoming messages, and organize messages by filing, searching, and
discarding them. The act of sending new messages into the mail system for delivery
is called mail submission.
The message transfer agents are typically system processes. They run in the
background on mail server machines. Their job is to automatically move email
through the system from the originator to the recipient with SMTP (Simple Mail
Transfer Protocol).
Mailboxes store the email that is received for a user. They are maintained by mail
servers.
There is a distinction between the envelope and its contents. The envelope contains
all the information needed for transporting the message, such as the destination
address, priority, and security level. The message transport agents use the envelope
for routing, just as the post office does.
The message inside the envelope consists of two separate parts: the header and the
body.

Email security is all about protecting your email account and the information you
send and receive through it. It's important because email is a common target for
cyberattacks, such as phishing scams, malware, and spam.
Here are some key things email security covers:
 Protecting against malicious content: This includes filtering out spam
emails, blocking malware attachments, and detecting phishing attempts.
 Securing your account: This involves using strong passwords, enabling two-
factor authentication, and keeping your email software up to date.
 Encrypting emails: This scrambles the content of your emails so that only
the intended recipient can read them.
There are several ways to improve your email security. Here are some best
practices:
 Use strong and unique passwords for your email account.
 Enable two-factor authentication (2FA) for your email account.
 Be cautious about opening attachments, especially from unknown senders.

 Don't click on links in emails from suspicious senders.


 Be wary of emails that urge you to take immediate action or provide personal
information.
 Use email encryption for sensitive communications.
 Keep your email software and device operating system up to date.
By following these tips, you can help to keep your email account safe and secure.

15. TELNET: TELNET stands for Teletype Network, a protocol that enables a local computer to connect to a remote computer. It is used as a standard TCP/IP protocol for the virtual terminal service proposed by ISO.

NOTE: The computer which starts the connection is known as the local computer.

During telnet operation, whatever is being performed on the remote computer will be
displayed by the local computer. Telnet operates on a client/server principle.

The local computer uses a telnet client program and the remote computers use a
telnet server program.

16. Logging: The logging process can be further categorized into two parts:
(a) Local Login
(b) Remote Login

17. Local Login: Whenever a user logs into its local system, it is known as local login.

Procedure of Local Login:


 Keystrokes are accepted by the terminal driver when the user types at the
terminal.
 Terminal Driver passes these characters to OS.
 OS validates the combination of characters and opens the required application.

18. Remote Login: It is a process in which users can log in to a remote site, i.e., a remote computer, and use the services that are available on it. With remote login, the user's keystrokes are carried to the remote computer, and the results of processing them are transferred back from the remote computer to the local computer for display.

The Procedure of Remote Login:



 When the user types on the local computer, the local OS accepts the data.
 The local computer sends the data to the TELNET client.
 TELNET client transforms these characters to a universal character set called
Network Virtual Terminal (NVT).
 Commands in NVT form travel through the Internet and arrive at the TCP/IP stack of the remote computer.
 The remote operating system receives characters from a pseudo-terminal
driver, which is a piece of software that pretends that characters are coming
from a terminal.
 The operating system then passes the character to the appropriate application
program.

19. Domain Name System: The DNS is a hierarchical, domain-based naming scheme
and a distributed database system for implementing this naming scheme. It is used
for mapping host names to IP addresses and can also be used for other purposes.
DNS is used as follows. To map a name onto an IP address, an application program calls a library procedure called the resolver, passing it the name as a parameter.
The resolver sends a query containing the name to a DNS server, which looks up the
name and returns a response containing the IP address. The resolver then returns it
to the caller.
The query and response messages are sent as UDP packets. The program can then establish a TCP connection with the host or send it UDP packets.
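From an application's point of view, the resolver call looks like this; `socket.gethostbyname` invokes the system's stub resolver, and `resolve` is just a hypothetical wrapper name:

```python
import socket

def resolve(name):
    """Pass a name to the resolver; get back an IPv4 address string."""
    return socket.gethostbyname(name)

# e.g. resolve("www.gvpcew.ac.in") sends a DNS query (normally over UDP)
# and returns the host's IP address; it requires network access.
```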
20. For the Internet, the top of the naming hierarchy is managed by an organization
called ICANN (Internet Corporation for Assigned Names and Numbers). ICANN was
created as part of maturing the Internet to a worldwide, economic concern.
The Internet is divided into over 250 top-level domains, where each domain covers many hosts. Each domain is partitioned into subdomains, and these are further partitioned, and so on.
All these domains can be represented by a tree, as shown in Fig. 7-1. The leaves of
the tree represent domains that have no subdomains (but do contain machines, of
course). A leaf domain may contain a single host, or it may represent a company and
contain thousands of hosts.

The Domain Name System (DNS) is a hierarchical and decentralized naming system
for computers, services, or other resources connected to the internet or a private
network. It translates human-readable domain names to numerical IP addresses
needed for locating and identifying computer services and devices.

### Key Points about the Domain Name System (DNS)

#### 1. **Purpose and Function**


- **Name Resolution**: DNS translates domain names (e.g., www.example.com) into
IP addresses (e.g., 192.0.2.1) so that browsers can load Internet resources.
- **User-Friendly**: It allows users to access websites using easy-to-remember
names instead of complex numerical addresses.

#### 2. **Hierarchical Structure**


- **Domain Hierarchy**: DNS uses a hierarchical structure with different levels,
including top-level domains (TLDs), second-level domains, and subdomains.
- **Top-Level Domains (TLDs)**: The highest level in the hierarchy, such
as .com, .org, .net, .edu, and country-specific TLDs like .uk, .jp.

- **Second-Level Domains**: Directly below TLDs, typically representing the name of an organization or entity (e.g., example in www.example.com).
- **Subdomains**: Additional parts of the domain that can represent different
services or departments (e.g., mail.example.com).

#### 3. **Components of DNS**


- **DNS Resolver**: A server that receives a domain name query from a user’s
device and then interacts with other DNS servers to find the IP address.
- **Root DNS Servers**: The top-level DNS servers that direct queries to the
appropriate TLD DNS servers.
- **TLD DNS Servers**: Servers responsible for specific top-level domains and
directing queries to authoritative DNS servers for the second-level domains.
- **Authoritative DNS Servers**: Servers that contain the actual DNS records for
specific domains and provide responses to queries with the corresponding IP
addresses.

#### 4. **DNS Records**


- **A Record**: Maps a domain to an IPv4 address.
- **AAAA Record**: Maps a domain to an IPv6 address.
- **CNAME Record**: Maps a domain to another domain name (canonical name).
- **MX Record**: Specifies the mail servers for handling emails for the domain.
- **TXT Record**: Provides text information to sources outside the domain, often
used for email verification and security purposes (e.g., SPF, DKIM).

#### 5. **Process of DNS Resolution**


1. **User Query**: A user types a domain name into their browser.
2. **DNS Resolver**: The user's device sends the query to a DNS resolver.
3. **Root Server**: The DNS resolver contacts a root DNS server, which points to a
TLD server.
4. **TLD Server**: The TLD server points to an authoritative DNS server for the
domain.
5. **Authoritative Server**: The authoritative server returns the IP address
associated with the domain name.
6. **Website Access**: The resolver sends the IP address back to the user's device,
which then accesses the website using the IP address.

#### 6. **DNS Caching**


- **Resolver Cache**: DNS resolvers cache responses to reduce load times and
decrease traffic by storing DNS query results for a certain period.
- **TTL (Time To Live)**: The duration for which a DNS record is cached, defined in
the DNS record itself.
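A resolver cache with TTL expiry can be sketched in a few lines (a simplified structure; real resolvers cache whole resource records, and the explicit `now` parameter here just keeps the example deterministic):

```python
import time

cache = {}

def cache_put(name, address, ttl, now=None):
    cache[name] = (address, (now if now is not None else time.monotonic()) + ttl)

def cache_get(name, now=None):
    entry = cache.get(name)
    if entry is None:
        return None
    address, expires = entry
    if (now if now is not None else time.monotonic()) >= expires:
        del cache[name]     # stale: the TTL expired, a fresh lookup is needed
        return None
    return address

cache_put("www.example.com", "192.0.2.1", ttl=300, now=0.0)
assert cache_get("www.example.com", now=100.0) == "192.0.2.1"
assert cache_get("www.example.com", now=400.0) is None
```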

#### 7. **Security Concerns and Measures**


- **DNS Spoofing/Cache Poisoning**: Attackers can introduce corrupt DNS data into
the resolver’s cache, redirecting users to malicious sites.
- **DNSSEC (Domain Name System Security Extensions)**: Adds security to
prevent certain types of attacks by ensuring that the responses to DNS queries are
authentic.

#### 8. **DNS Services and Providers**


- **Managed DNS Providers**: Companies that provide DNS resolution services and
additional features like load balancing, failover, and security enhancements.
- **Popular Providers**: Examples include Google Public DNS, Cloudflare DNS, and
OpenDNS.

Understanding DNS is fundamental to comprehending how the internet works, as it ensures that users can navigate to websites using human-readable names rather than numeric IP addresses.
21. The process of looking up a name and finding an address is called name resolution. When a resolver has a query about a domain name, it passes the query to a local name server. If the domain being sought falls under the jurisdiction of that name server, it returns the authoritative resource records.
An authoritative record is one that comes from the authority that manages the record
and is thus always correct. Authoritative records are in contrast to cached records,
which may be out of date.

Deep Dive into DNS:

Here's a detailed explanation of each term related to the Domain Name System
(DNS):
1. Resolution:
 Definition: DNS resolution is the process of translating a human-readable domain name (like www.example.com) into its corresponding machine-readable IP address (like 192.0.2.1).
 Process:
o When you enter a domain name in your browser, your computer
contacts a DNS resolver (often provided by your internet service
provider).
o The resolver queries a series of DNS servers to find the authoritative
name server for the domain.
o The authoritative name server holds the actual IP address for that
domain name.
o Once found, the IP address is returned to the resolver, then back to
your computer.
o Your computer can then connect to the website using the IP address.
2. Caching:
 Definition: DNS caching is an optimization technique where frequently
accessed DNS records are stored temporarily on a resolver or local machine.

 Benefits:
o Speeds up subsequent lookups for the same domain name by
eliminating the need to query the entire DNS hierarchy again.
o Reduces load on upstream DNS servers.
 Cache Invalidation: Cached entries have a Time-To-Live (TTL) associated
with them. After the TTL expires, the cache entry becomes stale and a fresh
lookup is performed.
3. Resource Records (RRs):
 Definition: Resource Records (RRs) are the fundamental building blocks of
DNS data. They contain information about a specific domain name and come
in various types.
 Common RR Types:
o A Record: Maps a domain name to an IPv4 address (e.g., www.example.com -> 192.0.2.1)
o AAAA Record: Maps a domain name to an IPv6 address.
o CNAME Record: Creates an alias for a domain name, pointing it to another domain name (e.g., www.example.com -> example.com)
o MX Record: Specifies mail exchange servers for a domain name, used
for email routing.
o NS Record: Identifies the authoritative name servers for a domain.
o PTR Record (Reverse DNS): Maps an IP address to a domain name
(less common).
 Structure: Each RR consists of several fields, including:
o Name: The domain name the record pertains to.
o Type: The type of record (A, AAAA, CNAME, etc.).
o TTL: The time a record can be cached before becoming stale.
o Data: The specific information associated with the record type (e.g., IP
address for A record).
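The fields above map naturally onto a small record type (a sketch; real DNS records also carry a class field, which is almost always IN for Internet):

```python
from dataclasses import dataclass

@dataclass
class ResourceRecord:
    name: str    # the domain name the record pertains to
    rtype: str   # A, AAAA, CNAME, MX, NS, PTR, ...
    ttl: int     # seconds the record may be cached before becoming stale
    data: str    # type-specific value, e.g. the IPv4 address for an A record

rr = ResourceRecord("www.example.com", "A", 3600, "192.0.2.1")
assert rr.rtype == "A" and rr.data == "192.0.2.1"
```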
4. DNS Messages:
 Definition: DNS messages are the packets exchanged between DNS
resolvers and servers during the resolution process.
 Types:

o DNS Request: Initiated by a resolver to query for a specific domain name.
o DNS Response: Sent by a server containing the requested resource
record (IP address) or an error message if not found.
 Structure: DNS messages follow a specific format defined in the DNS
protocol. They contain information about the request/response type, the
queried domain name, and additional flags and options.
5. Registrars:
 Definition: Domain name registrars are accredited organizations that allow
individuals and businesses to register domain names.
 Function:
o Registrars provide a user interface for searching available domain
names and registering them for a specific period (typically 1-10 years).
o They manage the registration process and ensure domain names are
unique within the Top-Level Domain (TLD) they represent.
o Registrars typically charge a fee for domain name registration and
renewal.
6. Security of DNS Name Servers:
 Importance: DNS security is crucial for maintaining a reliable and trustworthy
internet.
 Threats:
o DNS Spoofing: Attackers redirect users to malicious websites by
providing false IP addresses for legitimate domain names.
o DNS Hijacking: Attackers gain control of a domain's authoritative
name servers, allowing them to redirect traffic or steal sensitive data.
o DNS Cache Poisoning: Attackers inject false information into DNS
caches, leading users to compromised websites.
 Security Measures:
o DNSSEC: A set of extensions that adds cryptographic signing to DNS
data, verifying the authenticity and integrity of records.
o DDoS Protection: Mitigating attacks that overwhelm DNS servers with
a flood of traffic.
o Secure Registration Practices: Implementing strong authentication and access controls for domain-management accounts.

**********************************************************************************************
