Unit-5 Notes
1. The network layer provides end-to-end packet delivery using datagrams or virtual
circuits. Building on this, the transport layer provides data transport from a process on a
source machine to a process on a destination machine.
The ultimate goal of the transport layer is to provide efficient, reliable, and cost-
effective data transmission service to its users, normally processes in the application
layer. The relationship of the network, transport, and application layers is illustrated
in Fig. 6-1.
3. NOTE: The term segment is used for messages sent from transport entity to
transport entity. Internet protocols use this term.
4. Berkeley Sockets: Sockets were first released as part of the Berkeley UNIX 4.2BSD
software distribution in 1983. They quickly became popular. The primitives are now
widely used for Internet programming on many UNIX operating systems.
Primitives: The key Berkeley socket primitives are SOCKET, BIND, LISTEN, ACCEPT,
CONNECT, SEND, RECEIVE, and CLOSE.
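These primitives map directly onto modern socket APIs. Below is a minimal sketch in Python, whose standard socket module wraps the Berkeley interface; the loopback address and port 5000 are arbitrary choices for the example.

```python
import socket
import threading
import time

def echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET: create an end point
    srv.bind(("127.0.0.1", 5000))                            # BIND: attach a local address
    srv.listen(1)                                            # LISTEN: passive open
    conn, _ = srv.accept()                                   # ACCEPT: wait for a connection
    conn.sendall(conn.recv(1024))                            # RECEIVE + SEND: echo the data
    conn.close()                                             # CLOSE: release the connection
    srv.close()

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)                                              # let the server reach accept()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)      # SOCKET
cli.connect(("127.0.0.1", 5000))                             # CONNECT: active open
cli.sendall(b"hello")                                        # SEND
print(cli.recv(1024))                                        # RECEIVE -> b'hello'
cli.close()                                                  # CLOSE
```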
In some ways transport protocols resemble the data link protocols: both handle error
control, sequencing, and flow control. However, differences also exist, due to
dissimilarities between the environments in which the two protocols operate, as shown
in Fig. 6-7.
At the data link layer, two routers communicate directly via a physical channel. At the
transport layer, this physical channel is replaced by the entire network.
At the data link layer, establishing a connection over the wire is simple: the other end is
always there; just sending a message is sufficient. If the message is not acknowledged
due to an error, it can be resent. At the transport layer, in contrast, initial connection
establishment is more complicated.
Another difference between the data link layer and the transport layer is the
existence of storage capacity in the network. When a router sends a packet over a
link, it may arrive or be lost, but it cannot go into hiding and suddenly emerge later. In
a network with storage capacity, however, a packet may be delayed inside the network
and pop out again much later, when it is least expected.
A final difference concerns buffering and flow control. Both layers need them, but the
transport layer must manage a large and dynamically varying number of connections,
so it requires a different approach from the fixed per-link allocation used at the data
link layer.
6. Port Number: TCP service is obtained by both the sender and the receiver creating
end points, called sockets. Each socket has a socket number (address) consisting
of the IP address of the host and a 16-bit number local to that host, called a
port.
A socket may be used for multiple connections at the same time. In other words, two
or more connections may terminate at the same socket. Connections are identified
by the socket identifiers at both ends, that is, (socket1, socket2).
Port numbers below 1024 are reserved for standard services of privileged users
(e.g., root in UNIX systems). They are called well-known ports.
NOTE: A datagram is a basic transfer unit associated with a network and provides
connectionless communications. Datagrams are structured as header and payload
sections. The delivery, arrival time, and order of arrival of datagrams are not
guaranteed by the network.
#### Introduction
The User Datagram Protocol (UDP) is a core protocol in the Internet Protocol (IP)
suite. Defined in RFC 768, UDP provides a lightweight, connectionless
communication method that is ideal for applications needing fast, efficient data
transmission with minimal overhead.
3. **Low Overhead**: The UDP header is only 8 bytes, making it suitable for
applications where minimizing overhead is crucial.
- **Length (16 bits)**: The length of the UDP header and data.
- **Checksum (16 bits)**: Used for error-checking the header and data.
1. **Minimal Error Checking**: While UDP includes a checksum for error detection, it
does not guarantee delivery. If an error is detected, the packet is discarded without
any notification.
2. **No Flow Control**: UDP does not manage the rate of data transmission between
sender and receiver, which can lead to potential data loss if the receiver is
overwhelmed.
3. **No Congestion Control**: Unlike TCP, UDP does not reduce data transmission
rates in response to network congestion.
1. **DNS (Domain Name System)**: Uses UDP for quick query and response
exchanges, where low latency is more critical than reliability.
2. **VoIP (Voice over IP)**: Transmits voice data where occasional loss is tolerable,
but low latency is essential for maintaining conversation quality.
3. **Streaming Media**: Delivers audio and video content efficiently, where slight
data loss is acceptable in favor of timely delivery.
5. **TFTP (Trivial File Transfer Protocol)**: Uses UDP for simple, low-overhead file
transfers.
### Conclusion
UDP is a simple but powerful protocol suitable for applications that prioritize speed
and efficiency over reliability. It is widely used in scenarios where timely delivery is
more critical than perfect accuracy, making it a fundamental component of many
real-time and streaming services.
When a UDP packet arrives, its payload is handed to the process attached to the
destination port. This attachment occurs through the BIND primitive.
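A short sketch of this in Python (port 9999 and the loopback address are arbitrary example values): the receiver's bind() is the BIND step, and an arriving payload is handed to whichever socket owns the destination port.

```python
import socket

# Receiver: create a UDP socket and BIND it to port 9999; the OS now hands
# this process every UDP payload addressed to that port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 9999))

# Sender: no connection setup; each sendto() becomes one UDP datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ping", ("127.0.0.1", 9999))

payload, addr = rx.recvfrom(2048)   # delivered by destination port number
print(payload, addr)                # b'ping' ('127.0.0.1', <sender's port>)
rx.close(); tx.close()
```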
The advantage of UDP over IP is its use of source and destination ports. Without
the port fields, the transport layer would not know what to do with incoming packets;
with them, it delivers each segment to the correct application.
The UDP length field includes the 8-byte header and the data. The minimum length
is 8 bytes (a header with no data) and the maximum is 65,515 bytes, which is lower
than the largest number that fits in 16 bits (65,535) because of the 65,535-byte size
limit on IP packets, 20 bytes of which go to the IP header.
An optional Checksum is also provided for extra reliability. It checksums the header,
the data, and an IP pseudo-header.
It contains the 32-bit IPv4 addresses of the source and destination machines, the
protocol number for UDP (17), and the byte count for the UDP segment (header plus
data). Including the pseudo-header in the checksum helps detect misdelivered packets.
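The pseudo-header computation can be sketched as follows; this is a from-scratch illustration of the RFC 768 rule (ones'-complement sum over pseudo-header, UDP header, and data), with example addresses and ports.

```python
import struct, socket

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    """Ones'-complement checksum over the IPv4 pseudo-header plus the UDP
    header and data. (A transmitted 0 means 'no checksum'; a computed 0
    is therefore sent as 0xFFFF.)"""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, 17, len(udp_segment)))  # zero, proto 17, UDP length
    data = pseudo + udp_segment
    if len(data) % 2:                 # pad to a 16-bit boundary
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Example: 8-byte header (src port 1234, dst port 53, length 12, checksum 0)
# followed by 4 data bytes; the checksum field must be zero while computing.
segment = struct.pack("!HHHH", 1234, 53, 12, 0) + b"test"
print(hex(udp_checksum("10.0.0.1", "10.0.0.2", segment)))
```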
8. UDP does not do flow control, congestion control, or retransmission upon receipt of a
bad segment. It just provides an interface to the IP protocol with multiple processes
using the ports and optional end-to-end error detection.
UDP APPLICATIONS
Live Streaming (Audio/Video): Streaming live video or audio is all about smooth
playback. Occasional packet loss with UDP is less disruptive than the delays caused
by retransmission requests in TCP.
Online Gaming: Fast reflexes are king in online gaming. UDP ensures quick
response times by prioritizing speed over order and error correction. Even with a
dropped packet here and there, the gaming experience remains fluid.
Voice over IP (VoIP): Similar to live streaming, real-time voice calls benefit from
UDP's speed. While a slight delay or missed packet might cause a stutter, it's
generally preferable to the choppiness caused by retransmissions in TCP.
DNS (Domain Name System): DNS lookups involve translating website names into
IP addresses. These are typically small data packets, and UDP's speed and efficiency
make it a perfect fit for these quick exchanges.
Network Management Protocols (NMPs): Protocols like SNMP (Simple Network
Management Protocol) often utilize UDP for quick exchanges of network monitoring
data. Speed is crucial for real-time network management, and some data loss can be
tolerated.
Video Conferencing: Similar to live streaming and VoIP, real-time video
conferencing benefits from UDP's speed. A slight glitch due to a dropped packet is
less disruptive than the delays caused by data retransmission.
Real-time Stock Tickers: Getting stock quotes in real-time is essential for financial
professionals. UDP delivers these updates quickly, even if a few data packets are lost
along the way.
Online Multiplayer Gaming: Just like online gaming in general, real-time
multiplayer games rely on UDP for fast response times. A bit of data loss is less
disruptive than the lag caused by TCP's error correction mechanisms.
TFTP (Trivial File Transfer Protocol): While not commonly used today, TFTP is a
simple file transfer protocol that utilizes UDP for quick transfers of small files.
Remember: When speed is paramount and some data loss is acceptable, UDP is your
champion. However, for applications where data integrity is critical (like downloading
important files), TCP remains the reliable choice.
TCP was designed to adapt dynamically to the properties of the internetwork and to be
robust in the face of failures (fault tolerance).
Each machine supporting TCP has a TCP transport entity (ex: a library procedure, a
user process, or part of the kernel). It manages TCP streams and interfaces to the IP
layer.
A TCP entity accepts user data streams from local processes, breaks them up into
pieces, and sends each piece as a separate IP datagram. When datagrams
containing TCP data arrive at the destination, they are given to the local TCP entity,
which reconstructs the original byte streams.
init. This is the first daemon to start up when Unix boots, and it spawns all other
processes.
inetd. This listens for internet requests on a designated port number and assigns
a server program to handle them. Services handled by inetd include rlogin,
telnet, ftp etc.
crond. This daemon executes scheduled commands.
dhcpd. This daemon provides Dynamic Host Configuration Protocol services.
ftpd. This daemon is often started by inetd to handle File Transfer Protocol
requests.
httpd. This daemon acts as a web server.
NOTE:
Simplex: Only one of the two devices on a link can transmit; the other can only
receive. This mode sends data over the channel in one direction only. Example:
a keyboard and a traditional monitor. The keyboard can only provide input;
the monitor can only display output.
Half-Duplex: Each station can both transmit and receive, but not at the same
time. When one device is sending, the other can only receive. Example: walkie-
talkies, in which messages travel in both directions but only one at a time.
Full Duplex: Both stations can transmit and receive simultaneously. Ex:
Telephone.
Point-to-point means that each connection has exactly two end points.
TCP does not support multicasting or broadcasting.
12. A TCP connection is a byte stream, not a message stream. Message boundaries
are not preserved end to end.
For example, if the sending process sends four 512-byte packets to a TCP stream,
these data may be delivered to the receiving process as four 512-byte chunks, two
1024-byte chunks, or one 2048-byte chunk.
When an application passes data to TCP, TCP may send it immediately or buffer it
(in order to collect a larger amount to send at once), at its discretion.
If some data is to be sent immediately (ex: interactive game), TCP has the notion of
a PUSH flag that is carried on packets. Through this, applications tell TCP
implementations not to delay the transmission.
NOTE: Urgent data is an interesting feature of TCP service that remains in the
protocol but is rarely used. When an application has high priority data, the sending
application passes it to TCP with the URGENT flag. This causes TCP to stop
accumulating data and transmit everything it has immediately.
13. The TCP Protocol: Every byte on a TCP connection has its own 32-bit sequence
number. Separate 32-bit sequence numbers are carried on packets for the sliding
window position in one direction and for acknowledgements in the reverse direction.
The sending and receiving TCP entities exchange data in the form of segments. A
TCP segment consists of a 20-byte header followed by data bytes. The TCP
software decides how big segments should be.
The protocol used by TCP entities is the sliding window protocol with a dynamic
window size.
When a sender transmits a segment, it starts a timer. When the segment arrives at
the destination, the receiving TCP entity sends back a segment bearing an
acknowledgement number equal to the next sequence number it expects to receive
and the remaining window size.
If the sender’s timer goes off before the acknowledgement is received, the sender
transmits the segment again.
14. Disadvantages:
Segments can arrive out of order. Ex: Bytes 3072–4095 can arrive but cannot be
acknowledged because bytes 2048–3071 have not turned up yet.
Segments can be delayed so long in transit that the sender times out and
retransmits them.
Retransmissions may include different ranges than the original transmission.
15. The TCP Segment Header: The figure below shows the layout of a TCP segment.
Every segment begins with a 20-byte header. Note that Segments without any data
are legal and are used for acknowledgements and control messages.
The Source port and Destination port fields identify the local end points of the
connection.
The source and destination end points together identify the connection, which is
described by a 5-tuple: protocol (TCP), source IP and source port, and destination IP
and destination port.
Eight 1-bit flags: CWR and ECE are used to signal congestion when ECN
(Explicit Congestion Notification) is used, as specified in RFC 3168. ECE is set
to signal an ECN-Echo to a TCP sender to tell it to slow down when the TCP
receiver gets a congestion indication from the network. CWR is set to signal
Congestion Window Reduced from the TCP sender to the TCP receiver so that
it knows the sender has slowed down and can stop sending the ECN-Echo.
URG -> Urgent
ACK-> Acknowledgement
PSH-> Pushed data
RST -> Reset a connection
SYN -> Synchronization
FIN-> Finish
The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit off
and waits for a response.
When this segment arrives at the destination, the TCP entity checks to see if there is
a process that has done a LISTEN on the port field. If not, it sends a reply with the
RST bit on to reject the connection.
If some process is listening to the port, that process is given the incoming TCP
segment. If it accepts the connection, an acknowledgement is sent back.
Note that a SYN segment consumes 1 byte of sequence space so that it can be
acknowledged.
### Windows in TCP
#### Introduction
In TCP (Transmission Control Protocol), the concept of "windows" is essential for
managing the flow of data and ensuring efficient, reliable communication between
sender and receiver. TCP windows are part of the flow control and congestion
control mechanisms that help optimize network performance and prevent packet loss
and congestion.
The sliding window protocol is a method used by TCP to control the amount of data
that can be sent before receiving an acknowledgment. It allows multiple packets to
be sent before requiring an acknowledgment for the earliest packet. The key
components are:
1. **Sender Window**: The range of sequence numbers that the sender is allowed to
send.
2. **Receiver Window**: The range of sequence numbers that the receiver is
prepared to accept.
3. **Window Size**: The number of bytes that can be sent without receiving an
acknowledgment. This size can change dynamically based on network conditions.
1. **Slow Start**: Initially, the congestion window (cwnd) grows exponentially with
each acknowledgment received, doubling each round-trip time (RTT).
2. **Congestion Avoidance**: Once a threshold (ssthresh) is reached, the window
grows linearly to avoid sudden congestion.
3. **Fast Retransmit and Fast Recovery**: Upon detecting packet loss (e.g., via
duplicate ACKs), TCP reduces the window size and retransmits the lost packet
without waiting for a timeout.
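The interplay of the three phases can be seen in a toy simulation. This is a sketch, not a real TCP: window sizes are counted in segments rather than bytes, and the initial ssthresh of 16 and Reno-style halving are illustrative assumptions.

```python
# Toy evolution of the congestion window (cwnd), in segments.
def next_cwnd(cwnd: float, ssthresh: float, event: str):
    if event == "ack":
        if cwnd < ssthresh:
            cwnd += 1              # slow start: +1 per ACK => doubles per RTT
        else:
            cwnd += 1 / cwnd       # congestion avoidance: ~+1 segment per RTT
    elif event == "3dupack":       # fast retransmit / fast recovery
        ssthresh = max(cwnd / 2, 2)
        cwnd = ssthresh
    elif event == "timeout":
        ssthresh = max(cwnd / 2, 2)
        cwnd = 1                   # fall all the way back to slow start
    return cwnd, ssthresh

cwnd, ssthresh = 1.0, 16.0
for ev in ["ack"] * 8 + ["3dupack"] + ["ack"] * 4:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, ev)
    print(f"{ev:8s} cwnd={cwnd:5.2f} ssthresh={ssthresh:4.1f}")
```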
TCP windows are dynamic and change based on network conditions. A typical
sequence of events is:
1. **Initial Transmission**: The sender starts with a small congestion window (cwnd)
and sends a few packets.
2. **Acknowledgments Received**: The receiver acknowledges the packets, and the
sender increments the cwnd based on the acknowledgments.
3. **Window Adjustment**: The receiver's advertised window (rwnd) may change
based on buffer availability, and the sender adjusts its transmission rate accordingly.
4. **Congestion Detection**: If packet loss occurs, the sender reduces the cwnd and
retransmits the lost packet, then gradually increases the window size as conditions
improve.
### Conclusion
Windows in TCP play a crucial role in managing data flow and network congestion,
ensuring efficient and reliable communication. By dynamically adjusting window
sizes based on real-time conditions, TCP optimizes data transfer rates and maintains
network stability.
### Flow Control in TCP
TCP implements flow control with a sliding window: each segment the receiver sends
back carries an advertised window (rwnd) telling the sender how many more bytes it is
prepared to accept.
- The receiver acknowledges the receipt of data by sending ACK packets, which
include the updated window size.
- As the sender receives these ACKs, it adjusts its transmission window
accordingly.
- If the receiver’s buffer is full, it can advertise a window size of zero, signaling the
sender to stop sending more data until buffer space becomes available.
- **Prevents Buffer Overflow**: By matching the sender’s rate with the receiver’s
processing capability, flow control prevents buffer overflow at the receiver.
- **Efficient Data Transfer**: Ensures that data is transmitted smoothly without
excessive delays or retransmissions.
- **Network Stability**: Contributes to overall network stability by preventing
congestion and data loss.
### Conclusion
Flow control in TCP, primarily implemented through the sliding window protocol and
the receiver’s advertised window (rwnd), is essential for ensuring efficient and
reliable data transmission. It prevents the sender from overwhelming the receiver,
manages buffer space effectively, and maintains smooth network performance.
17. TCP Connection Management: The steps required to establish and release
connections can be represented in a finite state machine with the 11 states listed
in Fig. 6-38.
Each connection starts in the CLOSED state. It leaves that state when it performs
either a passive open (LISTEN) or an active open (CONNECT). If the other side agrees, a
connection is ESTABLISHED.
Connection release can be initiated by either side. When it is complete, the state
returns to CLOSED.
18. TCP Sliding Window: Window management in TCP decouples the issues of
acknowledgement of the segments and buffer allocation.
Ex: Suppose the receiver has a 4096-byte buffer. If the sender transmits a 2048-byte
segment, the receiver will acknowledge the same. Now only 2048 bytes of buffer
space is left; hence, it will advertise a window of 2048 starting at the next byte.
Now the sender transmits another 2048 bytes, which are acknowledged, but the
advertised window is of size 0. The sender must stop until the application on the
receiving host has removed some data from the buffer.
NOTE: When the window is 0, the sender may not normally send segments, with two
exceptions.
First, urgent data may be sent, for example, to allow the user to kill the
process running on the remote machine.
Second, the sender may send a 1-byte segment to force the receiver to re-
announce the next byte expected and the window size. This packet is called a
window probe.
The TCP standard provides this option to prevent deadlock (if a window update ever
gets lost).
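The buffer arithmetic in the 4096-byte example above can be mimicked in a few lines; this is pure bookkeeping with the same numbers, not a real TCP implementation.

```python
# Toy model of rwnd-based flow control: the receiver advertises its free
# buffer space, and the sender never exceeds it.
BUF = 4096                      # receiver buffer size (bytes), as in the example

buffered = 0                    # bytes sitting in the receiver's buffer
def advertised_window():        # what each ACK would carry
    return BUF - buffered

for seg in [2048, 2048, 1024]:
    send = min(seg, advertised_window())   # sender must respect rwnd
    buffered += send
    print(f"sent {send:4d} bytes, rwnd now {advertised_window():4d}")
    if advertised_window() == 0:
        print("rwnd = 0: sender must stop (or send a 1-byte window probe)")
        buffered -= 2048        # application reads data, freeing buffer space
        print(f"application read 2048 bytes, rwnd back to {advertised_window()}")
```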
20. Nagle’s Algorithm: When data comes into the sender in small pieces, just send the
first piece and buffer all the rest until the first piece is acknowledged. Then send all
the buffered data in one TCP segment and start buffering again until the next
segment is acknowledged.
Nagle’s algorithm will put the many pieces in one segment, greatly reducing the
bandwidth used.
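Interactive applications that cannot tolerate this buffering disable Nagle's algorithm per socket. In Python's standard socket API this is the TCP_NODELAY option:

```python
import socket

# Games and keystroke-driven tools commonly disable Nagle's algorithm so
# that each small write goes out as its own segment immediately, instead of
# being held until the previous segment is acknowledged.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
```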
21. Silly Window Syndrome: This problem occurs when data is passed to the sending
TCP entity in large blocks, but the application on the receiving side reads data only 1
byte at a time.
Initially, the TCP buffer on the receiving side is full (i.e., it has a window of size 0)
and the sender knows this. Then the application reads one character from the TCP
stream.
The receiving TCP now sends a window update to the sender saying that it is all
right to send 1 byte. The sender agrees and sends 1 byte. The buffer is now full, so
the receiver acknowledges the 1-byte segment and sets the window to 0. This
behaviour can go on forever.
22. Clark’s Solution: Clark’s solution is to prevent the receiver from sending a window
update for 1 byte. Instead, it is forced to wait until it has a decent amount of space
available and then advertise that. Specifically, the receiver should not send a window
update until it can handle the maximum segment size it advertised when the
connection was established, or until its buffer is half empty, whichever is smaller.
The sender can also help by not transmitting tiny segments. Instead, it should wait
until it has accumulated enough data to fill a full segment, or at least one containing
half of the receiver's buffer size.
23. NOTE: Nagle’s algorithm and Clark’s solution to the silly window syndrome are
complementary. Nagle was trying to solve the problem caused by the sending
application delivering data to TCP a byte at a time. Clark tried to solve the problem of
the receiving application accepting the data from TCP one byte at a time.
Both solutions are valid and can work together. The goal is for the sender not to
send small segments and the receiver not to ask for them.
24. Cumulative Acknowledgement: Acknowledgements can be sent only when all the
acknowledged data bytes have been received. This is called a cumulative
acknowledgement. If the receiver gets segments 0, 1, 2, 4, 5, 6, and 7, it can
acknowledge everything up to and including the last byte in segment 2. If the sender
times out, it then retransmits segment 3. Since the receiver has buffered segments
4 to 7, upon receipt of segment 3 it can acknowledge all bytes up to the end of
segment 7.
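The receiver's rule can be captured in a few lines; this sketch numbers whole segments rather than bytes for readability.

```python
# A cumulative ACK can only cover the longest unbroken prefix of what has
# been received; anything after a gap must wait in the buffer.
def cumulative_ack(received: set) -> int:
    """Return the next segment number expected (everything below is ACKed)."""
    n = 0
    while n in received:
        n += 1
    return n

print(cumulative_ack({0, 1, 2, 4, 5, 6, 7}))     # -> 3: segment 3 is missing
# After the retransmitted segment 3 arrives, everything up to 8 is ACKed:
print(cumulative_ack({0, 1, 2, 3, 4, 5, 6, 7}))  # -> 8
```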
25. Error Control and Flow Control: Error control is ensuring that the data is delivered
with the desired level of reliability, i.e. all data is delivered without any errors. Flow
control is keeping a fast transmitter from overrunning a slow receiver.
27. TCP Congestion Control: When the load offered to any network is more than it can
handle, congestion builds up. The network layer detects congestion when queues
grow large at routers and tries to manage it by dropping packets. At the transport
layer, the goal is to avoid congestion rather than to handle it after the fact.
NOTE: Goodput is the rate at which useful packets are delivered by the network.
28. Efficiency and Power: An efficient allocation of bandwidth across transport entities
will use all of the network capacity that is available.
As the load increases in Fig. 6-19(a), goodput initially increases at the same rate, but
as the load approaches the capacity, goodput rises more gradually. If the transport
protocol is poorly designed and retransmits packets that were merely delayed,
congestion collapse can occur.
The corresponding delay is given in Fig. 6-19(b). Initially the delay is essentially
constant. As the load approaches the capacity, the delay rises, and packets may be
lost after experiencing the maximum buffering delay.
For both goodput and delay, performance begins to degrade at the onset of
congestion. We will obtain the best performance if we allocate bandwidth before the
delay starts to climb rapidly.
Metric of power: power = load / delay
Power will initially rise as delay remains roughly constant, but it reaches a maximum
and then falls as delay grows rapidly. The load with the highest power is called an
efficient load for the transport entity.
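A quick numeric illustration, assuming a queueing-style delay that blows up near capacity, delay = 1/(capacity - load); this M/M/1-like model is chosen just for the example.

```python
# Power = load / delay under an assumed delay model. With
# delay = 1/(capacity - load), power = load * (capacity - load),
# which peaks at load = capacity / 2 and falls on either side.
capacity = 1.0
for load in [0.1, 0.3, 0.5, 0.7, 0.9]:
    delay = 1 / (capacity - load)
    power = load / delay
    print(f"load={load:.1f}  delay={delay:5.2f}  power={power:.3f}")
```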
29. Max-min Fairness: This deals with how to allocate bandwidth among the
existing senders. Simply dividing the bandwidth into equal fractions is not correct:
one sender might be idle at that moment while another is sending heavily.
In a max-min fair allocation, the bandwidth given to one flow (path or connection)
cannot be increased without decreasing the bandwidth given to another flow whose
allocation is no larger.
Ex: A max-min fair allocation is shown for a network with four flows, A, B, C, and D,
in Fig. 6-20.
Each of the links between routers has the same capacity, taken to be 1 unit.
Three flows compete for the bottom-left link between routers R4 and R5. Each of
these flows therefore gets 1/3 of the link.
The remaining flow, A, competes with B on the link from R2 to R3. Since B has an
allocation of 1/3, A gets the remaining 2/3 of the link.
If more of the bandwidth on the link between R2 and R3 were given to flow B, there
would be less for flow A. This would be acceptable in max-min terms, since A already
has the larger allocation.
However, to give more bandwidth to B, the capacity C or D (or both) must be
decreased, and these flows will have less bandwidth than B. Thus, the allocation is
max-min fair.
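The allocation in Fig. 6-20 can be reproduced with the standard progressive-filling procedure. The two-link topology below is a simplification that keeps only the bottleneck links from the figure; link names and unit capacities follow the example.

```python
# Max-min fairness by progressive filling: grow every active flow's rate
# equally until some link saturates, freeze the flows crossing it, repeat.
links = {"R4-R5": {"cap": 1.0, "flows": {"B", "C", "D"}},
         "R2-R3": {"cap": 1.0, "flows": {"A", "B"}}}
alloc = {f: 0.0 for f in "ABCD"}
active = set(alloc)

while active:
    # Largest equal increment before some link with active flows fills up.
    step = min((l["cap"] - sum(alloc[f] for f in l["flows"])) /
               len(l["flows"] & active)
               for l in links.values() if l["flows"] & active)
    for f in active:
        alloc[f] += step
    # Freeze every flow crossing a now-saturated link.
    for l in links.values():
        if sum(alloc[f] for f in l["flows"]) >= l["cap"] - 1e-9:
            active -= l["flows"]

print(alloc)   # A gets 2/3; B, C, D each get 1/3, matching Fig. 6-20
```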
30. Regulating the Sending Rate: This is another concept to avoid congestion in the
flow between transport entities.
32. Slow Start Algorithm: When a connection is established, the sender initializes the
congestion window to a small initial value of four segments (maximum). The sender
then sends the initial window. The packets will take a round-trip time to be
acknowledged.
For each segment that is acknowledged, the sender adds one segment's worth of
bytes to the congestion window. Since that segment has been acknowledged, there is
now one less segment in the network.
The result is that every acknowledged segment allows two more segments to be
sent, so the congestion window doubles every round-trip time.
Users can follow a link by clicking on it, which then takes them to the page pointed
to. This process can be repeated indefinitely. The idea of having one page point to
another is called hypertext.
Pages are generally viewed with a program called a browser. Firefox, Internet
Explorer, and Chrome are examples of popular browsers.
When a user clicks on a hyperlink, the browser carries out a series of steps in order
to fetch the page pointed to. Let us trace the steps that occur when our example link
is selected:
1. The browser determines the URL.
2. The browser asks DNS for the IP address of the server www.gvpcew.ac.in.
3. DNS replies with 128.208.3.88.
4. The browser makes a TCP connection to 128.208.3.88 on port 80, for the HTTP
protocol.
5. It sends over an HTTP request asking for the page /it.php.
6. The www.gvpcew.ac.in server sends the page as an HTTP response, by sending
the file /it.php.
7. If the page includes URLs that are needed for display, the browser fetches the
other URLs using the same process. In this case, the URLs include multiple
embedded images, an embedded video, and a script from google-analytics.com.
8. The browser displays the page /it.php.
9. The TCP connections are released if there are no other requests.
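Steps 2 through 6 can be mimicked with Python's standard library. The host and path come from the example above; whether they actually resolve and respond depends on the live site.

```python
import socket
import http.client

host = "www.gvpcew.ac.in"
ip = socket.gethostbyname(host)                         # steps 2-3: DNS lookup
conn = http.client.HTTPConnection(ip, 80, timeout=5)    # step 4: TCP to port 80
conn.request("GET", "/it.php", headers={"Host": host})  # step 5: HTTP request
resp = conn.getresponse()                               # step 6: HTTP response
print(resp.status, resp.reason)
body = resp.read()                                      # the page itself
conn.close()                                            # step 9: release connection
```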
The other way to extend a browser is to make use of a helper application. This is a
complete program, running as a separate process. It is illustrated in Fig. 7-20(b).
Since the helper is a separate program, the interface accepts the name of a scratch
file where the content file has been stored, opens the file, and displays the contents.
Ex: Microsoft Word or PowerPoint.
8. Server Side: The browser parses the URL and interprets the part between http://
and the next slash as a DNS name to look up. Then, the browser establishes a TCP
connection to port 80 on that server and sends a command containing the rest of the
URL. The server then returns the page for the browser to display.
The steps that the server performs are:
Accept a TCP connection from a client (a browser).
Get the path to the page, which is the name of the file requested.
Get the file (from disk).
Send the contents of the file to the client.
Release the TCP connection.
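These five steps are essentially a complete, if naive, web server. A minimal sketch in Python, with port 8080 chosen arbitrarily and no security or error handling beyond a 404:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 8080))                          # arbitrary example port
srv.listen(5)
while True:
    conn, _ = srv.accept()                    # 1. accept a TCP connection
    request = conn.recv(4096).decode(errors="replace")
    if request.startswith("GET "):
        path = request.split()[1].lstrip("/") # 2. get the path to the page
        try:
            with open(path or "index.html", "rb") as f:
                body = f.read()               # 3. get the file from disk
            conn.sendall(b"HTTP/1.1 200 OK\r\n\r\n" + body)  # 4. send contents
        except OSError:
            conn.sendall(b"HTTP/1.1 404 Not Found\r\n\r\n")
    conn.close()                              # 5. release the TCP connection
```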
9. To tackle the problem of serving a single request at a time, we make the server
multithreaded. In one design, the server consists of a front-end module that accepts
all incoming requests and k processing modules, as shown in Fig. 7-21.
The k + 1 threads all belong to the same process, so the processing modules all
have access to the cache within the process’ address space. When a request comes
in, the front end accepts it and builds a short record describing it. It then hands the
record to one of the processing modules.
10. HTTP Connections: The browser contacts a server by establishing a TCP connection
to port 80 on the server’s machine. The advantage of using TCP is that neither
browsers nor servers have to worry about lost data, reliability, or congestion control –
all of this is handled by TCP.
In Fig. 7-36(b), the page is fetched with a persistent connection. That is, the TCP
connection is opened at the beginning, three requests are sent, one after the other,
and then the connection is closed.
Observe that the procedure completes more quickly. There are two reasons for the
speedup.
Time is not wasted setting up additional connections.
The transfer of the same images proceeds more quickly, because the connection
has already ramped up past TCP slow start, so the congestion window is larger.
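With Python's http.client the pattern is easy to see: a single connection carries several requests back to back (example.org is a placeholder host; HTTP/1.1 keeps the connection open by default).

```python
import http.client

conn = http.client.HTTPConnection("example.org", 80, timeout=5)
for path in ["/", "/", "/"]:       # three requests, one after the other
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                    # drain the body before reusing the connection
    print(path, resp.status)
conn.close()                       # the single TCP connection is released once
```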
12. Parallel Connection Method: It is possible to send one request per TCP connection,
but run multiple TCP connections in parallel. This has the disadvantage of extra
overhead but can offer better performance.
13. Methods: HTTP was designed for use in the Web, but was made more general to
support future object-oriented uses. For this reason, standardized operations called
methods (such as GET, HEAD, POST, PUT, and DELETE) are supported.
Email security is all about protecting your email account and the information you
send and receive through it. It's important because email is a common target for
cyberattacks, such as phishing scams, malware, and spam.
Here are some key things email security covers:
Protecting against malicious content: This includes filtering out spam
emails, blocking malware attachments, and detecting phishing attempts.
Securing your account: This involves using strong passwords, enabling two-
factor authentication, and keeping your email software up to date.
Encrypting emails: This scrambles the content of your emails so that only
the intended recipient can read them.
There are several ways to improve your email security. Here are some best
practices:
Use strong and unique passwords for your email account.
Enable two-factor authentication (2FA) for your email account.
Be cautious about opening attachments, especially from unknown senders.
15. TELNET: TELNET stands for Teletype Network, a protocol that enables a local
computer to connect to a remote computer. It is the standard TCP/IP protocol for
virtual terminal service (a service also defined by ISO).
NOTE: The computer which starts the connection is known as the local computer.
During telnet operation, whatever is being performed on the remote computer will be
displayed by the local computer. Telnet operates on a client/server principle.
The local computer uses a telnet client program and the remote computers use a
telnet server program.
16. Logging: The logging process can be further categorized into two parts:
(a) Local Login
(b) Remote Login
17. Local Login: Whenever a user logs into its local system, it is known as local login.
18. Remote Login: It is a process in which a user logs in to a remote site, i.e., a
remote computer, and uses the services available on that computer. With remote
login, the user sees on the local computer the results produced on the remote
computer.
When the user types on the local computer, the local OS accepts the data.
The local computer sends the data to the TELNET client.
The TELNET client transforms the characters into a universal character set
called NVT (Network Virtual Terminal) form.
The commands, in NVT form, travel through the Internet and arrive at
the TCP/IP stack of the remote computer.
The remote operating system receives characters from a pseudo-terminal
driver, which is a piece of software that pretends that characters are coming
from a terminal.
The operating system then passes the character to the appropriate application
program.
19. Domain Name System: The DNS is a hierarchical, domain-based naming scheme
and a distributed database system for implementing this naming scheme. It is used
for mapping host names to IP addresses and can also be used for other purposes.
DNS is used as follows. To map a name onto an IP address, an application
program calls a library procedure called the resolver, passing it the name as a
parameter.
The resolver sends a query containing the name to a DNS server, which looks up the
name and returns a response containing the IP address. The resolver then returns it
to the caller.
The query and response messages are sent as UDP packets. The program can
then establish a TCP connection with the host or send it UDP packets.
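In Python the resolver is reached through the standard library; gethostbyname() performs exactly the name-to-address mapping described above (the hostname is the one used elsewhere in these notes, so the call only succeeds while it resolves).

```python
import socket

# The resolver interface applications use, as described in the text.
print(socket.gethostbyname("www.gvpcew.ac.in"))    # name -> one IPv4 address

# getaddrinfo() returns every address usable for the given service (port 80).
for family, _, _, _, sockaddr in socket.getaddrinfo("www.gvpcew.ac.in", 80):
    print(family, sockaddr)
```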
20. For the Internet, the top of the naming hierarchy is managed by an organization
called ICANN (Internet Corporation for Assigned Names and Numbers). ICANN was
created as part of maturing the Internet to a worldwide, economic concern.
The Internet is divided into over 250 top-level domains, where each domain covers
many hosts. Each domain is partitioned into subdomains, and these are further
partitioned, and so on.
All these domains can be represented by a tree, as shown in Fig. 7-1. The leaves of
the tree represent domains that have no subdomains (but do contain machines, of
course). A leaf domain may contain a single host, or it may represent a company and
contain thousands of hosts.
The Domain Name System (DNS) is a hierarchical and decentralized naming system
for computers, services, or other resources connected to the internet or a private
network. It translates human-readable domain names to numerical IP addresses
needed for locating and identifying computer services and devices.
Here's a detailed explanation of each term related to the Domain Name System
(DNS):
1. Resolution:
Definition: DNS resolution is the process of translating a human-readable
domain name (like www.google.com) into its corresponding machine-readable
IP address (like 172.217.160.136).
Process:
- When you enter a domain name in your browser, your computer contacts a DNS
resolver (often provided by your internet service provider).
- The resolver queries a series of DNS servers to find the authoritative name
server for the domain.
- The authoritative name server holds the actual IP address for that domain name.
- Once found, the IP address is returned to the resolver, then back to your
computer.
- Your computer can then connect to the website using the IP address.
2. Caching:
Definition: DNS caching is an optimization technique where frequently
accessed DNS records are stored temporarily on a resolver or local machine.
Benefits:
- Speeds up subsequent lookups for the same domain name by eliminating the
need to query the entire DNS hierarchy again.
- Reduces load on upstream DNS servers.
Cache Invalidation: Cached entries have a Time-To-Live (TTL) associated
with them. After the TTL expires, the cache entry becomes stale and a fresh
lookup is performed.
3. Resource Records (RRs):
Definition: Resource Records (RRs) are the fundamental building blocks of
DNS data. They contain information about a specific domain name and come
in various types.
Common RR Types:
- A Record: Maps a domain name to an IPv4 address (e.g., www.google.com ->
172.217.160.136).
- AAAA Record: Maps a domain name to an IPv6 address.
- CNAME Record: Creates an alias for a domain name, pointing it to another
domain name (e.g., www.example.com -> example.com).
- MX Record: Specifies mail exchange servers for a domain name, used for
email routing.
- NS Record: Identifies the authoritative name servers for a domain.
- PTR Record (Reverse DNS): Maps an IP address to a domain name (less common).
Structure: Each RR consists of several fields, including:
- Name: The domain name the record pertains to.
- Type: The type of record (A, AAAA, CNAME, etc.).
- TTL: The time a record can be cached before becoming stale.
- Data: The specific information associated with the record type (e.g., the IP
address for an A record).
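Arbitrary RR types can be queried directly. This sketch assumes the third-party dnspython package (pip install dnspython); the standard library alone only exposes host lookups, not typed resource-record queries.

```python
import dns.resolver   # third-party: dnspython

# Look up several record types for a domain and show each record's TTL.
for rrtype in ["A", "AAAA", "MX", "NS"]:
    try:
        answers = dns.resolver.resolve("example.com", rrtype)
        for rr in answers:
            print(rrtype, rr.to_text(), "TTL:", answers.rrset.ttl)
    except dns.resolver.NoAnswer:
        print(rrtype, "no records")
```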
4. DNS Messages:
Definition: DNS messages are the packets exchanged between DNS
resolvers and servers during the resolution process.
Types:
- Query: sent by a resolver, asking for the resource records of a given name
and type.
- Response: sent by a server, carrying the answer (or an error code); it shares
the same header format as the query.