Computer Network Question Bank
Chapter 1
1. Explain in detail about the design issues of all the layers in OSI model.
2. Differentiate between OSI and TCP/IP model.
3. Compare between connection oriented and connection less service.
4. What is a topology? Explain in detail the different types of topologies with their advantages and
disadvantages.
5.
6.
Chapter2
1. Discuss different types of guided media.
2.
Chapter 3
1. Explain different types of framing techniques.
Character Count:
In this method, a field in the frame header is used to record the total number of characters in the frame. The method is rarely used in practice.
The character count tells the data link layer at the receiver (destination) how many characters follow and, therefore, where the frame ends.
• The disadvantage of this method is that if the count field is corrupted by an error during transmission, the destination or receiver loses synchronization.
• The destination or receiver is then unable to locate or identify the beginning of the next frame.
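As a rough illustration of this framing method, the sketch below prefixes each payload with a one-byte count. The helper names and the convention that the count byte includes itself are assumptions made only for this example.

```python
def frame_with_count(messages):
    """Character-count framing: prefix each payload with its total frame length.
    Convention assumed here: the count byte includes itself, so payloads are
    limited to 254 bytes."""
    stream = bytearray()
    for payload in messages:
        stream.append(len(payload) + 1)    # count byte covers itself + payload
        stream.extend(payload)
    return bytes(stream)


def deframe_with_count(stream):
    """Split the byte stream back into payloads by reading each count byte."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                  # total frame length, count byte included
        frames.append(bytes(stream[i + 1:i + count]))
        i += count                         # a corrupted count desynchronizes all later frames
    return frames


if __name__ == "__main__":
    data = [b"HELLO", b"WORLD!", b"ok"]
    assert deframe_with_count(frame_with_count(data)) == data
```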
Character Stuffing:
o Character stuffing is also known as byte stuffing or character-oriented framing. It is similar to bit stuffing, except that byte stuffing operates on whole bytes whereas bit stuffing operates on individual bits.
o In byte stuffing, a special byte known as ESC (escape character), which has a predefined pattern, is added to the data section of the frame whenever the data contains a character with the same pattern as the flag byte.
• The receiver removes the ESC bytes and keeps only the data part. In simple words, character stuffing is the addition of one extra byte whenever an ESC or flag byte is present in the text.
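A minimal byte-stuffing sketch; the FLAG and ESC values below are arbitrary choices for illustration, not values prescribed by the notes above.

```python
FLAG = 0x7E  # frame delimiter (arbitrary value for this sketch)
ESC = 0x7D   # escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC before any FLAG or ESC byte occurring inside the payload."""
    out = bytearray([FLAG])                  # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)                  # stuffed escape byte
        out.append(b)
    out.append(FLAG)                         # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Receiver: drop the flags and remove the stuffed ESC bytes."""
    out, skip = bytearray(), False
    for b in frame[1:-1]:                    # body between the two flags
        if skip:
            out.append(b)
            skip = False
        elif b == ESC:
            skip = True                      # the next byte is data, not a delimiter
        else:
            out.append(b)
    return bytes(out)

if __name__ == "__main__":
    msg = bytes([0x41, FLAG, 0x42, ESC, 0x43])
    assert byte_unstuff(byte_stuff(msg)) == msg
```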
Bit Stuffing:
Bit stuffing is also known as bit-oriented framing or the bit-oriented approach.
In bit stuffing, extra bits are added to the data stream by the protocol.
Extra bits are inserted into the transmission unit or message so that signalling information can be given to the receiver and so that unintended control sequences (such as the flag pattern) never appear inside the data.
• It is a form of protocol management that prevents a bit pattern in the data from throwing the transmission out of synchronization. Bit stuffing is an essential part of the transmission process in many network and communication protocols; it is also used in USB.
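A hedged sketch of the usual HDLC-style rule (a 0 is stuffed after every run of five consecutive 1s so the data never imitates the 01111110 flag); the function names are invented for this example.

```python
def bit_stuff(bits: str) -> str:
    """Sender: after five consecutive 1s, insert a 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")        # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: remove the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1                 # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

if __name__ == "__main__":
    data = "0111111111100"
    assert bit_unstuff(bit_stuff(data)) == data
```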
Physical Layer Coding Violations:
Encoding violation is a method that can be used only in networks whose physical-layer encoding contains some redundancy, i.e., more than one signal element is used to represent a single bit of data. A signal combination that never occurs in normal data can then be used to mark the boundaries of a frame.
2. Explain CSMA protocols. Explain how collisions are handled in CSMA/CD. (Multiple access protocols)
CSMA stands for Carrier Sense Multiple Access.
The basic idea behind CSMA/CD is that a station should be able to receive while transmitting, so that it can detect a collision with the transmission of another station.
In wireless networks a station cannot reliably receive while it is transmitting, so collisions cannot be detected; therefore CSMA/CA (Collision Avoidance) has been specially designed for wireless networks.
CSMA/CA uses three types of strategies:
InterFrame Space (IFS):
Contention Window:
Acknowledgments:
1. A starts transmitting at t1.
2. C starts transmitting at t2.
3. C detects A's signal at t3.
4. A detects C's signal at t4.
5. Duration of C's transmission: t3 - t2.
6. Duration of A's transmission: t4 - t1.
1. 1-Persistent CSMA
2. Non-Persistent CSMA
3. p-Persistent CSMA
4. CSMA/CD
How long will it take a station to realize that a collision has taken place?
1. Let the time for a signal to propagate between the two farthest stations be τ.
2. Assume that at time t0, station A starts transmitting.
3. Just before the signal reaches it, at time τ - ε, station B senses the channel as idle and also begins transmitting.
4. A collision occurs at time τ.
5. B detects the collision and stops almost instantly.
6. The resulting noise burst gets back to A at time τ + τ = 2τ.
7. Therefore, a station cannot be sure it has seized the channel until it has transmitted for 2τ without hearing a collision.
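A small numeric illustration of the 2τ rule; the cable length and propagation speed below are assumed values, not figures from the notes.

```python
# How long must a CSMA/CD station listen before it can be sure it owns the channel?
# Assumed figures: a 2 km cable and a propagation speed of 2e8 m/s (typical for copper).
cable_length_m = 2_000
propagation_speed_mps = 2e8

tau = cable_length_m / propagation_speed_mps   # one-way propagation delay (seconds)
contention_slot = 2 * tau                      # must hear no collision for 2 * tau

print(f"tau = {tau * 1e6:.1f} us, 2*tau = {contention_slot * 1e6:.1f} us")
# prints: tau = 10.0 us, 2*tau = 20.0 us
```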
In stop and wait, whenever the sender has data to send, it sends one frame to the receiver. After sending the frame, it stops and waits until it receives an acknowledgment from the receiver. Stop and wait is a flow control protocol, and flow control is one of the services of the data link layer.
It is a data link layer protocol used for transmitting data over noiseless channels.
It provides unidirectional data transmission, which means that at any given time either sending or receiving of data takes place. It provides a flow control mechanism but does not provide any error control mechanism.
The idea behind this protocol is that after sending a frame, the sender waits for the acknowledgment before sending the next frame.
Primitives of Stop and Wait Protocol
The primitives of stop and wait protocol are:
Sender side
o Rule 1: Sender sends one data packet at a time.
o Rule 2: Sender sends the next packet only when it receives the acknowledgment
of the previous packet.
o Therefore, the idea of stop and wait protocol in the sender's side is very simple,
i.e., send one packet at a time, and do not send another packet before receiving
the acknowledgment.
Receiver side
o Rule 1: Receive one data packet at a time and consume it.
o Rule 2: After consuming the packet, send an acknowledgment to the sender.
In the sliding window technique, each frame is sent with a sequence number. The sequence numbers are used to find missing data at the receiver end. The sliding window technique also uses the sequence numbers to avoid accepting duplicate data.
1. Go-Back-N ARQ
2. Selective Repeat ARQ
• The number of frames that can be transmitted while the sender waits for an acknowledgment is determined by the value of N.
• The sender sends frames in sequential order. The receiver can buffer only one frame at a time, since the size of the receiver window is 1, but the sender can buffer up to N frames.
• The Go-Back-N sender uses a retransmission timer to detect lost segments.
• The Go-Back-N receiver uses cumulative acknowledgments: an acknowledgment for frame n also acknowledges all frames before it. A small simulation of this window behaviour is sketched below.
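The sketch below is a rough, self-contained simulation of the Go-Back-N window logic described above, assuming N = 4 and a channel that drops one frame on its first transmission; the function and variable names are invented for this example, and timers and real ACK segments are simplified away.

```python
def go_back_n_send(frames, N=4, drop=frozenset({2})):
    """Simulate a Go-Back-N sender over a channel that silently drops the frame
    numbers in `drop` on their first transmission only. The receiver accepts
    only the next in-sequence frame (receiver window = 1) and the sender
    slides its window on cumulative acknowledgments."""
    base, next_seq, expected = 0, 0, 0
    delivered, dropped_once = [], set()
    while base < len(frames):
        # Send everything the window of size N allows.
        while next_seq < base + N and next_seq < len(frames):
            lost = next_seq in drop and next_seq not in dropped_once
            if lost:
                dropped_once.add(next_seq)       # frame lost in the channel
            elif next_seq == expected:           # receiver takes only in-order frames
                delivered.append(frames[next_seq])
                expected += 1
            next_seq += 1                        # out-of-order frames are discarded
        if expected > base:
            base = expected                      # cumulative ACK slides the window
        else:
            next_seq = base                      # "timeout": go back and resend from base
    return delivered

if __name__ == "__main__":
    data = [f"frame-{i}" for i in range(6)]
    assert go_back_n_send(data) == data          # everything arrives in order
```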
7. Show in detail steps how checksum is calculated on both sender and receiver side.
Why does the data link layer always put the CRC in a trailer rather than in the header? Give the answer in short and simple words.
The data link layer places the CRC (Cyclic Redundancy Check) in a trailer, not in the header, because the sender can compute the CRC on the fly while the bits of the frame are being transmitted and simply append it at the end; the receiver can likewise compute the CRC as the bits arrive and compare it with the trailer. If the CRC were in the header, the sender would have to process the entire frame before transmission could begin, which means buffering the whole frame and making an extra pass over the data, and that is less efficient.
8.
9.
The Data Link Layer is the second layer of the OSI (Open Systems Interconnection) model and
plays a crucial role in ensuring reliable data communication over a physical network medium.
It primarily deals with the following key duties:
1. Data Framing: One of the primary duties of the Data Link Layer is to divide the data into manageable units known as frames. A frame includes the data to be transmitted, control information, and error-checking bits. Framing allows easy identification of the start and end of each data unit, aiding proper data transmission.
2. Addressing and Routing: The Data Link Layer assigns unique addresses (such as MAC
addresses in Ethernet) to each device on a network. These addresses are used for local
network communication and help in routing data packets to the correct destination.
The Data Link Layer ensures that data is delivered to the right recipient on a shared
medium.
3. Flow Control: The Data Link Layer manages the flow of data between devices to
prevent congestion and data loss. It employs techniques like buffering and
acknowledgments to ensure that data is transmitted at a rate that the receiving device
can handle. Flow control prevents data overflow and ensures efficient communication.
4. Error Detection and Correction: The Data Link Layer is responsible for detecting
errors that may occur during data transmission due to interference or noise on the
physical medium. It uses techniques like CRC (Cyclic Redundancy Check) to identify
errors in received frames. While it doesn't correct errors directly, it can request
retransmission if errors are detected.
5. Media Access Control (MAC): In shared network environments, the Data Link Layer
manages access to the physical medium. It controls how devices on the same network
contend for transmission rights. This can be done using protocols like CSMA/CD
(Carrier Sense Multiple Access with Collision Detection) in Ethernet.
6. Logical Link Control (LLC): The Data Link Layer can also include a sublayer called LLC,
responsible for providing a link between the Data Link Layer and the Network Layer
(Layer 3). The LLC sublayer manages network protocol-related functions, including
encapsulation and addressing.
7. Duplex Mode Handling: The Data Link Layer handles duplex mode, which determines
whether data can be transmitted in both directions simultaneously (full-duplex) or in
only one direction at a time (half-duplex).
What are the three major duties of the data link layer?
The three main functions of the data link layer are to deal with transmission errors, regulate
the flow of data, and provide a well-defined interface to the network layer.
10. For the pattern 10101001 00111001 00011101, find out whether any transmission errors
have occurred or not using the checksum.
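A sketch of how question 10 could be checked, assuming the three 8-bit words are added with one's-complement (end-around carry) arithmetic and the checksum is the complement of the sum; the grouping into 8-bit words is an assumption based on how the pattern is written.

```python
def ones_complement_checksum(words, bits=8):
    """Add the words with end-around carry, then return the one's complement."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap any carry back in
    return ~total & mask

data = [0b10101001, 0b00111001, 0b00011101]
checksum = ones_complement_checksum(data)
print(f"checksum = {checksum:08b}")                # 00000000 for this pattern

# Receiver side: summing the data words together with the checksum and
# complementing the result must give zero if no error is detected.
assert ones_complement_checksum(data + [checksum]) == 0
```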
11. Show how a transmitter sends data and how the data is corrected on the receiver side using
the Hamming code.
Transmitted word: 011100101010
Received word: 011100101110
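A small receiver-side sketch for an even-parity Hamming code. It assumes the parity bits sit at the power-of-two positions and that position 1 is the leftmost bit as written, which may differ from the convention used in class, so it is demonstrated on a hand-built Hamming(7,4) codeword rather than on the word from the question.

```python
def hamming_syndrome(codeword: str) -> int:
    """Return 0 if no single-bit error is detected; otherwise the 1-based
    position of the flipped bit (even parity, parity bits at positions 1, 2, 4, ...)."""
    syndrome = 0
    for pos, bit in enumerate(codeword, start=1):
        if bit == "1":
            syndrome ^= pos              # XOR of the positions of all 1-bits
    return syndrome

def hamming_correct(codeword: str) -> str:
    """Flip the bit named by the syndrome (if any) and return the corrected word."""
    err = hamming_syndrome(codeword)
    if err == 0:
        return codeword
    bits = list(codeword)
    bits[err - 1] = "0" if bits[err - 1] == "1" else "1"
    return "".join(bits)

if __name__ == "__main__":
    valid = "0110011"        # Hamming(7,4) codeword built by hand for data bits 1011
    corrupted = "0110111"    # the same codeword with bit position 5 flipped
    assert hamming_syndrome(valid) == 0
    assert hamming_correct(corrupted) == valid
```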
12.
A strong generator polynomial is used to generate checksums in error-checking codes, such as
the Cyclic Redundancy Check (CRC) used in computer networks.
1. Choose Polynomial: Pick a binary pattern that represents the generator polynomial,
with the leftmost bit as '1'.
2. Append Zeroes: Append r zero bits to the end of your message, where r is the degree of the generator polynomial (one less than the length of its bit pattern).
3. Use Shift Register: Create a shifting window (register) equal to the polynomial's
length.
4. Process the Message: For each bit in your message, shift the register, and if the
leftmost bit is 1, apply the polynomial.
5. Checksum: The value left in the register after processing is your checksum.
6. Attach Checksum: Add the checksum to your message and send it.
7. Receiver Side: At the receiver, use the same polynomial to verify the checksum. If it
matches, the message is likely error-free.
8. Error Correction: If needed, apply error correction techniques.
The strength of the polynomial lies in its specific pattern, which helps detect errors effectively.
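The steps above can be sketched with bit strings and modulo-2 (XOR) division; the generator 1011 (x^3 + x + 1) and the sample message are illustrative choices only, not necessarily the ones used in class.

```python
def mod2_remainder(bits: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns a remainder of len(generator)-1 bits."""
    r = len(generator) - 1
    work = list(bits)
    for i in range(len(bits) - r):
        if work[i] == "1":                       # step 4: XOR when the leading bit is 1
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-r:])

def crc_encode(message: str, generator: str) -> str:
    """Sender: append r zeros (step 2), divide, attach the remainder as the trailer."""
    r = len(generator) - 1
    return message + mod2_remainder(message + "0" * r, generator)

def crc_check(frame: str, generator: str) -> bool:
    """Receiver: the whole frame must leave a zero remainder (step 7)."""
    return set(mod2_remainder(frame, generator)) <= {"0"}

if __name__ == "__main__":
    G = "1011"                                   # x^3 + x + 1, illustrative generator
    frame = crc_encode("11010011101100", G)
    assert crc_check(frame, G)                   # clean frame passes
    assert not crc_check("0" + frame[1:], G)     # a single flipped bit is detected
```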
Chapter 4
1. Explain with examples the classification of IPv4 addresses.
2. Explain the need of subnet mask in subnetting.
3. What is subnetting? What are default subnet masks.
4.
5. One of the addresses in a block is 110.23.120.14/20. Find the number of addresses in
the network, the first address, and the last address in the block.
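For question 5, the arithmetic can be verified with Python's standard ipaddress module: a /20 prefix leaves 32 - 20 = 12 host bits, i.e. 2^12 = 4096 addresses.

```python
import ipaddress

# strict=False lets us pass a host address (110.23.120.14) and still get its /20 block.
net = ipaddress.ip_network("110.23.120.14/20", strict=False)

print(net.num_addresses)   # 4096
print(net[0])              # 110.23.112.0   -> first address in the block
print(net[-1])             # 110.23.127.255 -> last address in the block
```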
6. An organization is granted the block 130.34.12.64/26. The organization needs 4 subnets,
each with an equal number of hosts. Design the subnetworks and find the information about
each network.
8. An ISP is granted a block of addresses starting with 190.100.0.0/16. The ISP needs
to distribute these addresses to three groups of customers as follows:
10. What is IPv4 Protocol and explain the header with neat diagram.
11. Compare Open Loop Congestion and Closed Loop Congestion.
13. Explain in detail the working of ARP and RARP with neat diagrams.
14. Differentiate between IPv4 and IPv6.
19. Explain with neat diagrams Leaky bucket and Token bucket algorithms.
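A minimal token-bucket sketch for question 19 (the rate and capacity values are arbitrary); a leaky bucket differs in that it drains at a constant output rate and removes bursts entirely, whereas the token bucket below permits bursts up to the bucket capacity.

```python
import time

class TokenBucket:
    """Tokens accumulate at `rate` per second up to `capacity`; a packet needing
    `size` tokens may be sent only if that many tokens are currently available."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, size: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the previous call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size       # spend tokens: the packet conforms
            return True
        return False                  # not enough tokens: the packet must wait or be dropped

if __name__ == "__main__":
    bucket = TokenBucket(rate=5, capacity=10)        # 5 tokens/s, burst of up to 10 packets
    sent = sum(bucket.allow() for _ in range(20))    # a 20-packet burst arrives at once
    print(f"{sent} packets pass immediately; the rest are shaped")   # typically 10
```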
20.
21. Calculate the new delay from J with the help of below diagram using distance vector routing.
22.
23.
Chapter 5
1. Differentiate between TCP and UDP.
Retransmission: Retransmission of lost packets is possible in TCP, but there is no retransmission of lost packets in the User Datagram Protocol (UDP).
Header Length: TCP has a variable-length header of 20-60 bytes, whereas UDP has a fixed-length header of 8 bytes.
TCP
For example, when a user requests a web page on the Internet, a server somewhere in
the world processes that request and sends back an HTML page to that user. The
server makes use of a protocol called HTTP.
HTTP then requests the TCP layer to set up the required connection and send the
HTML file.
UDP
1. Source Port: Source Port is a 2 Byte long field used to identify the port
number of the source.
2. Destination Port: It is a 2 Byte long field, used to identify the port of the
destined packet.
3. Length: Length is the length of the UDP datagram, including the header and the data. It is a 16-bit field.
4. Checksum: The checksum is a 2-byte (16-bit) field. It is the 16-bit one's complement of the one's complement sum of the UDP header, a pseudo-header of information taken from the IP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
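The four 16-bit fields described above can be packed and unpacked with Python's struct module; the port numbers and payload below are made-up examples, and the checksum is left as 0 rather than actually computed.

```python
import struct

# UDP header: source port, destination port, length (header + data), checksum.
# Four 16-bit fields in network byte order = 8 bytes in total.
UDP_HEADER = struct.Struct("!HHHH")

payload = b"hello"
header = UDP_HEADER.pack(
    5000,                              # source port (arbitrary example)
    53,                                # destination port (e.g. DNS)
    UDP_HEADER.size + len(payload),    # length field covers header + data
    0,                                 # checksum left at 0 here; real stacks compute it
)
datagram = header + payload

src, dst, length, checksum = UDP_HEADER.unpack(datagram[:8])
print(src, dst, length, checksum)      # 5000 53 13 0
```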
2. Explain the use of TCP timers in detail.
TCP uses several timers to ensure that excessive delays are not encountered
during communications.
Several of these timers are elegant, handling problems that are not immediately
obvious at first analysis.
Persistent Timer –
To deal with a zero-window-size deadlock situation, TCP uses a persistence
timer.
When the sending TCP receives an acknowledgment with a window size of
zero, it starts a persistence timer.
When the persistence timer goes off, the sending TCP sends a special segment called a probe. This segment contains only 1 byte of new data.
It has a sequence number, but its sequence number is never acknowledged; it is even ignored in calculating the sequence number for the rest of the data.
Keep Alive Timer –
1. A keepalive timer is used to prevent a long idle connection between
two TCPs.
2. Suppose a client opens a TCP connection to a server, transfers some data, and then
becomes silent, perhaps because the client has crashed. In this case, the connection
remains open forever. So a keepalive timer is used.
3. Each time the server hears from a client, it resets this timer.
4. The time-out is usually 2 hours. If the server does not hear from the
client after 2 hours, it sends a probe segment. If there is no response
after 10 probes, each of which is 75 s apart, it assumes that the client is
down and terminates the connection.
Time Wait Timer – This timer is used during TCP connection termination.
The timer starts after sending the last ACK for the second FIN; the connection is fully closed only after this timer expires.
Piggybacking is a process of attaching the acknowledgment with the data packet to be sent.
Piggybacking concept is explained below:
Suppose there is two-way communication between two devices A and B. When A sends a data frame to B, B does not send the acknowledgment immediately; instead, it waits until it has its own data frame to transmit, and then it attaches the delayed acknowledgment to that data frame. This method of attaching the delayed acknowledgment to an outgoing data frame is known as piggybacking.
Why Do We Need Piggybacking?
Protocols such as Stop and Wait, Go-Back-N ARQ, etc. are usually described for data flowing in one direction at a time, a half-duplex style of communication. But in real-world situations, full-duplex communication is required, so piggybacking defines the rules for carrying acknowledgments along with data in both directions. TCP segments are also transmitted in full-duplex mode, so piggybacking is used for TCP transmission as well.
Advantages of Piggybacking
Disadvantages of Piggybacking
4. Illustrate the concept of TCP 3 way handshaking signals with neat diagram.
Transmission Control Protocol (TCP) provides a reliable, connection-oriented service
between two devices using the 3-way handshake process.
TCP uses a full-duplex connection in which the two sides synchronize (SYN) and acknowledge
(ACK) each other. There are three steps for establishing a connection: SYN, SYN-ACK, and ACK.
Closing a connection similarly takes three steps: FIN, FIN + ACK, and ACK.
3-Way Handshake Connection Establishment Process
The following diagram shows how a reliable connection is established using 3-way
handshake. It will support communication between a web browser on the client and
server sides whenever a user navigates the Internet.
Synchronization Sequence Number (SYN) − The client
sends the SYN to the server
When the client wants to connect to the server, then it sends the message to the server by setting the
SYN flag as 1.
The message carries some additional information like the sequence number (32-bit random
number).
The ACK flag is set to 0. The maximum segment size and the window size are also set.
For example, if the window size is 1000 bytes and the maximum segment size is 100 bytes, then a
maximum of 10 data segments (1000/100 = 10) can be outstanding on the connection.
Synchronization and Acknowledgement (SYN-ACK) to the
client
The server acknowledges the client request by setting the ACK flag to 1.
For example, if the client has sent the SYN with sequence number = 500, then the server will send
the ACK using acknowledgment number = 501.
The server will set the SYN flag to '1' and send it to the client if the server also wants to establish
the connection.
The sequence number used for SYN will be different from the client's SYN.
The server also advertises its window size and maximum segment size to the client. And, the
connection is established from the client-side to the server-side.
Acknowledgment (ACK) to the server
The client sends the acknowledgment (ACK) to the server after receiving the synchronization
(SYN) from the server.
After getting the (ACK) from the client, the connection is established between the client and the
server.
Now the data can be transmitted between the client and server sides.
3 -Way Handshake Closing Connection Process
First, the client requests the server to terminate the established connection by sending FIN.
After receiving the client request, the server sends back the FIN and ACK request to the client.
After receiving the FIN + ACK from the server, the client confirms by sending an ACK to the
server.
5. Selective Repeat
The advantage of Selective Repeat over Go-Back-N in computer network protocols is like
being more precise and efficient:
If a packet is lost, the receiver sends a NACK for the lost packet, and the
sender retransmits only that packet.
The sender also maintains a timer for each packet, and if an
acknowledgement is not received within the timer’s timeout period, the
sender retransmits only that packet.
key features include:
Receiver-based protocol
Each packet is individually acknowledged by the receiver
Only lost packets are retransmitted, reducing network congestion
Maintains a buffer to store out-of-order packets
Requires more memory and processing power than Go-Back-N
Provides efficient transmission of packets.
5. In the Selective Repeat protocol, the receiver side needs sorting to reorder the frames before delivering them.
8. Explain in detail about Fast retransmit and Fast Recovery with neat diagrams.
TCP Slow Start and Congestion Avoidance lower the data throughput
drastically when segment loss is detected. Fast Retransmit and Fast
Recovery have been designed to speed up the recovery of the
connection.
When packet loss is detected, the TCP sender reacts as follows:
Fast Retransmit:
1. Sender transmits packets, and receiver sends ACKs for received packets.
2. If a packet is lost, the receiver detects the gap and sends duplicate ACKs (e.g., if
packet 4 is missing, it keeps re-sending the ACK for packet 3).
3. When the sender receives duplicate ACKs, it assumes a packet is lost and retransmits
that packet.
4. This retransmission speeds up recovery by avoiding a timeout, which can be a long
wait.
Fast Recovery:
1. Similar to Fast Retransmit, the sender transmits packets and the receiver sends ACKs.
2. When the sender receives three duplicate ACKs, it knows a packet is lost. Instead of just
retransmitting, it enters Fast Recovery.
3. In Fast Recovery, the sender reduces its sending rate but continues sending new
packets.
4. It keeps track of the congestion window and inflight packets, maintaining a more
efficient use of network resources.
5. Once it receives a non-duplicate ACK (indicating some data has been successfully
received), it exits Fast Recovery and adjusts the congestion window size for more
efficient transmission.
Use Case: Fast Retransmit is suitable for recovering single lost packets; Fast Recovery is suitable for maintaining network performance during congestion.
9.
Berkeley sockets
Berkeley sockets are part of an application programming interface (API)
that specifies the data structures and function calls that interact with the
operating system's network subsystem.
Transport service primitives
1. LISTEN: When a server is ready to accept an incoming connection, it executes the LISTEN
primitive. It blocks waiting for an incoming connection.
2. CONNECT: The client executes CONNECT to establish a connection with the server. A response is awaited.
3. RECEIVE: Then the RECEIVE call blocks the server until a request arrives.
4. SEND: Then the client executes the SEND primitive to transmit its request, followed by the
execution of RECEIVE to get the reply.
5. DISCONNECT: This primitive is used for terminating the connection. After this primitive, no
more messages can be sent. When the client sends a DISCONNECT packet, the server also
sends a DISCONNECT packet to acknowledge the client. When the server's packet is
received by the client, the connection is released.
FACILITY, REPORT: Primitives for enquiring about the performance of the network, like delivery statistics.
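A minimal loopback example mapping the primitives above onto the Berkeley socket calls (socket/bind/listen/accept on the server; connect/send/recv/close on the client). The port number and messages are arbitrary choices for this sketch.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 54321                    # arbitrary loopback endpoint

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)                              # LISTEN: wait for an incoming connection
        conn, _ = srv.accept()                     # connection established (handshake done here)
        with conn:
            data = conn.recv(1024)                 # RECEIVE: block for the client's request
            conn.sendall(b"reply: " + data)        # SEND the reply

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                                    # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))                      # CONNECT: establish the connection
    cli.sendall(b"hello")                          # SEND the request
    print(cli.recv(1024))                          # RECEIVE the reply: b'reply: hello'
# leaving the "with" block closes the socket -- the DISCONNECT step
```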
10.
2. The former can perform better if the calibration is properly done, while the latter can perform better because of the feedback.
Jitter, the variation in packet delay, is critical for real-time audio and video but inconsequential for file
data.
For audio/video, consistent delays (e.g., 24.5 ms to 25.5 ms) are essential.
Chapter 6
1. Explain the need for DNS and the functioning of the protocol.
DNS Stands for Domain Name System.
DNS is a hierarchical decentralized naming system for computers, services, or any resource
connected to the Internet or a private network.
Need for DNS:
1. One identifier for a host is its hostname.
2. An IP address consists of four bytes and has a rigid hierarchical structure.
3. An IP address is included in the header of each IP datagram.
4. A hostname such as surf.eurecom.fr, which ends with the country code .fr, tells us that the
host is in France, but doesn't say much more.
Functioning of the protocol:
The DNS name space is the set of all domain names that are registered in the
DNS.
These domain names are organized into a tree-like structure, with the top of
the tree being the root domain.
Below the root domain, there are a number of top-level domains, such
as .com, .net, and .org.
Below the top-level domains, there are second-level domains, and so on.
Each domain name in the DNS name space corresponds to a set of resource
records, which contain information about that domain name, such as its IP
address, mail servers, and other information.
The DNS name space is hierarchical, meaning that each domain name can
have subdomains beneath it.
For example, the domain name "example.com" could have subdomains such
as "www.example.com" and "mail.example.com".
This allows for a very flexible and scalable naming structure for the Internet.
The DNS name space is managed by a number of organizations, including the
Internet Corporation for Assigned Names and Numbers (ICANN), which
is responsible for coordinating the allocation of unique domain names and IP
addresses.
DNS record: DNS records (short for "Domain Name System records") are types
of data that are stored in the DNS database and used to specify information
about a domain, such as its IP address and the servers that handle its email
DNS Record Types
DNS recursor - The recursor can be thought of as a librarian who is asked to go find a
particular book somewhere in a library.
The DNS recursor is a server designed to receive queries from client machines through
applications such as web browsers.
Typically the recursor is then responsible for making additional requests in order to satisfy
the client’s DNS query.
Root nameserver - The root server is the first step in translating (resolving) human
readable host names into IP addresses.
It can be thought of like an index in a library that points to different racks of books -
typically it serves as a reference to other more specific locations.
TLD nameserver - The top level domain server (TLD) can be thought of as a specific rack of
books in a library.
This nameserver is the next step in the search for a specific IP address, and it hosts the last
portion of a hostname (In example.com, the TLD server is “com”).
Authoritative nameserver - If the authoritative nameserver has access to the requested record, it will return the IP
address for the requested hostname back to the DNS recursor (the librarian) that made the
initial request.
Forwarding DNS Servers: These servers forward DNS queries to other DNS
servers, often provided by ISPs (Internet Service Providers) or companies for
faster resolution or caching purposes.
Each of these servers plays a role in the process of translating domain names to IP
addresses and vice versa. They work together in a hierarchical manner to efficiently
resolve DNS queries across the internet.
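In practice an application hands this whole lookup chain to the configured recursor through the operating system's stub resolver; Python exposes that through the standard socket module. The hostname below is only an example, and the snippet needs Internet access to actually resolve it.

```python
import socket

hostname = "www.example.com"          # example name; any registered hostname works

# The OS stub resolver forwards the query to the configured DNS recursor,
# which walks root -> TLD -> authoritative servers on our behalf.
ipv4 = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ipv4}")

# getaddrinfo returns richer results (IPv4/IPv6 family, port, socket type).
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 80):
    print(family.name, sockaddr)
```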
2. Explain in detail about HTTP, DHCP, SMTP, FTP with neat diagrams.
APPLICATION LAYERS
HTTP stands for Hyper Text Transfer Protocol, FTP for File Transfer Protocol,
while SMTP stands for Simple Mail Transfer Protocol. All three are used to
transfer information over a computer network and are an integral part of today’s
internet.
We need the three protocols as they all serve different purposes. These are HTTP, FTP,
and SMTP.
1. HTTP is the backbone of the World Wide Web (WWW).
2. FTP is the underlying protocol that is used to, as the name suggests, transfer
files over a communication network.
3. SMTP is what is used by Email servers all over the globe to communicate
with each other
Type of band transfer: HTTP – in-band; FTP – out-of-band; SMTP – in-band; DHCP – in-band.
Number of TCP connections: HTTP – 1; FTP – 2 (data and control); SMTP – 1; DHCP – not applicable (uses UDP).
Type of protocol: HTTP – mainly a pull protocol; FTP – –; SMTP – primarily a push protocol; DHCP – push protocol.
Type of transfer: HTTP – transfers files between a web server and a client; FTP – transfers files directly between computers; SMTP – transfers mails via mail servers; DHCP – manages IP allocation.
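As a small illustration of the HTTP row above (one in-band TCP connection, client pulls), the sketch below fetches a page with Python's standard http.client; the host is an example and the snippet needs Internet access.

```python
import http.client

# A single TCP connection carries both the request and the response (in-band).
conn = http.client.HTTPSConnection("www.example.com", timeout=10)
conn.request("GET", "/")                      # HTTP is a pull protocol: the client asks
response = conn.getresponse()

print(response.status, response.reason)       # e.g. 200 OK
print(response.getheader("Content-Type"))     # e.g. text/html; charset=UTF-8
body = response.read()                        # the HTML page returned by the server
conn.close()
```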
3.
1. Scheme :
https://
The protocol or scheme part of the URL; it indicates the set of rules that will govern
the transmission and exchange of data.
2. Subdomain :
https://www.
The subdomain is used to separate different sections of the website as it specifies the
type of resource to be delivered to the client
3. Domain Name :
https://www.example.
Domain name specifies the organization or entity that the URL belongs to
4. Top-level Domain :
https://www.example.co.uk
The TLD (top-level domain) indicates the type of organization the website is
registered to, for example .com for commercial entities.
5. Port Number :
https://www.example.co.uk:443
A port number specifies the type of service that is requested by the client
since servers often deliver multiple services.
6. Path :
https://www.example.co.uk:443/blog/article/search
Path specifies the exact location of the web page, file, or any resource that the user
wants access to.
7. Query String Separator :
https://www.example.co.uk:443/blog/article/search?
The query string which contains specific parameters of the search is preceded by a
question mark (?).
8. Query String :
https://www.example.co.uk:443/blog/article/search?docid=720&hl=en
The query string specifies the parameters of the data that is being queried from a
website’s database.
9. Fragment :
https://www.example.co.uk:443/blog/article/search?docid=720&hl=en#dayone
The fragment identifier of a URL is optional, usually appears at the end, and begins
with a hash (#). It indicates a specific location within a page such as the ‘id’ or
‘name’ attribute for an HTML element.
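The same URL can be pulled apart with Python's urllib.parse as a quick check of the parts listed above; note that it groups the subdomain, domain, TLD and port together as the network location.

```python
from urllib.parse import urlparse, parse_qs

url = "https://www.example.co.uk:443/blog/article/search?docid=720&hl=en#dayone"
parts = urlparse(url)

print(parts.scheme)      # 'https'              -> scheme
print(parts.hostname)    # 'www.example.co.uk'  -> subdomain + domain + TLD
print(parts.port)        # 443                  -> port number
print(parts.path)        # '/blog/article/search'
print(parts.query)       # 'docid=720&hl=en'
print(parts.fragment)    # 'dayone'
print(parse_qs(parts.query))   # {'docid': ['720'], 'hl': ['en']}
```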
RIP
• RIP has a simple and straightforward operation, which makes it easy to understand
and configure.
• However, it also has some limitations, such as its slow convergence time and limited
scalability. In large networks, RIP can become slow and inefficient, which is why it’s
often replaced by more advanced routing protocols such as OSPF (Open Shortest Path
First)
TELNET
TELNET stands for Teletype Network. It is a protocol that enables a user on one
computer to connect to and work on a remote computer.
It is used as a standard TCP/IP protocol for virtual terminal service, as proposed
by ISO.
The computer which starts the connection is known as the local computer.
The computer which is being connected to, i.e., the one which accepts the connection, is known
as the remote computer.
Logging
The logging process can be further categorized into two parts:
1. Local Login
2. Remote Login
1. Local Login: Whenever a user logs into its local system, it is known as local
login.
2. Remote Login: Remote Login is a process in which users can log in to a remote
site i.e. computer and use services that are available on the remote computer.
With the help of remote login, a user can use the services available on the remote
computer and see the results of the processing, which are transferred from the remote
computer to the local computer.
TELNET Commands
Commands of Telnet are identified by a prefix character, Interpret As Command (IAC)
with code 255. IAC is followed by command and option codes.
The basic format of the command is as shown in the following figure :
WILL (code 251, binary 11111011): 1. Offering to enable. 2. Accepting a request to enable.
Modes of Operation
Most telnet implementations operate in one of the following three modes:
1. Default mode
2. Character mode
3. Line mode
1. Default Mode: If no other modes are invoked then this mode is used. Echoing
is performed in this mode by the client.
In this mode, the user types a character and the client echoes the character on
the screen but it does not send it until the whole line is completed.
2. Character Mode: Each character typed in this mode is sent by the client to the
server. A server in this type of mode normally echoes characters back to be displayed
on the client’s screen.
3. Line Mode: Line editing like echoing, character erasing, etc. is done from the
client side. The client will send the whole line to the server.
The dotted black lines in the figure represent the transition that a server normally goes through; the
solid black lines show the transitions that a client normally goes through.
The state marked ESTABLISHED in the FSM is in fact two different sets of states that the client and
server go through to transfer data.
If there are N number of users and channel is divided into N equal-sized sub channels,
Each user is assigned one portion.
If the number of users is small and does not vary over time, then Frequency Division
Multiplexing can be used, as it is a simple and efficient channel-bandwidth allocation
technique.
Channel allocation problem can be solved by two schemes: Static Channel Allocation
in LANs and MANs, and Dynamic Channel Allocation.
1. Static Channel Allocation in LANs and MANs:
It is the classical or traditional approach of allocating a single channel
among multiple competing users using Frequency Division Multiplexing
(FDM).
If there are N users, the frequency channel is divided into N equal-sized
portions (bandwidth), each user being assigned one portion. Since each
user has a private frequency band, there is no interference between users.
However, it is not efficient to divide the channel into a fixed number of chunks. For a single channel of capacity C, the mean time delay is

T = 1 / (μC - λ)

If the channel is divided into N independent FDM subchannels, each of capacity C/N and each carrying an arrival rate of λ/N, the mean delay becomes

T(FDM) = 1 / (μ(C/N) - λ/N) = N / (μC - λ) = N · T

where C is the channel capacity in bits per second, 1/μ is the mean frame length in bits, and λ is the mean arrival rate in frames per second. So static division makes the mean delay N times worse.
1. Station Model:
Assumes that each of the N stations independently produces frames.
The probability of a frame being generated in an interval of length Δt is λΔt, where λ is the
constant arrival rate of new frames.
3. Collision Assumption:
If two frames overlap in time, they collide.
Every collision is an error, and both frames must be retransmitted. Collisions are the
only possible errors.
Aloha Rules
Pure Aloha
Whenever a station has data available for sending over the channel, it can use Pure Aloha.
In Pure Aloha, each station transmits its data onto the channel without checking whether
the channel is idle or busy; because of this, collisions may occur and the data frame may
be lost.
After transmitting a data frame onto the channel, the station waits for the
receiver's acknowledgment.
Slotted Aloha
Slotted Aloha was designed to improve the efficiency of Pure Aloha, because Pure
Aloha has a very high probability of frame collision.
In Slotted Aloha, the shared channel is divided into fixed time intervals called slots.
So, if a station wants to send a frame on the shared channel, the frame can only be
sent at the beginning of a slot, and only one frame is allowed to be sent in each slot.
If a station misses the beginning of a slot, it has to wait until the beginning of the
next time slot.
However, the possibility of a collision remains when two or more stations try to send a
frame at the beginning of the same time slot.
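The standard throughput formulas make the comparison concrete: pure Aloha achieves S = G·e^(-2G) (a peak of about 18.4% at G = 0.5), while slotted Aloha achieves S = G·e^(-G) (a peak of about 36.8% at G = 1). A short check:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): a frame succeeds only if no other frame starts within a
    vulnerable period of two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): slotting halves the vulnerable period to one frame time."""
    return G * math.exp(-G)

print(f"Pure Aloha peak:    {pure_aloha_throughput(0.5):.3f}")    # ~0.184 at G = 0.5
print(f"Slotted Aloha peak: {slotted_aloha_throughput(1.0):.3f}") # ~0.368 at G = 1.0
```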
The data link layer is used in a computer network to transmit the data between two devices or
nodes.
Protocols in the data link layer are designed so that this layer can perform its basic
functions: framing, error control and flow control
Data link protocols can be broadly divided into two categories, depending on whether
the transmission channel is noiseless or noisy.
Simplex Protocol
Stop-and-Wait Automatic Repeat Request (Stop-and-Wait ARQ) is a
variation of the above protocol with added error control mechanisms,
appropriate for noisy channels.
The sender keeps a copy of the sent frame. It then waits for a finite time to
receive a positive acknowledgement from receiver.
If the timer expires or a negative acknowledgement is received, the frame is
retransmitted.
If a positive acknowledgement is received then the next frame is sent.
Go-Back-N ARQ
Go-Back-N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame.
It uses the concept of sliding window, and so is also called sliding window
protocol. The frames are sequentially numbered and a finite number of frames
are sent.
If the acknowledgement of a frame is not received within the time period, all
frames starting from that frame are retransmitted.
Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame.
However, here only the erroneous or lost frames are retransmitted, while the
good frames are received and buffered.