
Q1. (a) What is the need of modulating a signal? Will it be a right approach to send the information as the signal itself?
Answer : - In the modulation process, two signals are used, namely the modulating signal and the carrier. The modulating signal is nothing but the baseband signal or information signal, while the carrier is a high-frequency sinusoidal signal.

In the modulation process, some parameter of the carrier wave (such as amplitude, frequency or phase) is varied in accordance with the modulating signal. This modulated signal is then transmitted by the transmitter.

The receiver demodulates the received modulated signal and gets the original information signal back. Thus, demodulation is exactly opposite to modulation.

In the process of modulation, the carrier wave acts as a carrier that carries the information signal from the transmitter to the receiver.

Need of Modulation

Baseband signals can be transmitted directly, but baseband transmission has many limitations which can be overcome using modulation.

In the process of modulation, the baseband signal is translated, i.e. shifted from low frequency to high frequency. This frequency shift is proportional to the frequency of the carrier.

Advantages of Modulation

 Avoids mixing of signals - This is a point from the practical side of things. Suppose you are transmitting the baseband signal as it is to a receiver, say your friend's phone. Just like you, there will be thousands of people in the city using their mobile phones.

There is no way to tell such signals apart and they will interfere with each other, leading to a lot of noise in the system and a very bad output. By using a carrier wave of high frequency and allotting a band of frequencies to each message, there is no mixing up of signals and the received signals can be recovered cleanly.

 Reduction in the height of antenna - For the transmission of radio signals, the antenna height must be a multiple of λ/4, where λ is the wavelength.

λ = c/f
c → is the velocity of light
f → is the frequency of the signal to be transmitted

The minimum antenna height required to transmit a baseband signal of f = 10 kHz is calculated as follows :

Minimum antenna height = λ/4 = c/4f = (3 × 10^8)/(4 × 10 × 10^3) = 7500 meters, i.e. 7.5 km

The antenna of this height is practically impossible to install.

Now, let us consider a modulated signal at f = 1 MHz. The minimum antenna height is given by,

Minimum antenna height = λ/4 = c/4f = (3 × 10^8)/(4 × 1 × 10^6) = 75 meters

This antenna can easily be installed in practice. Thus, modulation reduces the required height of the antenna (a short calculation of both cases is sketched in code after this list).

 Increase the range of communication - By using modulation to transmit the signals through space to long distances, we have removed the need for wires in the communication systems. The technique of modulation helped humans to become wireless.
 Multiplexing is possible - Multiplexing is a process in which two or
more signals can be transmitted over the same communication channel
simultaneously. This is possible only with modulation.
 Improves quality of reception - With frequency modulation (FM) and digital communication techniques like PCM, the effect of noise is reduced to a great extent. This improves the quality of reception.
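
As a quick check of the antenna-height figures above, here is a minimal Python sketch of the quarter-wave rule λ/4 = c/(4f); the function name and constants are illustrative only.

C = 3e8  # speed of light in m/s

def min_antenna_height(frequency_hz):
    """Quarter-wave antenna height: lambda / 4 = c / (4 * f)."""
    return C / (4 * frequency_hz)

print(min_antenna_height(10e3))  # 10 kHz baseband signal  -> 7500.0 m
print(min_antenna_height(1e6))   # 1 MHz modulated carrier -> 75.0 m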
Q1. (b) Explain the techniques used in digital-to-analog modulation with the help of a diagram.
Answer : - Digital-to-analog conversion is the process of changing one of the
characteristics of an analog signal based on the information in digital data.

A sine wave is defined by three characteristics : amplitude, frequency, and phase. When we change any one of these characteristics, we create a different version of that wave. So, by changing one characteristic of a simple electric signal, we can use it to represent digital data.

There are three mechanisms for modulating digital data into an analog signal :
Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), and Phase Shift
Keying (PSK). In addition, there is a fourth (and better) mechanism that
combines changing both the amplitude and phase, called Quadrature
Amplitude Modulation (QAM).

Amplitude Shift Keying (ASK) - In an ASK system, binary symbol 1 is represented by transmitting a carrier wave of fixed amplitude and fixed frequency for a bit duration of T seconds. If the data bit is 1, the carrier signal is transmitted; if the data bit is 0, no carrier (or a carrier of a different amplitude) is transmitted.

The simplest and most common form of ASK operates as a switch, using the
presence of a carrier wave to indicate a binary one and its absence to indicate
a binary zero. This type of modulation is called on-off keying (OOK), and is
used at radio frequencies to transmit Morse code (referred to as continuous
wave operation).

The ASK technique is commonly used to transmit digital data over optical
fiber.
Frequency Shift Keying (FSK) - In FSK, the frequency of the carrier signal is
varied according to the digital signal changes. The frequency of the modulated
signal is constant for the duration of one signal element, but changes for the
next signal element if the data element changes. Both amplitude and phase
remain constant for all signal elements.

Phase Shift Keying (PSK) - In PSK, the phase of the carrier is varied to
represent two or more different signal elements. Both amplitude and
frequency remain constant when the phase changes. The simplest PSK is
binary PSK, in which we have only two signal elements, one with a phase of 0°,
and the other with a phase of 180°. The modulation is accomplished by varying the sine and cosine inputs at a precise time. It is widely used for wireless LANs, RFID and Bluetooth communication. The following figure gives a conceptual view of PSK.
Quadrature Amplitude Modulation (QAM) - QAM varies both the amplitude and the phase of the carrier, providing a form of modulation with a high level of spectrum usage efficiency.
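
The three basic keying schemes can be illustrated with a short Python/NumPy sketch; the carrier frequency, sample counts and bit pattern below are arbitrary illustrative values, not parameters taken from the text.

import numpy as np

def keyed_waveforms(bits, fc=4.0, samples_per_bit=100):
    """Generate ASK (on-off keying), FSK and BPSK waveforms for a bit sequence.
    fc is the carrier frequency in cycles per bit period (illustrative only)."""
    t = np.linspace(0, 1, samples_per_bit, endpoint=False)
    ask, fsk, psk = [], [], []
    for b in bits:
        ask.append((1.0 if b else 0.0) * np.sin(2 * np.pi * fc * t))   # amplitude carries the bit
        fsk.append(np.sin(2 * np.pi * (2 * fc if b else fc) * t))      # frequency carries the bit
        psk.append(np.sin(2 * np.pi * fc * t + (0 if b else np.pi)))   # phase (0° / 180°) carries the bit
    return np.concatenate(ask), np.concatenate(fsk), np.concatenate(psk)

ask, fsk, psk = keyed_waveforms([1, 0, 1, 1, 0])
print(ask.shape, fsk.shape, psk.shape)   # three 500-sample waveforms, ready to plot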
Q2. (a) Discuss the different approaches to circuit switching. Why is it suitable for voice transmission? What are its limitations?
Answer : - A circuit-switched communication system involves three phases : circuit establishment (setting up dedicated links between the source and destination); data transfer (transmitting the data between the source and destination); and circuit disconnect (removing the dedicated links).

In a circuit-switched network, a dedicated channel has to be established before the call is made between users. The channel is reserved between the users for as long as the connection is active. For half duplex communication, one channel is allocated and for full duplex communication, two channels are allocated. It is mainly used for voice communication.

Why Circuit Switching Is Suitable for Voice Transmission

 The dedicated path/circuit established between sender and receiver provides a guaranteed data rate.
 Once the circuit is established, data is transmitted without any delay as
there is no waiting time at each switch.
 Since a dedicated continuous transmission path is established, the
method is suitable for long continuous transmission.

Limitations of Circuit Switching

 As the connection is dedicated, it cannot be used to transmit any other data even if the channel is free.
 It is inefficient in terms of utilization of system resources. As resources
are allocated for the entire duration of connection, these are not
available to other connections.
 Dedicated channels require more bandwidth.
 Prior to actual data transfer, the time required to establish a physical
link between the two stations is too long.

Q2. (b) How does multistage switching overcome these limitations?
Answer : - Multistage interconnection networks (MINs) consist of more than one stage of small interconnection elements called switching elements, and links interconnecting them. A MIN normally connects N inputs to N outputs and is referred to as an N × N MIN. The parameter N is called the size of the network.

Multistage Interconnect Network can be classified into three types :

 Non-blocking - A non-blocking network can connect any idle input to any idle output, regardless of the connections already established across the network.
 Rearrangeable non-blocking - This type of network can establish all
possible connections between inputs and outputs by rearranging its
existing connections.
 Blocking - This type of network cannot realize all possible connections between inputs and outputs. This is because a connection between one free input and another free output can be blocked by an existing connection in the network.

Omega Network

An Omega network consists of multiple stages of 2*2 switching elements, with exactly one path from each input to each output. An N*N Omega network has log2(N) stages, with N/2 switching elements in each stage and a perfect shuffle as the fixed interconnection between stages.

Example - 8*8 Omega Network

 Consists of four 2*2 switches per stage.
 The fixed links between every pair of stages are identical.
 A perfect shuffle is formed by the fixed links between every pair of stages.
 For 8 possible inputs, there are a total of 8! = 40320 one-to-one mappings of the inputs onto the outputs, but only 12 switches for a total of 2^12 = 4096 settings. Thus the network is blocking.
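
A small Python sketch of the perfect-shuffle wiring and the standard destination-tag routing rule often used to describe Omega networks (the routing rule itself is textbook material rather than something stated in the answer above; function and variable names are illustrative).

def perfect_shuffle(line, n_bits):
    """Left-rotate the n_bits-bit address of a line (the perfect-shuffle wiring)."""
    return ((line << 1) | (line >> (n_bits - 1))) & ((1 << n_bits) - 1)

def omega_route(src, dst, n_bits=3):
    """Trace src -> dst through an n_bits-stage Omega network (N = 2**n_bits inputs).
    At each stage the 2x2 switch looks at one destination bit, most significant
    first, and selects its upper output for 0 or lower output for 1."""
    line = src
    for stage in range(n_bits):
        line = perfect_shuffle(line, n_bits)          # fixed shuffle link into the stage
        dst_bit = (dst >> (n_bits - 1 - stage)) & 1   # routing bit used by this switch
        line = (line & ~1) | dst_bit                  # upper (0) or lower (1) port
        print(f"stage {stage}: switch {line >> 1}, output line {line}")
    return line

assert omega_route(src=2, dst=6) == 6   # 8*8 network: 3 stages, 4 switches per stage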
Clos Network

A Clos network uses 3 stages to switch from N inputs to N outputs. In the first stage, there are r = N/n crossbar switches and each switch is of size n*m. In the second stage there are m switches of size r*r, and finally the last stage is a mirror of the first stage with r switches of size m*n. A Clos network will be completely non-blocking if m >= 2n-1.

Example - 9*9 Clos Network

We assume that n = 3 and m = 2n. So, r = N/n = 9/3 = 3 and m = 2*3 = 6.
From the above diagram we can say that multistage switching overcomes the limitations of circuit switching because of the multiple paths present in the network.
Q3. A bit string 011100111110001110 needs to be transmitted
at the data link layer. What is the string actually transmitted
after bit stuffing ?
Answer : - Solve it Yourself

Q4. Define checksum and write the algorithm for computing the
checksum ?
Answer : - COMING SOON

Q5. Define flow control, error control and piggybacking concepts. Show the operation of Stop & Wait ARQ with the help of an illustration. Also illustrate the outcome of Stop & Wait ARQ in the following scenarios.

i. When ACK is lost
ii. When Frame is lost
iii. When ACK timeout occurs
Answer : -

Flow Control - When a data frame (Layer-2 data) is sent from one host to another over a single medium, it is required that the sender and receiver work at the same speed. That is, the sender should send at a speed at which the receiver can process and accept the data. If the sender sends too fast, the receiver may be overloaded and data may be lost.

Error Control - When a data-frame is transmitted, there is a probability that the data-frame may be lost in transit or received corrupted. In both cases, the receiver does not receive the correct data-frame and the sender does not know anything about the loss. In such cases, both sender and receiver are equipped with protocols which help them detect transit errors such as loss of a data-frame. Then, either the sender retransmits the data-frame or the receiver requests that the previous data-frame be resent.

Requirements for error control mechanism :

 Positive ACK - When the receiver receives a correct frame, it should acknowledge it.
 Negative ACK - When the receiver receives a damaged frame or a
duplicate frame, it sends a NACK back to the sender and the sender
must retransmit the correct frame.
 Retransmission - The sender maintains a clock and sets a timeout period. If an acknowledgement of a previously transmitted data-frame does not arrive before the timeout, the sender retransmits the frame, assuming that the frame or its acknowledgement has been lost in transit.

There are three techniques which the Data-link layer may deploy to control errors by Automatic Repeat Request (ARQ) :

 Stop and Wait ARQ
 Go-Back-N ARQ
 Selective Repeat ARQ

Piggybacking

Piggybacking is the process which appends the acknowledgement of a frame to a data frame. Piggybacking can be used if the sender and receiver both have data to transmit. This increases the overall efficiency of transmission.

The three principles governing piggybacking when the station X wants to communicate with station Y are :

 If station X has both data and acknowledgment to send, it sends a data frame with the ACK field containing the sequence number of the frame to be acknowledged.
 If station X has only an acknowledgment to send, it waits for a finite
period of time to see whether a data frame is available to be sent. If a
data frame becomes available, then it piggybacks the acknowledgment
with it. Otherwise, it sends an ACK frame.
 If station X has only a data frame to send, it adds the last
acknowledgment with it. The station Y discards all duplicate
acknowledgments. Alternatively, station X may send the data frame
with the ACK field containing a bit combination denoting no
acknowledgment.

Stop and Wait ARQ

 In this method of flow control, the sender sends a single frame to the receiver & waits for an acknowledgment.
 The next frame is sent by the sender only when the acknowledgment of the previous frame is received.
 This process of sending a frame & waiting for an acknowledgment
continues as long as the sender has data to send.
 To end up the transmission sender transmits end of transmission (EOT)
frame.
 The main advantage of stop & wait protocols is its accuracy. Next frame
is transmitted only when the first frame is acknowledged. So there is no
chance of frame being lost.
 The main disadvantage of this method is that it is inefficient. It makes the transmission process slow. In this method a single frame travels from source to destination and a single acknowledgment travels from destination to source. As a result each frame sent and received uses the entire time needed to traverse the link. Moreover, if the two devices are a long distance apart, a lot of time is wasted waiting for ACKs, which increases the total transmission time.
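
To make the behaviour concrete, here is a minimal Python sketch of Stop & Wait ARQ with a 1-bit sequence number. It is only an illustrative simulation, not a real protocol implementation: a lost frame and a lost ACK both show up at the sender as a timeout and trigger a retransmission, and the loss probability and frame names are made up for the example.

import random

def stop_and_wait(frames, loss_prob=0.3, max_retries=10):
    """Simulate Stop-and-Wait ARQ with an alternating 0/1 sequence number."""
    seq = 0
    for payload in frames:
        for attempt in range(1, max_retries + 1):
            frame_lost = random.random() < loss_prob
            ack_lost = random.random() < loss_prob
            if frame_lost:
                print(f"{payload}: frame {seq} lost -> timeout, retransmit (attempt {attempt})")
                continue
            # Receiver has the frame; duplicates are detected via the sequence number,
            # so a retransmission caused by a lost ACK is simply discarded and re-ACKed.
            if ack_lost:
                print(f"{payload}: ACK {1 - seq} lost -> timeout, retransmit (attempt {attempt})")
                continue
            print(f"{payload}: frame {seq} delivered, ACK {1 - seq} received")
            break
        seq = 1 - seq  # alternate the sequence number for the next frame

stop_and_wait(["F1", "F2", "F3"])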

When ACK is lost - Sequence numbers on data packets help to solve the problem of a lost acknowledgement. Suppose the acknowledgement sent by the receiver gets lost. The sender retransmits the same data packet after its timer goes off. This prevents the occurrence of deadlock. The sequence number on the data packet helps the receiver to identify the duplicate data packet. The receiver discards the duplicate packet and re-sends the same acknowledgement.
When Frame is lost - After sending a data packet to the receiver, sender
starts the time out timer. If the data packet gets acknowledged before the
timer expires, sender stops the time out timer. If the timer goes off before
receiving the acknowledgement, sender retransmits the same data packet.
After retransmission, sender resets the timer. This prevents the occurrence of
deadlock.
When ACK timeout occurs

 Sender sends a data packet with Sequence Number 0 to the receiver.
 Receiver receives the data packet correctly and expects the next data packet with Sequence Number 1, indicating this by sending acknowledgement ACK-1 to the sender.
 Acknowledgement ACK-1 sent by the receiver gets lost on the way.
 Sender retransmits the same data packet with Sequence Number 0
when time out occurs.
 Receiver receives the data packet and discovers it is a duplicate. It expects the data packet with Sequence Number 1 but receives the data packet with Sequence Number 0 instead, so it discards the duplicate data packet and re-sends acknowledgement ACK-1.
Q6. (a) How does MACAW work? Show diagrammatically. What
are the added features in MACAW compared to MACA ?
Answer : - Multiple Access with Collision Avoidance (MACA) is a slotted media access control protocol used in wireless LAN data transmission to avoid collisions caused by the hidden station problem and to simplify the exposed station problem.

The basic idea of MACA is that a wireless network node makes an announcement before it sends a data frame, to inform other nodes to keep silent. When a node wants to transmit, it sends a signal called Request-To-Send (RTS) containing the length of the data frame to send. If the receiver allows the transmission, it replies to the sender with a signal called Clear-To-Send (CTS), also containing the length of the frame that is about to be received.

Let us consider that a transmitting station A has a data frame to send to a receiving station B. The operation works as follows :

 Station A sends an RTS frame to the receiving station.
 On receiving the RTS, station B replies by sending a CTS frame.
 On receipt of the CTS frame, station A begins transmitting its data frame.

Any node that receives the CTS frame knows that it is close to the receiver and therefore must not transmit a frame.
Any node that receives the RTS frame but not the CTS frame knows that it is not close enough to the receiver to interfere with it, so it is free to transmit data.
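
These two overhearing rules can be written as a tiny decision function; this is only an illustrative sketch, with hypothetical function and variable names.

def maca_role(heard_rts, heard_cts):
    """What a bystander node may do after overhearing MACA control frames.
    A node that hears the CTS is near the receiver and must stay silent;
    a node that hears only the RTS is near the sender and may transmit."""
    if heard_cts:
        return "defer (close to the receiver)"
    if heard_rts:
        return "free to transmit (close to the sender only)"
    return "unaware of the exchange - may transmit"

for rts, cts in [(True, True), (True, False), (False, True), (False, False)]:
    print(f"heard RTS={rts}, heard CTS={cts}: {maca_role(rts, cts)}")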

WLAN data transmission collisions may still occur, and MACA for Wireless (MACAW) was introduced to extend the function of MACA. It requires nodes to send acknowledgements after each successful frame transmission, and adds a carrier-sense function.

Q6. (b) What are the advantages of frame fragmentation in a wireless network? Explain.
Answer : - Solve it Yourself

Q7. How does a bridge operate in different LAN environments? What are the problems encountered in building a bridge between the various 802 LANs? Discuss.
Answer : - COMING SOON
Q8. Write Dijkstra's and Bellman Ford's shortest path routing algorithms and make a comparison between the two algorithms.
Answer : -

Shortest Path Algorithm Formula -

if (d[u] + c(u, v) < d[v]) then d[v] = d[u] + c(u, v)

d[u] represents the distance of vertex u
c(u, v) represents the cost of the edge which connects vertices u and v
d[v] represents the distance of vertex v

Dijkstra’s Algorithm

Dijkstra’s algorithm is an algorithm for finding the shortest paths between nodes in a graph with non-negative edge weights.

Process of Dijkstra’s Algorithm -

1. Create a set "SPT Set" (Shortest Path Tree Set) that keeps track of
vertices included in shortest path tree, i.e., whose minimum distance
from source is calculated and finalized. Initially, this set is empty.
2. Assign a distance value to all vertices in the input graph. Initialize all
distance values as INFINITE. Assign distance value as 0 for the source
vertex so that it is picked first.
3. While sptSet doesn’t include all vertices -
o Pick a vertex u which is not there in SPT Set and has minimum
distance value.
o Include u to SPT Set.
o Update distance value of all adjacent vertices of u.
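
A compact Python version of the procedure above, using a min-heap to pick the unfinished vertex with the smallest distance value. Since the example's figure is not reproduced here, the sample graph below is a hypothetical reconstruction chosen to match the distance values quoted in the example.

import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm (non-negative edge costs).
    graph: dict mapping vertex -> list of (neighbour, cost) pairs."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    spt_set = set()                       # vertices whose distance is finalised
    pq = [(0, source)]                    # min-heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u in spt_set:
            continue
        spt_set.add(u)
        for v, cost in graph[u]:
            if d + cost < dist[v]:        # relaxation: d[u] + c(u, v) < d[v]
                dist[v] = d + cost
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical graph reproducing the distances in the worked example (A is the source).
graph = {
    "A": [("B", 2), ("C", 2), ("E", 7)],
    "B": [("D", 4)],
    "C": [("E", 3)],
    "D": [("F", 5)],
    "E": [("F", 4)],
    "F": [],
}
print(dijkstra(graph, "A"))   # A=0, B=2, C=2, E=5, D=6, F=9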

Dijkstra’s Algorithm Example -

Implement Dijkstra’s algorithm on the following graph and find the shortest path from node A to the remaining nodes.
Start from vertex A and update the distance values of its adjacent vertices, which are B, C and E.
Initial SPT Set = { A }.
Update the distance values of the adjacent vertices of A. The distance values of vertices B, C and E become 2, 2 and 7 respectively.

Pick the vertex with minimum distance value and not already included in SPT
Set. The vertex B is picked and added to SPT Set.
So SPT Set now becomes { A , B }.
Update the distance values of adjacent vertices of B. The distance value of
vertex D is 6.

Again pick the vertex with minimum distance value and not already included
in SPT Set. The vertex C is picked and added to SPT Set.
So SPT Set now becomes { A , B , C }.
There are no undiscovered adjacent vertices of C, but the distance value of vertex E needs to change from 7 to 5.

Next vertex E is picked and added to SPT Set.
So SPT Set now becomes { A , B , C , E }.
Update the distance values of adjacent vertices of E. The distance value of
vertex F is 9.

We repeat the above steps until the SPT Set includes all vertices of the given graph.

Finally, we get the following Shortest Path Tree (SPT) { A , B , C , E , D , F }.


Bellman Ford’s Algorithm

Bellman Ford algorithm helps us find the shortest path from a vertex to all
other vertices of a weighted graph. It is similar to Dijkstra’s algorithm but it
can work with graphs in which edges can have negative weights.

Negative weight edges can create negative weight cycles, i.e. a cycle which will reduce the total path distance by coming back to the same point. Dijkstra’s algorithm cannot detect such a cycle and may give an incorrect result.

Bellman Ford algorithm works by overestimating the length of the path from
the starting vertex to all other vertices. Then it iteratively relaxes those
estimates by finding new paths that are shorter than the previously
overestimated paths.

In Bellman Ford’s algorithm, the shortest path contains at most (N - 1) edges, because a shortest path cannot contain a cycle. N represents the number of vertices present in the graph.
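
A compact Python version of the algorithm described above, using the relaxation rule d[u] + c(u, v) < d[v]. The edge list is taken from the worked example that follows, so the printed distances can be compared with the final values obtained there.

def bellman_ford(vertices, edges, source):
    """Bellman-Ford: shortest paths from source, allowing negative edge costs.
    edges: list of (u, v, cost) triples. Every edge is processed (N - 1) times,
    then one extra pass detects a negative-weight cycle."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:      # relaxation step
                dist[v] = dist[u] + cost
    for u, v, cost in edges:                  # any further improvement => negative cycle
        if dist[u] + cost < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# Edges and costs as listed in the worked example below.
vertices = ["A", "B", "C", "D", "E", "F", "G"]
edges = [("A", "B", 6), ("A", "C", 5), ("A", "D", 5), ("B", "E", -1),
         ("C", "B", -2), ("C", "E", 1), ("D", "C", -2), ("D", "F", -1),
         ("E", "G", 3), ("F", "G", 3)]
print(bellman_ford(vertices, edges, "A"))   # A=0, B=1, C=3, D=5, E=0, F=4, G=3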

Bellman Ford’s Algorithm Example -

Implement Bellman Ford’s algorithm on the following graph and find the shortest path from node A to the remaining nodes.
Initialize all distances as infinite, except the distance to the source itself. The total number of vertices in the graph is 7, so all edges must be processed 6 times.

Let all edges be processed in the following order : (A, B), (A, C), (A, D), (B, E), (C, B), (C, E), (D, C), (D, F), (E, G), (F, G).

First Iteration

Step 1 - Distance value of vertex A = 0. Cost of edge (A, B) = 6. Distance value of vertex B = ∞. Since (0 + 6) < ∞, the distance value of vertex B changes from ∞ to 6.
Step 2 - Distance value of vertex A = 0. Cost of edge (A, C) = 5. Distance value of vertex C = ∞. Since (0 + 5) < ∞, the distance value of vertex C changes from ∞ to 5.
Step 3 - Distance value of vertex A = 0. Cost of edge (A, D) = 5. Distance value of vertex D = ∞. Since (0 + 5) < ∞, the distance value of vertex D changes from ∞ to 5.
Step 4 - Distance value of vertex B = 6. Cost of edge (B, E) = -1. Distance value of vertex E = ∞. Since {6 + (-1)} < ∞, the distance value of vertex E changes from ∞ to 5.
Step 5 - Distance value of vertex C = 5. Cost of edge (C, B) = -2. Distance value of vertex B = 6. Since {5 + (-2)} < 6, the distance value of vertex B changes from 6 to 3.
Step 6 - Distance value of vertex C = 5. Cost of edge (C, E) = 1. Distance value of vertex E = 5. Since (5 + 1) > 5, the distance value of vertex E remains the same.
Step 7 - Distance value of vertex D = 5. Cost of edge (D, C) = -2. Distance value of vertex C = 5. Since {5 + (-2)} < 5, the distance value of vertex C changes from 5 to 3.
Step 8 - Distance value of vertex D = 5. Cost of edge (D, F) = -1. Distance value of vertex F = ∞. Since {5 + (-1)} < ∞, the distance value of vertex F changes from ∞ to 4.
Step 9 - Distance value of vertex E = 5. Cost of edge (E, G) = 3. Distance value of vertex G = ∞. Since (5 + 3) < ∞, the distance value of vertex G changes from ∞ to 8.
Step 10 - Distance value of vertex F = 4. Cost of edge (F, G) = 3. Distance value of vertex G = 8. Since (4 + 3) < 8, the distance value of vertex G changes from 8 to 7.

Second Iteration

Step 1 - Distance value of vertex A = 0. Cost of edge (A, B) = 6. Distance value of vertex B = 3. Since (0 + 6) > 3, the distance value of vertex B remains the same.
Step 2 - Distance value of vertex A = 0. Cost of edge (A, C) = 5. Distance value of vertex C = 3. Since (0 + 5) > 3, the distance value of vertex C remains the same.
Step 3 - Distance value of vertex A = 0. Cost of edge (A, D) = 5. Distance value of vertex D = 5. Since (0 + 5) = 5, the distance value of vertex D remains the same.
Step 4 - Distance value of vertex B = 3. Cost of edge (B, E) = -1. Distance value of vertex E = 5. Since {3 + (-1)} < 5, the distance value of vertex E changes from 5 to 2.
Step 5 - Distance value of vertex C = 3. Cost of edge (C, B) = -2. Distance value of vertex B = 3. Since {3 + (-2)} < 3, the distance value of vertex B changes from 3 to 1.
Step 6 - Distance value of vertex C = 3. Cost of edge (C, E) = 1. Distance value of vertex E = 2. Since (3 + 1) > 2, the distance value of vertex E remains the same.
Step 7 - Distance value of vertex D = 5. Cost of edge (D, C) = -2. Distance value of vertex C = 3. Since {5 + (-2)} = 3, the distance value of vertex C remains the same.
Step 8 - Distance value of vertex D = 5. Cost of edge (D, F) = -1. Distance value of vertex F = 4. Since {5 + (-1)} = 4, the distance value of vertex F remains the same.
Step 9 - Distance value of vertex E = 2. Cost of edge (E, G) = 3. Distance value of vertex G = 7. Since (2 + 3) < 7, the distance value of vertex G changes from 7 to 5.
Step 10 - Distance value of vertex F = 4. Cost of edge (F, G) = 3. Distance value of vertex G = 5. Since (4 + 3) > 5, the distance value of vertex G remains the same.
Third Iteration

Step 1 - Distance value of vertex A = 0. Cost of edge (A, B) = 6. Distance value of vertex B = 1. Since (0 + 6) > 1, the distance value of vertex B remains the same.
Step 2 - Distance value of vertex A = 0. Cost of edge (A, C) = 5. Distance value of vertex C = 3. Since (0 + 5) > 3, the distance value of vertex C remains the same.
Step 3 - Distance value of vertex A = 0. Cost of edge (A, D) = 5. Distance value of vertex D = 5. Since (0 + 5) = 5, the distance value of vertex D remains the same.
Step 4 - Distance value of vertex B = 1. Cost of edge (B, E) = -1. Distance value of vertex E = 2. Since {1 + (-1)} < 2, the distance value of vertex E changes from 2 to 0.
Step 5 - Distance value of vertex C = 3. Cost of edge (C, B) = -2. Distance value of vertex B = 1. Since {3 + (-2)} = 1, the distance value of vertex B remains the same.
Step 6 - Distance value of vertex C = 3. Cost of edge (C, E) = 1. Distance value of vertex E = 0. Since (3 + 1) > 0, the distance value of vertex E remains the same.
Step 7 - Distance value of vertex D = 5. Cost of edge (D, C) = -2. Distance value of vertex C = 3. Since {5 + (-2)} = 3, the distance value of vertex C remains the same.
Step 8 - Distance value of vertex D = 5. Cost of edge (D, F) = -1. Distance value of vertex F = 4. Since {5 + (-1)} = 4, the distance value of vertex F remains the same.
Step 9 - Distance value of vertex E = 0. Cost of edge (E, G) = 3. Distance value of vertex G = 5. Since (0 + 3) < 5, the distance value of vertex G changes from 5 to 3.
Step 10 - Distance value of vertex F = 4. Cost of edge (F, G) = 3. Distance value of vertex G = 3. Since (4 + 3) > 3, the distance value of vertex G remains the same.

Only vertices E and G change in the third iteration, and a further pass over the edges produces no changes, so the algorithm terminates with the distances A = 0, B = 1, C = 3, D = 5, E = 0, F = 4, G = 3.

Q9. (a) Discuss the general principle of congestion control and the mechanisms used for congestion control in a packet switched network.
Answer : - Congestion is an important issue that can arise in a packet switched network. Congestion is a situation in communication networks in which too many packets are present in a part of the subnet and performance degrades. Congestion in a network may occur when the load on the network (i.e. the number of packets sent to the network) is greater than the capacity of the network (i.e. the number of packets a network can handle). Network congestion occurs in case of traffic overloading.
Causes of Congestion

 The input traffic rate exceeds the capacity of the output lines. If, suddenly, streams of packets start arriving on three or four input lines and all need the same output line, a queue will build up. If there is insufficient memory to hold all the packets, packets will be lost. Increasing the memory to an unlimited size does not solve the problem, because by the time packets reach the front of the queue, they have already timed out (as they waited in the queue). When the timer goes off, the source transmits duplicate packets that are also added to the queue. Thus the same packets are added again and again, increasing the load all the way to the destination.
 The router's buffer is too limited.
 Congestion in a subnet can occur if the processors are slow. A slow CPU at a router will perform routine tasks such as queuing buffers and updating tables slowly. As a result, queues build up even though there is excess line capacity.
 Congestion is also caused by slow links. This problem would seem to be solved by using high speed links, but that is not always the case. Sometimes an increase in link bandwidth can further deteriorate the congestion problem, as higher speed links may make the network more unbalanced. If a router does not have free buffers, it starts ignoring/discarding newly arriving packets. When these packets are discarded, the sender may retransmit them after the timer goes off. Such packets are transmitted by the sender again and again until it gets an acknowledgement for them. Therefore multiple transmissions of packets force congestion to take place at the sending end.

How to correct the Congestion Problem

Congestion control mechanisms are divided into two categories, one category
prevents the congestion from happening and the other category removes
congestion after it has taken place. These two categories are :

1. Open Loop
2. Closed Loop

Open loop congestion control policies are applied to prevent congestion before it happens. The congestion control is handled either by the source or the destination.

 Window Policy - To implement the window policy, the selective repeat method is used for congestion control.

The go-back-n protocol works well if errors are rare, but if the line is poor it wastes a lot of bandwidth on retransmitted frames. Selective Repeat attempts to retransmit only those packets that are actually lost.

 Acknowledgement Policy - Since acknowledgements are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments :

The receiver can send one acknowledgement for N packets rather than sending an acknowledgement for every single packet.

The receiver can send an acknowledgment only if it has a packet to send or a timer expires.

 Discarding Policy - A router may discard less sensitive packets when
congestion is likely to happen. Such a discarding policy may prevent
congestion and at the same time may not harm the integrity of the
transmission.
 Retransmission Policy - The sender retransmits a packet, if it feels that
the packet it has sent is lost or corrupted. This retransmission may
increase the congestion in the network. To prevent congestion,
retransmission timers must be designed to prevent congestion and also
able to optimize efficiency.
 Admission Policy - In the admission policy, a mechanism should be used to prevent congestion. Switches in a flow should first check the resource requirement of a network flow before transmitting it further. If there is a chance of congestion, or there is already congestion in the network, the router should deny establishing a virtual circuit connection to prevent further congestion.
Q9. (b) Explain the implementation of token bucket traffic
shaper with the help of a diagram ?
Answer : - The token bucket algorithm is based on an analogy of a fixed capacity bucket into which tokens (normally representing a unit of bytes or a single packet of predetermined size) are added at a fixed rate. When a packet is ready to be sent, it is first checked whether the bucket contains sufficient tokens or not. If sufficient tokens are present in the bucket, the appropriate number of tokens (equivalent to the length of the packet in bytes) is removed and the packet is passed for transmission. If there is a deficiency of tokens, the packet has to wait in a queue.
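
A minimal Python sketch of the token bucket logic described above; the rate, capacity and packet sizes are illustrative values, and a real shaper would queue the packet instead of just reporting that it must wait.

import time

class TokenBucket:
    """Token bucket traffic shaper (illustrative sketch).
    Tokens accumulate at `rate` tokens per second up to `capacity`; a packet of
    `size` bytes may be sent only if at least `size` tokens are present."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self, size):
        """Return True and remove tokens if the packet can be sent now."""
        self._refill()
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False          # deficiency of tokens: packet must wait in a queue

bucket = TokenBucket(rate=1000, capacity=4000)   # 1000 tokens/s, burst of 4000 bytes
for pkt in (1500, 1500, 1500):                   # third packet exceeds the remaining tokens
    print(bucket.try_send(pkt))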
Q10. Explain with the help of an example and a diagram how the congestion control algorithm (slow start algorithm) works at the transport layer.
Answer : - TCP slow start is an algorithm which balances the speed of a
network connection. Slow start gradually increases the amount of data
transmitted until it finds the network’s maximum carrying capacity.

One of the most common ways to optimize the speed of a connection is to increase the speed of the link (i.e. increase the amount of bandwidth). However, any link can become overloaded if a device tries to send out too much data. Overloading a link is known as congestion, and it can result in slow communications or even data loss.

Slow start prevents a network from becoming congested by regulating the amount of data that’s sent over it. It negotiates the connection between a sender and receiver by defining the amount of data that can be transmitted with each packet, and slowly increases the amount of data until the network’s capacity is reached. This ensures that as much data is transmitted as possible without clogging the network.

Slow Start Process Step by Step

1. A sender attempts to communicate to a receiver. The sender’s initial packet contains a small congestion window, which is determined based on the sender’s maximum window.
2. The receiver acknowledges the packet and responds with its own
window size. If the receiver fails to respond, the sender knows not to
continue sending data.
3. After receiving the acknowledgement, the sender increases the next
packet’s window size. The window size gradually increases until the
receiver can no longer acknowledge each packet, or until either the
sender or the receiver’s window limit is reached.

Once a limit has been determined, slow start’s job is done. Other congestion
control algorithms take over to maintain the speed of the connection.
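
The answer above describes slow start in terms of window negotiation; the sketch below instead follows the common textbook formulation, in which the congestion window starts at one segment and roughly doubles every round-trip time until it reaches the slow-start threshold (ssthresh) or the receiver's advertised window. The numbers are illustrative only.

def slow_start(ssthresh, rwnd, mss=1):
    """Sketch of textbook slow start: cwnd grows by one MSS per ACK, which is
    roughly a doubling per round-trip time, until ssthresh or rwnd is reached."""
    cwnd = mss
    rtt = 0
    while cwnd < min(ssthresh, rwnd):
        print(f"RTT {rtt}: cwnd = {cwnd} segment(s)")
        cwnd = min(cwnd * 2, ssthresh, rwnd)   # exponential growth phase
        rtt += 1
    print(f"RTT {rtt}: cwnd = {cwnd} -> slow start ends, congestion avoidance takes over")

slow_start(ssthresh=16, rwnd=64)   # cwnd: 1, 2, 4, 8, then 16 ends slow start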
Q11. (a) Define digital signature and explain its benefits.
Answer : - A digital signature is an electronic signature that can be used to
authenticate the identity of the sender of a message or the signer of a
document, and to ensure that the original content of the message or document
that has been sent is unchanged. Digital signatures are easily transportable,
cannot be imitated by someone else, and can be automatically time-stamped.
A digital signature can be used with any kind of message, whether it is
encrypted or plaintext.

Digital signatures require a key pair called the public and private keys. Just as physical keys are used for locking and unlocking, in cryptography the equivalent functions are encryption and decryption. The private key is kept confidential with the owner, usually on a secure medium such as a crypto smart card or crypto token. The public key is shared with everyone. Information encrypted by a private key can only be decrypted using the corresponding public key.
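
As an illustration of the sign-with-the-private-key / verify-with-the-public-key idea, here is a short Python sketch using the third-party cryptography package; the choice of library, key size and message are assumptions for the example, not something prescribed by the text.

# Requires: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"the original content of the document"

# Sign with the private key (kept secret by the owner).
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone holding the public key can verify; changing even one bit of the
# message makes verification fail.
try:
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")
except InvalidSignature:
    print("signature invalid - message was altered or key does not match")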
Digital Signature Versus Handwritten Signatures

An ink signature can be easily replicated from one document to another by copying the image manually or electronically. Digital Signatures
cryptographically bind an electronic identity to an electronic document and
the digital signature cannot be copied to another document. Further, paper
contracts often have the ink signature block on the last page, allowing
previous pages to be replaced after the contract has been signed. Digital
signatures on the other hand compute the hash or digest of the complete
document and a change of even one bit in the previous pages of the document
will make the digital signature verification fail.

Benefits of Digital Signature

 Reduce Cost - There are a lot of direct savings from switching to a paperless process, including the cost of paper, ink, printer maintenance
and shipping costs. You will also notice a lot of indirect savings,
including the time saved that would’ve been spent filing documents,
rekeying data, searching for lost documents or tracking down a
contract that’s been lost in the mail.
 Get Paid Faster - Because it’s so fast and easy to sign documents online,
you’re sure to see faster contract turnaround. It’s also easy to quickly
execute contracts that have multiple signers. After the first person
signs, the electronic signature software automatically routes
documents to the next signer in the workflow. And when you get
documents signed in minutes, you can get paid faster than ever before.
 Enhance Customer Relationships - Your customers are used to doing
business online, and they’ve come to expect businesses they work with
to provide online services. With digital signature software, your
customers can sign contracts online with nothing to download or
install. As long as they have an Internet connection, they can sign
documents no matter where life takes them. This service adds value for
your customers by making it fast and easy to do business with your
company.
 Upgrade Document Security - In the paper world, you secure your
documents by putting them in a locked file cabinet. You may even put
that file cabinet in a locked room. But even with those precautions,
someone could break in and tamper with documents. The only
evidence you’d have would be a broken lock (if that), and then you
would have to guess which file was altered.

This type of document security is a liability for your company, and it’s
unnecessary. With advanced electronic signature software (a type of
electronic signature called a “Digital Signature”), you can safeguard
your documents with a high level of document security and evidence.
Each signature is protected with a tamper-evident seal, which alerts
you if any part of the document is changed after signing. Signed
documents also come with a highly detailed log of events of the
document’s lifecycle. Using this evidence, you can see when each
person signed the document, which signers downloaded a copy of the
finished document, and much more.

Q11. (b) What kind of a model is being used in India to provide public key infrastructure related services (i.e. management of public keys)? Elaborate.
Answer : - A public key infrastructure (PKI) is a set of roles, policies,
hardware, software and procedures needed to create, manage, distribute, use,
store and revoke digital certificates and manage public-key encryption. The
purpose of a PKI is to facilitate the secure electronic transfer of information
for a range of network activities such as e-commerce, internet banking and
confidential email.

Public key cryptography is a cryptographic technique that enables entities to securely communicate on an insecure public network, and reliably verify the identity of an entity via digital signatures.

A public key infrastructure (PKI) is a system for the creation, storage, and
distribution of digital certificates which are used to verify that a particular
public key belongs to a certain entity. The PKI creates digital certificates
which map public keys to entities, securely stores these certificates in a
central repository and revokes them if needed.

A PKI consists of

 A certificate authority (CA) that stores, issues and signs the digital certificates
 A registration authority (RA) which verifies the identity of entities
requesting their digital certificates to be stored at the CA
 A central directory — i.e., a secure location in which to store and index
keys
 A certificate management system managing things like the access to
stored certificates or the delivery of the certificates to be issued.
 A certificate policy stating the PKI's requirements concerning its
procedures. Its purpose is to allow outsiders to analyze the PKI's
trustworthiness.

Q12. Discuss the implementation of Kerberos mechanism.
Answer : - Kerberos is a ticketing-based authentication system, based on the
use of symmetric keys. Kerberos uses tickets to provide authentication to
resources instead of passwords. This eliminates the threat of password
stealing via network sniffing. One of the biggest benefits of Kerberos is its
ability to provide single sign-on (SSO). Once you log into your Kerberos
environment, you will be automatically logged into other applications in the
environment.

To help provide a secure environment, Kerberos makes use of Mutual Authentication. In Mutual Authentication, both the server and the client must
be authenticated. The client knows that the server can be trusted, and the
server knows that the client can be trusted. This authentication helps prevent
man-in-the-middle attacks and spoofing. Kerberos is also time sensitive. The
tickets in a Kerberos environment must be renewed periodically or they will
expire.
When a user needs to access a service protected by Kerberos, the Kerberos
protocol process can be divided into two phases - User Identity Authentication
and Service Access.

User Identity Authentication - User authentication is a process of checking validity of identity information provided by users in the Kerberos
authentication service. Identity information can be user names and passwords
or information that can provide real identities in other forms. If user
information passes the validity check, the Kerberos authentication service
returns a valid Ticket-Granting Ticket (TGT) token, proving that the user has
passed identity authentication. The user uses the TGT in the subsequent
Service Access process.

Service Access - When the user needs to access a service, the user requests
the Ticket-Granting Service (TGS) from the Kerberos server based on the TGT
obtained in the first phase, providing the name of the service to be accessed.
TGS checks the TGT and information about the service to be accessed. After
the information passes the check, TGS returns a Service-Granting Ticket (SGT)
token to the user. The user requests the component service based on the SGT
and related user authentication information. The component service decrypts
the SGT information in a symmetrical way and finishes user authentication. If
the user passes the authentication process, the user can successfully access
related resources of the service.

Kerberos in Windows Systems

Kerberos is very prevalent in the Windows environment. In fact, Windows 2000 and later use Kerberos as the default method of authentication. When
you install your Active Directory domain, the domain controller is also the Key
Distribution Center. In order to use Kerberos in a Windows environment, your
client system must be a part of the Windows domain. Kerberos is used when
accessing file servers, Web servers, and other network resources. When you
attempt to access a Web server, Windows will try to sign you in using
Kerberos. If Kerberos authentication does not work, then the system will fall
back to NTLM authentication.
