(A) What Is The Need of Modulating A Signal? Will It Be A Right Approach To Send The Information As The Signal Itself?
Answer : - In the modulation process, two signals are used, namely the
modulating signal and the carrier. The modulating signal is nothing but the
baseband or information signal, while the carrier is a high-frequency
sinusoidal signal.
The receiver demodulates the received modulated signal and recovers the
original information signal. Thus, demodulation is exactly the opposite of
modulation.
In the process of modulation, the carrier wave acts as a carrier which
conveys the information signal from the transmitter to the receiver.
Need of Modulation
Advantages of Modulation
If several message signals were transmitted directly at their original baseband
frequencies, there would be no way to tell them apart; they would interfere with
each other, leading to a lot of noise in the system and a very poor output.
By using high-frequency carrier waves and allotting a separate band of
frequencies to each message, the signals do not get mixed up and the
received signals can be recovered cleanly.
Modulation also keeps the antenna to a practical size: for efficient radiation,
the antenna length must be comparable to the wavelength λ of the transmitted
signal, where

λ = c/f

c → is the velocity of light
f → is the frequency of the signal to be transmitted

A low-frequency baseband signal would therefore need an impractically long
antenna, whereas translating it to a high-frequency carrier keeps the antenna small.
There are three mechanisms for modulating digital data into an analog signal :
Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), and Phase Shift
Keying (PSK). In addition, there is a fourth (and better) mechanism that
combines changing both the amplitude and phase, called Quadrature
Amplitude Modulation (QAM).
Amplitude Shift Keying (ASK) - In an ASK system, the binary symbol 1 is
represented by transmitting a carrier wave of fixed amplitude and fixed
frequency for a bit duration of T seconds. If the data bit is 1, the carrier
signal is transmitted; if the data bit is 0, no carrier (or a carrier of a
different amplitude) is transmitted.
The simplest and most common form of ASK operates as a switch, using the
presence of a carrier wave to indicate a binary one and its absence to indicate
a binary zero. This type of modulation is called on-off keying (OOK), and is
used at radio frequencies to transmit Morse code (referred to as continuous
wave operation).
The ASK technique is commonly used to transmit digital data over optical
fiber.
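As a rough illustration only (NumPy, and the carrier frequency, bit rate and
sampling rate below, are my own assumptions rather than values from the text),
an OOK modulator can be sketched as:

```python
import numpy as np

# Illustrative parameters (assumed, not from the text)
fc = 10_000        # carrier frequency in Hz
bit_rate = 1_000   # bits per second
fs = 100_000       # sampling rate in Hz

def ook_modulate(bits):
    """On-off keying: carrier present for a 1 bit, absent for a 0 bit."""
    samples_per_bit = fs // bit_rate
    t = np.arange(samples_per_bit) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    return np.concatenate([carrier if b else np.zeros(samples_per_bit)
                           for b in bits])

waveform = ook_modulate([1, 0, 1, 1, 0])
```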
Frequency Shift Keying (FSK) - In FSK, the frequency of the carrier signal is
varied according to the digital signal changes. The frequency of the modulated
signal is constant for the duration of one signal element, but changes for the
next signal element if the data element changes. Both amplitude and phase
remain constant for all signal elements.
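A comparable sketch for binary FSK, again with assumed frequencies: the
amplitude stays constant while each data bit selects one of two carrier
frequencies.

```python
import numpy as np

fs = 100_000              # sampling rate in Hz (assumed)
bit_rate = 1_000          # bits per second (assumed)
f0, f1 = 8_000, 12_000    # frequencies for bit 0 and bit 1 (assumed)

def fsk_modulate(bits):
    """Binary FSK: constant amplitude, frequency chosen per bit."""
    samples_per_bit = fs // bit_rate
    t = np.arange(samples_per_bit) / fs
    return np.concatenate([np.sin(2 * np.pi * (f1 if b else f0) * t)
                           for b in bits])

waveform = fsk_modulate([0, 1, 1, 0])
```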
Phase Shift Keying (PSK) - In PSK, the phase of the carrier is varied to
represent two or more different signal elements. Both amplitude and
frequency remain constant when the phase changes. The simplest PSK is
binary PSK, in which we have only two signal elements, one with a phase of 0°,
and the other with a phase of 180°. The modulation is accomplished by
varying the sine and cosine inputs at a precise time. It is widely used for
wireless LANs, RFID and Bluetooth communication. The following figure gives
a conceptual view of PSK.
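A minimal BPSK sketch under the same assumed parameters, where each data bit
selects a carrier phase of 0° or 180°:

```python
import numpy as np

fs = 100_000       # sampling rate in Hz (assumed)
bit_rate = 1_000   # bits per second (assumed)
fc = 10_000        # carrier frequency in Hz (assumed)

def bpsk_modulate(bits):
    """Binary PSK: phase 0 for one symbol, 180 degrees for the other."""
    samples_per_bit = fs // bit_rate
    t = np.arange(samples_per_bit) / fs
    return np.concatenate([np.sin(2 * np.pi * fc * t + (np.pi if b else 0.0))
                           for b in bits])

waveform = bpsk_modulate([1, 0, 1])
```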
Quadrature Amplitude Modulation (QAM) - QAM varies both the amplitude
and the phase of the carrier, giving a form of modulation that achieves a high
level of spectral efficiency.
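As an illustration, a 16-QAM mapper sends 4 bits per symbol by choosing one of
16 amplitude/phase combinations; the Gray-coded level mapping below is one
common convention, used here only as an assumed example.

```python
# Each group of 4 bits selects one of 16 (I, Q) constellation points,
# i.e. a combination of amplitude and phase.
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def qam16_map(bits):
    """Map a bit sequence (length a multiple of 4) to complex symbols."""
    symbols = []
    for i in range(0, len(bits), 4):
        b = bits[i:i + 4]
        i_level = LEVELS[(b[0], b[1])]   # first two bits -> in-phase level
        q_level = LEVELS[(b[2], b[3])]   # last two bits  -> quadrature level
        symbols.append(complex(i_level, q_level))
    return symbols

print(qam16_map([1, 0, 1, 1, 0, 0, 0, 1]))   # two symbols
```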
Q2. (a) Discuss the different approaches to circuit switching.
Why is it suitable for voice transmission? What are its
limitations?
Answer : - A circuit-switched communication system involves three phases :
circuit establishment (setting up dedicated links between the source and
destination); data transfer (transmitting the data between the source and
destination); and circuit disconnect (removing the dedicated links).
Omega Network
A Clos network uses 3 stages to switch from N inputs to N outputs. In the first
stage, there are r = N/n crossbar switches, each of size n*m. In the second
stage there are m switches of size r*r, and the last stage is a mirror of the
first stage, with r switches of size m*n. A Clos network is completely
non-blocking if m >= 2n-1.
We assume that N = 9, n = 3 and m = 2n. So r = N/n = 9/3 = 3 and m = 2*3 = 6.
From the above diagram we can say that multistage switching overcomes the
limitations of circuit switching because of the multiple paths present in the
network.
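These numbers can be checked with a small sketch (the function and variable
names are mine) that computes the stage sizes and tests the non-blocking
condition m >= 2n-1:

```python
def clos_parameters(N, n, m):
    """Return the stage sizes of a 3-stage Clos network and whether it is
    completely non-blocking (m >= 2n - 1)."""
    r = N // n
    return {
        "stage 1": f"{r} switches of size {n}x{m}",
        "stage 2": f"{m} switches of size {r}x{r}",
        "stage 3": f"{r} switches of size {m}x{n}",
        "non_blocking": m >= 2 * n - 1,
    }

print(clos_parameters(N=9, n=3, m=6))   # m = 6 >= 2*3 - 1 = 5 -> non-blocking
```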
Q3. A bit string 011100111110001110 needs to be transmitted
at the data link layer. What is the string actually transmitted
after bit stuffing ?
Answer : - Solve it Yourself
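The result can be checked with a sketch of the standard bit-stuffing rule used
with HDLC-style framing (insert a 0 after every run of five consecutive 1s);
the function name is illustrative.

```python
def bit_stuff(bits):
    """Insert a '0' after every run of five consecutive '1' bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit
            run = 0
    return ''.join(out)

print(bit_stuff("011100111110001110"))   # -> 0111001111100001110
```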
Q4. Define checksum and write the algorithm for computing the
checksum ?
Answer : - COMING SOON
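Until the written answer is added, here is a hedged sketch of the Internet
checksum (the 16-bit one's complement sum used by IP, TCP and UDP); the
function name and the sample input are my own.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's complement sum of the data, folded and complemented."""
    if len(data) % 2:                # pad odd-length data with a zero byte
        data += b'\x00'
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return ~total & 0xFFFF                        # one's complement of the sum

print(hex(internet_checksum(b"hello world")))
```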
Flow Control - When a data frame (Layer-2 data) is sent from one host to
another over a single medium, the sender and the receiver must work at the
same speed. That is, the sender transmits at a speed at which the receiver can
process and accept the data. If the sender sends too fast, the receiver may be
overloaded and data may be lost.
There are three techniques which the data link layer may deploy to control
errors by Automatic Repeat Request (ARQ) : Stop-and-Wait ARQ, Go-Back-N ARQ,
and Selective Repeat ARQ.
Piggybacking
When ACK is lost - Sequence numbers on data packets help to solve the
problem of delayed or lost acknowledgements. Suppose the acknowledgement sent
by the receiver gets lost. The sender retransmits the same data packet after its
timer goes off, which prevents the occurrence of deadlock. The sequence
number on the data packet helps the receiver identify the duplicate data
packet; the receiver discards the duplicate and re-sends the same
acknowledgement.
When Frame is lost - After sending a data packet to the receiver, the sender
starts a timeout timer. If the data packet is acknowledged before the timer
expires, the sender stops the timer. If the timer goes off before the
acknowledgement arrives, the sender retransmits the same data packet and
resets the timer. This prevents the occurrence of deadlock.
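The two cases can be made concrete with a small Stop-and-Wait style simulation
sketch (the class, function names and loss probability are invented for
illustration): a timeout triggers retransmission, and the 1-bit sequence
number lets the receiver discard duplicates caused by a lost ACK.

```python
import random

random.seed(2)

class Receiver:
    def __init__(self):
        self.expected = 0
        self.delivered = []

    def on_frame(self, seq, data):
        if seq == self.expected:        # new frame: deliver and flip expected seq
            self.delivered.append(data)
            self.expected ^= 1
        # a duplicate frame is discarded, but the ACK is re-sent either way
        return seq                      # ACK number

def send_stop_and_wait(frames, rx, loss=0.3):
    seq = 0
    for data in frames:
        while True:                      # retransmit until the ACK arrives
            if random.random() < loss:   # data frame lost -> timer expires
                print(f"frame seq={seq} lost, timeout, retransmit")
                continue
            ack = rx.on_frame(seq, data)
            if random.random() < loss:   # ACK lost -> retransmit, receiver sees duplicate
                print(f"ACK {ack} lost, timeout, retransmit")
                continue
            break                        # ACK received, move to the next frame
        seq ^= 1

rx = Receiver()
send_stop_and_wait(["F0", "F1", "F2"], rx)
print(rx.delivered)                      # each frame delivered exactly once, in order
```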
When ACK timeout occurs
Any node that receives the CTS frame knows that it is close to the receiver
and therefore must not transmit a frame.
Any node that receives the RTS frame but not the CTS frame knows that it is
not close enough to the receiver to interfere with it, so it is free to transmit data.
WLAN data transmission collisions may still occur, so MACA for Wireless
(MACAW) was introduced to extend the function of MACA. It requires nodes
to send acknowledgements after each successful frame transmission, and it
adds the function of carrier sense.
Dijkstra’s Algorithm
1. Create a set "SPT Set" (Shortest Path Tree Set) that keeps track of
vertices included in shortest path tree, i.e., whose minimum distance
from source is calculated and finalized. Initially, this set is empty.
2. Assign a distance value to all vertices in the input graph. Initialize all
distance values as INFINITE. Assign distance value as 0 for the source
vertex so that it is picked first.
3. While the SPT Set doesn't include all vertices -
   o Pick a vertex u which is not in the SPT Set and has the minimum
     distance value.
   o Include u in the SPT Set.
   o Update the distance values of all vertices adjacent to u : for each
     adjacent vertex v, if the distance value of u plus the weight of edge
     (u, v) is less than the current distance value of v, update the distance
     value of v (see the sketch after this list).
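A compact Python sketch of these steps (the adjacency-dictionary
representation and the heapq-based minimum selection are my own choices; the
edge weights below are inferred from the worked example that follows and may
not reflect every edge of the original figure):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbour, weight) pairs."""
    dist = {v: float('inf') for v in graph}   # step 2: all distances INFINITE
    dist[source] = 0                          # source distance 0, picked first
    spt_set = set()                           # step 1: SPT Set starts empty
    heap = [(0, source)]
    while heap:                               # step 3: until all vertices included
        d, u = heapq.heappop(heap)            # pick minimum-distance vertex u
        if u in spt_set:
            continue
        spt_set.add(u)                        # include u in the SPT Set
        for v, w in graph[u]:                 # update vertices adjacent to u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Weights inferred from the worked example (B, C, E from A at 2, 2, 7;
# D via B at 6; E via C at 5); other edges of the figure may be missing.
graph = {'A': [('B', 2), ('C', 2), ('E', 7)],
         'B': [('D', 4)],
         'C': [('E', 3)],
         'D': [],
         'E': []}
print(dijkstra(graph, 'A'))
```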
Apply Dijkstra’s algorithm to the following graph and find the shortest
paths from node A to the remaining nodes.
Start from vertex A and update the distance values of its adjacent vertices,
which are B, C and E.
Initial SPT Set { A }.
Update the distance values of the adjacent vertices of A. The distance values of
vertices B, C and E become 2, 2 and 7 respectively.
Pick the vertex with minimum distance value and not already included in SPT
Set. The vertex B is picked and added to SPT Set.
So SPT Set now becomes { A , B }.
Update the distance values of adjacent vertices of B. The distance value of
vertex D is 6.
Again pick the vertex with minimum distance value and not already included
in SPT Set. The vertex C is picked and added to SPT Set.
So SPT Set now becomes { A , B , C }.
There are no undiscovered adjacent vertices of C, but the distance value of
vertex E needs to change from 7 to 5.
We repeat the above steps until the SPT Set includes all vertices of the given
graph.
Bellman Ford algorithm helps us find the shortest path from a vertex to all
other vertices of a weighted graph. It is similar to Dijkstra’s algorithm but it
can work with graphs in which edges can have negative weights.
Negative weight edges can create negative weight cycles, i.e. cycles that
reduce the total path distance by coming back to the same point. Dijkstra’s
algorithm cannot detect such a cycle and may give an incorrect result.
Bellman Ford algorithm works by overestimating the length of the path from
the starting vertex to all other vertices. Then it iteratively relaxes those
estimates by finding new paths that are shorter than the previously
overestimated paths.
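A minimal Python sketch of this relaxation procedure, using a small edge list
with hypothetical weights (not the graph from the exercise below):

```python
def bellman_ford(vertices, edges, source):
    """edges: list of (u, v, weight) triples. Relax all edges |V|-1 times,
    then check once more for a negative-weight cycle."""
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):         # the overestimates shrink each pass
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w          # relax edge (u, v)
    for u, v, w in edges:                      # extra pass: anything still relaxable?
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# Hypothetical weights, for illustration only
edges = [('A', 'B', 4), ('A', 'C', 2), ('C', 'B', -1), ('B', 'D', 3), ('C', 'D', 5)]
print(bellman_ford('ABCD', edges, 'A'))
```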
Apply the Bellman-Ford algorithm to the following graph and find the
shortest paths from node A to the remaining nodes.
Initialize all distances as infinite, except the distance to the source itself,
which is 0. The edge list names 7 vertices (A to G), so all edges must be
processed 6 times.
Let all edges be processed in the following order : (A, B), (A, C), (A, D), (B, E),
(C, B), (C, E), (D, C), (D, F), (E, G), (F, G).
First Iteration
Second Iteration
The input traffic rate exceeds the capacity of the output lines. If,
suddenly, a stream of packets starts arriving on three or four input lines
and all of them need the same output line, a queue builds up. If there is
insufficient memory to hold all the packets, packets will be lost.
Increasing the memory to an unlimited size does not solve the problem:
by the time packets reach the front of the queue they have already timed
out (while waiting in the queue), and when the timer goes off the source
transmits duplicate packets that are also added to the queue. Thus the
same packets are added again and again, increasing the load all the way
to the destination.
The router's buffer is too limited.
Congestion in a subnet can also occur if the processors are slow. A slow
CPU at a router performs routine tasks such as queueing buffers and
updating tables slowly. As a result, queues build up even though there is
excess line capacity.
Congestion is also caused by slow links. It may seem that the problem
would be solved by using high-speed links, but that is not always the
case: sometimes an increase in link bandwidth can further worsen
congestion, because higher-speed links may make the network more
unbalanced. If a router does not have free buffers, it starts
ignoring/discarding newly arriving packets. When these packets are
discarded, the sender retransmits them after the timer goes off, again
and again, until it receives an acknowledgement for them. These multiple
transmissions of the same packets cause congestion at the sending end as
well.
Congestion control mechanisms are divided into two categories, one category
prevents the congestion from happening and the other category removes
congestion after it has taken place. These two categories are :
1. Open Loop
2. Closed Loop
The go-back-n protocol works well if errors are rare, but if the line is
poor it wastes a lot of bandwidth on retransmitted frames. Selective
Repeat attempts to retransmit only those packets that are actually lost.
Discarding Policy - A router may discard less sensitive packets when
congestion is likely to happen. Such a discarding policy may prevent
congestion and at the same time may not harm the integrity of the
transmission.
Retransmission Policy - The sender retransmits a packet if it believes that
the packet it has sent is lost or corrupted. This retransmission may
increase the congestion in the network. To avoid this, retransmission
timers must be designed so that they prevent congestion while still
optimizing efficiency.
Admission Policy - In the admission policy, a mechanism is used to
prevent congestion before it occurs. Switches in a flow should first check
the resource requirement of a network flow before transmitting it further.
If there is a chance of congestion, or congestion already exists in the
network, the router should refuse to establish a virtual-circuit connection
to prevent further congestion.
Q9. (b) Explain the implementation of token bucket traffic
shaper with the help of a diagram ?
Answer : - The token bucket algorithm is based on an analogy of a fixed
capacity bucket into which tokens (normally representing a unit of bytes or a
single packet of predetermined size) are added at a fixed rate. When a packet
is ready to be sent, it is first checked whether the bucket contains sufficient
tokens. If sufficient tokens are present in the bucket, the appropriate
number of tokens [equivalent to the length of the packet in bytes] is
removed and the packet is passed for transmission. If there is a deficiency
of tokens, the packet has to wait in a queue.
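A minimal sketch of this shaper, assuming tokens are counted in bytes and that
a packet either passes or waits (the class and parameter names are my own):

```python
import time

class TokenBucket:
    """Fixed-capacity bucket; tokens (here, bytes) are added at a fixed rate."""
    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        """Return True and remove tokens if the packet may be sent now,
        otherwise return False (the packet must wait in the queue)."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len     # remove tokens equal to the packet length
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=1000, capacity_bytes=1500)
print(bucket.allow(1200))   # True: enough tokens
print(bucket.allow(1200))   # False: deficiency of tokens, packet waits
```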
Q10. Explain with the help of an example and a diagram how
the congestion control algorithm (slow start) works at the
transport layer.
Answer : - TCP slow start is an algorithm which balances the speed of a
network connection. Slow start gradually increases the amount of data
transmitted until it finds the network’s maximum carrying capacity.
Once a limit has been determined, slow start’s job is done. Other congestion
control algorithms take over to maintain the speed of the connection.
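As a toy example, the sketch below prints how the congestion window might
grow: it doubles every round-trip time during slow start and then, once an
assumed threshold (ssthresh) is reached, grows by one segment per RTT.

```python
def slow_start_trace(ssthresh_segments, rounds):
    """Illustrative cwnd growth: double per RTT until ssthresh is reached,
    then grow by one segment per RTT (congestion avoidance)."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh_segments else cwnd + 1
    return trace

# Hypothetical ssthresh of 16 segments: 1, 2, 4, 8, 16, 17, 18, ...
print(slow_start_trace(ssthresh_segments=16, rounds=8))
```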
Q11. (a) Define digital signature and explain its benefits.
Answer : - A digital signature is an electronic signature that can be used to
authenticate the identity of the sender of a message or the signer of a
document, and to ensure that the original content of the message or document
that has been sent is unchanged. Digital signatures are easily transportable,
cannot be imitated by someone else, and can be automatically time-stamped.
A digital signature can be used with any kind of message, whether it is
encrypted or plaintext.
The Digital Signatures require a key pair called the Public and Private Keys.
Just as physical keys are used for locking and unlocking, in cryptography, the
equivalent functions are encryption and decryption. The private key is kept
confidential with the owner usually on a secure media like crypto smart card
or crypto token. The public key is shared with everyone. Information
encrypted by a private key can only be decrypted using the corresponding
public key.
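A small sketch of signing and verifying with an RSA key pair, assuming the
third-party Python `cryptography` package is installed (the message and the
padding choices are illustrative, not part of the original answer):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The private key stays with the owner; the public key is shared with everyone.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"original content of the document"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# verify() raises InvalidSignature if the message was altered after signing.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```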
Digital Signature Versus Handwritten Signatures
A handwritten signature on a paper document offers little protection
against forgery or tampering. This type of document security is a liability
for your company, and it is unnecessary. With advanced electronic signature
software (a type of electronic signature called a “Digital Signature”), you
can safeguard your documents with a high level of document security and
evidence.
Each signature is protected with a tamper-evident seal, which alerts
you if any part of the document is changed after signing. Signed
documents also come with a highly detailed log of events of the
document’s lifecycle. Using this evidence, you can see when each
person signed the document, which signers downloaded a copy of the
finished document, and much more.
A public key infrastructure (PKI) is a system for the creation, storage, and
distribution of digital certificates which are used to verify that a particular
public key belongs to a certain entity. The PKI creates digital certificates
which map public keys to entities, securely stores these certificates in a
central repository and revokes them if needed.
A PKI consists of a Certification Authority (CA) that issues digital
certificates, a Registration Authority (RA) that verifies the identity of
entities requesting certificates, a central repository in which the
certificates are stored, and a certificate management system that handles
issuance and revocation.
Service Access - When the user needs to access a service, the user requests
the Ticket-Granting Service (TGS) from the Kerberos server based on the TGT
obtained in the first phase, providing the name of the service to be accessed.
TGS checks the TGT and information about the service to be accessed. After
the information passes the check, TGS returns a Service-Granting Ticket (SGT)
token to the user. The user requests the component service based on the SGT
and related user authentication information. The component service decrypts
the SGT using symmetric-key cryptography and finishes user authentication. If
the user passes the authentication process, the user can successfully access
related resources of the service.