R2031043
Shashwat

001. Which of the following gives minimum probability of error B
A FSK B PSK
C ASK D QAM
002. Characteristics of matched filter are: 1) Matched filter is used to maximize signal to noise ratio even for non-Gaussian noise 2) It gives the output as signal energy in the absence of noise 3) They are used for signal detection. Which of these is/are correct C
A 1 and 2 B 1 only
C 1, 2 and 3 D 2 and 3
003. Baseband signal receiver consists of D
A Integrator, sample switch B Integrator, dump switch
C Integrator D Integrator, sample switch and dump switch
004. The two-sided power spectral density of white Gaussian noise is B
A η B η/2
C 2η D 4η
005. The impulse response of the filter is the ________ of the mirror image of the signal A
waveform
A Delayed version B Same version
C Delayed & Same version D No relation
006. Which of the following gives maximum probability of error C
A PSK B FSK
C ASK D QPSK
007. Which of the following statements is/are correct? S1: Even if the signal is entirely lost in the noise, the receiver cannot be wrong more than half the time on the average S2: The probability of error increases rapidly as Es/η decreases C
A S1 and S2 are correct B only S2 correct
C only S1 correct D Both S1 and S2 are wrong
008. Matched filter provides _____ signal to noise ratio A
A Maximum B Minimum
C Zero D One
009. A 1 Mbps BASK receiver detects waveforms S1(t) = A cos ω0t or S2(t) = 0 with a matched filter. If A = 1 mV, then the average bit error probability, assuming single-sided noise power spectral density N0 = 10⁻¹¹ W/Hz, is nearly D
A Q( ) B Q( )
C Q( ) D Q( )
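The Q-function arguments in the options did not survive extraction. As a rough check, a minimal sketch assuming the standard matched-filter results Pe = Q(√(A²T/4N0)) for on-off BASK, Q(√(A²T/2N0)) for orthogonal BFSK and Q(√(A²T/N0)) for antipodal BPSK (textbook conventions, not values recovered from this document; the same check applies to questions 028 and 036):

    from math import sqrt, erfc

    def Q(x):
        # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
        return 0.5 * erfc(x / sqrt(2))

    A, Rb, N0 = 1e-3, 1e6, 1e-11      # amplitude (V), bit rate (b/s), one-sided PSD (W/Hz)
    T = 1 / Rb                        # bit duration: 1 microsecond
    E = A ** 2 * T                    # A^2 * T = 1e-12
    for name, arg in [("BASK", sqrt(E / (4 * N0))),
                      ("BFSK", sqrt(E / (2 * N0))),
                      ("BPSK", sqrt(E / N0))]:
        print(name, "Pe = Q(%.3f) = %.3g" % (arg, Q(arg)))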
010. Which modulation scheme is also called the on-off keying method? C
A PSK B FSK
C ASK D QPSK
011. In baseband signal receiver, in a bit interval a sample value is taken at a particular time and if it is positive, the decision is taken in favor of B
A Bit 0 B Bit 1
C Bit 0 and Bit 1 D No decision taken
012. The detection method where the carrier's phase is given importance is called as A
A Coherent detection B Non coherent detection
C Coherent detection & Non coherent detection D Envelope detection
013. In binary encoded PCM baseband signal, bit 1 and bit 0 are represented by voltage levels of C
A +V, +V B -V, -V
C +V, -V D -V, +V
014. The optimum filter which gives maximum γ² has a transfer function D
A [not legible] B [not legible]
C [not legible] D [not legible]
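For reference, the optimum-filter transfer function usually keyed here (the standard Taub and Schilling result, not recovered from this document) is

    H(f) = \frac{K\left[S_1^{*}(f) - S_2^{*}(f)\right]}{S_n(f)}\, e^{-j 2\pi f T}

where Sn(f) is the noise power spectral density and K an arbitrary constant.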

015. White noise has ________ power spectral density A
A Constant B Variable
C Constant and variable D No relation
016. The average noise power of white noise is B
A Zero B Infinity
C 1 D 0.5
017. The probability of error comparison in ASK, PSK and FSK is given by A
A ASK > FSK > PSK B PSK > FSK > ASK
C FSK > ASK > PSK D FSK > PSK > ASK
018. In ASK system, the probability of error is given by B
A (1/2) erfc[(2Es/η)^(1/2)] B (1/2) erfc[(Es/4η)^(1/2)]
C (1/2) erfc[(Es/η)^(1/2)] D (1/2) erfc[(0.6Es/η)^(1/2)]
019. Which of the following statements are correct? i) In baseband signal receiver, in a bit interval, a sample value is taken at a particular time and if it is positive, the decision is taken in favor of bit 1 ii) In baseband signal receiver, in a bit interval, a sample value is taken at a particular time and if it is negative, the decision is taken in favor of bit 0 iii) In baseband signal receiver, in a bit interval, a sample value is taken at a particular time and if it is positive, the decision is taken in favor of bit 0 iv) In baseband signal receiver, in a bit interval, a sample value is taken at a particular time and if it is negative, the decision is taken in favor of bit 1 B
A Statement i) and iv) are correct B Statement i) and ii) are correct
C Statement ii) and iv) are correct D Statement ii) and iii) are correct
020. The non coherent FSK needs ________ Eb/N0 than coherent FSK. A
A 1 dB more B 1 dB less
C 3 dB more D 3 dB less
021. For coherent FSK system, the probability of error is given by D
A (1/2) erfc[(2Es/η)^(1/2)] B (1/2) erfc[(Es/η)^(1/2)]
C (1/2) erfc[(0.3Es/η)^(1/2)] D (1/2) erfc[(0.6Es/η)^(1/2)]
022. In coherent detection of signals, which of the following statements are correct? S1: Local carrier is generated S2: A carrier with the same frequency and phase as the transmitted carrier is generated S3: The carrier is in synchronization with the modulated carrier D
A S1, S2 are correct B S2, S3 are correct
C S1 is correct D S1, S2, S3 are correct
023. The term heterodyning refers to D
A Frequency conversion B Frequency mixing
C Amplitude conversion D Frequency conversion & mixing
024. In baseband signal receiver, in a bit interval a sample value is taken at a particular time and if it is negative, the decision is taken in favor of A
A Bit 0 B Bit 1
C Bit 0 and Bit 1 D No decision taken
025. In which modulation technique is the phase of the carrier signal changed by varying the sine and cosine inputs at a particular time B
A Frequency modulation B Phase shift keying
C Analog modulation D Pulse code modulation
026. Which of the following statements is/are correct? S1: The optimum filter gives the minimum probability of error S2: The optimum filter maximizes the ratio γ = [S01(T) - S02(T)]/σ0 D
A only S2 correct B only S1 correct
C S1 and S2 are wrong D S1 and S2 are correct
027. In which detection technique is phase synchronization between transmitter and receiver required A
A Coherent B Non coherent
C Envelope detector D Square law detector
028. A 1 Mbps BFSK receiver detects waveforms S1(t) = A cos ω1t or S2(t) = A cos ω2t with a matched filter. If A = 1 mV, then the average bit error probability, assuming single-sided noise power spectral density N0 = 10⁻¹¹ W/Hz, is nearly C
A Q( ) B Q( )
C Q( ) D Q( )
029. Which of the following digital modulations can be decoded non-coherently D
A QAM B APSK
C BPSK D BFSK
030. In PSK system, the probability of error is given by C
A (1/2) erfc[(2Es/η)^(1/2)] B (1/2) erfc[(Es/4η)^(1/2)]
C (1/2) erfc[(Es/η)^(1/2)] D (1/2) erfc[(0.6Es/η)^(1/2)]
031. In Binary Phase Shift Keying system, the binary symbols 1 and 0 are represented by carrier with phase shift of B
A 90° B 180°
C 135° D 0°
032. BPSK system modulates at the rate of A
A 1 bit/symbol B 2 bit/symbol
C 3 bit/symbol D 4 bit/symbol
033. Maximum signal to noise ratio of the matched filter is B
A E/N0 B 2E/N0
C E/2N0 D E/3N0
034. Which of the following has the least noise immunity A
A ASK B PSK
C FSK D QAM
035. Which of the following statements are correct? S1: The probability of error in ASK is high S2: The probability of error in FSK is moderate S3: The probability of error in PSK is low D
A S1, S2 are correct B S2, S3 are correct
C S1 is correct D S1, S2, S3 are correct
036. A 1 Mbps BPSK receiver detects waveforms S1(t) = A cos ω0t or S2(t) = -A cos ω0t with a matched filter. If A = 1 mV, then the average bit error probability, assuming single-sided noise power spectral density N0 = 10⁻¹¹ W/Hz, is nearly C
A Q(0.63) B Q(0.16)
C Q( ) D Q( )
037. Probability density function of Gaussian noise sample n0(T) is D
A [not legible] B [not legible]
C [not legible] D [not legible]
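For reference, the zero-mean Gaussian density normally keyed here (standard form, not recovered from this document) is

    f(n_0) = \frac{1}{\sqrt{2\pi\sigma_0^2}}\, e^{-n_0^2 / 2\sigma_0^2}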

038. Baseband signal receiver is also known as C


A Integrate receiver B Dump receiver
C Integrate and dump receiver D Reflex receiver
039. The signal to noise ratio of baseband signal receiver is A
A 2V²T/η B V²T/η
C V²T/2η D 3V²T/η
040. Schwarz inequality states that B
A [not legible] B [not legible]
C [not legible] D [not legible]
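For reference, the Schwarz inequality in the form used with matched filters (standard form, not recovered from this document) is

    \left[\int f(x)\,g(x)\,dx\right]^2 \le \int f^2(x)\,dx \cdot \int g^2(x)\,dx

with equality when f(x) = C g(x).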

041. When the input noise is white, the optimum filter is known as A
A Matched filter B Base band signal receiver
C Optimum filter D Integrate and dump receiver
042. The probability of error ________ rapidly as Es/η ________ D
A Increases, increases B Decreases, decreases
C Increases, decreases D Decreases, increases
043. Probability of error for baseband signal receiver is given by C
A (1/2) erfc[(2Es/η)^(1/2)] B (1/2) erfc[(Es/4η)^(1/2)]
C (1/2) erfc[(Es/η)^(1/2)] D (1/2) erfc[(0.6Es/η)^(1/2)]
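A quick numeric comparison of the three keyed expressions (a sketch; the operating point Es/η = 10 is an arbitrary assumption) confirms the ordering asked for in questions 017 and 035:

    from math import sqrt, erfc

    EsN = 10.0                                  # assumed Es/eta
    for name, x in [("ASK", EsN / 4), ("FSK", 0.6 * EsN), ("PSK", EsN)]:
        print(name, "Pe =", 0.5 * erfc(sqrt(x)))
    # Pe(ASK) > Pe(FSK) > Pe(PSK)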
044. The output of Baseband signal receiver is B
A Square voltage B Ramp voltage
C Pulse voltage D Random voltage
045. Which detection technique gives a lower bit error rate A
A Coherent B non coherent
C envelope detector D Square law detector
046. Probability of error depends on B
A Signal energy and signal wave shape B Signal energy and not on signal wave shape
C Signal wave shape and not on signal energy D Not on signal energy and not on signal wave shape
047. In ________ the frequency of the carrier signal is varied based on the information in a digital signal D
A ASK B QPSK
C PSK D FSK
048. Which of the following statements is/are correct? S1: The probability of error decreases rapidly as Es/η decreases S2: Maximum value of probability of error is 0.5 B
A S1 and S2 are correct B only S2 correct
C only S1 correct D Both S1 and S2 are wrong
049. For M equally likely messages, the average amount of information H is B
A H = log10 M B H = log2 M
C H = log10 M² D H = 2 log10 M
050. The coding efficiency is given by A
A 1-Redundancy B 1+Redundancy
C 1/Redundancy D 2/Redundancy
051. A source emits one of three possible messages m1, m2, m3 with the probabilities of 1/2, 1/4, 1/4. Calculate entropy. B
A 2.5 bits/symbol B 1.5 bits/symbol
C 1.75 bits/symbol D 2.25 bits/symbol
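A one-line check of this entropy (a sketch; the helper name is illustrative, and the same function also verifies question 069):

    from math import log2

    def entropy(probs):
        # H = sum of p * log2(1/p), in bits/symbol
        return sum(p * log2(1 / p) for p in probs)

    print(entropy([1/2, 1/4, 1/4]))          # 1.5  (question 051)
    print(entropy([1/2, 1/4, 1/8, 1/8]))     # 1.75 (question 069)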
052. If there are M = 2^N equally likely messages, then the amount of information carried by each message will be ________ bits B
A M B N
C N-1 D N-2
053. What is the maximum value of probability of error B
A 1 B 1/2
C 1/4 D 1/8
054. Which of the following gives moderate probability of error B
A PSK B FSK
C ASK D QPSK
055. As per Shannon's theorem, the condition for error free transmission is ________ (R: information rate, C: channel capacity) B
A R greater than C B R less than or equal to C
C R much greater than C D R much less than C
056. As the bandwidth approaches infinity, the channel capacity becomes C
A Infinite B Zero
C 1.44 S/η D One
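The keyed value follows from letting B → ∞ in C = B log2(1 + S/ηB), which tends to (S/η) log2 e ≈ 1.44 S/η. A numeric sketch (the values of S and η are arbitrary assumptions):

    from math import log2

    S, eta = 1.0, 1e-3                       # assumed signal power and noise PSD
    for B in [1e2, 1e4, 1e6]:
        print(B, B * log2(1 + S / (eta * B)))
    # approaches 1.44 * S/eta = 1443 bits/s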
057. Which of the following is incorrect B
A I(x; y) = H(x) +H(y)- H(x, y) B I(x; y) = H(x) - H(y/x)
C I(x; y) = H(x) - H(x/y) D I(x; y) = H(y) - H(y/x)
058. What is the relationship between amount of information and probability of an event A
A Inversely proportional B Directly proportional
C Square relation D No relation
059. A given source will have maximum entropy if the messages produced are D
A Mutually exclusive B Statistically independent
C Two in number D Equiprobable
060. Which of the following is incorrect D
A Mutual information of a channel is symmetric B Mutual information is always positive
C Mutual information of noise free channel is H(X) or H(Y) D Mutual information is always negative
061. The efficiency of Huffman code is linearly proportional to C
A Average length of code B Maximum length of code
C Average entropy D Received bits
062. The condition for mutual information of noise free channel is C
A H(X) = 0 B H(Y) = 0
C H(X/Y) = 0 D H(X,Y) = 0
063. A source emits one of four possible messages m1, m2, m3, m4 with the probabilities of 1/2, 1/4, 1/8, 1/8. The information content of message m1 is A
A 1 bit B 2 bits
C 3 bits D 4 bits
064. Which of the following is not a unit of information C
A Bit B Decit
C Hz D Nat
065. The channel capacity of a discrete memoryless channel with a bandwidth of 2 MHz and SNR of 31 is B
A 20 Mbps B 10 Mbps
C 30 Mbps D 60 Mbps
066. The channel capacity of a discrete memoryless channel with a bandwidth of 5 MHz and SNR of 15 is A
A 20 Mbps B 10 Mbps
C 30 Mbps D 60 Mbps
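Both capacities follow from C = B log2(1 + SNR); a one-line check (sketch):

    from math import log2

    print(2e6 * log2(1 + 31))    # 10 Mbps (question 065)
    print(5e6 * log2(1 + 15))    # 20 Mbps (question 066)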
067. Which of the following is incorrect C
A H(x, y) = H(x/y) + H(y) B H(y/x) = H(x, y) - H(x)
C I(x, y) = H(x) - H(y/x) D I(x, y) = H(y) - H(y/x)
068. A source emits one of four possible messages m1, m2, m3, m4 with the probabilities of 1/2, 1/4, 1/8, 1/8. The information content of message m3 is C
A 1 bit B 2 bits
C 3 bits D 4 bits
069. A source emits one of four possible messages m1, m2, m3, m4 with the probabilities of 1/2, 1/4, 1/8, 1/8. Calculate entropy. C
A 2.5 bits/symbol B 1.5 bits/symbol
C 1.75 bits/symbol D 2.25 bits/symbol
070. The relation between information rate (R), entropy (H) and symbol rate (r) is A
A R = r*H B R = r/H
C R = r+H D R = r-H
071. The average information per individual message is known as C
A Encoding B Information Rate
C Entropy D Decoding
072. Channel capacity for noise-free channel is D
A C= max [H(X) - H(X/Y) ] B C= max [H(Y) - H(Y/X) ]
C C= max I(X;Y) D C= max H(X)
073. The units of entropy when the logarithm base is 10, 2, and e are, respectively D
A dats, bits, and bans B bits, bytes, and Hartley
C bytes, dits, and nat D bans, bits, and nat
074. Consider a discrete random variable X with possible outcomes xi, i = 1, 2, ....., n. The self information of the event X = xi is defined as (where in xi, i represents a subscript) C
A I(xi) = log(p(xi)) B I(xi) = log(p(xi))/p(xi)
C I(xi) = -log(p(xi)) D I(xi) = log(p(xi)) - p(xi)
075. An event has two possible outcomes with probability p1 = 1/2 and p2 = 1/64. The rate of information with 16 outcomes per second is A
A 38/4 bits/sec B 38/64 bits/sec
C 38/2 bits/sec D 38/32 bits/sec
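The keyed value follows from R = r·H (assuming the p1 = 1/2 reading above): H = (1/2) log2 2 + (1/64) log2 64 = 32/64 + 6/64 = 38/64 bits, so R = 16 × 38/64 = 38/4 bits/sec.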
076. What is mutual information for the channel with H(X) = 1.571 bit/message, H(X/Y) = 0.612 bit/message B
A 1.959 bit/message B 0.959 bit/message
C 1.462 bit/message D 0.689 bit/message
077. Entropy gives B
A Rate of information B Measure of uncertainty
C Amount of information D Probability of message
078. The ideal communication channel is defined for a system which has D
A Finite capacity B Bandwidth = 0
C Signal to Noise ratio = 0 D Infinite capacity
079. The entropy for a fair coin toss is exactly C
A 3 bits B 5 bits
C 1 bit D 2 bits
080. For which value(s) of p is the binary entropy function H(p) maximized? B
A 0 B 0.5
C 1 D 2
081. A discrete source emits one of 5 symbols once every millisecond with probabilities 1/2, 1/4, 1/8, 1/16, 1/16 respectively. What is the information rate A
A 9400 bits/sec B 18800 bits/sec
C 19400 bits/sec D 16800 bits/sec
082. The relation between entropy and mutual information is A
A I(X;Y) = H(X) - H(X/Y) B I(X;Y) = H(X/Y) - H(Y/X)
C I(X;Y) = H(X) - H(Y) D I(X;Y) = H(Y) - H(X)
083. The information I contained in a message with probability of occurrence P is given by (k is constant) A
A I = k log2(1/P) B I = k log2 P
C I = k log2(1/2P) D I = k log2(1/P²)
084. 1 nat is equal to C
A 3.32 bits B 2.32 bits
C 1.44 bits D 3.44 bits
085. A source produces 26 symbols with equal probabilities. What is the average information produced by the source D
A Less than 4 bits/symbol B 6 bits/symbol
C 7 bits/symbol D Between 4 bits/symbol and 6 bits/symbol
086. Assertion (A): Entropy of a binary source is maximum if the probabilities of occurrence of both events are equal. Reason (R): The average amount of information per source symbol is called entropy for a memoryless source B
A Both A and R are individually true and R is the correct explanation of A B Both A and R are individually true but R is not the correct explanation of A
C A is true but R is false D A is false but R is true
087. Discrete source S1 has 4 equiprobable symbols while discrete source S2 has 16 equiprobable symbols. When the entropy of these two sources is compared, entropy of B
A S1 is greater than S2 B S1 is less than S2
C S1 is equal to S2 D Depends on rate of symbols/second
088. Which of the following statements is true? B
A Redundancy is minimum in Shannon-Fano coding compared to Huffman coding B Redundancy is minimum in Huffman coding compared to Shannon-Fano coding
C Shannon-Fano coding is known as optimum code D Lempel-Ziv coding requires probabilities
089. In Shannon-Fano coding B
A the messages are first written in the order of increasing probability B the messages are first written in the order of decreasing probability
C lowest two probabilities are combined D lowest three probabilities are combined
090. The mutual information I(X,Y) = H(X) - H(X/Y) between two random variables X and Y satisfies D
A I(X,Y) > 0 B I(X,Y) ≤ 0
C I(X,Y) ≥ 0, equality holds when X and Y are uncorrelated D I(X,Y) ≥ 0, equality holds when X and Y are independent
091. The average information associated with an extremely unlikely message is zero. What C
is the average information associated with an extremely likely message
A Zero B Infinity
C Depends on total number of messages D Depends on speed of transmission of the message
092. What is mutual information for the channel with H(Y) = 1.49 bit/message, H(Y/X) = 0.53 bit/message C
A 1.96 bit/message B 1 bit/message
C 0.96 bit/message D 0.689 bit/message
093. Hamming distance between 101 and 110 is B
A 1 B 2
C 3 D 4
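A small helper verifies this and questions 123-124 (a sketch):

    def hamming(a: str, b: str) -> int:
        # count positions where two equal-length binary words differ
        return sum(x != y for x, y in zip(a, b))

    print(hamming("101", "110"))                    # 2 (question 093)
    print(hamming("10110", "01011"))                # 4 (question 123)
    print(hamming("1001011011", "0110110010"))      # 7 (question 124)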
094. A discrete memoryless source X has two symbols x1 and x2 with p(x1) = 0.9, p(x2) = 0.1. Symbols x1 and x2 are encoded as 0 and 1 respectively. Find the efficiency (%). B
A 41.2 B 46.9
C 53.1 D 58.8
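A check of the keyed value (a sketch; assumes the one-bit-per-symbol code read above, which reproduces the keyed answer):

    from math import log2

    p = [0.9, 0.1]
    H = sum(q * log2(1 / q) for q in p)    # source entropy, about 0.469 bits/symbol
    L = 1.0                                # average code length: 1 bit/symbol
    print(100 * H / L)                     # about 46.9 (percent)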
095. The technique that may be used to increase average information per bit is A
A Shannon-Fano algorithm B ASK
C FSK D Digital modulation techniques
096. What is mutual information for the channel with H(X) = 1.96 bit/message, H(X/Y) = 1 bit/message B
A 1.96 bit/message B 0.96 bit/message
C 1 bit/message D 0.689 bit/message
097. The cyclic codes are designed using A
A Shift registers with feedback B Shift registers without feedback
C Flipflops D Resistors
098. The relation between n, q and k in linear block codes is given as ________ A
A n = k + q B n = k - q
C k = n/q D q = k - n
099. A ( 6,3 ) linear block code has a code rate of C
A 1.75 B 1.5
C 2 D 2.5
100. Consider a linear systematic block code (6,3), where n, r and dmin are D
A 6,2,3 B 4,3,1
C 4,3,2 D 6,3,4
101. The relationship between message polynomial M(p) , generator polynomial G(p)and D
codeword polynomial X(p) is
A X(p)= M(p) + G(p) B X(p)= M(p) - G(p)
C X(p)= M(p) / G(p) D X(p)= M(p) * G(p)
102. Cyclic codes exhibit the following two fundamental properties: ________ B
A Cyclic property and time shifting property B Cyclic property and linearity property
C Cyclic property and frequency shifting property D Cyclic property and convolution property
103. The relationship between syndrome vector (S) and error vector (E) is A
A S = EHᵀ B S = Hᵀ/E
C S = Hᵀ + E D S = Hᵀ - E
104. A ( 7, 4 ) linear block code has a code rate of C
A 7 B 4
C 1.75 D 0.571
105. Syndrome is calculated by B
A Hᵀ/r B rHᵀ
C rH D r/H
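A minimal sketch of S = rHᵀ over GF(2); the (7,4) Hamming parity-check matrix below is an illustrative assumption, not taken from this document:

    import numpy as np

    H = np.array([[1, 1, 0, 1, 1, 0, 0],    # H = [P^T | I3] for one (7,4) Hamming code
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    r = np.array([1, 0, 1, 1, 0, 1, 0])     # a valid codeword for this H
    print(r @ H.T % 2)                       # [0 0 0]: zero syndrome, no error
    r[4] ^= 1                                # flip one received bit
    print(r @ H.T % 2)                       # nonzero syndrome flags the error (question 114)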
106. The _____of errors is more difficult than the ______ B
A detection; correction B correction; detection
C creation; correction D creation; detection
107. The order of ________ matrix is (n-k) × n C
A Generator B Parity
C Parity check matrix D Transpose of Parity check matrix
108. The order of the generator matrix is A
A k × n B (k-n) × k
C (n-k) × k D n × k
109. ________ codes are special linear block codes with one extra property. If a codeword is rotated, the result is another codeword B
A Convolution B Cyclic
C Non-linear D Hamming
110. The minimum distance of a code dictionary is 6. The code is capable of C
A Three-error correction B Three-error correction plus four-error detection
C Two-error correction plus three-error detection D Five-error correction plus six-error detection
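A quick check of the dmin = 6 case, using the standard bounds 2t + 1 ≤ dmin for correction and t + e + 1 ≤ dmin for simultaneous detection (a sketch):

    dmin = 6
    t = (dmin - 1) // 2      # correctable errors: 2
    e = dmin - 1 - t         # simultaneously detectable errors: 3
    print(t, e)              # 2 3, matching option C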
111. The code rate of an (n, k) code is defined as B
A n/k B k/n
C (n-k)/n D (n-k)/k
112. A (7, 4) linear cyclic code has a generator g(x) = 1+ x+ x2 with message bits (1101). A
Then the valid codeword is
A [ 0101101] B [ 1001101]
C [ 0100101] D [ 1100101]
113. The rate of a block code is the ratio of B
A Block length to message length B Message length to block length
C Message weight to block length D Message weight to message length
114. The received code contains error if the syndrome vector is B
A Zero B Non zero
C Infinity D Zero or Infinity
115. The measure of the amount of redundancy is given by D
A Code size B Minimum distance
C Code weight D Code rate
116. The number of k-bit shifts over which a single information bit influences the encoder output is given by B
A Code rate B Constraint length
C Code length D Code weight
117. For decoding in convolution coding, in a code tree A
A Diverge upward when a bit is 0 and diverge downward when the bit is 1 B Diverge downward when a bit is 0 and diverge upward when the bit is 1
C Diverge left when a bit is 0 and diverge right when the bit is 1 D Diverge right when a bit is 0 and diverge left when the bit is 1
118. The generator polynomial of a (7, 4) cyclic code is g(x) = x³ + x + 1. The code vector in nonsystematic form for the message 1010 will be C
A 0111011 B 1100010
C 1001110 D 0111001
119. The generator polynomial of a (7, 4) cyclic code is g(x) = x³ + x + 1. The code vector in nonsystematic form for the message 0110 will be B
A 1100010 B 0111010
C 1001110 D 0111001
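A sketch verifying the two nonsystematic codewords c(x) = m(x)·g(x) over GF(2); the most-significant-power-first bit ordering is an assumption that reproduces the keyed answers:

    def gf2_mul(a: str, b: str) -> str:
        # polynomial product over GF(2); bit strings, MSB = highest power
        out = [0] * (len(a) + len(b) - 1)
        for i, x in enumerate(map(int, a)):
            for j, y in enumerate(map(int, b)):
                out[i + j] ^= x & y
        return "".join(map(str, out))

    g = "1011"                      # g(x) = x^3 + x + 1
    print(gf2_mul("1010", g))       # 1001110 (question 118)
    print(gf2_mul("0110", g))       # 0111010 (question 119)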
120. Parity check bit coding is used for B
A Error correction B Error detection
C Error correction and detection D Data compression
121. For a (7, 4) block code, 7 is the total number of bits and 4 is the number of A
A Information bits B Redundant bits
C Sum of redundant bits and information bits D Error bits
122. The code in convolution coding is generated using C
A AND logic B OR logic
C EX-OR logic D NOT logic
123. The Hamming distance between the code words 10110 and 01011 is D
A 1 B 2
C 3 D 4
124. The Hamming distance between the code words 1001011011 and 0110110010 is D
A 4 B 5
C 6 D 7
125. The redundancy of an (n, k) code is defined as B
A n/k B k/n
C (n-k)/n D (n-k)/k
126. Weight of a code is defined as A
A Number of non-zero elements in code vector B Number of zero elements in code vector
C Sum of non-zero elements and zero elements in code vector D Difference of non-zero elements and zero elements in code vector
127. Which of the following is correct C
A Coding reduces the noise in the signal B Coding deliberately introduces redundancy into messages
C Coding increases the information rate D Coding increases the channel bandwidth
128. What is surviving path in Viterbi algorithm B
A Path of decoded signal with maximum metric B Path of decoded signal with minimum metric
C Path of decoded signal with medium metric D Path of decoded signal with zero metric
129. Which of the following is true for systematic code B
A Check bits appear first and then message bits appear next B Message bits appear first and then check bits appear next
C Message bits and check bits appear randomly D Only message bits appear
130. For the given code vector X= 01110101. What is weight of the code C
A 3 B 4
C 5 D 8
131. In decoding of cyclic code, which among the following is also regarded as 'Syndrome Polynomial'? D
A Generator Polynomial B Received code word Polynomial
C Quotient Polynomial D Remainder Polynomial
132. For m=4, what is the block length of the BCH code B
A 13 B 15
C 14 D 16
133. The total number of non zero bits in code is known as D
A Code length B Code height
C Code efficiency D Code weight
134. In Viterbi algorithm, ________ decoder is used to decode the received data D
A Instruction decoder B Address decoder
C Binary decoder D Trellis decoder
135. Hamming code is used to C
A Detect errors B Correct errors
C Detect and correct errors D Data compression
136. Viterbi algorithm applies the A
A maximum likelihood principle B minimum likelihood principle
C maximum entropy principle D minimum entropy principle
137. Which of the following statements is false? D
A In linear block codes, code words are produced on a block-by-block basis B Linear block codes require buffer storage
C Convolutional codes do not require buffer storage D Convolutional codes require buffer storage
138. Viterbi algorithm is B
A Convolutional encoding algorithm B Convolutional decoding algorithm
C Source coding algorithm D Source decoding algorithm
139. The Hamming distance between equal code words is C
A 1 B 2
C 0 D n
140. According to linearity property, the ________ of two code words in a cyclic code is also C
a valid code word
A Difference B Product
C Sum D Division
141. Which of the following is not a way to represent convolution code? C
A State diagram B Trellis diagram
C Linear matrix D Tree diagram
142. In Viterbi's algorithm, the selected paths are regarded as A
A Survivors B Defenders
C Destroyers D Carriers
143. Consider the following codes: 1. Hamming code 2. Huffman code 3. Shannon-Fano code 4. Convolution code. Which of these are source codes? B
A 1 and 2 only B 2 and 3 only
C 3 and 4 only D 1,2,3,4
