UNIT 2: DIGITAL MODULATION TECHNIQUES

Digital modulation provides more information capacity, higher data security, quicker system availability, and better quality communication. Hence, digital modulation techniques are in greater demand because of their capacity to convey larger amounts of data than analog techniques. There are many types of digital modulation techniques, and combinations of these techniques can also be used. In this chapter, we discuss the most prominent digital modulation techniques.

If the information signal is digital and the amplitude (V) of the carrier is varied proportional to the information signal, a digitally modulated signal called amplitude shift keying (ASK) is produced. If the frequency (f) is varied proportional to the information signal, frequency shift keying (FSK) is produced, and if the phase of the carrier (θ) is varied proportional to the information signal, phase shift keying (PSK) is produced. If both the amplitude and the phase are varied proportional to the information signal, quadrature amplitude modulation (QAM) results. ASK, FSK, PSK, and QAM are all forms of digital modulation. The general expression for the modulated carrier is

    v(t) = V sin(2πft + θ)

where varying V gives ASK, varying f gives FSK, and varying θ gives PSK.

[Figure: simplified block diagram of a digital modulation system]

Amplitude Shift Keying

The amplitude of the resultant output depends upon the input data: the output is either a zero level or a burst of carrier at the carrier frequency. Amplitude Shift Keying (ASK) is a type of amplitude modulation which represents the binary data as variations in the amplitude of a signal.

[Figure: input binary sequence and the corresponding ASK modulated output wave]

Any modulated signal has a high-frequency carrier. When a binary signal is ASK modulated, the output is zero for a LOW input and is the carrier for a HIGH input.

Mathematically, amplitude shift keying is

    v_ask(t) = [1 + v_m(t)] (A/2) cos(ω_c t)        (2.12)

where
    v_ask(t) = amplitude-shift-keyed wave
    v_m(t)   = digital information (modulating) signal (volts)
    A/2      = unmodulated carrier amplitude (volts)
    ω_c      = analog carrier radian frequency (radians per second, 2πf_c)

In Equation 2.12, the modulating signal v_m(t) is a normalized binary waveform, where +1 V = logic 1 and -1 V = logic 0. Therefore, for a logic 1 input, v_m(t) = +1 V, and Equation 2.12 reduces to

    v_ask(t) = [1 + 1] (A/2) cos(ω_c t) = A cos(ω_c t)

and for a logic 0 input, v_m(t) = -1 V, and Equation 2.12 reduces to

    v_ask(t) = [1 - 1] (A/2) cos(ω_c t) = 0

Thus, the modulated wave v_ask(t) is either A cos(ω_c t) or 0. Hence, the carrier is either "on" or "off," which is why amplitude shift keying is sometimes referred to as on-off keying (OOK).

It can be seen that for every change in the input binary data stream there is one change in the ASK waveform, and the time of one bit (t_b) equals the time of one analog signaling element (t_s). With N = 1,

    B = f_b / 1 = f_b        baud = f_b / 1 = f_b
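As a small illustration of Equation (2.12), the sketch below generates an ASK/OOK waveform for a short bit sequence. The bit rate, carrier frequency, and sample rate used here are illustrative assumptions, not values from the text.

```python
# A minimal ASK (on-off keying) modulator sketch following Equation (2.12).
import numpy as np

def ask_modulate(bits, fc=10e3, fb=1e3, fs=100e3, amplitude=1.0):
    """Return the ASK/OOK waveform for a sequence of bits (0s and 1s)."""
    samples_per_bit = int(fs / fb)
    t = np.arange(len(bits) * samples_per_bit) / fs
    # v_m(t): +1 for logic 1, -1 for logic 0 (normalized binary waveform)
    vm = np.repeat(2 * np.array(bits) - 1, samples_per_bit)
    # Equation (2.12): v_ask(t) = [1 + v_m(t)] * (A/2) * cos(w_c t)
    return (1 + vm) * (amplitude / 2) * np.cos(2 * np.pi * fc * t)

if __name__ == "__main__":
    wave = ask_modulate([1, 0, 1, 1, 0])
    print(wave.shape)  # (500,): carrier bursts during 1s, silence during 0s
```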
Example: Determine the baud and minimum bandwidth necessary to pass a 10 kbps binary signal using amplitude shift keying.

Solution: For ASK, N = 1, and the baud and minimum bandwidth are determined from Equations 2.11 and 2.10, respectively:

    B = 10,000 / 1 = 10,000 Hz
    baud = 10,000 / 1 = 10,000

The use of amplitude-modulated analog carriers to transport digital information is a relatively low-quality, low-cost type of digital modulation and is therefore seldom used except for very low-speed telemetry circuits.

ASK TRANSMITTER:

[Figure: ASK transmitter — the modulating signal m(t) and the carrier c(t) are applied to a mixer (product modulator), producing the ASK modulated wave s_ask(t)]

The input binary sequence is applied to the product modulator. The product modulator amplitude modulates the sinusoidal carrier: it passes the carrier when the input bit is '1' and blocks the carrier when the input bit is '0'.

COHERENT ASK DETECTOR:

FREQUENCY SHIFT KEYING

The frequency of the output signal is either high or low, depending upon the input data applied. Frequency Shift Keying (FSK) is the digital modulation technique in which the frequency of the carrier signal varies according to the discrete digital changes. FSK is a scheme of frequency modulation.

[Figure: input binary sequence and the corresponding FSK modulated output wave at frequencies f1 and f2]

The output of an FSK modulated wave is high in frequency for a binary HIGH input and low in frequency for a binary LOW input. The binary 1s and 0s are called mark and space frequencies.

FSK is a form of constant-amplitude angle modulation similar to standard frequency modulation (FM), except that the modulating signal is a binary signal that varies between two discrete voltage levels rather than a continuously changing analog waveform. Consequently, FSK is sometimes called binary FSK (BFSK). The general expression for FSK is

    v_fsk(t) = V_c cos{2π[f_c + v_m(t) Δf]t}        (2.13)

where
    v_fsk(t) = binary FSK waveform
    V_c      = peak analog carrier amplitude (volts)
    f_c      = analog carrier center frequency (hertz)
    Δf       = peak change (shift) in the analog carrier frequency (hertz)
    v_m(t)   = binary input (modulating) signal (volts)

From Equation 2.13, it can be seen that the peak shift in the carrier frequency (Δf) is proportional to the amplitude of the binary input signal v_m(t), and the direction of the shift is determined by the polarity. The modulating signal is a normalized binary waveform where a logic 1 = +1 V and a logic 0 = -1 V. Thus, for a logic 1 input, v_m(t) = +1, and Equation 2.13 can be rewritten as

    v_fsk(t) = V_c cos[2π(f_c + Δf)t]

For a logic 0 input, v_m(t) = -1, and

    v_fsk(t) = V_c cos[2π(f_c - Δf)t]

With binary FSK, the carrier center frequency (f_c) is shifted (deviated) up and down in the frequency domain by the binary input signal, as shown in Figure 2-3.

[Figure 2-3: FSK in the frequency domain]

As the binary input signal changes from a logic 0 to a logic 1 and vice versa, the output frequency shifts between two frequencies: a mark, or logic 1, frequency (f_m) and a space, or logic 0, frequency (f_s). The mark and space frequencies are separated from the carrier frequency by the peak frequency deviation (Δf) and from each other by 2Δf. Frequency deviation is illustrated in Figure 2-3 and expressed mathematically as

    Δf = |f_m - f_s| / 2        (2.14)

where
    Δf = frequency deviation (hertz)
    |f_m - f_s| = absolute difference between the mark and space frequencies (hertz)
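The sketch below applies Equation (2.13) directly: each bit selects the mark (f_c + Δf) or space (f_c - Δf) frequency. The numeric values are illustrative assumptions, and, like Equation (2.13) written in this form, the sketch does not enforce phase continuity at bit boundaries.

```python
# A minimal binary FSK modulator sketch following Equation (2.13).
import numpy as np

def bfsk_modulate(bits, fc=10e3, df=2e3, fb=1e3, fs=100e3, vc=1.0):
    """v_fsk(t) = Vc*cos(2*pi*[fc + vm(t)*df]*t), with vm = +1 (mark) or -1 (space)."""
    samples_per_bit = int(fs / fb)
    t = np.arange(len(bits) * samples_per_bit) / fs
    vm = np.repeat(2 * np.array(bits) - 1, samples_per_bit)
    return vc * np.cos(2 * np.pi * (fc + vm * df) * t)

if __name__ == "__main__":
    wave = bfsk_modulate([1, 0, 1])
    print(wave.shape)  # (300,): 12 kHz during 1s, 8 kHz during 0s
```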
Figure 2-4a shows in the time domain the binary input to an FSK modulator and the corresponding FSK output. When the binary input changes from a logic 1 to a logic 0 and vice versa, the FSK output frequency shifts from a mark (f_m) to a space (f_s) frequency and vice versa. In Figure 2-4a, the mark frequency is the higher frequency (f_c + Δf) and the space frequency is the lower frequency (f_c - Δf), although this relationship could be just the opposite. Figure 2-4b shows the truth table for a binary FSK modulator. The truth table shows the input and output possibilities for a given digital modulation scheme.

[Figure 2-4: FSK in the time domain: (a) waveform; (b) truth table]

FSK Bit Rate, Baud, and Bandwidth

In Figure 2-4a, it can be seen that the time of one bit (t_b) is the same as the time the FSK output is a mark or space frequency (t_s). Thus, the bit time equals the time of an FSK signaling element, and the bit rate equals the baud. The baud for binary FSK can also be determined by substituting N = 1 in Equation 2.11:

    baud = f_b / 1 = f_b

The minimum bandwidth for FSK is given as

    B = |(f_s + f_b) - (f_m - f_b)| = |f_s - f_m| + 2f_b

and since |f_s - f_m| equals 2Δf, the minimum bandwidth can be approximated as

    B = 2(Δf + f_b)        (2.15)

where
    B   = minimum Nyquist bandwidth (hertz)
    Δf  = frequency deviation, |f_m - f_s|/2 (hertz)
    f_b = input bit rate (bps)

Example 2-2: Determine (a) the peak frequency deviation, (b) the minimum bandwidth, and (c) the baud for a binary FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an input bit rate of 2 kbps.

Solution
a. The peak frequency deviation is determined from Equation 2.14:
    Δf = |49 kHz - 51 kHz| / 2 = 1 kHz
b. The minimum bandwidth is determined from Equation 2.15:
    B = 2(1000 + 2000) = 6 kHz
c. For FSK, N = 1, and the baud is determined from Equation 2.11 as
    baud = 2000 / 1 = 2000

FSK TRANSMITTER:

Figure 2-6 shows a simplified binary FSK modulator, which is very similar to a conventional FM modulator and is very often a voltage-controlled oscillator (VCO). The center frequency (f_c) is chosen such that it falls halfway between the mark and space frequencies. A logic 1 input shifts the VCO output to the mark frequency, and a logic 0 input shifts the VCO output to the space frequency. Consequently, as the binary input signal changes back and forth between logic 1 and logic 0 conditions, the VCO output shifts or deviates back and forth between the mark and space frequencies.

[Figure 2-6: FSK modulator — binary input driving a VCO to produce the FSK output]

A VCO-FSK modulator can be operated in the sweep mode, where the peak frequency deviation is simply the product of the binary input voltage and the deviation sensitivity of the VCO. With the sweep mode of modulation, the frequency deviation is expressed mathematically as

    Δf = v_m(t) k_l        (2-19)

where
    v_m(t) = peak binary modulating-signal voltage (volts)
    k_l    = deviation sensitivity (hertz per volt)
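Before moving to the receiver, here is a quick numeric check of Example 2-2 using the formulas (2.14), (2.15), and (2.11) above; nothing beyond the example's own numbers is assumed.

```python
# Numeric check of Example 2-2 (mark 49 kHz, space 51 kHz, 2 kbps input).
f_mark, f_space, f_b = 49e3, 51e3, 2e3   # hertz, hertz, bits per second

delta_f = abs(f_mark - f_space) / 2      # Eq. 2.14 -> 1 kHz
bandwidth = 2 * (delta_f + f_b)          # Eq. 2.15 -> 6 kHz
baud = f_b / 1                           # Eq. 2.11 with N = 1 -> 2000

print(delta_f, bandwidth, baud)          # 1000.0 6000.0 2000.0
```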
FSK Receiver

FSK demodulation is quite simple with a circuit such as the one shown in Figure 2-7.

[Figure 2-7: Noncoherent FSK demodulator — the FSK input drives a power splitter feeding two bandpass filters (mark and space); each filter output goes to an envelope detector, and a comparator selects the larger output]

The FSK input signal is simultaneously applied to the inputs of both bandpass filters (BPFs) through a power splitter. The respective filter passes only the mark or only the space frequency on to its respective envelope detector. The envelope detectors, in turn, indicate the total power in each passband, and the comparator responds to the larger of the two powers. This type of FSK detection is referred to as noncoherent detection.

Figure 2-8 shows the block diagram for a coherent FSK receiver. The incoming FSK signal is multiplied by a recovered carrier signal that has the exact same frequency and phase as the transmitter reference. However, the two transmitted frequencies (the mark and space frequencies) are not generally continuous; it is not practical to reproduce a local reference that is coherent with both of them. Consequently, coherent FSK detection is seldom used.

[Figure 2-8: Coherent FSK demodulator — power splitter, two multipliers fed by locally recovered carriers, and a comparator producing the data output]

PHASE SHIFT KEYING:

The phase of the output signal is shifted depending upon the input. PSK is mainly of two types, BPSK and QPSK, according to the number of phase shifts. A third variant, DPSK, changes the phase according to the previous value.

Phase Shift Keying (PSK) is the digital modulation technique in which the phase of the carrier signal is changed by varying the sine and cosine inputs at a particular time. The PSK technique is widely used for wireless LANs, biometric and contactless operations, and RFID and Bluetooth communications.

PSK is of two types, depending upon the phases the signal is shifted between. They are:

Binary Phase Shift Keying (BPSK)

This is also called 2-phase PSK or Phase Reversal Keying. In this technique, the sine wave carrier takes two phase reversals, 0° and 180°. BPSK is basically a DSB-SC (Double Sideband Suppressed Carrier) modulation scheme, with the message being the digital information.

[Figure: input binary sequence and the corresponding BPSK modulated output wave]

Binary Phase-Shift Keying

The simplest form of PSK is binary phase-shift keying (BPSK), where N = 1 and M = 2. Therefore, with BPSK, two phases (2^1 = 2) are possible for the carrier. One phase represents a logic 1, and the other phase represents a logic 0. As the input digital signal changes state (i.e., from a 1 to a 0 or from a 0 to a 1), the phase of the output carrier shifts between two angles that are separated by 180°. Hence, other names for BPSK are phase reversal keying (PRK) and biphase modulation. BPSK is a form of square-wave modulation of a continuous wave (CW) signal.

[Figure 2-12: BPSK transmitter — binary input and reference carrier sin(ω_c t) applied to a balanced modulator, followed by a bandpass filter producing the modulated PSK output]

BPSK TRANSMITTER:

Figure 2-12 shows a simplified block diagram of a BPSK transmitter. The balanced modulator acts as a phase-reversing switch. Depending on the logic condition of the digital input, the carrier is transferred to the output either in phase or 180° out of phase with the reference carrier oscillator.

Figure 2-13 shows the schematic diagram of a balanced ring modulator. The balanced modulator has two inputs: a carrier that is in phase with the reference oscillator and the binary digital data. For the balanced modulator to operate properly, the digital input voltage must be much greater than the peak carrier voltage. This ensures that the digital input controls the on/off state of diodes D1 to D4.
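Behaviorally, the balanced modulator amounts to multiplying the carrier by +1 or -1, which the short sketch below models directly. The carrier, bit-rate, and sample-rate values are illustrative assumptions.

```python
# A minimal BPSK modulator sketch: the phase-reversing switch is modeled as
# multiplication of the carrier by +1 (logic 1) or -1 (logic 0).
import numpy as np

def bpsk_modulate(bits, fc=10e3, fb=1e3, fs=100e3):
    """Return +/- sin(w_c t): 0 deg phase for logic 1, 180 deg for logic 0."""
    samples_per_bit = int(fs / fb)
    t = np.arange(len(bits) * samples_per_bit) / fs
    polarity = np.repeat(2 * np.array(bits) - 1, samples_per_bit)  # +1 or -1
    return polarity * np.sin(2 * np.pi * fc * t)

if __name__ == "__main__":
    print(bpsk_modulate([1, 0, 1])[:4])
```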
If the binary input is a logic 1 (positive voltage), diodes D1 and D2 are forward biased and on, while diodes D3 and D4 are reverse biased and off (Figure 2-13b). With the polarities shown, the carrier voltage is developed across transformer T2 in phase with the carrier voltage across T1. Consequently, the output signal is in phase with the reference oscillator.

If the binary input is a logic 0 (negative voltage), diodes D1 and D2 are reverse biased and off, while diodes D3 and D4 are forward biased and on (Figure 2-13c). As a result, the carrier voltage is developed across transformer T2 180° out of phase with the carrier voltage across T1.

[Figure 2-13: (a) Balanced ring modulator; (b) logic 1 input; (c) logic 0 input]

[Figure 2-14: BPSK modulator: (a) truth table; (b) phasor diagram; (c) constellation diagram]

BANDWIDTH CONSIDERATIONS OF BPSK:

In a BPSK modulator, the carrier input signal is multiplied by the binary data. If +1 V is assigned to a logic 1 and -1 V is assigned to a logic 0, the input carrier (sin ω_c t) is multiplied by either +1 or -1. The output signal is either +1 sin ω_c t or -1 sin ω_c t; the first represents a signal that is in phase with the reference oscillator, the latter a signal that is 180° out of phase with the reference oscillator. Each time the input logic condition changes, the output phase changes.

Mathematically, the output of a BPSK modulator is proportional to

    BPSK output = [sin(2πf_a t)] × [sin(2πf_c t)]        (2.20)

where
    f_a = maximum fundamental frequency of the binary input (hertz)
    f_c = reference carrier frequency (hertz)

Solving the trig identity for the product of two sine functions,

    BPSK output = 0.5 cos[2π(f_c - f_a)t] - 0.5 cos[2π(f_c + f_a)t]

Thus, the minimum double-sided Nyquist bandwidth (B) is

    B = (f_c + f_a) - (f_c - f_a) = 2f_a

and because f_a = f_b / 2, where f_b is the input bit rate,

    B = f_b

where B is the minimum double-sided Nyquist bandwidth.

Figure 2-15 shows the output phase-versus-time relationship for a BPSK waveform. A logic 1 input produces an analog output signal with a 0° phase angle, and a logic 0 input produces an analog output signal with a 180° phase angle. As the binary input shifts between a logic 1 and a logic 0 condition and vice versa, the phase of the BPSK waveform shifts between 0° and 180°, respectively. The time of one BPSK signaling element (t_s) is equal to the time of one information bit (t_b), which indicates that the bit rate equals the baud.

[Figure 2-15: Output phase-versus-time relationship for a BPSK modulator]

Example: For a BPSK modulator with a carrier frequency of 70 MHz and an input bit rate of 10 Mbps, determine the maximum and minimum upper and lower side frequencies, draw the output spectrum, determine the minimum Nyquist bandwidth, and calculate the baud.

Solution: Substituting into Equation 2.20 yields

    output = [sin(2πf_a t)] × [sin(2πf_c t)],  where f_a = f_b / 2 = 5 MHz
           = [sin 2π(5 MHz)t] × [sin 2π(70 MHz)t]
           = 0.5 cos[2π(70 MHz - 5 MHz)t] - 0.5 cos[2π(70 MHz + 5 MHz)t]
             (lower side frequency)            (upper side frequency)

Minimum lower side frequency (LSF): LSF = 70 MHz - 5 MHz = 65 MHz
Maximum upper side frequency (USF): USF = 70 MHz + 5 MHz = 75 MHz

Therefore, the output spectrum for the worst-case binary input conditions extends from 65 MHz to 75 MHz about the suppressed 70 MHz carrier. The minimum Nyquist bandwidth (B) is

    B = 75 MHz - 65 MHz = 10 MHz

and the baud = f_b, or 10 megabaud.
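A short numeric check of this example, using only the relations stated above (f_a = f_b/2, side frequencies f_c ± f_a, B = 2f_a = f_b):

```python
# Numeric check of the BPSK bandwidth example (70 MHz carrier, 10 Mbps input).
f_c = 70e6          # carrier frequency, Hz
f_b = 10e6          # input bit rate, bps
f_a = f_b / 2       # maximum fundamental frequency of an alternating 1/0 input

lsf = f_c - f_a             # minimum lower side frequency -> 65 MHz
usf = f_c + f_a             # maximum upper side frequency -> 75 MHz
bandwidth = usf - lsf       # minimum double-sided Nyquist bandwidth -> 10 MHz
baud = f_b / 1              # N = 1 for BPSK -> 10 megabaud

print(lsf / 1e6, usf / 1e6, bandwidth / 1e6, baud / 1e6)  # 65.0 75.0 10.0 10.0
```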
BPSK RECEIVER:

Figure 2-16 shows the block diagram of a BPSK receiver. The input signal may be +sin ω_c t or -sin ω_c t. The coherent carrier recovery circuit detects and regenerates a carrier signal that is both frequency and phase coherent with the original transmit carrier. The balanced modulator is a product detector; its output is the product of the two inputs (the BPSK signal and the recovered carrier). The low-pass filter (LPF) separates the recovered binary data from the complex demodulated signal.

[Figure 2-16: Block diagram of a BPSK receiver — BPF, balanced modulator fed by the recovered carrier, LPF, level converter, clock recovery, and data output]

Mathematically, the demodulation process is as follows. For a BPSK input signal of +sin ω_c t (logic 1), the output of the balanced modulator is

    output = (sin ω_c t)(sin ω_c t) = sin²ω_c t = 0.5(1 - cos 2ω_c t) = 0.5 - 0.5 cos 2ω_c t        (2.21)

The second term is filtered out, leaving

    output = +0.5 V = logic 1

It can be seen that the output of the balanced modulator contains a positive voltage (+[1/2] V) and a cosine wave at twice the carrier frequency (2ω_c). The LPF has a cutoff frequency much lower than 2ω_c and thus blocks the second harmonic of the carrier and passes only the positive constant component. A positive voltage represents a demodulated logic 1.

For a BPSK input signal of -sin ω_c t (logic 0), the output of the balanced modulator is

    output = (-sin ω_c t)(sin ω_c t) = -sin²ω_c t = -0.5(1 - cos 2ω_c t) = -0.5 + 0.5 cos 2ω_c t

The second term is filtered out, leaving

    output = -0.5 V = logic 0

The output of the balanced modulator contains a negative voltage (-[1/2] V) and a cosine wave at twice the carrier frequency (2ω_c). Again, the LPF blocks the second harmonic of the carrier and passes only the negative constant component. A negative voltage represents a demodulated logic 0.
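The product-detector-plus-LPF chain just described can be sketched in a few lines. Here the low-pass filter is idealized as an average over each bit (which removes the 2ω_c term exactly when the carrier completes an integer number of cycles per bit); the carrier, bit, and sample rates are illustrative assumptions.

```python
# A minimal coherent BPSK demodulation sketch: multiply by the recovered carrier,
# then low-pass filter by averaging over each bit interval.
import numpy as np

def bpsk_demodulate(rx, fc=10e3, fb=1e3, fs=100e3):
    samples_per_bit = int(fs / fb)
    t = np.arange(len(rx)) / fs
    carrier = np.sin(2 * np.pi * fc * t)          # recovered carrier, assumed phase coherent
    product = rx * carrier                         # product detector: +/-0.5 -/+ 0.5*cos(2 w_c t)
    per_bit = product.reshape(-1, samples_per_bit).mean(axis=1)  # idealized LPF
    return (per_bit > 0).astype(int)               # +0.5 -> logic 1, -0.5 -> logic 0

if __name__ == "__main__":
    samples_per_bit = 100
    t = np.arange(3 * samples_per_bit) / 100e3
    tx = np.repeat([1, -1, 1], samples_per_bit) * np.sin(2 * np.pi * 10e3 * t)
    print(bpsk_demodulate(tx))  # [1 0 1]
```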
QUADRATURE PHASE SHIFT KEYING (QPSK):

This is the phase shift keying technique in which the sine wave carrier takes four phase values, such as 0°, 90°, 180°, and 270°. If this kind of technique is extended further, PSK can be done with eight or sixteen values also, depending upon the requirement.

[Figure: QPSK waveform for a two-bit input, showing the modulated result for the four dibit values 00, 10, 01, and 11]

QPSK is a variation of BPSK, and it is also a DSB-SC (Double Sideband Suppressed Carrier) modulation scheme, which sends two bits of digital information at a time, called dibits. Instead of converting the digital bits into a single serial stream, it converts them into bit pairs. This halves the rate in each channel, which allows space for other users.

2-5-2-1 QPSK transmitter

A block diagram of a QPSK modulator is shown in Figure 2-17. Two bits (a dibit) are clocked into the bit splitter. After both bits have been serially inputted, they are simultaneously parallel outputted. The I bit modulates a carrier that is in phase with the reference oscillator (hence the name "I" for "in-phase" channel), and the Q bit modulates a carrier that is 90° out of phase. For a logic 1 = +1 V and a logic 0 = -1 V, two phases are possible at the output of the I balanced modulator (+sin ω_c t and -sin ω_c t), and two phases are possible at the output of the Q balanced modulator (+cos ω_c t and -cos ω_c t).

When the linear summer combines the two quadrature (90° out of phase) signals, there are four possible resultant phasors, given by these expressions: +sin ω_c t + cos ω_c t, +sin ω_c t - cos ω_c t, -sin ω_c t + cos ω_c t, and -sin ω_c t - cos ω_c t.

[Figure 2-17: QPSK modulator — bit splitter feeding I and Q balanced modulators (the Q carrier shifted 90°), a linear summer, and a bandpass filter]

Example: For the QPSK modulator shown in Figure 2-17, construct the truth table, phasor diagram, and constellation diagram.

Solution: For a binary data input of Q = 0 and I = 0, the two inputs to the I balanced modulator are -1 and sin ω_c t, and the two inputs to the Q balanced modulator are -1 and cos ω_c t. Consequently, the outputs are

    I balanced modulator = (-1)(sin ω_c t) = -1 sin ω_c t
    Q balanced modulator = (-1)(cos ω_c t) = -1 cos ω_c t

and the output of the linear summer is

    -1 cos ω_c t - 1 sin ω_c t = 1.414 sin(ω_c t - 135°)

For the remaining dibit codes (01, 10, and 11), the procedure is the same. The results are shown in Figure 2-18a.

[Figure 2-18: QPSK modulator: (a) truth table; (b) phasor diagram; (c) constellation diagram]

In Figures 2-18b and c, it can be seen that with QPSK each of the four possible output phasors has exactly the same amplitude. Therefore, the binary information must be encoded entirely in the phase of the output signal.

From Figure 2-18b, it can also be seen that the angular separation between any two adjacent phasors in QPSK is 90°. Therefore, a QPSK signal can undergo almost a +45° or -45° shift in phase during transmission and still retain the correct encoded information when demodulated at the receiver.

Figure 2-19 shows the output phase-versus-time relationship for a QPSK modulator.

[Figure 2-19: Output phase-versus-time relationship for a QPSK modulator]

2-5-2-2 Bandwidth considerations of QPSK

With QPSK, because the input data are divided into two channels, the bit rate in either the I or the Q channel is equal to one-half of the input data rate (f_b/2), and the highest fundamental frequency present at the input of either balanced modulator is one-half of that, or f_b/4.
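The worked example above maps the dibit (I, Q) = (0, 0) to -sin - cos = 1.414 sin(ω_c t - 135°). The sketch below reproduces that mapping for any dibit stream; the carrier, symbol, and sample rates are illustrative assumptions.

```python
# A minimal QPSK modulator sketch: the dibit (I, Q) selects +/-1 amplitudes on the
# sin and cos carriers, and the linear summer adds the two quadrature signals.
import numpy as np

def qpsk_modulate(bits, fc=10e3, f_sym=1e3, fs=100e3):
    """bits: even-length sequence of 0/1; consecutive pairs form (I, Q) dibits."""
    pairs = np.array(bits).reshape(-1, 2)
    i_lvl = 2 * pairs[:, 0] - 1          # logic 1 -> +1, logic 0 -> -1
    q_lvl = 2 * pairs[:, 1] - 1
    sps = int(fs / f_sym)                # samples per symbol (dibit)
    t = np.arange(len(pairs) * sps) / fs
    i_wave = np.repeat(i_lvl, sps) * np.sin(2 * np.pi * fc * t)
    q_wave = np.repeat(q_lvl, sps) * np.cos(2 * np.pi * fc * t)
    return i_wave + q_wave               # linear summer: one of the four phasors per symbol

if __name__ == "__main__":
    wave = qpsk_modulate([0, 0])         # dibit I=0, Q=0 -> -sin(w_c t) - cos(w_c t)
    print(wave[0])                       # -1.0 at t = 0 (only the -cos term is nonzero there)
```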
QPSK RECEIVER:

The block diagram of a QPSK receiver is shown in Figure 2-21. The power splitter directs the input QPSK signal to the I and Q product detectors and the carrier recovery circuit. The carrier recovery circuit reproduces the original transmit carrier oscillator signal. The recovered carrier must be frequency and phase coherent with the transmit reference carrier. The QPSK signal is demodulated in the I and Q product detectors, which generate the original I and Q data bits. The outputs of the product detectors are fed to the bit combining circuit, where they are converted from parallel I and Q data channels to a single binary output data stream.

[Figure 2-21: QPSK receiver — power splitter, I and Q product detectors, carrier recovery, and bit combining circuit]

The incoming QPSK signal may be any one of the four possible output phases shown in Figure 2-18. To illustrate the demodulation process, let the incoming QPSK signal be -sin ω_c t + cos ω_c t. Mathematically, the demodulation process is as follows.

The receive QPSK signal (-sin ω_c t + cos ω_c t) is one of the inputs to the I product detector. The other input is the recovered carrier (sin ω_c t). The output of the I product detector is

    I = (-sin ω_c t + cos ω_c t)(sin ω_c t)
      = -sin²ω_c t + (cos ω_c t)(sin ω_c t)
      = -1/2 (1 - cos 2ω_c t) + 1/2 sin 2ω_c t
      = -1/2 + 1/2 cos 2ω_c t + 1/2 sin 2ω_c t        (2.23)

The double-frequency terms are filtered out, leaving

    I = -1/2 V = logic 0

Again, the receive QPSK signal (-sin ω_c t + cos ω_c t) is one of the inputs to the Q product detector. The other input is the recovered carrier shifted 90° in phase (cos ω_c t). The output of the Q product detector is

    Q = (-sin ω_c t + cos ω_c t)(cos ω_c t)
      = cos²ω_c t - (sin ω_c t)(cos ω_c t)
      = 1/2 (1 + cos 2ω_c t) - 1/2 sin 2ω_c t
      = 1/2 + 1/2 cos 2ω_c t - 1/2 sin 2ω_c t

The double-frequency terms are filtered out, leaving

    Q = +1/2 V = logic 1

The demodulated I and Q bits (0 and 1, respectively) correspond to the constellation diagram and truth table for the QPSK modulator shown in Figure 2-18.

DIFFERENTIAL PHASE SHIFT KEYING (DPSK):

In DPSK (Differential Phase Shift Keying), the phase of the modulated signal is shifted relative to the previous signal element. No reference signal is considered here; the signal phase follows the high or low state of the previous element. The DPSK technique therefore does not need a reference oscillator.

[Figure: model DPSK waveform for an input data sequence]

It is seen from the figure that if the data bit is LOW, i.e., 0, the phase of the signal is not reversed but continues as it was. If the data bit is HIGH, i.e., 1, the phase of the signal is reversed, as with NRZI (invert on 1, a form of differential encoding). If we observe the waveform, the HIGH state appears as an "M" shape in the modulating signal and the LOW state as a "W" shape.

The word binary represents two bits. M simply represents a digit that corresponds to the number of conditions, levels, or combinations possible for a given number of binary variables. M-ary transmission is the type of digital modulation technique used for data transmission in which, instead of one bit, two or more bits are transmitted at a time. As a single signaling element is used for multiple-bit transmission, the channel bandwidth is reduced.

DBPSK TRANSMITTER:

Figure 2-37a shows a simplified block diagram of a differential binary phase-shift keying (DBPSK) transmitter. An incoming information bit is XNORed with the preceding bit prior to entering the BPSK modulator (balanced modulator). For the first data bit, there is no preceding bit with which to compare it; therefore, an initial reference bit is assumed. Figure 2-37b shows the relationship between the input data, the XNOR output data, and the phase at the output of the balanced modulator. If the initial reference bit is assumed to be a logic 1, the output from the XNOR circuit is simply the complement of that shown.

In Figure 2-37b, the first data bit is XNORed with the reference bit. If they are the same, the XNOR output is a logic 1; if they are different, the XNOR output is a logic 0. The balanced modulator operates the same as a conventional BPSK modulator: a logic 1 produces +sin ω_c t at the output, and a logic 0 produces -sin ω_c t at the output.

[Figure 2-37: DBPSK modulator: (a) block diagram; (b) timing diagram]

DBPSK RECEIVER:

Figure 2-38 shows the block diagram and timing sequence for a DBPSK receiver. The received signal is delayed by one bit time and then compared with the next signaling element in the balanced modulator. If they are the same, a logic 1 (+ voltage) is generated. If they are different, a logic 0 (- voltage) is generated. If the reference phase is incorrectly assumed, only the first demodulated bit is in error.
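At the bit level, the DBPSK transmitter and receiver just described reduce to XNOR encoding and delay-and-compare decoding. A minimal sketch, with the initial reference bit arbitrarily assumed to be 1 as in the text:

```python
# A bit-level DBPSK sketch: XNOR differential encoding at the transmitter,
# delay-and-compare detection at the receiver.

def dbpsk_encode(data, reference=1):
    """XNOR each incoming bit with the previously transmitted bit."""
    encoded, prev = [], reference
    for bit in data:
        prev = 1 if bit == prev else 0   # XNOR; a 1 maps to +sin(w_c t), a 0 to -sin(w_c t)
        encoded.append(prev)
    return encoded

def dbpsk_decode(encoded, reference=1):
    """Compare each received element with the one delayed by one bit time."""
    decoded, prev = [], reference
    for bit in encoded:
        decoded.append(1 if bit == prev else 0)  # same phase -> 1, different -> 0
        prev = bit
    return decoded

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0]
    print(dbpsk_decode(dbpsk_encode(data)) == data)  # True
```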
Differential encoding can be implemented with higher-than-binary digital modulation schemes, although the differential algorithms are much more complicated than for DBPSK. The primary advantage of DBPSK is the simplicity with which it can be implemented: with DBPSK, no carrier recovery circuit is needed. A disadvantage of DBPSK is that it requires between 1 dB and 3 dB more signal-to-noise ratio to achieve the same bit error rate as absolute PSK.

[Figure 2-38: DBPSK demodulator: (a) block diagram; (b) timing sequence]

COHERENT RECEPTION OF FSK:

The coherent demodulator for the coherent FSK signal falls in the general form of coherent demodulators described in Appendix B. The demodulator can be implemented with two correlators as shown in Figure 3.5, where the two reference signals are cos(2πf1 t) and cos(2πf2 t). They must be synchronized with the received signal. The receiver is optimum in the sense that it minimizes the error probability for equally likely binary signals. Even though the receiver is rigorously derived in Appendix B, some heuristic explanation here may help in understanding its operation. When s1(t) is transmitted, the upper correlator yields an output l1 with a positive signal component and a noise component; the lower correlator output l2, due to the signals' orthogonality, has only a noise component. Thus the output of the summer is most likely above zero, and the threshold detector will most likely produce a 1. When s2(t) is transmitted, the opposite happens to the two correlators, and the threshold detector will most likely produce a 0. However, because the noise amplitude ranges from -∞ to ∞, occasionally the noise will overpower the signal amplitude, and detection errors will then happen.

An alternative to Figure 3.5 is to use just one correlator with the reference signal cos(2πf1 t) - cos(2πf2 t) (Figure 3.6). The correlator in Figure 3.6 can be replaced by a matched filter that matches cos(2πf1 t) - cos(2πf2 t) (Figure 3.7). All implementations are equivalent in terms of error performance (see Appendix B).

Assuming an AWGN channel, the received signal is

    r(t) = s_i(t) + n(t),    i = 1, 2

where n(t) is additive white Gaussian noise with zero mean and a two-sided power spectral density N0/2. From (B.33), the bit error probability for any equally likely binary signals is

    P_b = Q( sqrt[ (E1 + E2 - 2ρ12) / (2N0) ] )

where N0/2 is the two-sided power spectral density of the additive white Gaussian noise and ρ12 is the correlation between the two signals. For Sunde's FSK signals, E1 = E2 = E_b and ρ12 = 0 (orthogonal); thus the error probability is

    P_b = Q( sqrt(E_b / N0) )

where E_b = A²T/2 is the average bit energy of the FSK signal. This P_b is plotted in Figure 3.8, where the P_b of noncoherently demodulated FSK, whose expression will be given shortly, is also plotted for comparison.

[Figure 3.5: Coherent FSK demodulator — two correlators with references cos(2πf1 t) and cos(2πf2 t), a summer, and a threshold detector sampled at t = kT]

[Figure 3.8: P_b of coherently and noncoherently demodulated FSK versus E_b/N0 (dB)]

NONCOHERENT DEMODULATION AND ERROR PERFORMANCE

Coherently generated FSK signals can be noncoherently demodulated to avoid the carrier recovery. Noncoherently generated FSK can only be noncoherently demodulated. We refer to both cases as noncoherent FSK. In both cases the demodulation problem becomes a problem of detecting signals with unknown phases. In Appendix B it is shown that the optimum receiver is a quadrature receiver.
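A minimal sketch of the quadrature receiver just mentioned: for each tone, correlate the received segment against both cosine and sine references and compare the squared sums, which do not depend on the unknown carrier phase. The tone frequencies, bit duration, and sample rate below are illustrative assumptions chosen so the tones are orthogonal over one bit.

```python
# A minimal noncoherent (quadrature) BFSK detector sketch.
import numpy as np

FS, T = 100e3, 1e-3
F1, F2 = 10e3, 12e3                      # orthogonal tones over one bit period

def energy(seg, f):
    t = np.arange(len(seg)) / FS
    i_corr = np.sum(seg * np.cos(2 * np.pi * f * t)) / FS   # in-phase correlator
    q_corr = np.sum(seg * np.sin(2 * np.pi * f * t)) / FS   # quadrature correlator
    return i_corr**2 + q_corr**2                             # phase-independent statistic

def noncoherent_fsk_detect(rx, spb):
    bits = []
    for k in range(len(rx) // spb):
        seg = rx[k * spb:(k + 1) * spb]
        bits.append(1 if energy(seg, F1) > energy(seg, F2) else 0)
    return bits

if __name__ == "__main__":
    spb = int(FS * T)
    t = np.arange(spb) / FS
    theta = 1.2                                              # unknown phase at the receiver
    tx = np.concatenate([np.cos(2 * np.pi * F1 * t + theta),
                         np.cos(2 * np.pi * F2 * t + theta)])
    print(noncoherent_fsk_detect(tx, spb))                   # [1, 0]
```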
It can be implemented using correlators or, equivalently, matched filters. Here we assume that the binary noncoherent FSK signals are equally likely and have equal energies. Under these assumptions, the demodulator using correlators is shown in Figure 3.9. Again, as in the coherent case, the optimality of the receiver has been rigorously proved (Appendix B); however, we can easily understand its operation by a heuristic argument as follows. The received signal (ignoring noise for the moment) with an unknown phase can be written as

    s_i(t, θ) = A cos(2πf_i t + θ),    i = 1, 2
              = A cos θ cos 2πf_i t - A sin θ sin 2πf_i t

The signal consists of an in-phase component A cos θ cos 2πf_i t and a quadrature component A sin θ sin 2πf_i t. Thus the signal is partially correlated with cos 2πf_i t and partially correlated with sin 2πf_i t. Therefore we use two correlators to collect the signal energy in these two parts. The outputs of the in-phase and quadrature correlators will be (AT/2) cos θ and (AT/2) sin θ, respectively. Depending on the value of the unknown phase θ, these two outputs could be anything in (-AT/2, AT/2). Fortunately, the squared sum of these two outputs is not dependent on the unknown phase; that is,

    [(AT/2) cos θ]² + [(AT/2) sin θ]² = (AT/2)²

This quantity is actually the mean value of the statistic l_i² when signal s_i(t) is transmitted and noise is taken into consideration. When s_i(t) is not transmitted, the mean value of l_i is 0. The comparator decides which signal was sent by comparing these l_i². The matched filter equivalent to Figure 3.9 is shown in Figure 3.10, which has the same error performance. For implementation simplicity we can replace the matched filters by bandpass filters centered at f1 and f2, respectively (Figure 3.11). However, if the bandpass filters are not matched to the FSK signals, degradation to various extents will result.

[Figure 3.11: Noncoherent FSK receiver — bandpass filters centered at f1 and f2 followed by envelope detectors, sampled at t = kT; the comparator chooses 1 if l1 > l2 and 0 if l2 > l1]

The bit error probability can be derived using the correlator demodulator (Appendix B). Here we further assume that the FSK signals are orthogonal; then from Appendix B the error probability is

    P_b = (1/2) exp(-E_b / 2N0)

PART 2: DATA TRANSMISSION

BASEBAND SIGNAL RECEIVER:

Consider that a binary encoded signal consists of a time sequence of voltage levels +V or -V. If there is a guard interval between the bits, the signal forms a sequence of positive and negative pulses. In either case there is no particular interest in preserving the waveform of the signal after reception; we are interested only in knowing, within each bit interval, whether the transmitted voltage was +V or -V. With noise present, the received signal and noise together will yield sample values generally different from ±V. In this case, what deduction shall we make from the sample value concerning the transmitted bit?

Suppose that the noise is Gaussian and therefore the noise voltage has a probability density which is entirely symmetrical with respect to zero volts. Then the probability that the noise has increased the sample value is the same as the probability that the noise has decreased the sample value. It then seems entirely reasonable that we can do no better than to assume that if the sample value is positive the transmitted level was +V, and if the sample value is negative the transmitted level was -V.
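A tiny sketch of the raw decision rule just described: sample the received ±V level plus Gaussian noise once per bit and decide by the sign. The voltage and noise values are illustrative assumptions.

```python
# Sample-and-decide-by-sign on a noisy +/-V baseband signal.
import numpy as np

rng = np.random.default_rng(1)
V, sigma, n_bits = 1.0, 0.6, 10
tx_levels = rng.choice([+V, -V], size=n_bits)       # transmitted levels
samples = tx_levels + sigma * rng.standard_normal(n_bits)
decisions = np.where(samples > 0, +V, -V)           # positive sample -> +V, negative -> -V
print(np.count_nonzero(decisions != tx_levels), "errors in", n_bits, "bits")
```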
It is, of course, possible that at the sampling time the noise voltage may be of magnitude larger than V and of a polarity opposite to the polarity assigned to the transmitted bit. In this case an error will be made, as indicated in Fig. 11.1-1. Here the transmitted bit is represented by the voltage +V, which is sustained over an interval T from t1 to t2. Noise has been superimposed on the level +V so that the voltage v represents the received signal plus noise. If now the sampling should happen to take place at a time t = t1 + Δt, an error will have been made.

We can reduce the probability of error by processing the received signal plus noise in such a manner that we are then able to find a sample time where the sample voltage due to the signal is emphasized relative to the sample voltage due to the noise. Such a processor (receiver) is shown in Fig. 11.1-2. The signal input during a bit interval is indicated. As a matter of convenience we have set t = 0 at the beginning of the interval. The waveform of the signal s(t) before t = 0 and after t = T has not been indicated since, as will appear, the operation of the receiver during each bit interval is independent of the waveform during past and future bit intervals.

The signal s(t), with added white Gaussian noise n(t) of power spectral density η/2, is presented to an integrator. At time t = 0+ we require that capacitor C be uncharged. Such a discharged condition may be ensured by a brief closing of switch SW1 at time t = 0-, thus relieving C of any charge it may have acquired during the previous interval. The sample is taken at the output of the integrator by closing the sampling switch SW2. This sample is taken at the end of the bit interval, at t = T. The signal processing indicated in Fig. 11.1-2 is described by the phrase integrate and dump, the term dump referring to the abrupt discharge of the capacitor after each sampling.

Peak Signal to RMS Noise Output Voltage Ratio

The integrator yields an output which is the integral of its input multiplied by 1/RC. Using τ = RC, we have

    v_o(T) = (1/τ) ∫_0^T [s(t) + n(t)] dt

The sample voltage due to the signal is

    s_o(T) = (1/τ) ∫_0^T V dt = VT/τ        (11.1-2)

The sample voltage due to the noise is

    n_o(T) = (1/τ) ∫_0^T n(t) dt        (11.1-3)

This noise-sampling voltage n_o(T) is a Gaussian random variable, in contrast with n(t), which is a Gaussian random process. The variance of n_o(T) was found in Sec. 7.9 [see Eq. (7.9-17)] to be

    σ_o² = ηT / 2τ²        (11.1-4)

and, as noted in Sec. 7.3, n_o(T) has a Gaussian probability density.

The output of the integrator, before the sampling switch, is v_o(t) = s_o(t) + n_o(t). As shown in Fig. 11.1-3a, the signal output s_o(t) is a ramp, in each bit interval, of duration T. At the end of the interval the ramp attains the voltage s_o(T), which is +VT/τ or -VT/τ, depending on whether the bit is a 1 or a 0. At the end of each interval the switch SW1 in Fig. 11.1-2 closes momentarily to discharge the capacitor, so that s_o(t) drops to zero. The noise n_o(t), shown in Fig. 11.1-3b, also starts each interval with n_o(0) = 0 and has the random value n_o(T) at the end of each interval. The sampling switch SW2 closes briefly just before the closing of SW1 and hence reads the voltage

    v_o(T) = s_o(T) + n_o(T)

[Figure 11.1-3: (a) The signal output and (b) the noise output of the integrator of Fig. 11.1-2]
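The integrate-and-dump operation can be sketched numerically as below: integrate the received voltage over each bit interval, read the sample at t = T, then reset. The sample rate, bit duration, and noise level are illustrative assumptions (the discrete noise here is only an approximation of band-limited white noise).

```python
# A minimal integrate-and-dump receiver sketch.
import numpy as np

def integrate_and_dump(rx, samples_per_bit, fs, tau=1.0):
    """Return v_o(T) = (1/tau) * integral of the input over each bit interval."""
    dt = 1.0 / fs
    segments = rx.reshape(-1, samples_per_bit)
    return segments.sum(axis=1) * dt / tau          # one sample per bit, then the capacitor is dumped

if __name__ == "__main__":
    fs, T, V = 100e3, 1e-3, 1.0
    spb = int(fs * T)
    rng = np.random.default_rng(0)
    levels = np.array([+V, -V, +V])                  # transmitted bits: 1, 0, 1
    tx = np.repeat(levels, spb)
    rx = tx + 0.5 * rng.standard_normal(tx.size)     # add noise
    samples = integrate_and_dump(rx, spb, fs, tau=T) # with tau = T, noiseless samples would be +/-V
    print(samples, (samples > 0).astype(int))        # decisions: [1 0 1] with high probability
```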
We would naturally like the output signal voltage to be as large as possible in comparison with the noise voltage. Hence a figure of merit of interest is the signal-to-noise ratio

    [s_o(T)]² / σ_o² = (V²T²/τ²) / (ηT/2τ²) = 2V²T / η        (11.1-6)

This result is calculated from Eqs. (11.1-2) and (11.1-4). Note that the signal-to-noise ratio increases with increasing bit duration T and that it depends on V²T, which is the normalized energy of the bit signal. Therefore, a bit represented by a narrow, high-amplitude signal and one represented by a wide, low-amplitude signal are equally effective, provided V²T is kept constant.

It is instructive to note that the integrator filters the signal and the noise such that the signal voltage increases linearly with time, while the standard deviation (rms value) of the noise increases more slowly, as √T. Thus, the integrator enhances the signal relative to the noise, and this enhancement increases with time, as shown in Eq. (11.1-6).

PROBABILITY OF ERROR

Since the function of a receiver of a data transmission is to distinguish the bit 1 from the bit 0 in the presence of noise, a most important characteristic is the probability that an error will be made in such a determination. We now calculate this error probability P_e for the integrate-and-dump receiver of Fig. 11.1-2.

We have seen that the probability density of the noise sample n_o(T) is Gaussian and hence appears as in Fig. 11.2-1. The density is therefore given by

    f[n_o(T)] = (1 / √(2πσ_o²)) e^(-n_o²(T) / 2σ_o²)        (11.2-1)

where σ_o², the variance, is given by Eq. (11.1-4). Suppose, then, that during some bit interval the input signal voltage is held at, say, -V. Then, at the sample time, the signal sample voltage is s_o(T) = -VT/τ, while the noise sample is n_o(T). If n_o(T) is positive and larger in magnitude than VT/τ, the total sample voltage v_o(T) = s_o(T) + n_o(T) will be positive. Such a positive sample voltage will result in an error, since, as noted earlier, we have instructed the receiver to interpret such a positive sample voltage to mean that the signal voltage was +V during the bit interval. The probability of such a misinterpretation, that is, the probability that n_o(T) > VT/τ, is given by the area of the shaded region in Fig. 11.2-1. The probability of error is, using Eq. (11.2-1),

    P_e = ∫_{VT/τ}^{∞} f[n_o(T)] dn_o(T) = ∫_{VT/τ}^{∞} (1 / √(2πσ_o²)) e^(-n_o²(T) / 2σ_o²) dn_o(T)        (11.2-2)

Defining x ≡ n_o(T) / √2 σ_o, and using Eq. (11.1-4), Eq. (11.2-2) may be rewritten as

    P_e = (1/√π) ∫_{√(V²T/η)}^{∞} e^(-x²) dx = (1/2) erfc(√(V²T/η)) = (1/2) erfc(√(E_s/η))        (11.2-3)

in which E_s = V²T is the signal energy of a bit.

If the signal voltage were held instead at +V during some bit interval, then it is clear from the symmetry of the situation that the probability of error would again be given by P_e in Eq. (11.2-3). Hence Eq. (11.2-3) gives P_e quite generally.

[Figure 11.2-1: The Gaussian probability density of the noise sample n_o(T)]
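Equation (11.2-3) is easy to evaluate numerically; the few illustrative E_s/η values below show how quickly P_e falls as the bit energy grows relative to the noise density.

```python
# Numeric evaluation of Eq. (11.2-3): Pe = 0.5*erfc(sqrt(Es/eta)).
from math import erfc, sqrt

def pe_integrate_and_dump(es_over_eta):
    return 0.5 * erfc(sqrt(es_over_eta))

for ratio_db in (0, 4, 8, 12):
    ratio = 10 ** (ratio_db / 10)
    print(f"Es/eta = {ratio_db:2d} dB -> Pe = {pe_integrate_and_dump(ratio):.3e}")
```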
However, before returning specifically to the integrator receiver. We assume that the received signal is a binary waveform. One binary digit is represented by a signal waveforms; (1) which persists for time T, while the4 other bit is represented by the waveform S2(t) which also lasts for an interval T. For example, in the transmission at baseband, as shown in fig I1.1-2 Si(t)=+V; for other modulation systems, different waveforms are transmitted. for example for PSK signaling , $:(t)=Acoswot and S2(t)=-Acoswot;while for FSK, Si()=Acos(worant As shown in Fig. 11.3- the input, which is s,(¢) or s,(f), is corrupted by the addition of noise n(1). The noise is gaussian and has a spectral density G{f). [In most cases of interest the ne white, so that G(s) = 7/2. However, we shall assume the more general poss since it introduces no complication to do so] The signal and noise are filtered and then sampled at the end of each bit interval. The output sample is either (7) = s..(T)+ mT) or rT) = s.3(T) +n{7T). We assume that immediately after each sample, every energy-storing element in the filter has been discharged. We have already considered in Sec. 2.22, the matter of signal determination im the presence of noise. Thus, we note that in the absence of noise the ouiput sample would be nF) — #4(T) or s,:(T). When noise is present we have shown that to minimize the probability of error one should assume that s,(¢) has been transmitted if u,(7) is closer to s,,(T) than to s,.(7). Similarly, we assume s,(1) has been transmitted if 0,(7) is claser to s,,(T). The decision boundary is there- fore midway between s,,(T) and s,,(T). For example, in the baseband system of Fig. 111-2, where s,.(T} = VT/e and s,.(T) = —V Tix, the decision boundary is v fT) = 0. in general, we shall take the decision boundary to be in) ay The probability of error for this general case may be deduced as an extension of the considerations used in the baseband case. Suppose that 8,,(T) > 3,37) and that 5,(1) was transmitted. If, at the sampling time, the noise n,(7) is positive and larger in magnitude than the voltage difference H[s,4(T) + aya(TH — sot T) an error will have been made. That is, an error [we decide that s\(1) is transmitted rather than s,(7)] will result if sal) = 5.07) mene (32, 33 Hence probability of error is e (3) = Jaingnan te LBL gins firey z te Far 134 Arar iro nating If we make the substitution x = n,(T)//20,, Eq. (113-3) becomes 2 J sp ~12 ot ae (113-46) 2 Se Ni -nsararten A(T) = 557 AT) — sof) (134 a 2 Note that for the case s,,(7) = VI/1 and s,.(7) = —WVT/t, and, using Ea. ( 4), Eq. (113-46) reduces to Eq, (11.2-3) as expected. The complementary error function is a monotonically decreasing function of its argument. (Sec Fig. 11.2-2.) Hence, as is to be anticipated, P, decreases as the difference 5,3(7) — 547) becomes larger and as the rms noise voltage a, becomes smailer, The optimum filter, then, is the filter which maximizes the ratio y — Sel) = Sel) We now calculate the transfer function MS) of this optimum filter, As a matter of mathematical convenience we chall actually maximize 7? rather than y. Calculation of the Optimum-Filter Transfer Function H(/) The fundamental requirement we make of a binary encoded data receiver is that it distinguishes the voltages 5,(¢) + mr) and s,(r)-+ a(t), We have seen that the ability of the receiver to do so depends on how large a particular receiver can make y. It is important to note that 7 is proportional not to s,(1) nor to s(t). 
but Father to the difference between them. For example, in the baseband system we represented the signals by voltage levels + and —V. But clearly, if our only imterest was in distinguishing levels, we would do just ac well to use +2 volts and 0 volt, or +8 volts and +6 volts, etc. (The +¥ and —V levels, however. have the advantage of requiring the least average power to be transmitted.) Hence, while s;(1} or s(t) is the received signal, the signal which is to be compared with the noise, ne. the signal which ts relevant in all our error-probability calculations, is the difference signal PA) = 5,60) — 5360) 113-6) ‘Thus, for the purpose of calculating the minimum error probability, we shall assume that the input signal to the optimum filter is p{e). The corresponding, output signal of the Gilter is then Pat) = 84) — Seal) (113-7) 34 We shall let P(f) and P,(f) be the Fourier transforms, respectively, of ptr) and re) If HCP) is the transfer function of the filter, PLS) = HSPs) i and p(T) = C PLS eT f= e HOPPE df (1.39) 3-8) The input noise to the optimum filter is mfr). The output noise is ,{¢ which has a power spectral density G,(f) and is rclated to the power spectral density of the input noise GLA) by GA) = | HOO PG.AD, (113-10) 5), we find that the normalized output noise Using Parseval's theorem (Eq. | power, ie., the noise variance a2, is a= [ GN) are f° LAN P GN dF (31D From Eqs. (11.3-9) and (11.3-11) we now find that G 2eTF ap]? Lt MUP TF af (113-12) FPO GAD) af Equation (11,3-12) is unaltered by the inclusien or deletion of the absolute value sign in the numerator since the quantity within the magnitude sign p{T) is a positive real number. The sign has been included, however, in order to allow further development of the equation through the use of the Schwarz inequality. The Schwarz inequality states that given arbitrary complex functions X() and Y(f) of a common variable f, then £ xcrvirrar| =f" cron ar {7 IYonP ar ans The equal sign applies when XU = REVS) a where K is an arbitrary constant and ¥*%(f) is the complex conjugate of ¥{/’). ‘We now apply the Schwarz inequality to Eq. (11-3-12) by making the ident sation X(N = MGA HUY (11.3-15) 1 and YU) = == Pifiel™s 11.3-16) VGA § ‘Using Eqs, (11.3-15) and (11.3-16) and using the Schwarz inequality, Eq. (113-13), we may rewrite Eq. (11.3-12) as PHT) _ Uf22 XW AAP C Lyon? af t 0 |XLOP af n 35 or. using Eq. (11.3-16), AD < [" ronP ara £ Lipo)? Ma a3 Guy canna) The ratio pi(T)/o3 will attain its maximum value when the equal sign in Eq. (11.3-18) may be employed as is the case when X() = KY¥*(1. We then find from Eqs. (11.3-15) and (113-16) that the optimum filter which yields such a maximum ratio p(T a2 has a transfer function - PP) . ert AUP) K GAP) 11,3-19), ‘Correspondingly, the maximum ratio is, from Eq. (11.3-18), - PUP ~ on & ae n succeeding sections we shall have occasion te apply Eqs. (11.13-19) and (11.13-20) to a number of cases of interest. 1L4 WHITE NOISE: HE MATCHED FILTER An optimum filter which yields a maximum ratio pi(T)e2 is called a marched filter when the input noise is white. In this case Gf) = 1/2, and Eq. (113-19) hecomes ny = «ED ones Ray ‘The impulsive response of this filter, ie, the response of the filter to a unit strength impulse applied at ¢ — 0, is MO = FTE = FE ("pete erreten ag (114-24) = 2” prinetenen ay (11.4-26) A physically realizable filter will have an impulse response which is real, complex. Therefore f(r) = h*(9. Replacing the right-hand member of Ea. 
( by its complex conjugate, an operation which leaves the equation unaltered, we have 2K f° mi 7 f PU fle =O af (114-34) 2K kr ( 7 wt ) (114-30) Finally, since p() = 5,0) ~ (0 Lee Eq, (113-6), we have m2 for 9s) (144 36 The significance of these results for the matched filter may be more readily appreciated by applying them to a specific example. Consider then, Fig. 114-1 that 5)(¢) is @ triangular waveform of duration T, while s2(t) as -1h, is of identical form except of reversed polarity. Then pli) is as shown in Fig, 11.4-1c and pl—#) appears in Fig. 11.4-1d. The waveform /{—1) is the waveform p{¢) rotated around the axis — 0. Finally, the waveform p(T — #) called for as the impulse response of the filter in Eq. (11.4-3b) is this rotated waveform p—1) translated in the positive ¢ direction by amount T. This last translation ensures that h(/) = for 1 -< Qas is required for a causal filter. In general, the impulsive response of the matched filter consists of p(t) rotated about t=0 and then delayed long enough(ice, a time T) to make the filter realizable. We may note in passing, that any additional delay that a filter might introduce would in no way interfere with the performance of the filter ,for both signal and noise would be delayed by the same amount, and at the sampling time (which would need similarity to be delayed)the ratio of signal to noise would remain unaltered, @ Ce) Flgure 41 The sana (2 8) 200 ad 0) — 50 Ud) ose aout The (0 (9) The waveform nd) ranalated the righ by amcunt 7. 11S PROBABILITY OF ERROR OF THE MATCHED FILTER The probability of error which results when employing a matched filter, may be found by evaluating the maximum signal-to-noise ratio (p'(TVa21,, . given by Fg. (113-20), With GCP) = n/2. Eq. (11.3-20) becomes ape =| inane a (say lan Jaw 37 From parseval's theorem we have { irnear= [" Pad | rode (115.2) In the last integral in Eq, (11,5-2), the limits take account of the fact that p(t) per- sists for only a time T. With pl) = sx — sale), and using Eq. (tt-s-2) we may write Ea. (115-1) as [2] 2! bs.) syoP at 115-30) 2 (T. 7 fr aL, oes 0) a ~2 | s(ts,(f) dt (115-36) 2 aia + Ba 26) (sx Here F,, and F,; are the energies, respectively, in s;(e) and s,(c), while E,,2 is the energy due to the correlation between #,() and »,(4). Suppose that we have selected «,(r), and let s(t) have an energy E,,. Then it can be shown that if s(t) is to have the same energy, the optimum choice of s(t) = ~3() assy The choice is optimum in that it yields a maximum output signal p3(T) for a given signal energy. Letting s(t) = —s,(t), we find and Eq, (11.5-3e) becomes (115-5) Rewriting Eq (11.3-48) using p(T) = 5,,(T) ~sT), we have P. = Gove [20] = Fone [HO 2roJ 2 " (115-6) we find that the minimum errr probability ‘Combining Eq. (11.5-6) with (11.5 (POmin COFFesponding to a maximum value of p3(T)/e? (r= Sof [ 2807] 3” ii o«(2) use We note that Eq. (11.5-8) establishes more generally the idea that the error probability depends only on the signal energy and not on the signal waveshape. Previously we had established this point only for signals which had constant voltage levels. 38 We note also that fq. (115-8) BIYeS (Ime OF the case of the matched titer and when 5)(f) — —s2(t). In See. 11.2 we Considered the case when s(t) — + and s.(t)}—= —V and the filter employed was an integrator. There we found LEg. (11.2-3)) thar the result for P, was identical with (PJmin given in Eq. (115-8). 
This agreement Ieads us to suspect that for an input signal Where s,(c) = + V and s,() — — V, the integrator is the matched filter. Such is indeed the case, For when we have ostsT (115-92) OstsT (1s.9m) the impulse response of the matched filter is, from Eq, (11.448), w= tro s{T—0) (11.5410) The quantity (7 — 2) —s(T —1 is a pulse of amplitude 2V extending from £= Oto r= Tand may be rewritten, with u(r) the unit step. wa arian — 4t—T)) ans ‘The constant factor of proportionality 4KV/y in the expression for h(2) (that is, the gain of the filter) has no effect on the probability of error since the gain affects signal and noise alike. We may therefore sclect the cocMficient K in Eq. (11.5-11) so that 4K Vim ‘Then the inverse transform of h(t), that is, the transfer fune— tion of the filter, becomes, with s the Laplace transform variabl HG) as 12) The first term in Eq. (1.5.12) represents an integration beginning at ¢ while the second term represents an integration with reversed polarity begin att =. The overall response of the matched filter is an integration from + to f= T and a zero response thereafter. In a physical system, as already described. we achieve the effect of a zero response after ©= T by sampling at ¢ = T. so that so far as the determination of one bit is concerned we ignore the response after r= T. COHERENT RECEPTION :CORRELATION: We discuss now an alternative type of receiving system which, as we shall see, is entical in performance with the matched filter receiver. Again, as shown in Fig. 116-1, the input is a binary data waveform #,(¢) of az{¢) corrupted by nuise n(). The bit length is T, The received signal plus noise fe) is multiplied by a locally generated waveform s,(1) — s(t). The output of the multiplier is passed through an integrator whose output is sampled at £— T. As before, immediaiely after each sampling, at the beginning of each new bit interval, all energy-storing lements in the integrator are discharged. This type of receiver is called a correla- "or, since we are correlating the received signal and noise with the wavelorm 50) ‘The output signal and noise of the correlator shown in Fig. 11.6-1 are sdos(t) ~ 340] de (11.6-1) 39 wine! [etsy san (11.6-2) Where si(t) is either si(t) or s2(1),and wthere mis the constant of the integrator(i.e.,the integrator output is I/x times the integral of its input).we now compare these outputs with the matched filter outputs. tnt | “ ving A frm HE coer Fig:11.6-1Coherent system of signal reception IF h(t) is the impulsive response of the matched filter ,then the output of the matched filter vo(t) can be found using the convolution integral. we have tuo = [°scane ayaa [vane ~ ayaa is (11.6-3) The limits on the integral have been charged to 0 and T since we are interested in the filter response to a bit which extends only over that interval. Using Eq.(11.4-4) which gives h(t) for the matched filter, we have Wt) = = Es(T =) —sT - 01 Mt) =P ET = 9 SAT — (116-4) so that Ae = [s(T 04) —s(T 14 aD (11.65) sub 11.6-5 in 11.6-3 = 2 nAedT — 14a) — se — 1-4 ayaa (11.66) Since vf) = ) + nf) and 0,0) = sit) + m0, setting t= T yields 2K it) [sats sa a te (11.6-7) where (2s eq to (2) oF (2) Similarly we find that nit) = [ene = sa) da " (11.6-8) Thus so(T) and no(T), as calculated from egs.(11.6-1) and (11.6-2) for the correlation receiver, and as calculated from eqs.(11.6-7) and (1.68) for the matched filter receiver, are identical hence the performances of the two systems are identical. 
The matched filter and the correlator are not simply two distinct, independent techniques which happen to yield the same result. In fact they are two techniques of synthesizing the optimum filter h(t).

Error Probability of ASK

In Amplitude Shift Keying (ASK), some number of carrier cycles are transmitted to send binary '1' and no signal is transmitted for binary '0'. Thus,

Binary '1' => x1(t) = A cos(2πf0 t) = √(2Ps) cos(2πf0 t)
Binary '0' => x2(t) = 0 (i.e., no signal)   ... (5.13.1)

Here Ps is the normalized power of the signal in a 1 Ω load, i.e., Ps = A²/2, hence A = √(2Ps). Therefore, in the above equation for x1(t), the amplitude A is replaced by √(2Ps).

We know that the error probability of the optimum filter is given as

P_e = (1/2) erfc[ (x_o1(T) - x_o2(T))_max / (2√2 σ) ]   ... (5.13.2)

with

[x_o1(T) - x_o2(T)]²_max / σ² = ∫ |X(f)|² / S_ni(f) df

The above equations can be applied to the matched filter when we consider white Gaussian noise. The power spectral density of white Gaussian noise is S_ni(f) = N0/2. Putting this value of S_ni(f) in the above equation we get

[x_o1(T) - x_o2(T)]²_max / σ² = ∫ |X(f)|² / (N0/2) df = (2/N0) ∫ |X(f)|² df   ... (5.13.3)

Parseval's power theorem states that

∫ |X(f)|² df = ∫ x²(t) dt

Hence equation 5.13.3 becomes

[x_o1(T) - x_o2(T)]²_max / σ² = (2/N0) ∫ x²(t) dt

We know that x(t) is present only from 0 to T. Hence the limits in the above equation can be changed as follows:

[x_o1(T) - x_o2(T)]²_max / σ² = (2/N0) ∫_0^T x²(t) dt   ... (5.13.4)

We know that x(t) = x1(t) - x2(t). For ASK, x2(t) is zero, hence x(t) = x1(t), and the above equation becomes

[x_o1(T) - x_o2(T)]²_max / σ² = (2/N0) ∫_0^T x1²(t) dt

Putting the expression for x1(t) from equation 5.13.1 in the above equation we get

= (2/N0) ∫_0^T [√(2Ps) cos(2πf0 t)]² dt = (4Ps/N0) ∫_0^T cos²(2πf0 t) dt

We know that cos²θ = (1 + cos 2θ)/2. Applying this identity to cos²(2πf0 t) we get

= (4Ps/N0) ∫_0^T [1 + cos(4πf0 t)]/2 dt
= (2Ps/N0) [ ∫_0^T dt + ∫_0^T cos(4πf0 t) dt ]
= (2Ps/N0) [ T + sin(4πf0 T)/(4πf0) ]   ... (5.13.5)

We know that T is the bit period, and in this one bit period the carrier has an integer number of cycles; thus the product f0 T is an integer. This is illustrated in Fig. 5.13.1.

Fig. 5.13.1 In one bit period T the carrier, of frequency f0, completes two full cycles, so that 2(1/f0) = T, i.e., f0 T = 2.

As shown in the figure, the carrier completes two cycles in one bit duration, hence f0 T = 2. In general, if the carrier completes 'k' cycles per bit, then f0 T = k, where k is an integer. Therefore the sine term in equation 5.13.5 becomes sin(4πf0 T) = sin(4πk), and for all integer values of k, sin(4πk) = 0. Hence equation 5.13.5 becomes

[x_o1(T) - x_o2(T)]²_max / σ² = 2Ps T / N0   ... (5.13.6)

[x_o1(T) - x_o2(T)]_max / σ = √(2Ps T / N0)   ... (5.13.7)

Putting this value in equation 5.13.2, we get the error probability of ASK using matched filter detection as

P_e = (1/2) erfc[ (1/(2√2)) √(2Ps T / N0) ] = (1/2) erfc[ √(Ps T / (4N0)) ]   ... (5.13.8)

This is the expression for the error probability of ASK using matched filter detection.

Error Probability of Binary FSK

The observation vector x has two elements x1 and x2 that are defined by, respectively,

x1 = ∫_0^Tb x(t) φ1(t) dt   (6.92)
x2 = ∫_0^Tb x(t) φ2(t) dt   (6.93)

where x(t) is the received signal, the form of which depends on which symbol was transmitted. Given that symbol 1 was transmitted, x(t) equals s1(t) + w(t), where w(t) is the sample function of a white Gaussian noise process of zero mean and power spectral density N0/2. If, on the other hand, symbol 0 was transmitted, x(t) equals s2(t) + w(t).

Now, applying the decision rule of Equation (5.59), we find that the observation space is partitioned into two decision regions, labeled Z1 and Z2 in Figure 6.25. The decision boundary, separating region Z1 from region Z2, is the perpendicular bisector of the line joining the two message points.
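Before completing the derivation, the following sketch (my own illustration; the tone frequencies, bit duration, noise level, and energy are assumed values) forms the observation vector of Eqs. (6.92) and (6.93) for one transmitted symbol and picks the nearer message point; as is made explicit in the discussion that follows, this amounts to comparing x1 and x2.

```python
# Sketch of the coherent BFSK observation vector: project the received signal on
# two orthonormal basis functions and decide by the nearer message point.
# All parameter values below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
fs, Tb, Eb, N0 = 10_000, 1.0, 1.0, 0.5          # sample rate, bit time, bit energy, noise PSD level
t = np.arange(0, Tb, 1 / fs)

f1, f2 = 2 / Tb, 3 / Tb                          # tones with an integer number of cycles per bit
phi1 = np.sqrt(2 / Tb) * np.cos(2 * np.pi * f1 * t)   # orthonormal basis functions
phi2 = np.sqrt(2 / Tb) * np.cos(2 * np.pi * f2 * t)

s1 = np.sqrt(Eb) * phi1                          # signal transmitted for symbol 1
w = np.sqrt(N0 * fs / 2) * rng.standard_normal(t.size)  # discrete approximation of noise with PSD N0/2
x = s1 + w                                       # received signal when symbol 1 is sent

x1 = np.sum(x * phi1) / fs                       # Eq. (6.92)
x2 = np.sum(x * phi2) / fs                       # Eq. (6.93)
print(x1, x2, "decide 1" if x1 > x2 else "decide 0")
```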
The receiver decides in favor of symbol 1 if the received signal point represented by the observation vector x falls inside region Z1. This occurs when x1 > x2. If, on the other hand, we have x1 < x2, the received signal point falls inside region Z2 and the receiver decides in favor of symbol 0. On the decision boundary we have x1 = x2, in which case the receiver makes a random guess in favor of symbol 1 or 0.

Define a new Gaussian random variable Y whose sample value y is equal to the difference between x1 and x2; that is,

y = x1 - x2   (6.94)

The mean value of the random variable Y depends on which binary symbol was transmitted. Given that symbol 1 was transmitted, the Gaussian random variables X1 and X2, whose sample values are denoted by x1 and x2, have mean values equal to √Eb and zero, respectively. Correspondingly, the conditional mean of the random variable Y, given that symbol 1 was transmitted, is

E[Y|1] = E[X1|1] - E[X2|1] = +√Eb   (6.95)

On the other hand, given that symbol 0 was transmitted, the random variables X1 and X2 have mean values equal to zero and √Eb, respectively. Correspondingly, the conditional mean of the random variable Y, given that symbol 0 was transmitted, is

E[Y|0] = E[X1|0] - E[X2|0] = -√Eb   (6.96)

The variance of the random variable Y is independent of which binary symbol was transmitted. Since the random variables X1 and X2 are statistically independent, each with a variance equal to N0/2, it follows that

var[Y] = var[X1] + var[X2] = N0   (6.97)

Suppose we know that symbol 0 was transmitted. The conditional probability density function of the random variable Y is then given by

f_Y(y|0) = (1/√(2πN0)) exp[ -(y + √Eb)² / (2N0) ]   (6.98)

Since the condition x1 > x2, or equivalently y > 0, corresponds to the receiver making a decision in favor of symbol 1, we deduce that the conditional probability of error, given that symbol 0 was transmitted, is

p10 = P(y > 0 | symbol 0 was sent)
    = ∫_0^∞ f_Y(y|0) dy
    = (1/√(2πN0)) ∫_0^∞ exp[ -(y + √Eb)² / (2N0) ] dy   (6.99)

Put

z = (y + √Eb) / √(2N0)   (6.100)

Then, changing the variable of integration from y to z, we may rewrite Equation (6.99) as follows:

p10 = (1/√π) ∫_{√(Eb/2N0)}^∞ exp(-z²) dz = (1/2) erfc( √(Eb/(2N0)) )   (6.101)

Similarly, we may show that p01, the conditional probability of error given that symbol 1 was transmitted, has the same value as in Equation (6.101). Accordingly, averaging p10 and p01, we find that the average probability of bit error or, equivalently, the bit error rate for coherent binary FSK is (assuming equiprobable symbols)

P_e = (1/2) erfc( √(Eb/(2N0)) )   (6.102)

Comparing Equations (6.20) and (6.102), we see that, in a coherent binary FSK system, we have to double the bit energy-to-noise density ratio, Eb/N0, to maintain the same bit error rate as in a coherent binary PSK system. This result is in perfect accord with the signal-space diagrams of Figures 6.3 and 6.25, where we see that in a binary PSK system the Euclidean distance between the two message points is equal to 2√Eb, whereas in a binary FSK system the corresponding distance is √(2Eb). For a prescribed Eb, the minimum distance d_min in binary PSK is therefore √2 times that in binary FSK. Recall from Chapter 5 that the probability of error decreases exponentially with d_min²; hence the difference between the formulas of Equations (6.20) and (6.102).
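The closed-form results obtained so far can be placed side by side. The sketch below only evaluates formulas already derived or quoted: Eq. (5.13.8) for matched-filter ASK, rewritten under the assumption that the on-bit energy is Eb = Ps T, Eq. (6.102) for coherent binary FSK, and Eq. (6.20) for coherent binary PSK, which is (1/2) erfc(√(Eb/N0)). The erfc function comes from scipy.

```python
# Numerical comparison of the binary error-probability formulas versus Eb/N0.
# ASK is written with Eb = Ps*T (an assumed definition of the on-bit energy).
import numpy as np
from scipy.special import erfc

EbN0_dB = np.arange(0, 13, 2)
EbN0 = 10 ** (EbN0_dB / 10)

Pe_ask = 0.5 * erfc(np.sqrt(EbN0 / 4))   # ASK with matched filter, Eq. (5.13.8)
Pe_fsk = 0.5 * erfc(np.sqrt(EbN0 / 2))   # coherent binary FSK, Eq. (6.102)
Pe_psk = 0.5 * erfc(np.sqrt(EbN0))       # coherent binary PSK, Eq. (6.20)

for db, pa, pf, pp in zip(EbN0_dB, Pe_ask, Pe_fsk, Pe_psk):
    print(f"{int(db):2d} dB  ASK={pa:.2e}  FSK={pf:.2e}  PSK={pp:.2e}")
```

The printout makes the 3 dB gap between coherent FSK and PSK, noted above, directly visible.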
Error Probability of QPSK

In a coherent QPSK system, the received signal x(t) is defined by x(t) = s_i(t) + w(t), 0 ≤ t ≤ T, i = 1, 2, 3, 4, where w(t) is the sample function of a white Gaussian noise process of zero mean and power spectral density N0/2. When E/2N0 >> 1, we may ignore the quadratic term on the right-hand side of Equation (6.33), so we approximate the formula for the average probability of symbol error for coherent QPSK as

P_e ≈ erfc( √(E/(2N0)) )   (6.34)

The formula of Equation (6.34) may also be derived in another insightful way, using the signal-space diagram of Figure 6.6. Since the four message points of this diagram are circularly symmetric with respect to the origin, we may apply Equation (5.92), reproduced here in the form

P_e ≈ (1/2) Σ_{k≠i} erfc( d_ik / (2√N0) )   for all i   (6.35)

Consider, for example, message point m1 (corresponding to dibit 10) chosen as the transmitted message point. The message points m2 and m4 (corresponding to dibits 00 and 11) are the closest to m1. From Figure 6.6 we readily find that m1 is equidistant from m2 and m4 in a Euclidean sense, as shown by

d12 = d14 = √(2E)

Assuming that E/N0 is large enough to ignore the contribution of the most distant message point m3 (corresponding to dibit 01) relative to m1, we find that the use of Equation (6.35) yields an approximate expression for P_e that is the same as Equation (6.34). Note that in mistaking either m2 or m4 for m1, a single bit error is made; on the other hand, in mistaking m3 for m1, two bit errors are made. For a high enough E/N0, the likelihood of both bits of a symbol being in error is much less than that of a single bit error, which is a further justification for ignoring m3 in calculating P_e when m1 is sent.

In a QPSK system, we note that since there are two bits per symbol, the transmitted signal energy per symbol is twice the signal energy per bit, as shown by

E = 2Eb   (6.36)

Thus, expressing the average probability of symbol error in terms of the ratio Eb/N0, we may write

P_e ≈ erfc( √(Eb/N0) )   (6.37)

With Gray encoding used for the incoming symbols, we find from Equations (6.31) and (6.36) that the bit error rate of QPSK is exactly

BER = (1/2) erfc( √(Eb/N0) )   (6.38)

We may therefore state that a coherent QPSK system achieves the same average probability of bit error as a coherent binary PSK system for the same bit rate and the same Eb/N0, but uses only half the channel bandwidth. Stated in a different way, for the same Eb/N0, and therefore the same average probability of bit error, a coherent QPSK system transmits information at twice the bit rate of a coherent binary PSK system for the same channel bandwidth. For a prescribed performance, QPSK uses channel bandwidth better than binary PSK, which explains the preferred use of QPSK over binary PSK in practice.

ERROR PROBABILITY OF BINARY PSK

To realize a rule for making a decision in favor of symbol 1 or symbol 0, we partition the signal space into two regions:
1. The set of points closest to message point 1 at +√Eb.
2. The set of points closest to message point 2 at -√Eb.

This is accomplished by constructing the midpoint of the line joining these two message points and then marking off the appropriate decision regions. In Figure 6.3 these decision regions are marked Z1 and Z2, according to the message point around which they are constructed. The decision rule is now simply to decide that signal s1(t) (i.e., binary symbol 1) was transmitted if the received signal point falls in region Z1, and to decide that signal s2(t) (i.e., binary symbol 0) was transmitted if the received signal point falls in region Z2.
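The decision rule just stated can be exercised with a short Monte Carlo sketch (my own illustration; the Eb/N0 value and number of trials are arbitrary). The observation is modeled as ±√Eb plus zero-mean Gaussian noise of variance N0/2, the receiver decides by which region the observation falls in, and the measured error rate is compared against the coherent BPSK formula (1/2) erfc(√(Eb/N0)) referred to earlier as Eq. (6.20).

```python
# Monte Carlo sketch of the binary PSK decision rule: decide symbol 1 when the
# observation falls in Z1 (x1 > 0), symbol 0 otherwise.  Parameters are assumed.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
Eb, N0, n_bits = 1.0, 0.5, 200_000

bits = rng.integers(0, 2, n_bits)                              # transmitted symbols
amps = np.where(bits == 1, np.sqrt(Eb), -np.sqrt(Eb))          # message points at +/- sqrt(Eb)
x1 = amps + rng.normal(0.0, np.sqrt(N0 / 2), n_bits)           # observation, noise variance N0/2
decisions = (x1 > 0).astype(int)                               # region Z1: x1 > 0, region Z2: x1 < 0

ber_sim = np.mean(decisions != bits)
ber_theory = 0.5 * erfc(np.sqrt(Eb / N0))
print(ber_sim, ber_theory)    # the two values agree closely for this many trials
```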
Two kinds of erroneous decisions may, however, be made:

1. Signal s2(t) is transmitted, but the noise is such that the received signal point falls inside region Z1, and so the receiver decides in favor of signal s1(t).
2. Alternatively, signal s1(t) is transmitted, but the noise is such that the received signal point falls inside region Z2, and so the receiver decides in favor of signal s2(t).

To calculate the probability of making an error of the first kind, we note from Figure 6.3 that the decision region associated with symbol 1 or signal s1(t) is described by

Z1: 0 < x1 < ∞

where x1 is the observation element obtained by correlating the received signal with the basis function φ1(t).

H(U) ≥ 0, and equality can only be achieved if -P_U(u) log2 P_U(u) = 0 for every symbol u.

Proof. Since 0 < P_U(u) ≤ 1, we have

-P_U(u) log2 P_U(u) = 0   if P_U(u) = 1
-P_U(u) log2 P_U(u) > 0   if 0 < P_U(u) < 1
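A quick numerical check of this property (a sketch, not from the text; the example distributions are arbitrary): every term -P_U(u) log2 P_U(u) is nonnegative, so the entropy is nonnegative, and it vanishes exactly when one symbol has probability 1.

```python
# Entropy in bits of a discrete distribution; zero-probability terms contribute 0.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                        # drop zero-probability symbols
    return float(-np.sum(nz * np.log2(nz)))

print(entropy([0.5, 0.5]))               # 1.0 bit
print(entropy([0.9, 0.1]))               # about 0.469 bits, still positive
print(entropy([1.0, 0.0]))               # 0.0: the equality case, a deterministic source
```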
