Chapter 10
August 7, 2012
Exercise 10.1
From Exercise 8.1, the time-averaged autocorrelation function is
$$R_{YY}[k] = \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{N}R_{YY}[n,n+k] = \begin{cases}1, & k = 0\\ 0, & k \neq 0.\end{cases}$$
The corresponding power spectral density is
$$S_{YY}(f) = \sum_{k=-\infty}^{\infty}R_{YY}[k]\,e^{-j2\pi fk} = 1.$$
Exercise 10.2
From Exercise 8.2, the autocorrelation function of the process $X[n]$ is
$$R_{XX}[k] = \langle R_{XX}[n,n+k]\rangle = \sigma_W^2\,\frac{p^{|k|}}{1-p^2}.$$
The corresponding power spectral density is then
$$S_{XX}(f) = \sum_{k=-\infty}^{\infty}R_{XX}[k]\,e^{-j2\pi fk} = \frac{\sigma_W^2}{1-p^2}\cdot\frac{1-p^2}{1-2p\cos(2\pi f)+p^2} = \frac{\sigma_W^2}{1-2p\cos(2\pi f)+p^2}.$$
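As a quick numerical check on this closed form, the following Python sketch sums the autocorrelation series directly and compares it with the expression above; the values of $\sigma_W^2$ and $p$ are illustrative choices, not quantities taken from the exercise.

```python
import numpy as np

# Illustrative parameter values (not taken from the exercise)
sigma_W2 = 1.3    # sigma_W^2
p = 0.6

f = np.linspace(-0.5, 0.5, 1001)    # one period of the discrete-time PSD
k = np.arange(-200, 201)            # truncate the sum; p^|k| decays quickly

# R_XX[k] = sigma_W^2 * p^|k| / (1 - p^2)
R = sigma_W2 * p**np.abs(k) / (1 - p**2)

# Direct evaluation of S(f) = sum_k R[k] e^{-j 2 pi f k}
S_sum = (R * np.exp(-1j * 2 * np.pi * np.outer(f, k))).sum(axis=1).real

# Closed form derived above
S_closed = sigma_W2 / (1 - 2 * p * np.cos(2 * np.pi * f) + p**2)

print(np.max(np.abs(S_sum - S_closed)))   # ~1e-13: the two expressions agree
```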
Exercise 10.3
The truncated process is
$$X_{t_0}(t) = A\,\mathrm{rect}\!\left(\frac{t}{2t_0}\right)$$
$$S_{XX}(f) = \lim_{t_0\to\infty}\frac{E\left[|X_{t_0}(f)|^2\right]}{2t_0} = \lim_{t_0\to\infty}2t_0\,\mathrm{sinc}^2(2ft_0)\,E\left[A^2\right]$$
$$S_{XX}(f) = E\left[A^2\right]\delta(f).$$
Exercise 10.4
Therefore,
$$FT[\Phi_\Psi(2\pi\tau)] = FT\left[FT^{-1}[f_\Psi(\phi)]\right] = f_\Psi(f).$$
Likewise, $FT[\Phi_\Psi(-2\pi\tau)] = f_\Psi(-f)$. Thus,
$$S_{X,X}(f) = \frac{b^2}{4}f_\Psi(f) + \frac{b^2}{4}f_\Psi(-f).$$
Hence for any valid PSD, S(f ), we can construct a process X(t) = b cos(2πΨt+
Θ) which will have a PSD equal to S(f ) by choosing Ψ to have a PDF whose
even part is proportional to S(f ). The constant b is adjusted to make the
total power match that specified by S(f ).
Exercise 10.5
Since $A$ and $B$ are identically distributed, $E[A^2] = E[B^2] = \sigma^2$, so that the mean and autocorrelation function do not depend on absolute time and the process is WSS.
Since $\cos^3(\omega t) + \sin^3(\omega t)$ is not constant, the process will not be strictly stationary for any random variable $A$ such that $E[A^3] \neq 0$.
Exercise 10.6
(a)
$$R_{X,X}(t,t+\tau) = E\left[\left(\sum_{n=1}^{N}a_n\cos(\omega_nt+\theta_n)\right)\left(\sum_{m=1}^{N}a_m\cos(\omega_m(t+\tau)+\theta_m)\right)\right]$$
$$= \sum_{n=1}^{N}\sum_{m=1}^{N}a_na_m\,E[\cos(\omega_nt+\theta_n)\cos(\omega_m(t+\tau)+\theta_m)]$$
The expected value in the above expression is zero for all $m\neq n$. Therefore
$$R_{X,X}(t,t+\tau) = \sum_{n=1}^{N}a_n^2\,E[\cos(\omega_nt+\theta_n)\cos(\omega_n(t+\tau)+\theta_n)]$$
$$R_{X,X}(\tau) = \frac{1}{2}\sum_{n=1}^{N}a_n^2\cos(\omega_n\tau).$$
(b)
$$S_{X,X}(f) = FT[R_{X,X}(\tau)] = \frac{1}{4}\sum_{n=1}^{N}a_n^2\left\{\delta(f-f_n)+\delta(f+f_n)\right\}.$$
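The result of part (a) can be checked by Monte-Carlo simulation over the random phases; the amplitudes and frequencies below are illustrative choices rather than values from the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative amplitudes and frequencies (not values from the exercise)
a = np.array([1.0, 0.5, 0.25])
omega = 2 * np.pi * np.array([1.0, 2.5, 4.0])

t, tau = 0.3, 0.7          # R(t, t+tau) should depend only on tau
trials = 200_000

theta = rng.uniform(0, 2 * np.pi, size=(trials, len(a)))   # i.i.d. uniform phases
x1 = (a * np.cos(omega * t + theta)).sum(axis=1)
x2 = (a * np.cos(omega * (t + tau) + theta)).sum(axis=1)

R_mc = np.mean(x1 * x2)                             # Monte-Carlo estimate of R(t, t+tau)
R_formula = 0.5 * np.sum(a**2 * np.cos(omega * tau))

print(R_mc, R_formula)    # the two agree to within Monte-Carlo error
```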
Exercise 10.7
(a)
$$R_{X,X}(t,t+\tau) = E\left[\left(\sum_{n=1}^{\infty}A_n\cos(n\omega t)+B_n\sin(n\omega t)\right)\left(\sum_{m=1}^{\infty}A_m\cos(m\omega(t+\tau))+B_m\sin(m\omega(t+\tau))\right)\right]$$
$$= \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\big\{E[A_nA_m]\cos(n\omega t)\cos(m\omega(t+\tau)) + E[A_nB_m]\cos(n\omega t)\sin(m\omega(t+\tau))$$
$$\qquad\qquad + E[B_nA_m]\sin(n\omega t)\cos(m\omega(t+\tau)) + E[B_nB_m]\sin(n\omega t)\sin(m\omega(t+\tau))\big\}$$
$$= \sum_{n=1}^{\infty}\left\{E[A_n^2]\cos(n\omega t)\cos(n\omega(t+\tau)) + E[B_n^2]\sin(n\omega t)\sin(n\omega(t+\tau))\right\}$$
Exercise 10.8
SX,X (f ) = F [RX,X (τ )] = F [1] = δ(f ).
That is, all power in the process is at d.c.
Exercise 10.9
(a)
$$R_{Y,Y}(t,t+\tau) = E[Y(t)Y(t+\tau)] = E[X^2(t)X^2(t+\tau)]$$
$$= E[X^2(t)]E[X^2(t+\tau)] + 2E[X(t)X(t+\tau)]^2 = R_{X,X}^2(0) + 2R_{X,X}^2(\tau)$$
$$S_{Y,Y}(f) = R_{X,X}^2(0)\delta(f) + 2S_{X,X}(f)*S_{X,X}(f) = \left(\int_{-\infty}^{\infty}S_{X,X}(f)\,df\right)^2\delta(f) + 2S_{X,X}(f)*S_{X,X}(f)$$
(b)
$$R_{X,X}(0) = \int_{-\infty}^{\infty}S_{X,X}(f)\,df = 2B.$$
$$S_{X,X}(f)*S_{X,X}(f) = 2B\,\mathrm{tri}\!\left(\frac{f}{2B}\right).$$
$$S_{Y,Y}(f) = 4B^2\delta(f) + 4B\,\mathrm{tri}\!\left(\frac{f}{2B}\right).$$
[Figure: plot of S_{Y,Y}(f) versus f.]
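Part (b) relies on the self-convolution of a unit-height rectangle of width $2B$ being $2B\,\mathrm{tri}(f/2B)$; a small numerical check (with an arbitrary value of $B$) is given below.

```python
import numpy as np

# Check that the self-convolution of S_XX(f) = rect(f/2B) equals 2B*tri(f/2B).
B = 2.0
df = 0.001
f = np.arange(-4 * B, 4 * B, df)

S = np.where(np.abs(f) < B, 1.0, 0.0)              # S_XX(f) = rect(f/2B)
conv = np.convolve(S, S, mode="same") * df          # numerical convolution

tri = np.clip(1 - np.abs(f) / (2 * B), 0, None)     # tri(f/2B)
print(np.max(np.abs(conv - 2 * B * tri)))           # small (discretization error only)
```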
(c)
$$E[Y(t)] = E[X^2(t)] = \sigma_X^2 = \text{constant}$$
$$R_{Y,Y}(t,t+\tau) = R_{X,X}^2(0) + 2R_{X,X}^2(\tau)$$
Since E[Y (t)] is constant and RY,Y (t, t+τ ) is a function of τ only, the process
is WSS.
Exercise 10.10
Exercise 10.11
Let
$$s(t) = \sum_{k=-\infty}^{\infty}s_k\exp(j2\pi kf_ot).$$
Then
$$s(t-T) = \sum_{k=-\infty}^{\infty}s_k\exp(j2\pi kf_o(t-T))$$
$$R_{X,X}(\tau) = E\left[\sum_k\sum_m s_ks_m^*\exp(j2\pi kf_o(t-T))\exp(-j2\pi mf_o(t+\tau-T))\right]$$
$$= \sum_k\sum_m s_ks_m^*\exp(j2\pi(k-m)f_ot)\exp(-j2\pi mf_o\tau)\,E[\exp(j2\pi(k-m)f_oT)]$$
$$E[\exp(j2\pi(k-m)f_oT)] = \frac{1}{t_o}\int_{0}^{t_o}\exp(j2\pi(k-m)f_ou)\,du = \begin{cases}0 & k\neq m\\ 1 & k=m\end{cases}$$
$$R_{X,X}(\tau) = \sum_{k=-\infty}^{\infty}|s_k|^2\exp(-j2\pi kf_o\tau)$$
$$S_{X,X}(f) = \sum_{k=-\infty}^{\infty}|s_k|^2\delta(f-kf_o)$$
Hence the process $X(t) = s(t-T)$ has a line spectrum and the height of each line is given by the magnitude squared of the Fourier series coefficients.
Exercise 10.12
Write $Y(t) = b\cos(\omega_ot + \Omega t + \Theta)$ where $\Omega = 2\pi f_oV/c$ and $\omega_o = 2\pi f_o$. Note that $\Omega$ is uniformly distributed over $(-\omega_o\nu_o/c,\ \omega_o\nu_o/c)$. For simplicity, define $z_o = \omega_o\nu_o/c$.
Assuming $\Theta$ is uniform over $[0,2\pi)$ the second expectation is zero. Hence
$$R_{Y,Y}(\tau) = \frac{b^2}{2}\,\frac{1}{2z_o}\int_{-z_o}^{z_o}\cos((\omega_o+\omega)\tau)\,d\omega$$
$$= \frac{b^2}{4z_o\tau}\left[\sin((z_o+\omega_o)\tau)+\sin((z_o-\omega_o)\tau)\right]$$
$$= \frac{b^2}{2}\,\frac{\sin(z_o\tau)}{z_o\tau}\,\cos(\omega_o\tau)$$
$$S_{Y,Y}(f) = \frac{b^2\pi}{4z_o}\left[\mathrm{rect}\!\left(\frac{\pi(f-f_o)}{z_o}\right)+\mathrm{rect}\!\left(\frac{\pi(f+f_o)}{z_o}\right)\right]$$
The power of the signal is now spread over a range of frequencies around
±fo .
Exercise 10.13
(a)
$$R_{Z,Z}[k] = R_{X,X}[k] + R_{Y,Y}[k] = \left(\frac{1}{2}\right)^{|k|} + \left(\frac{1}{3}\right)^{|k|}$$
(see Exercise 8.19 for details)
(b) For a function of the form $R[k] = p^{|k|}$, the Fourier transform is ($t_o$ is the time between samples of the discrete-time process)
$$S(f) = \sum_k R[k]e^{-j2\pi kft_o} = 1 + \sum_{k=1}^{\infty}p^k\left\{e^{-j2\pi kft_o}+e^{j2\pi kft_o}\right\}$$
$$= 1 + \frac{pe^{-j2\pi ft_o}}{1-pe^{-j2\pi ft_o}} + \frac{pe^{j2\pi ft_o}}{1-pe^{j2\pi ft_o}} = \frac{1-p^2}{1+p^2-2p\cos(2\pi ft_o)}$$
Therefore,
$$S_{X,X}(f) = \frac{3/4}{5/4-\cos(2\pi ft_o)}, \qquad S_{Y,Y}(f) = \frac{8/9}{10/9-(2/3)\cos(2\pi ft_o)},$$
$$S_{Z,Z}(f) = S_{X,X}(f) + S_{Y,Y}(f).$$
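A quick numerical check of these closed forms, summing the autocorrelation series directly ($t_o = 1$ is an arbitrary choice):

```python
import numpy as np

t_o = 1.0                                   # arbitrary sample spacing
f = np.linspace(-0.5 / t_o, 0.5 / t_o, 501)
k = np.arange(-300, 301)
E = np.exp(-1j * 2 * np.pi * np.outer(f, k) * t_o)

def psd_from_R(p):
    """Numerically evaluate sum_k p^|k| e^{-j 2 pi k f t_o}."""
    return (E * p**np.abs(k)).sum(axis=1).real

S_X = psd_from_R(1.0 / 2.0)
S_Y = psd_from_R(1.0 / 3.0)

S_X_closed = (3.0 / 4.0) / (5.0 / 4.0 - np.cos(2 * np.pi * f * t_o))
S_Y_closed = (8.0 / 9.0) / (10.0 / 9.0 - (2.0 / 3.0) * np.cos(2 * np.pi * f * t_o))

print(np.max(np.abs(S_X - S_X_closed)), np.max(np.abs(S_Y - S_Y_closed)))
# Both differences are tiny, and S_ZZ is simply the sum of the two closed forms.
```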
Exercise 10.14
For a discrete-time random process: $S(f) = \sum_{k=-\infty}^{\infty}R[k]e^{-j2\pi kft_o}$.
The inverse relationship is: $R[k] = t_o\int_{-1/(2t_o)}^{1/(2t_o)}S(f)e^{j2\pi kft_o}\,df$.
The average power in the process is:
$$P_{avg} = \frac{1}{t_o}E[X^2[k]] = \frac{1}{t_o}R_{X,X}[0] = \int_{-1/(2t_o)}^{1/(2t_o)}S(f)\,df.$$
Exercise 10.15
(a)
$$R_{X,X}(t_1,t_2) = E[\cos(\omega_ct_1+B[n_1]\pi/2)\cos(\omega_ct_2+B[n_2]\pi/2)],$$
where $n_1$ and $n_2$ are integers such that $n_1T \leq t_1 < (n_1+1)T$ and $n_2T \leq t_2 < (n_2+1)T$. For $t_1, t_2$ such that $n_1\neq n_2$,
$$R_{X,X}(t_1,t_2) = E[\cos(\omega_ct_1+B[n_1]\pi/2)]\,E[\cos(\omega_ct_2+B[n_2]\pi/2)] = 0,$$
while for $t_1, t_2$ such that $n_1 = n_2$,
$$R_{X,X}(t_1,t_2) = \frac{1}{2}\cos(\omega_c(t_2-t_1)) + \frac{1}{2}E[\cos(\omega_c(t_2+t_1)+\pi B[n_1])]$$
$$= \frac{1}{2}\cos(\omega_c(t_2-t_1)) - \frac{1}{2}\cos(\omega_c(t_2+t_1))$$
Since this autocorrelation depends on more than just $t_1 - t_2$, the process is not WSS.
(b) From part (a),
$$R_{X,X}(t,t+\tau) = \begin{cases}0 & \text{if } t,\ t+\tau \text{ are in different intervals,}\\[4pt] \dfrac{1}{2}\cos(\omega_c\tau)-\dfrac{1}{2}\cos(\omega_c(2t+\tau)) & \text{if } t,\ t+\tau \text{ are in the same interval.}\end{cases}$$
Therefore, averaging over $t$, the probability that $t$ and $t+\tau$ fall in the same interval is $p(\tau) = \mathrm{tri}(\tau/T)$, and the $\cos(\omega_c(2t+\tau))$ term averages to zero, so
$$R_{X,X}(\tau) = \frac{1}{2}\mathrm{tri}(\tau/T)\cos(\omega_c\tau)$$
$$S_{X,X}(f) = \frac{1}{2}FT[\mathrm{tri}(\tau/T)]*FT[\cos(\omega_c\tau)] = \frac{T}{4}\left[\mathrm{sinc}^2((f-f_c)T)+\mathrm{sinc}^2((f+f_c)T)\right],$$
where $f_c = \omega_c/(2\pi)$.
Exercise 10.16
[Figure: plot of S_{Y,Y}(f) versus f.]
Exercise 10.17
(a)
$$R_{ZZ}(\tau) = E[Z(t)Z(t+\tau)] = E[(X(t)+Y(t))(X(t+\tau)+Y(t+\tau))]$$
$$= R_{XX}(\tau) + R_{XY}(\tau) + R_{XY}(-\tau) + R_{YY}(\tau)$$
$$S_{ZZ}(f) = S_{XX}(f) + S_{XY}(f) + S_{XY}^*(f) + S_{YY}(f)$$
(b) We will have $S_{ZZ}(f) = S_{XX}(f) + S_{YY}(f)$ if
$$S_{XY}(f) + S_{XY}^*(f) = 0 \quad\Rightarrow\quad \mathrm{Re}[S_{XY}(f)] = 0.$$
Exercise 10.18
$$B_{rms}^2 = \frac{\int_{-\infty}^{\infty}f^2S(f)\,df}{\int_{-\infty}^{\infty}S(f)\,df}$$
Recall that $R(\tau) = \int_{-\infty}^{\infty}S(f)e^{j2\pi f\tau}\,df$. Thus the denominator is simply $R(0)$. Also note that
$$\frac{d^2R(\tau)}{d\tau^2} = \int_{-\infty}^{\infty}(j2\pi f)^2S(f)e^{j2\pi f\tau}\,df.$$
Therefore, the numerator is
$$\int_{-\infty}^{\infty}f^2S(f)\,df = -\frac{1}{(2\pi)^2}\left.\frac{d^2R(\tau)}{d\tau^2}\right|_{\tau=0}.$$
Therefore,
$$B_{rms}^2 = -\frac{1}{(2\pi)^2R(0)}\left.\frac{d^2R(\tau)}{d\tau^2}\right|_{\tau=0}.$$
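The equivalence of the two expressions for $B_{rms}$ can be illustrated numerically; the sketch below uses a Gaussian-shaped test PSD (an arbitrary choice, for which the RMS bandwidth is known exactly) and a finite-difference approximation of $R''(0)$.

```python
import numpy as np

# Illustration of B_rms^2 = -R''(0) / ((2*pi)^2 R(0)) for the test PSD
# S(f) = exp(-f^2/(2*sig^2)); for this choice the RMS bandwidth is exactly sig.
sig = 1.5
f = np.linspace(-20, 20, 400_001)
df = f[1] - f[0]
S = np.exp(-f**2 / (2 * sig**2))

# Direct definition of the RMS bandwidth (the common factor df cancels)
Brms_direct = np.sqrt(np.sum(f**2 * S) / np.sum(S))

# Via the autocorrelation R(tau) = int S(f) e^{j 2 pi f tau} df (S is even, so R is real)
def R(tau):
    return np.sum(S * np.cos(2 * np.pi * f * tau)) * df

h = 1e-4
Rpp0 = (R(h) - 2 * R(0.0) + R(-h)) / h**2            # numerical second derivative at 0
Brms_via_R = np.sqrt(-Rpp0 / ((2 * np.pi)**2 * R(0.0)))

print(Brms_direct, Brms_via_R, sig)                   # all three approximately equal
```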
Exercise 10.19
(a) The absolute BW is ∞ since S(f ) > 0 for all |f | < ∞.
(b) The 3 dB bandwidth, $f_3$, satisfies
$$\frac{1}{(1+(f_3/B)^2)^3} = \frac{1}{2} \quad\Rightarrow\quad f_3 = B\sqrt{2^{1/3}-1} = 0.5098B.$$
(c)
$$\int_{-\infty}^{\infty}f^2S(f)\,df = \int_{-\infty}^{\infty}\frac{f^2}{(1+(f/B)^2)^3}\,df = B^3\int_{-\infty}^{\infty}\frac{z^2}{(1+z^2)^3}\,dz = \frac{\pi}{8}B^3$$
$$\int_{-\infty}^{\infty}S(f)\,df = \int_{-\infty}^{\infty}\frac{1}{(1+(f/B)^2)^3}\,df = B\int_{-\infty}^{\infty}\frac{1}{(1+z^2)^3}\,dz = \frac{3\pi}{8}B$$
$$B_{rms}^2 = \frac{\frac{\pi}{8}B^3}{\frac{3\pi}{8}B} = \frac{B^2}{3} \quad\Rightarrow\quad B_{rms} = \frac{B}{\sqrt{3}}.$$
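A direct numerical evaluation of the ratio of integrals confirms the result ($B = 1$ without loss of generality):

```python
import numpy as np

B = 1.0
f = np.linspace(-500, 500, 2_000_001)    # wide range: the tails decay as f^-4 and f^-6
S = 1.0 / (1.0 + (f / B)**2)**3

Brms = np.sqrt(np.sum(f**2 * S) / np.sum(S))   # the common df cancels in the ratio
print(Brms, B / np.sqrt(3))                     # both approximately 0.5774
```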
Exercise 10.20
(a) The absolute BW is $\infty$ since $S(f) > 0$ for all $|f| < \infty$.
(b) The peak value of the PSD occurs at $f = B/\sqrt{2}$ and has a value of $S_{max} = 4/27$. Next we seek the values of $f$ which satisfy
$$\frac{(f/B)^2}{(1+(f/B)^2)^3} = \frac{1}{2}S_{max} \quad\Rightarrow\quad (f/B)^2 = \frac{2}{27}(1+(f/B)^2)^3.$$
The two solutions are $f_1 = \sqrt{\frac{3\sqrt{3}-5}{2}}\,B$ and $f_2 = \sqrt{2}\,B$. The 3 dB bandwidth is then
$$f_3 = f_2 - f_1 = \left(\sqrt{2}-\sqrt{\frac{3\sqrt{3}-5}{2}}\right)B = 1.1010B.$$
(c) Since this is a bandpass process, the definition of (10.24) in the text is used. First we must find
$$f_o = \frac{\int_0^{\infty}fS(f)\,df}{\int_0^{\infty}S(f)\,df}$$
$$\int_0^{\infty}fS(f)\,df = \int_0^{\infty}\frac{f(f/B)^2}{(1+(f/B)^2)^3}\,df = B^2\int_0^{\infty}\frac{z^3}{(1+z^2)^3}\,dz = \frac{1}{4}B^2$$
$$\int_0^{\infty}S(f)\,df = \int_0^{\infty}\frac{(f/B)^2}{(1+(f/B)^2)^3}\,df = B\int_0^{\infty}\frac{z^2}{(1+z^2)^3}\,dz = \frac{\pi}{16}B$$
$$f_o = \frac{\frac{1}{4}B^2}{\frac{\pi}{16}B} = \frac{4}{\pi}B.$$
Next, we find the RMS bandwidth according to
$$B_{rms}^2 = \frac{4\int_0^{\infty}(f-f_o)^2S(f)\,df}{\int_0^{\infty}S(f)\,df}$$
$$\int_0^{\infty}(f-f_o)^2S(f)\,df = B^3\int_0^{\infty}\frac{(z-4/\pi)^2z^2}{(1+z^2)^3}\,dz = 0.2707B^3$$
$$B_{rms}^2 = \frac{4\cdot 0.2707B^3}{\frac{\pi}{16}B} = 5.5154B^2 \quad\Rightarrow\quad B_{rms} = 2.3485B.$$
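The integrals in part (c) can be checked numerically; the sketch below evaluates the dimensionless integrands with SciPy's quad, so $B$ cancels from $f_o/B$ and $B_{rms}/B$.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the integrals used in part (c) for S(f) = (f/B)^2/(1+(f/B)^2)^3.
num, _ = quad(lambda z: z**3 / (1 + z**2)**3, 0, np.inf)       # = 1/4
den, _ = quad(lambda z: z**2 / (1 + z**2)**3, 0, np.inf)       # = pi/16
fo = num / den                                                  # f_o / B
print(fo, 4 / np.pi)                                            # ~1.2732

I2, _ = quad(lambda z: (z - fo)**2 * z**2 / (1 + z**2)**3, 0, np.inf)   # ~0.2707
Brms = np.sqrt(4 * I2 / den)                                    # B_rms / B
print(I2, Brms)                                                 # ~0.2707, ~2.3485
```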
Exercise 10.21
$$X[n] = \frac{1}{2}X[n-1] + E[n]. \qquad (1)$$
Taking expectations of both sides of (1) results in
$$\mu[n] = \frac{1}{2}\mu[n-1], \quad n = 1, 2, 3, \ldots.$$
Hence $\mu[n] = (1/2)^n\mu[0]$. Noting that $X[0] = 0$, then $\mu[0] = 0 \Rightarrow \mu[n] = 0$.
Multiply both sides of (1) by $X[k]$ and then take expected values to produce
$$E[X[k]X[n]] = \frac{1}{2}E[X[k]X[n-1]] + E[X[k]E[n]].$$
Assuming $k < n$, $X[k]$ and $E[n]$ are independent. Thus, $E[X[k]E[n]] = 0$ and therefore
$$R_{X,X}[k,n] = \frac{1}{2}R_{X,X}[k,n-1]$$
$$\Rightarrow\quad R_{X,X}[k,n] = \left(\frac{1}{2}\right)^{n-k}R_{X,X}[k,k], \quad n = k, k+1, k+2, \ldots.$$
Following a similar procedure, it can be shown that if $k > n$
$$R_{X,X}[k,n] = \left(\frac{1}{2}\right)^{k-n}R_{X,X}[n,n].$$
Hence in general
$$R_{X,X}[k,n] = \left(\frac{1}{2}\right)^{|n-k|}R_{X,X}[m,m], \quad\text{where } m = \min(n,k).$$
Note that $R_{X,X}[m,m]$ can be found as follows:
$$R_{X,X}[m,m] = E[X^2[m]] = E\left[\left(\frac{1}{2}X[m-1]+E[m]\right)^2\right]$$
$$= \frac{1}{4}R_{X,X}[m-1,m-1] + E[X[m-1]E[m]] + E[E^2[m]].$$
Since $X[m-1]$ and $E[m]$ are uncorrelated, we have the following recursion
$$R_{X,X}[m,m] = \frac{1}{4}R_{X,X}[m-1,m-1] + \sigma_E^2$$
$$\Rightarrow\quad R_{X,X}[m,m] = \left(\frac{1}{4}\right)^mR_{X,X}[0,0] + \sigma_E^2\sum_{i=0}^{m-1}\left(\frac{1}{4}\right)^i.$$
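A short Monte-Carlo simulation of the recursion confirms the form of $R_{X,X}[k,n]$; $\sigma_E = 1$ is an illustrative choice, not a value given in the exercise.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_E = 1.0                      # illustrative noise standard deviation
N, trials = 21, 200_000

X = np.zeros((trials, N))          # X[0] = 0, as stated in the exercise
for n in range(1, N):
    X[:, n] = 0.5 * X[:, n - 1] + sigma_E * rng.standard_normal(trials)

k, n = 5, 7
R_kn_sim = np.mean(X[:, k] * X[:, n])                      # simulated R_XX[k, n]
R_mm = sigma_E**2 * (1 - 0.25**k) / (1 - 0.25)             # R_XX[k, k] from the recursion
print(R_kn_sim, 0.5**(n - k) * R_mm)                        # agree to within Monte-Carlo error
```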
Exercise 10.22
(a) Given
$$Y[n] = a_1Y[n-1] + a_2Y[n-2] + X[n], \qquad (2)$$
then multiplying (2) by $Y[n-k]$ and taking expectations produces
$$E[Y[n]Y[n-k]] = a_1E[Y[n-1]Y[n-k]] + a_2E[Y[n-2]Y[n-k]] + E[X[n]Y[n-k]]. \qquad (3)$$
Note that (for $k>0$) $X[n]$ and $Y[n-k]$ are independent since $Y[n-k]$ depends on $X[n-k], X[n-k-1], \ldots,$ but not on $X[n]$. Hence,
$$E[X[n]Y[n-k]] = E[X[n]]E[Y[n-k]] = 0.$$
Then (3) becomes
$$R_{Y,Y}[k] = a_1R_{Y,Y}[k-1] + a_2R_{Y,Y}[k-2]. \qquad (4)$$
(b) Squaring both sides of (2) and taking expectations gives
$$E[Y^2[n]] = E[(a_1Y[n-1] + a_2Y[n-2] + X[n])^2]$$
$$R_{Y,Y}[0] = (a_1^2+a_2^2)R_{Y,Y}[0] + 2a_1a_2R_{Y,Y}[1] + R_{X,X}[0]$$
$$\Rightarrow\quad \sigma_X^2 = (1-a_1^2-a_2^2)R_{Y,Y}[0] - 2a_1a_2R_{Y,Y}[1].$$
From (4) with $k=1$, we get
$$R_{Y,Y}[1] = a_1R_{Y,Y}[0] + a_2R_{Y,Y}[-1].$$
Since $R_{Y,Y}[1] = R_{Y,Y}[-1]$ we have
$$(1-a_2)R_{Y,Y}[1] - a_1R_{Y,Y}[0] = 0.$$
Thus we have the following $2\times 2$ system of linear equations:
$$\begin{bmatrix}1-a_1^2-a_2^2 & -2a_1a_2\\ -a_1 & 1-a_2\end{bmatrix}\begin{bmatrix}R_{Y,Y}[0]\\ R_{Y,Y}[1]\end{bmatrix} = \begin{bmatrix}\sigma_X^2\\ 0\end{bmatrix}
\quad\Rightarrow\quad R_{Y,Y}[0] = \frac{\sigma_X^2}{\Delta}(1-a_2), \qquad R_{Y,Y}[1] = \frac{\sigma_X^2}{\Delta}a_1,$$
where $\Delta = (1-a_2)(1-a_1^2-a_2^2) - 2a_1^2a_2$ is the determinant of the matrix. This then provides the initial conditions for the difference equation in (4).
(c) The general solution of (4) will be of the form
$$R_{Y,Y}[k] = k_1b_1^{|k|} + k_2b_2^{|k|}, \quad\text{where } b_1, b_2 = \frac{a_1\pm\sqrt{a_1^2+4a_2}}{2}. \qquad (5)$$
The constants $k_1$ and $k_2$ are found using the initial conditions:
$$R_{Y,Y}[0] = k_1 + k_2 = \frac{\sigma_X^2}{\Delta}(1-a_2)$$
$$R_{Y,Y}[1] = k_1b_1 + k_2b_2 = \frac{\sigma_X^2}{\Delta}a_1$$
$$\Rightarrow\quad k_1 = \frac{\sigma_X^2}{\Delta(b_2-b_1)}\left[b_2(1-a_2)-a_1\right].$$
(d) Taking the discrete-time Fourier transform of (5) gives
$$S_{Y,Y}(f) = \frac{k_1(1-b_1^2)}{1+b_1^2-2b_1\cos(2\pi f)} + \frac{k_2(1-b_2^2)}{1+b_2^2-2b_2\cos(2\pi f)}.$$
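The expressions for $R_{Y,Y}[0]$ and $R_{Y,Y}[1]$ above can be checked by simulating the AR(2) recursion; the coefficients and $\sigma_X$ below are illustrative stable choices, not values from the exercise.

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2, sigma_X = 0.5, -0.3, 1.0        # illustrative stable AR(2) coefficients
N = 500_000

X = sigma_X * rng.standard_normal(N)
Y = np.zeros(N)
for n in range(2, N):
    Y[n] = a1 * Y[n - 1] + a2 * Y[n - 2] + X[n]
Y = Y[1000:]                             # discard the start-up transient

R0_sim = np.mean(Y * Y)
R1_sim = np.mean(Y[:-1] * Y[1:])

Delta = (1 - a2) * (1 - a1**2 - a2**2) - 2 * a1**2 * a2
print(R0_sim, sigma_X**2 * (1 - a2) / Delta)   # both ~1.29
print(R1_sim, sigma_X**2 * a1 / Delta)         # both ~0.50
```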
Exercise 10.23
(a)
$$E[\epsilon^2] = E[(Y[n+1] - a_1Y[n] - a_2Y[n-1])^2]$$
$$= R_{Y,Y}[0](1+a_1^2+a_2^2) - 2a_1(1-a_2)R_{Y,Y}[1] - 2a_2R_{Y,Y}[2]$$
(b)
$$\frac{\partial E[\epsilon^2]}{\partial a_1} = 2a_1R_{Y,Y}[0] - 2(1-a_2)R_{Y,Y}[1] = 0 \quad\Rightarrow\quad R_{Y,Y}[0]a_1 + R_{Y,Y}[1]a_2 = R_{Y,Y}[1]$$
$$\frac{\partial E[\epsilon^2]}{\partial a_2} = 2a_2R_{Y,Y}[0] - 2R_{Y,Y}[2] + 2a_1R_{Y,Y}[1] = 0 \quad\Rightarrow\quad R_{Y,Y}[1]a_1 + R_{Y,Y}[0]a_2 = R_{Y,Y}[2]$$
$$\Rightarrow\quad \begin{bmatrix}R_{Y,Y}[0] & R_{Y,Y}[1]\\ R_{Y,Y}[1] & R_{Y,Y}[0]\end{bmatrix}\begin{bmatrix}a_1\\ a_2\end{bmatrix} = \begin{bmatrix}R_{Y,Y}[1]\\ R_{Y,Y}[2]\end{bmatrix}$$
$$\Rightarrow\quad \begin{bmatrix}a_1\\ a_2\end{bmatrix} = \frac{1}{R_{Y,Y}^2[0]-R_{Y,Y}^2[1]}\begin{bmatrix}R_{Y,Y}[0]R_{Y,Y}[1] - R_{Y,Y}[1]R_{Y,Y}[2]\\ R_{Y,Y}[0]R_{Y,Y}[2] - R_{Y,Y}^2[1]\end{bmatrix}$$
Exercise 10.24
(a)
$$E[\epsilon^2] = E\left[\left(Y[n+1] - \sum_{k=1}^{p}a_kY[n-k+1]\right)^2\right]$$
$$= E[Y^2[n+1]] - 2\sum_{k=1}^{p}a_kE[Y[n+1]Y[n+1-k]] + \sum_{k=1}^{p}\sum_{m=1}^{p}a_ka_mE[Y[n+1-k]Y[n+1-m]]$$
$$= R_{Y,Y}[0] - 2\sum_{k=1}^{p}a_kR_{Y,Y}[k] + \sum_{k=1}^{p}\sum_{m=1}^{p}a_ka_mR_{Y,Y}[m-k]$$
Then the mean squared error can be written in matrix form as
$$E[\epsilon^2] = R_{Y,Y}[0] - 2\mathbf{a}^T\mathbf{r} + \mathbf{a}^T\mathbf{R}\mathbf{a},$$
where $\mathbf{a} = (a_1,\ldots,a_p)^T$, $\mathbf{r} = (R_{Y,Y}[1],\ldots,R_{Y,Y}[p])^T$, and $\mathbf{R}$ is the $p\times p$ matrix with elements $[\mathbf{R}]_{k,m} = R_{Y,Y}[m-k]$.
(b)
$$\nabla_{\mathbf{a}}E[\epsilon^2] = -2\mathbf{r} + 2\mathbf{R}\mathbf{a} = 0 \quad\Rightarrow\quad \mathbf{a} = \mathbf{R}^{-1}\mathbf{r}.$$
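A minimal numerical sketch of the solution $\mathbf{a} = \mathbf{R}^{-1}\mathbf{r}$, applied to sample autocorrelations estimated from a simulated AR(2) process (illustrative coefficients); with $p = 2$ this reproduces the $2\times 2$ result of Exercise 10.23 and should recover the coefficients approximately.

```python
import numpy as np

rng = np.random.default_rng(3)
a1_true, a2_true = 0.5, -0.3            # illustrative AR(2) coefficients
N = 500_000

X = rng.standard_normal(N)
Y = np.zeros(N)
for n in range(2, N):
    Y[n] = a1_true * Y[n - 1] + a2_true * Y[n - 2] + X[n]
Y = Y[1000:]                             # discard the start-up transient
M = len(Y)

p = 2
acf = np.array([np.mean(Y[: M - k] * Y[k:]) for k in range(p + 1)])    # R_YY[0..p]

R = np.array([[acf[abs(k - m)] for m in range(p)] for k in range(p)])  # Toeplitz matrix R
r = acf[1 : p + 1]
a = np.linalg.solve(R, r)       # a = R^{-1} r, the one-step predictor coefficients
print(a)                         # approximately [0.5, -0.3]
```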
Exercise 10.25
$$E\left[\hat{R}_{XX}(\tau)\right] = \frac{1}{2t_0-|\tau|}\int_{-t_0+\frac{|\tau|}{2}}^{t_0-\frac{|\tau|}{2}}E\left[X\!\left(t-\frac{\tau}{2}\right)X\!\left(t+\frac{\tau}{2}\right)\right]dt$$
$$= \frac{1}{2t_0-|\tau|}\int_{-t_0+\frac{|\tau|}{2}}^{t_0-\frac{|\tau|}{2}}R_{XX}\!\left(t-\frac{\tau}{2},\ t+\frac{\tau}{2}\right)dt = \frac{R_{XX}(\tau)}{2t_0-|\tau|}\,(2t_0-|\tau|) = R_{XX}(\tau)$$
Exercise 10.26
$$\mathrm{Var}\left[\hat{R}_{XX}(\tau)\right] = E\left[\left(\hat{R}_{XX}(\tau) - E\left[\hat{R}_{XX}(\tau)\right]\right)^2\right] = E\left[\left(\hat{R}_{XX}(\tau) - R_{XX}(\tau)\right)^2\right] = E\left[\hat{R}_{XX}^2(\tau)\right] - R_{XX}^2(\tau)$$
$$E\left[\hat{R}_{XX}^2(\tau)\right] = \frac{1}{(2t_0-|\tau|)^2}\int_{-t_0+\frac{|\tau|}{2}}^{t_0-\frac{|\tau|}{2}}\int_{-t_0+\frac{|\tau|}{2}}^{t_0-\frac{|\tau|}{2}}E\left[X\!\left(t-\frac{\tau}{2}\right)X\!\left(t+\frac{\tau}{2}\right)X\!\left(u-\frac{\tau}{2}\right)X\!\left(u+\frac{\tau}{2}\right)\right]dt\,du$$
Using the Gaussian moment factoring theorem
$$E\left[X\!\left(t-\frac{\tau}{2}\right)X\!\left(t+\frac{\tau}{2}\right)X\!\left(u-\frac{\tau}{2}\right)X\!\left(u+\frac{\tau}{2}\right)\right] = R_{XX}^2(\tau) + R_{XX}^2(u-t) + R_{XX}(u-t+\tau)R_{XX}(u-t-\tau)$$
Therefore
$$E\left[\hat{R}_{XX}^2(\tau)\right] = R_{XX}^2(\tau) + \frac{1}{(2t_0-|\tau|)^2}\iint\left[R_{XX}^2(u-t) + R_{XX}(u-t+\tau)R_{XX}(u-t-\tau)\right]dt\,du$$
so that
$$\mathrm{Var}\left[\hat{R}_{XX}(\tau)\right] = \frac{1}{(2t_0-|\tau|)^2}\iint\left[R_{XX}^2(u-t) + R_{XX}(u-t+\tau)R_{XX}(u-t-\tau)\right]dt\,du$$
$$= \frac{1}{(2t_0-|\tau|)^2}\int_{-2t_0+|\tau|}^{2t_0-|\tau|}\left[R_{XX}^2(v) + R_{XX}(v+\tau)R_{XX}(v-\tau)\right](2t_0-|\tau|-|v|)\,dv$$
Exercise 10.27
Starting with the result of Exercise 10.26,
$$\mathrm{Var}\left[\hat{R}_{XX}(\tau)\right] = \frac{1}{(2t_0-|\tau|)^2}\int_{-2t_0+|\tau|}^{2t_0-|\tau|}(2t_0-|\tau|-|v|)\left[R_{XX}^2(v) + R_{XX}(v+\tau)R_{XX}(v-\tau)\right]dv$$
As $|\tau|\to 2t_0$ the limits of integration approach 0 and therefore the terms in the integrand involving $R_{XX}(\cdot)$ can be treated as constants and hence
$$\lim_{|\tau|\to 2t_0}\mathrm{Var}\left[\hat{R}_{XX}(\tau)\right] = \lim_{|\tau|\to 2t_0}\frac{R_{XX}^2(0) + R_{XX}(\tau)R_{XX}(-\tau)}{(2t_0-|\tau|)^2}\int_{-2t_0+|\tau|}^{2t_0-|\tau|}(2t_0-|\tau|-|v|)\,dv$$
$$= \lim_{|\tau|\to 2t_0}\left[R_{XX}^2(0) + R_{XX}^2(\tau)\right]\frac{(2t_0-|\tau|)^2}{(2t_0-|\tau|)^2} = R_{XX}^2(0) + R_{XX}^2(\tau) > 0,$$
so the variance of the estimate does not vanish as $|\tau|\to 2t_0$.
Exercise 10.28
Using the result of Theorem 10.2,
$$\hat{S}_{XX}^{(p)}(f) = F\left[\hat{R}_{XX}^{(tri)}(\tau)\right]$$
Therefore,
$$E\left[\hat{S}_{XX}^{(p)}(f)\right] = E\left[F\left[\hat{R}_{XX}^{(tri)}(\tau)\right]\right] = F\left[E\left[\hat{R}_{XX}^{(tri)}(\tau)\right]\right].$$
Note that
$$E\left[\hat{R}_{XX}^{(tri)}(\tau)\right] = \frac{1}{2t_0}\int_{-t_0+\frac{|\tau|}{2}}^{t_0-\frac{|\tau|}{2}}E\left[X\!\left(t-\frac{\tau}{2}\right)X\!\left(t+\frac{\tau}{2}\right)\right]dt = \frac{1}{2t_0}\int_{-t_0+\frac{|\tau|}{2}}^{t_0-\frac{|\tau|}{2}}R_{XX}(\tau)\,dt = \frac{2t_0-|\tau|}{2t_0}R_{XX}(\tau)$$
Therefore,
$$E\left[\hat{S}_{XX}^{(p)}(f)\right] = F\left[\left(1-\frac{|\tau|}{2t_0}\right)R_{XX}(\tau)\right] = F\left[\mathrm{tri}\!\left(\frac{\tau}{2t_0}\right)R_{XX}(\tau)\right] = 2t_0\,\mathrm{sinc}^2(2ft_0)*S_{XX}(f)$$
Since $E\left[\hat{S}_{XX}^{(p)}(f)\right] \neq S_{XX}(f)$, the periodogram is not unbiased.
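The same kind of bias appears in discrete time, where it is easy to check by simulation: for a length-$N$ record the mean periodogram equals the transform of the triangle-windowed autocorrelation rather than the true PSD. The sketch below uses the AR(1) process of Exercise 10.2 with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(4)
p, sigma_W, N, trials = 0.7, 1.0, 32, 40_000   # illustrative values

# Many independent length-N records of the AR(1) process of Exercise 10.2
W = sigma_W * rng.standard_normal((trials, N + 100))
X = np.zeros_like(W)
for n in range(1, W.shape[1]):
    X[:, n] = p * X[:, n - 1] + W[:, n]
X = X[:, 100:]                                  # discard the start-up transient

f = np.fft.rfftfreq(N)                          # frequencies in cycles per sample
mean_pgram = (np.abs(np.fft.rfft(X, axis=1))**2 / N).mean(axis=0)

k = np.arange(-(N - 1), N)
R = sigma_W**2 * p**np.abs(k) / (1 - p**2)      # true autocorrelation
tri = 1 - np.abs(k) / N                          # triangular (Bartlett) window
E_pgram = (tri * R * np.exp(-1j * 2 * np.pi * np.outer(f, k))).sum(axis=1).real

S_true = sigma_W**2 / (1 - 2 * p * np.cos(2 * np.pi * f) + p**2)
print(np.max(np.abs(mean_pgram - E_pgram)))   # small: matches the windowed transform
print(np.max(np.abs(mean_pgram - S_true)))    # clearly larger: the bias relative to the true PSD
```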
Exercise 10.29
If $\tilde{w}(f) = \frac{1}{f_\Delta}\mathrm{rect}\!\left(\frac{f}{f_\Delta}\right)$, then $w(\tau) = \mathrm{sinc}(f_\Delta\tau)$. Using the result of Equation 10.36, this smoothed periodogram is equivalent to a windowed correlation estimate with a window of
$$w(\tau)\,\mathrm{tri}\!\left(\frac{\tau}{2t_0}\right) = \mathrm{tri}\!\left(\frac{\tau}{2t_0}\right)\mathrm{sinc}(f_\Delta\tau).$$
Exercise 10.30
$$S_{N,N}(f) = \frac{kt_k}{2}\,\frac{z}{e^z-1}, \quad\text{where } z = \frac{h|f|}{kt_k}$$
$$\frac{z}{e^z-1} = \frac{z}{(1+z+z^2/2+z^3/3!+\cdots)-1} = \frac{1}{1+z/2+z^2/6+\cdots}.$$
As $|f|\to 0$, $z\to 0$. Clearly, as $z\to 0$, $\frac{z}{e^z-1}\to 1$ so that
$$\lim_{|f|\to 0}S_{N,N}(f) = \frac{kt_k}{2} = \frac{N_o}{2}.$$
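A one-line numerical look at the limiting behaviour of $z/(e^z-1)$ (arbitrary sample values of $z$):

```python
import numpy as np

# z/(e^z - 1) approaches 1 as z -> 0, so S_NN(f) flattens out at k*t_k/2 = N_o/2.
# expm1 keeps the evaluation accurate for very small z.
z = np.array([1e-6, 1e-3, 0.1, 1.0, 5.0])
print(z / np.expm1(z))     # -> [~1.0, ~0.9995, ~0.951, ~0.582, ~0.034]
```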
Exercise 10.31
Noting that $V = V_1 + V_2$,
$$V_{rms}^2 = E[V^2] = E[(V_1+V_2)^2] = E[V_1^2] + E[V_2^2] + 2E[V_1V_2] = V_{1,rms}^2 + V_{2,rms}^2$$
$$\Rightarrow\quad V_{rms} = \sqrt{V_{1,rms}^2 + V_{2,rms}^2}$$
$$4kt_e(r_1+r_2)\Delta f = \sqrt{(4kt_1r_1\Delta f)^2 + (4kt_2r_2\Delta f)^2} = 4k\Delta f\sqrt{(t_1r_1)^2 + (t_2r_2)^2}$$
$$t_e = \frac{\sqrt{(t_1r_1)^2 + (t_2r_2)^2}}{r_1+r_2}$$
Note if $t_1 = t_2 = t_o$, then the effective temperature is
$$t_e = t_o\,\frac{\sqrt{r_1^2+r_2^2}}{r_1+r_2}$$
which in general is not equal to the physical temperature (unless r1 = r2 ).
Exercise 10.32
In the case of parallel resistors,
$$V = \left(\frac{V_1}{r_1} + \frac{V_2}{r_2}\right)\frac{r_1r_2}{r_1+r_2}.$$
Let $r$ be the parallel combination of the resistances (i.e., $r = \frac{r_1r_2}{r_1+r_2}$). Then
$$V_{rms}^2 = E[V^2] = \left(\frac{E[V_1^2]}{r_1^2} + \frac{E[V_2^2]}{r_2^2}\right)r^2 = \left(\frac{V_{1,rms}^2}{r_1^2} + \frac{V_{2,rms}^2}{r_2^2}\right)r^2$$
$$(4kt_er\Delta f)^2 = \left((4kt_1\Delta f)^2 + (4kt_2\Delta f)^2\right)r^2 \quad\Rightarrow\quad t_e^2 = t_1^2 + t_2^2 \quad\Rightarrow\quad t_e = \sqrt{t_1^2+t_2^2}.$$
In this case, the effective temperature is the same as the physical temperature
if t1 = t2 .