
ECE531 Homework Assignment Number 3

Solution

Make sure your reasoning and work are clear to receive full credit for each problem.

1. 4 points. Poor textbook Chapter II, Problem 2 (b).


Solution: With uniform costs, the least-favorable prior will be interior to (0, 1), so we should
use the equalizer rule. From part (a) of this problem, we know that the conditional risks are
R0(δπ0) = ∫_0^τ′ (2/3)(y + 1) dy = (2τ′/3)(τ′/2 + 1),

and

R1(δπ0) = ∫_τ′^1 dy = 1 − τ′,
since every Bayes decision rule decides H1 on the interval 0 ≤ y < τ′ for some threshold τ′ ∈ [0, 1] and decides H0 otherwise. Using the equalizer rule, a minimax threshold τ′_lf is the solution to the equation

(2τ′_lf/3)(τ′_lf/2 + 1) = 1 − τ′_lf,

which reduces to (τ′_lf)² + 5τ′_lf − 3 = 0 and yields τ′_lf = (√37 − 5)/2. Hence
 √
 1 if 0 ≤ y√< ( 37 − 5)/2
ρmm (y) = 0/1 if y√= ( 37 − 5)/2
0 if ( 37 − 5)/2 < y ≤ 1.

The minimax risk is the value of the equalized conditional risk; i.e.,

R0(ρmm) = R1(ρmm) = 1 − τ′_lf = (7 − √37)/2.
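As a quick numerical sanity check (a sketch, not part of the required solution), the equalizer equation can also be solved with Matlab's built-in fzero using the conditional-risk expressions derived above.

% Sketch: numerically equalize R0 and R1 for Problem 1
f = @(t) (2*t/3).*(t/2 + 1) - (1 - t);   % R0(t) - R1(t)
tau_lf = fzero(f, [0 1])                 % ~0.5414 = (sqrt(37)-5)/2
risk   = 1 - tau_lf                      % ~0.4586 = (7-sqrt(37))/2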

2. 4 points. Poor textbook Chapter II, Problem 4 (b).


Solution: The minimax rule can be found by equating the conditional risks
R0(ρ) = ∫_{Γ1} p0(y) dy

and

R1(ρ) = ∫_{Y\Γ1} p1(y) dy

where Γ1 ⊆ Y is the critical region of observations in which we decide H1 . Recall from HW2
that there are three cases for the integration region depending on the prior. It should be
clear that we can’t equalize the risks in the first case. In the second case, i.e. β ≤ π0 ≤ α

corresponding to a threshold 0 ≤ τ ′ ≤ 1, the equalizer rule requires us to find the value of
τ ′ ∈ [0, 1] that solves
∫_{1−√τ′}^{1+√τ′} e^(−y) dy = √(2/π) [ ∫_0^{1−√τ′} e^(−y²/2) dy + ∫_{1+√τ′}^∞ e^(−y²/2) dy ],

which simplifies to

exp(−(1 − √τ′)) − exp(−(1 + √τ′)) = 2[0.5 − Q(1 − √τ′) + Q(1 + √τ′)].

There is no closed-form analytical solution here, but you can find a numerical solution using,
for example, Matlab. Here is the Matlab command I used.

x = fsolve(@(x) exp(-(1-sqrt(x)))-exp(-(1+sqrt(x)))-2*(0.5-Q(1-sqrt(x))+Q(1+(sqrt(x)))),[0 1])
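Note that Q(·) in the command above is not a built-in Matlab function. A self-contained sketch, assuming Q(x) = 0.5·erfc(x/√2) (the standard Gaussian tail function) and using fzero with a bracketing interval instead of fsolve, would be:

% Sketch: self-contained numerical solution for tau'
Q = @(x) 0.5*erfc(x/sqrt(2));                      % Gaussian tail function
g = @(t) exp(-(1-sqrt(t))) - exp(-(1+sqrt(t))) ...
         - 2*(0.5 - Q(1-sqrt(t)) + Q(1+sqrt(t)));  % R0(t) - R1(t)
tau = fzero(g, [0 1])                              % ~0.3291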

This yields a result τ′ ≈ 0.3291. Since

τ′ = −2 ln( √(π/(2e)) · π0/(1 − π0) ),

we can solve for πlf ≈ 0.5274. Note that β ≤ πlf ≤ α, hence we can equalize the risks
in the second case. We don’t need to look at the third case.
The minimax decision rule is then simply the Bayes decision rule at the prior π0 = πlf . The
minimax risk can easily be computed from either of the equal conditional risks; i.e.,
V(πlf) = R0(ρmm) = R1(ρmm) = exp(−(1 − √τ′)) − exp(−(1 + √τ′)) ≈ 0.4456.
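These two numbers are easy to check once τ′ is known; a short Matlab sketch, continuing from the numerically obtained τ′ above, is:

% Sketch: recover pi_lf and the minimax risk from tau'
tau   = 0.3291;                                     % threshold from above
ratio = exp(-tau/2)/sqrt(pi/(2*exp(1)));            % pi0/(1-pi0) from the relation above
pi_lf = ratio/(1 + ratio)                           % ~0.5274
risk  = exp(-(1-sqrt(tau))) - exp(-(1+sqrt(tau)))   % ~0.4456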

3. 4 points. Poor textbook Chapter II, Problem 6 (b).


Solution: Because of the symmetry of this problem with uniform costs, we can guess that
π0 = 0.5 is the least-favorable prior. When π0 = 0.5, the Bayes decision rule δB simply
decides H0 if y < 0 and decides H1 if y ≥ 0. To confirm that this is indeed the least favorable
prior, we can check that this guess satisfies the equalizer rule:
R0(δB) = ∫_0^∞ 1/(π[1 + (y + s)²]) dy = ∫_{−∞}^0 1/(π[1 + (y − s)²]) dy = R1(δB),

which it does. Hence the minimax decision rule is simply the Bayes decision rule at the prior π0 = 0.5, and the resulting risk is r(ρmm) = 1/2 − tan⁻¹(s)/π (the same answer that we had in HW2).
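For a concrete check of the equalized risks, the two Cauchy integrals can be evaluated numerically; the sketch below picks s = 1 arbitrarily for illustration.

% Sketch: verify the equalized conditional risks for a sample s
s  = 1;
R0 = integral(@(y) 1./(pi*(1 + (y + s).^2)), 0, Inf);    % P(decide H1 | H0)
R1 = integral(@(y) 1./(pi*(1 + (y - s).^2)), -Inf, 0);   % P(decide H0 | H1)
rmm = 0.5 - atan(s)/pi;                                  % closed-form risk
% R0, R1 and rmm should all be ~0.25 for s = 1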

4. 4 points. Poor textbook Chapter II, Problem 9 (a).


Solution: Here the likelihood ratio is given by
L(y) = p1(y)/p0(y) = (2 − |y|)/(4(1 − |y|)),  if |y| ≤ 1
                     ∞,                       if 1 < |y| ≤ 2.

Hence {y : 1 < |y| ≤ 2} is always included in the critical region Γ1. The intuition is that if the true hypothesis is H0, then we never observe |y| > 1. Let’s first focus on the observation region
|y| ≤ 1. Note that when |y| ≤ 1, we can say

L(y) = (2 − |y|)/(4(1 − |y|)) ≥ 1/2,

since 2 − |y| ≥ 2(1 − |y|) for every y.
Hence we consider the following regions of π0 .
Region (i): 0 ≤ π0 < 1/2. In this region, τ = π0/(2(1 − π0)) < 1/2, hence L(y) is always larger than the threshold τ and we always decide H1. Hence, we can write

δB(y) ≡ 1,  |y| ≤ 1.

Region (ii): 1/2 ≤ π0 ≤ 1. In this region, τ = π0/(2(1 − π0)) ≥ 1/2. We don’t always decide H1. A little bit of algebra leads to the decision rule

δB(y) = 1,  if (4π0 − 2)/(3π0 − 1) ≤ |y| ≤ 1
        0,  otherwise.

So putting these results together with the earlier observation that we should always decide H1 when 1 < |y| ≤ 2, we can write
• Region (i) 0 ≤ π0 < 1/2 overall decision rule and risk:

δB (y) ≡ 1, |y| ≤ 2

V (π0 ) = C10 π0
• Region (ii) 1/2 ≤ π0 ≤ 1 overall decision rule and risk:

δB(y) = 1,  if (4π0 − 2)/(3π0 − 1) ≤ |y| ≤ 2
        0,  if 0 ≤ |y| < (4π0 − 2)/(3π0 − 1)

V (π0 ) = π0 R0 + (1 − π0 )R1 .
Defining γ := (4π0 − 2)/(3π0 − 1) for notational convenience and noting that γ is non-negative in region (ii), the conditional risk R0 can be computed as

R0 = C10 ∫_{Γ1} p0(y) dy = 2C10 ∫_γ^1 (1 − y) dy = C10 (1 − 2γ + γ²).

Similarly, we can derive the risk R1 as


R1 = C01 ∫_{Y\Γ1} p1(y) dy = 2C01 ∫_0^γ (2 − y)/4 dy = C10 (2γ − γ²/2),

where we have used the fact that C01 = 2C10 in the last equality.

All the pieces are in place now to find the minimax decision rule. Since V(π0) = C10 π0 in region (i), we know that we can equalize the risks only in region (ii). Applying the equalizer rule in region (ii), we just need to solve 1 − 2γ + γ² = 2γ − γ²/2. The solution in [0, 1] of this equation is

γ = (4 − √10)/3,
which, upon inverting γ = (4π0 − 2)/(3π0 − 1) to obtain π0 = (2 − γ)/(4 − 3γ), yields the desired least favorable prior

πlf = (5 + √10)/15 ≈ 0.5442,
which is in region (ii). The minimax decision rule is then
ρmm(y) = δB,πlf(y) = 1,  if (4 − √10)/3 ≤ |y| ≤ 2
                     0,  otherwise,

and the risk can be calculated as

R0 = R1 = C10 (1 − 2γ + γ²) ≈ 0.5195 C10 = 0.2597 C01.
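The closed-form answers above are easy to verify numerically; a minimal Matlab sketch (with the risks expressed in units of C10, so that C01 = 2) is:

% Sketch: check the Problem 4 equalizer solution (risks in units of C10)
gamma = (4 - sqrt(10))/3;              % ~0.2792
pi_lf = (2 - gamma)/(4 - 3*gamma)      % = (5 + sqrt(10))/15 ~ 0.5442
R0    = 1 - 2*gamma + gamma^2          % ~0.5195
R1    = 2*gamma - gamma^2/2            % ~0.5195 (equal, as required)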

5. 4 points. Poor textbook Chapter II, Problem 11.


Solution: For the Bayes part of this problem, we are allowed to assume equally likely priors.
We can write the likelihood ratio as
L(y) = p1(y)/p0(y) = e^(y−1/2) ≥ π0(C10 − C00)/(π1(C01 − C11)) = 1/N,

which implies a critical region Γ1 where we decide H1 as

Γ1 = {y : y ≥ τN }

where τN := 1/2 + ln(1/N). The Bayes decision rule is simply to decide H0 if y ∉ Γ1 and to decide H1 if y ∈ Γ1. The Bayes risk of this decision rule is then

r(π0 = 0.5) = 0.5 · R0 + 0.5 · R1


= 0.5 · ∫_{τN}^∞ p0(y) dy + 0.5 · N ∫_{−∞}^{τN} p1(y) dy
= 0.5 · (1 − Φ(τN)) + 0.5 · N Φ(τN − 1),

where Φ(x) := ∫_{−∞}^x (1/√(2π)) e^(−t²/2) dt = 1 − Q(x) is the CDF of a zero-mean unit-variance Gaussian
random variable. It should be clear that r(π0 = 0.5) converges to 0.5 as N → ∞: since τN → −∞, the first term approaches 0.5 (because Φ(τN) → 0), and the second term vanishes (because Φ(τN − 1) → 0 faster than 1/N). Hence, when N gets large enough, the Bayes decision rule under equal priors will almost always decide H1 to avoid the huge cost of being wrong by deciding H0 when the true hypothesis is H1.
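This convergence is easy to see numerically; the sketch below implements Φ via erfc and evaluates r(π0 = 0.5) for a few arbitrarily chosen values of N.

% Sketch: equal-prior Bayes risk r(pi0 = 0.5) as N grows
Phi  = @(x) 0.5*erfc(-x/sqrt(2));          % standard Gaussian CDF
N    = [1 10 1e2 1e4 1e6];
tauN = 0.5 + log(1./N);                    % decision thresholds
r    = 0.5*(1 - Phi(tauN)) + 0.5*N.*Phi(tauN - 1)
% r -> 0.5 as N -> infinity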
To investigate the minimax decision rule, we need to consider general priors. In this case we
can write the likelihood ratio as
L(y) = p1(y)/p0(y) = e^(y−1/2) ≥ π0(C10 − C00)/(π1(C01 − C11)) = π0/((1 − π0)N),

which implies a critical region


Γ1 = {y : y ≥ τN (π0 )}
where τN(π0) := 1/2 + ln(π0/((1 − π0)N)).
For fixed N , applying the equalizer rule to the conditional risks allows us to implicitly specify
a least favorable prior πlf as the solution to

1 − Φ(τN (πlf )) = N Φ(τN (πlf ) − 1)


which can be rearranged as

Φ(τN (πlf )) + N Φ(τN (πlf ) − 1) = 1.

Recall that CDFs are monotonically increasing with range on [0, 1]. Hence, given any N ≥ 0,
we can always find x ∈ R such that Φ(x) + N Φ(x − 1) = 1. Also note that, since a CDF can’t
be negative,
Φ(τN(πlf) − 1) ≤ 1/N.
Hence, as N → ∞, Φ(τN (πlf ) − 1) must go to zero. This implies that τN (πlf ) → −∞ as
N → ∞. Hence, the critical region Γ1 of the minimax decision rule must converge to R as
N → ∞, i.e. we always decide H1 .
What does the least favorable prior do as N → ∞? This is difficult to determine analyt-
ically, but the following Matlab code numerically determines the optimum threshold and
corresponding prior.
% ECE531 HW3 problem 5
% DRB 12-Feb-2009

% This part of the code determines the
% decision threshold as a function of N
Ntest = logspace(0,6,100);
tau = zeros(1,100);        % this is the threshold tau_N(pi_0)
fval = zeros(1,100);
exitflag = zeros(1,100);
i1 = 0;
for N = Ntest,
  i1 = i1+1;
  [tau(i1),fval(i1),exitflag(i1)] = fsolve(@(x) Phi(x)+N*Phi(x-1)-1, 0);
end
figure(1)
semilogx(Ntest,tau);
xlabel('N');
ylabel('\tau_N(\pi_{lf})')

% This part of the code determines the
% prior that corresponds to the threshold tau_N(pi_0)
X = Ntest.*exp(tau-0.5);   % pi_0/(1-pi_0) = N*exp(tau_N - 1/2)
pi0 = X./(1+X);
figure(2)
semilogx(Ntest,pi0);
xlabel('N');
ylabel('\pi_{lf}')

where we used the function


% Phi function
% Phi = 1 - Q

function y = Phi(x)

x = erfc(x/sqrt(2))/2;
y = 1 - x;
[Figure 1: τN(πlf) versus N for N from 10^0 to 10^6 (logarithmic N axis).]

[Figure 2: πlf versus N for N from 10^0 to 10^6 (logarithmic N axis).]

Note that the threshold τN(πlf) goes to −∞ as we expected, but it does so slowly (much more slowly than the rate at which N goes to infinity). This means that πlf → 1 as N → ∞. Intuitively, when N is large the least favorable prior assigns H1 a much smaller probability than H0. The Bayes decision rule is then in the worst possible situation: the prior says H0 will occur most of the time, so the rule wants to decide H0, but the penalty for deciding H0 when the true hypothesis is H1 is very high. So the Bayes decision rule has to guess H1 often enough to avoid these massive penalties, and in doing so it incurs a unit cost each time it decides H1 when the true hypothesis is H0.
