
ECE 830 Fall 2010 Statistical Signal Processing
Instructor: R. Nowak
Scribe: A. Pasha Hosseinbor

Lecture 6: Detection Theory

1 Neyman-Pearson Lemma

Consider two densities, where $H_0 : X \sim p_0(x)$ and $H_1 : X \sim p_1(x)$. To maximize the probability of detection (true positive) $P_D$ for a given probability of false alarm (false positive, or type I error) $P_{FA} = \alpha$, decide according to

$$\Lambda(x) = \frac{p(x|H_1)}{p(x|H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{P_0(c_{10} - c_{00})}{P_1(c_{01} - c_{11})} \tag{1}$$

The Neyman-Pearson theorem is a constrained optimization problem, and hence one way to prove it is via the method of Lagrange multipliers.

1.1 Method of Lagrange Multipliers

In the method of Lagrange multipliers, the problem at hand is of the form: maximize $f(x)$ subject to $g(x) \le c$.

Theorem: Let $\lambda$ be a fixed non-negative number, and let $x_0(\lambda)$ be a maximizer of

$$M(x, \lambda) = f(x) - \lambda g(x). \tag{2}$$

Then $x_0(\lambda)$ maximizes $f(x)$ over all $x$ such that $g(x) \le g(x_0(\lambda))$.

Proof: We assume $\lambda \ge 0$ and that $x_0 = x_0(\lambda)$ satisfies

$$f(x_0) - \lambda g(x_0) \ge f(x) - \lambda g(x). \tag{3}$$

Then

$$f(x_0) \ge f(x) - \lambda \left( g(x) - g(x_0) \right). \tag{4}$$

Now let $S = \{x : g(x) \le g(x_0)\}$. Thus, for all $x \in S$, $g(x) \le g(x_0)$. Since $\lambda$ is non-negative, we conclude

$$f(x_0) \ge f(x) \quad \forall x \in S. \tag{5}$$
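To make the theorem concrete, here is a minimal numerical sketch (not from the lecture; the functions $f$ and $g$, the multiplier $\lambda = 2$, and the grid are illustrative assumptions). Maximizing $M(x, \lambda)$ over a grid yields a point $x_0(\lambda)$ that also solves the constrained problem with $c = g(x_0(\lambda))$:

```python
import numpy as np

# Illustrative example (an assumption, not from the lecture):
# maximize f(x) = -(x - 3)^2 subject to g(x) = x <= c.
f = lambda x: -(x - 3.0) ** 2
g = lambda x: x

x_grid = np.linspace(-5.0, 5.0, 100001)

lam = 2.0                                 # fixed non-negative multiplier
M = f(x_grid) - lam * g(x_grid)           # Lagrangian M(x, lambda), Eq. (2)
x0 = x_grid[np.argmax(M)]                 # x0(lambda), maximizer of M

# The theorem says x0 also maximizes f over S = {x : g(x) <= g(x0)}.
S = x_grid[g(x_grid) <= g(x0)]
x_star = S[np.argmax(f(S))]

print(f"x0(lambda) = {x0:.3f}, constrained maximizer = {x_star:.3f}")
# Both are 2.000: M'(x) = -2(x - 3) - 2 = 0 at x = 2, and f increases
# on (-inf, 3], so the max of f over {x <= 2} is attained at x = 2.
```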


1.2 Proof of Neyman-Pearson Theorem

The problem at hand is: maximize $P_D$ subject to $P_{FA} \le \alpha$. Writing both probabilities as integrals over the region $R_1$ where we decide $H_1$, the Lagrangian is

$$M(R_1, \lambda) = P_D - \lambda P_{FA} = \int_{R_1} p(x|H_1)\,dx - \lambda \int_{R_1} p(x|H_0)\,dx = \int_{R_1} \left[ p(x|H_1) - \lambda\, p(x|H_0) \right] dx, \tag{6}$$

which, for fixed $\lambda \ge 0$, is maximized by including in the decision region exactly those points where the integrand is positive, i.e. $R_1(\lambda) = \{x : p(x|H_1) > \lambda\, p(x|H_0)\}$. The likelihood ratio is

$$\Lambda(x) = \frac{p(x|H_1)}{p(x|H_0)}. \tag{7}$$

Now determine $\lambda = \eta$ as the value that satisfies

$$P_{FA}(\eta) = \alpha. \tag{8}$$

Thus,

$$\int_{\{x \,:\, \Lambda(x) > \eta\}} p(x|H_0)\,dx = \alpha. \tag{9}$$
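In practice, when Eq. (9) has no closed form, $\eta$ can be found numerically as the $(1-\alpha)$-quantile of $\Lambda(x)$ under $H_0$. A minimal Monte Carlo sketch (the Gaussian pair $H_0 : \mathcal{N}(0,1)$ vs. $H_1 : \mathcal{N}(1,1)$ is an illustrative assumption):

```python
import numpy as np
from scipy import stats

# Minimal sketch: pick eta so that P(Lambda(x) > eta | H0) = alpha,
# estimated by Monte Carlo for illustrative densities
# H0: x ~ N(0,1) and H1: x ~ N(1,1).
rng = np.random.default_rng(0)
alpha = 0.1

x0 = rng.normal(0.0, 1.0, size=1_000_000)                    # samples under H0
lam = stats.norm.pdf(x0, loc=1) / stats.norm.pdf(x0, loc=0)  # Lambda(x)

eta = np.quantile(lam, 1.0 - alpha)  # empirical (1 - alpha)-quantile
print(f"eta ~ {eta:.3f}")
print(f"P_FA ~ {np.mean(lam > eta):.4f}")  # close to alpha = 0.1
```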

1.3 Example: DC Level in Additive White Gaussian Noise (AWGN)

Consider independent random variables $x_i$ for $i = 1, \ldots, n$:

$$H_0 : x_i \sim \mathcal{N}(0, \sigma^2) \qquad H_1 : x_i \sim \mathcal{N}(\mu, \sigma^2) \tag{10}$$

According to the likelihood ratio test (LRT),

$$\Lambda(x) = \frac{p(x|H_1)}{p(x|H_0)} = \frac{\frac{1}{(2\pi\sigma^2)^{n/2}}\, e^{-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \mu)^2}}{\frac{1}{(2\pi\sigma^2)^{n/2}}\, e^{-\frac{1}{2\sigma^2} \sum_{i=1}^n x_i^2}} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta \tag{11}$$

Let's take the natural logarithm of the likelihood ratio:

$$\ln \Lambda(x) = \frac{1}{2\sigma^2} \left( 2\mu \sum_{i=1}^n x_i - n\mu^2 \right) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln \eta \tag{12}$$

Assuming $\mu > 0$,

$$\sum_{i=1}^n x_i \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{\sigma^2}{\mu} \ln \eta + \frac{n\mu}{2} \triangleq \gamma, \tag{13}$$

where $\gamma$ is the threshold. Note that $y \triangleq \sum_{i=1}^n x_i$ is simply the sufficient statistic for the mean of a normal distribution with unknown mean. Let's rewrite our hypothesis test in terms of the sufficient statistic:


$$H_0 : y \sim \mathcal{N}(0, n\sigma^2) \qquad H_1 : y \sim \mathcal{N}(n\mu, n\sigma^2) \tag{14}$$

Let's now determine $P_{FA}$ and $P_D$:

$$P_{FA} = P(\text{pick } H_1 \mid H_0) = \int_{\gamma}^{\infty} \frac{1}{\sqrt{2\pi n \sigma^2}}\, e^{-t^2 / 2n\sigma^2}\, dt = Q\!\left(\frac{\gamma}{\sqrt{n\sigma^2}}\right) \tag{15}$$

$$P_D = P(\text{pick } H_1 \mid H_1) = Q\!\left(\frac{\gamma - n\mu}{\sqrt{n\sigma^2}}\right) \tag{16}$$

Here $Q$ is the tail probability (complementary CDF) of the standard normal distribution. Noting that $\gamma = \sqrt{n\sigma^2}\, Q^{-1}(P_{FA})$, we can rewrite $P_D$ as

$$P_D = Q\!\left(Q^{-1}(P_{FA}) - \sqrt{\frac{n\mu^2}{\sigma^2}}\right), \tag{17}$$

where $\frac{n\mu^2}{\sigma^2}$ is simply the signal-to-noise ratio (SNR).
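A minimal numerical sketch of Eqs. (15)-(17), using scipy's standard normal survival function for $Q$; the values of $n$, $\mu$, $\sigma^2$, and $\alpha$ below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Illustrative parameters (assumptions, not from the lecture).
n, mu, sigma2, alpha = 25, 0.5, 1.0, 0.05

Q = stats.norm.sf        # Q(z): standard normal tail probability
Qinv = stats.norm.isf    # Q^{-1}

# Eq. (15) inverted: threshold on y = sum(x_i) achieving P_FA = alpha.
gamma = np.sqrt(n * sigma2) * Qinv(alpha)

# Eq. (16) and Eq. (17) give the same probability of detection.
pd_direct = Q((gamma - n * mu) / np.sqrt(n * sigma2))
pd_snr = Q(Qinv(alpha) - np.sqrt(n * mu**2 / sigma2))
print(f"gamma = {gamma:.3f}, P_D = {pd_direct:.4f} (= {pd_snr:.4f})")

# Monte Carlo check of the detector sum(x_i) >< gamma under H1.
rng = np.random.default_rng(0)
x1 = rng.normal(mu, np.sqrt(sigma2), size=(200_000, n))
print(f"empirical P_D ~ {np.mean(x1.sum(axis=1) > gamma):.4f}")
```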

1.4 Example: Change in Variance

Consider independent random variables $x_i$ for $i = 1, \ldots, n$:

$$H_0 : x_i \sim \mathcal{N}(0, \sigma_0^2) \qquad H_1 : x_i \sim \mathcal{N}(0, \sigma_1^2) \tag{18}$$

Assume $\sigma_1^2 > \sigma_0^2$. Let's apply the LRT, taking the natural log of both sides:

$$n \ln\!\left(\frac{\sigma_0}{\sigma_1}\right) + \frac{1}{2}\left(\frac{1}{\sigma_0^2} - \frac{1}{\sigma_1^2}\right) \sum_{i=1}^n x_i^2 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln \eta \tag{19}$$

After doing some algebra, we obtain

$$\sum_{i=1}^n x_i^2 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; 2\left(\frac{\sigma_1^2 \sigma_0^2}{\sigma_1^2 - \sigma_0^2}\right)\left(\ln \eta + n \ln\!\left(\frac{\sigma_1}{\sigma_0}\right)\right) \triangleq \gamma \tag{20}$$

Note that $y \triangleq \sum_{i=1}^n x_i^2$ is simply the sufficient statistic for the variance of a normal distribution with unknown variance. Now recall that if $x_1, \ldots, x_n$ are iid $\mathcal{N}(0,1)$, then $\sum_{i=1}^n x_i^2 \sim \chi_n^2$ (chi-square with $n$ degrees of freedom). Let's rewrite our null hypothesis using the sufficient statistic:

$$H_0 : y = \sum_{i=1}^n x_i^2, \qquad \frac{y}{\sigma_0^2} \sim \chi_n^2 \tag{21}$$

Then, the probability of false alarm is


$$P_{FA} = P(\text{pick } H_1 \mid H_0) = \int_{\gamma}^{\infty} p(y|H_0)\, dy = P(y > \gamma) = P\!\left(\frac{y}{\sigma_0^2} > \frac{\gamma}{\sigma_0^2}\right) = P\!\left(\chi_n^2 > \frac{\gamma}{\sigma_0^2}\right) \tag{22}$$

We have to compute the threshold numerically. For example, if we have $n = 20$ realizations of $x_i$ and $P_{FA} = 0.01$, then we can numerically compute the threshold to be $\gamma = 37.57\,\sigma_0^2$.
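This threshold comes from the inverse survival function of the $\chi_n^2$ distribution; a minimal check (assuming scipy is available):

```python
from scipy import stats

# Solve P(chi2_n > gamma / sigma0^2) = P_FA for the detector threshold.
n, p_fa = 20, 0.01
gamma_over_s0sq = stats.chi2.isf(p_fa, df=n)  # inverse survival function
print(f"gamma = {gamma_over_s0sq:.2f} * sigma0^2")  # gamma = 37.57 * sigma0^2
```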

1.5 Neyman-Pearson Lemma: A Second Look

Here is an alternate proof of the Neyman-Pearson lemma. Consider a binary hypothesis test and LRT:

$$\Lambda(x) = \frac{p_1(x)}{p_0(x)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta \tag{23}$$

$$P_{FA} = P(\Lambda(x) \ge \eta \mid H_0) = \alpha \tag{24}$$

There does not exist another test with $P_{FA} = \alpha$ and a detection probability larger than $P(\Lambda(x) \ge \eta \mid H_1)$. That is, the LRT is the most powerful test with $P_{FA} = \alpha$.

Proof: The region where the LRT decides $H_1$ is

$$R_{NP} = \left\{ x : \frac{p_1(x)}{p_0(x)} \ge \eta \right\} \tag{25}$$

Let $R_T$ denote the region where some other test decides $H_1$. Define, for any region $R$,

$$P_i(R) = \int_R p_i(x)\, dx, \tag{26}$$

which is simply the probability of $x \in R$ under hypothesis $H_i$. By assumption both tests have $P_{FA} = \alpha$:

$$\alpha = P_0(R_{NP}) = P_0(R_T). \tag{27}$$

Next observe that

$$P_i(R_{NP}) = P_i(R_{NP} \cap R_T) + P_i(R_{NP} \cap R_T^c) \tag{28}$$

$$P_i(R_T) = P_i(R_{NP} \cap R_T) + P_i(R_{NP}^c \cap R_T) \tag{29}$$

Therefore, from Eq. (27) we conclude that

$$P_0(R_{NP} \cap R_T^c) = P_0(R_{NP}^c \cap R_T) \tag{30}$$

Now, we want to show

$$P_1(R_{NP}) \ge P_1(R_T), \tag{31}$$

which from Eqs. (28)-(29) holds if

$$P_1(R_{NP} \cap R_T^c) \ge P_1(R_{NP}^c \cap R_T). \tag{32}$$

Note that $p_1(x) \ge \eta\, p_0(x)$ on $R_{NP}$ and $p_1(x) < \eta\, p_0(x)$ on $R_{NP}^c$, so

$$P_1(R_{NP} \cap R_T^c) = \int_{R_{NP} \cap R_T^c} p_1(x)\, dx \ge \eta \int_{R_{NP} \cap R_T^c} p_0(x)\, dx = \eta\, P_0(R_{NP} \cap R_T^c) = \eta\, P_0(R_{NP}^c \cap R_T) = \eta \int_{R_{NP}^c \cap R_T} p_0(x)\, dx \ge \int_{R_{NP}^c \cap R_T} p_1(x)\, dx = P_1(R_{NP}^c \cap R_T). \tag{33}$$

Thus Eq. (33) establishes Eq. (32), and the LRT is the most powerful test of size $\alpha$. Finally, note that as $\eta$ increases, $R_{NP}$ shrinks, and hence $P_{FA}$ decreases; in other words, if $\eta_1 \ge \eta_2$, then $R_{NP}(\eta_1) \subseteq R_{NP}(\eta_2)$, and hence $\alpha_1 \le \alpha_2$.
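A quick Monte Carlo illustration of this monotonicity (the Gaussian pair $H_0 : \mathcal{N}(0,1)$ vs. $H_1 : \mathcal{N}(1,1)$ is again an illustrative assumption):

```python
import numpy as np
from scipy import stats

# Monotonicity check: as eta increases, R_NP = {x : Lambda(x) >= eta}
# shrinks and P_FA decreases. Illustrative densities: N(0,1) vs N(1,1).
rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 1.0, size=500_000)                      # samples under H0
lam = stats.norm.pdf(x0, loc=1) / stats.norm.pdf(x0, loc=0)  # Lambda(x)

for eta in (0.5, 1.0, 2.0, 4.0):
    print(f"eta = {eta:>4}: P_FA ~ {np.mean(lam >= eta):.4f}")
# Printed P_FA values decrease monotonically as eta grows.
```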
