
Department of Computer and Information Science

National Chiao Tung University


Image Processing
Final Exam
6/11/2021

1. (12%) Consider the following point spread functions, h1(x, y) to h3(x, y) from left to
right, of some highpass filters. (a) Explain the source of spikes in the figures below.

(b) Find the average value of a highpass-processed image, gi(x, y) = f(x, y) * hi(x, y).
(c) Describe the appearance of |gi(x, y)|.
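For reference on part (b): the kernel below is an assumed 3×3 Laplacian-like example (not the exam's h1 to h3), but any highpass PSF whose coefficients sum to zero has zero DC gain, so the filtered image averages to (approximately) zero. A minimal NumPy/SciPy sketch:

    import numpy as np
    from scipy.signal import convolve2d

    # Assumed 3x3 Laplacian-like highpass kernel; its coefficients sum to zero,
    # so the DC gain is zero and the filtered image has (near-)zero mean.
    h = np.array([[-1., -1., -1.],
                  [-1.,  8., -1.],
                  [-1., -1., -1.]])

    rng = np.random.default_rng(0)
    f = rng.uniform(0, 255, size=(64, 64))              # stand-in for f(x, y)
    g = convolve2d(f, h, mode="same", boundary="wrap")
    print(h.sum(), round(g.mean(), 6))                  # 0.0 and ~0 (exact with circular boundary)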

2. (12%) Consider the degraded image G(u, v) = F(u, v)H(u, v) + N(u, v).


(a) Describe the estimation of H(u, v) by image observation.
(b) Describe the problems of using inverse filtering for restoring F(u, v).
(c) Briefly explain whether or not the problems in (b) can be completely resolved by
using the following approximate Wiener filter.
[1 / H(u, v)] · [ |H(u, v)|² / (|H(u, v)|² + K) ]
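A minimal sketch of restoration with this approximate Wiener filter, assuming the degraded image g and the transfer function H are available and K is a hand-tuned constant; note that (1/H)·|H|²/(|H|² + K) equals conj(H)/(|H|² + K), which avoids explicit division by zeros of H:

    import numpy as np

    def approx_wiener_restore(g, H, K=0.01):
        """Apply F_hat(u,v) = [1/H * |H|^2 / (|H|^2 + K)] G(u,v) and return f_hat."""
        G = np.fft.fft2(g)
        F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G   # algebraically the same filter
        return np.real(np.fft.ifft2(F_hat))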

3. (8%) Show that the following alpha-trimmed mean filter can be converted into (a) a
median filter and (b) an arithmetic mean filter by changing the value of d. Briefly
explain why it may eliminate both (i) salt-and-pepper noise and (ii) uniformly
distributed noise at the same time, while the arithmetic mean filter cannot.
f̂(x, y) = [1 / (mn − d)] Σ_{(s,t)∈Sxy} g_r(s, t)

4. (8%) Give a numerical example to show that using an improper Q in the following filter
may actually enhance the pepper (g(x, y) = 1) noise in an image.


f̂(x, y) = Σ_{(s,t)∈Sxy} g(s, t)^(Q+1) / Σ_{(s,t)∈Sxy} g(s, t)^Q
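A small numeric sketch in the spirit of the question (the 3×3 window is assumed): with a bright background and one pepper pixel of value 1, a positive Q removes the pepper, while a negative (improper) Q pulls the result toward the pepper value:

    import numpy as np

    def contraharmonic(window, Q):
        """Contraharmonic mean: sum of g^(Q+1) over the window divided by sum of g^Q."""
        g = window.astype(float)
        return (g ** (Q + 1)).sum() / (g ** Q).sum()

    win = np.full((3, 3), 200.0)   # assumed bright background
    win[1, 1] = 1.0                # one pepper pixel, g(x, y) = 1

    print(round(contraharmonic(win, +1.5), 1))   # ~200.0: pepper removed
    print(round(contraharmonic(win, -1.5), 1))   # ~1.6:   pepper enhanced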

5. (10%) Consider a 512×512 white image which is contaminated by an additive


sinusoidal noise cos(2πy/16), i.e., with a period of 16 pixels, in the G component.
(a) Describe the noise patterns in its (i) H, (ii) S, and (iii) I components.
(b) Suggest a procedure to remove such noise in the frequency domain.
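A sketch of the frequency-domain procedure in (b), assuming y indexes the columns of the 512×512 G channel: the sinusoid contributes a conjugate pair of impulses at v = ±N/16 on the u = 0 line of the spectrum, and a notch filter that zeroes those two bins removes the ripple:

    import numpy as np

    N = 512
    noise = np.cos(2 * np.pi * np.arange(N) / 16)   # period of 16 pixels along y
    g = 1.0 + np.tile(noise, (N, 1))                # white G channel plus the sinusoid

    G = np.fft.fft2(g)
    k = N // 16                                     # the noise frequency bin
    G[0, k] = 0.0                                   # notch out the conjugate pair
    G[0, -k] = 0.0
    g_clean = np.real(np.fft.ifft2(G))
    print(round(g_clean.std(), 6))                  # ~0: the sinusoid is gone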
6. (8%) Consider a different image complement process which is defined by H′ = H + 180°,
S′ = S, and I′ = 1 − I.
(a) Obtain (R’, G’, B’) for (R, G, B) equal to (i) (1, 0, 1), and (ii) (1, 0.3, 1).
(b) According to (a), can R’ be determined by the value of R alone?
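A sketch of part (a), assuming the course uses the standard Gonzalez-Woods RGB↔HSI conversion; running it gives (R′, G′, B′) ≈ (0, 1, 0) for input (i) and ≈ (0.09, 0.52, 0.09) for input (ii), which already suggests the answer to (b): R = 1 in both cases, yet R′ differs.

    import numpy as np

    def rgb_to_hsi(r, g, b):
        i = (r + g + b) / 3.0
        s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
        theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
        h = theta if b <= g else 360.0 - theta
        return h, s, i

    def hsi_to_rgb(h, s, i):
        h = h % 360.0
        if h < 120.0:                                # RG sector
            b = i * (1 - s)
            r = i * (1 + s * np.cos(np.radians(h)) / np.cos(np.radians(60.0 - h)))
            g = 3 * i - (r + b)
        elif h < 240.0:                              # GB sector
            h -= 120.0
            r = i * (1 - s)
            g = i * (1 + s * np.cos(np.radians(h)) / np.cos(np.radians(60.0 - h)))
            b = 3 * i - (r + g)
        else:                                        # BR sector
            h -= 240.0
            g = i * (1 - s)
            b = i * (1 + s * np.cos(np.radians(h)) / np.cos(np.radians(60.0 - h)))
            r = 3 * i - (g + b)
        return r, g, b

    for rgb in [(1.0, 0.0, 1.0), (1.0, 0.3, 1.0)]:
        h, s, i = rgb_to_hsi(*rgb)
        h2, s2, i2 = (h + 180.0) % 360.0, s, 1.0 - i     # the complement defined above
        print(rgb, "->", tuple(round(c, 3) for c in hsi_to_rgb(h2, s2, i2)))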

7. (10%) Compare the average code lengths of (i) the Huffman code and (ii) the fixed-length binary code.
(a) Show that the two lengths may be the same for a four-symbol source.
(b) Show that (i) will always be shorter than (ii) for a six-symbol source.
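A sketch that compares the two average lengths numerically; the Huffman construction below tracks only code lengths, and the two source distributions are assumed examples (the exam does not fix them). Part (b) asks for a general argument; the six-symbol line here only checks one instance:

    import heapq, math

    def huffman_lengths(probs):
        """Return Huffman code lengths: each merge adds one bit to the merged symbols."""
        heap = [(p, [k]) for k, p in enumerate(probs)]
        lengths = [0] * len(probs)
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, s1 = heapq.heappop(heap)
            p2, s2 = heapq.heappop(heap)
            for k in s1 + s2:
                lengths[k] += 1
            heapq.heappush(heap, (p1 + p2, s1 + s2))
        return lengths

    for probs in ([0.25, 0.25, 0.25, 0.25],            # four equiprobable symbols
                  [0.4, 0.2, 0.1, 0.1, 0.1, 0.1]):     # one assumed six-symbol source
        L = huffman_lengths(probs)
        avg = sum(p * l for p, l in zip(probs, L))
        fixed = math.ceil(math.log2(len(probs)))
        print(len(probs), "symbols: Huffman", round(avg, 2), "bits vs binary", fixed, "bits")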

8. (10%) Use the LZW algorithm to encode the following 1×12, 8-bit image:
126 126 126 126 39 39 39 39 126 126 126 126
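A minimal LZW encoder sketch for checking an answer, using a dictionary initialized with the 256 single 8-bit values and assigning new phrases codes from 256 upward:

    def lzw_encode(seq, next_code=256):
        """Plain LZW: dictionary starts with all single 8-bit values 0..255."""
        table = {(v,): v for v in range(256)}
        w, out = (), []
        for v in seq:
            wv = w + (v,)
            if wv in table:
                w = wv
            else:
                out.append(table[w])
                table[wv] = next_code      # add the new phrase to the dictionary
                next_code += 1
                w = (v,)
        if w:
            out.append(table[w])
        return out

    pixels = [126, 126, 126, 126, 39, 39, 39, 39, 126, 126, 126, 126]
    print(lzw_encode(pixels))   # -> [126, 256, 126, 39, 259, 39, 257, 126]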

9. (12%) Consider the following two-line segment:


01001111000111001110011
11111110000000000110001
Use the following CCITT 2-D code table to code the second line of the segment.

An example of the pass mode (and definitions of a0, a1, b0, b1)

10. (12%) Consider the linear predictor f*(x, y) = α1f(x, y−1) + α2f(x−1, y−1) + α3f(x−1, y).
(a) Use E{f(x, y) f(x−i, y−j)} = σ²ρv^i ρh^j to establish the formulation (no need to solve)
for obtaining the optimal predictor which minimizes E{[f(x, y) − f*(x, y)]²}.
(b) Explain which types of data redundancy may be effectively exploited with the above
predictor.
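A sketch of the resulting formulation: the three normal (orthogonality) equations R α = r, with both sides filled from the given correlation model (taken to depend on |i| and |j|). The numeric values of σ², ρv, ρh below are assumed only to make the system concrete; the exam asks just for the formulation:

    import numpy as np

    sigma2, rho_v, rho_h = 1.0, 0.95, 0.9          # assumed example values

    # Offsets (i, j) of the three predictor inputs f(x, y-1), f(x-1, y-1), f(x-1, y).
    offsets = [(0, 1), (1, 1), (1, 0)]

    # Correlation model E{f(x, y) f(x-i, y-j)} = sigma^2 * rho_v^|i| * rho_h^|j|.
    corr = lambda i, j: sigma2 * rho_v ** abs(i) * rho_h ** abs(j)

    # Normal equations R @ alpha = r from minimizing E{[f - f*]^2}.
    R = np.array([[corr(ia - ib, ja - jb) for (ib, jb) in offsets]
                  for (ia, ja) in offsets])
    r = np.array([corr(i, j) for (i, j) in offsets])

    alpha = np.linalg.solve(R, r)
    print(alpha)    # -> [rho_h, -rho_v*rho_h, rho_v] for this separable model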
