ISSCC02 Tutorial
Abbas El Gamal
Motivation
Some scenes contain a very wide range of illumination, with intensities varying over a 100dB range or more. Biological vision systems and silver halide film can image such high dynamic range scenes with little loss of contrast information. Dynamic range of solid-state image sensors varies over a wide range:
high end CCDs > 78dB
consumer grade CCDs 66dB
consumer grade CMOS imagers 54dB
So, except for high end CCDs, image sensor dynamic range is not high enough to capture high dynamic range scenes.
Several techniques and architectures have been proposed for extending image sensor dynamic range. This raises many questions:
What exactly is sensor dynamic range, what does it depend on, and how should it be quantified (100dB, 12 bits, ...)?
In what sense do these techniques extend dynamic range?
How well do they work (e.g., accuracy of capturing scene information and/or image quality)?
How should their performance be compared? Are all 100dB sensors equivalent? Is a 120dB sensor better than a 100dB one?
Tutorial Objectives
To provide quantitative understanding of sensor DR and SNR and their dependencies on key sensor parameters
To describe some of the popular techniques for extending image sensor dynamic range
To provide a framework for comparing the performance of these techniques:
quantitatively, based on DR and SNR
qualitatively, based on other criteria, e.g., linearity, complexity, ...
Outline
Background
Introduction to Image Sensors
Image Sensor Model
Sensor Dynamic Range (DR) and SNR
Description/Analysis of HDR Schemes
Other HDR Image Sensors
Conclusion
References
Image Sensors
An area image sensor consists of:
a 2-D array of pixels, each containing a photodetector that converts light into photocurrent, and readout devices
circuits at the periphery for readout, ...
Since the photocurrent is very small (10s to 100s of fA), it is difficult to read out directly. Conventional sensors (CCDs, CMOS APS) operate in direct integration: the photocurrent is integrated during exposure into charge, which is read out as voltage. In some high dynamic range sensors, the photocurrent is directly converted to an output voltage.
[Figure: signal chain — photons (ph/s) → photocurrent (A) → charge (Coul) → voltage (V)]
Photon flux conversion to photocurrent is linear (for fixed irradiance spectral distribution) and governed by the quantum efficiency. The photocurrent is integrated into charge during exposure. In most sensors, charge to voltage conversion is performed using linear amplifier(s).
Direct Integration
[Figure: direct integration pixel — detector capacitance CD reset to vReset; output voltage vo discharges during integration; the charge reaches Qsat under high light but not under low light]
Direct integration:
The photodetector is reset to vReset
Photocurrent discharges CD during the integration (exposure) time tint
At the end of integration, the accumulated (negative) charge Q(tint) (or voltage vo(tint)) is read out
The saturation charge Qsat is called the well capacity
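To make the direct integration model concrete, here is a minimal numerical sketch (not from the tutorial); the constants Q_SAT, T_INT, and the example currents are assumed placeholder values.

```python
import numpy as np

# Illustrative constants (placeholder values, not from the tutorial)
q = 1.602e-19        # electron charge [C]
Q_SAT = 40000        # well capacity [electrons]
T_INT = 10e-3        # integration time [s]

def integrated_charge(i_ph, i_dc=1e-15, t=T_INT):
    """Charge (in electrons) accumulated by direct integration,
    clipped at the well capacity Q_SAT."""
    electrons = (i_ph + i_dc) * t / q
    return min(electrons, Q_SAT)

if __name__ == "__main__":
    for i_ph in [1e-15, 100e-15, 1e-12]:   # low, moderate, high light
        print(f"i_ph = {i_ph:.0e} A -> Q(t_int) = {integrated_charge(i_ph):.0f} e-")
```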
[Figure: CCD architecture — photodiodes alongside vertical CCD shift registers]
Collected charge is simultaneously transferred to the vertical CCDs at the end of the integration time (a new integration period can begin right after the transfer) and then shifted out. Charge transfer to the vertical CCDs simultaneously resets the photodiodes, so shuttering is done electronically (see [2]).
[Figure: CMOS APS pixel and readout — word line (Row i Read), bit line (Col j Out), and column bias Vbias]
Operation
Pixel voltage is read out one row at a time to column storage capacitors, then read out using the column decoder and multiplexer. Row integration times are staggered by the row/column readout time.
Sensor nonidealities:
Temporal noise
Fixed pattern noise (FPN)
Dark current
Spatial sampling and low pass filtering
Temporal Noise
Caused by photodetector and MOS transistor thermal, shot, and 1/f noise. Can be lumped into three additive components:
Integration noise (due to photodetector shot noise)
Reset noise
Readout noise
Noise increases with signal, but so does the signal-to-noise ratio (SNR). Noise under dark conditions presents a fundamental limit on sensor dynamic range (DR).
FPN is the spatial variation in pixel outputs under uniform illumination due to device and interconnect mismatches over the sensor. Two FPN components: offset and gain. Most visible at low illumination (offset FPN more important than gain FPN). Worse for CMOS image sensors than for CCDs due to multiple levels of amplification. FPN due to column amplifier mismatches is a major problem.
Dark current
Dark current is the leakage current at the integration node, i.e., current not induced by photogeneration, due to junction (and transistor) leakages. It limits the image sensor dynamic range by:
introducing dark integration noise (due to shot noise)
varying widely across the image sensor array, causing fixed pattern noise (FPN) that cannot be easily removed
reducing signal swing
The image sensor is a spatial (as well as temporal) sampling device; frequency components above the Nyquist frequency cause aliasing. It is not a point sampling device: the signal is low pass filtered before sampling by spatial integration (of current density over the photodetector area) and by crosstalk between pixels. Resolution below the Nyquist frequency is measured by the Modulation Transfer Function (MTF). Imaging optics also limit spatial resolution (due to diffraction).
[Figure: sensor charge model — collected charge Q(iph) plus noise charges give the output charge Qo, saturating at Qsat]
QShot is the noise charge due to integration (shot noise) and has average power $\frac{1}{q}(i_{ph} + i_{dc})t_{int}$ electrons². QReset is the reset noise (kTC noise). QReadout is the readout circuit noise. QFPN is the offset FPN (we ignore gain FPN). All noise components are independent.
[Figure: input-referred model — photocurrent iph passes through the linear charge transfer function Q(·) to give the output charge Qo]
Since Q(·) is linear, we can readily find the average power of the equivalent input-referred noise r.v. In, i.e., the average input-referred noise power:
$\sigma_{I_n}^2 = \frac{q^2}{t_{int}^2}\left(\frac{1}{q}(i_{ph} + i_{dc})t_{int} + \sigma_r^2\right)$ A²,
where $\sigma_r^2 = \sigma_{Reset}^2 + \sigma_{Readout}^2 + \sigma_{FPN}^2$ electrons² is the read noise power.
SNR is the ratio of the input signal power to the average input-referred noise power, and is typically measured in dB. Using the average input-referred noise power expression, we get
$\mathrm{SNR}(i_{ph}) = 10\log_{10}\frac{i_{ph}^2}{\frac{q^2}{t_{int}^2}\left(\frac{1}{q}(i_{ph} + i_{dc})t_{int} + \sigma_r^2\right)}$ dB
SNR increases with the input signal iph, first (for small iph) at 20dB per decade since read noise dominates, then at 10dB per decade when shot noise (due to the photodetector) dominates.
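As an illustration, the SNR expression above can be evaluated directly. The sketch below assumes the example parameters used in these slides (Qsat = 40000 e-, σr = 20 e-, tint = 10 ms); the function name and values are illustrative only.

```python
import numpy as np

q = 1.602e-19          # electron charge [C]
T_INT = 10e-3          # integration time [s]
SIGMA_R = 20.0         # read noise [electrons rms]

def snr_db(i_ph, i_dc=1e-15, t_int=T_INT, sigma_r=SIGMA_R):
    """SNR(iph) = 10 log10( iph^2 / ((q/tint)^2 ((iph+idc) tint / q + sigma_r^2)) )."""
    noise_power = (q / t_int) ** 2 * ((i_ph + i_dc) * t_int / q + sigma_r ** 2)
    return 10 * np.log10(i_ph ** 2 / noise_power)

# Read-noise-limited region rises ~20 dB/decade, shot-noise-limited ~10 dB/decade
for i_ph in [1e-16, 1e-15, 1e-14, 1e-13]:
    print(f"i_ph = {i_ph:.0e} A : SNR = {snr_db(i_ph):5.1f} dB")
```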
[Plot: SNR (dB) vs. iph (A) for Qsat = 40000 e-, σr = 20 e-, tint = 10 ms, and dark currents idc = 1 fA, 5 fA, and 15 fA]
Dynamic Range
Dynamic range quantifies the ability of a sensor to adequately image both highlights and dark shadows in a scene. It is defined as the ratio of the largest nonsaturating input signal to the smallest detectable input signal:
the largest nonsaturating signal is given by $i_{max} = \frac{qQ_{sat}}{t_{int}} - i_{dc}$, where Qsat is the well capacity
the smallest detectable input signal is defined as the standard deviation of the input-referred noise under dark conditions, $\sigma_{I_n}(0)$ (the zero here refers to iph = 0), which gives $i_{min} = \frac{q}{t_{int}}\sqrt{\frac{1}{q} i_{dc} t_{int} + \sigma_r^2}$
so that DR = 20 log10(imax/imin) dB.
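A small numerical sketch of these definitions, using parameter values borrowed from the example plots (Qsat = 40000 e-, σr = 20 e-, idc = 1 fA, tint = 10 ms); the resulting ~65 dB figure is only an illustration of the formulas, not a quoted result.

```python
import math

q = 1.602e-19      # electron charge [C]
Q_SAT = 40000      # well capacity [electrons]
SIGMA_R = 20.0     # read noise [electrons rms]
I_DC = 1e-15       # dark current [A]
T_INT = 10e-3      # integration time [s]

i_max = q * Q_SAT / T_INT - I_DC                                  # largest nonsaturating current
i_min = (q / T_INT) * math.sqrt(I_DC * T_INT / q + SIGMA_R ** 2)  # std of input-referred noise in the dark
dr_db = 20 * math.log10(i_max / i_min)

print(f"i_max = {i_max*1e15:.0f} fA, i_min = {i_min*1e18:.0f} aA, DR = {dr_db:.1f} dB")
```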
[Plot: DR (dB), ranging from about 50 to 68 dB, as a function of a swept sensor parameter]
[Plot: SNR (dB) vs. iph (A) for Qsat = 40000 e-, σr = 20 e-, idc = 1 fA, and tint = 5, 10, 20, and 40 ms]
Summary
Presented a brief tutorial on image sensor operation and nonidealities
Described the sensor signal and noise model
Used the model to quantify sensor SNR and DR
Discussed dependencies of SNR and DR on sensor parameters
$i_{max} = \frac{qQ_{sat}}{t_{int}} - i_{dc}$ increases as integration time is decreased; $i_{min} = \sqrt{\frac{q\,i_{dc}}{t_{int}} + \left(\frac{q\sigma_r}{t_{int}}\right)^2}$ decreases as integration time is increased.
To increase dynamic range, need to spatially adapt pixel integration times to illumination:
short integration times for pixels with high illumination
long integration times for pixels with low illumination
Integration time can't be made too long due to saturation and motion. The HDR techniques only increase imax (for a given maximum integration time).
Recent work [3] shows how imin can be reduced by lowering read noise and preventing motion blur
The Plan
To describe Well Capacity Adjusting and Multiple Capture
To analyze their SNR and show that the increase in dynamic range comes at the expense of a decrease in SNR; Multiple Capture achieves higher SNR than Well Capacity Adjusting for the same increase in dynamic range (see [4])
To describe two other techniques: Spatially Varying Exposure and Time-to-Saturation
To briefly describe two other types of sensors that do not use direct integration: Logarithmic Sensor and Local Adaptation
To qualitatively compare these six HDR techniques
[Figure: pixel schematic with vdd, access, and bias devices and a control waveform applied over tint]
[Figure: extension of the largest nonsaturating current imax under well capacity adjusting]
[Figure: collected charge vs. time under well capacity adjusting — the well is limited until t1 and then opened to Qsat at tint, shown for moderate and low light]
The largest nonsaturating current is now given by $i_{max} = \frac{(1-\alpha)qQ_{sat}}{t_{int} - t_1} - i_{dc}$. The smallest detectable signal does not change, so the dynamic range is increased by a factor $DRF = \frac{1-\alpha}{1 - t_1/t_{int}}$.
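A quick sketch of these two expressions; the barrier fraction α and adjustment time t1 chosen below are assumptions for illustration, not values from the example that follows.

```python
q = 1.602e-19      # electron charge [C]
Q_SAT = 40000      # well capacity [electrons]
I_DC = 1e-15       # dark current [A]
T_INT = 10e-3      # integration time [s]

def well_adjust_imax(alpha, t1, q_sat=Q_SAT, t_int=T_INT, i_dc=I_DC):
    """Largest nonsaturating current with the well limited to alpha*Qsat until t1."""
    return (1 - alpha) * q * q_sat / (t_int - t1) - i_dc

def well_adjust_drf(alpha, t1, t_int=T_INT):
    """Dynamic range increase factor DRF = (1 - alpha) / (1 - t1/t_int)."""
    return (1 - alpha) / (1 - t1 / t_int)

alpha, t1 = 0.5, 9e-3          # assumed barrier fraction and adjustment time
print(f"i_max = {well_adjust_imax(alpha, t1)*1e12:.2f} pA, DRF = {well_adjust_drf(alpha, t1):.1f}x")
```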
[Figure: charge transfer characteristic Q(iph) under well capacity adjusting — piecewise linear, with a breakpoint near iph = αqQsat/t1 and saturation at Qsat]
78dB Scene
For 0 ≤ iph < αqQsat/t1 − idc, the SNR is the same as in normal operation:
$\mathrm{SNR}(i_{ph}) = 10\log_{10}\frac{i_{ph}^2}{\frac{q^2}{t_{int}^2}\left(\frac{1}{q}(i_{ph} + i_{dc})t_{int} + \sigma_r^2\right)}$ dB
For αqQsat/t1 − idc ≤ iph < (1−α)qQsat/(tint − t1) − idc, the variance of the noise charge is given by $\frac{1}{q}(i_{ph} + i_{dc})(t_{int} - t_1) + \sigma_r^2$ electrons², so the variance of the equivalent input-referred noise current is
$\sigma_{I_n}^2 = \frac{q^2}{(t_{int} - t_1)^2}\left(\frac{1}{q}(i_{ph} + i_{dc})(t_{int} - t_1) + \sigma_r^2\right)$ A²,
and
$\mathrm{SNR}(i_{ph}) = 10\log_{10}\frac{i_{ph}^2}{\frac{q^2}{(t_{int} - t_1)^2}\left(\frac{1}{q}(i_{ph} + i_{dc})(t_{int} - t_1) + \sigma_r^2\right)}$ dB
Example
[Plot: SNR (dB) vs. iph (A) for well capacity adjusting, Qsat = 40000 e-]
SNR DIP
Notice the 21dB dip in the SNR example at the transition point $i_{ph} = \frac{\alpha qQ_{sat}}{t_1} - i_{dc} = 487$ fA.
Well-Adjusting Issues
Increasing DR directly lowers SNR
Implementation is straightforward
Well adjusting causes more noise and FPN (not accounted for in the analysis)
Sensor response is nonlinear
CDS only effective at low illumination
Color processing?
Multiple Capture
Idea: Capture several images within the normal exposure time:
short integration time images capture high light regions
long integration time images capture low light regions
Combine the images into an HDR image, e.g., using the Last Sample Before Saturation algorithm (a sketch is given after this list); this only extends dynamic range at the high illumination end
Implementation of 2 captures has been demonstrated for CCDs and CMOS APS [8]; implementing many captures requires very high speed non-destructive readout; 9 captures demonstrated using DPS [9]
Recent work [3] shows that dynamic range can also be extended at the low illumination end by appropriately averaging the captured images to reduce read noise
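As referenced above, a minimal sketch of the Last Sample Before Saturation idea: for each pixel keep the last capture that has not yet saturated and scale it to the full exposure. The array shapes, toy values, and the fallback for fully saturated pixels are assumptions, not the implementation of [8] or [9].

```python
import numpy as np

Q_SAT = 40000.0    # well capacity [electrons]

def last_sample_before_saturation(captures, times, q_sat=Q_SAT):
    """Combine non-destructive captures (cumulative charge in electrons, shape
    (num_captures, H, W)) taken at increasing times into one HDR estimate,
    scaled to the final integration time."""
    captures = np.asarray(captures, dtype=float)
    times = np.asarray(times, dtype=float)
    unsaturated = captures < q_sat                       # True where a capture is still below the well capacity
    any_ok = unsaturated.any(axis=0)
    # index of the last unsaturated capture per pixel; fall back to the first capture if all saturate
    last_idx = np.where(any_ok,
                        unsaturated.shape[0] - 1 - np.argmax(unsaturated[::-1], axis=0),
                        0)
    rows, cols = np.indices(last_idx.shape)
    q_last = captures[last_idx, rows, cols]
    # scale the selected sample to the full exposure so all pixels share one radiometric scale
    return q_last * (times[-1] / times[last_idx])

if __name__ == "__main__":
    times = np.array([1e-3, 2e-3, 4e-3, 8e-3])           # capture times [s]
    rate = np.array([[2e5, 2e7], [6e6, 4e4]])            # toy photocurrents in electrons/s
    captures = np.minimum(rate[None, :, :] * times[:, None, None], Q_SAT)
    print(last_sample_before_saturation(captures, times))
```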
[Figure: multiple capture — images captured at increasing integration times (e.g., 8T, 16T, 32T)]
[Figure: collected charge vs. time under multiple capture during tint, shown for moderate light]
The largest nonsaturating current is now $i_{max} = \frac{a\,qQ_{sat}}{t_{int}} - i_{dc}$. The smallest detectable signal does not change, so the dynamic range is increased by a factor $DRF = a$.
[Figure: charge transfer characteristic under multiple capture — breakpoints at iph = qQsat/tint and aqQsat/tint, with the last capture limited to Qsat/a]
Consider the case of two captures and the Last Sample Before Saturation algorithm. For 0 ≤ iph < qQsat/tint − idc, the SNR is the same as for normal operation, and we get
$\mathrm{SNR}(i_{ph}) = 10\log_{10}\frac{i_{ph}^2}{\frac{q^2}{t_{int}^2}\left(\frac{1}{q}(i_{ph} + i_{dc})t_{int} + \sigma_r^2\right)}$ dB
For qQsat/tint − idc ≤ iph < aqQsat/tint − idc, the SNR is the same as normal operation with tint replaced by tint/a, and we get
$\mathrm{SNR}(i_{ph}) = 10\log_{10}\frac{i_{ph}^2}{\frac{a^2q^2}{t_{int}^2}\left(\frac{1}{q}(i_{ph} + i_{dc})\frac{t_{int}}{a} + \sigma_r^2\right)}$ dB
SNR Example
[Plot: SNR (dB) vs. iph (A) for Qsat = 40000 e-, σr = 20 e-, comparing well capacity adjusting (Well) and multiple capture (Multi)]
SNR DIP
Notice that the SNR dips by 12.6 dB at the transition current $i_{ph} = \frac{qQ_{sat}}{t_{int}} - i_{dc} = 640$ fA. In general,
$\mathrm{DIP} = 10\log_{10}\frac{a^2\left(\frac{Q_{sat}}{a} + \sigma_r^2\right)}{Q_{sat} + \sigma_r^2} \ge 10\log_{10}DRF$ dB
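The dip expression can be checked numerically; in the sketch below the capture-time ratio a is a free parameter, and a = 16 is an assumed value chosen because it lands near the 12.6 dB quoted above.

```python
import math

Q_SAT = 40000.0    # well capacity [electrons]
SIGMA_R = 20.0     # read noise [electrons rms]

def snr_dip_db(a, q_sat=Q_SAT, sigma_r=SIGMA_R):
    """DIP = 10 log10( a^2 (Qsat/a + sigma_r^2) / (Qsat + sigma_r^2) )."""
    return 10 * math.log10(a ** 2 * (q_sat / a + sigma_r ** 2) / (q_sat + sigma_r ** 2))

for a in [2, 4, 16]:
    print(f"a = {a:2d}: dip = {snr_dip_db(a):.1f} dB  (>= 10 log10 DRF = {10*math.log10(a):.1f} dB)")
```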
[Plot: SNR (dB) vs. iph (A) for Qsat = 40000 e-, σr = 20 e-, with 8 adjustments for well adjusting (solid) and 9 captures for multiple capture (dashed)]
Spatially Varying Exposure
High dynamic range image synthesized using low pass filtering or more sophisticated techniques such as cubic interpolation.
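A rough sketch of the reconstruction step under an assumed 2x2 exposure mosaic, using simple normalization plus neighbor averaging rather than the low pass filtering or cubic interpolation mentioned above.

```python
import numpy as np

Q_SAT = 40000.0                        # well capacity [electrons]

def reconstruct_sve(raw, exposure_map, q_sat=Q_SAT):
    """Reconstruct a common-exposure image from a spatially varying exposure capture.
    raw:          measured charge per pixel [electrons], clipped at q_sat
    exposure_map: relative exposure of each pixel (e.g., a tiled 2x2 pattern)
    Saturated pixels are filled with the average of unsaturated, normalized
    neighbors - a crude stand-in for the interpolation mentioned above."""
    normalized = raw / exposure_map                     # scale every pixel to a common exposure
    valid = raw < q_sat
    out = normalized.copy()
    pad_v = np.pad(valid.astype(float), 1)
    pad_n = np.pad(np.where(valid, normalized, 0.0), 1)
    for y, x in zip(*np.where(~valid)):
        weights = pad_v[y:y+3, x:x+3]
        values = pad_n[y:y+3, x:x+3]
        if weights.sum() > 0:
            out[y, x] = values.sum() / weights.sum()    # average of unsaturated 3x3 neighbors
    return out

if __name__ == "__main__":
    exposures = np.tile(np.array([[1.0, 0.25], [0.5, 0.125]]), (2, 2))   # assumed 2x2 exposure mosaic
    scene = np.full((4, 4), 3e5)                        # toy scene: electrons collected at unit exposure
    raw = np.minimum(scene * exposures, Q_SAT)
    print(reconstruct_sve(raw, exposures))
```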
Very simple to implement and requires no change to the sensor itself
Blocking light reduces sensor sensitivity and SNR
A very high resolution sensor is needed, since spatial resolution is reduced
DR is extended at the high illumination end only (same as multiple capture using the Last Sample Before Saturation algorithm)
Time-to-Saturation
The idea is to measure the integration time required to saturate each pixel; the photocurrent is then estimated by $i_{ph} = \frac{qQ_{sat}}{t_{sat}}$, so the sensor current-to-time transfer function is nonlinear.
SNR is maximized for each pixel and is equal to $\mathrm{SNR} = 20\log_{10}\frac{Q_{sat}}{\sqrt{Q_{sat} + \sigma_r^2}}$ dB.
The minimum detectable current is limited by the maximum allowable integration time (it is not zero!), $i_{min} = \frac{qQ_{sat}}{t_{int}}$, and the maximum detectable current is limited by circuit mismatches, readout speed, and FPN (it is not infinity!) [12,13,14].
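A tiny numerical sketch of the estimate iph = qQsat/tsat and of the imin bound; the example saturation times are placeholders.

```python
q = 1.602e-19      # electron charge [C]
Q_SAT = 40000      # well capacity [electrons]
T_INT = 30e-3      # maximum allowed integration time [s]

def iph_from_tsat(t_sat):
    """Photocurrent estimate from the measured time to saturation."""
    return q * Q_SAT / t_sat

i_min = q * Q_SAT / T_INT              # pixels slower than this never saturate within T_INT
print(f"i_min = {i_min*1e15:.1f} fA")  # about 213 fA, as quoted on the issues slide
for t_sat in [0.1e-3, 1e-3, 10e-3]:
    print(f"t_sat = {t_sat*1e3:4.1f} ms -> i_ph = {iph_from_tsat(t_sat)*1e12:.2f} pA")
```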
Time-to-Saturation Issues
Implementation is quite difficult: need a way to detect saturation for each pixel, then record the time
If global circuits are used [13], contention can severely limit performance
If done at the pixel level [14], pixel size may become unacceptably large
imin can be unacceptably large (= 213 fA for Qsat = 40000 e- and tint = 30 ms) unless the sensor is read out at tint
No CDS support
Logarithmic Sensor
In logarithmic sensors, photocurrent is directly converted to voltage for readout [15].
[Figure: logarithmic pixel — photocurrent iph drawn through a subthreshold MOS load connected to vdd produces the output voltage vout]
High dynamic range is achieved via logarithmic compression during conversion (to voltage), using the exponential I–V curve of the MOS transistor in subthreshold:
$v_{out} = k \ln\frac{i_{ph}}{I_o}$
Up to 5–6 decades of dynamic range are compressed into a ~0.5V range (depending on vT and the number of series transistors).
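A numerical illustration of the logarithmic compression; k and Io below are assumed values (k on the order of the thermal voltage), not parameters from [15].

```python
import numpy as np

K = 0.026          # assumed gain factor, ~ kT/q at room temperature [V]
I_O = 1e-16        # assumed pre-exponential current [A]

def v_out(i_ph):
    """Logarithmic pixel output: v_out = k * ln(i_ph / Io)."""
    return K * np.log(i_ph / I_O)

currents = np.array([1e-14, 1e-12, 1e-10, 1e-8])       # a 6-decade span of photocurrent
v = v_out(currents)
print(np.round(v, 3))                                   # output spans only a few hundred mV
print(f"swing over 6 decades: {v[-1] - v[0]:.3f} V")
```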
[Plot: MOS drain current ids (A) vs. vgs (V), from roughly 10^-15 A to 10^-10 A, highlighting the subthreshold region where ids is exponential in vgs]
Succeeding circuitry must be extremely precise to make use of the dynamic range afforded by the compressed output voltage. The non-integrating nature limits the achievable SNR even at high illumination due to the exponential transconductance relationship:
$\mathrm{SNR}(i_{ph}) = 10\log_{10}\left(\frac{kTC_D}{q^2}\cdot\frac{i_{ph}^2}{(i_{ph} + i_{dc})^2}\right) \le 10\log_{10}\frac{kTC_D}{q^2}$ dB
Note: Here we neglected read noise (which would make SNR even worse)
[Plot: SNR (dB) vs. iph (A) comparing an integrating sensor and a logarithmic sensor]
Local Adaptation
The idea is to mimic biological vision systems, where local averages are subtracted off each receptor value in order to increase the contrast while maintaining wide dynamic range [16,17]. A hexagonal resistive network is used to weight closer pixels exponentially higher than distant ones, thereby approximating Gaussian smoothing.
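A software analogue of this operation (not the resistive network circuit itself): subtract a Gaussian-smoothed local average from each pixel, which compresses the overall range while preserving local contrast. The kernel size and subtraction gain are assumptions.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    """2-D Gaussian kernel approximating the resistive network's smoothing."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    k /= k.sum()
    return np.outer(k, k)

def local_adaptation(image, gain=0.8, size=7, sigma=2.0):
    """Subtract a weighted local average from each pixel (cf. the receptor model above)."""
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    smoothed = np.zeros_like(image, dtype=float)
    H, W = image.shape
    for y in range(H):
        for x in range(W):
            smoothed[y, x] = np.sum(padded[y:y+size, x:x+size] * kernel)
    return image - gain * smoothed

if __name__ == "__main__":
    # toy scene: bright left half, dark right half, with small local detail
    img = np.hstack([np.full((8, 8), 1000.0), np.full((8, 8), 10.0)])
    img[4, 3] += 50
    img[4, 12] += 5
    out = local_adaptation(img)
    print(out.min(), out.max())        # overall range is compressed; edges and detail are emphasized
```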
[Table: qualitative comparison of the six techniques — Well-Adjust, Multi-Capture, Spatially Varying Exposure, Time-to-Saturation, Logarithmic, and Local Adaptation — in terms of DR extension (imax, imin), linearity, CDS support, SNR, and pixel/external complexity]
Conclusion
Defined image sensor DR and SNR and discussed their dependencies on sensor parameters
Described six techniques for extending DR:
Well Adjusting
Multiple Capture
Spatially Varying Exposure
Time-to-Saturation
Logarithmic
Local Adaptation (artificial retina)
All techniques only increase imax (Multiple Capture can also decrease imin by weighted averaging)
SNR can be used to compare different HDR sensors
Multiple Capture with weighted averaging is the most effective method, but requires a high speed non-destructive readout sensor and DSP hardware to implement
References
[1] A. El Gamal, 392b Lectures and Classnotes, Stanford University, 2001.
[2] A. J. P. Theuwissen, Solid-State Imaging with Charge-Coupled Devices. Norwell, MA: Kluwer, 1995.
[3] X. Q. Liu and A. El Gamal, Simultaneous Image Formation and Motion Blur Restoration via Multiple Capture. In ICASSP 2001 conference, Salt Lake City, Utah, May 2001.
[4] D. Yang and A. El Gamal, Comparative Analysis of SNR for Image Sensors with Enhanced Dynamic Range. In Proceedings of the SPIE Electronic Imaging '99 conference, Vol. 3649, San Jose, CA, January 1999.
[5] T. F. Knight, Design of an Integrated Optical Sensor with On-Chip Preprocessing. PhD thesis, MIT, 1983.
[6] M. Sayag, Non-linear Photosite Response in CCD Imagers. U.S. Patent No. 5,055,667, 1991. Filed 1990.
[7] S. J. Decker, R. D. McGrath, K. Brehmer, and C. G. Sodini, A 256x256 CMOS imaging array with wide dynamic range pixels and column-parallel digital output. IEEE J. of Solid State Circuits, Vol. 33, pp. 2081-2091, Dec. 1998.
[8] O. Yadid-Pecht and E. Fossum, Wide intrascene dynamic range CMOS APS using dual sampling. IEEE Trans. Electron Devices, Vol. 44, pp. 1721-1723, Oct. 1997.
[9] D. X. D. Yang, A. El Gamal, B. Fowler, and H. Tian, A 640x512 CMOS image sensor with ultrawide dynamic range floating-point pixel level ADC. IEEE Journal of Solid State Circuits, Vol. 34, pp. 1821-1834, Dec. 1999.
[10] S. Kleinfelder, S. H. Lim, X. Q. Liu, and A. El Gamal, A 10,000 Frames/s 0.18 um CMOS Digital Pixel Sensor with Pixel-Level Memory. In Proceedings of the 2001 IEEE International Solid-State Circuits Conference, pp. 88-89, San Francisco, CA, February 2001.
[11] S. K. Nayar and T. Mitsunaga, High dynamic range imaging: Spatially varying pixel exposures. http://www.cs.columbia.edu/CAVE/, March 2000.
[12] V. Brajovic and T. Kanade, A sorting image sensor: an example of massively parallel intensity-to-time processing for low-latency computational sensors. Proceedings of the 1996 IEEE International Conference on Robotics and Automation, Minneapolis, MN, pp. 1638-1643.
[13] E. Culurciello, R. Etienne-Cummings, and K. Boahen, High dynamic range, arbitrated address event representation digital imager. The 2001 IEEE International Symposium on Circuits and Systems, Vol. 2, pp. 505-508, 2001.
[14] V. Brajovic, R. Miyagawa, and T. Kanade, Temporal photoreception for adaptive dynamic range image sensing and encoding. Neural Networks, Vol. 11, pp. 1149-1158, 1998.
[15] S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts, A Logarithmic Response CMOS Image Sensor with On-Chip Calibration. IEEE Journal of Solid-State Circuits, Vol. 35, No. 8, August 2000.
[16] C. A. Mead and M. A. Mahowald, A Silicon Model of Early Visual Processing. Neural Networks, Vol. 1, pp. 91-97, 1988.
[17] C. A. Mead, Adaptive Retina. In Analog VLSI Implementation of Neural Systems, C. Mead and M. Ismail, Eds., Boston: Kluwer Academic Pub., pp. 239-246, 1989.
Supplementary References:
[18] M. Aggarwal and N. Ahuja, High dynamic range panoramic imaging. Proceedings of the Eighth IEEE International Conference on Computer Vision, Vol. 1, pp. 2-9, 2001.
[19] V. Brajovic, Sensory Computing. Conference 4109: Critical Technologies for the Future of Computing, SPIE's 45th Annual Meeting, SPIE, 2000.
[20] S. Chen and R. Ginosar, Adaptive sensitivity CCD image sensor. Conference C: Signal Processing, Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 3, pp. 363-365, 1994.
[21] T. Delbruck and C. A. Mead, Analog VLSI phototransduction. California Institute of Technology, CNS Memo No. 30, May 11, 1994.
[22] T. Hamamoto and K. Aizawa, A computational image sensor with adaptive pixel-based integration time. IEEE Journal of Solid-State Circuits, Vol. 36, No. 4, pp. 580-585, April 2001.
[23] M. Loose, K. Meier, and J. Schemmel, A self-calibrating single-chip CMOS camera with logarithmic response. IEEE Journal of Solid-State Circuits, Vol. 36, No. 4, pp. 586-596, April 2001.
[24] C. A. Mead, A Sensitive Electronic Photoreceptor. In 1985 Chapel Hill Conference on VLSI, H. Fuchs, Ed., Rockville: Computer Science Press, pp. 463-471, 1985.
[25] N. Ricquier and B. Dierickx, Pixel structure with logarithmic response for intelligent and flexible imager architectures. Microelectronic Eng., Vol. 19, pp. 631-634, 1992.
[26] M. Schanz, C. Nitta, A. Bussmann, B. J. Hosticka, and R. K. Wertheimer, A high-dynamic-range CMOS image sensor for automotive applications. IEEE Journal of Solid-State Circuits, Vol. 35, No. 7, pp. 932-938, July 2000.
[27] O. Schrey, R. Hauschild, B. J. Hosticka, U. Iurgel, and M. Schwarz, A locally adaptive CMOS image sensor with 90 dB dynamic range. Solid-State Circuits Conference Digest of Technical Papers, pp. 310-311, 1999.
[28] Y. Wang, S. Barna, S. Campbell, and E. Fossum, A High Dynamic Range CMOS Image Sensor. In Proceedings of the 2001 IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors, pp. 137-140, June 2001.
[29] O. Yadid-Pecht, Wide dynamic range sensors. Optical Engineering, Vol. 38, No. 10, pp. 1650-1660, October 1999.
[30] O. Yadid-Pecht and A. Belenky, Autoscaling CMOS APS with customized increase of dynamic range. Solid-State Circuits Conference Digest of Technical Papers, pp. 100-101, 2001.
[31] W. Yang, A wide-dynamic-range, low power photosensor array. IEEE ISSCC, Vol. 37, 1994.