
Technical note

CMOS linear image sensors

CMOS linear image sensors are used in applications including spectrometers, distance measurement, and machine vision cameras (dimension measurement, foreign object inspection). Hamamatsu CMOS linear image sensors are sensitive to non-visible light (ultraviolet, near infrared) as well as visible light, so they are commonly used in measurement and inspection tasks that require detection of non-visible light. We also offer products with characteristics suited to spectroscopic measurement, including high sensitivity in the ultraviolet region, sensitivity in the vacuum ultraviolet region, and a smooth spectral response across the whole wavelength range.
Hamamatsu offers CMOS linear image sensors with various photosensitive areas, packages, and functions, built on analog CMOS and assembly technologies cultivated at Hamamatsu's own factories. Custom devices are also available.

Hamamatsu CMOS linear image sensors

| Photodiode | Operation method | Output | Type no. | Number of pixels | Pixel pitch (µm) | Pixel height (µm) | Image size (mm) | Spectral response range (nm) | Line rate max. (lines/s) | Note |
|---|---|---|---|---|---|---|---|---|---|---|
| Surface type | PPS RS | Analog | S9226 series | 1024 | 7.8 | 125 | 7.9872 × 0.125 | 400 to 1000 | 194 | |
| Surface type | PPS RS | Analog | S8377/S8378 series | 128 | 50 | 500 | 6.4 × 0.5 | 200 to 1000 | 3846 | |
| | | | | 256 | 50 | 500 | 12.8 × 0.5 | 200 to 1000 | 1938 | |
| | | | | 512 | 50 | 500 | 25.6 × 0.5 | 200 to 1000 | 972 | |
| | | | | 256 | 25 | 500 | 6.4 × 0.5 | 200 to 1000 | 1938 | |
| | | | | 512 | 25 | 500 | 12.8 × 0.5 | 200 to 1000 | 972 | |
| | | | | 1024 | 25 | 500 | 25.6 × 0.5 | 200 to 1000 | 487 | |
| Surface type | PPS RS | Current | S15908-512Q | 512 | 50 | 2500 | 25.6 × 2.5 | 200 to 1000 | 486 | |
| Surface type | PPS RS | Current | S15909-1024Q | 1024 | 25 | 2500 | 25.6 × 2.5 | 200 to 1000 | 243 | |
| Surface type | APS GS | Analog | S9227 series | 512 | 12.5 | 250 | 6.4 × 0.25 | 400 to 1000 | 9434 | |
| Buried type | APS GS | Analog | S11639-01 | 2048 | 14 | 200 | 28.672 × 0.2 | 200 to 1000 | 4672 | |
| Buried type | APS GS | Analog | S13488 | 2048 | 14 | 42 | 28.672 × 0.042 | - | 4672 | With color filter |
| Buried type | APS GS | Analog | S13131 series | 512 | 5.5 | 63.5 | 2.816 × 0.0635 | - | 3774 | Space-saving type (COB package) |
| | | | | 736 | 5.5 | 63.5 | 4.048 × 0.0635 | - | 2653 | |
| | | | | 1536 | 5.5 | 63.5 | 8.448 × 0.0635 | - | 1287 | |
| Buried type | APS GS | Digital | S11720 series | 1536 | 127 | 127 | 194.972 × 0.127 | 400 to 1000 | 45400 | Long and narrow type (for close contact optical system) |
| | | | | 3072 | 127 | 127 | 390.044 × 0.127 | 400 to 1000 | 45400 | |
| Buried type | APS GS | Digital | S13774 | 4096 | 7 | 7 | 28.672 × 0.007 | - | 100000 | With ADC (high-speed type) |
| Buried type | APS GS | Digital | S15611 | 1024 | 7 | 200 | 7.168 × 0.2 | - | 34000 | With ADC (compact type) |

Note:
∙ PPS: passive pixel sensor
∙ APS: active pixel sensor
∙ RS: rolling shutter
∙ GS: global shutter

1. Structure

CMOS image sensors are devices that convert optical information into an electrical signal. They have a photosensitive area, a charge detection section, and a readout section [Figure 1-1]. In the photosensitive area, photodiodes detect incident light and convert it into a signal charge through photoelectric conversion. The charge detection section converts the signal charge into an analog signal such as a voltage or current. The readout section reads out the analog signal (video signal) for each pixel.
The CMOS process enables a timing generator and a bias generator to be formed within the CMOS image sensor chip, which makes it possible to simplify the external driver circuit. Handling is easy because the CMOS image sensor can be driven just by inputting CLK, ST, and a single power supply (e.g., 3.3 V).
For a digital device to handle the analog signal output by the CMOS image sensor, the analog signal must be converted into a digital signal. Generally, an A/D converter in an external circuit converts the analog signal into a digital signal, and that signal is then transferred to a digital device. Hamamatsu also offers digital output CMOS image sensors with A/D converters built into the chip.

[Figure 1-1] Block diagram of CMOS image sensor (typical example): a photosensitive area (photodiode array), charge detection area, amp array, hold circuit, shift register, and readout amplifier, together with an on-chip timing generator and bias generator, driven by CLK, ST, and a single power supply (3.3 V).

There are two types of CMOS image sensors: CMOS area image sensors, with pixels arranged in two dimensions, and CMOS linear image sensors, with pixels arranged in only one dimension [Figure 1-2]. In CMOS area image sensors, the area of the photodiodes per unit area (the fill factor) is reduced by the region occupied by the video wiring and circuits. In CMOS linear image sensors, on the other hand, the fill factor can be set to 100%, making it easy to achieve high sensitivity. They also allow rectangular pixels to be formed easily. CMOS linear image sensors are used for applications where it is necessary to detect one-dimensional position information of light. This technical note explains CMOS linear image sensors.

[Figure 1-2] Pixel arrangement of CMOS image sensors: (a) CMOS area image sensor — pixels in two dimensions share video wiring and an amplifier; (b) CMOS linear image sensor — pixels in a single row.
2. Operating principle

2-1 Shutter method

There are two operation methods for CMOS linear image sensors: rolling shutter and global shutter. With a rolling shutter, the integration timing is shifted for each pixel [Figure 2-1 (a)], while with a global shutter, integration is done for all pixels at the same time [Figure 2-1 (b)]. When imaging a fast-moving object, a rolling shutter distorts the image due to the time deviation in integration. With a global shutter, integration is done for all pixels at the same time, so there is no distortion in the image [Figure 2-2]. Therefore, a global shutter is used for imaging fast-moving objects and detecting light pulses, such as in machine vision. In contrast, a rolling shutter is used for detection of constant light. Most Hamamatsu PPS type (see "2-2 Passive pixel sensors, active pixel sensors") CMOS linear image sensors use a rolling shutter.

[Figure 2-1] Operation methods: (a) rolling shutter — the timing of integration shifts for each pixel, frame by frame; (b) global shutter — all pixels integrate simultaneously, and the signals are held during a hold period before readout.

[Figure 2-2] Imaging of fast-moving objects with a CMOS linear image sensor: (a) with a rolling shutter, the image is distorted; (b) with a global shutter, the image is not distorted.

Operation of the rolling shutter and the global shutter is explained below.

(1) Rolling shutter

Operation of a rolling shutter is shown in Figure 2-3 (a). Each pixel is connected to the video wiring and an amplifier via a switch. Imaging with a rolling shutter CMOS linear image sensor is done in the following order.

1. In each pixel, while the switch is off, the photodiode converts light into an electric charge and integrates the charge.
2. The switch of the first pixel is turned on to connect the pixel to the video wiring, and the pixel signal is transferred to the amplifier. The amplifier reads out the signal, then outputs a video signal. At this time, the photodiode charge is reset and the next integration begins.
3. The same operation is repeated from the second pixel to the last pixel, and the video signals of all pixels are then output.

With a rolling shutter, integration starts at the first pixel, then continues in order from the second pixel to the third pixel, and so on. The integration timing of the first pixel and the final pixel is therefore shifted by as much as the number of pixels.

(2) Global shutter

Operation with a global shutter is shown in Figure 2-3 (b). Unlike a rolling shutter, a global shutter has each pixel connected to a hold circuit via a switch, and this hold circuit is connected to an amplifier via another switch. Imaging with a global shutter CMOS linear image sensor is done in the following order.

1. While the switches between the photodiodes and the hold circuits are off, the photodiodes convert light into an electric charge at the same time for all pixels and integrate it.
2. The switches between the photodiodes and the hold circuits are turned on simultaneously for all pixels, and the signals are transferred to and saved in the hold circuits.
3. The switches between the hold circuits and the video wiring are turned on in order, from the first pixel to the last pixel, and the signal of each pixel is transferred to the amplifier. The amplifier reads out the signal, then outputs a video signal.

With a global shutter, integration is done for all pixels at the same time, so there is no deviation in the timing of integration.

[Figure 2-3] Operation of CMOS linear image sensors: (a) rolling shutter — each pixel is connected to the video wiring and amplifier via a switch, and the signals are transferred pixel by pixel; (b) global shutter — each pixel is connected to a hold circuit, all signals are transferred to the hold circuits simultaneously, and they are then read out in order.

2-2 Passive pixel sensors, active pixel sensors

There are two CMOS linear image sensor structures: passive pixel sensor (PPS) and active pixel sensor (APS). A PPS transfers the signal to a final-stage amplifier, which then amplifies it [Figure 2-4 (a)]. An APS has an amplifier in each pixel, and each pixel amplifies its signal before transferring it [Figure 2-4 (b)].
A PPS does not have an amplifier in each pixel, so it consumes less power. One amplifier is shared by all pixels, thereby achieving excellent output uniformity. A high-performance amplifier and a high-capacitance capacitor are mounted in the final stage, realizing excellent output linearity and a large saturation charge, so PPS is used for applications such as spectrophotometry.
With an APS, the signal is amplified by the amplifier in each pixel and then transferred. This reduces the effect of noise mixed into the circuit after each pixel and realizes low noise. Because the wiring between the photodiode and the amplifier is short, parasitic capacitance can be suppressed, realizing high-speed readout and high sensitivity. In addition to machine vision, which requires high-speed detection, APS is used for a wide range of applications including consumer products and spectrophotometry.
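The distortion a rolling shutter causes can be illustrated with a toy simulation, shown below as a sketch (not Hamamatsu code; all function names are hypothetical). A bright edge moves across an eight-pixel line sensor at one pixel per time step; a global shutter samples every pixel at the same instant, while a rolling shutter samples pixel i one time step after pixel i - 1.

```python
def scene(t, n=8):
    # hypothetical scene: a bright edge that has advanced t pixels from the left
    return [1.0 if i < min(int(t), n) else 0.0 for i in range(n)]

def global_shutter_read(t0, n=8):
    # global shutter: all pixels sample the scene at the same instant t0
    return scene(t0, n)

def rolling_shutter_read(t0, n=8):
    # rolling shutter: pixel i is sampled one time step later than pixel i - 1
    return [scene(t0 + i, n)[i] for i in range(n)]

snapshot = global_shutter_read(3)   # matches the true scene at t = 3
rolling = rolling_shutter_read(3)   # the moving edge is smeared across pixels
```

Because the edge keeps moving while the rolling readout walks along the line, the rolling result no longer matches the scene at any single instant, which is the one-dimensional analog of the distortion in Figure 2-2 (a).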

[Figure 2-4] Pixel structure: (a) PPS — each photodiode is connected through the video wiring to a single shared amplifier; (b) APS — each pixel contains its own amplifier between the photodiode and the video wiring.

[Table 2-1] Comparison of PPS and APS

| Parameter | PPS | APS |
|---|---|---|
| Power consumption | Small | Large |
| Pixel output uniformity | Excellent | Good |
| Linearity | Excellent | Good |
| Saturation charge | Large | Small |
| Noise | Large | Small |
| Sensitivity | Low | High |
| Readout speed | Slow | Fast |

2-3 Surface type photodiodes, buried type photodiodes

There are two photodiode structures for CMOS linear image sensors: surface type and buried type [Figure 2-5]. The surface type has a two-layer structure, with an N+ layer formed on the surface of the P layer of the silicon. The buried type has a three-layer structure, with an N- layer and a P+ layer formed on top of the P layer of the silicon.
The light incident on the photodiode undergoes photoelectric conversion into a signal charge. In the surface type, the signal charge is integrated in the N+ layer, while in the buried type, it is integrated in the N- layer. The readout circuit then sequentially transfers the integrated signal charge (see "2-1 Shutter method").
Dark current is mainly generated by crystal defects on the surface of the photodiode. In the surface type, dark current generated by crystal defects is integrated in the charge integration section on the surface of the photodiode. In the buried type, dark current can be suppressed because the charge integration section is separated from the surface of the photodiode. The buried type can also completely transfer (fully deplete) all the signal charge, so there is less image lag than with the surface type. Because the buried type realizes both low dark current and low image lag, it is used in a wide range of applications, including spectroscopic measurements that require long integration times as well as machine vision that does high-speed detection.
The surface type has a larger charge integration section than the buried type, realizing a large saturation charge. The charge integration section in the surface type also has lower electrical resistance and faster charge transfer than the buried type, so the pixel size can be made larger.

[Figure 2-5] Schematic cross section of photodiodes of a CMOS image sensor: (a) surface type photodiode — the N+ charge integration section lies at the front surface, where crystal defects (the major dark current source) are located; (b) buried type photodiode — a P+ layer covers the N- charge integration section, separating it from the defective surface, and a transfer gate moves the charge to the floating diffusion of the readout circuit.

[Table 2-2] Comparison of surface type photodiode and buried type photodiode

| Parameter | Surface type photodiode | Buried type photodiode |
|---|---|---|
| Dark current | Large | Small |
| Saturation charge | Large | Small |
| Larger area | Easy | Difficult |

3. Characteristics

3-1 Spectral response

Typical Hamamatsu CMOS linear image sensors have sensitivity in the range of 200 to 1000 nm or 400 to 1000 nm, and the peak sensitivity wavelength is around 700 nm [Figure 3-1].
The spectral response in the long wavelength region (near infrared) is determined by the material and thickness of the photodiode. If the material of the photodiode is silicon, incident light with energy higher than the band gap energy of silicon (1.12 eV) generates an electric charge through photoelectric conversion. Light energy is expressed by equation (3-1).

E = 1240/λ ......... (3-1)

E: light energy [eV]
λ: wavelength [nm]

Light has higher energy at shorter wavelengths and lower energy at longer wavelengths. When the wavelength exceeds 1100 nm, there is no sensitivity because there is no photoelectric conversion. Light also has a longer penetration length at longer wavelengths, so a charge is generated even in the deep part of the photodiode. Therefore, it is possible to increase sensitivity in the near infrared region by increasing the thickness of the silicon. Hamamatsu offers a near infrared-enhanced type with thicker silicon than the standard type to increase the sensitivity in the near infrared region. However, the standard type has better resolution, so it is necessary to choose the type suitable for the application.
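Equation (3-1) and the 1.12 eV band gap give the long-wavelength cutoff directly. A minimal sketch (the function names are mine, not from this note):

```python
SILICON_BANDGAP_EV = 1.12  # band gap energy of silicon

def photon_energy_ev(wavelength_nm):
    """Equation (3-1): E [eV] = 1240 / wavelength [nm]."""
    return 1240.0 / wavelength_nm

def silicon_has_sensitivity(wavelength_nm):
    # photoelectric conversion requires photon energy above the band gap
    return photon_energy_ev(wavelength_nm) >= SILICON_BANDGAP_EV
```

The cutoff works out to 1240 / 1.12 ≈ 1107 nm, consistent with the statement that sensitivity disappears beyond about 1100 nm.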
The spectral response in the short wavelength region (UV) is determined by the window material of the CMOS linear image sensor and the material of the protective film on the photodiodes. UV light is easily absorbed by substances. Therefore, if a window material or protective film that easily absorbs UV light is used, it will absorb the UV light before it reaches the photodiodes, and there will be no sensitivity in the UV region.
When detecting UV light, UV light resistance is also
necessary. When UV light is incident on a CMOS linear
image sensor that is not UV resistant, the sensitivity
of the photodiode surface drops. Short wavelength
light such as UV light is subject to photoelectric
conversion on the surface of the photodiodes, so UV
light sensitivity will drop sharply if UV light is incident
continuously on the photodiodes. Hamamatsu offers
CMOS linear image sensors that realize both high

[Figure 3-1] Spectral response (typical example, Ta=25 °C): photosensitivity (A/W) vs. wavelength (nm) from 200 to 1200 nm for the standard type and the NIR-enhanced type.

There are narrow peaks and valleys in the spectral response. These are caused by light interference: interference between the incident light and the light reflected inside the protective film formed on the surface of the photodiodes produces peaks and valleys in the spectral response at specific wavelengths [Figure 3-2]. Hamamatsu offers CMOS linear image sensors with a smooth spectral response at all wavelengths by making improvements to the structure of the photodiodes (see "4. Hamamatsu technologies").

[Figure 3-2] Schematic cross section of photodiode: incident light is partially reflected inside the protective film on the Si surface, causing interference.

Sensitivity varies linearly with temperature changes. At wavelengths shorter than the peak sensitivity wavelength, the temperature dependence is small [Figure 3-3]. The longer the wavelength, the larger the temperature dependence; the temperature coefficient at 1000 nm is approx. 0.8%/°C.

[Figure 3-3] Sensitivity temperature characteristics (typical example, Ta=25 °C): rate of change in sensitivity (%/°C) vs. wavelength (nm).

3-2 Input/output characteristics

The input/output characteristics express the relation between the incident light level and the output. The incident light level is expressed by the exposure (illuminance × integration time).
Figure 3-4 shows a typical example of input/output characteristics. As exposure increases, the output increases linearly until it reaches saturation. The exposure at which the output reaches saturation is called the saturation exposure. The saturation output is an index that determines the maximum level of the dynamic range (see "3-7 S/N, dynamic range") described later.

[Figure 3-4] Input/output characteristics (typical example, Ta=25 °C): output (V) vs. exposure (a.u.), showing the measured curve Xmeasure and the ideal straight line Xideal.

3-3 Linearity error

Ideal input/output characteristics are for the output to change linearly according to the change in exposure.
The deviation of the input/output characteristics from this ideal straight line is called the linearity error and is defined by equation (3-2).

Linearity error = (Xmeasure - Xideal) / Xideal × 100 [%] ......... (3-2)

Xmeasure: measured output value
Xideal: ideal straight line connecting the origin and 5% of the saturation output [Figure 3-4]

Figure 3-5 shows a typical example of the linearity error.

[Figure 3-5] Linearity error (typical example, Ta=25 °C): linearity error (%) vs. output/saturation output (%).

3-4 Photoresponse nonuniformity

Photoresponse nonuniformity (PRNU) indicates the variation in sensitivity between pixels. The pixels of a CMOS linear image sensor have sensitivity nonuniformity caused by manufacturing variations in the photodiodes and amplifiers. In Hamamatsu CMOS linear image sensors, photoresponse nonuniformity is defined as the output variation of all pixels when uniform light of about 50% of saturation is incident on the entire effective photosensitive area of the photodiodes [equation (3-3)].

PRNU = (∆X/Xaverage) × 100 [%] ......... (3-3)

Xaverage: average of the output of all pixels
∆X: difference between Xaverage and the maximum or minimum pixel output

Figure 3-6 shows a typical example of photoresponse nonuniformity.

[Figure 3-6] Photoresponse nonuniformity (typical example, Ta=25 °C, light source: 2856 K): PRNU (%) vs. pixel number.

3-5 Offset output, dark output

The output in the dark state is expressed as the sum of the offset output and the dark output.

(1) Offset output

Offset output is caused by the circuit. The output in the dark state includes both offset output and dark output; Hamamatsu defines the output at the shortest integration time for which dark output is negligible as the offset output. Thus, the offset output is constant even if the integration time increases (Figure 3-7).

[Figure 3-7] Schematic diagram of offset output and dark output: (a) when the integration time is short, the output is almost entirely the stable offset output; (b) when the integration time is long, dark output is added on top of the offset output.

(2) Dark output

Dark output is caused by the photodiodes. It is generated when carriers in the photodiodes are excited from the valence band to the conduction band by heat. It increases in proportion to the integration time, so the integration time must be chosen with consideration for the magnitude of the dark output.
The higher the temperature, the more carriers are excited from the valence band to the conduction band, so the dark output changes exponentially with temperature [Figure 3-8]. With Hamamatsu CMOS linear image sensors, the dark output doubles for every 5 °C rise in temperature.
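Equations (3-2) and (3-3) and the dark output temperature rule are simple enough to compute directly. A sketch follows; the helper names are mine, not Hamamatsu's.

```python
def linearity_error_percent(x_measured, x_ideal):
    """Equation (3-2): deviation from the ideal straight line, in %."""
    return (x_measured - x_ideal) / x_ideal * 100.0

def prnu_percent(outputs):
    """Equation (3-3): PRNU from per-pixel outputs under uniform light."""
    x_avg = sum(outputs) / len(outputs)
    # largest deviation of any pixel from the average, in either direction
    delta_x = max(abs(max(outputs) - x_avg), abs(min(outputs) - x_avg))
    return delta_x / x_avg * 100.0

def dark_output_scale(delta_t_celsius):
    """Dark output multiplier for a temperature rise of delta_t [deg C].

    Doubling every 5 deg C corresponds to 2 ** (1/5) ~ 1.15 per degree,
    which the text rounds to about 1.1 per degree.
    """
    return 2.0 ** (delta_t_celsius / 5.0)
```

For instance, cooling a sensor by 10 °C cuts the dark output to a quarter, which is why long-integration spectroscopic measurements benefit from temperature control.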

When the temperature rises by 1 °C, the dark output increases by about 1.1 times, so when the temperature rises by ΔT [°C], the dark output increases by about 1.1^ΔT times.

[Figure 3-8] Dark output vs. chip temperature (typical example): dark output (mV) on a log scale vs. chip temperature (°C) from 0 to 100 °C.

3-6 Noise

Noise is broadly separated into two types: random noise, which fluctuates with time, and fixed pattern noise, which is generated by specific pixels regardless of time.

(1) Random noise

Random noise can be separated into three types, depending on the factor that causes it: shot noise, dark shot noise, and readout noise.

Shot noise (Nshot)
Even when the strength of the light is constant, the number of photons incident on the photodiodes is not constant but fluctuates. Noise caused by these fluctuations in the number of photons is called shot noise. Shot noise follows Poisson statistics and is expressed by equation (3-4).

Nshot = √S ......... (3-4)

S: number of signal electrons [e-]

Dark shot noise (Nd)
Dark shot noise is caused by dark current and is proportional to the square root of the number of electrons generated in the dark state. When the integration time is short enough, the dark current is small, so the effect of dark shot noise can be ignored.

Readout noise (Nread)
Readout noise is noise generated inside the readout circuit. It occurs even in the dark state, regardless of the light level. Readout noise is measured in the dark state at the shortest integration time. It is an index that determines the minimum level of the dynamic range (see "3-7 S/N, dynamic range") described later.
Readout noise includes kTC noise caused by circuit switching during readout, thermal noise caused by the thermal random motion of charges inside MOS transistors, and RTS (random telegraph signal) noise caused by defects in MOS transistors. RTS noise is a fine current that flows when carriers are captured and released by structural defects in the gate oxide film of a MOS transistor. With the microfabricated CMOS processes of recent years, the effects of these fine currents cannot be ignored. The strength of RTS noise varies from pixel to pixel.

(2) Fixed pattern noise (Nfpn)

The cause of fixed pattern noise differs between the dark state and the light state. The primary causes of fixed pattern noise in the dark state are variations in offset output and dark current for each pixel. The primary cause of fixed pattern noise in the light state is photoresponse nonuniformity (see "3-4 Photoresponse nonuniformity"). Fixed pattern noise in the light state increases in proportion to the exposure.

The total noise (Ntotal) of a CMOS linear image sensor is given by equation (3-5).

Ntotal = √(Nshot² + Nd² + Nread² + Nfpn²) ......... (3-5)

3-7 S/N, dynamic range

The relationship between the noise and the output vs. exposure of a CMOS linear image sensor is shown in Figure 3-9.

(1) S/N

The higher the S/N, the better the CMOS linear image sensor's image quality will be. The type of dominant noise and the S/N change depending on the strength of the output of the sensor.

(2) Dynamic range

Dynamic range generally indicates the measurable range of a sensor and is defined as the ratio of the maximum level to the minimum level. The wider the dynamic range, the wider the measurable range will be. Hamamatsu defines the dynamic range by equation (3-6), with the upper limit of the dynamic range as the saturation output and the lower limit as the readout noise.
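The random-noise terms of equations (3-4) and (3-5) combine as a root sum of squares, since the sources are uncorrelated. A minimal sketch (function names are mine):

```python
import math

def shot_noise(signal_electrons):
    """Equation (3-4): Nshot = sqrt(S), in electrons rms."""
    return math.sqrt(signal_electrons)

def total_noise(n_shot, n_dark_shot, n_read, n_fpn):
    """Equation (3-5): uncorrelated noise sources add in quadrature."""
    return math.sqrt(n_shot**2 + n_dark_shot**2 + n_read**2 + n_fpn**2)
```

For example, 10,000 signal electrons give 100 e- of shot noise; combined with 3 e- of readout noise, the total is barely above 100 e-, which illustrates why shot noise dominates at high output levels.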

Drange = Vsat/Nread ......... (3-6)

Drange: dynamic range
Vsat: saturation output
Nread: readout noise

[Figure 3-9] Output, noise vs. exposure (typical example): on log-log axes, the output rises linearly up to the saturation output, while the total noise Ntotal is dominated by Nread at low exposure, by Nshot at intermediate exposure, and by Nfpn near saturation; the dynamic range spans from Nread to the saturation output.

3-8 Resolution

Resolution is the degree of detail to which the input pattern is reproduced in the output. Figure 3-10 shows a schematic diagram of the output when a repeating square-wave pattern is incident. When the input pattern is wide, the image can be reproduced accurately. However, as the incident pattern becomes narrower, the output difference shrinks, so that the image can no longer be reproduced accurately. There are two causes of this phenomenon: optical crosstalk, where incident light enters adjacent pixels, and electrical crosstalk, where signal charges produced by photoelectric conversion diffuse into adjacent pixels.
Indicators of resolution include the modulation transfer function (MTF) for sine waves and the contrast transfer function (CTF) for square waves. Hamamatsu uses a square-wave pattern test chart to evaluate CTF, which is defined by equation (3-7). The fineness of the black-and-white spacing of the input pattern is expressed as the spatial frequency: the number of repeating patterns per unit length, i.e., the reciprocal of the distance from one white part to the next in the pattern of Figure 3-10. The unit is normally line pairs/mm.

CTF = (VWO - VBO) / (VW - VB) ......... (3-7)

VWO: output white level
VBO: output black level
VW: output white level (when the input pattern is wide)
VB: output black level (when the input pattern is wide)

[Figure 3-10] Schematic diagrams of CTF characteristics: (a) when the input pattern is wide, the actual output reaches the ideal white level VW and black level VB; (b) when the input pattern is narrow, the output swings only between VWO and VBO, which lie closer together than the ideal levels.

Figure 3-11 shows a typical example of CTF. The narrower the input pattern (i.e., the higher the spatial frequency), the lower the CTF. CTF is also wavelength dependent: the longer the wavelength, the deeper in the silicon substrate the signal charge is generated, which increases electrical crosstalk and lowers the CTF.

[Figure 3-11] CTF vs. spatial frequency (typical example, pixel pitch: 7 µm, Ta=25 °C): CTF at 470 nm, 525 nm, 660 nm, and 880 nm over 0 to 80 line pairs/mm.

3-9 Image lag

Image lag is a phenomenon in which output from the previous frame remains in the output of the next frame after the optical signal of the frame is read out. Figure 3-12 shows an example of image lag. The incident light changes from the light state to the dark state at time T. During readout of the Nth frame, light is incident during the integration time, so the optical signal is read out. At the time of readout of the N+1th frame, no light is incident during the integration time, so normally no optical signal is read out.
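Equations (3-6) and (3-7) are simple ratios, which makes it easy to check values read off the curves. A sketch (function names are mine):

```python
def dynamic_range(v_sat, n_read):
    """Equation (3-6): Drange = Vsat / Nread (dimensionless ratio)."""
    return v_sat / n_read

def ctf(v_wo, v_bo, v_w, v_b):
    """Equation (3-7): contrast of a narrow pattern relative to a wide one."""
    return (v_wo - v_bo) / (v_w - v_b)
```

For illustration, a sensor saturating at 30,000 e- with 3 e- of readout noise has a dynamic range of 10,000; a CTF of 1 means the narrow pattern is reproduced at full contrast, while a CTF of 0 means it is not resolved at all.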

However, if the charge of the photodiode or the readout circuit cannot be completely reset when the Nth frame is reset and there is image lag, the lagged signal is read out during readout of the N+1th frame.

[Figure 3-12] Example of image lag: the incident light switches from light to dark at time T; the Nth frame outputs the light signal, and the N+1th and N+2th frames output a residual image lag signal.

3-10 Shutter leak

Shutter leak is a phenomenon in which the charge of the photodiodes cannot be completely reset when light is incident near the end of the reset time.
An electric charge is generated when light enters the photodiode. During the integration time, the transfer gate is closed and the generated charge is integrated in the photodiode. Later, during reset, the transfer gate is opened and the charge is transferred to the floating diffusion (FD), resetting the photodiode charge. The transfer gate is then closed again, and integration is done in the next frame (the charges transferred to the FD are sequentially read out by the subsequent readout circuit). However, if light is incident near the end of the reset time, not all of the charge in the photodiode can be reset within the reset time, which results in shutter leak.
Figure 3-13 shows examples of shutter leak. Shutter leak is evaluated by inputting light pulses during the reset time. If the time α from the end of the light pulse (T1) to the start of the integration time (T2) is long enough, the charge generated in the photodiode is reset when the Nth frame is reset, so it does not appear in the output of the N+1th frame. However, if α is short, not all of the charge in the photodiode can be reset within the reset time, and it is read out as output of the N+1th frame.

[Figure 3-13] Examples of shutter leak: (a) when α is long enough, the pulse charge is fully reset and the N+1th frame shows no output; (b) when α is short, residual charge remains in the photodiode at the end of the reset time and appears as shutter leak in the N+1th frame output. The potential charts show the photodiode, transfer gate, and FD during the reset time (T0 to T2) and the integration time (T2 to T3).
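The α criterion above can be phrased as a simple timing check. This is a hypothetical sketch only; the minimum reset margin is an arbitrary illustrative parameter, not a device specification.

```python
def has_shutter_leak(pulse_end_t1, integration_start_t2, min_reset_margin):
    """Leak is expected when alpha = T2 - T1 is too short for the charge
    generated by the light pulse to be fully reset before integration."""
    alpha = integration_start_t2 - pulse_end_t1
    return alpha < min_reset_margin
```

With a (made-up) margin of 1.0 time unit, a pulse ending 0.5 units before integration starts would be flagged as leaking, while one ending 4.5 units before would not.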

11
4. Hamamatsu technologies

4-1 With A/D converters

Hamamatsu offers digital output CMOS linear image sensors with A/D (analog-to-digital) converters, in addition to analog output CMOS linear image sensors. Analog output CMOS linear image sensors convert the charges generated by the photodiodes into analog signals (voltage or current values) and then output those signals. However, in order to handle analog signals with a digital device, it is necessary to convert the analog signals into digital signals with an A/D converter. Analog output CMOS linear image sensors rely on an external A/D converter for this conversion [Figure 4-1 (a)]. CMOS linear image sensors with A/D converters do the A/D conversion with their built-in A/D converters and then output digital signals [Figure 4-1 (b)]. Because this type uses digital output, it has several advantages, including resistance to noise, high-speed readout, and ease of handling.

[Figure 4-1] A/D conversion of CMOS linear image sensors
(a) Analog output type: CMOS linear image sensor → analog signal → external A/D converter → digital signal (KMPDC0901EA)
(b) With A/D converters (digital output type): CMOS linear image sensor with built-in A/D converter → digital signal (KMPDC0902EA)

There are two types of CMOS linear image sensors with A/D converters: the serial processing method and the column parallel processing method [Figure 4-2]. In the serial processing method, A/D conversion is done by a single A/D converter installed on the chip. Because there is only one A/D converter, this method saves some space compared to the column parallel processing method; however, the pixels are converted one at a time. With the column parallel processing method, A/D conversion is done by an A/D converter connected to each pixel, so conversion is done for all pixels at the same time, making it easier to increase speed. With the column parallel processing method, the line rate can be maintained even when the number of pixels increases [Figure 4-3]. For this reason, the column parallel processing method is more suitable for CMOS linear image sensors with a large number of pixels. Hamamatsu offers the serial processing CMOS linear image sensor S15611 and the column parallel processing CMOS linear image sensor S13774.

[Figure 4-2] CMOS linear image sensors with A/D converters
(a) Serial processing method: A/D conversion is sequentially done one pixel at a time. (KMPDC0903EA)
(b) Column parallel processing method: A/D conversion is done for all pixels at the same time. (KMPDC0904EA)

[Figure 4-3] Line rate vs. number of pixels (CMOS linear image sensors with A/D converters)
Serial processing method: the line rate decreases as the number of pixels increases.
Column parallel processing method: the line rate stays steady even as the number of pixels increases.
KMPDB0639EA
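The scaling behavior shown in Figure 4-3 follows directly from the two architectures. A minimal sketch, with assumed (not datasheet) parameters: a serial ADC converting one pixel per clock at `F_ADC`, and column-parallel ADCs converting every pixel simultaneously within a fixed row time `T_ROW`.

```python
# Illustrative line-rate scaling model for the two ADC architectures.
F_ADC = 40e6    # assumed serial ADC conversion rate [pixels/s]
T_ROW = 10e-6   # assumed column-parallel row conversion time [s]

def serial_line_rate(n_pixels: int) -> float:
    """Line rate [lines/s] when one ADC converts pixels one at a time:
    the line time grows linearly with the number of pixels."""
    return F_ADC / n_pixels

def column_parallel_line_rate(n_pixels: int) -> float:
    """Line rate [lines/s] when every pixel has its own ADC:
    independent of the number of pixels."""
    return 1.0 / T_ROW

for n in (256, 1024, 4096):
    print(n, serial_line_rate(n), column_parallel_line_rate(n))
```

The serial rate falls as 1/n while the column-parallel rate stays flat, which is exactly the qualitative shape of the two curves in Figure 4-3.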

With serial processing method A/D converter: S15611
The S15611 is a compact CMOS linear image sensor that uses a serial processing method A/D converter [Figure 4-4]. High-speed readout is possible, with a readout speed of 40 MHz max. and a line rate of 34 klines/s max. Partial readout mode [Figure 4-5] and skip readout mode [Figure 4-6] make it possible to realize even higher line rates.

[Figure 4-4] With serial processing method A/D converter: S15611

[Figure 4-5] Partial readout mode example
All-pixel readout mode: readout region 1024 pixels, maximum line rate 34 klines/s.
Partial readout mode: readout region 260 pixels, maximum line rate 100 klines/s.
KMPDB0640EA

[Figure 4-6] Skip readout mode example
No skip readout: readout pixels 1024, maximum line rate 34 klines/s.
One pixel skip readout: readout pixels 512, maximum line rate 62 klines/s.
KMPDB0641EA

With column parallel processing method A/D converter: S13774
The S13774 is a 4096-pixel CMOS linear image sensor that uses a column parallel processing A/D converter, and it was developed for applications in industrial cameras which require high-speed scanning [Figure 4-7]. A/D conversion is done for all pixels at the same time, making it possible to do high-speed readout (line rate: 100 klines/s) even with 4096 pixels.

[Figure 4-7] With column parallel processing method A/D converter: S13774

4-2 For reduction optical system and close contact optical system

There are two types of CMOS linear image sensor optical systems: the reduction optical system and the close contact optical system [Figure 4-8]. With a reduction optical system, a lens is used to form a reduced image of the object on the sensor. Due to the large depth of field, this type is suitable for imaging uneven or three-dimensional objects. In the close contact optical system, a rod lens array is used to image objects on the sensor at their actual size. This makes it possible to image a wide region in a small space without the need for a large-scale device. However, the depth of field is smaller than that of the reduction optical system. The close contact optical system is suitable for imaging long, narrow, and flat objects.
Hamamatsu offers CMOS linear image sensors for use with reduction optical systems or close contact optical systems.

[Figure 4-8] Optical systems of CMOS linear image sensor
(a) Reduction optical system: a lens forms a reduced image of the object (illuminated by the light source) on the CMOS linear image sensor. (KMPDC0905EA)
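The S15611 line rates quoted for the different readout modes are consistent with a simple timing model: each line takes a fixed overhead plus one 40 MHz clock (25 ns) per pixel actually read out. The overhead value below (~3.5 µs) is a fitted assumption, not a datasheet figure; the sketch only shows why reading fewer pixels raises the line rate.

```python
# Hypothetical S15611-style timing model (assumed per-line overhead).
F_READOUT = 40e6       # max. readout speed from the text [Hz]
T_OVERHEAD = 3.5e-6    # assumed fixed per-line overhead [s]

def line_rate(n_readout_pixels: int) -> float:
    """Approximate line rate [lines/s] for a given number of pixels
    read out per line (partial/skip modes read out fewer pixels)."""
    return 1.0 / (T_OVERHEAD + n_readout_pixels / F_READOUT)

print(round(line_rate(1024)))  # all-pixel readout: roughly 34 klines/s
print(round(line_rate(260)))   # partial readout: roughly 100 klines/s
print(round(line_rate(512)))   # one pixel skip: roughly 61-62 klines/s
```

With these assumed numbers, the model reproduces the 34 / 100 / 62 klines/s figures from Figures 4-5 and 4-6 to within a few percent.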

(b) Close contact optical system: a rod lens array images the object (illuminated by the light source) onto the CMOS linear image sensor at actual size. (KMPDC0906EA)

Long and narrow type for close contact optical systems: S11720 series
The S11720 series is a long and narrow CMOS linear image sensor developed for close contact optical systems. Hamamatsu has realized a photosensitive area that is long in the horizontal direction by arranging CMOS chips in a row with high accuracy using Hamamatsu packaging technology [Figure 4-9]. It also has built-in A/D converters and uses digital output. It can be used for print inspection, film inspection, and similar applications in combination with a rod lens array for close contact optical systems.

[Figure 4-9] Structure
(a) S11720-20: chips 1 to 6, each with an A/D converter, arranged in a row; effective photosensitive area length 194.972 mm. (KMPDC0907EA)
(b) S11720-40: chips 1 to 12, each with an A/D converter, arranged in a row; effective photosensitive area length 390.044 mm. (KMPDC0908EA)

4-3 Smooth spectral response

Interference between incident light and light reflected at the protective film of the photodiode may cause peaks and valleys (strength/weakness) in the spectral response (see "3-1 Spectral response"). Hamamatsu has developed the CMOS linear image sensor S15908/S15909 series for spectrophotometry. This series realizes a smooth spectral response from the ultraviolet region to the near infrared region by forming a fine structure on the photosensitive area [Figure 4-10].

[Figure 4-10] Spectral response (S15908/S15909 series)
Photosensitivity (A/W) vs. wavelength (200 to 1000 nm, Ta=25 °C) for the S15908/S15909 series compared with a previous product. (KMPDB0646EA)

4-4 High VUV sensitivity

Hamamatsu offers a CMOS linear image sensor in which we realized high sensitivity in the VUV (vacuum ultraviolet) region of 200 nm or shorter wavelength by improving the photosensitive area [Figure 4-12]. It is suitable for applications that involve measurement in the VUV range, such as light emission analysis.

[Figure 4-12] Spectral response (VUV-enhanced type)
Photosensitivity (A/W) vs. wavelength (100 to 200 nm, Typ. Ta=25 °C) for the high VUV sensitivity type compared with a previous product. (KMPDB0647EA)

4-5 COB package

We have made the installation area smaller by mounting the CMOS linear image sensor chip in a thin and compact COB (chip on board) package of nearly the same size as the chip. COB package CMOS linear image sensors contribute to cost reduction, miniaturization, and high-volume producibility of equipment. They are used for a wide range of applications, including barcode readers and encoders.
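The spectral response curves in Figures 4-10 and 4-12 are given as photosensitivity in A/W. A common companion figure of merit is quantum efficiency (QE), related by QE = S·hc/(eλ), i.e. QE ≈ S[A/W] × 1239.84 / λ[nm]. This is standard photodetector physics, not a Hamamatsu-specific formula; a small converter:

```python
# Convert photosensitivity [A/W] to quantum efficiency (general
# physics relation QE = S * h*c / (e * lambda)).
def quantum_efficiency(sensitivity_a_per_w: float, wavelength_nm: float) -> float:
    """QE (0..1) from photosensitivity [A/W] at a wavelength [nm]."""
    hc_over_e_nm = 1239.84  # h*c/e expressed in eV*nm
    return sensitivity_a_per_w * hc_over_e_nm / wavelength_nm

# Example: 0.3 A/W at 600 nm corresponds to a QE of about 62%.
print(f"{quantum_efficiency(0.3, 600.0):.3f}")
```

Reading a point (λ, S) off either spectral response curve and passing it through this conversion gives the fraction of incident photons that produce collected electrons at that wavelength.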

[Figure 4-13] COB package

4-6 With color filters

The type with color filters, which transmit only light of specific wavelengths, formed on the photodiodes of the CMOS linear image sensor is capable of acquiring color information of the measurement object. The S13488 is a CMOS linear image sensor with color filters for red (630 nm), green (540 nm), and blue (460 nm).

[Figure 4-15] S13488 with color filters

[Figure 4-16] Enlarged view of the color filters (S13488, unit: µm)
Repeating red, green, and blue filters with light-shielding metal between them; dimensions indicated: 3, 11, 14, and 42 µm. (KMPDC0911EA)

[Figure 4-17] Spectral response (S13488, typical example)
Relative sensitivity (%) vs. wavelength (300 to 1000 nm, Ta=25 °C) for the red, green, and blue channels. (KMPDB0483EB)
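With the repeating red/green/blue filter pattern shown in Figure 4-16, a single output line interleaves the three colors pixel by pixel. A minimal sketch of splitting one line into per-color channels; the exact R, G, B ordering and the sample values are assumptions for illustration, not taken from the S13488 documentation:

```python
# Deinterleave an R,G,B,R,G,B,... linear sensor output line into
# separate color channels using stride-3 slicing.
def deinterleave_rgb(line: list[int]) -> dict[str, list[int]]:
    """Split an interleaved RGB pixel line into per-color channels."""
    if len(line) % 3 != 0:
        raise ValueError("line length must be a multiple of 3")
    return {
        "red": line[0::3],    # pixels 0, 3, 6, ...
        "green": line[1::3],  # pixels 1, 4, 7, ...
        "blue": line[2::3],   # pixels 2, 5, 8, ...
    }

channels = deinterleave_rgb([10, 20, 30, 11, 21, 31])
print(channels["red"], channels["green"], channels["blue"])
# [10, 11] [20, 21] [30, 31]
```

Each resulting channel then has one third of the spatial sampling of the full line, which is the usual trade-off of filter-per-pixel color linear sensors.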

5. Application examples

[Figure 5-1] Application examples of CMOS linear image sensors
(a) Spectrometers: light passes through the input slit, collimating lens, transmission grating, and focus lens onto the image sensor. (KACCC0256EA)
(b) Rangefinders (robot cleaner): light source and CMOS linear image sensor. (KMPDC0914EA)
(c) Machine vision: CMOS linear image sensor. (KMPDC0913EA)
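A rangefinder like the one in Figure 5-1 (b) typically works by triangulation: the light source projects a spot on the target, and the linear image sensor observes where the reflected spot lands. With baseline b between source and lens, lens focal length f, and spot displacement x on the sensor, similar triangles give distance z = f·b/x. The numbers below are assumed example values, not taken from this document:

```python
# Triangulation distance measurement with a linear image sensor.
def triangulation_distance(baseline_m: float, focal_m: float,
                           spot_pos_m: float) -> float:
    """Distance to target [m] from the spot displacement on the sensor
    (z = f * b / x, by similar triangles)."""
    if spot_pos_m <= 0:
        raise ValueError("spot displacement must be positive")
    return focal_m * baseline_m / spot_pos_m

# 5 cm baseline, 8 mm focal length, spot 0.4 mm off the optical axis:
print(f"{triangulation_distance(0.05, 0.008, 0.0004):.2f} m")  # 1.00 m
```

Because z varies as 1/x, nearby objects shift the spot strongly while distant objects shift it very little, which is why the resolution of such rangefinders degrades with distance.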

Information described in this material is current as of June 2021.


Product specifications are subject to change without prior notice due to improvements or other reasons. This document has been carefully prepared and the
information contained is believed to be accurate. In rare cases, however, there may be inaccuracies such as text errors. Before using these products, always
contact us for the delivery specification sheet to check the latest specifications.
The product warranty is valid for one year after delivery and is limited to product repair or replacement for defects discovered and reported to us within that one year period. However, even within the warranty period, we accept absolutely no liability for any loss caused by natural disasters or improper product use.
Copying or reprinting the contents described in this material in whole or in part is prohibited without our prior permission.

www.hamamatsu.com
HAMAMATSU PHOTONICS K.K., Solid State Division
1126-1 Ichino-cho, Higashi-ku, Hamamatsu City, 435-8558 Japan, Telephone: (81)53-434-3311, Fax: (81)53-434-5184
U.S.A.: Hamamatsu Corporation: 360 Foothill Road, Bridgewater, N.J. 08807, U.S.A., Telephone: (1)908-231-0960, Fax: (1)908-231-1218, E-mail: [email protected]
Germany: Hamamatsu Photonics Deutschland GmbH: Arzbergerstr. 10, D-82211 Herrsching am Ammersee, Germany, Telephone: (49)8152-375-0, Fax: (49)8152-265-8, E-mail: [email protected]
France: Hamamatsu Photonics France S.A.R.L.: 19, Rue du Saule Trapu, Parc du Moulin de Massy, 91882 Massy Cedex, France, Telephone: (33)1 69 53 71 00, Fax: (33)1 69 53 71 10, E-mail: [email protected]
United Kingdom: Hamamatsu Photonics UK Limited: 2 Howard Court, 10 Tewin Road, Welwyn Garden City, Hertfordshire AL7 1BW, UK, Telephone: (44)1707-294888, Fax: (44)1707-325777, E-mail: [email protected]
North Europe: Hamamatsu Photonics Norden AB: Torshamnsgatan 35 16440 Kista, Sweden, Telephone: (46)8-509 031 00, Fax: (46)8-509 031 01, E-mail: [email protected]
Italy: Hamamatsu Photonics Italia S.r.l.: Strada della Moia, 1 int. 6, 20044 Arese (Milano), Italy, Telephone: (39)02-93 58 17 33, Fax: (39)02-93 58 17 41, E-mail: [email protected]
China: Hamamatsu Photonics (China) Co., Ltd.: 1201 Tower B, Jiaming Center, 27 Dongsanhuan Beilu, Chaoyang District, 100020 Beijing, P.R.China, Telephone: (86)10-6586-6006, Fax: (86)10-6586-2866, E-mail: [email protected]
Taiwan: Hamamatsu Photonics Taiwan Co., Ltd.: 8F-3, No. 158, Section2, Gongdao 5th Road, East District, Hsinchu, 300, Taiwan R.O.C. Telephone: (886)3-659-0080, Fax: (886)3-659-0081, E-mail: [email protected]

Cat. No. KMPD9017E01 Jun. 2021 DN

