Solid-State Color Night Vision:
Fusion of Low-Light Visible and
Thermal Infrared Imagery
Allen M. Waxman, Mario Aguilar, David A. Fay, David B. Ireland, Joseph P. Racamato, Jr.,
William D. Ross, James E. Carrick, Alan N. Gove, Michael C. Seibert, Eugene D. Savoye,
Robert K. Reich, Barry E. Burke, William H. McGonagle, and David M. Craig
We describe an apparatus and methodology to support real-time color imaging
for night operations. Registered imagery obtained in the visible through near-
infrared band is combined with thermal infrared imagery by using principles of
biological opponent-color vision. Visible imagery is obtained with a Gen III image
intensifier tube fiber-optically coupled to a conventional charge-coupled device
(CCD), and thermal infrared imagery is obtained by using an uncooled thermal
imaging array. The two fields of view are matched and imaged through a dichroic
beam splitter to produce realistic color renderings of a variety of night scenes. We
also demonstrate grayscale and color fusion of intensified-CCD/FLIR imagery.
Progress in the development of a low-light-sensitive visible CCD imager with high
resolution and wide intrascene dynamic range, operating at thirty frames per
second, is described. Example low-light CCD imagery obtained under controlled
illumination conditions, from full moon down to overcast starlight, processed by
our adaptive dynamic-range algorithm, is shown. The combination of a low-light
visible CCD imager and a thermal infrared microbolometer array in a single dual-
band imager, with a portable image-processing computer implementing our neural-
net algorithms, and color liquid-crystal display, yields a compact integrated version
of our system as a solid-state color night-vision device. The systems described here
can be applied to a large variety of military operations and civilian needs.
Current night operations are enabled
through imaging in the visible–near-infrared
band, as provided by Gen III image intensi-
fier tubes in night-vision goggles, and in the thermal
infrared (IR) bands, supported by a variety of for-
ward-looking infrared (FLIR) imaging devices (both
scanners and IR focal-plane arrays) displayed on
monitors, the cockpit heads-up display, or combiner
optics [1, 2]. These dual sensing modalities are com-
plementary, in that the intensifier tubes amplify re-
flected moonlight and starlight (primarily yellow
through near-infrared light), whereas the FLIR senses
thermally emitted light (in the mid-wave or long-
wave infrared) from objects in the scene. Each sensing
modality has its own limitations, which at times can
be disorienting [3], while alternating between these
modalities can be difficult, confusing, and distracting
[4]. However, there is much utility in fusing this
complementary imagery in real time into a single im-
age product. This article describes a methodology to
provide such fused imagery in color and in real time.
Prior to our work [5, 6], existing methods for vis-
ible/infrared image fusion were based on taking local
measures of image contrast, choosing between the vis-
ible and infrared image on a pixel-by-pixel basis, and
attempting to maximize contrast [7, 8]. The result is a
grayscale fused-image product that combines features
(and noise) present in each of the separate image
bands. Texas Instruments Corporation (now Ray-
theon Systems) has developed a similar system for the
grayscale fusion of intensified visible and FLIR imag-
ery (the methods of which are proprietary). This sys-
tem has been tested by the U.S. Army Night Vision
and Electronic Sensors Directorate (NVESD) under
its Advanced Helicopter Pilotage Program [9].
Recognizing that color vision evolved in animals
for survival purposes, we describe in the following
section a methodology, based on biological oppo-
nent-color vision, to fuse registered visible and infra-
red imagery in real time in order to create a vivid
color night-vision capability, as shown in the section
entitled "Dual-Band Visible/Infrared Imagers and
Fusion Results." Utilizing full (24-bit digital) color
allows for simultaneous presentation of multiple
fused-image products. The user's visual system can
then exploit this coloring to aid perceptual pop-out of
extended navigation cues and compact targets [10,
11]. The ability to generate a rich color percept from
dual-band imagery was first demonstrated experi-
mentally in the visible (red and white imagery) do-
main by E.H. Land [12, 13], and motivated his fa-
mous retinex theory of color vision [14], which itself
lacked any notion of opponent color.
In the latter part of the article we summarize our
work on the development of low-light-sensitive CCD
cameras, which are sensitive from the ultraviolet
through near infrared, and which operate at thirty
frames per second in controlled illumination condi-
tions from full moon to overcast starlight. These
solid-state imagers possess extremely high quantum
efficiency and low read-out noise, which together
yield an extreme low-light sensitivity and support a
large intrascene dynamic range. Their utility is in-
creased by our retina-like computations that enhance
visual contrast and adaptively compress dynamic
range in real time. These CCDs for night vision
emerge from technology originally developed at Lin-
coln Laboratory [15, 16] for high-frame-rate applica-
tions (i.e., adaptive optics and missile seekers). They
represent the beginning of the technology curve for
solid-state visible night vision, and they are comple-
mented by emerging solid-state uncooled thermal in-
frared imagers [17], as well as a variety of cryogeni-
cally cooled infrared focal-plane arrays.
We conclude with a discussion on the importance
of conducting human perception and performance
testing on natural dynamic scenes in order to assess
the true utility of visible/infrared fusion and color
night vision for enhanced situational awareness and
tactical efficiency.
Visible/Infrared Fusion Architecture
The basis of our computational approach for image
fusion derives from biological models of color vision
and visible/infrared fusion. In the case of color vision
in monkeys and man, retinal cone sensitivities are
broad and overlapping, but the images are quickly
contrast enhanced within bands by spatial opponent
processing via cone-horizontal-bipolar cell interac-
tions creating both ON and OFF center-surround re-
sponse channels [18]. These signals are then color-
contrast enhanced between bands via interactions
among bipolar, sustained amacrine, and single-oppo-
nent color ganglion cells [19, 20], all within the
retina. Further color processing in the form of
double-opponent color cells is found in the primary
visual cortex of primates (and the retinas of some
fish). Opponent processing interactions form the ba-
sis of such percepts as color opponency, color con-
stancy, and color contrast, though the exact mecha-
nisms are not fully understood. (See section 4 of
Reference 21, and Reference 22, for development of
double-opponent color processing applied to multi-
spectral infrared target enhancement.)
Fusion of visible and thermal infrared imagery has
been observed in several classes of neurons in the op-
tic tectum (evolutionary progenitor of the superior
colliculus) of rattlesnakes (pit vipers) and pythons
(boid snakes), as described by E.A. Newman and P.H.
Hartline [23, 24]. These neurons display interactions
in which one sensing modality (e.g., infrared) can en-
hance or depress the response to the other sensing
modality (e.g., visible) in a strongly nonlinear fash-
ion. These tectum cell responses relate to (and per-
haps control) the attentional focus of the snake, as
observed by its striking behavior. This discovery pre-
dates the observation of bimodal visual/auditory fu-
sion cells in the superior colliculus [25].
Moreover, these visible/infrared fusion cells are sug-
gestive of ON and OFF channels feeding single-op-
ponent color-contrast cells, a strategy that forms the
basis of our computational model.
There are also physical motivations for our ap-
proach to fusing visible and infrared imagery, revealed
by comparing and contrasting the different needs of a
vision system that processes reflected visible light (in
order to deduce reflectivity ρ) versus one that pro-
cesses emitted thermal infrared light (in order to de-
duce emissivity ε). Simple physical arguments show
that spectral reflectivity and emissivity are linearly re-
lated, ε(λ) = 1 − ρ(λ), which also suggests the utility
of ON and OFF response channels. Thus it is not sur-
prising that FLIR imagery often looks more natural
when viewed with reverse polarity ("black hot" as op-
posed to "white hot," suggestive of OFF-channel pro-
cessing [18]). This simple relation strongly suggests
that processing anatomies designed to determine
reflectivity may also be well suited for determining
emissivity; therefore, computational models of these
anatomies will also be well suited for determining
both reflectivity and emissivity.
Figure 1 illustrates the multiple stages of process-
ing in our visible/infrared fusion architecture. These
stages mimic both the structure and function of the
layers in the retina (from the rod and cone photode-
tectors through the single-opponent color ganglion
cells), which begin the parvocellular stream of form
and color processing. The computational model that
underlies all the opponent processing stages utilized
here is the feed-forward center-surround shunting
neural network of S. Grossberg [26, 27]. This model
is used to enhance spatial contrast within the separate
visible and infrared bands, to create both positive
(ON-IR) and negative (OFF-IR) polarity infrared
contrast images, and to create two types of single-op-
ponent color-contrast images. These opponent-color
images already represent fusion of visible and infrared
imagery in the form of grayscale image products.
However, the two opponent-color images together
with the enhanced visible image form a triple that can
be presented as a fused color image product.
The neurodynamics of the center-surround recep-
FIGURE 1. Neurocomputational architecture for the fusion of low-light visible and thermal infrared imagery,
based on principles of opponent processing within and between bands, as motivated by the retina. [Diagram:
registered, noise-cleaned, distortion-corrected visible and infrared imagery receives contrast enhancement and
adaptive normalization; the infrared band forms ON and OFF channels; single-opponent color contrast (warm
red, cool blue) feeds image selection and RGB/HSV conversion; hue remap and desaturation via color remap
tables, followed by HSV/RGB conversion, drive the color display with fused or remapped color.]
tive fields is described at pixel ij by the equations

$$\frac{dE_{ij}}{dt} = -A\,E_{ij} + (1 - E_{ij})\left[ C\,I^{C} \right]_{ij} - (1 + E_{ij})\left[ G_{S} * I^{S} \right]_{ij}\,, \tag{1}$$

which, in equilibrium, yields

$$E_{ij} = \frac{\left[ C\,I^{C} - G_{S} * I^{S} \right]_{ij}}{A + \left[ C\,I^{C} + G_{S} * I^{S} \right]_{ij}}\,, \tag{2}$$

where E is the opponent-processed enhanced image, I^C is the input image that excites the single-pixel
center of the receptive field (a single-pixel center is used to preserve resolution of the processed images),
and I^S is the input image that inhibits the Gaussian surround G_S of the receptive field. Equation 1 describes
the temporal dynamics of a charging neural mem-
brane (cf. capacitor) that leaks charge at rate A and
has excitatory and inhibitory input ion currents de-
termined by Ohm's law. The shunting coefficients
(1 − E) and (1 + E) act as potential differences across the mem-
brane, and the input image signals modulate the ion-
selective membrane conductances. Equation 2 de-
scribes the equilibrium of Equation 1 that is rapidly
established at each pixel (i.e., at frame rate), and de-
fines a type of nonlinear image processing with pa-
rameters A, C, and size of the Gaussian surround. The
shunting coefficients of Equation 1 clearly imply that
the dynamic range of the enhanced image E is
bounded, −1 < E < 1, regardless of the dynamic range
of the input imagery. When the imagery that feeds
the center and Gaussian surround is taken from the
same input image (visible or infrared), the numerator
of Equation 2 is the familiar difference-of-Gaussians
filtering that, for C > 1, acts to boost high spatial fre-
quencies superimposed on the background. The de-
nominator of Equation 2 acts to adaptively normalize
this contrast-enhanced imagery on the basis of the lo-
cal mean. In fact, Equation 2 displays a smooth tran-
sition between linear filtering (when A exceeds the lo-
cal mean brightness, such as in dark regions) and ratio
processing (when A can be neglected as in bright re-
gions of the imagery). These properties are particu-
larly useful for processing the wide-dynamic-range
visible imagery obtained with low-light CCDs, as de-
scribed in the latter part of the article. Equation 2 is
used to process separately the input visible and infra-
red imagery. These enhanced visible and ON-IR im-
ages are reminiscent of the lightness images postu-
lated in Land's retinex theory [14] (also see Grossberg
on "discounting the illuminant" [26]).
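To make Equation 2 concrete, the following minimal sketch implements the center-surround shunting operator with NumPy and SciPy. The parameters A and C and the Gaussian surround follow the text; the function name, the default parameter values, and the use of gaussian_filter are our own illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt_enhance(center_img, surround_img=None, A=10.0, C=3.0, sigma=4.0):
    """Equilibrium center-surround shunting network (Equation 2).

    center_img excites the single-pixel center; surround_img (defaulting
    to center_img for within-band enhancement) inhibits the Gaussian
    surround. The output is bounded to (-1, 1) regardless of the input
    dynamic range.
    """
    if surround_img is None:
        surround_img = center_img
    center = C * np.asarray(center_img, dtype=np.float64)
    surround = gaussian_filter(np.asarray(surround_img, dtype=np.float64), sigma)
    # Numerator: difference-of-Gaussians contrast enhancement (for C > 1);
    # denominator: adaptive normalization by the local mean, giving linear
    # filtering in dark regions (A dominant) and ratio processing in
    # bright regions (A negligible).
    return (center - surround) / (A + center + surround)
```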
A modified version of Equation 1, with an inhibi-
tory center and excitatory surround, is also used to
create an enhanced OFF-IR image (i.e., a reverse-po-
larity enhanced infrared image). After reducing noise
in the imagery (both real-time median filtering and
non-real-time boundary-contour and feature-contour
system processing [26, 21] have been explored), and
correcting for distortion to ensure image registration,
we form two grayscale fused single-opponent color-
contrast images by using Equation 2 with the en-
hanced visible feeding the excitatory center and the
enhanced infrared (ON-IR and OFF-IR, respec-
tively) feeding the inhibitory surround. In analogy to
the primate opponent-color cells [20], we label these
two single-opponent images +Vis − IR and +Vis + IR.
In all cases, we retain only positive responses for these
various contrast images. Additional application of
Equation 2 to these two single-opponent images
serves to sharpen their appearance, restoring their
resolution to the higher of the two images (usually
visible) used to form them. These images then repre-
sent a simple form of double opponent-color contrast
between visible and ON/OFF-IR.
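Continuing the sketch above, the single-opponent stage might be assembled as follows. The rectification with np.maximum reflects the retention of only positive responses, and negating the within-band output is algebraically equivalent to the modified Equation 1 with inhibitory center and excitatory surround; the helper name single_opponent_fusion is hypothetical.

```python
def single_opponent_fusion(vis, ir):
    """Form the enhanced visible, +Vis - IR, and +Vis + IR grayscale products."""
    vis_e  = np.maximum(shunt_enhance(vis), 0.0)    # enhanced visible
    on_ir  = np.maximum(shunt_enhance(ir), 0.0)     # ON-IR contrast image
    off_ir = np.maximum(-shunt_enhance(ir), 0.0)    # OFF-IR (reverse polarity)
    # Enhanced visible excites the center; enhanced ON/OFF-IR inhibits
    # the surround of each single-opponent image.
    vis_minus_ir = np.maximum(shunt_enhance(vis_e, on_ir), 0.0)    # +Vis - IR
    vis_plus_ir  = np.maximum(shunt_enhance(vis_e, off_ir), 0.0)   # +Vis + IR
    # A further pass of shunt_enhance over the two opponent images would
    # sharpen them toward the resolution of the visible band, as in the text.
    return vis_e, vis_minus_ir, vis_plus_ir
```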
Our two opponent-color contrast images are
analogous to the infrared-depressed-visual and infra-
red-enhanced-visual cells, respectively, of the rattle-
snake [23, 24]; they even display similar nonlinear
behavior. In fact, because the infrared image has lower
resolution than the visible image (in the snake and in
man-made uncooled infrared imagers), a single infra-
red pixel may sometimes be treated as a small sur-
round for its corresponding visible pixel. In this con-
text, our opponent-color contrast images can also be
interpreted as coordinate rotations in the color space
of visible versus infrared, along with local adaptive
scalings of the new color axes. Such color-space trans-
formations were fundamental to Land's analyses of
his dual-band red and white colorful imagery [12–14].
To achieve a natural color presentation of these op-
ponent images (each is an eight-bit grayscale image),
we assign the following color channels (eight bits
each) to our digital imagery: (1) enhanced Vis to
green, (2) +Vis − IR to blue, and (3) +Vis + IR to red.
These channels are consistent with our natural asso-
ciations of "warm red" and "cool blue." Finally, these
three channels are interpreted as RGB (red, green,
blue) inputs to a color remapping stage in which, fol-
lowing conversion to HSV (hue, saturation, value)
color space, hues are remapped to alternative (i.e.,
more natural) hues, colors are desaturated, and the
images are then reconverted to RGB signals to drive a
color display. The result is a fused color presentation of
visible/infrared imagery.
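A hedged sketch of this display stage, continuing the examples above: each rectified opponent image is scaled linearly to eight bits (our assumption) and stacked into the stated RGB assignment; the HSV remapping that follows is sketched separately in the section on color remapping.

```python
def fuse_to_rgb(vis_e, vis_minus_ir, vis_plus_ir):
    """Stack the three opponent images as a 24-bit fused color image."""
    to8 = lambda img: np.clip(255.0 * img, 0.0, 255.0).astype(np.uint8)
    return np.stack([to8(vis_plus_ir),    # red:   +Vis + IR ("warm")
                     to8(vis_e),          # green: enhanced visible
                     to8(vis_minus_ir)],  # blue:  +Vis - IR ("cool")
                    axis=-1)
```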
Dual-Band Visible/Infrared Imagers
and Fusion Results
We have developed several dual-band imaging sys-
tems to collect registered visible and long-wave infra-
red (LWIR) imagery in the field at night, as shown in
Figure 2 [6]. In our first-generation system, shown in
Figure 2(a), the visible imagery is obtained by using a
Gen III image intensifier tube optically coupled to a
conventional CCD (supporting a resolution of 640 ×
480 pixels), while the thermal infrared imagery is ob-
tained by using an uncooled ferroelectric detector ar-
ray developed by Texas Instruments Corp. (support-
ing a resolution of approximately 320 × 240 pixels).
The two fields of view (about 30° wide) are matched
and imaged through a dichroic beam splitter. In our
second-generation system, shown in Figure 2(b), we
utilize a Lincoln Laboratory low-light CCD to ac-
quire visible imagery at a resolution of 640 × 480 pix-
els, in conjunction with the uncooled LWIR camera.
An alternative LWIR imager we currently use in
our third-generation system is the silicon micro-
bolometer array originally developed by Honeywell
Corp. [17]. For long-standoff distance (narrow field
of view) imagery, we plan to use a cryogenically
cooled infrared imager. In the field we record syn-
chronized dual-band time-stamped imagery on two
Hi-8 videotape recorders for later processing back in
our lab. We also perform real-time computations on
Matrox Genesis boards using new TMS320C80
multi-DSP chips from Texas Instruments. For com-
pact portable systems, a head-mounted display could
FIGURE 2. Dual-band visible/long-wave-infrared (LWIR) im-
agers. (a) Sensor pod consisting of a Gen III intensified
CCD, an uncooled LWIR imager, and a dichroic beam split-
ter. (b) Sensor pod consisting of a Lincoln Laboratory low-
light CCD, an uncooled LWIR camera, and a dichroic beam
splitter. (c) Design of a monocular solid-state color night-
vision scope.
[Diagram labels: (a) visible CCD camera, Gen III intensifier tube, uncooled LWIR camera, dichroic beam
splitter; (b) Lincoln Laboratory low-light CCD (640 × 480 pixels), uncooled LWIR (320 × 240 pixels),
motorized aperture, rotating shutter, pan/tilt mount, dichroic beam splitter, portable image-processing
computer; (c) dichroic beam splitter, low-light CCD imaging chip, thermal IR imaging chip, thermoelectric
cooler, color LCD display chip, user's eye, visible/near-IR and thermal IR light paths, visible and IR video
inputs, color video out.]
FIGURE 3. Dual-band imagery of the town of Gloucester, Massachusetts, at dusk, with three embedded low-contrast square
targets. (a) Enhanced visible image taken with a Gen III intensified CCD; (b) enhanced thermal IR image taken with an uncooled
IR camera; (c) gray fused opponent-color (blue channel) image; (d) gray fused opponent-color (red channel) image; (e) color
fused image; (f) remapped color fused image. Note that the color fused images support the perceptual pop-out of all three em-
bedded targets from the background.
utilize a solid-state high-resolution color LCD display
on a chip, such as the displays being manufactured by
Kopin Corp., along with a low-power fusion proces-
sor utilizing custom application-specific integrated
circuits (ASIC). For vehicle-based applications in
which the user is behind a windscreen (which does
not transmit thermal infrared light), the dual-band
sensor is placed in an external turret or pod with an
appropriately transmissive window, while the real-
time fusion results are displayed on color helmet-
mounted displays or on a monitor.
We are planning to shrink our dual-band sensor to
a size of several inches, which would be suitable for
use as a hand-held or helmet-mounted color night-vi-
sion device (or mounted as a gunsight) for the soldier
on the ground. Conceptually, a compact dual-band
color night-vision scope could be laid out according
to Figure 2(c), in which much of the camera electron-
ics is remotely located away from the low-light CCD
imager and microbolometer array [5].
A dual-band visible/LWIR scene of Gloucester,
Massachusetts, is shown in each of the panels of Fig-
ure 3, which includes three embedded low-contrast
(15% or less) square targets that modulate brightness
but do not alter texture in the original visible and in-
frared images. This imagery was taken under dusk
illumination conditions (no moon) with our first-
generation system, shown in Figure 2(a), in January
1995. Note the complementary information present
in the visible and infrared imagery, where the horizon
and water line are obvious in the infrared but not in the
visible image, while the ground detail is revealed in
the visible but not the infrared. The enhanced visible,
enhanced thermal infrared, both opponent-color
contrast (i.e., fused gray), fused color, and remapped
fused color images are shown in Figure 3. In the fused
color images in Figures 3(e) and 3(f), the horizon is
clearly rendered, as are the houses and shrubs on the
ground, the water line on the rocks, and ripples on
the water surface. The enhanced contrast afforded by
the color now supports the perceptual pop-out of all
three embedded targets, one of which (in the water) is
weakened in the gray fused image (c) and one (on the
land) is lost in the gray fused image (d). Note that the
FIGURE 4. Nahant beach on the Atlantic Ocean in overcast near-full-moon illumination conditions. Dual-band visible and ther-
mal IR imagery are combined to create grayscale and color fused images of the night scene. (a) Intensified visible image, (b)
thermal IR (FLIR) image, (c) gray fused image, and (d) color fused image.
fused color imagery inherits the higher resolution of
the visible image. In the remapped color fused image
(f), the trees and shrubs corresponding to brown in
the fused image (e) have been remapped to a greenish
hue with low saturation, and the blue water is bright-
ened. In practice, the class of color remap selected by
the user (in real time) will depend on the kind of mis-
sion undertaken.
Figure 4 illustrates a scene taken at Nahant beach
on the Atlantic Ocean, on an overcast night with
near-full moon in January 1995. We illustrate (a) the
enhanced visible, (b) the enhanced thermal infrared,
(c) the gray fused (blue channel) opponent color, and
(d) the unremapped color fused imagery. In the color
fused image, notice how the water and surf easily seg-
ment from the sand, and how the horizon is clear over
the water. A concrete picnic table and asphalt bicycle
path are also in the foreground. Real-time processing
of this scene is quite dramatic, and the incoming
waves are clearly apparent. Notice that the gray fused
imagery displays an enhanced surf but a weak hori-
zon. Clearly, even the low resolution and low sensitiv-
ity of the uncooled infrared imager seem adequate in
modulating the visible imagery into a color fused re-
sult. It will be of great interest to assess the utility of
such a night-vision system for search-and-rescue op-
erations at sea.
Figures 5 and 6 present fusion results on data pro-
vided by the U.S. Army NVESD, Advanced Helicop-
ter Pilotage Program. Here an intensified CCD pro-
vides low-light visible imagery, and a cryogenically
cooled first-generation FLIR provides high-quality
thermal infrared imagery. In many respects, the FLIR
imagery is more useful than the visible imagery. By
inspecting the original visible (a) and original infrared
(b) images, however, we can clearly see how the sen-
sors complement each other. The gray fused result (c)
is shown next to the color fused result (d). In Figure 5
we see that the color fused result (d) displays a clearer
horizon, clearer tree shadows across the road, and a
better sense of depth down the road than does the
gray fused result (c). In Figure 6, both fused results
show a strong horizon, but the color fused result (d)
reveals more detail near the top of the tower and the
FIGURE 5. Road-scene imagery collected during a helicopter flight provided by the U.S. Army Night Vision and Electronic Sen-
sors Directorate (NVESD). Dual-band visible and FLIR imagery are combined to create grayscale and color fused images of the
night scene. (a) Intensified visible image, (b) thermal IR (FLIR) image, (c) gray fused image, and (d) color fused image.
communication dish on the ground, whereas the gray
fused result (c) reveals more detail on the trailer.
Color Remapping of Fused Imagery
Figures 5 and 6 show the results of color fusion as
produced by the direct output of the opponent-color
processing described earlier in the section on the vis-
ible/infrared fusion architecture. Alternatively, these
fused channels can provide input to a final color
remapping stage, as shown in the architecture dia-
gram in Figure 1. Color remappings are essentially
transformations in the HSV color space, designed to
render the fused imagery in more natural coloring.
We have developed separate color remappings for dif-
ferent classes of scenes (not for each individual scene),
such as forested imagery like Figures 5 and 6, and for
flight over water. We expect that different color
remappings will be required for desert, ice, and urban
class scenes. Figure 7 shows fused and color-
remapped examples from the Army helicopter pilot-
age program images shown in Figures 5 and 6. We
have demonstrated real-time fusion with color
remapping on videotaped imagery from an Army he-
licopter during night flight over forest and water, pro-
cessing intensified-CCD and FLIR imagery with 640 ×
480-pixel resolution at thirty frames per second
with two TMS320C80 processors.
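As an illustration of such a remapping, the sketch below pushes hues through a per-class lookup table and desaturates in HSV space, using matplotlib's vectorized color conversions; the particular "forest" table shown, which shifts warm reds toward green, is a guessed example rather than one of the remap tables actually used.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def remap_colors(rgb, hue_lut, sat_gain=0.6):
    """Remap hues through a lookup table and desaturate, in HSV space."""
    hsv = rgb_to_hsv(rgb.astype(np.float64) / 255.0)
    idx = np.minimum((hsv[..., 0] * len(hue_lut)).astype(int), len(hue_lut) - 1)
    hsv[..., 0] = hue_lut[idx]                           # hue remap
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 1)  # desaturate
    return (hsv_to_rgb(hsv) * 255.0).astype(np.uint8)

# Hypothetical "forest" class table: hues near red (thermally warm
# foliage in the fused image) are shifted toward green; blues are kept.
forest_lut = np.linspace(0.0, 1.0, 256)
forest_lut[:32] = 0.33
```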
Figure 8 shows another interesting example involv-
ing the penetration of a smokescreen. This imagery,
from an unclassified Canadian defense study, was
taken during the day with a conventional CCD vis-
ible camera and a first-generation FLIR thermal im-
ager. Clearly, the visible image provides the scenic
context, whereas the infrared image highlights hot
targets. The gray fused imagery puts this complemen-
tary information together nicely, but the color fused
and remapped result clearly separates the hot vehicles
(tow truck and helicopter) from the men running
through the smoke and the background.
Perceptual Testing
We have provided fused imagery for several kinds of
human performance testing. A.M. Waxman et al.
[28] studied human reaction time to detect artificial
FIGURE 6. Tower-scene imagery provided by the U.S. Army NVESD. Dual-band visible and FLIR imagery is combined to create
grayscale and color fused images of the night scene. (a) Intensified visible image, (b) thermal IR (FLIR) image, (c) gray fused
image, and (d) color fused image.
targets (i.e., contrast modulations) embedded in a real
scene, as shown in Figure 3. Reaction times were
compared for gray and color fused, as well as original
sensor imagery. A. Toet et al. [29] provided visible
and infrared imagery of a person walking among
shrubs and sand, taken during early morning hours in
which both visible and thermal contrast in the scene
were low. Our gray and color fused results were com-
pared to those of Toet and J. Walraven [30], as well as
the individual sensor imagery, for the task of detect-
ing and localizing the person designated as the target
in the scene. Twenty-seven frames were used to test
the subject population. P.M. Steele and P. Perconti
[31] conducted tests on military helicopter pilots by
using Army helicopter imagery (both stills and video
sequences) to assess accuracy and reaction time in de-
tecting objects and the horizon, as well as determin-
ing image quality. Steele and Perconti compared our
FIGURE 8. Smokescreen penetration and target pop-out is achieved through the color fusion of visible CCD and FLIR imagery
in this daytime scene (imagery provided through the Canadian Defense Research Establishment, Valcartier, Québec, as part of
a NATO study). (a) Intensified visible image, (b) thermal IR (FLIR) image, (c) gray fused image, and (d) color fused image.
FIGURE 7. Color fused remappings of the road-scene imagery in Figure 5 and the tower-scene imagery in Figure 6. Color
remapping transforms the red/blue thermal colors of fused imagery to more natural and familiar hues.
gray and color fused imagery to proprietary gray
fused results from Texas Instruments and a trivial
color assignment scheme from the Naval Research
Laboratory, as well as the original sensor imagery. In
all of these tests, our color fused imagery showed clear
improvements in human performance over the origi-
nal sensor imagery as well as every alternative fusion
method it was compared to.
Prototype 128 × 128 Low-Light CCD Imager
Solid-state, thinned, back-illuminated, multiported
frame-transfer CCD imagers offer enormous benefits
over electro-optic intensifier tubes, including excel-
lent quantum efficiency (>90%), broad spectral sensi-
tivity (0.3–1.1 μm), high spatial resolution, sensitivity
in overcast starlight, enormous dynamic range, anti-
blooming capability, and near-ideal modulation
transfer-function characteristics. Such CCDs with in-
tegrated electronic shutters have been fabricated and
tested at Lincoln Laboratory [15, 16]. Our near-term
target CCD imager has 640 × 480 pixels and sixteen
parallel read-out ports; it also supports twelve-bit
digital imagery at less than a 5 e− read-out-noise level,
operates at thirty frames per second with integrated
electronic shuttering and blooming drains, and re-
quires only thermoelectric cooling (as does the
noncryogenic uncooled thermal LWIR imager).
Nearly all of these capabilities have already been de-
veloped and demonstrated in different devices. We
are currently integrating them into a single imager for
night-vision applications.
Figure 9(a) illustrates a variety of low-light CCD
imagers (a coin in the center of the image provides
size comparison) including (upper left) a wafer pat-
terned with four large 1K × 1K-pixel imagers and
four smaller 512 × 512-pixel imagers, thinned to ten
microns for back illumination; (lower left) two 1K ×
1K imaging chips inside open packages with one
mounted for front illumination and the other for
back illumination; (upper right) two 512 × 512 imag-
ing chips mounted in open packages; and (lower
right) a mounted and sealed 128 × 128-pixel four-
port imager and an empty package showing the ther-
moelectric cooler upon which the imager is mounted.
Figure 9(b) shows our first laboratory prototype low-
light CCD camera built around a back-illuminated
four-port 128 × 128-pixel imager. This camera oper-
ates in the dark at thirty frames per second or less
(and was actually designed to operate in excess of five
hundred frames per second with adequate lighting).
In front of the camera is a multichip module contain-
ing all the analog circuitry for the four read-out ports;
the relatively small size of this module illustrates the
potential to build far more compact cameras. Further
FIGURE 9. Low-light CCD imagers. (a) Thinned wafer and packaged multiported CCDs with formats 1K × 1K, 512 × 512, and 128 ×
128 pixels; (b) prototype low-light camera using a four-port 128 × 128 back-illuminated CCD, and an analog circuit multichip
module (shown in the foreground).
size reduction can be realized through the use of
ASICs for the read-out and timing circuitry. This
camera operates at thirty frames per second with a
measured read-out noise of about 5 e−.
Figure 10 illustrates imagery obtained in the labo-
ratory with the camera shown in Figure 9(b), under
controlled lighting conditions from full moon down
to overcast starlight (as measured at the scene with a
photometer calibrated for a Gen III intensifier tube,
FIGURE 10. Low-light CCD imagery taken at video frame rates under controlled illumination conditions as indicated. The top
row shows raw twelve-bit imagery scaled so that minimum and maximum map to zero and 255. The bottom row shows the corre-
sponding eight-bit imagery obtained from center-surround shunt neural processing of the original twelve-bit imagery.
[Panel illumination conditions, left to right: full moon, 33 mLux, 30 frames/sec; quarter moon, 6.6 mLux,
30 frames/sec; starlight, 1.9 mLux, 30 frames/sec; below starlight, 1.0 mLux, 30 frames/sec; overcast
starlight, 0.2 mLux, 6 frames/sec.]
FIGURE 11. Low-light CCD imagery taken at White Sands, New Mexico, under starlight conditions, originally 1K × 1K pixels.
(a) The high end of the twelve-bit dynamic range; (b) the low end of the twelve-bit dynamic range; (c) the entire eight-bit dy-
namic range after center-surround shunt neural processing of the original twelve-bit imagery captures all scenic details.
using a calibrated light source with a blue-cut filter).
The scene consists of a 50% contrast resolution chart,
and a toy tank in the full-moon example. All images,
except for overcast starlight, were taken at thirty
frames per second; for overcast starlight the frame rate
was reduced to six frames per second. We can obtain
better quality imagery at starlight or below by reduc-
ing the frame rate below thirty frames per second,
thereby integrating photons directly on the imager
without the penalty of accumulating additional read-
out noise. Across the top row of Figure 10 we show
the original twelve-bit imagery scaled such that the
minimum pixel value is set to zero and the maximum
pixel value is set to 255 on an eight-bit grayscale dis-
play. This scaling is possible only because of the sim-
plicity of the scene and uniformity of lighting. Across
the bottom row of Figure 10 we show the correspond-
ing images after processing the twelve-bit data with
the center-surround shunt processing of Equation 2.
In all cases we can see that contrast has been enhanced
and the dynamic range has been adaptively com-
pressed to only eight bits. All images were processed
exactly the same, without individual adjustments.
Figure 11 shows an example of a 640 × 480-pixel
low-light CCD image. The original image, taken at
White Sands, New Mexico, in 1994 under starlight
conditions, is approximately 1K × 1K pixels, digitized
to twelve bits (4096 gray levels). This high-resolution
imagery was taken at a relatively low frame rate (five
frames per second), in order to maintain low read-out
noise over the imager's four read-out ports. Figures
11(a) and 11(b) are the same image shown at opposite
ends of the twelve-bit dynamic range. At the high end
of the dynamic range, Figure 11(a) shows the stars in
the sky and the horizon, but nothing is visible on the
ground. At the low end of the dynamic range, Figure
11(b) shows the presence of vehicles on the ground,
but the sky and dome are saturated white. This enor-
mous dynamic range is a tremendous asset for night
FIGURE 12. (a) The new CCD camera with associated electronics, including thermoelectric-cooler controller (top), digitizer/
multiplexer (middle), and power supply (bottom). (b) Packaged CCD imager mounted on a two-stage internal thermoelectric
cooler. (c) CCD imager inserted into camera chassis with read-out electronics. This camera is utilized in the dual-band imaging
pod shown in Figure 2(b).
imaging, since the moon and cultural lighting can
dominate the high end, while objects and shadows on
the ground may be apparent only at the low end of
the dynamic range (and would ordinarily be lost due
to the automatic gain control of an intensifier tube).
The center-surround shunting neural networks of
Equation 2 can exploit the contrast inherent in the
wide-dynamic-range CCD imagery while adaptively
normalizing the local data to a dynamic range well
suited to only 256 gray levels (i.e., an eight-bit display
range). And the computations can be carried out in
real time, even at high data rates. Figure 11(c) shows
the result of this neural processing, where we can eas-
ily see the stars in the sky, the buildings on the hori-
zon, the vehicles on the ground, and the telescope
dome without any saturation at either end of the dy-
namic range.
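A usage sketch of this compression step, reusing shunt_enhance from the fusion-architecture section; the synthetic frame, the normalization by 4095, and the affine mapping of the bounded output onto 256 gray levels are all illustrative assumptions.

```python
import numpy as np

# Stand-in for a twelve-bit wide-dynamic-range CCD frame.
raw12 = np.random.randint(0, 4096, size=(480, 640)).astype(np.float64)

# Center-surround shunting (shunt_enhance, defined earlier) enhances
# contrast and adaptively normalizes; output is bounded to (-1, 1).
enhanced = shunt_enhance(raw12 / 4095.0, A=0.05, C=3.0, sigma=4.0)

# Affine map onto the 256 gray levels of an eight-bit display.
display8 = np.clip(255.0 * (0.5 + 0.5 * enhanced), 0.0, 255.0).astype(np.uint8)
```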
640 × 480-Pixel Low-Light CCD Imager
The low-light CCD technology described in the pre-
vious section has been recently scaled up to produce a
640 × 480-pixel, eight-port imager with twelve-bit
dynamic range, able to operate at thirty frames per
second below starlight illumination conditions. These
imagers also contain blooming drains at each pixel, to
prevent charge spreading among neighboring pixels
in the presence of a brightness overload, and pixel
binning to reduce read-out noise when the signal-to-
noise ratio supports only lower-resolution imagery.
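For intuition only, here is a software analogue of 2 × 2 pixel binning; note that true on-chip binning sums charge before readout, so read-out noise is incurred once per binned pixel, an advantage this after-the-fact sum cannot reproduce.

```python
import numpy as np

def bin2x2(img):
    """Sum 2 x 2 pixel blocks (software analogue of on-chip binning)."""
    h = img.shape[0] // 2 * 2
    w = img.shape[1] // 2 * 2
    v = img[:h, :w].astype(np.float64)
    # Each output pixel collects the signal of four input pixels,
    # trading spatial resolution for signal level.
    return v[0::2, 0::2] + v[1::2, 0::2] + v[0::2, 1::2] + v[1::2, 1::2]
```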
Figure 12 illustrates this new camera, the packaged
640 × 480-pixel imager, and the read-out electronics.
Figure 13 shows examples of this imagery taken un-
der controlled illumination conditions from full
moon down to half starlight. The imagery shown in
the figure incorporates adaptive processing for noise
reduction, contrast enhancement, and dynamic-
range compression.
This low-light CCD visible camera has been inte-
grated with an uncooled LWIR camera, shown in Fig-
ure 2(b), and a multi-C80 color fusion processor for
demonstration as a driver's night-vision enhancement
system (see Reference 32 for our earlier work on elec-
tronic imaging aids for night driving). Figure 14
shows an example of color fused imagery derived
from the CCD/LWIR imager pod of Figure 2(b).
This imagery was collected in March 1998 at the Lin-
coln Laboratory Antenna Test Range under approxi-
mately quarter-moon illumination conditions. Figure
FIGURE 13. Low-light CCD imagery taken with the Lincoln Laboratory CCD camera shown in Figure 12, at a resolution of 640 ×
480 pixels. Under laboratory-controlled scene illumination ranging from full moon (33.3 mLux) in the upper left to half starlight
(1.0 mLux) in the lower right, imagery was captured at thirty frames per second with an f/1.4 lens, and adaptively processed to re-
duce noise, enhance contrast, and compress dynamic range to eight bits. In the background of each image are three resolution
charts of 100% contrast (upper right), 50% contrast (upper left) and 20% contrast (lower left), and a toy tank (lower right).
14(a) shows the adaptively processed low-light visible
CCD imagery, Figure 14(b) shows the processed
uncooled IR imagery, Figure 14(c) shows the color
fused imagery before remapping, and Figure 14(d)
shows the fused imagery following a remapping of
color designed to render the trees green.
Conclusions
We have described a novel approach to achieve color
night-vision capabilities through fusion of comple-
mentary low-light visible and thermal infrared imag-
ery. Our approach to image fusion is based on bio-
logically motivated neurocomputational models of
visual contrast enhancement, opponent-color con-
trast, and multisensor fusion [33]. Example imagery
illustrates the potential of the approach to exploit
wide-dynamic-range visible imagery obtained with
new low-light CCD cameras, and to create a natural
color scene at night that supports the perceptual pop-
out of extended navigation cues and compact targets.
We have conducted psychophysical testing on
static imagery to assess the utility of color versus gray
visible/infrared fusion in terms of human reaction
time, accuracy, and false-alarm rate for detection of
embedded low-contrast targets and extended naviga-
tion cues (relevant to enhancing situational awareness
and tactical efficiency) [34]. Related tests have been
carried out on dynamic image sequences of natural
visible and infrared night scenes, before and after real-
time fusion is carried out. Our most recent dual-band
fusion system, constructed around the Lincoln Labo-
ratory low-light CCD camera shown in Figure 12 and
a Lockheed-Martin uncooled microbolometer cam-
era, incorporates both image fusion and moving tar-
get detection/cueing. It was demonstrated at Fort
Campbell, Kentucky, under starlight illumination
conditions (1.5 mLux) in field and water operations
with Army Special Forces 5th Group.
We anticipate that solid-state, visible/infrared fu-
sion, color night-vision systems will offer many ad-
vantages over existing monochrome night-vision sys-
tems in use today. They will play increasingly
important roles in both military operations and civil-
ian applications in the air, on the ground, and at sea.
Acknowledgments
This work has been supported by the Defense Ad-
vanced Research Projects Agency, the Office of Spe-
cial Technology, the Air Force Office of Scientific Re-
search, and the Office of Naval Research.
FIGURE 14. Color fused imagery derived from the CCD/LWIR imager pod shown in Figure 2(b), collected under approximately
quarter-moon conditions. (a) Adaptively processed low-light visible CCD imagery, (b) processed uncooled IR imagery, (c)
color fused imagery before remapping, and (d) fused imagery following a remapping of color to render the trees green.
REFERENCES
1. C.G. Bull, "Helmet Mounted Display with Multiple Image Sources," SPIE 1695, 1992, pp. 38–46.
2. A.A. Cameron, "The Development of the Combiner Eyepiece Night Vision Goggle," SPIE 1290, 1990, pp. 16–29.
3. J.S. Crowley, C.E. Rash, and R.L. Stephens, "Visual Illusions and Other Effects with Night Vision Devices," SPIE 1695, 1992, pp. 166–180.
4. J. Rabin and R. Wiley, "Switching from Forward-Looking Infrared to Night-Vision Goggles: Transitory Effects on Visual Resolution," Aviation, Space, and Environmental Medicine 65, Apr. 1994, pp. 327–329.
5. A.M. Waxman, D.A. Fay, A.N. Gove, M.C. Seibert, and J.P. Racamato, "Method and Apparatus for Generating a Synthetic Image by the Fusion of Signals Representative of Different Views of the Same Scene," U.S. Patent No. 5,555,324, 10 Sept. 1996; rights assigned to MIT.
6. A.M. Waxman, D.A. Fay, A.N. Gove, M.C. Seibert, J.P. Racamato, J.E. Carrick, and E.D. Savoye, "Color Night Vision: Fusion of Intensified Visible and Thermal IR Imagery," SPIE 2463, 1995, pp. 58–68.
7. A. Toet, L.J. van Ruyven, and J.M. Valeton, "Merging Thermal and Visual Images by a Contrast Pyramid," Opt. Eng. 28 (7), 1989, pp. 789–792.
8. A. Toet, "Multiscale Contrast Enhancement with Applications to Image Fusion," Opt. Eng. 31 (5), 1992, pp. 1026–1031.
9. D. Ryan and R. Tinkler, "Night Pilotage Assessment of Image Fusion," SPIE 2465, 1995, pp. 50–67.
10. J.M. Wolfe, K.R. Cave, and S.L. Franzel, "Guided Search: An Alternative to the Feature Integration Model of Visual Search," J. Experimental Psychology: Human Perception and Performance 15 (3), 1989, pp. 419–433.
11. S. Grossberg, E. Mingolla, and W.D. Ross, "A Neural Theory of Attentive Visual Search: Interactions of Boundary, Surface, Spatial, and Object Representations," Psychol. Rev. 101 (3), 1994, pp. 470–489.
12. E.H. Land, "Color Vision and the Natural Image. Part I," Proc. Natl. Acad. Sci. 45 (1), 1959, pp. 115–129.
13. E.H. Land, "Experiments in Color Vision," Sci. Am. 200, May 1959, pp. 84–99.
14. E.H. Land, "Recent Advances in Retinex Theory and Some Implications for Cortical Computations: Color Vision and the Natural Image," Proc. Natl. Acad. Sci. USA 80, Aug. 1983, pp. 5163–5169.
15. C.M. Huang, B.E. Burke, B.B. Kosicki, R.W. Mountain, P.J. Daniels, D.C. Harrison, G.A. Lincoln, N. Usiak, M.A. Kaplan, and A.R. Forte, "A New Process for Thinned, Back-Illuminated CCD Imager Devices," Proc. 1989 Int. Symp. on VLSI Technology, Systems and Applications, Taipei, Taiwan, 17–19 May 1989, pp. 98–101.
16. R.K. Reich, R.W. Mountain, W.H. McGonagle, J.C.-M. Huang, J.C. Twichell, B.B. Kosicki, and E.D. Savoye, "Integrated Electronic Shutter for Back-Illuminated Charge-Coupled Devices," IEEE Trans. Electron Devices 40 (7), 1993, pp. 1231–1237.
17. R.E. Flannery and J.E. Miller, "Status of Uncooled Infrared Imagers," SPIE 1689, 1992, pp. 379–395.
18. P.H. Schiller, "The ON and OFF Channels of the Visual System," Trends in Neuroscience 15 (3), 1992, pp. 86–92.
19. P.H. Schiller and N.K. Logothetis, "The Color-Opponent and Broad-Band Channels of the Primate Visual System," Trends in Neuroscience 13 (10), 1990, pp. 392–398.
20. P. Gouras, "Color Vision," chap. 31 in Principles of Neural Science, 3rd ed., E.R. Kandel, J.H. Schwartz, and T.M. Jessell, eds. (Elsevier, New York, 1991), pp. 467–480.
21. A.M. Waxman, M.C. Seibert, A.N. Gove, D.A. Fay, A.M. Bernardon, C. Lazott, W.R. Steele, and R.K. Cunningham, "Neural Processing of Targets in Visible, Multispectral IR and SAR Imagery," Neural Networks 8 (7/8), 1995, pp. 1029–1051 (special issue on automatic target recognition; S. Grossberg, H. Hawkins, and A.M. Waxman, eds.).
22. A.N. Gove, R.K. Cunningham, and A.M. Waxman, "Opponent-Color Visual Processing Applied to Multispectral Infrared Imagery," Proc. 1996 Meeting of the IRIS Specialty Group on Passive Sensors 2, Monterey, Calif., 12–14 Mar. 1996, pp. 247–262.
23. E.A. Newman and P.H. Hartline, "Integration of Visual and Infrared Information in Bimodal Neurons of the Rattlesnake Optic Tectum," Science 213 (4508), 1981, pp. 789–791.
24. E.A. Newman and P.H. Hartline, "The Infrared Vision of Snakes," Sci. Am. 246 (Mar.), 1982, pp. 116–127.
25. A.J. King, "The Integration of Visual and Auditory Spatial Information in the Brain," in Higher Order Sensory Processing, D.M. Guthrie, ed. (Manchester University Press, Manchester, U.K., 1990), pp. 75–113.
26. S. Grossberg, Neural Networks and Natural Intelligence, chaps. 1–4 (MIT Press, Cambridge, Mass., 1988), pp. 1–211.
27. S.A. Ellias and S. Grossberg, "Pattern Formation, Contrast Control, and Oscillations in the Short-Term Memory of Shunting On-Center Off-Surround Networks," Biol. Cybernetics 20 (2), 1975, pp. 69–98.
28. A.M. Waxman, A.N. Gove, M.C. Seibert, D.A. Fay, J.E. Carrick, J.P. Racamato, E.D. Savoye, B.E. Burke, R.K. Reich, W.H. McGonagle, and D.M. Craig, "Progress on Color Night Vision: Visible/IR Fusion, Perception and Search, and Low-Light CCD Imaging," SPIE 2736, pp. 96–107.
29. A. Toet, J.K. IJspeert, A.M. Waxman, and M. Aguilar, "Fusion of Visible and Thermal Imagery Improves Situational Awareness," SPIE 3088, 1997, pp. 177–180.
30. A. Toet and J. Walraven, "New False Color Mapping for Image Fusion," Opt. Eng. 35 (3), 1996, pp. 650–658.
31. P.M. Steele and P. Perconti, "Part Task Investigation of Multispectral Image Fusion Using Grayscale and Synthetic Color Night-Vision Sensor Imagery for Helicopter Pilotage," SPIE 3062, 1997, pp. 88–100.
32. A.M. Waxman, J.E. Carrick, D.A. Fay, J.P. Racamato, M. Aguilar, and E.D. Savoye, "Electronic Imaging Aids for Night Driving: Low-Light CCD, Thermal IR, and Color Fused Visible/IR," SPIE 2902, 1996, pp. 62–73.
33. A.M. Waxman, A.N. Gove, D.A. Fay, J.P. Racamato, J.E. Carrick, M.C. Seibert, and E.D. Savoye, "Color Night Vision: Opponent Processing in the Fusion of Visible and IR Imagery," Neural Networks 10 (1), 1997, pp. 1–6.
34. M. Aguilar, D.A. Fay, W.D. Ross, A.M. Waxman, D.B. Ireland, and J.P. Racamato, "Real-Time Fusion of Low-Light CCD and Uncooled IR Imagery for Color Night Vision," SPIE 3364, 1998, pp. 124–135.
Allen M. Waxman
is a senior staff member in the
Machine Intelligence Technol-
ogy group, where his research
focuses on neural networks,
multisensor fusion, pattern
recognition, and night vision.
He also holds a joint appoint-
ment as an adjunct associate
professor in the Department of
Cognitive and Neural Systems
at Boston University. He
received a B.S. degree in
physics from the City College
of New York, and a Ph.D.
degree in astrophysics from the
University of Chicago. Prior to
joining Lincoln Laboratory in
1989, he performed research at
MIT, the University of Mary-
land, the Weizmann Institute
of Science (Israel), the Royal
Institute of Technology (Swe-
den), and Boston University.
In 1992 he was corecipient
(with Michael Seibert) of the
Outstanding Research Award
from the International Neural
Network Society for work on
3D object learning and recog-
nition. In 1996 he received the
Best Paper Award from the
IRIS Passive Sensors Group for
work on image fusion and
color night vision. He holds
three patents and has authored
over eighty publications.
Mario Aguilar
is a staff member in the Ma-
chine Intelligence Technology
group. His research interests
are in data fusion for night
vision and data mining. Before
joining Lincoln Laboratory in
1986, he developed decision
support systems for the stock
market. He received a B.S.
degree in computer science
from Jacksonville State Univer-
sity, and a Ph.D. degree in
cognitive and neural systems
from Boston University.
David B. Ireland
is an assistant staff member in
the Machine Intelligence
Technology group. His re-
search work is in the field of
color night vision. He has
been at Lincoln Laboratory
since 1982.
David A. Fay
is a staff member in the Ma-
chine Intelligence Technology
group. His recent research has
been on real-time processing
of imagery for night vision.
He received a B.S. degree in
computer engineering and an
M.A. degree in cognitive and
neural systems, both from
Boston University. He has
been at Lincoln Laboratory
since 1989.
Joseph P. Racamato, Jr.
is a staff specialist in the Ma-
chine Intelligence Technology
group, where his research is in
multisensor night-vision
imaging and data-collection
systems. He has also worked
on several projects for other
groups at Lincoln Laboratory,
including the acoustic detec-
tion and signal processing
project and the MX parallel
computer architecture project.
He has been at Lincoln Labo-
ratory since 1981.
William D. Ross
is a staff member in the Ma-
chine Intelligence Technology
group. His recent research has
been on image processing,
pattern recognition, and
multisensor fusion algorithms
for both real-time color night
vision and interactive 3D site
visualization. He received a
B.S. degree in electrical engi-
neering from Cornell Univer-
sity and a Ph.D. degree in
cognitive and neural systems
from Boston University. His
postdoctoral research at Bos-
ton University and at the
University of North Carolina
focused on developing and
applying neurocomputational
models of biological vision. He
has been at Lincoln Labora-
tory since January 1998.
James E. Carrick
is a real-time software develop-
ment engineer at The
MathWorks, Inc., in Natick,
Massachusetts, and a former
assistant staff member in the
Machine Intelligence Technol-
ogy group. His current work
involves the generation of
production-quality code from
high-level block diagrams of
control systems. He received a
B.S. degree in electrical engi-
neering from the University of
Wisconsin at Madison.
Michael C. Seibert
is a staff member at the Lin-
coln Laboratory KMR Field
Site, Kwajalein, and a former
staff member in the Machine
Intelligence Technology group.
His research interests are in
vision and neural networks,
and in 1992 he was the
corecipient (with Allen
Waxman) of the Outstanding
Research Award from the
International Neural Network
Society. He received a B.S.
degree and an M.S. degree in
computer and systems engi-
neering from the Rensselaer
Polytechnic Institute, and a
Ph.D. degree in computer
engineering from Boston
University.
Eugene D. Savoye
is the former leader of the
Microelectronics group. He
received a Ph.D. degree from
the University of Minnesota,
and joined RCA's David
Sarnoff Research Laboratories
in 1966. In 1970 he became
Manager of Advanced Tech-
nology at RCA in Lancaster,
Pennsylvania, where he initi-
ated RCA's early engineering
work on charge-coupled de-
vices (CCDs). In 1983, as
Director of CCD and Silicon
Target Technology, he initiated
and directed RCA's corporate
program to develop high-
performance CCD imagers for
television, which resulted in
the world's first all-solid-state
studio-quality TV cameras.
For this work he received the
1985 Emmy Award for Out-
standing Technical Achieve-
ment. He joined Lincoln
Laboratory in 1987, and in
1990 he became leader of the
Microelectronics group, with
responsibility for the develop-
ment of advanced silicon
imaging devices, including
CCD imagers aimed at de-
manding applications in
astronomy, surveillance, and
advanced night-vision systems.
Dr. Savoye retired from Lin-
coln Laboratory in October
1997, and is currently a con-
sultant in the area of sensors
for electronic imaging.
Alan N. Gove
is a former staff member in the
Machine Intelligence Technol-
ogy group. He received a B.S.
degree in computer science
and a B.A. degree in math-
ematics from Brown Univer-
sity, an M.S. degree in com-
puter science from the
University of Texas at Austin,
and a Ph.D. degree in cogni-
tive and neural systems from
Boston University. He is
currently employed as a soft-
ware manager at Techtrix
International Corporation in
South Africa, where he is
working on camera-based
forest-fire detection.
William H. McGonagle
is an associate staff member in
the Microelectronics group.
His research interests are in
analog and digital circuit
design relating to CCD de-
vices. He received a B.S.E.E.
degree from Northeastern
University, and he has been at
Lincoln Laboratory since
1960.
David M. Craig
is an assistant staff member in
the Submicrometer Technol-
ogy group. He joined Lincoln
Laboratory in 1980 after
graduating from Quincy
College, in Quincy, Massachu-
setts. His research efforts have
been in the development of
software and instrumentation
for a 193-nm integrated-
circuit lithography system.
He is currently working on
CCD camera development for
the Microelectronics group,
concentrating primarily on
imaging software and electron-
ics hardware and firmware.
Barry E. Burke
is a senior staff member in the
Microelectronics group. His
research interests are in the
area of CCD imagers. He
received a B.S. degree in
physics from the University of
Notre Dame, and a Ph.D.
degree in applied physics from
Stanford University. He has
been at Lincoln Laboratory
since 1969, and he is a senior
member of the IEEE.
Robert K. Reich
is the assistant leader of the
Microelectronics group. His
area of research is in the design
of high-frame-rate and low-
noise optical detector arrays.
He received a B.S. degree from
Illinois Institute of Technol-
ogy, and M.S. and Ph.D.
degrees from Colorado State
University, all in electrical
engineering. He has been at
Lincoln Laboratory since
1987, and he is a senior mem-
ber of the IEEE.