Digital Single Lens Reflex Camera Identification From Traces of Sensor Dust

Abstract—Digital single lens reflex cameras suffer from a well-known sensor dust problem due to the interchangeable lenses that they deploy. The dust particles that settle in front of the imaging sensor create a persistent pattern in all captured images. In this paper, we propose a novel source camera identification method based on detection and matching of these dust-spot characteristics. Dust spots in the image are detected based on a (Gaussian) intensity loss model and shape properties. To prevent false detections, lens parameter-dependent characteristics of dust spots are also taken into consideration. Experimental results show that the proposed detection scheme can be used in identification of the source digital single lens reflex camera at low false positive rates, even under heavy compression and downsampling.

Index Terms—Digital forensics, digital single lens reflex (DSLR), sensor dust.

I. INTRODUCTION

A key problem in media forensics is the identification and analysis of media characteristics that relate to the acquisition device. These characteristics are essentially a combination of two interrelated factors: 1) the class properties that are common among all devices of a brand and model and 2) the individual properties that set a device apart from others in its class. Hence, research efforts have focused on the design of techniques to identify class and individual characteristics of data-acquisition devices without requiring a specific configuration of source devices [6], [7].

Two principal research approaches have emerged in the effort to establish characteristics that can link an image or video to its source. The first approach focuses on determining the differences in processing techniques and component technologies. For example, optical distortions due to a type of lens, the size of the imaging sensor, the choice of color filter array and corresponding demosaicing algorithm, and color-processing algorithms can be detected and quantitatively characterized by analyzing the images a device produces.
Fig. 1. Sensor dust appears in two different images taken with the same DSLR camera. Local histogram adjustment is performed to make dust spots visible (2nd row). White boxes show dust-spot positions.

Fig. 2. Dust spots may stay in the same position for years.

Compared to compact digital cameras, DSLR cameras offer higher quality sensors with low noise power, a parallax-free optical viewfinder that allows error-free viewing of the scenery, less shutter lag, interchangeable lenses, and better control over depth of field. According to the 2006 International Data Corporation (IDC) report on the digital camera market, DSLR cameras showed consistent growth, with a 39% increase in the total market over the 2005 figure.1 Not surprisingly, DSLR cameras also take the top place in the most popular camera lists of photo sharing websites. For instance, the top five cameras for November 2007 in the Flickr (flickr.com) and Pbase (pbase.com) photo sharing websites are all DSLR cameras.

1[Online]. Available: www.imaging-resource.com/NEWS/1175724860.html.

The very nature of a DSLR camera allows users to work with multiple lenses, but this desirable feature creates a unique and undesired problem. Essentially, during the process of mounting/unmounting the interchangeable lens, the inner body and workings of the camera are exposed to the outside environment. When the lens is detached, very small dust particles are attracted into the camera and settle on the protective element (dichroic mirror or low-pass filter) in front of the sensor surface. These tiny specks of dust, lint, or hair cling to the surface and form a dust pattern which later reveals itself on captured images as blemishes or blotches. We will refer to this type of artifact as dust spots in the rest of this paper. Dust spots on two different images2 taken with the same DSLR are given in Fig. 1. To make dust spots more visible, each pixel color is changed through histogram equalization in small windows. Dust spots become visible at small apertures (i.e., for high f-numbers), since a large aperture allows light to wrap around the dust particles and puts them out of focus. Moreover, sensor dust is persistent and accumulative, and unless it is cleaned, it may remain in the same position for a very long time, as exemplified by the images3 in Fig. 2.

2[Online]. Available: www.pbase.com/chucklantz/image/38843803 and \ldots /72922162.
3[Online]. Available: www.pbase.com/chucklantz/image/47472873 and \ldots /image/72922162.

To deal with the sensor dust problem, various solutions have been proposed. Some DSLR camera manufacturers have already incorporated built-in mechanisms for dust removal. For example, Sony's Alpha A10 DSLR uses an antidust coating on
the CCD with a vibrating mechanism which removes the dust by shaking it off. Similar vibration mechanisms are also utilized in the Olympus E-300 and Canon EOS Rebel DSLR cameras. The Nikon D50 and Canon Digital Rebel also offer a software solution that removes dust spots by creating a dust template of the camera.

A comprehensive benchmark of the performance of built-in dust removal mechanisms has been performed by pixinfo.com.4 The study involved four state-of-the-art DSLR cameras, namely, the Canon EOS-400D, Olympus E-300, Pentax K10D, and Sony Alpha DSLR-A10. In the experiments, these four cameras were initially exposed to the same dusty environment, and later the cameras' built-in functions were used to remove these dust particles. Their results show that even after 25 consecutive invocations of the cleaning mechanism, dust spots were still present and the performance was far from satisfactory.5

Although vibration-based internal cleaning mechanisms do not work satisfactorily, they might influence the positions of dust particles over the filter component. This phenomenon can also be observed in the benchmarks mentioned before. To quantify the effect of internal cleaning mechanisms on dust-spot positions, the proposed dust detection algorithm was applied to two blank images taken with a Canon EOS-400D after the 2nd and 25th cleaning operations. These two images were obtained from the cleaning benchmark experiments at pixinfo.com. Once dust positions were detected, they were compared with each other. After the 25th cleaning, 97.01% of the 803 detected dust particles (all but 24) remained in the same position. The maximum detected position shift after the 25th cleaning was 5.83 pixels. Since dust shifts due to internal cleaning mechanisms are not significant, we will omit the effect of filter vibrations on dust positions in the rest of this paper.

An alternative solution is the manual cleaning of the dust by using chemicals, brushes, air blowing, and dust adhesives. Although these are known to be more effective, manual cleaning is a tedious task and may potentially harm the imaging sensor; therefore, it is not recommended by camera manufacturers.6

In this paper, we exploit this persistent nature of the sensor dust to match DSLR images to their sources. The matching can be realized by obtaining a dust pattern directly from the camera or from a number of images taken by the camera, as in Fig. 1. It should be noted that since the sensor dust problem is solely intrinsic to DSLR cameras, the detection of any sensor dust in a given image can be taken as strong evidence that the image source is a DSLR camera. In addition, by detecting traces of sensor dust, it may be possible to order images taken at different times by capturing time, by evaluating the accumulation characteristics of dust.

The rest of this paper is organized as follows. In Section II, we investigate the optical characteristics of sensor dust as a function of imaging parameters. In Section III, a model-based dust-spot detection method and its use in source camera identification is explained in detail. The efficacy of the proposed method is substantiated by experimental results for two different cases in Section IV. The robustness of the proposed scheme to compression and downsampling is also examined in Section IV. Finally, our conclusions are presented in Section V.

4[Online]. Available: pixinfo.com/en/articles/ccd-dust-removal/.
5The reported dust removal performances, defined as the fraction of initially present dust spots that were successfully cleaned, are as follows: Olympus E-300: 50%, Canon EOS-400D: 5%, Pentax K10D: 0%, and Sony Alpha A10: 0%.
6[Online]. Available: www.usa.canon.com/consumer.

A. Related Work

The first work in the field of source identification was undertaken by Kurosawa et al. [14] for camcorders. Their method relies on the fact that each digital camcorder CCD sensor has a unique and intrinsic dark current noise pattern. This specific noise pattern reveals itself in the form of fixed offset values in pixel readings, and it can be easily extracted when the sensor is not exposed to any light. However, the drawback of this approach is that today's cameras are designed to compensate for this type of artifact. Later, Geradts et al. [13] proposed using sensor imperfections in the form of hot and dead pixels, pixel traps, and pixel defects in order to match images with cameras. Although their results show that these imperfections are unique to imaging sensors and quite robust to JPEG compression, most digital cameras today deploy mechanisms to detect and compensate for pixel imperfections through postprocessing, which restricts the applicability of their technique.

Recently, similar to [14], Lukáš et al. [15] and Chen et al. [16], [17] proposed a more reliable sensor noise-based source digital camera and camcorder identification method. Their method is based on the extraction of the unique photoresponse nonuniformity (PRNU) noise pattern, which is caused by impurities in silicon wafers and sensor imperfections. These imperfections affect the light sensitivity of each individual pixel and cause a fixed noise pattern. Similarly, Khanna et al. [18], Gou et al. [12], and recently Gloe et al. [20] have extended the PRNU noise extraction methodology to source scanner identification, where the imaging sensor is typically a 1-D linear array. The drawback of this approach is that it is very hard to synchronize the scanner noise pattern with the noise residue extracted from the scanned image, due to the difficulty of controlling the document position during scanning. Therefore, the authors extracted statistical characteristics of the PRNU noise and deployed machine learning methods to identify the scanner brand and model. It should be noted that utilizing feature-based classifiers makes these methods less effective in individual source scanner identification.

II. SENSOR DUST CHARACTERISTICS

Essentially, dust spots are the shadows of the dust particles in front of the imaging sensor. The shape and darkness of the dust spots are determined primarily by the following factors: the distance between the dust particle and the imaging sensor, the camera focal length, and the size of the aperture. A general optical model showing the formation of dust spots is given in Fig. 3. When the focal plane is illuminated uniformly, all imaging sensors will yield the same intensity values. However, in the presence of sensor dust, light beams interact with the dust particles and some of the light energy is absorbed by them. The amount of the absorbed energy is directly related to the f-number (F/#), which is defined as the ratio between the focal length f and the aperture diameter d:

f-number = f / d.    (1)
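The dependence of dust-spot sharpness on the f-number can be illustrated with a simple geometric sketch. Assuming (this is our simplification, not the paper's exact optical model) that a dust particle lies a distance t in front of the sensor and is illuminated through an aperture of diameter d = f/N, the penumbra (blur) around its shadow is roughly t·d/f = t/N, so larger f-numbers yield sharper and more visible spots. The pixel pitch below is a hypothetical value used only for illustration.

```python
# Rough geometric sketch (not the paper's exact optical model): the shadow of a
# dust particle located a distance t_mm in front of the sensor is blurred by a
# penumbra of width ~ t_mm / N, where N is the f-number (aperture d = f / N).
def shadow_penumbra_mm(t_mm: float, f_number: float) -> float:
    """Approximate penumbra width (mm) around a dust shadow."""
    return t_mm / f_number

def shadow_penumbra_px(t_mm: float, f_number: float, pixel_pitch_um: float = 6.0) -> float:
    """Same quantity expressed in pixels for an assumed (hypothetical) pixel pitch."""
    return shadow_penumbra_mm(t_mm, f_number) * 1000.0 / pixel_pitch_um

if __name__ == "__main__":
    t_mm = 0.35  # filter-to-sensor distance estimated later in Section II
    for N in (4, 8, 16, 22, 36):
        print(f"f/{N:>2}: penumbra ~ {shadow_penumbra_px(t_mm, N):5.1f} px")
```

The decreasing penumbra at high f-numbers is consistent with the observation above that dust spots become visible at small apertures.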
TABLE I
DUST-SPOT PROPERTIES FOR DIFFERENT f-NUMBERS (f = 55 mm, NIKON D50). IMAGE DIMENSIONS: 1504 × 1000
which denotes the center of the dust shadow. To see how the dust shadow center is related to the camera parameters, the quantities computed in (2) and (3) are substituted into the formula, and the result is obtained as

(5)
Fig. 8. Dust-spot movement analysis based on the proposed optical model. (a) Radial shifts of two dust spots when t is fixed and the focal length f is changed. (b) Dust-spot shifts for different t values.
TABLE II
DUST-SPOT POSITIONS AND SHIFTS FOR DIFFERENT FOCAL LENGTHS (NIKON D50)
The measured shift magnitudes and angles are given in the last columns for four different dust spots. It is seen from the table that the results are consistent with (8). From (8), it is also possible to estimate the shift magnitudes. Since the filter width t of the Nikon D50 is not known, t was first estimated from the observed dust shifts in Table II as 0.35 mm by the least-squares method. Then, for each dust spot in the table, the shift magnitudes were estimated from (8) with a 1.2-pixel mean absolute error. The estimation results are given together with the actual values in Fig. 10.

Fig. 9. Dust-spot shifts due to focal length f change.
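To illustrate the least-squares fit mentioned above, the sketch below assumes the radial-shift relation has the approximate form Δr ≈ r · t · (1/f1 − 1/f2), which is one plausible reading of (8) (whose exact form is not reproduced in this excerpt), and recovers t from (radius, shift) pairs. The numerical values are hypothetical placeholders, not the Table II measurements.

```python
# Hedged sketch: estimating the filter width t by least squares, assuming the
# radial dust-shift model is approximately  dr ~= r * t * (1/f1 - 1/f2)
# (an assumed reading of (8); the exact equation is not reproduced here).
import numpy as np

def estimate_t(radii_px, shifts_px, f1_mm, f2_mm):
    """Least-squares fit of t (mm) from radial positions and observed shifts (pixels)."""
    x = np.asarray(radii_px, float) * (1.0 / f1_mm - 1.0 / f2_mm)  # predictor
    y = np.asarray(shifts_px, float)                               # observed radial shifts
    return float(np.sum(x * y) / np.sum(x * x))                    # closed-form 1-D least squares

# Hypothetical example values (NOT the Table II measurements):
radii = [300, 450, 600, 720]          # dust-spot distances from image center (px)
f1, f2 = 18.0, 55.0                   # two focal lengths (mm)
t_true = 0.35                         # mm, value reported in the text
shifts = [r * t_true * (1 / f1 - 1 / f2) for r in radii]  # synthetic, noise-free shifts
print(f"estimated t = {estimate_t(radii, shifts, f1, f2):.3f} mm")
```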
The intensity loss caused by a dust spot is modeled as a Gaussian function of the distance from the dust-spot center

(9)
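Since the exact form of (9) is not reproduced in this excerpt, the following is a minimal sketch of one plausible Gaussian intensity-loss model, consistent with the "Gaussian dust model" used for the NCC computation below; the parameter names (a, sigma) and values are illustrative assumptions.

```python
# Minimal sketch of a Gaussian intensity-loss dust model (an assumed form of (9)):
# a dust spot is modeled as a radially symmetric dip of depth `a` and width `sigma`
# multiplying the underlying image intensity.
import numpy as np

def gaussian_dust_model(size: int, sigma: float, a: float = 0.4) -> np.ndarray:
    """Return a (size x size) template: 1.0 far from the spot, (1 - a) at its center."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return 1.0 - a * np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

# Templates for several sigma values are later cross-correlated with the image.
templates = [gaussian_dust_model(size=21, sigma=s) for s in (2.0, 3.0, 5.0)]
```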
TABLE III
MAX. DETECTED DUST-SPOT SIZE FOR DIFFERENT DSLR CAMERAS (IMAGE SIZE: 800 × 533)

TABLE IV
MAX. NORM. CROSS-CORRELATION (NCC) OUTPUTS FOR VARIOUS DUST SPOTS (IMAGE SIZE: 800 × 533)

1) Binary Map Analysis: For a given image, NCC values are computed for each Gaussian dust model, corresponding to different dust-model widths. Then, a binary dust map is generated by thresholding the correlation values such that values smaller than a preset value are set to zero and all others to one. In the binary dust map, each binary object, obtained by grouping neighboring binary components, is indexed, and a list of dust-spot candidates is formed. We then exploit the fact that most dust spots have rounded shapes. This is realized by computing the area of each binary object and removing from the binary dust map the extremely large or line-shaped objects, which result from edges and textures.

2) Validation of Correlation Results: After the binary map analysis, all detected dust spots are re-evaluated by analyzing the values in the NCC output. For actual dust spots, NCC values are expected to decrease monotonically away from the center of the dust spot (see Figs. 11 and 18). For this, several NCC values around each binary object are checked along a circular path to ensure that the NCC values exhibit such a decrease. The binary objects that do not conform to this observation are also removed from the binary dust map.

3) Spatial Analysis: The spatial intensity loss characteristics of each dust-spot candidate (i.e., each remaining binary object in the binary dust map) are examined by constructing a contour map of a region surrounding the candidate dust spot and counting the number of local minima. If there is a single (global) minimum in the selected region, the corresponding binary object is tagged as dust. On the other hand, the presence of multiple local minima implies that the detected dust-spot candidate is most likely the result of image content and, therefore, the corresponding binary object is removed from the final binary dust map.
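A compact sketch of the three-stage detection procedure described above is given below. The NCC implementation, thresholds, window sizes, and roundness test are our illustrative choices under the stated assumptions, not the paper's tuned parameters; the Gaussian templates are those from the model sketch above.

```python
# Hedged sketch of the dust-spot detection pipeline described above (Gaussian-model
# NCC -> binary map -> shape filtering -> circular NCC validation -> local-minima
# spatial analysis). Thresholds and window sizes are illustrative, not the paper's values.
import numpy as np
from scipy import ndimage

def ncc_map(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Normalized cross-correlation of a 2-D float image with a template."""
    t = template - template.mean()
    n = template.size
    num = ndimage.correlate(image, t, mode="nearest")
    local_mean = ndimage.uniform_filter(image, size=template.shape)
    local_sqmean = ndimage.uniform_filter(image * image, size=template.shape)
    local_var = np.clip(local_sqmean - local_mean**2, 1e-12, None)
    den = np.sqrt(n * local_var) * np.linalg.norm(t)
    return num / den

def detect_dust(image, templates, ncc_thr=0.5, max_area=400, min_extent=0.5,
                ring_radius=8, window=15):
    """Return centroids (row, col) of dust-spot candidates surviving all three checks.
    `templates` is a list of Gaussian dust templates (see gaussian_dust_model above)."""
    gray = image.astype(float)
    # 1) Binary map: best NCC response over all Gaussian dust models, thresholded.
    ncc = np.max([ncc_map(gray, t) for t in templates], axis=0)
    labels, _ = ndimage.label(ncc > ncc_thr)
    spots = []
    for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
        mask = labels[sl] == idx
        area = int(mask.sum())
        bbox_area = mask.shape[0] * mask.shape[1]
        # Shape filter: drop very large or elongated/line-like objects (edges, texture).
        if area > max_area or area / bbox_area < min_extent:
            continue
        r0, c0 = ndimage.center_of_mass(mask)
        r, c = int(r0) + sl[0].start, int(c0) + sl[1].start
        # 2) Validation: NCC should decrease along a circle around the candidate center.
        angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
        rr = np.clip((r + ring_radius * np.sin(angles)).astype(int), 0, gray.shape[0] - 1)
        cc = np.clip((c + ring_radius * np.cos(angles)).astype(int), 0, gray.shape[1] - 1)
        if not np.all(ncc[rr, cc] < ncc[r, c]):
            continue
        # 3) Spatial analysis: a genuine dust spot should yield a single intensity minimum.
        patch = gray[max(r - window, 0):r + window + 1, max(c - window, 0):c + window + 1]
        smoothed = ndimage.gaussian_filter(patch, sigma=2.0)
        minima = (smoothed == ndimage.minimum_filter(smoothed, size=7)) & (smoothed < smoothed.mean())
        if int(minima.sum()) > 1:
            continue
        spots.append((r, c))
    return spots
```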
Fig. 12. Dust template generation from a set of images (Canon EOS Digital Rebel). (a) Image used to create the dust template. (b) Upper left portion of the dust
template. Its actual size is shown at the left. (c) Binary version of the dust template.
To account for these shifts, each dust-spot candidate position is allowed to occupy a circle in the template rather than being assigned fixed coordinates. As can be seen in (8), the dust-spot shift magnitude is directly proportional to the filter width t. This entails that the largest radial shift may vary among DSLR cameras of different brands/models. Hence, in generating the template, the radius of the binary circle is determined empirically by measuring the largest radial shifts of dust spots in several images from different DSLR cameras. At the end, all binary dust maps are simply added up to create the final camera dust template.

To exemplify camera dust template generation, ten images taken with different f-numbers were used. The DSLR camera used in this experiment was a Canon EOS Digital Rebel. In all images, dust spots were determined and all results were combined to create the camera dust template. To eliminate false detections in the template, we utilized a threshold: if a dust spot appears in only one image and does not appear in the other images used in template generation, that spot is removed from the dust template.

The upper left part of the final dust template obtained is shown in Fig. 12. In Fig. 12(a), the number of coinciding dust spots is given; the hot colors refer to a high number of dust matches. In the figure, the dust shifts due to different focal lengths can be seen clearly. In Fig. 12(b), the binary version of the final dust template is given. Final dust positions were computed as the centroid points of each dust region in the binary map. These points are represented with the "+" symbol in Fig. 12(b).

After template generation, all dust spots in the dust template are tagged with different numbers. The dust centroid positions and the number of coinciding dust spots at those positions are saved in a file to be used in camera identification. It is assumed that the higher the number of coinciding dust spots, the more dominant the corresponding dust spot; therefore, such dust spots are given more weight in making a decision. In addition, all dust positions detected in each individual image are also maintained, since the camera dust template contains only averaged dust locations; dust-spot shifts could not be detected properly from the template alone, because the individual dust positions for different f-numbers are lost after computing the centroid positions in the binary dust template [see Fig. 12(b)].

C. Camera Identification

The final step of DSLR camera identification is done by matching the dust spots detected in an image with the dust spots in the camera dust template. The identification process is comprised of three steps:
Step 1) dust-spot detection and matching;
Step 2) computing a confidence value for each matching dust spot;
Step 3) decision making.

In the first step, dust spots are detected as explained in Section III-A. Once dust spots are located, each dust position is matched with the dust positions in the camera dust template. The comparison is realized by measuring Euclidean distances: if the distance is lower than a predetermined value, the corresponding dust position is added to the matching dust-spot list.

In the second step, three metrics are computed for each of the matching dust spots as follows.
1) The dust occurrence metric is the number of coinciding dust detections for the corresponding dust spot in the dust template. Higher values of this metric correspond to salient dust spots.
2) The smoothness metric represents the smoothness of the region in which a dust spot was detected. Measuring the amount of local intensity variation is essential in making decisions, since dust-spot detection in smooth regions is more reliable than in busy regions. This metric is computed from the intensity gradient around the dust spot as a binary value: for a smooth region it becomes one, and for a nonflat or nonsmooth region around the dust spot it becomes zero.
3) The shift validity metric indicates the validity of a dust spot based on the shift it exhibits. To compute it, we do the following.
• Each dust spot in the matched dust-spot list is tracked in all images used in template generation. (It should be noted that a different subset of dust spots will be detected in each image.)
• For each dust spot in the list, a set of shift vectors (i.e., magnitudes and angles) is computed by measuring the shifts between the dust spot and its matched counterparts in the template images.
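The template accumulation and the Euclidean-distance matching described above can be sketched as follows. The circle radius, occurrence threshold, and distance threshold are illustrative assumptions, and the per-image binary detection maps are presumed to come from a detector such as the one sketched earlier.

```python
# Hedged sketch: accumulating per-image detections into a camera dust template and
# matching a query image against it. Radii and thresholds are illustrative guesses.
import numpy as np
from scipy import ndimage

def build_dust_template(binary_maps, radius=6, min_occurrence=2):
    """binary_maps: list of (H, W) boolean maps, True at detected dust-spot pixels.
    Returns (centroids, occurrences): centroid (row, col) and coincidence count per spot."""
    h, w = binary_maps[0].shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy**2 + xx**2) <= radius**2
    # Each detection occupies a small circle; summing over images counts coincidences.
    accum = np.zeros((h, w), float)
    for bmap in binary_maps:
        accum += ndimage.binary_dilation(bmap, structure=disk).astype(float)
    # Drop spots seen in only one image (false-detection suppression described above).
    template = accum >= min_occurrence
    labels, num = ndimage.label(template)
    centroids = ndimage.center_of_mass(template, labels, range(1, num + 1))
    occurrences = ndimage.maximum(accum, labels, range(1, num + 1))
    return np.array(centroids), np.array(occurrences)

def match_spots(query_spots, template_centroids, max_dist=10.0):
    """Step 1 of identification: Euclidean-distance matching of detected spots
    against template centroids. Returns a list of (query_index, template_index)."""
    matches = []
    for qi, (qr, qc) in enumerate(query_spots):
        d = np.hypot(template_centroids[:, 0] - qr, template_centroids[:, 1] - qc)
        ti = int(np.argmin(d))
        if d[ti] < max_dist:
            matches.append((qi, ti))
    return matches
```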
(10)

(11)

For the parameter values considered, the probability of at least one accidental (random) dust-spot match becomes 0.0349. It should be noted that by requiring a greater number of random matches (as opposed to at least one), this figure can be further reduced. In addition, dust-spot shape characteristics and shift analysis make it possible to reduce the false detection rate to even lower values.
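Since the parameter values behind the 0.0349 figure are not reproduced in this excerpt, the sketch below only illustrates one plausible way such a number might be computed: if each of the m template spots accepts a match within a circle of radius d pixels in a W × H image, and n spots are detected at random positions, then P(at least one match) ≈ 1 − (1 − m·π·d²/(W·H))ⁿ. This is our reconstruction of the reasoning, not the paper's exact formula, and the example numbers are placeholders.

```python
# Hedged sketch of a random-match probability computation (our reconstruction of the
# reasoning, not the paper's exact formula): the chance that at least one of n
# randomly placed detections falls within distance d of one of m template spots.
import math

def prob_random_match(n_detected: int, m_template: int, match_radius_px: float,
                      width: int, height: int) -> float:
    p_single = m_template * math.pi * match_radius_px**2 / (width * height)
    p_single = min(p_single, 1.0)  # guard against overlapping match regions
    return 1.0 - (1.0 - p_single) ** n_detected

# Hypothetical example (placeholders, not the values elided above):
print(prob_random_match(n_detected=3, m_template=38, match_radius_px=5,
                        width=800, height=533))
```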
Fig. 13. Canon EOS dust template created with three blank images with different f-numbers (F/13, F/22, F/36).
The second set of experiments used 110 images downloaded from the Internet, all taken with the same DSLR camera (hereafter referred to as Camera 3). Out of the 110 images, two different image sets were created. The first set consisted of 15 images in which there was no apparent sky or extensive flat region, and the second set consisted of 15 images in which the sky was clearly visible. For each image set, three dust templates were generated from 5, 10, and 15 images, as described in Section III-B. The amount of dust in each dust template is given in Table V. Not surprisingly, the greatest number of dust spots is achieved for 15 images with flat regions. To test the detection performance of these six templates, the proposed camera identification scheme was tested using the rest of the 100 images and 500 images taken with different DSLR and compact cameras. The true-positive (TP) and false-positive (FP) rates for the six different dust templates are given in Table VI. In the table, the TP rate increases significantly as the amount of dust in the template increases, with only a small increase in FP (see Tables V and VI).

It is seen from Table VI that high detection accuracy is possible even with only five images of smooth content used in template generation. Nevertheless, to achieve such high accuracy with images that do not contain any visible sky, the number of images used in template generation should be as high as possible.

To test source camera identification performance, 100 images were taken in different environments with different f-numbers with each of the Canon and Nikon DSLR cameras. To estimate the FP rate, 1000 images were taken with eight different digital cameras (including a Canon A80, Canon Rebel XT, Dimage Z3, Canon S2 IS, Cybershot, DSC-P72, DSC-S90, and EX-Z850). Then, the source camera identification procedure was performed on these 100+1000 images for both the Canon and Nikon dust templates. The identification confidence values for all 1100 images are given in Fig. 14, where the x-axis represents image indices and the y-axis represents the overall confidence values defined in (10). In the figures, the dot symbol corresponds to previously unseen images taken by the source DSLR camera. The dust templates for the Canon and Nikon DSLR cameras are comprised of 38 and 36 dust spots, respectively. The decision threshold is set to fix the FP probability at 0.002. The corresponding TP rate and accuracy, where accuracy is defined as the ratio of all true detections to the number of images, were computed as 0.610 and 0.963 for Nikon, and 0.920 and 0.991 for Canon. The TP rate for Nikon was significantly smaller than for the Canon image set, because the Nikon set contained many nonsmooth and complex images, which made the decision more prone to error.
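For concreteness, the sketch below shows one way a decision threshold can be fixed for a target FP probability and how the TP rate and accuracy (the ratio of all true detections to the number of images, as defined above) then follow; the confidence values are synthetic placeholders, not the experimental data.

```python
# Hedged sketch: picking a decision threshold for a target FP rate and computing the
# resulting TP rate and accuracy. The confidence arrays are synthetic placeholders.
import numpy as np

def threshold_for_fp(neg_conf: np.ndarray, target_fp: float) -> float:
    """Smallest threshold such that roughly a `target_fp` fraction of negatives exceed it."""
    return float(np.quantile(neg_conf, 1.0 - target_fp))

def evaluate(pos_conf, neg_conf, thr):
    tp = float(np.mean(np.asarray(pos_conf) >= thr))  # true-positive rate
    fp = float(np.mean(np.asarray(neg_conf) >= thr))  # false-positive rate
    n_pos, n_neg = len(pos_conf), len(neg_conf)
    accuracy = (tp * n_pos + (1.0 - fp) * n_neg) / (n_pos + n_neg)
    return tp, fp, accuracy

rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, 100)   # placeholder confidences for same-camera images
neg = rng.normal(0.0, 0.5, 1000)  # placeholder confidences for other-camera images
thr = threshold_for_fp(neg, target_fp=0.002)
print(evaluate(pos, neg, thr))
```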
TABLE VII
ROBUSTNESS TO JPEG COMPRESSION

Fig. 16. Dust template of Camera 3 and the images (downloaded from the Internet) used in template generation.

Fig. 17. Identification results for 100 images downloaded from the Internet (Camera 3).

Fig. 18. Effect of JPEG compression on dust-spot detection. The images are the outputs of NCC. The red points in the right figure show the falsely detected dust spots resulting from JPEG compression.

In these tests, input images were resized to 800 × 533 resolution regardless of the input resolution.

2) JPEG Compression: To analyze the impact of compression on source identification accuracy, 100 images from both the Nikon and Canon image sets and 500 images from other digital cameras were compressed at JPEG quality 50. The identification results are given in Table VII, from which it can be seen that the proposed scheme is viable even under strong JPEG compression. In the table, only one Nikon image is identified better under JPEG compression. The NCC outputs for the original and compressed versions of that image are given in Fig. 18. In the figure, it is seen that JPEG compression increases the number of local maxima exceeding the detection threshold in the NCC output. As a result, a dust spot which is not visible in the NCC output of the original image becomes detectable after JPEG compression. However, at the same time, the number of false detections has also increased significantly.

The proposed identification scheme can be improved by representing dust positions as nodes in a specific graph. This extension could make the proposed scheme more robust to geometric/desynchronization attacks. However, for now, we leave this extension as future work.

V. CONCLUSION

In this paper, we have introduced a new source DSLR camera identification scheme based on sensor dust traces. The location and shape of dust specks in front of the imaging sensor and their persistence make dust spots a useful fingerprint for DSLR cameras. Although many DSLR cameras come with built-in dust removal mechanisms, these hardware-based removal solutions are not as effective as they claim to be. Besides, since most dust spots are not visible or visibly irritating, most DSLR users ignore them completely. To the best of our knowledge, this is the first work in the literature which uses sensor dust spots for individual camera identification. The efficacy of the proposed camera identification scheme is tested on more than 1000 images from different cameras. Experimental results show that the proposed scheme provides high detection accuracy with very low false alarm rates. Our experimental tests also show that the proposed scheme is quite robust to JPEG compression and downsizing. The biggest challenge in this research direction is the detection of dust spots in very complex regions and at low f-numbers.

ACKNOWLEDGMENT

The authors would like to thank M. Pollitt at the University of Central Florida for suggesting this line of research.

REFERENCES

[1] D. L. M. Sacchi, F. Agnoli, and E. F. Loftus, "Changing history: Doctored photographs affect memory for past public events," Appl. Cognit. Psychol., vol. 21, no. 8, pp. 1005–1022, Nov. 2007.
[2] H. Farid, "Digital doctoring: Can we trust photographs?," in Deception: Methods, Motives, Contexts and Consequences. Stanford, CA: Stanford Univ. Press, 2007.
[3] H. T. Sencar and N. Memon, "Overview of state-of-the-art in digital image forensics," in Indian Statistical Institute Platinum Jubilee Monograph Series: Statistical Science and Interdisciplinary Research. Singapore: World Scientific, 2008.
[4] T. V. Lanh, K.-S. Chong, S. Emmanuel, and M. S. Kankanhalli, "A survey on digital camera image forensic methods," in Proc. IEEE Int. Conf. Multimedia Expo, 2007, pp. 16–19.
[5] T.-T. Ng, S.-F. Chang, C.-Y. Lin, and Q. Sun, "Passive-blind image forensics," in Multimedia Security Technologies for Digital Rights, W. Zeng, H. Yu, and A. C. Lin, Eds. New York: Elsevier, 2006.
[6] G. Friedman, "The trustworthy digital camera: Restoring credibility to the photographic image," IEEE Trans. Consum. Electron., vol. 39, no. 4, pp. 905–910, Nov. 1993.
[7] P. Blythe and J. Fridrich, "Secure digital camera," in Proc. Digital Forensic Research Workshop, Aug. 2004, pp. 11–13.
[8] M. Kharrazi, H. T. Sencar, and N. Memon, "Blind source camera identification," in Proc. IEEE Int. Conf. Image Processing, Oct. 2004, vol. 1, pp. 709–712.
[9] A. Swaminathan, M. Wu, and K. J. R. Liu, "Non intrusive forensic analysis of visual sensors using output images," IEEE Trans. Inf. Forensics Security, vol. 2, no. 1, pp. 91–106, Mar. 2007.
[10] Y. Long and Y. Huang, "Image based source camera identification using demosaicking," in Proc. IEEE 8th Workshop Multimedia Signal Processing, Victoria, BC, Canada, Oct. 2006, pp. 419–424.
[11] K. S. Choi, E. Y. Lam, and K. K. Y. Wong, "Source camera identification using footprints from lens aberration," Proc. SPIE Digital Photography II, vol. 6069, pp. 172–179, Feb. 2006.
[12] H. Gou, A. Swaminathan, and M. Wu, "Robust scanner identification based on noise features," Proc. SPIE Security, Steganography, Watermarking of Multimedia Contents IX, vol. 6505, p. 65050, Feb. 2007.
[13] Z. J. Geradts, J. Bijhold, M. Kieft, K. Kurosawa, K. Kuroki, and N. Saitoh, "Methods for identification of images acquired with digital cameras," Proc. SPIE Enabling Technologies for Law Enforcement and Security, vol. 4232, pp. 505–512, Feb. 2001.
[14] K. Kurosawa, K. Kuroki, and N. Saitoh, "CCD fingerprint method—Identification of a video camera from videotaped images," in Proc. ICIP, Kobe, Japan, 1999, pp. 537–540.
[15] J. Lukáš, J. Fridrich, and M. Goljan, "Digital camera identification from sensor noise," IEEE Trans. Inf. Forensics Security, vol. 1, no. 2, pp. 205–214, Jun. 2006.
[16] M. Chen, J. Fridrich, and M. Goljan, "Digital imaging sensor identification (further study)," Proc. SPIE Security, Steganography, Watermarking of Multimedia Contents IX, vol. 6505, p. 65050, Feb. 2007.
[17] M. Chen, J. Fridrich, M. Goljan, and J. Lukáš, "Source digital camcorder identification using sensor photo response non-uniformity," Proc. SPIE Security, Steganography, Watermarking of Multimedia Contents IX, vol. 6505, p. 65051, 2007.
[18] N. Khanna, A. K. Mikkilineni, G. T. C. Chiu, J. P. Allebach, and E. J. Delp, "Scanner identification using sensor pattern noise," Proc. SPIE Security, Steganography, Watermarking of Multimedia Contents IX, vol. 6505, p. 65051, Feb. 2007.
[19] A. E. Dirik, H. T. Sencar, and N. Memon, "Source camera identification based on sensor dust characteristics," in Proc. IEEE Workshop Signal Processing Applications for Public Security and Forensics, Apr. 2007, pp. 1–6.
[20] T. Gloe, E. Franz, and A. Winkler, "Forensics for flatbed scanners," in Proc. SPIE Security, Steganography, and Watermarking of Multimedia Contents IX, E. J. Delp III and P. W. Wong, Eds., Feb. 2007, vol. 6505, p. 65051.
[21] A. Krainiouk and R. T. Minner, "Method and system for detecting and tagging dust and scratches in a digital image," U.S. Patent 6 233 364 B1, May 2001.
[22] E. Steinberg, Y. Prilutsky, and P. Corcoran, "Method of detecting and correcting dust in digital images based on aura and shadow region analysis," U.S. patent application, pub. A1, Mar. 2005.
[23] A. Zamfir, A. Drimbarean, M. Zamfir, V. Buzuloiu, E. Steinberg, and D. Ursu, "An optical model of the appearance of blemishes in digital photographs," Proc. SPIE Digital Photography III, vol. 6502, pp. 0I1–0I12, Feb. 2007.
[24] E. Steinberg, P. Bigioi, and A. Zamfir, "Detection and removal of blemishes in digital images utilizing original images of defocused scenes," U.S. patent application, pub. A1, May 2007.
[25] J. Lewis, "Fast normalized cross-correlation," in Proc. Vision Interface, 1995, pp. 120–123.

Ahmet Emir Dirik received the B.S. and M.S. degrees in electrical engineering from Uludag University, Bursa, Turkey, and is currently pursuing the Ph.D. degree in signal processing at the Department of Electrical and Computer Engineering, Polytechnic University, Brooklyn, NY.
His research interests include multimedia forensics, information security, and data hiding.

Husrev Taha Sencar received the Ph.D. degree in electrical engineering from the New Jersey Institute of Technology, Newark, in 2004.
Currently, he is a Postdoctoral Researcher with the Information Systems and Internet Security Laboratory of the Polytechnic University, Brooklyn, NY. His research interests are the security of multimedia and communications.

Nasir Memon is a Professor in the Computer Science Department at the Polytechnic University, Brooklyn, NY.
He is the Director of the Information Systems and Internet Security (ISIS) Lab at Polytechnic University. His research interests include data compression, computer and network security, digital forensics, and multimedia data security.