

2014 Fifth International Conference on Signal and Image Processing
Digital Forensic of JPEG Images

Archana V. Mire, Deptt. of COED, SVNIT, Surat, India ([email protected])
Dr. S. B. Dhok, Deptt. of ETRX, VNIT, Nagpur, India ([email protected])
Dr. P. D. Porey, Deptt. of CED, SVNIT, Surat, India ([email protected])
Dr. N. J. Mistry, Deptt. of CED, SVNIT, Surat, India ([email protected])

Abstract— Since JPEG is the de facto image format adopted in most digital cameras and image editing software, a tampered image will often be a recompressed JPEG image. As JPEG works on an 8 by 8 block cosine transform, most of the tampering correlation inherited by a tampered image may get destroyed, making forgery detection difficult; it is therefore common practice for a forger to hide traces of resampling and splicing this way. JPEG forgery detection techniques try to identify inconsistencies in the artifacts introduced into the image by the 8 by 8 block DCT transform. The original image on which a forgery is created may be a compressed or an uncompressed image; similarly, the pasted area may come from a compressed or an uncompressed image. Since the two will have different compression histories, JPEG forgery detection techniques try to identify this difference, which may show up as a shift of the DCT block alignment, a difference in the primary quantization table, or a JPEG ghost.

Keywords—ADJPEG; NADJPEG; ghost detection; primary quantization table.
I. INTRODUCTION

Digital images have become a very important information carrier in our daily lives and are often used as evidence. With the development of user-friendly digital image processing software, it is very easy to create a convincing compressed image forgery from one or multiple images without leaving visible clues. To maintain the reliability of images, researchers are developing automatic image forensic approaches that require no human involvement and no prior knowledge of the test data, allowing a stable and non-subjective analysis.

JPEG compression exploits the fact that the human eye is more sensitive to low frequency content than to high frequency content. This is done by simply dividing each component in the DCT domain by a constant for that component and then rounding to the nearest integer. This rounding operation is the only lossy operation in the whole process (other than chroma subsampling). As a result, many of the higher frequency components are typically rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent. This greatly reduces the amount of information carried by the high frequency components. The elements of the quantization matrix control the compression ratio, with larger values producing greater compression.
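To make the quantization step concrete, the following sketch (ours, not from any of the surveyed methods; it assumes NumPy/SciPy and the standard JPEG luminance table) quantizes a single 8 by 8 block and counts how many coefficients survive the rounding.

```python
import numpy as np
from scipy.fftpack import dct

# Standard JPEG luminance quantization table (quality 50); each entry is
# the divisor applied to one DCT coefficient.
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def quantize_block(pixels, q_table):
    """Level-shift, 2-D DCT, divide by the table and round -- the single
    lossy step described above."""
    coeffs = dct(dct(pixels - 128.0, axis=0, norm='ortho'),
                 axis=1, norm='ortho')
    return np.round(coeffs / q_table).astype(int)

block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
q = quantize_block(block, Q50)
print(np.count_nonzero(q), "of 64 quantized coefficients are nonzero")
```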
The term "double JPEG compression (DJPEG)" refers to decompressing an already compressed JPEG image and recompressing it to store it back in the JPEG format. Such double JPEG compression has been observed to introduce periodic effects into the recompressed DCT coefficients, so the presence of these effects in a given JPEG image can be used to verify whether the image has undergone tampering. Several methods have been proposed in this research direction. In [1] it was demonstrated that the quantization table can be estimated from the image content; inconsistencies of this table over the image can be taken as evidence of tampering [2][3]. There exist many survey papers explaining various digital image forgery detection techniques, such as [4][5][6], but most of them are bibliographies listing the various methods. As this is a very vast area, it is not possible to explain all the techniques in one paper. In this paper we concentrate only on JPEG fingerprints and categorize JPEG forgery detection techniques by the hypothesis they assume about the forgery. We also report the results of some of the techniques that we have practically verified on the tampered image database CASIA v2 [7].

While creating a forgery, part of the source image is copied onto the target image to generate a new composite image, as illustrated in Fig. 1. If the source and composite images are stored in JPEG format with quality factors QF1 and QF2 respectively, an image block in the spliced portion, constituted by a number of JPEG blocks originally located at (x1, y1) in the source image, is copied and pasted at (x2, y2) in the target and composite images. Based on this shift, DJPEG compressions are categorized into aligned double JPEG compression (ADJPEG) and nonaligned double JPEG compression (NADJPEG). The composite image under suspicion can be segmented into smaller blocks, and these can be examined one by one.

Fig. 1. Example of JPEG image forgery: a block at (x1, y1) in (a) the source image is pasted at (x2, y2) in (b) the destination image to produce (c) the composite image.

The remaining content of this paper is organized as follows. Section 2 explains the various types of JPEG forgery detection techniques. Section 3 discusses the major work on these techniques and compares them. The paper is concluded in Section 4.

II. TYPES OF JPEG FORGERY DETECTION TECHNIQUES

Based on the hypothesis assumed for detecting the forgery, these techniques can be categorized as follows.

A. ADJPEG forgery detection

These techniques depend on the assumption that the JPEG grids used in the first and second compressions shown in Fig. 1 are exactly aligned with each other, satisfying equations (1) and (2):

cx = (x1 mod 8) - (x2 mod 8) = 0    (1)
cy = (y1 mod 8) - (y2 mod 8) = 0    (2)

These techniques mostly identify double compression, but double compression does not necessarily mean forgery. They can be applied to the different objects in the image after segmentation, and the forgery can then be traced through objects with differing compression histories (objects having a single compression history are real, while objects having a double compression history are forged).
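As a small illustration of equations (1) and (2), and of equations (3) and (4) in the next subsection, the helper below (our sketch, not code from any cited work) computes the relative grid shift of a pasted block and classifies the splicing accordingly.

```python
def grid_shift(x1, y1, x2, y2):
    """Relative shift of the 8x8 JPEG grid between the source location
    (x1, y1) and the pasted location (x2, y2), as in eqs. (1)-(4)."""
    cx = (x1 % 8) - (x2 % 8)
    cy = (y1 % 8) - (y2 % 8)
    return cx, cy

def splicing_type(x1, y1, x2, y2):
    # ADJPEG when both shifts vanish (eqs. (1),(2)); NADJPEG otherwise.
    return "ADJPEG" if grid_shift(x1, y1, x2, y2) == (0, 0) else "NADJPEG"
```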
Farid [8] analyzed ADJPEG compression with a 1D discretely sampled signal, quantizing the signal in four different ways: i) step size 2, ii) step size 3, iii) step size 3 followed by step size 2, and iv) step size 2 followed by step size 3. He found that when the step size increases, some of the bins of the resultant histogram remain empty, while when the step size decreases, some of the bins become overcrowded: certain bins contain more samples than their neighbors because, for example, the even bins receive samples from four original histogram bins while the odd bins receive samples from only two. He gave a complete theoretical proof of this phenomenon and showed that in both cases of double quantization a periodicity is introduced into the histogram, which can be identified as spikes in the Fourier domain and used to detect forgery.
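This observation is easy to reproduce numerically. The sketch below (our minimal simulation on a synthetic Gaussian signal, not the experiment of [8]) quantizes a signal once with step 2 and once with step 3 followed by step 2; the double-quantized histogram shows periodic empty bins, which surface as a strong off-DC spike in its Fourier transform.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 40.0, 100_000)

def quantize(x, step):
    return step * np.round(x / step)

single = quantize(signal, 2)                  # step size 2 only
double = quantize(quantize(signal, 3), 2)     # step 3 followed by step 2

for name, x in (("single", single), ("double", double)):
    hist, _ = np.histogram(x, bins=np.arange(-200, 202, 2))
    spectrum = np.abs(np.fft.rfft(hist - hist.mean()))
    # A pronounced off-DC peak betrays the periodic empty/crowded bins.
    print(name, "max off-DC spectral peak:", round(float(spectrum[1:].max()), 1))
```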
B. NADJPEG forgery detection

These techniques depend on the assumption that the JPEG grids used in the first and second compressions shown in Fig. 1 are not aligned with each other, satisfying equations (3) and (4):

cx = (x1 mod 8) - (x2 mod 8) ≠ 0    (3)
cy = (y1 mod 8) - (y2 mod 8) ≠ 0    (4)

Since the second compression is not aligned with the first, it is assumed that the original part of the forged image exhibits regular blocking artifacts while the pasted part does not. These techniques mostly use linear characteristics of the DCT coefficients and check all 8 × 8 = 64 possible alignments in the vertical and horizontal directions to maximize correlations.

Z. Qu et al [9] expressed the relation between the input blocks Sm,n and an output block Ŝm,n of nonaligned JPEG double compression using equation (5), writing the output block as a linear mixture of the four input blocks {Sm,n, Sm,n+1, Sm+1,n, Sm+1,n+1} that it overlaps:

Ŝm,n = Σ(i=0..1) Σ(j=0..1) A(cy,i) S(m+i),(n+j) A(cx,j)^T + Êm,n    (5)

where Êm,n represents the quantization noise of the second JPEG compression and {A(cx,0), A(cx,1), A(cy,0), A(cy,1)} is the set of mixing matrices, whose coefficients are determined by the shift distance (cx, cy) and the DCT transform matrix. De-mixing of equation (5) is supposed to be achieved when the NADJPEG image is shifted back to its original block segmentation. They calculated objective function values for every possible de-mixing matrix set to form an independent value map (IVM) and derived a relative asymmetric value map (RAVM) to measure the symmetry of the IVM. Finally, they trained a classifier on features extracted from ordinary/NADJPEG images to take the decision. The number of possible de-mixing matrix sets is limited and identical to the number of possible shift distances; consequently, the IVM was calculated by fitting the Benford law [10] model of the blockwise DCT (BDCT) on the decompressed JPEG image under 64 different segmentation schemes.

T. Bianchi et al [11] measured the clustering of DCT coefficients around the quantization lattice to measure the grid shift. They modeled aligned compression using equation (6):

I1 = D00^-1 (Q(D00 I)) + E1 = I + R1    (6)

where D00 models an 8 × 8 block DCT with the grid aligned with the upper left corner of the image, Q(·) models the quantization and dequantization processes, and E1 is the error introduced by rounding and truncating the output values to eight-bit integers. The last quantity R1 represents the overall approximation error introduced by JPEG compression with respect to the original image.

Nonaligned JPEG compression was modeled with equation (7), assuming that the original image was compressed with quality Q1:

I1 = D(y,x)^-1 (Q1(D(y,x) I)) + E1    (7)

where D(y,x) I are the unquantized DCT coefficients of I under the grid shifted by (y, x). Image I1 is then assumed to be JPEG compressed again with quality factor QF2, but with the block grid aligned with the upper left corner of the image. Representing the image after the second decompression as I2 = I1 + R2 and applying the block DCT with alignment (i, j) gives equation (8):

D(i,j) I2 = Q2(D(0,0) I1) + D(0,0) E2                     if i = 0, j = 0
          = Q1(D(y,x) I) + D(y,x) R2                      if i = y, j = x    (8)
          = D(i,j) D(0,0)^-1 Q2(D(0,0) I1) + D(i,j) E2    elsewhere

From equation (8) they found that when the DCT grid is aligned with the grid of either the last or the first compression, the DCT coefficients tend to cluster around the points of a lattice defined by the respective quantization table. In the case of the last compression (i = 0, j = 0) the spread around the lattice points is fixed and quite limited, whereas in the case of the first compression (i = y, j = x) it depends on the power of R2, i.e., on the quality of the second JPEG compression. When the DCT grid is aligned with neither of the two compressions, the DCT coefficients usually do not cluster around any lattice [11].
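The shifted block DCT used in equations (6)-(8) can be computed directly. The sketch below (ours; it assumes a grayscale image held as a NumPy array and keeps only the DC term of each block, as the analysis that follows does) evaluates the blockwise DCT under a candidate grid shift.

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dc(img, dy, dx):
    """DC coefficients of the 8x8 block DCT with the grid shifted by
    (dy, dx) -- a direct implementation of the operator D(y,x)."""
    h, w = img.shape
    crop = img[dy:h - (h - dy) % 8, dx:w - (w - dx) % 8].astype(float)
    blocks = crop.reshape(crop.shape[0] // 8, 8, crop.shape[1] // 8, 8)
    coeffs = dct(dct(blocks, axis=1, norm='ortho'), axis=3, norm='ortho')
    return coeffs[:, 0, :, 0].ravel()
```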
Since the quantization effects are more evident when most of the analyzed DCT coefficients are different from zero, they considered only the DC coefficient of each block and measured the clustering around a lattice through the periodicity of the histogram with an integer period. According to equation (8), in the presence of NA-JPEG both f00(Q2) and fyx(Q1) are expected to have a higher magnitude than the other values, whereas in the absence of NA-JPEG only f00(Q2) will have a higher magnitude. In the absence of NA-JPEG the value of fij(Q) mainly depends on the image content, so for a given Q there is little variation in fij(Q) with (i, j). In order to capture this behavior of fij(Q), an integer periodicity map (IPM) at the quantization step Q was defined.
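A crude stand-in for the IPM entries is sketched below (our simplification, not the authors' estimator): for each grid shift, score how strongly the DC coefficients cluster on multiples of a candidate step Q, reusing `blockwise_dc` from the previous sketch.

```python
import numpy as np

def lattice_score(dc, Q, tol=0.1):
    """Fraction of DC terms lying within tol*Q of a multiple of Q --
    high when the coefficients cluster on the quantization lattice."""
    return float(np.mean(np.abs(dc - Q * np.round(dc / Q)) < tol * Q))

def integer_periodicity_map(img, Q):
    # Peaks at (0, 0) and at (y, x) indicate NA-JPEG per equation (8).
    return np.array([[lattice_score(blockwise_dc(img, i, j), Q)
                      for j in range(8)] for i in range(8)])
```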
C. Primary Quantization Table Detection

Like the ADJPEG detection techniques, these are also based on the periodicity of the histogram when the step size increases or decreases. Here, the suspected region is recompressed at various quality levels to identify this periodicity.

Jan Lukas et al [12] suggested a simple method for quantization table detection. They calculated the histogram h0 of the absolute values of all DCT coefficients of interest from the suspected image. To disrupt the structure of the JPEG blocks, they cropped the image (for example, by 4 pixels) and JPEG compressed it again with each of a set of candidate quantization matrices Q1,1, ..., Q1,n. All n JPEG files were then decompressed, JPEG compressed again with the quantization matrix Q2, and the histograms h(Q1,1), ..., h(Q1,n) obtained. They estimated the original quantization matrix using equation (9):

Q1 = arg min over Q1,m of ||h(Q1,m) - h0||    (9)

As the norm, they simply used the sum of absolute values (the L1 norm). Although this method is relatively robust and reliable, it requires a limited set of n possible quantization matrices. When dealing with custom quantization matrices, it is necessary to compute the minimum for each DCT coefficient of interest Dij separately.
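The search of equation (9) reduces to a few lines once the recompression machinery is abstracted away. In the sketch below (ours), `recompress_hist` is an assumed helper (hypothetical, not a library call) that crops the suspected image, compresses it with a candidate Q1 and then with the known Q2, and returns the DCT histogram on the same bins as h0.

```python
import numpy as np

def estimate_primary_qtable(h0, candidates, recompress_hist):
    """Pick the candidate primary table whose simulated double-compression
    histogram is closest to the observed one in the L1 norm (eq. (9))."""
    errors = [np.abs(recompress_hist(Q1) - h0).sum() for Q1 in candidates]
    return candidates[int(np.argmin(errors))]
```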

J. Fridrich et al [13] computed the quantization step by measuring the quantization error Ei(q) of every possible step q for the DCT coefficients dk(i), 0 ≤ i < 64, k = 1, ..., T, over all 8 × 8 blocks, using equation (10):

Ei(q) = (1/T) Σ(k=1..T) | dk(i) - q · round(dk(i)/q) |    (10)

where T is the total number of 8 × 8 blocks and dk(i) is the i-th DCT component of the k-th block. A local minimum occurs at the correct step and at all integer divisors of Q. Nevertheless, it was difficult to avoid picking some incorrect local minimum.
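Equation (10) translates directly into code. The sketch below (ours) scores every candidate step; because Ei(q) also vanishes at the integer divisors of the true step (q = 1 fits any integer data), it scans from the largest step downward rather than taking the global minimum.

```python
import numpy as np

def quantization_step_error(d, q):
    """E_i(q) of equation (10): mean distance of the coefficients d of one
    DCT frequency from the nearest multiple of the candidate step q."""
    return float(np.mean(np.abs(d - q * np.round(d / q))))

def estimate_step(d, max_q=64, tol=0.1):
    # Return the largest step with near-zero error, sidestepping the
    # spurious minima at divisors of the true quantization step.
    for q in range(max_q, 1, -1):
        if quantization_step_error(d, q) < tol:
            return q
    return 1
```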
D. JPEG Ghost detection

In this technique, the differences between the given tampered image and its recompressed versions at various quality levels are searched for minima, which appear as dark "ghosts". It relies on the observation that when a set of DCT coefficients c1 quantized by an amount q1 is subsequently quantized a second time by an amount q2, yielding coefficients c2, the difference between c1 and c2 is minimal when q2 = q1 and increases as the difference between q2 and q1 grows [14] (except if q2 = 1, i.e., no second quantization). Specifically, if q2 > q1 then the coefficients c2 become increasingly sparse relative to c1, and if q2 < q1 then, even though the second quantization is finer than the first, the coefficients c2 shift relative to c1. If each DCT coefficient is compared separately in the YCbCr channels, multiple minima may occur, so the cumulative effect of quantization on the underlying pixel values is considered instead. To compensate between the low and high frequency regions present in the image, a spatially averaged and normalized difference measure is used.

Farid's ghost detection approach requires a lot of human interaction, which in practice takes a lot of time, as a ghost can be visually hard to distinguish from noise and the number of difference images can become very large during testing. Fabian Zach et al [15] suggested a method based on analysis of the difference curve for ghost detection. They estimated the quality level q1 as the global minimum over the curves derived from all windows in the image and, taking c(x) as the value of the difference curve at quality level x, extracted six features defined on c(x) for 30 ≤ x ≤ q1. The core idea of this feature set is to identify the steeper decay of the double-compression difference curve. For each feature a histogram was computed from 2.5 × 10^6 image windows, and Zach et al [15] evaluated different classification algorithms: thresholding, neural networks, random forests, AdaBoost and Bayes. The values for thresholding were determined by computing the mean values of the feature distributions for single- and double-compressed areas; the actual threshold was then taken as the mean of these means for each feature. A block was considered double-compressed if at least three quarters of the feature values exceeded their respective thresholds.
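A bare-bones ghost computation in the spirit of Farid [14] is sketched below (ours, using Pillow; the block-wise spatial averaging and normalization of the actual method are omitted): recompress the suspect image over a range of qualities and record the mean squared difference, whose dip marks the candidate primary quality.

```python
import numpy as np
from io import BytesIO
from PIL import Image

def jpeg_ghost_curve(path, qualities=range(30, 95, 5)):
    """Mean squared difference between the suspect image and its
    recompressed versions; a dip ("ghost") suggests the primary quality."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    curve = {}
    for q in qualities:
        buf = BytesIO()
        Image.fromarray(img.astype(np.uint8)).save(buf, "JPEG", quality=q)
        buf.seek(0)
        rec = np.asarray(Image.open(buf).convert("L"), dtype=float)
        curve[q] = float(((img - rec) ** 2).mean())
    return curve
```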
III. COMPARISON

A. Detection of A-DJPG Compression

Popescu & Farid [16] exploited the fact that double JPEG compression amounts to double quantization of the block DCT coefficients, which introduces specific artifacts visible in the histograms of these coefficients. They devised a quantitative measure for these artifacts and employed it to discriminate between single and double JPEG compressed images. There are certain theoretical limitations that make double compressed images impossible to detect in certain cases: images compressed first with a high quality and then with a significantly lower quality are generally harder to detect. This method only reveals the compression history of a given image and cannot detect the local tampered region within it. If a manipulated image in JPEG format is cropped before being re-saved, the artifacts may change and the scheme may become vulnerable.

Based on the observation that in natural images the distribution of the first digit of the DCT coefficients of single JPEG compressed images follows the generalized Benford's law [10], two detection methods [17,18] were proposed. Their experimental results show that each compression step alters the statistics of the first digit distribution; as a consequence, the fit provided by the generalized Benford's law becomes decreasingly accurate with the number of compression steps. Li et al. [17] proposed mode-based first digit features (MBFDF) to detect whether a JPEG image has undergone double JPEG compression. This method is superior to all previous methods for distinguishing between single and double JPEG compression; however, like the earlier approaches, it can only reveal the compression history. The performance of these methods does not seem adequate, and their results are outperformed by later works, for example [19], where, starting from the observation that recompression induces periodic artifacts and discontinuities in the image histogram, a set of features is derived from the pixel histogram to train a support vector machine (SVM) detecting ADJPEG compression. In [20] the histograms of a subset of 9 DCT coefficients were used to train an SVM, but detection was tested only for secondary quality factors set to 75 or 80. F. Zhao [21] extracted the 64 modes of the DCT coefficient histograms and calculated the first 4 order moments of each histogram's characteristic function to construct the feature vector, so that the moment features cover all the distinguishable modes. They also used an SVM to train and classify single and double compressed JPEG images, and combined the moment features with the mode-based first digit features (MBFDF) to improve detection. Their results match the prior works for most pairs of compression quality factors.
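The first-digit statistic behind these detectors is straightforward to extract. The sketch below (ours, not the MBFDF feature set of [17]) computes the empirical first-digit distribution of the nonzero DCT coefficients together with the generalized Benford model they are fitted against.

```python
import numpy as np

def first_digit_distribution(coeffs):
    """Empirical probabilities of the leading digits 1..9 of the nonzero
    DCT coefficients."""
    mags = np.abs(coeffs[coeffs != 0]).astype(float)
    digits = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
    return np.bincount(digits, minlength=10)[1:10] / len(digits)

def generalized_benford(n, N=1.0, s=0.0, q=1.0):
    # p(n) = N * log10(1 + 1/(s + n^q)); N = 1, s = 0, q = 1 recovers the
    # classical Benford law of [10].
    return N * np.log10(1.0 + 1.0 / (s + n ** q))
```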
B. Detection of NA-DJPG artifacts

Qu et al [9] used a de-mixing approach to determine the shift of the primary JPEG compression, developing a method that handles arbitrary block grid alignments using independent component analysis. In [22], the periodicity of blockwise DCT coefficients is studied. However, these algorithms investigate NADJPEG compression of a possibly forged area, so the localization of the possible forgery is still an open issue. All these methods assume different quantization matrices for the first and second compressions; Huang et al. [23] showed how to detect recompression with the same quantization matrix by exploiting the numerical imprecision of the JPEG encoder. While the majority of the presented approaches rely on statistics, Huang et al. [24] presented an efficient technique to automatically detect duplicated regions in a tampered image; this method fails if the tampered region comes from another image. Starting from an idea proposed in [25], in [26] an 8 × 8 blocking artifact characteristics matrix (BACM) is computed in the pixel domain to measure the symmetry of the blocking artifacts in a JPEG image; an asymmetric BACM reveals the presence of misaligned JPEG compressions. Some features, cumulated over the whole image, are extracted from the BACM and fed to a classifier in order to distinguish regions in which blocking artifacts are present from those in which they are not. If the suspected region (which is known by hypothesis) does not exhibit blocking artifacts, it is classified as tampered. Results are good only when the quality factor of the last compression is much higher than the one used for the first; furthermore, the method is reliable only when the tampered region is very large, that is, above 500 × 500 pixels. The previous algorithm is modified in [9] to localize the tampered regions without knowing them in advance. In [27], the blocking artifacts in the pixel domain are again investigated. As a first step, a measure of the blockiness of each pixel is calculated by applying a first-order derivative in the 2D spatial domain. From the absolute value of this measure, a linear dependency model of pixel differences is built for the within-block and across-block pixels. An EM algorithm is used to estimate the probability of each pixel following this model. Finally, by computing the spectrum of the probability map obtained in the previous step, the authors extract several statistical features, fed to an SVM; this method shows higher performance with respect to [25].
images. Starting from an idea proposed in [25], in [26], an 8 ×
detection method which is able to detect either block-aligned or
8 blocking artifact characteristics matrix (BACM) is computed
misaligned recompression by combining periodic features in
in the pixel domain to measure the symmetrical property of the
spatial and frequency domains that are modified by
blocking artifacts in a JPEG image. An asymmetric BACM
recompression. In particular, the scheme computes a set of
will reveal the presence of misaligned JPEG compressions.
features to measure the periodicity of blocking artifacts,
Some features, cumulated over the whole image, are extracted
perturbed in the presence of NA-DJPG compression, and a set
from the BACM and fed to a classifier in order to distinguish
of features to measure the periodicity of DCT coefficients,
regions in which blocking artifacts are present from those in
perturbed when an A-DJPG compression is applied. They used
which they are not. If the suspected region (which is known by
set of nine periodic features to train a classifier to detect
hypothesis) does not exhibit blocking artifacts, then it is
DJPEG compression. They claimed that their method
classified as tampered. Results are good only when the quality
outperforms the scheme proposed in [25] for the NA-DJPG

134
Based on an improved and unified statistical model characterizing the artifacts that appear in the presence of both A-DJPG and NA-DJPG [11], an algorithm was proposed that automatically computes a likelihood map indicating, for each 8 × 8 DCT block, the probability of its being doubly compressed. The validity of the method was assessed by evaluating the performance of a detector based on thresholding the likelihood map. The results show that, with QF1 and QF2 the quality factors of the first and second compressions, the proposed method is able to correctly identify traces of A-DJPG compression unless QF2 = QF1 or QF2 < QF1, whereas it is able to correctly identify traces of NA-DJPG compression whenever QF2 > QF1 and there is a sufficient percentage of doubly compressed blocks.
D. Primary Quantization Table detection

Lin et al. [30] showed how the use of different quantization matrices in the first and second compressions leads to a telltale high frequency component in the coefficient spectrum. However, their method assumes that the JPEG block grids of the first and second compressions are exactly aligned. By relying on the idempotency of the coding process, Xian-zhe et al. [31] presented a method for identifying tampering and recompression in a JPEG image based on the requantization of transform coefficients. Fan and de Queiroz [1] proposed an algorithm to detect whether an image has previously been JPEG compressed and to further locate the position of the block artifacts. The result of this method is easily disturbed by mismatched block artifacts when a JPEG image is tampered by copy-paste. These techniques work better for ADJPEG than for NADJPEG, but they are severely affected by the frequency content of the images. If the quantization coefficients are divisors of each other, the chances of detecting multiple primary quantization tables increase.

E. JPEG Ghost Detection

Farid's JPEG ghost detection method [14] assumes that the first compression step was conducted at a lower quality level than the second, so that it suffices to recompress the image during analysis at various lower quality levels. Difference images are then created by subtracting the original image from the recompressed versions. For approximately correct recompression parameters, the doubly compressed region appears as a dark ghost in the difference image. The simplicity of this approach has several advantages: it is easy to implement, and its validity can be simply explained, visually verified and demonstrated to non-technical experts. However, an important drawback is the lack of automation. A ghost appears only if the JPEG grid of the later quantization q2 is exactly aligned with the JPEG grid of the first compression using q0; thus, all 64 possible JPEG grid alignments must be visually examined. Ultimately, a human expert has to browse 64·|Q| images, where |Q| is the number of difference images with q2 < q1. In practice, there are often more than 300 difference images, and it is currently infeasible for a human expert to visually examine the difference images for all possible parameters. Another disadvantage of this method is the assumption of a uniform central forged area, which is rarely realistic. When we executed Farid's ghost detector on the CASIA tampered image database v2 [7] with a uniform central forged region, the ghost appeared, but when the detector was applied to the tampered images, the ghost appeared as noise throughout the image.

Zach et al [15] used specificity and sensitivity as quality measures:

specificity = TN / (TN + FP)        sensitivity = TP / (TP + FN)

where TP = P(dc|dc), TN = P(sc|sc), FP = P(dc|sc) and FN = P(sc|dc); TP counts true positives, TN true negatives, FP false positives and FN false negatives, with sc meaning single compressed and dc double compressed. Their best results were achieved by training a boosted classifier on 6 specially designed features, reaching a best specificity and sensitivity of 0.912 and 0.957 respectively. For tampering detection, the difference between the primary and secondary compression quality may be as small as 5. Like the earlier method, it was tested only on a self-generated database, so its accuracy is still in doubt.
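For reference, the two quality measures reduce to the following few lines (our sketch) given boolean ground-truth and predicted double-compression labels per block:

```python
import numpy as np

def specificity_sensitivity(pred_dc, true_dc):
    """specificity = TN/(TN+FP), sensitivity = TP/(TP+FN); 'dc' marks a
    block labelled double compressed, 'sc' single compressed."""
    tp = int(np.sum(pred_dc & true_dc))
    tn = int(np.sum(~pred_dc & ~true_dc))
    fp = int(np.sum(pred_dc & ~true_dc))
    fn = int(np.sum(~pred_dc & true_dc))
    return tn / (tn + fp), tp / (tp + fn)
```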
IV. CONCLUSION

Although the major DJPEG forgery detection methods claim good performance, they are often tested on self-generated databases. Rather than providing a full-fledged forgery detection mechanism, most of the techniques have only analyzed double quantization artifacts. When we applied some of them to identify forgeries in the CASIA Tampered Image database v2 [7], we got very poor results. Though some automated approaches for detecting the forged area exist, most approaches require human intervention to find it. Compared to NADJPEG, ADJPEG techniques give better performance, but most tampered images are NADJPEG compressed.

REFERENCES

[1] Z. Fan and R. de Queiroz, "Maximum likelihood estimation of JPEG quantization table in the identification of bitmap compression history", Proceedings of ICIP '00, 10-13 Sept. 2000, pp. 948-951.
[2] Z. Fan and R. L. de Queiroz, "Identification of bitmap compression history: JPEG detection and quantizer estimation", IEEE Transactions on Image Processing, 12(2):230-235, 2003.
[3] J. D. Kornblum, "Using JPEG quantization tables to identify imagery processed by software", Digital Investigation, 5(1):S21-S25, 2008.
[4] Archana V. Mire, Dr S. B. Dhok, Dr N. J. Mistry, Dr P. D. Porey, "Catalogue of Digital Image Forgery Detection Techniques, an Overview", in proceedings of the Third International Conference on Advances in Information Technology and Mobile Communication, AIM 2013.
[5] Babak Mahdian, Stanislav Saic, "A bibliography on blind methods for identifying image forgery", Signal Processing: Image Communication 25 (2010) 389-399.
[6] Hany Farid, "Image Forgery Detection, A survey", IEEE Signal Processing Magazine, March 2009.
[7] CASIA Tampered Image Detection Evaluation Database, http://forensics.idealtest.org:8080/index_v2.html
[8] H. Farid, "Digital Image Forensics", Department of Computer Science.
[9] Z. Qu, W. Luo, and J. Huang, "A convolutive mixing model for shifted double JPEG compression with application to passive image authentication," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '08), pp. 1661-1664, IEEE, Las Vegas, Nev, USA, March-April 2008.
[10] F. Benford, "The law of anomalous numbers," Proceedings of the American Philosophical Society, vol. 78, no. 4, pp. 551-572, 1938.
[11] T. Bianchi, A. Piva, "Analysis of Nonaligned Double JPEG Artifacts for the Localization of Image Forgeries", IEEE International Workshop on Information Forensics and Security (WIFS), 2011, pp. 1-6.
[12] J. Lukas and J. Fridrich, "Estimation of primary quantization matrix in double compressed JPEG images", Digital Forensic Research Workshop, 2003.
[13] J. Fridrich, M. Goljan, and R. Du, "Steganalysis based on JPEG compatibility", SPIE Multimedia Systems and Applications, vol. 4518, Denver, CO, Aug. 2001.
[14] H. Farid, "Exposing digital forgeries from JPEG ghosts", IEEE Trans. Inf. Forensics Security 4(1), 154-160 (2009).
[15] Fabian Zach, Christian Riess and Elli Angelopoulou, "Automated Image Forgery Detection through Classification of JPEG Ghosts", Pattern Recognition, Joint 34th DAGM and 36th OAGM Symposium 2012, pp. 185-194.
[16] A. Popescu and H. Farid, "Statistical Tools for Digital Forensics", International Workshop on Information Hiding, 2004.
[17] B. Li, Y. Q. Shi, J. Huang, "Detecting doubly compressed JPEG images by using mode based first digit features", IEEE International Workshop on Multimedia Signal Processing (Cairns, Queensland, Australia, 2008), pp. 730-735.
[18] D. Fu, Y. Q. Shi, Q. Su, "A generalized Benford's law for JPEG coefficients and its applications in image forensics", Proc. SPIE 6505, 65051L1-65051L11 (2007).
[19] X. Feng and G. Doërr, "JPEG recompression detection," in Media Forensics and Security II, vol. 7541 of Proceedings of the SPIE, January 2010, 75410J.
[20] T. Pevný and J. Fridrich, "Detection of double-compression for applications in steganography," IEEE Transactions on Information Forensics and Security, vol. 3, no. 2, pp. 247-258, 2008.
[21] Feng Zhao, Zhenhua Yu, Shenghong Li, "Detecting Double Compressed JPEG Images by Using Moment Features of Mode Based DCT Histograms", proceedings of the International Conference on Multimedia Technology (ICMT), 2010, pp. 1-4.
[22] Z. Liu, X. Li, Y. Zhao, "Passive detection of copy-paste tampering for digital image forensics", in Proc. Fourth Int. Conf. Intelligent Comput. Technol. Automation 2, 649-652 (2011).
[23] F. Huang, J. Huang and Y. Q. Shi, "Detecting Double JPEG Compression with the Same Quantization Matrix", IEEE Transactions on Information Forensics and Security, Vol. 5, No. 4, pp. 848-856, 2010.
[24] Y. Huang, W. Lu, W. Sun, D. Long, "Improved DCT-based detection of copy-move forgery in images", Forensic Sci. Int. 206, 178-184 (2011).
[25] W. Luo, Z. Qu, J. Huang, and G. Qiu, "A novel method for detecting cropped and recompressed image block," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '07), vol. 2, pp. 217-220, April 2007.
[26] M. Barni, A. Costanzo, and L. Sabatini, "Identification of cut & paste tampering by means of double-JPEG detection and image segmentation," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '10), pp. 1687-1690, IEEE, Paris, France, May-June 2010.
[27] Y. L. Chen and C. T. Hsu, "Image tampering detection by blocking periodicity analysis in JPEG compressed images," in Proceedings of the IEEE 10th Workshop on Multimedia Signal Processing (MMSP '08), pp. 803-808, October 2008.
[28] Y. L. Chen and C. T. Hsu, "Detecting recompression of JPEG images via periodicity analysis of compression artifacts for tampering detection," IEEE Transactions on Information Forensics and Security, vol. 6, no. 2, pp. 396-406, 2011.
[29] M. Barni, A. Costanzo, and L. Sabatini, "Identification of cut & paste tampering by means of double-JPEG detection and image segmentation," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '10), pp. 1687-1690, IEEE, Paris, France, May-June 2010.
[30] Z. Lin, J. He, X. Tang, C.-K. Tang, "Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis", Pattern Recognit. 42, 2492-2501 (2009).
[31] M. Xian-zhe, N. Shao-zhang, and Z. Jian-chen, "Tamper detection for shifted double JPEG compression," in Proceedings of the 6th International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP '10), pp. 434-437, October 2010.
