
Evolving Systems

https://doi.org/10.1007/s12530-019-09309-1

ORIGINAL PAPER

A novel method for digital image copy-move forgery detection and localization using evolving cellular automata and local binary patterns

Gulnawaz Gani¹ · Fasel Qadir¹

Received: 23 May 2019 / Accepted: 16 October 2019


© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract
Copy-Move Forgery Detection (CMFD) methods aim to forensically analyze a digital image for a possible content duplication
manipulation. In the past, many block-based algorithms have been proposed for detection and localization of CMF. However,
the existing solutions show limited efficacy for images compressed in JPEG and lack robustness against post-processing
attacks such as noise addition, blurring, etc. To address this problem, we propose a new block-based passive method for
detection and localization of CMF in this paper. Passive methods, as opposed to active methods, are used to authenticate the
image content in the absence of any pre-embedded information such as watermarks. In our proposed scheme, a suspicious
input image to be analyzed is first low pass filtered and converted to Local Binary Patterns (LBP) image. The LBP texture
image is then divided into overlapping blocks. Next, a compact five-dimensional feature vector is extracted from each block
by employing thresholding and Cellular Automata. The set of feature vectors is sorted lexicographically to bring the copy-
pasted blocks nearer to each other. Finally, the feature matching step is used to reveal the duplicate blocks. Our experimental
results indicate that the proposed method performs exceptionally well relative to other state-of-the-art methods under different
image manipulation scenarios.

Keywords Copy-Move Forgery · Cellular Automata · Passive method · Thresholding · Local Binary Patterns

1 Introduction

Digital image manipulations produced using sophisticated image editing software such as Adobe Photoshop, GIMP, etc. have undesirable consequences across all domains. One of the common image manipulation scenarios is Copy-Move Forgery (CMF). In a CMF, image content is copied and pasted within the same image. Figure 1 shows an example image and its tampered version produced using CMF. Usually, the copied content is subjected to intermediate attacks (e.g. rotation, scaling, blur, etc.) before it is pasted at the target place. Also, prior to the release of the manipulated image into the public domain, several post-processing operations such as blur, brightness change, contrast change, compression, etc. may be applied to the image to hide any traces of tampering.

The development of automated forensic tools for verifying the authenticity of a digital image is motivated by the fact that humans are not good at spotting skillfully crafted fake photos. For example, the findings reported in (Nightingale et al. 2017) show that people have limited ability to detect and locate real-world photo manipulations. The field of digital image forensics has been set up with the goal of verifying the integrity of digital images. The two major forensic approaches for verifying the credibility of an image are active and passive forensics. Active approaches work by embedding pre-determined information such as watermarks in the image and later verifying the credibility based on this pre-embedded information. Passive approaches are a more recent field used to authenticate images which are not watermarked or for which no reference image is known. Passive approaches work on the assumption that edited images exhibit detectable artifacts, for example, resampling and noise inconsistencies (Lin et al. 2018). The proper exploitation of these traces could become a potential source of digital evidence for tampering detection.

* Corresponding author: Fasel Qadir, [email protected]
¹ Department of Computer Science, University of Kashmir, North Campus, Delina, Baramulla, Jammu and Kashmir 193103, India
Fig. 1 Copy-move forgery example

The mechanism of copy-move forgery creates two similar regions inside the image. The general methodology to detect and localize these similar areas within a CMF image is to extract features and match them to determine the similar ones, assuming that copy-pasted regions yield similar features. Thus, the two major components of Copy-Move Forgery Detection (CMFD) and localization methods are feature extraction and feature matching. In the literature, a distinction between two types of methods is often made: (1) key-point based and (2) block-based. Block-based methods extract features from small overlapping image regions called blocks, whereas key-point based methods extract features from the image without any image subdivision. Key-point based methods are fast and work well in the presence of geometric attacks. However, they perform inadequately for images containing smooth regions and images containing similar but genuine objects (Wen et al. 2016). On the other hand, block-based methods are highly desired for close examination of an image for a possible copy-move forgery, despite their unattractive computational time (Christlein et al. 2012).

Many existing block-based methods fall short in detecting and localizing CMF in images compressed in the lossy JPEG format and lack robustness against post-processing attacks such as noise addition, blurring, etc. In this paper, we propose a new block-based method for CMFD and localization to circumvent these two drawbacks. We propose a new idea for extracting features from blocks based on thresholding and Cellular Automata. We experimentally compare and evaluate our method with seven other methods in the literature and report promising results.

The remainder of this paper is organized as follows. In Sect. 2, we give a brief overview of some representative works related to the CMFD problem. In Sect. 3, a brief introduction of the different concepts (Cellular Automata, Local Binary Patterns, noise filtration and thresholding) used in this paper is presented. Section 4 presents the details of the proposed method. Section 5 describes the datasets and evaluation criteria. Section 6 presents the different experiments performed in this study and also shows a comparative evaluation of the proposed method. Finally, Sect. 7 concludes the paper.

2 Related work

Over the past several years, many block-based and key-point based methods for copy-move forgery detection and localization have been developed. Below, we present some methods that we consider representative of the entire field.

One of the early block-based methods is proposed in (Fridrich et al. 2003). In this work, Discrete Cosine Transform (DCT) features are extracted from overlapping image blocks. To improve the matching performance, the features are lexicographically sorted, which brings similar features nearer to each other. This technique has shown limited robustness against post-processing manipulations. Also, the very high time requirements due to the high dimensionality of the feature vectors make it unattractive.

DCT-based features have been widely researched in the past years. For example, in (Huang et al. 2011), the authors proposed a DCT-based method similar to the work of (Fridrich et al. 2003). They showed that it is not necessary to use all the DCT coefficients to construct a feature vector for a block, because most of the information in a block tends to be concentrated in the first few low-frequency DCT coefficients. They therefore reduced the size of the feature vector to just the first sixteen coefficients, making the subsequent matching process efficient, and also acquired improved robustness to post-processing operations. The authors in (Cao et al. 2012) proposed a method which focuses on further reducing the feature vector length. They extracted four features from each DCT-transformed block based on the magnitude information of the DCT coefficients. However, this method has been tested only against a subset of image manipulations and lacks robustness against most post-processing manipulations. In another DCT-based work (Zhao and Guo 2013), the authors proposed a solution using SVD (Singular Value Decomposition) and DCT for feature extraction. One more method, given in (Hayat and Qazi 2017), proposes a DCT and DWT (Discrete Wavelet Transform) based solution; the authors extract DCT features from overlapping blocks derived from an approximation sub-band of the DWT. The latter two studies lack a comprehensive evaluation of the proposed features in different post-processing scenarios and have shown limited performance on JPEG images.

The authors in (Wang and Wang 2018) propose a perceptual hash based solution. They have not tested their method on JPEG images. A recent work proposed in (Mahmood et al. 2018) has applied the Stationary Wavelet Transform (SWT) to the input image.
To extract reduced feature descriptors, the authors then apply DCT to the overlapping blocks obtained from the LL sub-band of the SWT output image. The results demonstrate improvements in both robustness and time complexity relative to other techniques in the literature. However, it shows limited robustness to JPEG compression. In addition, they have not tested the method for robustness against noise attacks.

The use of image moments has been investigated for copy-move forgery detection by many authors. The most notable work among them is given in (Ryu et al. 2013), which is based on Zernike moments. Zernike moments are essentially rotation invariant and therefore show very limited detection in the presence of other image manipulation artifacts such as noise, compression and scaling.

The use of texture features has also been investigated. Two such works are (Davarzani et al. 2013) and (Li et al. 2013). They both use the LBP operator for feature extraction. In the former work, the authors comprehensively investigate multi-resolution LBP operators for copy-move forgery detection. The published results demonstrate robustness to post-processing operations at the cost of high time requirements. The authors of the latter work do not compare their method with other related techniques.

Some histogram-based CMFD techniques have been proposed recently. For example, in (Lee et al. 2015), the authors proposed a technique in which they use histogram of oriented gradient (HOG) features for the detection of copy-move forgeries. HOG features are based on shape and texture. This technique can precisely locate plain CMF (when duplicated regions are perfect copies of each other) but lacks robustness against post-processing manipulations.

In the work (Tralic et al. 2016), the authors employ Cellular Automata for feature extraction. They make use of Cellular Automata to extract texture features from each block into a reduced binary feature vector of size 32. The feature extraction process proposed in it is inefficient due to the use of an interpolation process, and the results demonstrate that the proposed method has limited robustness to noise and JPEG compression.

Among the key-point based methods, SIFT (Scale Invariant Feature Transform) and SURF (Speeded-Up Robust Features) are the most widely used feature descriptors. Notable works using these techniques include Amerini et al. (2011), Xu et al. (2010) and Ardizzone et al. (2015). Key-point based methods, in general, perform remarkably well for detecting rotated and scaled copies; however, their limited robustness is a major concern when post-processing operations like blurring, compression, etc. are used, because these operations discard most high-frequency information from the image.

Based on the available literature, it is clear that a new or modified method that compensates for the disadvantages of the existing solutions is needed.

3 Background

This section gives a brief introduction to Cellular Automata (CA) and Local Binary Patterns (LBP) based on Rosin (2010) and Mehta and Egiazarian (2016).

3.1 Cellular Automata

A Cellular Automaton (Rosin 2010) is a discrete model (discrete in time and space), composed of a d-dimensional grid of cells. Each cell can be in one of a finite number of states. For example, in a two-state CA, each cell can be either on (1) or off (0). All cells change their state simultaneously based on a local update rule, also known as a transition function. The update rule is a function that returns the new state for a current cell at time t based on what is happening in the neighborhood of this cell at time t−1, i.e. the new state of a current cell depends on its own state and the state of the cells in its neighborhood at time t−1. The neighborhood is always local and is defined relative to the cell itself. Initially, at time t = 0, a CA starts by assigning a state to each cell in the grid. As time advances, the cells change their states after discrete time steps according to some update rule, creating new generations. This evolution of cell states over time can be viewed as a computational procedure and has been applied to solve many tasks. Some important applications where evolving systems have been successfully used include image scrambling (Qadir et al. 2013; Jeelani and Qadir 2018), noise removal (Qadir and Shoosha 2018; Jeelani and Qadir 2019), object detection, identification and tracking in video streams (Angelov et al. 2011), and navigation (Zhou and Angelov 2007).

The use of extremely simple CA rules produces complex and unpredictable spatiotemporal patterns (patterns that evolve in both space and time). In this paper, however, we partly depend on the inverse problem of extracting the underlying local rules from a given spatiotemporal pattern. Some major contributions in this direction include (Billings and Yang 2003) and (Sun et al. 2011). For example, in (Billings and Yang 2003), the authors identify both the rules and the neighborhood using a reformed orthogonal least squares algorithm. Since the rule derivation process proposed there is quite complicated and time-consuming, in this paper we use a modified procedure inspired by the idea presented in (Sun et al. 2011).
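To make the forward process described in Sect. 3.1 concrete, the following minimal sketch (our illustration, not code from the paper) performs one synchronous update of a two-state 2-D CA under an arbitrary rule table, using the four-cell neighborhood and periodic boundary conditions that reappear in Sect. 4.2.1; the rule-table encoding is an assumption made only for this example.

import numpy as np

def ca_step(grid, rule_table):
    """One synchronous update of a two-state 2-D CA.

    grid       : 2-D array of 0/1 cell states.
    rule_table : dict mapping a 4-bit neighborhood code (left, down,
                 right, up) to the next state of the centre cell.
    Periodic (wrap-around) boundary conditions are used.
    """
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    # Encode the four neighbours of every cell as an integer 0..15.
    code = left * 8 + down * 4 + right * 2 + up * 1
    new = np.zeros_like(grid)
    for pattern, state in rule_table.items():
        new[code == pattern] = state
    return new

Iterating ca_step produces the spatiotemporal patterns referred to above; the proposed method works in the opposite direction and infers an approximate rule from one such pattern.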
3.2 Local Binary Patterns (LBP)

Local Binary Patterns (Mehta and Egiazarian 2016) is one of the widely used texture description operators due to its computational efficiency and high discriminative power. For an M × N grayscale image, if g_c(x_c, y_c) is the intensity value of a center pixel, then the value of the LBP code for this pixel is obtained by thresholding the intensity values g_p on the circular neighborhood around g_c based on Eq. (1):

LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c) 2^p    (1)

where s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise.

The parameter p is the index of the neighbor pixel, R is the radius of the circular neighborhood and P is the total number of neighbors. Figure 2 shows how a description of the local image texture around the center pixel g_c = 123 is modeled using an equally spaced circular neighborhood of P = 8 points and radius R = 1. While transforming a square neighborhood to a circular neighborhood, the diagonal pixels are interpolated.

Fig. 2 Example: computation of an LBP code for a pixel (g_c = 123, P = 8, R = 1). a Sampled points, b thresholding, c weights, d LBP code, giving LBP_{8,1}(123) = 248
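As a concrete reading of Eq. (1), the sketch below (our illustration, not the authors' implementation) computes the classic 8-neighbour LBP code of every interior pixel using the 3 × 3 neighbours directly; the bilinear interpolation of the diagonal samples mentioned above is omitted for brevity, and the particular bit ordering is an assumption. Libraries such as scikit-image's local_binary_pattern provide the interpolated circular variant.

import numpy as np

def lbp_3x3(gray):
    """Classic 8-neighbour LBP (Eq. 1 with P = 8, R = 1).

    The diagonal samples are taken directly from the 3x3 grid instead
    of being interpolated on the unit circle, which is the usual
    simplification of the circular operator.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                      # centre pixels
    # Neighbour offsets in the weight order 1, 2, 4, ..., 128 (assumed order).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for p, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code += ((neigh - c) >= 0).astype(np.int32) << p
    return code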
3.3 Noise filtration

Most image editing software provides an option for adding noise to a photograph. The motivation for adding noise may be either to hide problematic areas produced by awkward editing or to add realism to the manipulated image. A more experienced forger may also add noise to an image as an anti-forensic operation, given the fact that current CMF detectors fail to detect a CMF in images with high levels of added noise (Shelke and Prasad 2016).

To improve the chances of forgery detection, most methods pre-process the image by removing the high-frequency details and enhancing the image quality prior to the detection process. For example, in the work (Davarzani et al. 2013), the authors use the Wiener filter as a noise reduction technique and report a significant improvement in detection accuracy. Similarly, in the work (Pun and Chung 2018), the median filter has been used for noise removal. In the proposed method, we also noted significant improvements in detection performance by applying a Gaussian low pass filter to the input image before features are extracted.

3.4 Thresholding

Image thresholding is a widely used way to convert a grayscale image into a binary image. For example, one of the simple ways to convert a grayscale image into a binary image is to use a global threshold value (say T) and turn all pixels possessing values greater than T to one and all others to zero. This creates a binary image B(x, y) from an intensity image I(x, y) using the simple criterion:

B(x, y) = 1 if I(x, y) > T, and 0 otherwise.

Global thresholding is very fast. However, the manual selection of the threshold is its major disadvantage. In this paper, we convert a grayscale image block to a binary representation to enable the efficient application of CA (by reducing the number of states). This is done using a popular clustering-based automatic thresholding procedure developed by Otsu, which automatically finds the optimal threshold value for a given image (Otsu 1979).
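For reference, a compact sketch of Otsu's clustering-based threshold selection for 8-bit data is given below; this is a generic implementation of (Otsu 1979), not the authors' code, and is comparable to library routines such as skimage.filters.threshold_otsu.

import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale block by
    maximising the between-class variance over all candidate splits."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability
    mu = np.cumsum(prob * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def binarize_block(block):
    """Binary representation L of a grayscale block (Sect. 3.4)."""
    return (block > otsu_threshold(block)).astype(np.uint8)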

4 Proposed method

The proposed method for copy-move forgery detection and localization is outlined in Fig. 3.

Fig. 3 The main stages of the proposed method: the suspicious input image is converted to grayscale, Gaussian low-pass filtered, converted to an LBP texture image and divided into overlapping blocks; block features are extracted (thresholding and Cellular Automata) and matched, spurious matches are filtered out, and the result is localized/visualized

In the proposed method, the suspicious input image to be analyzed for a possible CMF goes through the following major steps: pre-processing, feature extraction, feature matching, and post-processing, which are described one by one as follows.

4.1 Preprocessing

In order to improve the efficiency of the analysis process, we first convert the input RGB image to a grayscale image (I) using the standard Eq. (2):
I = 0.2989 R + 0.5870 G + 0.1140 B    (2)

Next, we filter the grayscale image using a Gaussian filter. The Gaussian filter is used to reduce noise and compression artifacts in the input image.
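A minimal sketch of this pre-processing stage is shown below (Eq. (2) followed by Gaussian low-pass filtering). The filter width sigma is not reported in the paper, so the value used here is only a placeholder assumption.

import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(rgb):
    """Pre-processing of Sect. 4.1: grayscale conversion (Eq. 2)
    followed by Gaussian low-pass filtering."""
    rgb = rgb.astype(np.float64)
    gray = 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]
    return gaussian_filter(gray, sigma=1.0)  # sigma: assumed, not from the paper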
4.2 Feature extraction

For feature extraction, we first transform the grayscale image to the LBP texture image. The LBP texture image is then split into a set of overlapping blocks of size B × B pixels by moving horizontally or vertically, one column or one row at a time. Each block is identified by storing the coordinates of its center pixel. Using this scheme, the total number of blocks, X, for an M × N image with B × B block size equals X = (M − B + 1) × (N − B + 1).
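For example, for the 512 × 512 images of the dataset used later (Sect. 5) and the block size B = 11 selected in Sect. 6.1.1, this gives X = (512 − 11 + 1) × (512 − 11 + 1) = 502 × 502 = 252,004 overlapping blocks per image.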
To capture the description of a block's information content in a short and robust feature vector, we derive a five-dimensional feature vector for each block based on thresholding and Cellular Automata. Using this short feature vector also results in better memory usage and faster matching of the feature vectors later, during the matching phase of the algorithm. The feature vector formation process is described as follows (Sect. 4.2.1).

4.2.1 Feature vector formation

First, we threshold each B × B block obtained from the LBP texture image into a binary representation (say L) using the Otsu thresholding procedure (Otsu 1979). Thresholding is used to enable efficient feature extraction as it reduces the number of states per pixel from 256 in a grayscale image to two in a binary image.

Considering the binary pattern L as a Cellular Automaton, the rule derivation process from a given pattern is based on the following excerpt from Wolfram's book, A New Kind of Science: "Given a complete cellular automaton pattern, it is easy to deduce the rule which produced it. All one needs to do is to find an occurrence of each of the eight possible neighborhoods, and see what it produced" (Wolfram 2002, p. 1089).

However, most often a given pattern may not contain all the necessary information to derive an exact rule from it, so we need to find an approximation to the rule that produced the pattern. To derive an approximate rule, we define the neighborhood of an observed cell L(k, l) in a row k, k ∈ {1, 2, …, B}, to be the cells L(k, l−1), L(k+1, l), L(k, l+1), L(k−1, l), l ∈ {1, 2, …, B}. The cells at the boundary are processed using periodic boundary conditions. The state of an observed cell L(k, l) is seen as the result of applying a CA rule R to its neighbor cells. Also, let F^m, m = 0, 1, be the vectors in which the value at the ith position, F^m(i), i = 0, 1, 2, …, 15, gives the frequency of the neighborhood pattern i in L such that the observed cell value is m. The vectors F^m, m = 0, 1, are populated for the matrix L. The CA rule R that most likely produced the pattern is built by combining the most common outcomes for each of the sixteen different neighborhoods, that is,

R = \sum_{i=0}^{15} S(F^1(i) − F^0(i)) 2^i    (3)

where S(x) = 1 if x ≥ 0, and S(x) = 0 if x < 0.

The approximate rule so derived contains useful information about the neighborhood structure and can be used as a feature descriptor. This rule, although containing sufficient information about the frequency and spatial relationships of the different binary patterns present in the block, was found to yield a large feature space of matching vectors, especially in high-resolution images. Therefore, to increase the discriminative power of the feature vector, we use two more properties, the mean and variance, of the frequency distributions F^m, m = 0, 1.
Thus, we get a five-dimensional feature vector for each block, defined as

F = (R, m_0, v_0, m_1, v_1)

where R is the approximate rule for a block and m_0, v_0 and m_1, v_1 represent the mean and variance of the distributions F^0 and F^1, respectively. The derived features encode the statistical and structural information of the block and its texture and can be used as the feature descriptor.

To illustrate the rule derivation process with an example, consider the random pattern appearing on the left side in Fig. 4. In this pattern, gray and white squares are used to represent the cell values '0' and '1', respectively. Further, for this example, we define the neighborhood of a cell C(x, y) under consideration to be the cells C(x−1, y−1), C(x−1, y) and C(x−1, y+1) from the row above C(x, y). The next state of an observed cell C(x, y) is determined based on the probability assigned to each possible outcome (0 or 1). For instance, the neighborhood '111' appears 6 times, producing the outputs 0 and 1 four and two times respectively. So the probability that the next state of the cell will be '0' is 2/3 and that of '1' is 1/3. This is shown using the partially white cell under the neighborhood '111' on the right side in Fig. 4. If both probabilities are equal, then the odds that a particular neighborhood produces 0 or 1 are equal. Note, however, that if we use this scheme, then each rule has to be formed using eight real numbers instead of two binary digits. This increases the number of states and hence the rule space. A simple way of translating the probabilistic rule into a deterministic one is to assign either 0 or 1 to a neighborhood pattern depending on which one is produced with the maximum probability. This way a rule can be represented with only eight bits. The approximate rule so derived still contains useful information about the neighborhood structure.

Fig. 4 Illustration of the rule derivation process. Left: a random binary pattern (CA). Right: approximate rule giving the description of the different patterns in the given binary pattern. A white cell represents '1' and a gray cell represents '0'

After features are obtained for all blocks, each feature vector is stored in a feature matrix (FM) along with the identifier of the corresponding block. The matrix FM (of size X × 7) is built so that every feature vector becomes a row in it.
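Putting Sect. 4.2.1 together, the sketch below derives the approximate rule of Eq. (3) and the five-dimensional feature F for one binarised block. It is our reading of the text rather than the authors' code; in particular, the bit order used to index the sixteen neighborhood patterns is an assumption, since the paper does not spell it out.

import numpy as np

def block_feature(L):
    """Five-dimensional feature F = (R, m0, v0, m1, v1) of Sect. 4.2.1.

    L is the Otsu-binarised B x B block. The neighbourhood of cell
    (k, l) is the four cells (k, l-1), (k+1, l), (k, l+1), (k-1, l)
    with periodic boundary conditions, as in the text.
    """
    left  = np.roll(L,  1, axis=1)
    right = np.roll(L, -1, axis=1)
    down  = np.roll(L, -1, axis=0)
    up    = np.roll(L,  1, axis=0)
    code = (left << 3) | (down << 2) | (right << 1) | up   # patterns 0..15

    # F^m(i): how often neighbourhood pattern i occurs with centre value m.
    F0 = np.bincount(code[L == 0], minlength=16).astype(np.float64)
    F1 = np.bincount(code[L == 1], minlength=16).astype(np.float64)

    # Eq. (3): most common outcome per neighbourhood, packed into 16 bits.
    R = int(np.sum((F1 >= F0).astype(np.int64) << np.arange(16)))

    # Mean and variance of the two frequency distributions.
    return np.array([R, F0.mean(), F0.var(), F1.mean(), F1.var()])

In the proposed pipeline this function would be applied to the Otsu-binarised version of every overlapping B × B block of the LBP texture image.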

4.3 Feature matching and filtering

Two feature vectors may develop into potentially duplicated blocks if they are sufficiently similar. To efficiently search for the duplicated blocks, we lexicographically sort the list of features to obtain a sorted list, S. Next, each feature vector is matched with its k neighboring feature vectors from S. Any block pair B_ij and B_mn under consideration is designated as potentially forged if and only if the following conditions are satisfied between their feature vectors F_ij and F_mn:

• The difference between the two is less than the predefined difference threshold T_s. The difference is measured using the relative error between the two feature vectors F_ij and F_mn, as in Eq. (4):

|F_ij(k) − F_mn(k)| / max(F_ij(k), F_mn(k)) ≤ T_s,  k = 1, …, 5    (4)

• The Euclidean distance between the two blocks, determined using Eq. (5), is not less than T_d, where T_d is pre-determined:

\sqrt{(m − i)^2 + (n − j)^2} ≥ T_d    (5)

The condition in Eq. (5) is used to filter out false matches due to spatially close blocks. Spatially close blocks generate similar features due to the high correlation between them.
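A simplified sketch of this matching step is given below (ours, not the authors' implementation). T_s and T_d follow the values reported in Sect. 6.1.3; the number of sorted neighbours k examined per feature vector is not stated in the paper, so the default below is an assumption.

import numpy as np

def match_blocks(FM, k=10, Ts=0.10, Td=30):
    """Feature matching of Sect. 4.3.

    FM has one row per block: five feature values followed by the
    (row, col) coordinates of the block centre, i.e. X x 7.
    """
    order = np.lexsort(FM[:, :5].T[::-1])    # lexicographic sort on features
    S = FM[order]
    pairs = []
    for a in range(len(S)):
        for b in range(a + 1, min(a + 1 + k, len(S))):
            fa, fb = S[a, :5], S[b, :5]
            denom = np.maximum(fa, fb)       # features are non-negative
            denom[denom == 0] = 1.0          # avoid division by zero
            if np.all(np.abs(fa - fb) / denom <= Ts):          # Eq. (4)
                (i, j), (m, n) = S[a, 5:7], S[b, 5:7]
                if np.hypot(m - i, n - j) >= Td:                # Eq. (5)
                    pairs.append((int(i), int(j), int(m), int(n)))
    return pairs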
After this step, the identifiers of the blocks that are retained through the above steps are stored in a Result Matrix (RM).

4.4 Post-processing

In this step, we further inspect the matrix RM and focus on removing false matches (blocks that are detected as similar although they lie outside the copy-pasted regions). The procedure to remove false matches is based on computing the shift vector. The shift vector between each matched pair of blocks B_ij and B_mn stored in RM is computed using Eq. (6) and associated with each such block:

V = |(m − i), (n − j)|    (6)

Clearly, the matching blocks across the duplicated regions would be at the same relative position and therefore would generate the same shift. The counter of a shift vector is incremented each time two blocks generate the same shift. Therefore, the frequency of the shift vector of a block gives the number of matching block pairs that are at the same relative position across the duplicated regions. Next, only those matched blocks are considered forged whose shift count is greater than a predetermined threshold value, T_m. Note that this technique avoids false matches at the cost of falsely rejecting small duplicated regions.

Further, we remove any small falsely detected regions with an area of fewer than A = T_m pixels using a morphological area opening operation. The remaining blocks, if any, are assumed to belong to the copy-pasted regions.
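A sketch of this shift-vector filtering is shown below; it is our reading of Eq. (6), and the sign-insensitive treatment of the offsets is an interpretation of the |·| notation.

from collections import Counter

def filter_by_shift(pairs, Tm=100):
    """Post-processing of Sect. 4.4: keep only matched pairs whose
    shift vector occurs at least Tm times (Tm = 100 in Sect. 6.1.2)."""
    shifts = [(abs(m - i), abs(n - j)) for i, j, m, n in pairs]
    counts = Counter(shifts)
    return [p for p, s in zip(pairs, shifts) if counts[s] >= Tm]

The subsequent morphological area opening could be performed with, e.g., skimage.morphology.remove_small_objects on the binary detection map with min_size = T_m.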
4.5 Forgery localization (visualization)

To visually show where the duplicated blocks are located in the input image, our algorithm outputs a binary detection map in which the pixels corresponding to the blocks retained through the above steps are shown in white. Optionally, the algorithm outputs a copy of the input color image with the duplicated regions highlighted in bright red. The output binary detection map is also used to quantify the detection performance by comparing it with the reference map (mask) of the input image.

5 Datasets and performance measures

To test the performance of the proposed method, we selected all the images from a standard public dataset (Tralic et al. 2013). This dataset contains a wide variety of images of size 512 × 512, depicting both simple and complex scenes.

Further, to quantify the performance, we use the precision and recall measures as suggested in (Al-Qershi and Khoo 2018). We briefly define these metrics as follows. Let t_p be the number of true positive pixels, i.e. the number of pixels correctly detected as forged, f_p the number of false positive pixels, i.e. pixels erroneously detected as forged, and f_n the number of false negative pixels, i.e. falsely missed forged pixels. Then the precision (P) refers to the proportion of detected pixels which are truly forged, and the recall (R) refers to the proportion of truly forged pixels that were identified correctly. The F-measure (F) combines both precision and recall into a single parameter. That is,

P(%) = t_p / (t_p + f_p) × 100    (7)

R(%) = t_p / (t_p + f_n) × 100    (8)

F(%) = 2 · P · R / (P + R)    (9)

The given dataset provides the reference mask corresponding to each forged image, which lets us count t_p, f_p and f_n and precisely calculate the copy-moved pixels.
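Given the binary detection map of Sect. 4.5 and the dataset's reference mask, Eqs. (7)–(9) can be computed pixel-wise as in the short sketch below (an illustration, not the authors' evaluation code).

import numpy as np

def prf(detected, reference):
    """Pixel-level precision, recall and F-measure (Eqs. 7-9)."""
    detected = detected.astype(bool)
    reference = reference.astype(bool)
    tp = np.sum(detected & reference)
    fp = np.sum(detected & ~reference)
    fn = np.sum(~detected & reference)
    precision = 100.0 * tp / (tp + fp) if tp + fp else 0.0
    recall = 100.0 * tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f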

6 Experimental results and evaluations

In this study, three sets of experiments were performed. In the first set of experiments (Sect. 6.1), we empirically determine the different thresholds (B, T_m, T_s and T_d) used in the proposed method. The experiments in the second set (Sect. 6.2) are designed to find out the effectiveness and robustness of the proposed method in detecting and localizing forgeries (if any) in the input image under various manipulation scenarios. Finally, in the third set of experiments (Sect. 6.3), we compare and evaluate our method against seven other reference methods from the literature.

6.1 Threshold determination

The values of the different thresholds were determined empirically from the given dataset. The procedure used in each case follows.

6.1.1 Block size, B

Figure 5 shows the details of the experiment used to find an appropriate block size. For a set of 30 forged images affected with plain CMF (no intermediate or post-processing attacks), the average F-measure was obtained corresponding to each block size appearing along the horizontal axis in Fig. 5. The optimal F-measure (one which achieves a good balance between precision and recall) for the given dataset was observed around B = 11. So, we use the block size B = 11.
Fig. 5 Determination of block size (B): average F-measure (%) over 30 plain-CMF images for block sizes from 7 to 23

6.1.2 Minimum number of matches (T_m)

In the proposed method, the value of T_m is set to 100, which was found by running the algorithm with different settings for T_m between 25 and 200, in steps of 25. The objective was to optimize the average F-measure for a set of 30 forged images affected with AWGN at three variance levels, 10^-4, 10^-3 and 10^-2. Each data point in Fig. 7 shows the average F-measure recorded over these 30 images for a corresponding value of T_m. Observe from this figure that the average F-measure decreases on both sides of T_m = 100. The reason for this behavior is that for smaller values of T_m a large number of false matches are probably generated, and for larger values of T_m images containing small duplicated regions are falsely missed from detection.

Fig. 7 Determination of T_m: average F-measure over 30 noisy images for T_m between 25 and 200

Figure 6 visually shows the effect of the T_m threshold. Observe from this figure that the initial detection map generated by the algorithm for this image contains a large number of false positives. However, the false positives disappear after T_m and A are both set to 100.

Fig. 6 Effect of the T_m threshold. a Input CMF image. b Reference mask. c Initial detection map generated by the algorithm, containing a large number of false matches (false matches are all white pixels outside the duplicated regions). d Final result after setting T_m to 100

6.1.3 Difference threshold (T_s) and distance threshold (T_d)

The difference threshold is chosen to be 10% and was determined based on observation. During the experiments, it was observed that values higher than 10% result in a higher number of false matches, and values lower than this resulted in low accuracy. Further, to get around the false matches generated by spatially close blocks, a suitable distance threshold is needed. We use the distance threshold value T_d = 30, which means that duplicated blocks less than 30 pixels apart would evade being identified as forged. However, given the high resolution of natural images, this limitation becomes irrelevant.
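For convenience, the empirically determined values from this subsection and Sect. 4.4 can be collected in one place; the structure below simply summarises the reported settings and is not a configuration shipped with the method.

# Thresholds reported in Sect. 6.1 (CoMoFoD, 512 x 512 images).
PARAMS = {
    "B": 11,     # block size (Sect. 6.1.1)
    "Tm": 100,   # minimum shift-vector count (Sect. 6.1.2)
    "A": 100,    # minimum region area, set equal to Tm (Sect. 4.4)
    "Ts": 0.10,  # relative-difference threshold of Eq. (4), i.e. 10%
    "Td": 30,    # minimum spatial distance of Eq. (5), in pixels
}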
6.2 Detection results

In this section, we conducted experiments to examine the effectiveness and robustness of the proposed method in the following manipulation scenarios.

(a) Copy-move forgery with no intermediate or post-processing attacks, followed by saving the image in a lossless format (PNG).
(b) Copy-move forgery with no intermediate or post-processing attacks, followed by saving the image in the lossy JPEG format at different quality factors.
(c) Copy-move forgery accompanied by global post-processing manipulations (here "global" refers to applying the effect, such as blurring, noise addition, etc., to the entire image after the CMF manipulation).
(d) Copy-move forgery accompanied by intermediate manipulations, that is, when noise, brightness and contrast variations are applied to the copied content before it is pasted at the target place.

In the following subsections, we present the experiments and results for each of the above cases.

6.2.1 CMF in uncompressed (PNG) images

To test the effectiveness of the proposed method for locating duplicated regions in uncompressed images, we chose a sample of 20 test images from the given dataset. The images were chosen so that they are representative of real-world manipulations in terms of image content (textured and smooth) and the size and shape of the tampered areas. Table 1 shows the detection results (precision, recall, and F-measure) of the proposed algorithm on these test images. It can be noted from this table that the proposed method accurately locates the forged areas in all test images and achieves an average precision of almost 90% (few false positives). However, according to Table 1, some images (e.g. 012.png) have a lower precision value compared to others. It was observed that such images contain overly smooth regions and therefore some spurious matches outside the duplicated regions occurred. Further, the detection results obtained show that the proposed method successfully locates duplicated areas of different sizes and shapes. Figure 8 shows visual results for two cases of forgery: in the first case a small (approximately 0.6% of the image size), irregularly shaped region is duplicated, and in the second case the size of the duplicated region is approximately 2.5% of the image size.

Fig. 8 Proposed method's visual results. Column a: input CMF image, column b: reference mask showing duplicated areas, column c: output of the proposed method

Table 1 Results of the proposed method on a subset of images from the given dataset

Test image  Pre    Rec   F-measure   Test image  Pre   Rec   F-measure
002         88.4   74.9  81.1        023         86.7  87.7  87.2
006         89.2   91.9  90.5        024         95.2  90.9  93.0
007         86.2   88.8  87.5        025         87.8  88.4  88.1
009         96.5   86.7  91.3        027         92.9  93.9  93.4
011         100.0  47.6  64.5        028         90.8  93.1  91.9
012         69.2   95.4  80.2        029         92.6  92.6  92.6
013         99.3   92.7  95.9        032         91.8  89.2  90.5
015         84.6   83.6  84.1        034         92.3  94.9  93.6
018         92.3   69.0  79.0        036         85.3  92.4  88.7
022         87.0   89.1  88.0        040         92.6  96.7  94.6

6.2.2 CMF in compressed (JPEG) images

JPEG images use a lossy compression algorithm. If the image is saved at higher JPEG compression levels (lower quality factor) after a copy-move manipulation, detection becomes challenging because the compression process results in image degradation. Repeated resaves make the problem more severe because with each resave the amount of degradation increases, although not linearly. The first save at a given JPEG quality results in the most degradation, while subsequent resaves at the same quality level cause less degradation (Krawetz 2015).

To investigate the influence of JPEG compression on forgery detection and localization performance, we performed a controlled experiment in which ten copy-move forged images saved at five different JPEG compression levels: 90, 80, 70, 60 and 50 (a total of 50 images, 10 images per quality factor) were investigated thoroughly. The results of the experiment are shown in Table 2.
Table 2 Proposed method's results under JPEG compression (.JPG)

Test image   QF = 90       QF = 80       QF = 70       QF = 60       QF = 50
             Pre    Rec    Pre    Rec    Pre    Rec    Pre    Rec    Pre    Rec
006          79.7   79.0   74.0   70.5   72.9   51.6   66.9   60.3   72.4   45.5
012          75.5   66.2   69.3   63.0   61.7   80.0   58.7   63.3   71.45  39.2
015          93.7   34.2   100.0  36.7   0.0    0.0    0.0    0.0    0.0    0.0
018          77.2   94.7   9.80   94.7   80.3   95.2   80.8   99.0   83.2   95.3
023          87.8   77.8   91.1   78.5   88.0   66.7   90.8   48.0   95.3   46.7
024          71.3   59.7   95.4   50.2   70.6   44.7   89.1   30.7   98.2   27.1
028          94.1   65.8   94.5   61.6   96.8   49.7   97.8   46.0   97.7   40.8
029          91.5   81.3   95.6   78.1   94.3   62.7   84.9   58.7   96.1   57.8
034          92.5   81.6   80.0   75.4   78.6   67.4   98.3   54.1   43.8   59.4
040          99.6   45.9   99.1   53.2   95.5   72.0   99.4   62.3   0.0    0.0

In general, the results in Table 2 demonstrate that with increasing JPEG compression level (decreasing quality factor), both the average precision and recall performance decrease. However, it was observed that even if the JPEG compression level is kept constant, the performance varies significantly across different images. This shows that the detection performance depends on the characteristics of the image data (the contents of a specific image). For example, Fig. 9 shows detection results for three images, all saved at the same JPEG quality level 50. Observe from this figure that the detection is accurate in the first and second cases (row 1 and row 2), while only a few visible traces of forgery could be seen in the third case (row 3). The reason for this behavior is possibly the fact that the amount of compression itself depends upon the characteristics of the image data.

Fig. 9 Results of the proposed method for three forged images, all saved at the same JPEG quality level 50. Column a: input images, column b: reference masks, column c: result produced by our method. Results vary, even at the same quality level

From Table 2, it is clear that the proposed method successfully detects and localizes the forgeries in most JPEG images with satisfactory precision and recall. Note that the precision shows consistency, which means very few false positives are detected by the algorithm with decreasing quality factors. Also note from Table 2 that at JPEG quality factors lower than 70, recall starts to decrease rapidly because the number of matching blocks between copy-pasted regions decreases with increasing compression. Further, no traces of forgery could be located in some images (precision and recall are shown as 0 for them in the table) with increasing compression levels. Most of these images were observed to contain duplicated regions of smaller size compared to the others in the given set of images. This shows that forgeries involving larger duplicated regions have increased chances of successful detection.

6.2.3 CMF in post-processing scenarios

To conceal the traces of tampering and make a forgery look convincing, most photo fakes are complemented with post-processing operations like the addition of noise, blurring, brightness and contrast changes, etc. These operations often make it difficult to detect and localize forged areas. In this section, we present experiments and results that demonstrate the robustness of the method under noise, blurring, brightness and contrast change manipulations.

In the first case, we tested our method on images corrupted by mean-free Additive White Gaussian Noise (AWGN) at variance levels σ² = 0.0005, 0.005 and 0.009. Figure 10 shows an example image and the results of the proposed method for this image under different amounts of noise. Note from this figure that the duplicated regions are successfully located by the proposed method even when a large amount of noise (variance = 0.009) is added to the image.

In the second case, we tested our method on images affected with Gaussian blurring with three different settings for kernel size and standard deviation: (kernel size, standard deviation) = (5, 2), (5, 3) and (7, 4).
Figure 11 shows the results for an example image affected by different blurring amounts. Note that the detection is accurate in all cases.

In the third case, we tested our method on images affected with brightness and contrast changes. Brightness and contrast were varied using three intervals: (0.01, 0.95), (0.01, 0.9) and (0.01, 0.8). To produce the brightness effect, the input image is normalized by mapping the intensity values to the interval [0, 1]; next, intensity values below the lower bound and above the upper bound of an interval were set to 0 and 1, respectively. To produce the contrast effect, the output range was reduced by mapping the intensity values to these three intervals. The interval (0.01, 0.8) was observed to produce a significant change in image brightness (lighter images) and contrast (darker images). Figure 12 shows the visual results of the proposed method on two images affected with different amounts of brightness and contrast change.

Fig. 10 Results of the proposed method under different amounts of Gaussian noise

Fig. 11 Results of the proposed method when forged images are affected with Gaussian blur

Fig. 12 Results of the proposed method when forged images are subjected to brightness and contrast changes

All the experiments in this section demonstrate that the proposed method can detect forgeries in images distorted by different post-processing operations.
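The post-processing distortions used in this subsection can be reproduced with standard operations; the sketch below is only an assumption about the exact implementation (which the paper does not give) for an image normalised to [0, 1], using the noise variances, blur settings and intensity intervals stated above.

import numpy as np
from scipy.ndimage import gaussian_filter

def add_awgn(img, variance):
    """Mean-free AWGN, e.g. variance 0.0005, 0.005 or 0.009."""
    noisy = img + np.random.normal(0.0, np.sqrt(variance), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def gaussian_blur(img, kernel_size, sigma):
    """Gaussian blur; truncate is chosen so the kernel has the given size."""
    radius = (kernel_size - 1) / 2.0
    return gaussian_filter(img, sigma=sigma, truncate=radius / sigma)

def brightness_change(img, low, high):
    """Map intensities in [low, high] to [0, 1]; values outside the
    bounds saturate at 0 or 1 (the brightness effect described above)."""
    return np.clip((img - low) / (high - low), 0.0, 1.0)

def contrast_change(img, low, high):
    """Compress the output range from [0, 1] to [low, high]
    (the contrast effect described above)."""
    return low + img * (high - low)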

6.2.4 CMF with intermediate attacks

In a typical copy-move forgery, the copied content may be subjected to several intermediate manipulations, such as noise addition, brightness change, contrast change, rotation, scaling, etc., before it is pasted at the target place. In this experiment, we demonstrate the robustness of the proposed method in such cases. Figure 13 shows the visual detection results for an example image in which different intermediate attacks are applied to the copied part using Adobe Photoshop 7.0. To further generalize the results, we also programmatically performed intermediate attacks of Gaussian noise (variance = 10^-2), brightness change and contrast change (with the interval [0.01, 0.85]) on a copied region of 20 images for each attack.
The average F-measures recorded over these 20 images for the noise, brightness and contrast variations of the specified intervals are 82.32, 87.60 and 84.50, respectively. Note that the F-measure is satisfactory for these manipulations. However, it was observed that only copied regions rotated by small angles (less than 5 degrees) and scaled uniformly within 10% could be detected. Thus, the method shows limited robustness for detecting forgeries produced using arbitrary rotation angles and scaling factors.

Fig. 13 Detection results when brightness, contrast, noise, rotation, and scaling variations are applied to the copied region before pasting. Row 1 shows the input forged images and row 2 shows the results of the proposed method

6.3 Performance comparison with existing works

The experiments presented in Sect. 6.2 demonstrated the efficacy of the proposed technique. In this section, we compare our method with other state-of-the-art methods for JPEG and AWGN manipulations, because most block-based methods fail to show satisfactory performance in these two cases. We implemented the seven other reference methods proposed in the literature, details of which are shown in Table 3. The reference methods are implemented in MATLAB R2016, according to the details provided in the respective literature. We also present a run-time performance analysis of the tested methods in this section.

Table 3 Details of reference methods used

Method                  Feature extraction technique
Huang et al. (2011)     Discrete Cosine Transform (DCT)
Zhao and Guo (2013)     DCT and SVD
Li et al. (2013)        Local Binary Patterns (LBP)
Tralic et al. (2016)    Cellular Automata
Hayat and Qazi (2017)   DWT and DCT
Wang and Wang (2018)    DCT-based perceptual hash
Mahmood et al. (2018)   SWT and DCT
Proposed                LBP, thresholding and Cellular Automata

Fig. 14 Comparison of average precision for the JPEG image set (average precision (%) vs. JPEG quality factor, from 90 down to 50, for the proposed and reference methods)

Fig. 15 Comparison of average recall for the JPEG image set (average recall (%) vs. JPEG quality factor, from 90 down to 50, for the proposed and reference methods)

In the first comparison experiment, we tested all methods on 200 images saved at five different JPEG quality factors (40 images per quality factor). Figures 14 and 15 summarize the performance of the proposed and reference methods in terms of average precision and average recall, i.e., each data point in the figures represents an average over 40 images per quality factor. As can be seen from these figures, the proposed method achieves the best average precision and recall with respect to the other methods at all JPEG quality factors.
The average precision of the proposed method remains satisfactory even at all lower quality factors, whereas the other methods either fail to detect any forgery or generate a lot of false positives, due to which their precision drops rapidly. Also, note from Fig. 15 that with increasing compression levels, the recall curve of the proposed method shows a slow decline compared to the other methods. This shows that the proposed method is able to recover duplicated blocks in the largest number of cases.

In the second comparison experiment, we compare the performance of our method with the other methods when Additive White Gaussian Noise at variance levels 10^-5, 10^-4, 10^-3 and 10^-2 is added to 160 images (40 images per variance level). The behavior of the different methods in noisy environments is shown in Figs. 16 and 17, respectively. It can be noted from these figures that the proposed method takes the lead with respect to the other seven methods in terms of both average precision and recall. Whereas our method detects the copy-moved segments in the presence of increased amounts of Gaussian noise, the performance of the other methods drops rapidly as the noise level is increased beyond the variance level 10^-4. This shows that the proposed method's average performance against AWGN attacks is much more reliable relative to other methods in the literature.

Fig. 16 Comparison of average recall under AWGN of different amounts (average recall (%) vs. AWGN variance, from 10^-5 to 10^-2)

Fig. 17 Comparison of average precision under AWGN of different amounts (average precision (%) vs. AWGN variance, from 10^-5 to 10^-2)

Finally, the run-time of the proposed method has to be compared. The computation time depends on the time required to obtain the features, the size of the feature space (number of extracted feature vectors) and the dimensions of the feature vector itself. Therefore, the two major components that contribute to the running time are the feature extraction time and the feature matching time. Note that we ignore the block division time, the time needed to obtain the LBP image and the post-processing time, because these are insignificant compared to the feature extraction and matching time. Table 4 gives the average running time in seconds over a set of 100 CMF images of size 256 × 256. The running time is separated into feature extraction and matching time for a fair comparison. Among the tested methods, the average matching time of the proposed method is the lowest, because the proposed method uses only five-dimensional feature vectors (a low-dimensional feature vector improves matching performance) compared to other techniques that use high-dimensional features.

Table 4 Average running time (in seconds) per image of different methods

Method                  Feature extraction (s)   Feature matching (s)
Huang et al. (2011)     325                      25
Zhao and Guo (2013)     377                      67
Li et al. (2013)        335                      110
Tralic et al. (2016)    212                      123
Hayat and Qazi (2017)   245                      36
Wang and Wang (2018)    280                      90
Mahmood et al. (2018)   230                      29
Proposed                305                      16

7 Conclusions and perspectives

In this paper, we proposed a block-based method for the detection and localization of copy-move forgery in digital images. We presented a new feature extraction scheme in which a CA inversion procedure is applied to the binary representation of an image block to make the derived features robust to JPEG compression and noise attacks. Our results show that the proposed method is effective for locating copy-move forgeries in both uncompressed and compressed images and under different image manipulation circumstances. Further, the results show that the characteristics of the image data greatly influence detection and localization performance.
Also, as indicated by the experiments, the proposed method obtains the highest average precision and recall for locating forgeries in JPEG compressed images and in images corrupted by noise, relative to other state-of-the-art methods in the literature. Therefore, we believe that the proposed method may be helpful in a variety of image forgery detection applications, and our results may have implications for researchers working in related domains.

Two major limitations of existing block-based methods and of our proposed method are (1) the unattractive time complexity due to the overlapping block division and (2) the limited robustness to geometric transformations of the copied region (scaling and rotation). In the future, we plan to investigate solutions to these limitations. Some possible directions for improvement include the exploration of other CA rule models, such as Totalistic Cellular Automata (TCA) or non-linear rule models.

References

Al-Qershi OM, Khoo BE (2018) Evaluation of copy-move forgery detection: datasets and evaluation metrics. Multimed Tools Appl 77:31807–31833. https://doi.org/10.1007/s11042-018-6201-4
Amerini I, Ballan L, Caldelli R, Del Bimbo A, Serra G (2011) A SIFT-based forensic method for copy-move attack detection and transformation recovery. IEEE Trans Inf Forensics Secur 6:1099–1110
Angelov P, Sadeghi-Tehran P, Ramezani R (2011) An approach to automatic real-time novelty detection, object identification, and tracking in video streams based on recursive density estimation and evolving Takagi–Sugeno fuzzy systems. Int J Intell Syst 26:189–205. https://doi.org/10.1002/int.20462
Ardizzone E, Bruno A, Mazzola G (2015) Copy-move forgery detection by matching triangles of keypoints. IEEE Trans Inf Forensics Secur 10(10):2084–2094. https://doi.org/10.1109/TIFS.2015.2445742
Billings SA, Yang Y (2003) Identification of the neighborhood and CA rules from spatio-temporal CA patterns. IEEE Trans Syst Man Cybern Part B Cybern 33:332–339. https://doi.org/10.1109/TSMCB.2003.810438
Cao Y, Gao T, Fan L, Yang Q (2012) A robust detection algorithm for copy-move forgery in digital images. Forensic Sci Int 214:33–43. https://doi.org/10.1016/j.forsciint.2011.07.015
Christlein V, Riess C, Jordan J, Riess C, Angelopoulou E (2012) An evaluation of popular copy-move forgery detection approaches. IEEE Trans Inf Forensics Secur 7(6):1841–1854
Davarzani R, Yaghmaie K, Mozaffari S, Tapak M (2013) Copy-move forgery detection using multiresolution local binary patterns. Forensic Sci Int 231:61–72. https://doi.org/10.1016/j.forsciint.2013.04.023
Fridrich J, Soukal D, Lukáš J (2003) Detection of copy-move forgery in digital images. Int J Comput Sci Issues. https://doi.org/10.1109/PACIIA.2008.240
Hayat K, Qazi T (2017) Forgery detection in digital images via discrete wavelet and discrete cosine transforms. Comput Electr Eng. https://doi.org/10.1016/j.compeleceng.2017.03.013
Huang Y, Lu W, Sun W, Long D (2011) Improved DCT-based detection of copy-move forgery in images. Forensic Sci Int 206:178–184. https://doi.org/10.1016/j.forsciint.2010.08.001
Jeelani Z, Qadir F (2018) Cellular automata-based approach for digital image scrambling. Int J Intell Comput Cybern 11:353–370
Jeelani Z, Qadir F (2019) Cellular automata-based approach for salt-and-pepper noise filtration. J King Saud Univ Comput Inf Sci. https://doi.org/10.1016/j.jksuci.2018.12.006
Krawetz N (2015) Digital photo forensics. In: Handbook of digital imaging. https://doi.org/10.1002/9781118798706.hdi044
Lee JC, Chang CP, Chen WK (2015) Detection of copy-move image forgery using histogram of orientated gradients. Inf Sci (NY) 321:250–262. https://doi.org/10.1016/j.ins.2015.03.009
Li L, Li S, Zhu H, Chu S-C, Roddick JF, Pan J-S (2013) An efficient scheme for detecting copy-move forged images by local binary patterns. J Inf Hiding Multimed Signal Process 4:46–56
Lin X, Li JH, Wang SL, Liew AWC, Cheng F, Huang XS (2018) Recent advances in passive digital image security forensics: a brief review. Engineering. https://doi.org/10.1016/j.eng.2018.02.008
Mahmood T, Mehmood Z, Shah M, Saba T (2018) A robust technique for copy-move forgery detection and localization in digital images via stationary wavelet and discrete cosine transform. J Vis Commun Image Represent 53:202–214. https://doi.org/10.1016/j.jvcir.2018.03.015
Mehta R, Egiazarian K (2016) Dominant rotated local binary patterns (DRLBP) for texture classification. Pattern Recognit Lett 71:16–22. https://doi.org/10.1016/j.patrec.2015.11.019
Nightingale SJ, Wade KA, Watson DG (2017) Can people identify original and manipulated photos of real-world scenes? Cogn Res Princ Implic 2:30. https://doi.org/10.1186/s41235-017-0067-2
Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9:62–66. https://doi.org/10.1109/TSMC.1979.4310076
Pun CM, Chung JL (2018) A two-stage localization for copy-move forgery detection. Inf Sci (NY) 463–464:33–55. https://doi.org/10.1016/j.ins.2018.06.040
Qadir F, Shoosha IQ (2018) Cellular automata-based efficient method for the removal of high-density impulsive noise from digital images. Int J Inf Technol 10:529–536. https://doi.org/10.1007/s41870-018-0166-4
Qadir F, Peer MA, Khan KA (2013) Digital image scrambling based on two dimensional cellular automata. Int J Comput Netw Inf Secur 5:36–41. https://doi.org/10.5815/ijcnis.2013.02.05
Rosin PL (2010) Image processing using 3-state cellular automata. Comput Vis Image Underst. https://doi.org/10.1016/j.cviu.2010.02.005
Ryu SJ, Kirchner M, Lee MJ, Lee HK (2013) Rotation invariant localization of duplicated image regions based on Zernike moments. IEEE Trans Inf Forensics Secur 8:1355–1370. https://doi.org/10.1109/TIFS.2013.2272377
Shelke PM, Prasad RS (2016) Improving JPEG image anti-forensics. 1:1. https://doi.org/10.1145/2905055.2905134
Sun X, Rosin PL, Martin RR (2011) Fast rule identification and neighborhood selection for cellular automata. IEEE Trans Syst Man Cybern Part B Cybern. https://doi.org/10.1109/TSMCB.2010.2091271
Tralic D, Zupancic I, Grgic M (2013) New database for copy-move forgery detection—CoMoFoD. In: 55th International Symposium ELMAR, pp 49–54
Tralic D, Grgic S, Sun X, Rosin PL (2016) Combining cellular automata and local binary patterns for copy-move forgery detection. Multimed Tools Appl 75:16881–16903. https://doi.org/10.1007/s11042-015-2961-2
Wang H, Wang H (2018) Perceptual hashing-based image copy-move forgery detection. Secur Commun Netw 2018:6853696. https://doi.org/10.1155/2018/6853696
Wen B, Zhu Y, Subramanian R, Ng TT, Shen X, Winkler S (2016) COVERAGE—a novel database for copy-move forgery detection. In: Proceedings—International Conference on Image Processing, ICIP, pp 161–165. https://doi.org/10.1109/ICIP.2016.7532339
Wolfram S (2002) A new kind of science. Wolfram Media
Xu B, Wang J, Liu G, Dai Y (2010) Image copy-move forgery detection based on SURF. In: Proc 2010 2nd Int Conf Multimed Inf Netw Secur, MINES 2010, pp 889–892. https://doi.org/10.1109/MINES.2010.189
Zhao J, Guo J (2013) Passive forensics for copy-move image forgery using a method based on DCT and SVD. Forensic Sci Int 233:158–166. https://doi.org/10.1016/j.forsciint.2013.09.013
Zhou X, Angelov P (2007) Autonomous visual self-localization in completely unknown environment using evolving fuzzy rule-based classifier. In: Proceedings of the 2007 IEEE symposium on computational intelligence in security and defense applications (CISDA 2007). IEEE, Honolulu, pp 131–138. https://doi.org/10.1109/CISDA.2007.368145

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.