RECOLORED IMAGE DETECTION USING CNN
A Project Report submitted in partial fulfillment of the requirements for the award of the degree of
Bachelor of Technology in Computer Science & Engineering
By
N. VYSHNAVI (183T1A0572)    S. SAILOHITHA (183T1A0594)
Under the guidance of
Mr. P. KIRAN RAO
Assistant Professor
July – 2021
RAVINDRA COLLEGE OF ENGINEERING FOR WOMEN
(Approved by AICTE – New Delhi, Affiliated to JNTUA, Anantapur)
Kurnool-518002, Andhra Pradesh
CERTIFICATE
RAVINDRA COLLEGE OF ENGINEERING FOR WOMEN
(Approved by AICTE – New Delhi, Affiliated to JNTUA, Anantapur)
Kurnool-518002, Andhra Pradesh
DECLARATION
We hereby declare that the project entitled "RECOLORED IMAGE DETECTION USING CNN", submitted by us to the Department of Computer Science and Engineering, Ravindra College of Engineering for Women, Kurnool, in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering, is a record of bona fide work carried out by us under the supervision of Mr. P. Kiran Rao, Assistant Professor, Department of Computer Science and Engineering. We further declare that the work reported in this project has not been submitted, and will not be submitted, either in part or in full, for the award of any other degree or diploma of this institute or any other institute or university.
ACKNOWLEDGEMENTS
We would like to thank the Management and Chairman, Sri G. V. M. Mohan Kumar, for giving us the opportunity to work on this project and for supporting us throughout the period of work at Ravindra College of Engineering for Women, Kurnool.
We would like to thank Mr. K. Mahesh Babu, Professor & HOD, Department of Computer Science and Engineering, for his support and encouragement in completing our project.
Finally, we wish to take this opportunity to express our deep gratitude to our family members and all the people who have extended their cooperation in various ways during our project work. It is our pleasure to acknowledge the help of all those individuals.
TABLE OF CONTENTS
Chapter  Title
Title Page
Certificate
Declaration
Acknowledgement
Contents
List of Figures
List of Abbreviations
Abstract
1  Introduction
2  System Analysis
   Existing System
   Disadvantages
   Proposed System
   Advantages
   System Requirements
3  System Design
   Architecture Diagram
   Component Diagram
   ER Diagram
   Use Case Diagram
   Class Diagram
   Data Flow Diagram
   Activity Diagram
   Sequence Diagram
4  Technologies Used
   Python Programming Language
5  System Testing
   Unit Testing
   Integration Testing
   Functional Testing
   System Test
   White Box Testing
   Black Box Testing
   Unit Testing
6  Implementation
   Input Design
   Output Design
   Modules
7  Algorithms
   Structural Similarity
8  Sample Code
9  Conclusion
10  References
LIST OF FIGURES
Figure No.  Title
Figure 1  Kinds of Machine Learning
Figure 2  Architecture of Prediction System
Figure 3  Data Flow Diagram
Figure 4  Class Diagram
Figure 5  Use Case Diagram
Figure 6  Sequence Diagram
Figure 7  Collaboration Diagram
LIST OF ABBREVIATIONS
CNN Convolutional Neural Network
SSIM Structural Similarity Index Measure
ECG Electrocardiogram
UML Unified Modeling Language
BA Bat Algorithm
BBA Binary Bat Algorithm
WOA Whale Optimization Algorithm
ABC Artificial Bee Colony
GA Genetic Algorithm
ABSTRACT
Near-duplicate image detection requires matching slightly altered images to the original image, which helps in the detection of forged images. A great deal of effort has been dedicated to visual applications that need efficient image similarity metrics and signatures. Digital images can be easily edited and manipulated owing to the great functionality of image processing software. This leads to the challenge of matching somewhat altered images to their originals, which is termed near-duplicate image detection. This report discusses the literature reviewed on the development of several image matching algorithms. Image recoloring is a technique that can transfer image color or theme and produce a change that is imperceptible to human eyes. Although image recoloring is one of the most important image manipulation techniques, there is no special method designed for detecting this kind of forgery. In this work, we propose a trainable end-to-end system for distinguishing recolored images from natural images. The proposed network takes the original image and two derived inputs, based on the illumination consistency and inter-channel correlation of the original input, into consideration and outputs the probability that the image is recolored. Our algorithm adopts a CNN-based deep architecture, which consists of three feature extraction blocks and a feature fusion module. To train the deep neural network, we synthesize a dataset comprising recolored images and corresponding ground truth using different recoloring methods. Extensive experimental results on recolored images generated by various methods show that our proposed network generalizes well and is highly robust.
CHAPTER 1
INTRODUCTION
Duplicate image detection is performed by matching two different images, and this matching process helps in detecting forged images. There are several visual applications, and these applications need efficient image similarity signatures and image similarity metrics. In the current market there are many image processing software packages that can easily edit and manipulate an original digital image. One of the most common image manipulation processes is image recoloring, and no special method has been designed to detect this kind of forgery.
Because digital images can be edited and manipulated so easily, we face the challenge of matching altered images to their originals, which is known as near-duplicate image detection. This report discusses the literature reviewed on the development of several image matching algorithms. Image recoloring is a technique that changes or modifies the color or theme of an original image while producing a change that is imperceptible to human eyes.
In this work we propose an end-to-end system that distinguishes a recolored image from a natural image. The proposed model takes the original image together with two derived inputs; based on the inter-channel correlation and illumination consistency of the original image, it outputs the probability that the image is recolored. The proposed system uses a CNN: our algorithm adopts a CNN-based architecture, which consists of three feature extraction blocks and a feature fusion module.
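To make the three-branch idea concrete, the following sketch shows how such an architecture could be wired up with Keras. This is only an illustration under assumptions: the layer sizes, the 224×224 input shape, and the use of TensorFlow/Keras are not specified in this report and are chosen here for clarity.

import tensorflow as tf
from tensorflow.keras import layers, Model

def feature_block(inp, name):
    # small convolutional feature extractor for one input branch (sizes are illustrative)
    x = layers.Conv2D(32, 3, activation="relu", padding="same", name=name + "_conv1")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same", name=name + "_conv2")(x)
    return layers.GlobalAveragePooling2D()(x)

# three input branches: original RGB image plus the two derived evidence inputs
original = layers.Input(shape=(224, 224, 3), name="original")
diff_imgs = layers.Input(shape=(224, 224, 3), name="difference_images")
illum_map = layers.Input(shape=(224, 224, 3), name="illuminant_map")

# feature fusion module: concatenate branch features, then a small classifier head
fused = layers.Concatenate()([feature_block(original, "ori"),
                              feature_block(diff_imgs, "di"),
                              feature_block(illum_map, "im")])
fused = layers.Dense(128, activation="relu")(fused)
prob = layers.Dense(1, activation="sigmoid", name="recolored_probability")(fused)

model = Model(inputs=[original, diff_imgs, illum_map], outputs=prob)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])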
Digital image forgery deals with digital images. The process of creating a fake image has become remarkably simple with the introduction of powerful computer graphics editing software such as Adobe Photoshop, GIMP, and Corel Paint Shop, some of which are available for free. There are many cases of digital image forgery, and all of them can be categorized into three major groups based on the process involved in creating the fake image. Recognizing this process allows us to identify a forged image.
Fig:1.Architecture
To train the neural network, we synthesize a dataset comprising recolored images and their corresponding originals, generated using several recoloring methods. Extensive experiments on recolored images produced by various methods show that the proposed model generalizes well and is robust.
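As a hedged illustration of how recolored training samples might be synthesized, the sketch below applies a simplified variant of the Reinhard et al. color transfer [1] (the original method works in the lαβ space; Lab is used here as a common approximation) with OpenCV and NumPy. The file names are placeholders, and this is only one of several possible recoloring methods.

import cv2
import numpy as np

def reinhard_color_transfer(source_bgr, target_bgr):
    # transfer the global color statistics of the target image onto the source image
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype("float32")
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype("float32")
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    # match each channel's mean and standard deviation to the target image
    result = (src - src_mean) / (src_std + 1e-6) * tgt_std + tgt_mean
    result = np.clip(result, 0, 255).astype("uint8")
    return cv2.cvtColor(result, cv2.COLOR_LAB2BGR)

# example usage with placeholder file names
natural = cv2.imread("natural.jpg")
reference = cv2.imread("reference.jpg")
recolored = reinhard_color_transfer(natural, reference)
cv2.imwrite("recolored.jpg", recolored)   # stored as a "recolored / forged" training sample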
CHAPTER 2
SYSTEM ANALYSIS
EXISTING SYSTEM
Forgery detection methods intend to verify the authenticity of images and can be broadly classified into two classes: active authentication and passive authentication. In active authentication techniques, data hiding is employed: some codes are embedded into the images during generation, and these codes are later used to verify the originality of the image. Active authentication methods can be further classified into two types: digital signatures and digital watermarking. Watermarking embeds watermarks into images at the time of image acquisition, while digital signature methods embed some secondary information, extracted from the images at the acquisition end, into the images. A lot of work has been proposed in both digital watermarking and digital signatures. For example, two image authentication algorithms have been proposed that embed an image digest, based on an error-diffusion halftoning technique, into the image in the Integer Wavelet Transform domain and the Discrete Cosine Transform domain, respectively. Lu et al. construct a structural digital signature using image content information in the wavelet transform domain for image authentication. The main drawback of these approaches remains that the codes must be inserted at the time of recording, which limits them to specially equipped digital cameras. In addition, prior information is necessary for the authentication process.
PROPOSED SYSTEM
Existing forgery detection methods adopt description techniques to combine the information attained by evidence estimators. However, every description technique has its own limitations and drawbacks. Recently, CNNs have shown explosive popularity in image classification and other computer vision tasks. Traditional neural networks employ the original image in RGB channels as the input, since it contains information about the picture such as color and structural features. In this work, we use three feature extractors and a feature fusion module to learn forgery-relevant features. Like traditional neural networks, we adopt the original image as one of the input branches. Additionally, we derive the difference images (DIs) and the illuminant map (IM) as two pieces of evidence for recolored image detection, based on the observation that images may not maintain their inter-channel correlation or illuminant consistency after the recoloring process. These two pieces of evidence are employed as two additional input branches together with the original image; the network architecture is shown in the architecture diagram in Chapter 3. Since the learned features are obtained through a data-driven approach, they are able to describe the intrinsic properties of forgery formation and help distinguish the authenticity of an image. After extracting forgery-relevant features, we use a feature fusion network to refine these features and output the probability of authenticity. Based on this premise, we evaluate the proposed algorithm on forged images generated by various color transfer methods and on images collected through the Internet.
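The report does not spell out the exact derivation of the DI and IM evidence inputs, so the sketch below only approximates the idea: inter-channel difference images, and a crude per-block illuminant estimate under the grey-world assumption, computed with NumPy and OpenCV. The formulas, block size, and file path are assumptions made for illustration.

import cv2
import numpy as np

def difference_images(bgr):
    # inter-channel difference images: one plane per pair of color channels
    b, g, r = cv2.split(bgr.astype("float32"))
    return np.dstack([r - g, g - b, b - r])

def greyworld_illuminant_map(bgr, block=16):
    # crude per-block illuminant estimate under the grey-world assumption
    h, w = bgr.shape[:2]
    out = np.zeros_like(bgr, dtype="float32")
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = bgr[y:y + block, x:x + block].astype("float32")
            mean = patch.mean(axis=(0, 1)) + 1e-6
            out[y:y + block, x:x + block] = mean / mean.sum()   # normalized illuminant color
    return out

image = cv2.imread("input.jpg")            # placeholder path
di = difference_images(image)              # evidence branch 1
im = greyworld_illuminant_map(image)       # evidence branch 2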
ADVANTAGES:
• This is the first attempt to distinguish recolored images from natural images.
• We analyze the inter-channel correlation and illumination consistency of natural images, which may not hold after a color transfer operation. Based on these two properties, we propose a deep discriminative model for recoloring detection.
• We generate a large-scale, high-quality dataset for training the proposed network and create a benchmark dataset consisting of 100 skillfully recolored images and the corresponding 100 original photographs for testing.
REQUIREMENT SPECIFICATION
Functional Requirements
A graphical user interface for interaction with the user.
Software Requirements
For developing the application, the following are the software requirements:
1. Python
2. Django
3. Windows XP / Windows 8
Hardware Requirements
For developing the application, the following are the hardware requirements:
Processor: Pentium IV or higher
RAM: 256 MB
Space on Hard Disk: minimum 512 MB
CHAPTER 3
SYSTEM DESIGN
ARCHITECTURE DIAGRAM:
Fig:3.Architecture diagram
The architecture takes the original image together with the two derived evidence inputs (the difference images and the illuminant map) as three input branches, extracts features from each branch, fuses them in a feature fusion module, and outputs the probability that the input image is recolored. The network is trained on a synthesized dataset of recolored images and their originals produced by several recoloring methods.
COMPONENT DIAGRAM:
A component diagram, also known as a UML component diagram, describes the organization
and wiring of the physical components in a system. Component diagrams are often drawn to help
model implementation details and double-check that every aspect of the system's required functions is
covered by planned development.
Fig:4.Component Diagram
ER DIAGRAM:
An entity relationship diagram (ERD), also known as an entity relationship model, is a graphical
representation that depicts relationships among people, objects, places, concepts or events within an
information technology (IT) system. Database design: ER diagrams are used to model and design
relational databases, in terms of logic and business rules (in a logical data model) and in terms of the specific
technology to be implemented (in a physical data model.)
Fig:5.ER Diagram
USE CASE DIAGRAM:
A use case diagram is a way to summarize details of a system and the users within that system. It is generally shown as a graphic depiction of interactions among different elements in a system.
CLASS DIAGRAM:
A class diagram is an illustration of the relationships and source-code dependencies among classes in the Unified Modeling Language (UML). In this context, a class defines the methods and variables of an object, which is a specific entity in a program or the unit of code representing that entity.
Fig:7.Class Diagram
DATA FLOW DIAGRAM:
A data flow diagram (DFD) maps out the flow of information for any process or system. It uses defined symbols like rectangles, circles and arrows, plus short text labels, to show data inputs, outputs, storage points and the routes between each destination.
ACTIVITY DIAGRAM:
Fig:9.Activity Diagram
SEQUENCE DIAGRAM:
Fig:10.Sequence Diagram
CHAPTER 4
TECHNOLOGIES USED
PYTHON
It provides rich data types and an easier-to-read syntax than many other programming languages.
It is a platform-independent scripting language with full access to operating system APIs.
Compared to other programming languages, it allows more run-time flexibility.
CONVOLUTIONAL NEURAL NETWORKS (CNN)
Convolutional neural networks are neural networks that are mostly used in image classification, object detection, face recognition, self-driving cars, robotics, neural style transfer, video recognition, recommendation systems, etc.
CNN classification takes an input image, finds patterns in it, processes them, and classifies the image into various categories such as car, animal, bottle, etc. CNNs are also used in unsupervised learning for clustering images by similarity. It is a very interesting and complex algorithm, which is driving the future of technology.
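To make the classification idea concrete, here is a minimal, hedged Keras sketch of a CNN that classifies images into a few categories. The input size, layer sizes, and the three example class labels are illustrative assumptions, not this project's actual network.

import tensorflow as tf
from tensorflow.keras import layers, models

# tiny CNN: convolution + pooling layers followed by a dense classifier head
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),   # e.g. car / animal / bottle
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()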
CHAPTER 5
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
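A minimal sketch of how a unit test for this project might look, assuming a hypothetical helper compare_images() that returns a similarity score in [0, 1] (1.0 meaning identical). The helper, module layout, and thresholds are illustrative, not part of the project's actual code.

import unittest
import numpy as np

def compare_images(image_a, image_b):
    # hypothetical helper: 1.0 when the arrays are identical, lower otherwise
    diff = np.abs(image_a.astype("float32") - image_b.astype("float32"))
    return 1.0 - float(diff.mean()) / 255.0

class CompareImagesTest(unittest.TestCase):
    def test_identical_images_score_one(self):
        img = np.full((8, 8, 3), 128, dtype="uint8")
        self.assertAlmostEqual(compare_images(img, img), 1.0)

    def test_modified_image_scores_lower(self):
        img = np.full((8, 8, 3), 128, dtype="uint8")
        recolored = img.copy()
        recolored[:, :, 2] = 255   # shift one color channel
        self.assertLess(compare_images(img, recolored), 1.0)

if __name__ == "__main__":
    unittest.main()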
Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as specified by the
business and technical requirements, system documentation, and user manuals.
System Test
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions and flows,
emphasizing pre-driven process links and integration points.
White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle,
although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
Features to be tested
CHAPTER 6
IMPLEMENTATION
INPUT DESIGN:
The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. Input can be achieved either by having the computer read data from a written or printed document or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
• What data should be given as input?
• How should the data be arranged or coded?
• The dialog to guide the operating personnel in providing input.
• Methods for preparing input validations and steps to follow when errors occur.
OBJECTIVES:
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.
3. When data is entered, it is checked for validity (a small example of such a check follows below). Data can be entered with the help of screens, and appropriate messages are provided as and when needed, so that the user is never left confused. Thus the objective of input design is to create an input layout that is easy to follow.
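As a hedged example of the validity check mentioned in objective 3, the snippet below verifies that a user-supplied image path exists and can actually be decoded before it enters the processing pipeline. The function name, messages, and placeholder paths are illustrative assumptions.

import os
import cv2

def load_image_or_fail(path):
    # validate the user's input before it enters the processing pipeline
    if not os.path.isfile(path):
        raise ValueError("File does not exist: {}".format(path))
    image = cv2.imread(path)
    if image is None:
        raise ValueError("File is not a readable image: {}".format(path))
    return image

# example: both inputs are validated before the comparison step
original = load_image_or_fail("original.jpg")   # placeholder paths
suspect = load_image_or_fail("suspect.jpg")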
OUTPUT DESIGN:
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, as well as the hard-copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use and effective. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create documents, reports, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of these objectives.
MODULES:
Visual descriptors give statistics about an image. A good descriptor allows us to discriminate between similar and dissimilar images. Note that the notion of similarity highly depends on the application: for instance, similarity means "visually consistent images" in the framework of image retrieval, while it signifies "visually nearly identical" in duplicate detection. There exist many published surveys on image description; the reader can refer to surveys centered around image description for content-based image retrieval applications. In the following, four types of low-level image descriptors are presented.
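One simple low-level descriptor of the kind discussed above is a color histogram. The hedged sketch below computes normalized 3-D color histograms with OpenCV and compares them with the correlation metric; the file names and bin count are placeholders, and this is only an illustrative descriptor, not one prescribed by this report.

import cv2

def color_histogram(image_bgr, bins=32):
    # 3-D color histogram over the B, G and R channels, normalized and flattened
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

imageA = cv2.imread("imageA.jpg")   # placeholder paths
imageB = cv2.imread("imageB.jpg")
similarity = cv2.compareHist(color_histogram(imageA), color_histogram(imageB),
                             cv2.HISTCMP_CORREL)
print("histogram similarity:", similarity)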
DUPLICATE DETECTION
Duplicate detection is a task that aims at detecting the duplicates of an original image. Consequently, it is first necessary to define what a duplicate is. In short, a duplicate is a transformed version of an original artwork that keeps a similar visual value. In other words, 'being a duplicate' is a pairwise equivalence relationship that links the original to any of its variations through a transformation operation, for example compression, brightness changes or cropping. By extension, if an image A is a duplicate of another image B, and yet another image C is a duplicate of image B, then image C is in turn a duplicate of image A. Finally, the task of duplicate detection can be expressed as follows: duplicate detection aims at detecting all the duplicates of a particular image among a collection of images. Or, in a simplified form, duplicate detection's goal is to determine whether two given images are duplicates of each other or unrelated to each other.
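A common lightweight heuristic for deciding whether two images are duplicates is a perceptual "average hash". The NumPy/OpenCV sketch below is one such heuristic, offered only as an illustration; the hash size, distance threshold, and file names are assumptions and this is not the specific method used in this project.

import cv2
import numpy as np

def average_hash(image_bgr, size=8):
    # shrink to size x size grayscale and threshold against the mean brightness
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def is_duplicate(image_a, image_b, max_distance=5):
    # Hamming distance between the two hashes; a small distance suggests duplicates
    distance = int(np.count_nonzero(average_hash(image_a) != average_hash(image_b)))
    return distance <= max_distance

imgA = cv2.imread("imageA.jpg")   # placeholder paths
imgB = cv2.imread("imageB.jpg")
print("duplicates?", is_duplicate(imgA, imgB))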
Human visual attention is enhanced through a process of competing interactions among neurons representing all of the stimuli present in the visual field. The competition results in the selection of a few points of attention and the suppression of irrelevant material. In this context of visual attention, we argue that humans are able to spot anomalies in a single image, or similarity between two images, through a competitive comparison mechanism in which dissimilar and similar regions are identified and scored by means of a new similarity measure. The comparison is a flexible and dynamic procedure which does not depend on a particular feature space that may be thought to exist in a general image database.
Grayscale is the collection or the range of monochromic (gray) shades, ranging from pure white on the
lightest end to pure black on the opposite end. Grayscale only contains luminance (brightness)
information and no color information; that is why maximum luminance is white and zero luminance is
black; everything in between is a shade of gray. That is why grayscale images contain only shades of
gray and no color.
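As a small illustration of this, converting a color image to grayscale keeps only the luminance, commonly computed as a weighted sum of the channels; the sketch below uses the standard ITU-R BT.601 weights (the same convention OpenCV's built-in conversion follows) on a placeholder input path.

import cv2
import numpy as np

image = cv2.imread("input.jpg")            # placeholder path; OpenCV loads channels as B, G, R
b, g, r = cv2.split(image.astype("float32"))
# luminance = 0.299 R + 0.587 G + 0.114 B; no color information remains
gray = (0.299 * r + 0.587 * g + 0.114 * b).astype("uint8")
# equivalent built-in conversion
gray_cv = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)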
SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations are involved in the feasibility analysis:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
CHAPTER 7
ALGORITHMS
STRUCTURAL SIMILARITY (SSIM)
The structural similarity (SSIM) index differs from other techniques mentioned previously, such as MSE or PSNR, in that those approaches estimate absolute errors; SSIM, on the other hand, is a perception-based model that considers image degradation as perceived change in structural information, while also incorporating important perceptual phenomena, including both luminance masking and contrast masking terms. Structural information is the idea that the pixels have strong inter-dependencies, especially when they are spatially close. These dependencies carry important information about the structure of the objects in the visual scene. Luminance masking is a phenomenon whereby image distortions (in this context) tend to be less visible in bright regions, while contrast masking is a phenomenon whereby distortions become less visible where there is significant activity or "texture" in the image.
The SSIM index is calculated on various windows of an image. The measure between two windows x and y of common size N×N depends on the local means μx and μy, the standard deviations σx and σy, and the cross-covariance σxy of the two windows [4]. If α = β = γ = 1 (the default for the exponents) and C3 = C2/2 (the default selection of C3), the index simplifies to:
SSIM(x, y) = ((2·μx·μy + C1)(2·σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2))
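For illustration, this simplified (single-window, global) form of the index can be computed directly with NumPy, as in the sketch below; the constants use the conventional choices C1 = (0.01·L)² and C2 = (0.03·L)². A practical implementation, such as the scikit-image function used in the source code that follows, instead slides an N×N window over the image and averages the local scores.

import numpy as np

def ssim_global(x, y, data_range=255.0):
    # simplified SSIM over a single window, following the formula above
    x = x.astype("float64")
    y = y.astype("float64")
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()                  # these are the variances (sigma squared)
    sigma_xy = ((x - mu_x) * (y - mu_y)).mean()      # cross-covariance
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den

# identical windows give exactly 1.0
print(ssim_global(np.full((4, 4), 100.0), np.full((4, 4), 100.0)))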
Source Code:
import argparse
import cv2
from skimage.metrics import structural_similarity
# parse the paths of the two images and load them
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--first", required=True)
ap.add_argument("-s", "--second", required=True)
args = vars(ap.parse_args())
imageA = cv2.imread(args["first"])
imageB = cv2.imread(args["second"])
# SSIM between the grayscale images (1.0 means identical) and the per-pixel difference image
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
(score, diff) = structural_similarity(grayA, grayB, full=True)
diff = (diff * 255).astype("uint8")
val1 = format(score)
print("VALUE (1.0) IS ORIGINAL OR FORGERY, WHAT'S YOURS ?: {}".format(val1))
# threshold the difference image and draw a box around every region that changed
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cnts, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in cnts:
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imshow("Original", imageA)
cv2.imshow("Modified", imageB)
cv2.imshow("Diff", diff)
cv2.imshow("Thresh", thresh)
cv2.waitKey(0)
Output:
Fig:12.Original image
Fig:13.Modified image
CONCLUSION
In this work, we presented a novel deep learning approach for recolored image detection. Both the inter-channel correlation and the illumination consistency are employed to help the feature extraction. We elaborated the design principles of our network and systematically validated their rationality by running a number of experiments. Furthermore, two recolored datasets with different sources were created, and the high performance of our network demonstrates the effectiveness of the model. We hope our simple yet effective network will serve as a solid baseline and help future research in recolored image detection. Our future work will focus on designing a more effective network architecture and searching for high-level cues for better discrimination.
REFERENCES
[1] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Computer Graphics and Applications, vol. 21, no. 5, pp. 34–41, 2001.
[2] S. Beigpour and J. van de Weijer, "Object recoloring based on intrinsic image estimation," in ICCV, 2010; F. Pitie, A. C. Kokaram, and R. Dahyot, "Automated colour grading using colour distribution transfer," Computer Vision and Image Understanding, pp. 123–137, 2007.
[3] H. Chang, O. Fried, Y. Liu, S. DiVerdi, and A. Finkelstein, "Palette-based photo recoloring," ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 34, no. 4, 2015.
[4] M. P. Rao, A. N. Rajagopalan, and G. Seetharaman, "Harnessing motion blur to unveil splicing," IEEE Transactions on Information Forensics and Security, vol. 9, no. 4, pp. 583–595, 2014.
[5] G. Muhammad, M. Hussain, and G. Bebis, "Passive copy move image forgery detection using undecimated dyadic wavelet transform," Digital Investigation, vol. 9, no. 1, pp. 49–57, 2012.
[6] G. Cao, Y. Zhao, R. Ni, and X. Li, "Contrast enhancement-based forensics in digital images," IEEE Transactions on Information Forensics and Security, vol. 9, no. 3, pp. 515–525, 2014.
[7] X. Pan and S. Lyu, "Region duplication detection using image feature matching," IEEE Transactions on Information Forensics and Security, vol. 5, no. 4, pp. 857–867, 2010.
[8] X. Zhao, J. Li, S. Li, and S. Wang, "Detecting digital image splicing in chroma spaces," in Digital Watermarking – International Workshop, 2010, pp. 12–22.