
A Mini Project Report on

Design and develop a tool for digital forensic of images

Submitted by

Apeksha Gangurde(Roll No: 74)


Sakshi Rahane (Roll No: 53)
Karuna Pawar (Roll No.: 50)

submitted in partial fulfillment of the requirements for the award of the degree of

Bachelor

in

COMPUTER ENGINEERING

For Academic Year 2023-2024

Under the guidance of


Prof. Kiran Kulkarni

DEPARTMENT OF COMPUTER ENGINEERING

MET’s Institute of Engineering Bhujbal Knowledge City


Adgaon, Nashik-422003
Certificate
This is to Certify that

Apeksha Gangurde (Roll No: 74)


Sakshi Rahane (Roll No.: 53)
Karuna Pawar (Roll No.: 50)

have completed the necessary Cyber Security and Digital Forensics Mini Project and prepared the report on

Design and develop a tool for digital forensic of images

in a satisfactory manner, in partial fulfillment of the requirements for the award of the degree of Bachelor of Computer Engineering in the Academic Year 2023-2024.

Project Guide H.O.D Principal


Prof. Kiran Kulkarni Dr. M. U. Kharat. Dr. V. P. Wani
Acknowledgements

Every piece of work requires support from many people and areas. It gives us proud privilege to have completed the Cyber Security and Digital Forensics Mini Project Report under the valuable guidance and encouragement of our guide, Prof. Kiran Kulkarni.

We are also extremely grateful to our respected H.O.D., Dr. M. U. Kharat, for providing all the facilities and support needed for the smooth progress of our Mini Project.

Finally, we would like to thank all the staff members and fellow students who directly or indirectly supported us, without whom the Mini Project work could not have been completed successfully.

by

Apeksha Gangurde (Roll No: 74)


Sakshi Rahane (Roll No.: 53)
Karuna Pawar (Roll No.: 50)
Contents

1. Introduction

2. Problem Statement

3. Objectives

4. Motivation

5. Methodology

6. Proposed System
7. Result
8. Conclusion
9. Recommendation
10. References
1. Introduction

In today’s world, photography has become almost everyone’s hobby. This is a result of advances in technology that have made handy, pocket-sized digital cameras available at affordable prices, especially the cameras built into mobile phones. The high potential of visual media and the ease with which images are captured, distributed and stored mean that they are widely used to convey information. While people enjoy this efficiency of information exchange, the security and trustworthiness of digital images have become a crucial issue because of the ease of malicious processing, for instance embedding secret messages for covert communication or altering the origin and content of images with popular image-editing software. Such malicious usage can give rise to serious problems if the images are exploited by terrorist organizations, treated as evidence in court, or published by mass media for information dissemination (Xu, 2017). There is a saying that “a picture is worth a thousand words”, yet in recent years this trust in pictures has been eroded by the availability of advanced image-editing software that requires little or no prior training, making image manipulation easy. Modern photo editors and advanced editing techniques make it extremely easy to manipulate original images in such a way that the alterations are impossible for an untrained eye to catch and can even escape the scrutiny of experienced editors at reputable news media. Even the eye of a highly competent forensic expert can miss certain signs of a fake image, potentially allowing forged (altered) images to be accepted as court evidence. As digital technology advances, the need to authenticate digital images, validate their content and detect forgeries becomes inevitable.

Before the advent of computers, photo manipulation was carried out with techniques such as double exposure, piecing photos together, retouching with ink or paint, scratching, and Polaroids. Airbrushes were also used, whence the term “airbrushing” for manipulation. In the early days of photography the technology was not as advanced and efficient as it is now; the results are similar to digital manipulation but were harder to create (Photo Manipulation, 2014). Watermarking provides copyright protection for intellectual property in digital format (Murty et al., 2011). As explained by Lyatskaya (2006), watermarking is a kind of steganography developed especially for authentication. According to Tao (2014), the basic characteristics of a digital watermark are imperceptibility, capacity, robustness, the false-positive rate of the watermarking algorithm, and the security of the hiding place.
2. Problem Statement

In today's digital age, images play a significant role in various aspects of our lives, including
social media, business, and personal communication. However, with the increasing use of
images, there is also a rise in malicious activities such as image manipulation, forgery, and
inappropriate content distribution. Detecting these digital manipulations is crucial for
maintaining trust and ensuring the authenticity of images in legal, journalistic, and personal
contexts. Our aim, therefore, is to design and develop an efficient and user-friendly tool for the digital forensics of images.
3. Research Objectives

The specific objectives of this research work are to:


• (a) design a forensic tool to identify altered images using Exchangeable Image File Format (EXIF) metadata and Discrete Cosine Transform (DCT) coefficients; and
• (b) implement the design in (a).
4. Motivation

The rate at which fake images are being used as evidence in courts of law, accepted for news publication, submitted for insurance claims, and relied upon for medical diagnosis and treatment, especially in this era of social media, is alarming. Having examined related work on digital image forgery detection tools, it is pertinent to develop an encompassing tool that detects altered images accurately and assists forensic experts in carrying out their investigative role. Motivation comes particularly from the research conducted by Yang et al. (2018), Taimori et al. (2016), Popescu (2004), Dong et al. (2011), and Kee et al. (2011). Building on this earlier work, this research intends to develop a hybridized digital image tool for forensic analysis.
5. Methodology

• CONCEPTUAL FRAMEWORK

The conceptual framework for this research work is presented in Figure 3.1. As explained in chapter one, the framework integrates two approaches to identifying a tampered image: examining its EXIF parameters and detecting the effect of double compression using Discrete Cosine Transform (DCT) coefficients. The image under analysis is first subjected to EXIF analysis with a view to identifying discrepancies in the EXIF metadata parameters extracted from it. From the extracted data, a camera signature is formed and compared with known signatures of major cameras in order to identify the source camera. The image is then passed to another algorithm that detects the presence of double JPEG compression. The results of the two analyses are passed to a logic circuit for further analysis before the final decision is made, as sketched below.
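The following is a minimal sketch of how the decision stage could merge the two branches. The four output labels mirror the categories used later in this report, but the combination rule and function name are illustrative assumptions, not the exact logic of Figure 3.1.

```python
# Minimal sketch of the decision logic merging the two analysis branches.
# The combination rule below is an assumption for illustration only.

def combine_verdicts(exif_consistent: bool, double_compressed: bool) -> str:
    """Merge the EXIF-signature result and the double-compression result."""
    if not exif_consistent and double_compressed:
        return "The image is edited"
    if double_compressed:
        return "The image may be edited"
    if not exif_consistent:
        return "The image may be original (EXIF inconsistent)"
    return "The image is likely to be original"

if __name__ == "__main__":
    # Example: EXIF looks consistent but double compression was detected.
    print(combine_verdicts(exif_consistent=True, double_compressed=True))
```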
6. Proposed System
FEATURE EXTRACTION
The feature extraction phase is divided into four modules, as shown in Figure 3.2. The first point of contact is the JPEG header file. At the JPEG Header File Content Extractor module, the information stored in the JPEG header, namely the image dimensions, quantization tables, Huffman codes, EXIF metadata and thumbnail, is extracted. The extracted data is then used to decode the JPEG file, as illustrated in the sketch below.
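As an illustration, the image dimensions, quantization tables and EXIF tags can be read from the JPEG header with the Pillow library. This is a hedged sketch assuming Pillow is installed and the input is a JPEG; the function name and the file name "suspect.jpg" are hypothetical, not the tool's actual extractor module.

```python
# Minimal header-extraction sketch using Pillow (assumed dependency).
from PIL import Image
from PIL.ExifTags import TAGS

def extract_header_features(path: str) -> dict:
    """Pull image dimensions, quantization tables and EXIF tags from a JPEG file."""
    img = Image.open(path)
    features = {
        "dimensions": img.size,            # (width, height)
        "quantization": img.quantization,  # dict: table id -> 64 values (zig-zag order), JPEG only
    }
    exif = img.getexif()
    # Map numeric EXIF tag ids to readable names, e.g. 'Make', 'Model', 'Software'.
    features["exif"] = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return features

if __name__ == "__main__":
    info = extract_header_features("suspect.jpg")  # hypothetical input file
    print(info["dimensions"], list(info["quantization"].keys()))
```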
EXIF ANALYZER
The EXIF analyzer phase deals with the analysis of the metadata extracted in the JPEG header file extraction phase. Each camera has unique features that can be used to authenticate the images it produces; together these features form what is referred to as the camera signature. The purpose of the EXIF analyzer is to extract these features and compare them with a database of known cameras in order to validate or authenticate the image in question. The image dimensions, quantization tables and Huffman codes form the first three components of the camera signature. Image dimensions distinguish cameras with different sensor resolutions and are specified as the minimum and maximum image dimensions in order to compensate for portrait and landscape orientation. Each 8x8 block of the three colour channels has its own quantization table, specified as a one-dimensional array totaling 192 values arranged in the order luminance (Y), chrominance (Cb) and chrominance (Cr). The Huffman codes are also extracted as six sets of 15 values corresponding to code lengths 1, 2, ..., 15, since each colour channel requires two codes. In all, a total of 284 values are extracted from a full-resolution image: 2 from the image dimensions, 192 from the quantization tables and 90 from the Huffman codes.

The next three components of the camera signature are extracted from the image thumbnail, a small replica of the full-resolution image. Not every camera manufacturer embeds a thumbnail in the header file; where one is absent this is noted and a value assigned to it, which is itself treated as a distinguishing feature of the camera. For images that do have a thumbnail, a further 284 values are extracted in the same way as for the full-resolution image. The last component of the camera signature is extracted from the image's EXIF metadata. This metadata, found in the JPEG header file, contains useful information about the image and the source camera. It is organized into five image file directories (IFDs): Primary, Exif, Interoperability, Thumbnail and GPS. Camera manufacturers are free to add further entries, since the standard allows them to embed additional information in each image file directory. All entries in the five IFDs are counted; we counted the additional information embedded by each manufacturer and noted those that did not embed any. A sketch of how such a signature can be assembled and compared is given below.
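For illustration, a camera signature can be assembled as a flat tuple of header values and matched against a small database of known signatures. The sketch below follows the description above only loosely (dimensions, quantization values, EXIF entry count); the helper names, the reduced signature layout and the empty example database are assumptions, not the tool's actual implementation or real camera data.

```python
# Hypothetical camera-signature matcher (assumed structure and names).
from PIL import Image

def build_signature(path: str) -> tuple:
    """Build a simplified camera signature from the JPEG header.

    A reduced version of the 284-value signature described above:
    image dimensions, flattened quantization tables, and EXIF entry count.
    """
    img = Image.open(path)
    dims = img.size
    # Flatten all quantization tables (each is 64 values in zig-zag order).
    quant = tuple(v for table in sorted(img.quantization) for v in img.quantization[table])
    exif_entry_count = len(img.getexif())
    return (dims, quant, exif_entry_count)

# Example database of known signatures; entries here are placeholders, not real camera data.
KNOWN_SIGNATURES = {
    # "Make Model": ((width, height), (q0, q1, ..., qN), exif_entry_count),
}

def match_camera(path: str) -> list:
    """Return the makes/models whose stored signature matches the image's signature."""
    sig = build_signature(path)
    return [name for name, known in KNOWN_SIGNATURES.items() if known == sig]
```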
JPEG COMPRESSION ANALYZER

Details of image compression, the effect of double JPEG compression, and the decompression stages have been discussed in chapters two and three respectively. In this phase we discuss the detection of double JPEG compression. The stages involved in detecting double quantization (double compression) are detailed in Figure 3.8. The first step after decompression is to compute the histogram of the image's DCT coefficients. An image is said to be double compressed, or double quantized, if it is first compressed with a quality factor Q1 and then compressed again with another quality factor Q2. Given a set of DCT coefficients known to have been double quantized with step b followed by step a, the double quantization introduces periodic artifacts in the DCT coefficient histogram with specific peak patterns. The periodic pattern can be estimated by taking a signal drawn from a uniform distribution, double quantizing it with the same steps (b followed by a), computing the Fourier transform of the double-quantized signal's histogram and finding the peak locations. The periodicity of the DCT coefficient histogram is then obtained by averaging the energy values in the frequency domain at those peak locations. Denote by h(x) the histogram of the double-quantized DCT coefficients and by h_u(x) the histogram of the double-quantized uniformly distributed signal; the averaged energy of the Fourier transform of h(x) at the peak locations predicted from h_u(x) provides a quantitative measure for detecting the periodic artifacts introduced in the DCT coefficients. The result of this phase is categorized into four outcomes: (a) the image is edited (B1); (b) the image may be edited (B2); (c) the image may be original (B3); and (d) the image is likely to be original (B4), where B1, B2, B3 and B4 denote variables that store the result of this phase and are later used for further analysis. A sketch of the histogram-periodicity computation is given below.
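As an illustration of the histogram step, the sketch below collects block-wise DCT coefficients for one frequency, builds their histogram, and measures the energy of the histogram's Fourier spectrum. It assumes NumPy and SciPy are available; restricting the check to a single (1,1) DCT frequency and the simple energy ratio are simplifying assumptions, not the tool's exact algorithm.

```python
# Sketch: double-quantization periodicity check on one DCT frequency (assumed simplifications).
import numpy as np
from scipy.fftpack import dct

def blockwise_dct_coeffs(gray: np.ndarray, freq=(1, 1)) -> np.ndarray:
    """Collect one DCT coefficient from every 8x8 block of a grayscale image."""
    h, w = gray.shape
    coeffs = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = gray[y:y + 8, x:x + 8].astype(float) - 128.0
            d = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
            coeffs.append(d[freq])
    return np.asarray(coeffs)

def periodicity_score(coeffs: np.ndarray, num_bins: int = 128) -> float:
    """Relative high-frequency energy of the coefficient histogram's spectrum.

    Pronounced peaks away from DC suggest the periodic artifacts of double quantization.
    """
    hist, _ = np.histogram(np.round(coeffs), bins=num_bins)
    spectrum = np.abs(np.fft.rfft(hist - hist.mean()))
    return float(spectrum[1:].mean() / (spectrum.sum() + 1e-9))  # assumed quantitative measure
```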
7. Result
The outputs from the two analyses are displayed on the screen. They are simple, clear and precise enough for even a novice to understand, and are depicted in Figures 4.2 through 4.10. Figures 4.2 and 4.3 show samples of the EXIF metadata retrieved from an image under investigation. From these we can clearly see the camera make and model, the software used, the camera orientation, the resolution, and the date and time when the image was taken and when it was digitized. This metadata is used by the EXIF Analyzer phase for forensic analysis. Figure 4.4 shows the quantization table retrieved from the image, Figure 4.5 shows the Huffman codes used to encode the image, and Figure 4.6 shows the thumbnail metadata extracted. All of these are extracted from the JPEG header file and combined to form the camera signature. Figure 4.7 depicts the camera signature and lists all camera makes and models that share that signature. Figures 4.8, 4.9 and 4.10 show samples of the final result (output) generated after processing.
Figure: Bar chart showing the accuracy of the Compression Analyzer approach.
8. Conclusion

In this research, a new tool to aid forensic experts in the discharge of their duties has been designed and implemented. Nowadays, image manipulation is carried out not only by experts but also by people with little or no knowledge of photo editing, thanks to modern, sophisticated and easy-to-use photo-editing software. This research presented two approaches that help detect image manipulation which may be difficult for the human eye to spot. The hybridized tool combines the metadata extracted from an image under suspicion with an analysis of the image's statistical data in order to detect evidence of manipulation. The results presented show that the new tool will be of great benefit to forensic experts, lawyers and judges of competent courts of law, medical personnel, photojournalists and news editors, and other categories of people involved in investigations.
9. Recommendation

The major limitation of this project is its inability to locate the tampered region; it is important to pinpoint exactly where the manipulation occurred, and further research should address this. Also, because of time constraints, only a few camera signatures were collected, representing a small fraction of the available camera signature databases; interested researchers may wish to add more signatures to the database. Finally, researchers should look at the possibility of detecting double compression when the first quality factor is reused to compress the image the second time, i.e. QF1 = QF2.
10. References
1. Abhishek, K., Rajesh, Singh, P., Megha, A. & Hariom, G. (2017). An Evaluation of Digital Image Forgery Detection Approaches. arXiv:1703.09968v2 [cs.MM], 30 March 2017.

2. Agarwal, V. & Mane, V. (2016). Reflective SIFT for Improving the Detection of Copy-Move Image Forgery. IEEE Computer Society, 2016.
