
Deep Learning Neural Network Based Old Photograph Restoration

Dr. G. Chitra Ganapathi¹, Asmitha G², Kavipriya D³
¹Professor, ²UG Scholar, ³UG Scholar,
Computer Science and Engineering,
CMS College of Engineering and Technology, Coimbatore

Abstract:
Historical images captured in old photos hold valuable information, but many of these photos today
suffer from various levels of damage. While digital processing can help restore these old photos, the
process involves addressing multiple aspects of image degradation. Currently, there is no standardized
approach for repairing the different types of damage found in old photos. The field of photo
restoration technology is continuously evolving. Traditional methods rely on mathematical formulas
or thermal diffusion to repair missing areas in images, but they are limited to simple structures and
small damaged areas, making them impractical for everyday use. The introduction of deep learning
technology has revolutionized image restoration research, particularly in the context of old photo
restoration. This article explores the use of deep neural networks for enhancing the restoration of old
photos. It delves into the background significance of image restoration methods, presents a model
based on deep neural networks, and explains its structure, principles, and loss function. A comparative
experiment was conducted to evaluate the model's performance, revealing that it outperformed other
algorithms in both blur and damage repair experiments. The algorithm demonstrated higher peak
signal-to-noise ratio (PSNR) and structural similarity (SSIM) values, making it the most effective of
the compared methods for image restoration.

Keywords:
Image restoration, degradation, deep neural networks, peak signal-to-noise ratio.

1. Introduction:
Photos serve as a means of preserving memories, capturing significant moments in people's lives. In
the past, individuals could only store a limited number of printed photos in albums or frames, leading
to issues such as yellowing and tearing over time. However, with advancements in image restoration
technology, old photos can now be digitally restored on a computer, ensuring that precious memories
are preserved for future generations. Deep learning, a subset of machine learning, utilizes artificial
neural networks to analyze and understand data. This technology has been increasingly utilized in
computer vision tasks, including enhancing image restoration algorithms. Techniques such as deep
convolutional neural networks and generative adversarial networks have proven to be effective in
restoring old photos by extracting features and generating images. As a result, more people are turning
to deep neural networks for the restoration of aged photographs. While traditional methods focused on
filling in missing details in photos, deep neural networks offer a more comprehensive solution by
leveraging high-level models to imagine and reconstruct missing parts. This innovative approach not
only saves time and effort but also produces superior results in repairing damaged images and
eliminating unwanted elements. The paper introduces a novel image repair model based on
convolutional neural networks and generative adversarial networks, showcasing improvements in
image blur and damage repair compared to existing algorithms.

2. Literature Review:
The opening paragraph of the literature review section discusses the significance and difficulties
involved in restoring old photos. It highlights how old photographs hold historical and emotional
value, acting as tangible reminders of personal memories, cultural heritage, and past events. [4]
Deep neural networks are sophisticated computational models that draw inspiration from the structure
and functionality of the human brain. These networks are comprised of numerous layers of
interconnected artificial neurons, enabling them to effectively learn intricate patterns and relationships
within data. It also delves into the difficulties that come with training deep neural networks, including
issues like overfitting, the necessity of ample computational power, and the need for substantial
annotated datasets. [2]
The article thoroughly examines past research papers, publications, and projects that have delved into
this subject. It also evaluates the strengths and weaknesses of these established techniques, offering
valuable insights into how well they work, their efficiency, and reliability. Additionally, it sheds light
on any notable progress or innovations made in the field of old photo restoration through the use of
deep neural networks in recent times. [5]
The mathematical foundation of the evaluation metrics, and how they are interpreted in the context of
image restoration, is crucial. PSNR measures the ratio between the highest achievable signal power
and the noise introduced by the restoration process. SSIM, by contrast, assesses the structural
likeness between the original and restored images, taking luminance, contrast, and structural details
into account. Perceptual metrics such as perceptual quality assessment (PQA) aim to capture human
perception and subjective visual quality. [1]
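As a rough illustration of how these two metrics are computed in practice, the following sketch uses scikit-image; the library choice and function names are assumptions, since the reviewed works do not specify an implementation.

# A rough sketch of the two metrics, using scikit-image (an assumed
# library choice; the reviewed works do not specify an implementation).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_restoration(reference: np.ndarray, restored: np.ndarray):
    """Compare a restored image against its clean reference (uint8 arrays)."""
    # PSNR: ratio of the maximum possible pixel value to the mean
    # squared error introduced by restoration, in decibels.
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    # SSIM: similarity in luminance, contrast, and structure; 1.0 = identical.
    ssim = structural_similarity(reference, restored,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim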
Obtaining large-scale, annotated datasets of degraded old photos can be difficult, unlike with modern
datasets. This challenge can impact the performance and generalization abilities of deep neural
networks. [7] The literature further addresses overfitting, a problem where models become too closely
fitted to the training data and struggle to generalize to new images, as well as the computational
complexity that comes with using deep neural networks. [3]
This section provides a brief overview and points out areas where further research is needed in the
field of old photo restoration through the application of deep neural networks. [6] It starts by outlining
the key discoveries and impacts of previous studies, emphasizing the methods, algorithms, and
structures of deep neural networks utilized in the restoration of old photographs. [8] It showcases the
project's distinct contributions and underscores its relevance and significance within the existing
literature.[9][10]

3. System Analysis:

3.1 Existing System:


The current method used to restore old photo images relies on traditional image processing techniques
and algorithms to tackle the deterioration problems often seen in aged photos. The process begins
with pre-processing, which includes actions such as cropping, resizing, and converting the format to
ready the image for restoration. Following this, noise reduction methods are employed to lessen the
grainy textures and random pixel fluctuations commonly found in old photos. Techniques like median
filtering, Gaussian filtering, and wavelet de-noising are commonly used to reduce noise while
retaining crucial image details.
Following noise reduction, contrast enhancement techniques are used to enhance the overall contrast
and dynamic range of the image. This process aims to improve the visibility of fine details and
enhance the visual quality of the restored photo. Commonly employed techniques include histogram
equalization, contrast stretching, and adaptive contrast enhancement. Additionally, methods to address
blurriness in old photos are implemented, utilizing deblurring algorithms such as inverse filtering,
Wiener deconvolution, or blind deconvolution. These algorithms work to restore sharpness and clarity
in blurred areas by reversing the effects of blurring factors like motion, defocus, or camera shake.
Nevertheless, it should be emphasized that the current system's efficiency greatly depends on manual
interventions, subjective adjustments, and predefined filters. It may struggle to accurately detect
intricate degradation patterns and could demand substantial expertise and time-consuming efforts to
produce satisfactory outcomes.
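A hedged sketch of such a traditional pipeline is shown below; the filter sizes are illustrative assumptions, and SciPy's adaptive Wiener filter stands in for full Wiener deconvolution, which would additionally require an estimate of the blur kernel.

# A hedged sketch of the traditional pipeline (assumed filter sizes;
# SciPy's adaptive Wiener filter stands in for full Wiener deconvolution,
# which would also need a blur-kernel estimate).
import cv2
import numpy as np
from scipy.signal import wiener

def traditional_restore(path: str) -> np.ndarray:
    # Pre-processing: load the scanned photo as a grayscale image.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Noise reduction: 3x3 median filter suppresses grain and speckle.
    denoised = cv2.medianBlur(img, 3)
    # Contrast enhancement: histogram equalization widens dynamic range.
    equalized = cv2.equalizeHist(denoised)
    # Mild deblurring/denoising: adaptive Wiener filtering on a 5x5 window.
    deblurred = wiener(equalized.astype(np.float64), (5, 5))
    return np.clip(deblurred, 0, 255).astype(np.uint8)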
3.2 Limitations of Existing System
3.2.1 Manual Intervention and Subjectivity:
The current system depends heavily on manual input and subjective adjustments made by either the
user or an expert. Choosing the right filters, fine-tuning parameters, and adjusting restoration
techniques often demand a high level of skill and can be quite time-consuming. This subjective aspect
brings in variability and could lead to inconsistent results in the restoration process.
3.2.2 Limited Adaptability:
The current system's traditional image processing methods are usually tailored to tackle particular
forms of deterioration or are based on specific degradation models. Consequently, they may face
challenges when dealing with the wide range of intricate degradation patterns found in aged
photographs. These methods might lack the adaptability needed to adjust to different levels of
deterioration, various types of damage, or changes in image attributes.

3.2.3 Difficulty in Preserving Fine Details:

Challenges may arise in maintaining intricate details in the current system when restoring images.
Despite attempts to improve clarity and sharpness, there is a possibility of oversimplification or
excessive smoothing, which could result in the omission of crucial details and textures. This drawback
may lead to restorations that are visually unappealing and less accurate.
3.2.4 Lack of Learning and Adaptation:
The current system does not possess the capability to learn from examples, unlike deep neural
networks that can adapt and learn from extensive datasets. Traditional methods lack a learning
element that enables them to effectively generalize to unfamiliar or intricate degradation patterns.
Consequently, the quality of restoration may be constrained, especially when working with rare or
severely degraded vintage photographs.
3.2.5 Insufficient Handling of Color Restoration:
The current system might face challenges when it comes to accurately bringing back the colors in old
photographs. Basic color correction methods are commonly used, but they might not be able to truly
represent the original colors, resulting in unnatural or inconsistent outcomes. To address color
deterioration in old photos, like fading or discoloration, more advanced techniques are needed to
achieve authentic color restoration.
3.2.6 Limited Preservation of Image Content:
When old photos have missing or damaged areas, the current system may struggle to effectively
restore or fill in these regions. Traditional inpainting methods often make use of assumptions or
heuristics, which do not consistently yield visually convincing or precise outcomes. Balancing the
preservation of the image's original content with the task of filling in missing areas poses a challenge
for the current system.
3.3 Proposed System
The new system utilizes advanced deep neural networks that are tailored for the restoration of images,
enabling it to autonomously identify and capture detailed features from deteriorated vintage
photographs. In contrast to the current system's reliance on manual adjustments and preset filters, the
proposed system gains insights from a vast collection of labeled old photos to comprehend the
intricate degradation patterns and restoration needs. At the heart of this innovative system lies the
sophisticated deep neural network structure. Different types of neural networks like convolutional
neural networks (CNNs), generative adversarial networks (GANs), and recurrent neural networks
(RNNs) can be employed based on the particular restoration task. These neural networks undergo
training with sophisticated optimization algorithms to enhance the restoration process and produce
top-notch restored images. The new system tackles the drawbacks of the current system by
automating the restoration procedure, thereby decreasing the need for manual interventions and
subjective adjustments. This results in a more effective and consistent restoration process. The
advanced deep neural networks are able to effectively handle various forms of degradation such as
noise, blur, fading, and physical damage, offering a comprehensive and flexible solution.
Additionally, the system is particularly skilled at maintaining intricate details while restoring images.
By leveraging the complex feature extraction abilities and learned representations of deep neural
networks, it can precisely recover the fine details and textures found in aged photographs. This results
in visually appealing and authentic restorations that closely mirror the original images. Furthermore,
the system has the capability for interactive and user-guided restoration, providing additional potential
for customization.

Users are able to give input, specify their restoration preferences, or make adjustments to the
restoration process in order to achieve personalized results. This high level of interaction and
customization enhances the user experience and ensures that the restoration outcome meets the user's
expectations. The proposed system offers a more advanced and automated approach to restoring old
photo images. By utilizing deep neural networks, it is able to overcome the limitations of the current
system, providing a more precise, efficient, and adaptable solution. This system has the potential to
transform the field of old photo restoration by generating high-quality restorations with minimal
manual intervention, while also preserving the intricate details and textures that give these photos
their historical significance.
3.4 Advantages of Proposed Systems
3.4.1 Automatic Restoration:
The proposed system offers a significant benefit in its capacity to automate the restoration process. In
contrast to the current system, which heavily depends on manual interventions and subjective
adjustments, the proposed system utilizes deep neural networks to analyze extensive datasets and
automatically enhance old photos. This automation diminishes the necessity for specialized
knowledge and time-consuming manual tweaks, ultimately streamlining the restoration process and
making it more user-friendly.
3.4.2 Improved Restoration Accuracy:
The advanced neural networks in the new system can understand complex features and connections
from labeled data. This allows them to effectively enhance vintage photographs by retaining intricate
details, textures, and colors. The new system outperforms the current one by delivering restoration
outcomes that closely mirror the original look of the pictures. It is capable of addressing various
forms of deterioration such as noise, blur, fading, and physical harm with improved accuracy.

3.4.3 Adaptability to Diverse Degradation Patterns:

The new system is more adaptable to different types of degradation often seen in old photos compared
to the current system. Instead of using preset filters and assumptions, the deep neural networks in the
new system can be trained on various data sets and adjust to different levels of degradation, types of
damage, and image features. This flexibility allows the new system to handle a broader range of
degradation situations, resulting in more reliable and efficient restoration outcomes.
3.4.4 Preservation of Fine Details and Textures:
The new system is highly effective in maintaining the delicate details and textures when restoring
images. Through the use of deep neural networks, it can accurately capture and restore complex
patterns, lines, and textures that may have been lost or altered in the current system. This feature is
especially important for old photographs with historical or sentimental significance, as it guarantees
the retention of key visual elements that enhance their authenticity and visual appeal.
3.4.5 Potential for Interactive and User-Guided Restoration:
The new system has the capability to involve users in the restoration process by allowing them to give
input, set preferences for restoration, and make adjustments as needed. This interactive approach
increases user satisfaction and engagement since users can play an active role in the restoration
process and achieve customized outcomes that meet their expectations.
3.4.6 Generalization and Scalability:
The proposed system's deep neural networks can effectively adapt to unfamiliar or intricate
degradation patterns. Through extensive training on vast annotated datasets, the system can draw
insights from various instances and apply its restoration skills to a broad spectrum of aged
photographs. This adaptability enables the system to address various image attributes, degradation
fluctuations, and unforeseen circumstances, establishing it as a versatile option for restoring old
photos.
4. System Requirements & Specification

4.1 Software Requirements:

Operating system : Windows 10
OpenCV-Python : 4.5.18
SciPy : 1.5.4
Torch : 1.2.0
Wheel : 0.33.1
TensorboardX : 2.1
Back end : Dataset (CelebA-HQ & Places2)
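A minimal sanity check for this environment might look as follows; it assumes the packages are importable under their usual module names and is not part of the paper's code.

# A minimal environment sanity check (assumed usual import names;
# not part of the paper's code).
import cv2, scipy, torch

print("OpenCV-Python:", cv2.__version__)  # expected 4.5.x
print("SciPy:", scipy.__version__)        # expected 1.5.4
print("Torch:", torch.__version__)        # expected 1.2.0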
4.2 Hardware Requirements:
Processors : Intel core i3
Hard Disk : 512GB
RAM : 8 GB
Graphics processor : preferred
4.3 Software Study:
 OpenCV-Python:
OpenCV, a widely-used open-source computer vision library for Python, offers a diverse set of tools
and functions for tasks such as image and video processing, object detection and tracking, machine
learning, and more. It enables developers to work with images and videos in different formats, apply
filters and transformations, carry out geometric operations, and extract features from visual data.
In addition to providing pre-trained models and algorithms for tasks such as face detection, object
recognition, and motion analysis, the library OpenCV facilitates the quick and effective development
of computer vision applications. It offers a user-friendly interface that simplifies complex image
processing tasks, while still allowing for detailed control at a lower level. OpenCV is utilized across
various fields like robotics, augmented reality, surveillance, and medical imaging because of its
adaptability, thorough documentation, and strong community backing.
 SciPy:
SciPy is a robust scientific computing library designed for Python, offering a wide array of numerical
algorithms and tools essential for scientific and technical computations. It enhances the functionalities
of NumPy, a foundational Python library for numerical operations. SciPy expands on NumPy by
providing additional features for tasks like optimization, integration, linear algebra, signal and image
processing, statistical analysis, and more. A standout characteristic of SciPy is its diverse range of
specialized modules tailored to different scientific fields. For instance, the scipy.optimize module
includes functions for optimization tasks such as root finding, curve fitting, and minimizing
multivariate functions. Additionally, the scipy.integrate module offers techniques for numerical
integration and solving ordinary differential equations.
SciPy is a highly utilized tool across a range of scientific fields such as physics, engineering, biology,
and finance. Its extensive collection of functions and algorithms, combined with its seamless
integration with other scientific libraries like NumPy and matplotlib, positions it as a crucial resource
for professionals and researchers engaged in numerical and scientific computing using Python. The
library is known for its thorough documentation, ongoing maintenance, and strong community
support, all of which contribute to its enduring significance and value in the realm of scientific
computation.
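As a small illustration of the scipy.optimize interface described above, the following sketch fits a hypothetical exponential fading model to synthetic data; the model and all values are assumptions.

# Illustrative use of scipy.optimize.curve_fit: fitting a hypothetical
# exponential fading model to synthetic data (all values are assumptions).
import numpy as np
from scipy.optimize import curve_fit

def fade(t, a, k):
    # Simple exponential decay, e.g. modeling contrast fading over time.
    return a * np.exp(-k * t)

t = np.linspace(0, 10, 50)
y = fade(t, 2.0, 0.3) + 0.05 * np.random.randn(50)  # noisy observations
params, _ = curve_fit(fade, t, y, p0=(1.0, 0.1))    # least-squares fit
print("fitted a, k:", params)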
 CelebA-HQ:
CelebA-HQ, also known as CelebA-High Quality, is an upgraded version of the CelebA dataset. The
CelebA dataset is a comprehensive compilation of images featuring celebrities, utilized for research in
facial attribute prediction and face recognition. CelebA-HQ comprises 30,000 high-resolution images,
each with a size of 1024x1024 pixels. These images were created using a method called progressive
growing of generative adversarial networks (GANs), which gradually enhanced the quality of the
original CelebA dataset. This process led to more intricate and lifelike facial depictions. CelebA-HQ
has gained popularity in the fields of computer vision and machine learning due to its superior image
quality. It is commonly used for tasks requiring precise facial analysis and synthesis. Researchers and
developers utilize this dataset to train advanced deep learning models, improve facial attribute
recognition algorithms, and enhance the performance of facial recognition systems.
The underlying CelebA dataset itself contains more than 200,000 celebrity face images and was
developed to advance facial attribute prediction and face recognition research; CelebA-HQ inherits
this breadth while adding the image quality needed for detailed facial attribute analysis, face
synthesis, and the training of advanced deep learning models.
 Places2:
Places2 is a dataset focused on scene recognition, offering a wide range of images from different
environments and locations. With over 1.8 million images across 401 scene categories like offices,
beaches, bedrooms, forests, and highways, it serves as a valuable resource for computer vision
research. The dataset includes images from various sources, ensuring a diverse representation of
scenes worldwide, both natural and man-made. Researchers can use Places2 for tasks like scene
understanding, image classification, and scene synthesis, with each image accompanied by metadata
for evaluation and benchmarking against standardized scene categories.
The extensive dataset size and wide range of scene categories covered in Places2 enhance the strength
and applicability of models trained on it. Researchers rely on Places2 as a crucial tool for delving into
scene comprehension, refining image classification methods, creating scene generation techniques,
and investigating the connections between scenes and their visual characteristics. The widespread
acceptance and frequent utilization of Places2 within the computer vision field underscore its
importance as a standard dataset for progressing the interpretation and examination of visual scenes.
5. System Design and Implementation
5.1 Self Attention Mechanism Design:
The concept of self-attention is akin to how the human brain operates. When people view images or
listen to speech, they tend to concentrate on the specific area where the information is located,
disregarding less important details. This attention mechanism strikes a balance between modeling
capability and computational efficiency, compensating for the limitations of convolution. In cases
where there are complex relationships between different parts of an image, the attention mechanism
can delve into these intricate connections. It enables a holistic understanding and coordination of each
pixel in the image, making it useful for enhancing texture details in image super-resolution tasks. This
application is particularly effective when integrated into the residual module of the generator. The
specific structure of this mechanism is illustrated in the accompanying figure.
After two convolution operations, the attention score can be obtained, which is calculated as follows:

$A_{j,i} = \frac{\exp(s_{ij})}{\sum_{i=1}^{N} \exp(s_{ij})}$

$A_{j,i}$ represents the attention score, i.e., the degree of attention the network pays to location $i$ when the synthesis area is $j$, and the weighted feature of the attention layer, expressed as $o = (o_1, o_2, \cdots, o_N)$, is obtained as

$o_j = \sum_{i=1}^{N} A_{j,i} F_i$

The attention-weighted feature output is multiplied by a balance parameter and then added back to the input feature map, as shown in the following formula:

$y_i = \gamma o_i + x_i$

$\gamma$ represents the balance parameter; its function is to adjust the weight between the decoder features and the resulting attention features. $y_i$ represents the final output of the attention layer.
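A minimal PyTorch sketch of a self-attention layer of this form (in the style of SAGAN) is given below; the 1×1 convolution embeddings and the channel reduction factor are assumptions rather than the authors' exact implementation.

# A minimal SAGAN-style self-attention layer (assumed embeddings and
# channel reduction; not the authors' exact implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 8, kernel_size=1)  # query
        self.phi = nn.Conv2d(channels, channels // 8, kernel_size=1)    # key
        self.g = nn.Conv2d(channels, channels, kernel_size=1)           # value
        self.gamma = nn.Parameter(torch.zeros(1))  # balance parameter gamma

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).view(b, -1, h * w)   # B x C' x N
        k = self.phi(x).view(b, -1, h * w)     # B x C' x N
        v = self.g(x).view(b, -1, h * w)       # B x C  x N
        # attn[j, i]: attention paid to position i when synthesizing position j
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # B x N x N
        # o_j = sum_i A_{j,i} * v_i  (attention-weighted feature)
        o = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * o + x              # y_i = gamma * o_i + x_i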
5.2 Network Structure Design
To minimize the discrepancy in spatial distribution between synthesized and real photos, the
restoration of old photos is framed as an image conversion issue. Treating clear and old photos as
distinct image spaces, this study aims to establish a mapping between them by transforming the
images across three spaces. The crux of this approach lies in encoding both real old photo data and
synthetic photo data into a shared hidden space. Leveraging a variational autoencoder, known for its
robust unsupervised learning capabilities and high-quality image reconstruction, the images are
encoded before adjusting the network using a generative adversarial network to bridge the domain
gap. Initially, two variational autoencoder (VAE) networks are employed for spatial mapping, with
the objective function for input object $i$ expressed as follows:

$L_{E_{R,X}}(i) = \mathrm{KL}\big(E_{R,X}(z_i \mid i) \,\|\, \mathcal{N}(0, I)\big) + \alpha\, \mathbb{E}_{z_i}\big[\lVert G_{R,X}(z_i) - i \rVert_1\big]$

$E_{R,X}$ represents the encoder, and $E_{R,X}(z_i \mid i)$ represents the distribution of the latent code $z_i$ inferred by $E_{R,X}$ when the input is $i$. The next step involves a mapping network that utilizes synthetic image pairs $\{x, y\}$ to assist in image restoration by mapping them through the shared hidden space. Essentially, the purpose of the mapping network is to restore deteriorated photos back to their original state [25]. During this phase, the loss function of the mapping network $\mu$ is

$L_{\mu} = \lambda_{1} L_{\mu,\ell_1} + L_{\mu,GAN}$

$L_{\mu,\ell_1}$ represents the L1 reconstruction loss, and $L_{\mu,GAN}$ represents the generative adversarial loss, which is used to synthesize photos that look real.
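A hedged sketch of this mapping-network objective, combining the L1 term and the adversarial term named above, could look as follows; the weighting factor lambda_l1 is an assumption, as the paper does not state one.

# A hedged sketch of the mapping-network objective: L1 reconstruction
# plus an adversarial term. The weight lambda_l1 is an assumption;
# the paper does not state one.
import torch
import torch.nn.functional as F

def mapping_loss(restored, target, disc_logits_fake, lambda_l1=10.0):
    # L_{mu,l1}: pixel-wise L1 distance between restored and clean images.
    l1 = F.l1_loss(restored, target)
    # L_{mu,GAN}: generator-side adversarial loss, pushing the
    # discriminator to judge restored photos as real.
    gan = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return lambda_l1 * l1 + gan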
5.3 Degradation Repair Design
When dealing with old photos that have structural defects, it is often necessary to carefully analyze
the entire image to identify relevant information that can be used to maintain the overall structure. To
achieve this, a support system for both local and global search is essential, which involves the
integration of two mechanisms [26]. In this context, the scratch mask from the original photo can be
utilized to guide the network in repairing damaged areas without borrowing pixels from those areas. Let $C$, $H$, and $W$ denote the number of channels, height, and width, respectively. The middle-layer feature map is denoted $F \in \mathbb{R}^{C \times HW}$, and a binary mask $m \in \{0, 1\}^{HW}$ marks damaged (1) and intact (0) positions. The affinity between positions $i$ and $j$ of the feature map $F$ is denoted $s_{ij}$, with $s \in \mathbb{R}^{HW \times HW}$ capturing the pairwise connection between pixels:

$s_{ij} = \frac{\exp\big(\theta(F_i)^{T}\,\phi(F_j)\big)}{\sum_{j}\exp\big(\theta(F_i)^{T}\,\phi(F_j)\big)}$

$F_i$ and $F_j$ are both $C \times 1$ vectors, with $\theta(\cdot)$ and $\phi(\cdot)$ as embedding functions that map $F$ into the embedded-Gaussian similarity above. Consequently, if the input mask identifies an area as damaged, global information is utilized for repair; otherwise, local feature information is employed. This integration of the global and local branches, guided by the input mask, leads to the following formula:

$F_{fuse} = (1 - m) \odot \rho_{local}(F) + m \odot \rho_{global}(F)$

In the formula, $\odot$ represents the Hadamard (element-wise) product of the matrices, and $\rho_{local}$ and $\rho_{global}$ both represent the nonlinear transformations of the branch residual blocks, which deal with the structural defects of old photos. The specific network structure is shown in the table.
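The mask-guided fusion itself reduces to a pair of Hadamard products, as in this minimal sketch; rho_local and rho_global stand for the residual-branch outputs named above.

# Minimal sketch of the mask-guided fusion: damaged positions take the
# global (non-local) branch, intact positions keep the local branch.
import torch

def fuse_branches(rho_local: torch.Tensor,
                  rho_global: torch.Tensor,
                  mask: torch.Tensor) -> torch.Tensor:
    """mask: binary tensor, 1 = damaged, 0 = intact (broadcastable)."""
    return mask * rho_global + (1.0 - mask) * rho_local  # Hadamard products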
5.4 Network Parameter Setting of Generator and Discriminator
The generative adversarial network consists of a generator and a discriminator. The generator's role
is to produce a high-quality image that closely resembles a real image. On the other hand, the
discriminator is tasked with distinguishing between images generated from actual training data and
those that are fake. The generator described in this study is divided into three main components. The
first component is the down sampling module, which includes three convolution kernels of varying
sizes. The second component is the self-attention residual module, which consists of 16 identical self-
attention residual blocks, each containing a self-attention layer and a convolutional layer. The final
component is the up sampling image reconstruction module, consisting of two subpixel convolutional
layers for pixel enhancement. Subpixel convolution, as opposed to regular deconvolution, minimizes
the impact of artificial elements, resulting in superior image quality in the reconstruction process. The
discriminator's role is to distinguish between input images from the sample space and those generated
by the generator, learning to differentiate between the two. In this section, the discriminator network
utilizes a deep convolutional neural network with the addition of a batch normalization layer and the
LeakyReLU activation function (α = 0.2) [27]. The process involves feeding the generated image
through multiple convolution layers to extract features, followed by inputting these features into a
fully connected layer. The Sigmoid activation function is then applied to classify the image as real or
fake. With the completion of the generator and discriminator design, the two components work
together through adversarial optimization to produce high-resolution images, achieving the
enhancement of old photographs.
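A sketch of a discriminator matching this description (strided convolutions, batch normalization, LeakyReLU with α = 0.2, a fully connected head, and a sigmoid real/fake output) is given below; the layer counts, widths, and the 256×256 input size are assumptions.

# A discriminator sketch matching the description above (assumed layer
# counts, widths, and 256x256 input size).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Strided convolution + batch normalization + LeakyReLU (alpha = 0.2).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 64),     # 256 -> 128
            conv_block(64, 128),   # 128 -> 64
            conv_block(128, 256),  # 64 -> 32
            conv_block(256, 512),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 16 * 16, 1),  # fully connected head
            nn.Sigmoid(),                 # probability that the input is real
        )

    def forward(self, x):  # x: B x 3 x 256 x 256 (assumed)
        return self.classifier(self.features(x))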

Fig.1 Function image


Fig.2 Schematic diagram of the steps of the generative adversarial network.

Fig.3 Self-attention mechanism structure


6. System Testing
Testing the system for an old photo image restoration process using deep neural networks is essential
to guarantee its functionality, accuracy, and reliability. This step involves assessing the system in its
entirety to confirm its ability to restore old photos effectively and meet the necessary criteria. Here are
some critical elements of system testing for this type of system:
6.1 Functional Testing:
The main goal is to confirm that the system can successfully carry out the desired image restoration
tasks. This involves experimenting with a variety of old photographs that have been affected by
different types of degradation like noise, blurriness, scratches, or color fading. The system must be
capable of efficiently eliminating these imperfections and improving the overall quality of the image.
6.2 Performance Testing:
Evaluating the system's efficiency involves testing its speed and resource usage. This includes
measuring how long it takes to process and recover images of varying sizes and levels of complexity.
The system needs to be capable of managing a moderate workload without experiencing notable
delays or resource limitations.
6.3 Robustness Testing:
The system's capability to manage difficult situations and unusual cases is assessed through testing.
This involves examining how well the system can withstand significant deterioration, high levels of
noise, or unusual elements in aged photographs. It is important for the system to be able to handle
these scenarios effectively and deliver satisfactory restoration outcomes.
6.4 Accuracy and Quality Testing:
The main objective is to verify the precision and excellence of the enhanced images by comparing
them to their original versions or clean references. This process helps evaluate the system's capability
to accurately restore the initial details, colors, and textures.
6.5 Usability and User Experience Testing:
The evaluation of the system involves analyzing the user interface and its overall usability. This
process includes assessing how easy it is to use, how intuitive it is, and how responsive the interface
is. User acceptance testing is conducted to collect user feedback and satisfaction, ensuring a
satisfactory user experience.
6.6 Compatibility Testing
This process confirms that the system is able to work effectively on various operating systems,
hardware setups, and web browsers, where relevant. It guarantees that the system operates properly in
diverse platforms and settings.
6.7 Security and Privacy Testing
Thorough system testing is essential to ensure that user data is handled securely and in compliance
with privacy regulations. This testing involves checking data encryption, access controls, and
protection against vulnerabilities or unauthorized access. Identifying and addressing any issues or
shortcomings in the old photo image restoration processing system through this testing process leads
to a dependable and top-notch restoration solution.
7. Experimentation Results:
7.1 Image Restoration Experiment Based On Blur Restoration
For this experiment, 1000 photos from both the CelebA-HQ and Places2 data sets were chosen to test
various algorithms for image restoration. The study focused on evaluating the peak signal-to-noise
ratio and structural similarity of each algorithm when restoring images affected by blur.

7.2 Analysis Of Peak Signal-To-Noise Ratio Results


In this section, various algorithms were used to restore images from 1000 photos in both the CelebA-
HQ and Places2 datasets. The results of the Peak Signal-to-Noise Ratio (PSNR) for each algorithm
are displayed in Figure 7. It is evident from the figure that regardless of whether it is the CelebA-HQ
or Places2 dataset, the algorithm presented in this paper achieved the highest PSNR value.
Specifically, on the CelebA-HQ dataset, the PSNR value for this algorithm was 32.34, while on the
Places2 dataset, it was 30.82. Conversely, the traditional algorithm Criminisi had the lowest PSNR
values on both datasets. A comparison of the PSNR results between the two datasets indicates that the
PSNR values for each algorithm were slightly higher on the CelebA-HQ dataset, demonstrating that
the algorithms performed better in restoring face images compared to scene images. In conclusion, the
algorithm presented in this paper consistently achieved the highest PSNR values across all datasets,
highlighting its superior image restoration capabilities, particularly for face images.
7.3 Analysis of Structural Similarity Results:
In this study, various algorithms were utilized to enhance 1000 images from the CelebA-HQ and
Places2 datasets, with a focus on analyzing the SSIM values of each algorithm. The findings are
presented in Figure 8, indicating that the algorithm consistently outperformed other algorithms in
terms of SSIM values. Specifically, the SSIM value of this algorithm on the CelebA-HQ dataset was
0.885, while on the Places2 dataset it was 0.879. Notably, Criminisi had the lowest SSIM value, with
0.849 on the CelebA-HQ dataset and 0.831 on the Places2 dataset. These results highlight the superior
performance of the algorithm in enhancing the structural similarity of the images compared to other
algorithms.
7.4 Image Restoration Experiment Based on Damage Degree:
In our study, we chose 100 images from the CelebA-HQ dataset and edited them using Photoshop to
create simulated damage. The photos were then divided into 5 groups, each containing 20 images. The
damaged areas in these groups accounted for 5%, 10%, 15%, 20%, and 25% of the total image size,
respectively.
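The masks in this experiment were produced manually in Photoshop; for reproducibility, a random-patch mask at a target coverage ratio could be generated as in the following sketch, which is an assumption rather than the authors' procedure.

# An illustrative way to simulate damage at a target coverage ratio
# (the paper edited photos in Photoshop; this random-patch mask is an
# assumption made for reproducibility, not the authors' procedure).
import numpy as np

def random_damage_mask(h, w, coverage=0.05, patch=16, seed=0):
    """Return a binary mask (1 = damaged) covering roughly `coverage` of the image."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=np.uint8)
    target = int(coverage * h * w)
    while mask.sum() < target:
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        mask[y:y + patch, x:x + patch] = 1  # drop a square "scratch" patch
    return mask

# Usage (illustrative): damaged = img * (1 - mask[..., None])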
7.5 Analysis of Peak Signal-To-Noise Ratio Results:
The study utilized four algorithms to repair 100 images across five groups, with the corresponding
PSNR results displayed in Figure 9. Analysis of Figure 9 reveals a consistent decrease in peak signal-
to-noise ratios for all four algorithms as the level of image damage increases. This indicates that the
restoration of photos becomes more challenging as the extent of damage worsens. Notably, the
algorithm proposed in this study consistently achieves the highest peak signal-to-noise ratio regardless
of the degree of damage. The calculated average PSNR value for this algorithm is 19.11 under
varying levels of damage. Several of the other algorithms produce PSNR values so close to one another
that no single one stands out as the worst performer.
7.6 Analysis of Structural Similarity Results:
The study utilized four algorithms to restore 100 images across five groups, with SSIM results
recorded. Figure 10 illustrates that each algorithm has a unique impact on the restoration of damaged
images. Notably, regardless of the extent of damage, the SSIM value of the algorithm in this study
consistently outperforms other algorithms. Additionally, the algorithm in this study exhibits a smaller
decrease in SSIM value compared to other algorithms. Upon analysis, the average SSIM value of the
algorithm in this study is 0.767 across varying levels of damage. Consequently, this algorithm
demonstrates superior effectiveness in restoring damaged images. The experiments further reveal that
the PSNR and SSIM values of the algorithm in this study surpass those of other algorithms, whether
for repairing blurriness or damage in old photographs. Thus, the algorithm in this study excels in both
blur and damage restoration of old images.
8. Conclusion:
The primary technology for restoring old photos and images is image restoration technology. Old
photos are often blurry and damaged, necessitating the development of a suitable restoration method
to recreate the original scene. This article presents a novel approach to repairing old photos using deep
neural networks. It begins by discussing the background and importance of the topic, followed by an
explanation of the general convolutional neural network and the generative adversarial network. The
paper then proposes a new image restoration method based on these networks within the deep neural
network framework, along with a designed loss function. An experiment is conducted to evaluate the
algorithm's performance in image restoration, demonstrating higher peak signal-to-noise ratio and
structural similarity compared to other algorithms. This paper's algorithm excels not only in blur
repair but also in damage repair, making it more suitable for restoring old photos. While the
restoration effect is good overall, there is room for improvement, particularly in repairing damaged
photos. Future work will focus on enhancing the restoration of broken photos.
Appendix:
Sample Coding:
Fig.4 Sample Study Case 1
Fig. 5 Study Case 2
Fig. 6 Comparative Analysis of Case Study with and without Deep Learning Technique

References
1. S. Ding, S. Qu, Y. Xi, and S. Wan, "Stimulus-driven and concept-driven analysis for image caption generation," Neurocomputing, 2020.
2. O. I. Khalaf, C. A. T. Romero, A. A. J. Pazhani, and G. Vinuja, "VLSI implementation of a high-performance nonlinear image scaling algorithm," Journal of Healthcare Engineering, 2021.
3. H. Duan and X. Wang, "Echo state networks with orthogonal pigeon-inspired optimization for image restoration," IEEE Transactions on Neural Networks & Learning Systems, 2016.
4. J. Xu, X. C. Tai, and L. L. Wang, "A two-level domain decomposition method for image restoration," Inverse Problems & Imaging, 2010.
5. Y. T. Peng and P. C. Cosman, "Underwater image restoration based on image blurriness and light absorption," IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1579-1594, 2017.
6. A. V. Nikonorov, M. V. Petrov, S. A. Bibikov, V. V. Kutikova, A. A. Morozov, and N. L. Kazanskiy, "Image restoration in diffractive optical systems using deep learning and deconvolution," Computer Optics, vol. 41, no. 6, pp. 875-887, 2017.
7. S. Ono, "Primal-dual plug-and-play image restoration," IEEE Signal Processing Letters, vol. 24, no. 8, pp. 1108-1112, 2017.
8. H. Liu, R. Xiong, X. Zhang, Y. Zhang, S. Ma, and W. Gao, "Non-local gradient sparsity regularization for enhancing image quality."
9. B. Dong, Z. Shen, and P. Xie, "Image restoration: a general wavelet frame based model and its asymptotic analysis," SIAM Journal on Mathematical Analysis, vol. 49, no. 1, pp. 421-445, 2017.
10. R. Surendran, O. I. Khalaf, and C. Andres, "Deep learning based intelligent industrial fault diagnosis model," CMC-Computers, Materials & Continua, vol. 70, no. 3, pp. 6323-6338, 2022.
