
sensors

Article
Automated Micro-Crack Detection within Photovoltaic
Manufacturing Facility via Ground Modelling for a Regularized
Convolutional Network
Damilola Animashaun and Muhammad Hussain *

Department of Computer Science, Centre for Industrial Analytics, School of Computing and Engineering,
University of Huddersfield, Queensgate, Huddersfield HD1 3DH, UK; [email protected]
* Correspondence: [email protected]

Abstract: The manufacturing of photovoltaic cells is a complex and intensive process involving the
exposure of the cell surface to high temperature differentials and external pressure, which can lead
to the development of surface defects, such as micro-cracks. Currently, domain experts manually
inspect the cell surface to detect micro-cracks, a process that is subject to human bias, high error
rates, fatigue, and labor costs. To overcome the need for domain experts, this research proposes
modelling cell surfaces via representative augmentations grounded in production floor conditions.
The modelled dataset is then used as input for a custom ‘lightweight’ convolutional neural network
architecture for training a robust, noninvasive classifier, essentially presenting an automated micro-
crack detector. In addition to data modelling, the proposed architecture is further regularized using
several regularization strategies to enhance performance, achieving an overall F1-score of 85%.

Keywords: defect detection; micro-cracks; photovoltaics; smart manufacturing; quality inspection

Citation: Animashaun, D.; Hussain, M. Automated Micro-Crack Detection within Photovoltaic Manufacturing Facility via Ground Modelling for a Regularized Convolutional Network. Sensors 2023, 23, 6235. https://doi.org/10.3390/s23136235

Academic Editors: Roberto Teti and Zahir M. Hussain

Received: 18 May 2023; Revised: 4 July 2023; Accepted: 5 July 2023; Published: 7 July 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
The issue of global emissions and how to address them is a globally shared concern, leading to the emergence of the renewable energy field, and among the practical options available at all levels of society, solar power is the most widely accepted [1]. According to the International Energy Agency (IEA), global carbon dioxide (CO2) emissions from energy combustion and industrial processes increased by 0.9% to a record high of 36.8 Gt in 2022 after two years of pandemic-related oscillations, with CO2 emissions from energy combustion rising by 1.3% in 2022 while CO2 emissions from industrial processes declined [2].
The use of solar energy has resulted in more photovoltaic (PV) solar panels being produced, installed, and maintained. It is crucial to have a dependable inspection process as production is automated to meet demand. These panels may face challenges, like soiling, harsh environments, and damage, which can lower their performance [1,3–5]. These defects may be in the form of micro-cracks, which can be hard to visually identify [6], and their manual detection is subject to human error and thus susceptible to low efficiency, high labor costs, high rates of false detection, as well as a high scrap rate [7]; hence, there is a need to develop an automated process for easy detection.
This study explains how the manual inspection of PV cells in manufacturing facilities is a costly and time-consuming process that can result in human bias. The solution to this problem is integrating computer vision into the inspection process, which can detect defective PV cells more quickly and cost effectively. Data collection from within manufacturing facilities can be a cumbersome task due to several issues, including limited accessibility and down-time in the event of needing to deploy an acquisition mechanism for data collection. The complex and sensitive nature of PV manufacturing means researchers cannot simply collect data from a PV manufacturing site; hence, this work proposes the modeling of production floor variance in order to scale a small PV dataset in a representative manner, followed by the development of a lightweight CNN architecture for the on-site, automated detection of micro-cracks occurring during the manufacturing process.

1.1. Literature Review


The popularity and affordability of solar power have led to increased use of translucent
solar panels in homes and businesses. However, in utility-scale solar power plants, defects
in photovoltaic modules, such as micro-cracks, must be identified to maintain efficiency.
Gabor et al. [8] examined the potential of UV fluorescence (UVF) for detecting cracked
cells in solar panels via a pole-mounted UV flash camera system applied to residential
rooftops in Boulder, Colorado, and they found that the pole-mounted UVF system is
highly applicable and informative for detecting defects for a range of residential panel ages
and designs, and it can provide additional information to that from electroluminescence
imaging. Han et al. [9] proposed a deep learning approach using an improved version
of YOLOv3-tiny to detect faults in solar panels with the aid of a UAV equipped with a
thermal camera and GPS to acquire thermal images and locate faults. The information is
transmitted to a remote server for visualization via long-term evolution (LTE), and the
proposed DL model outperforms the current default YOLOv3-tiny model, achieving a high
accuracy of 96.45%.
Espinosa et al. [10] proposed using a CNN to automatically classify physical faults
in PV plants by segmenting and classifying RGB images, and they included experimental
results for both two output classes (no fault and fault) and four output classes (no fault,
dust, cracks, and shadows), achieving an average accuracy of 75% for the two output classes
and 70% for the four output classes, which demonstrated its potential as a classification
method for PV systems. Acharya et al. [11] also proposed a method for classifying different
types of defects in solar cells using a deep Siamese convolutional neural network (CNN).
The EL image is first preprocessed to remove noise and distortions, and then the proposed
model is tested on a standard EL image dataset. Simulation results show that the pro-
posed model achieves better classification accuracy with a 90% AUC in detecting defective
solar cells.
While using advanced CNN architectures and ensemble learning to detect micro-
cracks in EL images of PV modules, Rahman et al. [12] achieved high accuracy rates
of 97.06% and 96.97% for polycrystalline and monocrystalline solar panels, respectively,
by utilizing pre-trained models, including Inception-v3, VGG-19, VGG-16, Inception-
ResNet50-v2, Xception, and ResNet50-v2 [13]. Akram et al. [14], on the other hand, adopted
a CNN-based deep learning architecture using an “isolated model” which had been trained
with samples from the EL PV cell and employed transfer learning for fine-tuning the
architecture, achieving an accuracy rate of 99.23%, though the generalization and accurate
representativeness of the trained model may raise concerns due to the size of the dataset.
However, Mathias et al. [15] expanded the study by training 2000 EL images and testing
300 EL images. The preprocessing stage involved applying perspective transformation and
separating the solar panel section and individual solar cells from the PV panel. Textural
features were extracted from these cells using DWT and SWT. Support vector machine and
back propagation neural network were used for classification into cracked and non-cracked
cells, and the researchers achieved high classification accuracies of 92.67% and 93.67%
using SVM and BPNN, respectively. Winston et al. [16] also adopted this model, using six
input parameters, and both methods (NN and SVM) showed promising results with average
accuracies of 87% and 99%, respectively, and an F1-score of 94.6%, recall of 96.3%, and
precision of 87.3% [17].
In the study of Xue et al. [18], the authors adopted fuzzy c-means clustering and
AlexNet CNN [4] to accurately detect hidden cracks despite an irregular and composite
texture background, thereby achieving stable and precise results with 94.4% accuracy [19].
In summary, current research on automating the detection of faults in PV systems
lacks practical considerations. Although several works have focused on optimizing state-
of-the-art CNN architectures for high accuracy, there has been little attention on developing
lightweight CNN architecture, i.e., internal architectural complexity. This is a key area
for focus as the majority of the state-of-the-art architectures cannot be deployed onto
constrained-edge devices due to the high computational complexity of the internal net-
work. Hence, production sites would need to commission high-performance computing,
i.e., GPUs, to run state-of-the-art CNNs, such as VGG, which significantly increases
the cost.

1.2. Paper Contribution


This study has two fundamental contributions. Firstly, as evident from the literature
review, the collection of quality PV cell samples for normal and defective cell surfaces
is a key component when looking to develop automated CNN algorithms for defect de-
tection and classification. However, the procurement of quality datasets, in particular
EL-processed samples, can be cumbersome and sometimes practically infeasible due to
access restrictions within certain manufacturing facilities. Hence, to provide an alternative
route, we present the modelling of internal and external variance in the context of PV
cell manufacturing conditions by proposing representative augmentations for appropri-
ately scaling and increasing the variance of EL-based PV datasets. Secondly, a custom
CNN architecture with a lightweight footprint is developed (4.67 Million parameters) and
trained using the augmentation-generated samples. The ‘lightweight’ architecture is designed
and trained to address the stringent deployment conditions within manufacturing facilities,
such as edge-device deployment, low power consumption, and close-to-the-source inferencing.
In addition to the generalization gained via the scaled augmentations, several regularization
techniques are applied to further improve model generalization and reduce the degree
of overfitting.

2. Methodology
2.1. Dataset
For the purpose of this study, a dataset of PV-cell images from the manufacturing
facility was used and was manually labeled by experts.
The dataset has two classes, normal and defective, with a small sample size of 930,
which makes it difficult to develop a highly generalized architecture capable of accurately
distinguishing between the two classes. Table 1 presents the status of the dataset.

Table 1. Original dataset.

Class Samples
Normal 469
Defect 461

Figure 1 shows examples of normal and defective PV-cell surfaces. To ensure proper
scaling of the dataset, it was necessary to understand the visual differentiation features and
variance of the two classes. By observing Figure 1A, we can identify differences in texture
and global-level variance. For instance, the normal class has texture variance, with the first
image being clearer than the center image and the center image being clearer than the last
image. It’s crucial to consider this variance as it may result in the developed architecture
falsely generalizing that only clear surface images belong to the normal class.
Figure 1. Data investigation: (A) normal, (B) defective.

In Figure 1, the visual differences between the normal and defective PV-cell surfaces are impacted by both internal and external factors. For example, shading or poor filter quality can induce pixel shading on normal cells, which can resemble micro-cracks on defective cells and increase the chance of misclassification.
The busbar is a crucial component of PV cells, but its configuration and starkness can vary significantly, potentially leading to misclassification. Therefore, these observations suggest that the developed architecture needs to account for various degrees of textural and internal variance within and between classes to achieve accurate classification.
Despite the small yet representative size of the original dataset, it was split using the train, test, and split function (in the ratio 70:10:20), as shown in Table 2.

Table 2. Original dataset split.

Normal Defect Total Percentage
Testing Set 44 49 93 10%
Training Set 331 320 651 70%
Validation Set 94 92 186 20%
Total 469 461 930

2.2. Data Augmentations
Upon analyzing the dataset, it was hypothesized that addressing the variance within the dataset could be achieved through representative data modeling, rather than randomly augmenting the dataset to increase its size. Consequently, the dataset was augmented and limited to 2232 samples from the initial 930 images. This decision was also influenced by practical limitations in obtaining PV data from manufacturing facilities due to limited access and a lack of open-source data.
Data transformations can be divided into two categories: translational invariance and translational equivariance. The former is used for designing the internal layers of architectures and preserves regional transformations through aggregation and is represented as the vector sum of a constant v to every point x, as expressed in (1) and (2):

Tv(x) = x + v (1)

The latter, on the other hand, is used for data scaling and transforms (g) the input image (f) according to the type of transformation applied. This is mathematically expressed as follows:

f(g(x)) = g(f(x)) (2)

The selection of augmentations was based on generating representative samples that may be generated in PV manufacturing facilities, considering different factors, such as production line configurations and EL camera specifications.

2.2.1. Scaling Variability
The production of a PV cell goes through various stages, from silicon ingots down to cell assembly, all of which involve various texturing processes and quality control at all stages. These variabilities may result in different cell surface orientations. Therefore, to ensure consistency and enable the proposed model to detect the different instances, various scaling variabilities, like flipping, rotation, width shift, and height shift, were applied, as shown in Figures 2–5.

Figure 2. Flip: (A) vertical, (B) horizontal.

Figure 3. Rotation: (A) clockwise, (B) counter-clockwise, (C) 180 degrees shift.

Figure 4. 15 Degree: (A) vertical shift, (B) horizontal shift.

Figure 5. 15 Degree Rotation: (A) height shift, (B) width shift.

2.2.2. Contrast Variability
Different samples of the PV cells were taken under different environmental factors, like the dimness of the room when the pictures were taken, camera quality, or dust that could have accumulated from the rigorous production and inspection stages. Hence, there is a need to ensure the model is able to understand these variations by adding contrast augmentations, like brightness, exposure, and noise, to the dataset, as shown in Figures 6–8.

Figure 6. Brightness: (A) input, (B) output.

Figure 7. Exposure: (A) input, (B) output.

Figure 8. Noise: (A) input, (B) output.

Table 3 presents the state of the newly generated dataset after applying the aforementioned augmentation techniques. The dataset was then split, as shown in Table 4, using the same ratio as the original dataset.

Table 3. Augmented dataset.

Class Samples
Normal 1131
Defect 1101

Table 4. Augmented dataset split.

Normal Defective Total Percentage
Testing Set 114 109 223 10%
Training Set 791 772 1563 70%
Validation Set 226 220 446 20%
Total 1131 1101 2232
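To make the augmentation strategy above concrete, the sketch below composes comparable scaling and contrast transformations with PyTorch/torchvision. It is a minimal illustration rather than the authors' exact pipeline: the shift ranges, brightness/contrast factors, noise level, and the `data/train` folder layout are assumptions, and the subsequent 70:10:20 split is not shown.

```python
import torch
from torchvision import datasets, transforms

# Scaling variability (flips, rotations, shifts) and contrast variability
# (brightness/contrast jitter, additive noise); parameter values are illustrative.
augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),   # width/height shift
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.ToTensor(),
    transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0.0, 1.0)),  # noise
])

# Assumes EL cell images arranged as data/train/normal and data/train/defect.
train_set = datasets.ImageFolder("data/train", transform=augment)
```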

2.3. Proposed Architecture


To reduce the complexity of the automated defect detector, a custom CNN architec-
ture was developed featuring two convolutional blocks with a limited number of filters.
Filters are an important component within a CNN architecture, as they aim to extract key
features, but at the same time, a high number of filters can increase the computational cost. Hence, our strategy was based on applying representative augmentations to accentuate key features and variance within the dataset, to make it easier for filters to grasp the underlying characteristics, using only a limited number of filters.
Each convolutional block is comprised of predefined filters followed by feature aggregation via Max-pooling, with the result feeding into the ReLu activation function. The selection of ReLu was again in line with our research theme, i.e., lightweight footprint, as it simply implies a Max operation, as expressed in (3):

s(x) = max(0, x) (3)

The first convolutional block contained eight filters, also known as kernels. The feature maps resulting from the operations performed in the first convolutional block were required as input data for the second convolutional block. We decided to start with a small number of filters, then incrementally increase the quantity, if required. The rationale for this was that filters contribute to increased computational parameters, and as our aim was to produce a computationally lightweight architecture, opting for a large number of filters would be counterproductive towards this aim.
Figure 9 presents the proposed architecture containing two convolutional blocks followed by two fully connected layers feeding into the output. As presented in Figure 9, each convolutional block contained a predefined number of filters followed by feature map aggregation and non-linearity transformation via the ReLu activation function.

Figure 9. Proposed architecture.

Table 5 presents the internal architectural depth details for the proposed architecture. As evident from Table 5, the proposed architecture resulted in only 4.67 million parameters. This would be considered lightweight compared to other architectures, such as ResNet at 11.69 Million [20] and VGG with over 100 Million parameters [21].

Table 5. Internal depth architecture layout.

Layers Output Shape Parameters
Input 3 × 224 × 224 ---
Convo2d-1 8 × 222 × 222 224
BatchNorm2d 8 × 222 × 222 16
ReLu 8 × 222 × 222 ---
Max-Pool2d 8 × 111 × 111 ---
Convo2d 16 × 109 × 109 1168
BatchNorm2d 16 × 109 × 109 32
ReLu 16 × 109 × 109 ---
Max-Pool2d 16 × 54 × 54 ---
Dropout 16 × 54 × 54 ---
Fc1 100 Neurons 4,665,700
ReLu 100 ---
Dropout 100 ---
Fc2 50 Neurons 5050
ReLu 50 ---
Dropout 50 ---
Output 2 Neurons 102
Total Parameters 4.67 Million
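The layer listing in Table 5 maps directly onto a small PyTorch module. The following is a reconstruction inferred from the reported output shapes and parameter counts (3 × 3 convolutions without padding, 2 × 2 max-pooling, and a 16 × 54 × 54 = 46,656-dimensional input to Fc1); it is a sketch of the described design, with the dropout probability left as a configurable argument rather than a value taken from the paper.

```python
import torch
import torch.nn as nn

class LightweightPVNet(nn.Module):
    """Sketch of the two-block CNN implied by Table 5 (~4.67 M parameters)."""
    def __init__(self, p_drop: float = 0.5, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3),    # 3x224x224 -> 8x222x222 (224 params)
            nn.BatchNorm2d(8),                 # 16 params
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                   # -> 8x111x111
            nn.Conv2d(8, 16, kernel_size=3),   # -> 16x109x109 (1168 params)
            nn.BatchNorm2d(16),                # 32 params
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                   # -> 16x54x54
            nn.Dropout(p_drop),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # 16 * 54 * 54 = 46,656 features
            nn.Linear(16 * 54 * 54, 100),      # 4,665,700 params (Fc1)
            nn.ReLU(inplace=True),
            nn.Dropout(p_drop),
            nn.Linear(100, 50),                # 5050 params (Fc2)
            nn.ReLU(inplace=True),
            nn.Dropout(p_drop),
            nn.Linear(50, num_classes),        # 102 params (output)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```

Summing p.numel() over the parameters of this sketch gives 4,672,292 trainable parameters, consistent with the 4.67 million figure in Table 5.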

3. Model Evaluation
3.1. Hyperparameter Tuning
This section compares the performance of the various experimental processes to
ascertain the optimal architecture configuration using Google Colab for GPU acceleration.
Due to limited GPU access, training was capped at 50 epochs, batch size was set to 32,
learning rate was set to 0.02, and SGD-M optimizer was adopted for faster training, as
shown in Table 6.

Table 6. Hyperparameters.

Global Hyperparameters
Batch Size 32
Epochs 50
Optimizer SGD-M
Learning Rate 0.02
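A minimal training configuration matching Table 6 might look as follows; the momentum value (0.9), the cross-entropy loss, and the data loaders are assumptions, since only the batch size, epoch count, optimizer family, and learning rate are reported.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

model = LightweightPVNet()                      # the module sketched in Section 2.3
criterion = nn.CrossEntropyLoss()               # assumed loss for the 2-class output
optimizer = optim.SGD(model.parameters(), lr=0.02, momentum=0.9)  # SGD-M, lr from Table 6

def run_epoch(loader: DataLoader, train: bool) -> float:
    model.train(train)
    correct = total = 0
    with torch.set_grad_enabled(train):
        for images, labels in loader:
            outputs = model(images)
            loss = criterion(outputs, labels)
            if train:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    return 100.0 * correct / total

# for epoch in range(50):                       # 50 epochs; loaders built with batch size 32
#     train_acc = run_epoch(train_loader, train=True)
#     val_acc = run_epoch(val_loader, train=False)
```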

3.2. Original Dataset Performance


Following the data split in Table 2, the proposed architecture was explored using a
learning rate of 0.02 for a fair comparison, and the results are shown in Figure 10, which
represents the model performance. It is evident from Figure 10 that the initial model
was not able to provide satisfactory results with a validation accuracy of 50.54%, i.e., the
architecture was essentially rendered as a random classifier.
Figure 11 complements the training and validation results presented in Figure 10 by
presenting the resultant confusion matrix. Based on the class-wise breakdown presented
via the confusion matrix in Figure 11, it is clear the architecture had failed to generalize
and essentially classified most samples as normal PV cells.
Further breaking down the performance metrics, Table 7 presents the precision, recall,
and F1-score for the trained classifier. As evident from the overall F1-score of 67%, it
can be concluded that the architecture lacked the generalization capacity with respect to
the application.
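For reference, the reported precision, recall, F1-score, and accuracy can be computed directly from the validation predictions; the sketch below uses scikit-learn with placeholder labels (treating "normal" as the positive class is an assumption, and the toy arrays are not the counts behind Figure 11).

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder predictions: a model that labels nearly everything "normal" (1)
# shows the same failure mode as Table 7, i.e., 100% recall with low precision.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0]

print("Precision:", precision_score(y_true, y_pred, pos_label=1))
print("Recall:   ", recall_score(y_true, y_pred, pos_label=1))
print("F1-score: ", f1_score(y_true, y_pred, pos_label=1))
print("Accuracy: ", accuracy_score(y_true, y_pred))
```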
Figure 10. Original data performance.

Figure 11. Confusion matrix for the initial model evaluation metrics with original dataset.

Table 7. Original dataset performance.

Performance on Original Dataset
Precision 51%
Recall 100%
F1-Score 67%
Accuracy 50.54%

3.3. Augmented Dataset Performance


Based on the performance of the initial architecture, there were two potential routes that could be pursued: firstly, increasing the architectural capacity of the network via increased internal layer depth, and secondly, applying data augmentations. As the objective of the research was to propose a lightweight architecture, the latter option was given priority, as this would not increase the computational complexity of the proposed architecture. Hence, the proposed augmentations based on production floor manifestations (presented in the methodology section) were applied to the initial dataset, and the transformed dataset presented in Table 4 was used for training the initial architecture.
Figure 12 represents the performance of the augmented dataset trained on the initial architecture. From Figure 12, it is evident that the augmented dataset did not have any significant impact on improving the performance of the architecture. This did not, however, render the augmentations as ineffective, as mentioned earlier. The reason for the poor performance could be due to the architecture lacking the internal architectural capacity required for generalization on the given dataset.

Figure 12. Augmented dataset performance.

3.4. Modified Architecture
The next iteration was based on enhancing the internal architectural capacity of the architecture by introducing an additional convolutional block with increased filters and another fully connected layer, the details of which are presented in the Proposed Architecture section. Based on the training/validation graph presented in Figure 13, this iteration had a profound impact on the performance, with a validation accuracy reaching 86.6%. The validation curve improved from 54.9% to 86.6%, demonstrating the improved ability of the modified architecture to better generalize because of the introduction of an additional convolutional block. The metric breakdown presented via Table 8 also endorses the performance reported in Figure 13, with an overall F1-score of 84% and improved precision (78%).

Figure 13. Modified architecture performance.

Table 8. Modified architecture performance.

Modified Architecture Performance
Precision 78%
Recall 91%
F1-Score 84%
Accuracy 86.55%

3.4.1. Modified Architecture with Batch Normalization
Although the modified architecture provided improved results, when observing the training and validation graphs in Figure 13, it is evident that there is a significant difference between the two respective accuracies. This indicated that there was a high degree of overfitting being experienced by the trained architecture. Hence, with the aim of reducing overfitting, several regularization strategies were deployed in an iterative manner, starting with batch normalization, with the aim of reducing internal covariance that may be residing within the internal samples of the two classes.
Figure 14 presents the performance of the proposed architecture post integration of batch normalization. It is evident from Figure 14 that the introduction of batch normalization did not have any significant impact with respect to reducing overfitting, with the validation accuracy improving by 1%.

Figure 14. Performance of modified architecture with batch normalization.

3.4.2. Modified Architecture with Dropouts
As the integration of batch normalization did not have a significant impact on reducing overfitting, the next regularization strategy selected was dropout. Dropout would focus on reducing the distance between the training and validation accuracies, also known as reducing the degree of overfitting. The implementation of dropout was based on the drop-ratio parameter, i.e., the ratio of neurons to be randomly disabled. To select the optimal drop-ratio, incremental steps of 10% were taken for the experimentation process, focusing our evaluation on the degree of overfitting. Due to the lightweight architecture of the proposed network, it was important to experiment with different dropout ratios rather than select 0.5 as the default. Figure 15 presents the training and validation graphs for each drop-ratio experiment. As evident from Table 9, a drop-ratio of 0.6, i.e., disabling 60% of the internal neurons of the proposed architecture, resulted in the lowest degree of overfitting, i.e., 12.4%, with an overall F1-score of 83%.

Table 9. Comparison of modified architecture's performance with dropout.

Dropout Rate Training Accuracy Validation Accuracy Degree of Overfitting F1-Score
10% 100% 86.55% 13.45% 82%
20% 99.94% 83.63% 16.31% 82%
30% 99.62% 83.86% 15.76% 84%
40% 99.43% 81.17% 18.26% 82%
50% 98.39% 83.63% 14.76% 82%
60% 97.83% 85.43% 12.4% 83%

Figure 15. Comparison of modified architecture with dropout rate (10% to 60%).

3.5. Modified Architecture with Batch Normalization and Dropout
One final experiment stimulated via intuition was the amalgamation of both batch normalization and dropout into the training and validation pipeline in a synchronous manner, with the aim to observe whether this could result in further performance accentuation with respect to reducing overfitting and improving the overall F1-score.
This strategy was implemented by integrating the batch normalization component and the optimal performing drop-ratio, i.e., 0.6, into the proposed architecture. Figure 16 presents the training and validation graphs. It is evident from Figure 16 that although the degree of overfitting was significantly reduced to less than 5%, the performance with respect to precision, recall, and F1-score had also diminished, with an overall F1-score of 78%, as shown in Table 10.
Based on the result of the dropout experiments, we decided to modify the architecture with a combination of batch normalization and 60% dropout. Although the degree of overfitting was minimal compared to the previous histories, the model performance, which is represented in Figure 16 and Table 10, respectively, is low when compared to the previous analyses in terms of accuracy and F1-score.

Figure 16. Model performance using a combination of batch normalization and 60% dropout.

Table 10. Modified architecture with batch normalization and 60% dropout performance.

BN-Dropout Combined Performance
Precision 73%
Recall 84%
F1-Score 78%
Accuracy 76.01%

4. Discussion
To select the most appropriate architecture configuration for the respective domain, i.e., EL-based PV fault detection, this research presented a development pipeline introducing incremental improvements. Rather than reporting the final proposed solution by itself, we went through the training and validation process after the introduction of each design component to manifest its impact, starting from the original dataset, as evident from Table 11.
A key takeaway from the results presented in Table 11 is that data augmentations and architectural capacity complement each other when it comes to achieving better performance. Although the addition of representative data augmentations improves the variance of the original dataset, it is also necessary to ascertain the basic generalization capacity with respect to the internal architecture, as without enough convolutional blocks and internal filters, the architecture would not be able to extract the key underlying feature characteristics of the dataset in order to provide high performance.
In terms of the final selection, the proposed architecture was selected with the integration of batch normalization, reporting an overall F1-score of 85%. Although one may argue that the proposed architecture with a drop-ratio of 0.6 should be selected due to its reduced degree of overfitting (12.4%), when looking at the wider application, this may have a negative impact on the architecture post deployment. The reason for this is that the proposed architecture consisted of only two convolutional blocks followed by two fully connected layers; hence, further reduction in the network via the application of dropout may reduce the generalization capacity when dealing with wider variance, post deployment.

Table 11. Comparison of model’s performance across all parameters.

Complete Experimental Performance Evaluation


Original Data Performance
Precision 51%
Recall 100%
F1-Score 67%
Accuracy 50.54%
Augmented Dataset Performance
Precision 60%
Recall 19%
F1-Score 28%
Accuracy 54.93%
Modified Architecture Performance
Precision 78%
Recall 91%
F1-Score 84%
Accuracy 86.55%
Modified Architecture with Batch Normalization Performance
Precision 79%
Recall 92%
F1-Score 85%
Accuracy 86.67%
Modified Architecture with 60% Dropout Performance
Precision 81%
Recall 85%
F1-Score 83%
Accuracy 85.43%
Modified Architecture with Batch Normalization and 60% Dropout Performance
Precision 73%
Recall 84%
F1-Score 78%
Accuracy 76.01%
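The "degree of overfitting" used when comparing configurations (e.g., in Table 9) is simply the gap between training and validation accuracy; a small helper makes the selection criterion explicit.

```python
def degree_of_overfitting(train_acc: float, val_acc: float) -> float:
    """Gap between training and validation accuracy, in percentage points."""
    return round(train_acc - val_acc, 2)

# 60% dropout row of Table 9: degree_of_overfitting(97.83, 85.43) -> 12.4
```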

5. Conclusions
In conclusion, it can be stated that the research objective, i.e., creating a lightweight
architecture for micro-crack detection in PV cells, was achieved to a high degree. The
lightweight footprint of the architecture is evident from the comparison against state-
of-the-art architectures presented in Table 12. It is clear from the comparison that our
proposed architecture was significantly more computationally friendly compared to archi-
tectures such as ResNet at 11.69 Million parameters and AlexNet at 61.1 Million parameters.
In order to make sure the proposed architecture was able to generalize the PV domain,
several augmentations were proposed based on the modeling of a production floor envi-
ronment. In addition to this, multiple regularization strategies were deployed for obtaining
higher convergence.

Table 12. Architectural comparison.

Model Parameters (M)


Proposed 4.7
GoogleNet 13
AlexNet 61.1
ResNet 11.69
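Parameter counts like those in Table 12 can be reproduced by summing tensor sizes over a model's parameters; applied to the module sketched in Section 2.3, this returns roughly the 4.67 million figure reported for the proposed architecture (the counts for the reference architectures depend on the specific torchvision variants used).

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Total number of trainable parameters (reported in millions in Table 12).
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# count_parameters(LightweightPVNet())  -> 4,672,292 (~4.67 M)
```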

As future development, the authors aim to further increase architectural generaliza-


tion by widening the experimental process to include hyperparameter tuning along with
optimizer selection and gradient-based augmentation generation, as presented in [22].
The proposed development pipeline can be extended to similar applications focused
on vision-based automated detection implementations requiring limited computational
constraints, such as healthcare [23], security [24], food industry [25], renewable energy [26],
and other constrained environments [27].

Author Contributions: Conceptualization, M.H.; formal analysis, D.A.; investigation, D.A. and D.A.;
methodology, D.A.; project administration, M.H; visualization, D.A.; writing—original draft, D.A.;
writing—review and editing, M.H. All authors have read and agreed to the published version of
the manuscript.
Funding: This research has no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not presently available.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Hussain, M.; Al-Aqrabi, H.; Hill, R. PV-CrackNet Architecture for Filter Induced Augmentation and Micro-Cracks Detection
within a Photovoltaic Manufacturing Facility. Energies 2022, 15, 8667. [CrossRef]
2. CO2 Emissions in 2022–Analysis-IEA. Available online: https://www.iea.org/reports/co2-emissions-in-2022 (accessed on
11 April 2023).
3. Padmavathi, N.; Chilambuchelvan, A. Fault detection and identification of solar panels using Bluetooth. In Proceedings of the
2017 International Conference on Energy, Communication, Data Analytics and Soft Computing, ICECDS 2017, Chennai, India,
1–2 August 2017; pp. 3420–3426. [CrossRef]
4. Zyout, I.; Qatawneh, A. Detection of PV Solar Panel Surface Defects using Transfer Learning of the Deep Convolutional Neural Networks. In
Proceedings of the 2020 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab
Emirates, 4 February–9 April 2020.
5. Dhimish, M.; Mather, P. Development of Novel Solar Cell Micro Crack Detection Technique. IEEE Trans. Semicond. Manuf. 2019,
32, 277–285. [CrossRef]
6. Dhimish, M.; Holmes, V.; Dales, M.; Mehrdadi, B. Effect of micro cracks on photovoltaic output power: Case study based on real
time long term data measurements. Micro Nano Lett. 2017, 12, 803–807. [CrossRef]
7. Yao, G.; Wu, X. Halcon-Based Solar Panel Crack Detection. In Proceedings of the 2019 2nd World Conference on Mechanical
Engineering and Intelligent Manufacturing (WCMEIM), Shanghai, China, 22–24 November 2019. [CrossRef]
8. Gabor, A.M.; Knodle, P. UV Fluorescence for Defect Detection in Residential Solar Panel Systems. In Proceedings of the Conference
Record of the IEEE Photovoltaic Specialists Conference, Fort Lauderdale, FL, USA, 20–25 June 2021; pp. 2575–2579. [CrossRef]
9. Han, S.H.; Rahim, T.; Shin, S.Y. Detection of faults in solar panels using deep learning. In Proceedings of the 2021 International
Conference on Electronics, Information, and Communication, ICEIC 2021, Jeju, Republic of Korea, 31 January–3 February 2021.
[CrossRef]
10. Espinosa, A.R.; Bressan, M.; Giraldo, L.F. Failure signature classification in solar photovoltaic plants using RGB images and
convolutional neural networks. Renew. Energy 2020, 162, 249–256. [CrossRef]
11. Acharya, A.K.; Sahu, P.K.; Jena, S.R. Deep neural network based approach for detection of defective solar cell. Mater Today Proc.
2021, 39, 2009–2014. [CrossRef]
Sensors 2023, 23, 6235 18 of 18

12. Rahman, M.R.; Tabassum, S.; Haque, E.; Nishat, M.M.; Faisal, F.; Hossain, E. CNN-based Deep Learning Approach for Micro-crack
Detection of Solar Panels. In Proceedings of the 2021 3rd International Conference on Sustainable Technologies for Industry 4.0,
STI 2021, Dhaka, Bangladesh, 18–19 December 2021. [CrossRef]
13. Zhang, N.; Shan, S.; Wei, H.; Zhang, K. Micro-cracks Detection of Polycrystalline Solar Cells with Transfer Learning. J. Phys. Conf.
Ser. 2020, 1651, 012118. [CrossRef]
14. Akram, M.W.; Li, G.; Jin, Y.; Chen, X.; Zhu, C.; Ahmad, A. Automatic detection of photovoltaic module defects in infrared images
with isolated and develop-model transfer deep learning. Sol. Energy 2020, 198, 175–186. [CrossRef]
15. Mathias, N.; Shaikh, F.; Thakur, C.; Shetty, S.; Dumane, P.; Chavan, D.S. Detection of Micro-Cracks in Electroluminescence Images
of Photovoltaic Modules. In Proceedings of the 3rd International Conference on Advances in Science & Technology (ICAST),
Padang, Indonesia, 24–25 October 2020. [CrossRef]
16. Winston, D.P.; Murugan, M.S.; Elavarasan, R.M.; Pugazhendhi, R.; Singh, O.J.; Murugesan, P.; Gurudhachanamoorthy, M.;
Hossain, E. Solar PV’s Micro Crack and Hotspots Detection Technique Using NN and SVM. IEEE Access 2021, 9, 127259–127269.
[CrossRef]
17. Singh, O.D.; Gupta, S.; Dora, S. Segmentation technique for the detection of Micro cracks in solar cell using support vector
machine. Multimed Tools Appl. 2023, 1–26. [CrossRef]
18. Xue, B.; Li, F.; Song, M.; Shang, X.; Cui, D.; Chu, J.; Dai, S. Crack Extraction for Polycrystalline Solar Panels. Energies 2021, 14, 374.
[CrossRef]
19. Chen, H.; Zhao, H.; Han, D.; Liu, K. Accurate and robust crack detection using steerable evidence filtering in electroluminescence
images of solar cells. Opt. Lasers Eng. 2019, 118, 22–33. [CrossRef]
20. Gao, M.; Song, P.; Wang, F.; Liu, J.; Mandelis, A.; Qi, D. A Novel Deep Convolutional Neural Network Based on ResNet-18 and
Transfer Learning for Detection of Wood Knot Defects. J. Sens. 2021, 2021, 1–16. [CrossRef]
21. Yap, X.Y.; Chia, K.S.; Tee, K.S. A Portable Gas Pressure Control and Data Acquisition System using Regression Models. Int. J.
Electr. Eng. Inform. 2021, 13, 242–251. [CrossRef]
22. Hussain, M.; Chen, T.; Titrenko, S.; Su, P.; Mahmud, M. A Gradient Guided Architecture Coupled With Filter Fused Representa-
tions for Micro-Crack Detection in Photovoltaic Cell Surfaces. IEEE Access 2022, 10, 58950–58964. [CrossRef]
23. Hussain, M.; Al-Aqrabi, H.; Munawar, M.; Hill, R.; Parkinson, S. Exudate Regeneration for Automated Exudate Detection in
Retinal Fundus Images. IEEE Access 2022, 1. [CrossRef]
24. Hussain, M.; Al-Aqrabi, H. Child Emotion Recognition via Custom Lightweight CNN Architecture. In Kids Cybersecurity
Using Computational Intelligence Techniques; Studies in Computational Intelligence; Yafooz, W.M.S., Al-Aqrabi, H., Al-Dhaqm, A.,
Emara, A., Eds.; Springer: Cham, Switzerland, 2023; Volume 1080. [CrossRef]
25. Hussain, M.; Al-Aqrabi, H.; Munawar, M.; Hill, R. Feature Mapping for Rice Leaf Defect Detection Based on a Custom
Convolutional Architecture. Foods 2022, 11, 3914. [CrossRef] [PubMed]
26. Hussain, M.; Al-Aqrabi, H.; Hill, R. Statistical Analysis and Development of an Ensemble-Based Machine Learning Model for
Photovoltaic Fault Detection. Energies 2022, 15, 5492. [CrossRef]
27. Alsboui, T.; Hill, R.; Al-Aqrabi, H.; Farid, H.M.A.; Riaz, M.; Iram, S.; Shakeel, H.M.; Hussain, M. A Dynamic Multi-Mobile
Agent Itinerary Planning Approach in Wireless Sensor Networks via Intuitionistic Fuzzy Set. Sensors 2022, 22, 8037. [CrossRef]
[PubMed]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
