Enhancing Weather Recognition Using Transfer Learning Approach
© 2023 The authors, published by JCIS. This is an Open Access article under the Creative Commons Attribution Non-Commercial 4.0 license.
1. Introduction
Weather recognition is the process of using technology, such as computer vision and machine learning, to identify and
predict weather patterns. It is important in our daily lives because it allows us to make informed decisions about things like
travel plans, clothing choices, and outdoor activities. It also helps in making important decisions for industries such as
agriculture, transportation and energy, which are greatly impacted by weather conditions. Accurate weather recognition can
also help with disaster preparedness and response, by providing early warning of severe weather events such as hurricanes,
tornadoes, and floods.
The information provided about the weather over a certain period of time is crucial for individuals as it impacts their daily
routines and actions. People tend to make decisions and plan their activities based on the current weather conditions. This
can include things like deciding to go for a bike ride, book a flight, or plan a vacation. Furthermore, the weather is also a factor
that is considered when planning business operations, transportation systems, sports events, and sightseeing tours. It's
essential to take into account the weather of the location where events are held.
Weather is specific to a particular area and is typically measured through human observations or sensors. However, the
local economy may suffer as a result of the high cost of camera sensors. With the advancement of technology, it is expected
that artificial intelligence (AI) will play a significant role in embedded systems, allowing for more accurate weather analysis
while reducing hardware costs. AI is becoming increasingly prevalent in people's lives and has made many tasks easier.
Currently, many large companies are utilizing AI in their technologies and continue to invest in its development. Deep
learning, a subset of AI, uses architectures with hidden layers to automatically extract features from images, making it a
powerful tool in weather forecasting.
There has been a significant amount of research on the subject of weather image classification using deep-learning
architectures. In these studies, various methods and techniques have been employed to achieve high classification
accuracy. For example, in one study, Zhao et al. used a combination of CNN and RNN models, as well as the LSTM method,
to classify five-class weather images. They were able to achieve a classification rate of 92.63%. Another study by Lu et al.
focused on classifying aerial images into two categories (sunny and cloudy) and used techniques such as shadow and
reflection contrast to extract features from the images. They trained a proposed CNN on the dataset and achieved a classification success rate of 98.6% with support vector machines (SVM) using data augmentation.
Elhoseiny et al. [1] used features from the fully connected layers of the AlexNet architecture to divide weather images into two classes. By classifying the features extracted from the final layer with the SoftMax function, they reached 91.1% accuracy. Guerra et al. [2] employed three categories of meteorological data in their investigation and
utilized the superpixel and augmentation approaches to distribute pixel seeds uniformly throughout each image. The
dataset was classified using the SVM approach after being subjected to numerous CNN model training iterations, and the
ResNet-50 model had the highest overall accuracy rate of 80.7%. Overall, these studies have demonstrated that deep
learning architectures, particularly CNNs, can be effectively used for weather image classification and have achieved high
classification success rates using various techniques such as image augmentation, superpixel method, and feature
extraction.
Weather recognition is significant in numerous practical applications, such as driver-assistance systems. Real-time weather information can increase road safety, for example by limiting vehicle speed and adjusting the intensity of different lights.
Consequently, a few studies concentrated on weather recognition using in-car cameras. A strategy called template matching
was suggested in [3] and [4] to recognize raindrops on the windscreen since they often serve as a strong indicator of rainy
conditions. Three categories of global features were developed in [3] to discriminate between overcast, sunny, and wet
weather. These features included the road information, the histogram of HSV color, and the gradient amplitude histogram.
Roser and Moosmann [5] proposed a method of dividing the entire image equally into thirteen parts and extracting different
histogram information from each region separately for rain detection. Numerous studies have also examined fog and haze in
addition to rainy conditions. Koschmieder's Law [6] was used in [7] to calculate the visibility in foggy conditions. In [8] and [9],
power spectra were first generated for a given image, and Gabor filters were then employed to extract features for fog
recognition. Based on edge responses, Bronte et al. [10] proposed using a Sobel filter to detect edges and infer the presence of fog. In addition, Gallen et al. [11] used the backscattered veil of light to specifically detect fog at night.
Several weather recognition studies focus on common outdoor photos; [12] estimated weather conditions using illumination calculations over several photos of a specific location. Numerous global features were investigated in [13] to identify weather conditions, including inflection point information, contrast, saturation, power spectral slope, and edge gradient energy. To improve weather category classification, Li et al. [14] combined several global features with SVM-based decision methods. In contrast to earlier efforts, Lu et al. [15] addressed the two-class weather classification problem using a variety of local cues, including the sky, shadow, and reflection. Zhang et al. made an
effort to address the problem of multi-class weather classification in [16] and [17] and used both global and local features
to do so. For the same two-class weather recognition problem, handcrafted features were combined with CNN features to achieve much better results.
Computers can now analyze satellite images to determine current weather conditions and make forecasts. This
information is easily accessible through the internet, but it is important to note that weather conditions can vary greatly in
different locations. In industries such as transportation, real-time weather classification is particularly useful. Self-driving
cars, for example, can use weather images to assist in making decisions such as activating wipers in rainy conditions.
However, classifying weather images can be challenging due to similarities between different weather conditions, such as
foggy and snowy or cloudy and rainy. Image classification is a technology that can help computers recognize weather
patterns based on images in real time. This technology can be used to develop Advanced Driver Assistance Systems (ADAS) and autonomous machines. A study [18] employed CNNs with the AlexNet and GoogleNet architectures to categorize the weather into four categories: cloudy, wet, snowy, or none of the above. GoogleNet's accuracy was 92.0% and AlexNet's
was 91.1%. The distribution of the training and test data was not described, and there is no information about how the dataset
was acquired. For the classification of weather images into the four categories of foggy, rainy, snowy, and sunny, Xia, et al.
[19] compared the performance of multiple CNN architectures, including AlexNet, VGG16, and GoogleNet. The best accuracy, 86.47%, was achieved by AlexNet. However, no information is available regarding the computation time of each architecture. CNN and transfer learning were utilized by [20] to classify weather images; however, that work used only binary label classification with the VGG16 architecture. The weather images carried two labels: With Rain (WR) and No Rain
(NR). The dataset for this research was gathered from Image2Weather and consisted of dash cam images from the Tokyo
Metropolitan Area. This study had an accuracy rate of 85.28%.
Transfer learning is a technique that addresses the fundamental issue of not having enough training data in machine
learning [21]. Rather than assuming that the training and testing data must be independent and identically distributed, knowledge is transferred from a source domain to a target domain. Transfer learning is used in computer vision to expedite the learning process and improve performance. A model is first pre-trained on several large image datasets and is then retrained (fine-tuned) on the target dataset. According to prior studies, no research has attempted to identify multiclass weather images utilizing a variety of CNN architectures with transfer learning. This study classifies weather images into six weather classes (cloudy, foggy, rainy, shiny, snowy, and sunrise) using several CNN architectures, including
VGG16, DenseNet201, MobileNetV2, and Xception. To develop a classification model with improved performance more
quickly, transfer learning is utilized. ImageNet served as the transfer learning dataset for this investigation. For this work to
be comparable to future research, the dataset was gathered from publicly accessible sources such as Kaggle and Camera
as Weather Sensor (CWS) [22]. Accuracy, precision, recall, and F1 are the performance criteria used to validate this study.
Młodzianowski [23] highlights the importance of weather recognition across various industries, including self-driving cars
and agriculture. The proposed solution is an image-based weather detection system that leverages transfer learning to
classify weather conditions using a small dataset. The paper presents three weather recognition models based on ResNet50,
MobileNetV2, and InceptionV3 architectures and compares their efficiencies. The study [24] evaluates using pre-trained
object detection models via transfer learning for tropical storm identification and classification. Compared to bespoke
models, it requires less data and offers broader class diversity, achieving accuracies ranging from 69% to 89%, showing
promise for efficient and effective storm analysis. The study [25] introduces a novel deep learning model, combining
Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs), for weather detection and multi-classification
based on image processing. Evaluated on a dataset of 10,000 images encompassing five weather conditions, the proposed
approach exhibits high accuracy and robustness, outperforming other state-of-the-art models with an overall accuracy of
97.24%. Additionally, a comprehensive parameter analysis underscores the model's efficiency, contributing to the
advancement of reliable weather detection systems for diverse applications.
In essence, this study has made several noteworthy contributions, which can be outlined as follows:
Enhanced Multiclass Weather Classification: This study extends the scope of weather classification research by
focusing on multiclass classification, encompassing four distinct weather categories, rather than limiting the analysis to
binary classification scenarios. This expansion provides a more comprehensive understanding of the complexities of
weather classification.
Application of Transfer Learning: A key contribution is the successful application of transfer learning techniques,
specifically leveraging pre-trained CNN models (MobileNetV2 and VGG19). This approach harnesses the collective
knowledge from extensive datasets and applies it to improve weather image classification, demonstrating the effectiveness
of transfer learning in this context.
Exploration of Data Preprocessing: This research also delves into the influence of data preprocessing on the
performance of weather classification models. This exploration highlights the importance of data preparation techniques in
enhancing the accuracy and reliability of weather classification systems.
The organization of this paper is as follows: Section 1 serves as an introduction and provides an overview of related
works. Section 2 provides an in-depth analysis of our proposed approach. Section 3 describes and analyzes the outcomes of the proposed method. Section 4 is dedicated to discussing the findings and their implications.
The goal of using this dataset is to train a model to accurately classify an image as one of the four classes based on the image's content.
Figure 2: Weather Classification Dataset: (a) Cloudy, (b) Rainy, (c) Shine, and (d) Sunrise
(https://www.kaggle.com/datasets/pratik2901/multiclass-weather-dataset)
B. Data Pre-Processing
For any image classification technique, image preparation is typically an important stage. In this work, preprocessing is carried out on the dataset's original images to make them ready for feeding into the neural network models. Consequently, the images from the gathered dataset are
preprocessed as follows:
Normalization: The image dataset is normalized by dividing each pixel value by
255. This scaling process brings all the pixel values to a range between 0 and 1. This is a standard
preprocessing step in image classification tasks as it ensures that the model is provided with data
that has a consistent scale, which can improve the model's performance.
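As a minimal sketch of this step (assuming the images are already loaded as an 8-bit NumPy array; the variable names are illustrative rather than taken from the paper):

import numpy as np

# Assume `images` holds N 8-bit RGB images with shape (N, H, W, 3).
images = np.random.randint(0, 256, size=(10, 128, 128, 3), dtype=np.uint8)

# Divide by 255 to scale every pixel from [0, 255] into [0.0, 1.0].
images_normalized = images.astype(np.float32) / 255.0
print(images_normalized.min(), images_normalized.max())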
Image Resizing: Resizing images is an important step in image processing as it can help reduce
the amount of memory and computational resources needed to work with the images. The goal is to
load all the images and check their sizes; if an image is smaller than 224x224 pixels, its name and size are printed so that we can decide whether to resize it. After checking the
size of the images, it was determined that the optimal size for the resized images is 128x128 pixels.
This means that all the images will be resized to have a width and height of 128 pixels.
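A sketch of this check-and-resize procedure, assuming the Pillow library and a hypothetical local dataset folder (the path and file pattern are illustrative assumptions):

from pathlib import Path
from PIL import Image

DATASET_DIR = Path("multiclass_weather_dataset")  # hypothetical local path

for image_path in DATASET_DIR.rglob("*.jpg"):
    img = Image.open(image_path)
    img.load()  # read the pixel data so the file can be overwritten safely
    # Report images smaller than 224x224, as described above.
    if img.width < 224 or img.height < 224:
        print(image_path.name, img.size)
    # Resize every image to 128x128 and save it back in place.
    img.convert("RGB").resize((128, 128)).save(image_path)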
Label Encoding: This process converts categorical labels, such as "cloudy", "rainy", "shiny", and
"sunrise" into numerical values comprehensible by machine learning models. We have employed
an integer encoding method, resulting in four labels: "cloudy" is assigned the value of 1, "rainy" is
assigned the value of 2, "shiny" is assigned the value of 3, and "sunrise" is assigned the value of 4.
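A minimal sketch of this mapping (the 1-based codes follow the assignment above; many frameworks expect 0-based indices, so a shift may be applied later):

# Map the four categorical labels to the numeric codes described above.
LABEL_TO_CODE = {"cloudy": 1, "rainy": 2, "shiny": 3, "sunrise": 4}

labels = ["cloudy", "sunrise", "rainy", "shiny"]  # example label list
encoded = [LABEL_TO_CODE[name] for name in labels]
print(encoded)  # -> [1, 4, 2, 3]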
Data Distribution: The final result is two sets of data: a training set, which comprises 70% of the
original data, and a test set, which comprises 30% of the original data. These two sets can be used
to train and evaluate a machine-learning model.
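A sketch of this split using scikit-learn; the random seed and class stratification are illustrative assumptions rather than settings stated in the paper:

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: 100 preprocessed 128x128 RGB images and encoded labels.
X = np.random.rand(100, 128, 128, 3).astype(np.float32)
y = np.random.randint(1, 5, size=100)

# Hold out 30% of the data as the test set, stratified by class.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)
print(X_train.shape, X_test.shape)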
C. Transfer Learning-Based Models
The transfer learning [26]-[29] method in machine learning entails utilizing a previously trained model for one
task as a foundation for building a model for a different task. The idea is to transfer the knowledge learned from
the first task to the second task. This is mainly useful when the second task has a limited volume of labelled
data. In our research, we used the VGG-19 and MobileNetV2 models, which are pre-trained models that have been trained on a large dataset. By using these pre-trained models as the foundation for our model, we were able to leverage the knowledge learned by these models on a large dataset and fine-tune the models on a multiclass weather dataset. This allows the model to quickly learn and adapt to the new dataset, as it is
already familiar with the general features of the images. This technique helps overcome the challenge of
insufficient labelled data and can enhance the model's performance.
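As a hedged sketch of this setup in Keras, using MobileNetV2 as a frozen ImageNet-pre-trained base with a small classification head (the head's layers are assumptions, not the paper's exact configuration):

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # cloudy, rainy, shiny, sunrise

# Load MobileNetV2 pre-trained on ImageNet, without its ImageNet classifier.
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the transferred features fixed

# Attach a small head that maps the transferred features to weather classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),  # dropout rate from the settings listed below
    layers.Dense(NUM_CLASSES, activation="softmax"),
])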
Hyperparameter   Value
TL database      ImageNet
Optimizer        Adam
Dropout          0.2
Epochs           100
Batch Size       64
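Continuing the sketch above (with the model from Section C and the split from Section B), training under these settings might look as follows; the loss function is an assumption, and the 1-based label codes are shifted to the 0-based indices it expects:

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

# Shift labels from the 1..4 codes to 0..3 before training.
history = model.fit(
    X_train, y_train - 1,
    validation_data=(X_test, y_test - 1),
    epochs=100,
    batch_size=64)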
A. Results Comparisons
The results show the performance of two different CNN models on a specific task. The models
are MobileNetV2 and VGG-19. Four metrics are used to assess the performance of each model:
Precision, Recall, F1-Score, and Accuracy.
MobileNetV2 has an accuracy of 94.65%, which is higher than that of VGG-19. The precision, recall, and F1-score of MobileNetV2 are all 94.5%. The VGG-19 model has the lower accuracy of 92.88%. The VGG-19 model has a precision of 92.5%, which is slightly lower than that of the MobileNetV2 model. The recall of the VGG-19 model is 92.75%, which is also slightly lower than that of MobileNetV2. The F1-score of the VGG-19 model is 92.5%; see Table II.
TABLE II: RESULT COMPARISON OF VGG-19 AND MOBILENETV2 MODELS
Model        Precision  Recall   F1-Score  Accuracy
VGG-19       92.5%      92.75%   92.5%     92.88%
MobileNetV2  94.5%      94.5%    94.5%     94.65%
The classification report for the MobileNetV2 model, as presented in Table III, showcases rounded
values for the weighted precision, recall, F1 score, and accuracy, all registering at 0.95.
TABLE III: CLASSIFICATION REPORT OF PROPOSED MOBILENETV2 BASED MODEL
Class    Precision  Recall  F1-Score  Support
Cloudy   0.94       0.88    0.91      91
B. Discussion
During the initial epoch, the training loss was 1.6160 and the training accuracy was 0.3676 for the
VGG-19 model. The validation loss was 0.8023 and the validation accuracy was 0.7537. This means
that the model's ability to accurately predict the correct class labels on the validation dataset was
75.37%. By the last epoch, the training loss was 0.0083, and the training accuracy was 1.0000. The
validation loss was 0.2157 and the validation accuracy was 0.9525. This suggests that the model's
ability to accurately predict the correct class labels on the validation dataset was 95.25%. The
model's accuracy improved and its loss decreased as it was trained for more epochs. The model's performance on the validation set improved from 75.37% to 95.25%; see Fig. 3.
Furthermore, during the initial epoch, the training loss was 1.5756 and the training accuracy was 0.2270 for the MobileNetV2
model. The validation loss was 1.2518 and the validation accuracy was 0.5163. This means that the
model's ability to accurately predict the correct class labels on the validation dataset was 51.63%.
By the last epoch, the training loss was 0.1094, and the training accuracy was 0.9773. The validation
loss was 0.2112 and the validation accuracy was 0.9258. This suggests that the model's ability to
accurately predict the correct class labels on the validation dataset was 92.58%, see Fig. 4.
Figure 4: Training Loss and Accuracy Graph of the MobileNetV2 Pretrained Model
A confusion matrix is a table format utilized to assess the efficacy of a classification algorithm.
The table contrasts the predicted classifications with the actual classifications and displays the
findings in a matrix form. The predicted classifications are represented by the matrix's rows, while
the actual classifications are represented by its columns. The matrix helps to identify the number of
false positives, true positives, false negatives, and true negatives which are used to compute several
performance metrics such as F1-score, accuracy, precision, and recall. It is called the "confusion
matrix" because it helps to understand how the model is confusing between different classes (see
Fig. 5 and Fig. 6).
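As a sketch (continuing the earlier listings), the matrix and the derived metrics can be computed with scikit-learn; note that scikit-learn's default convention places the actual labels on the rows and the predicted labels on the columns:

import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

CLASS_NAMES = ["cloudy", "rainy", "shiny", "sunrise"]

# Predicted class indices for the held-out test set.
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = y_test - 1  # back to 0-based indices

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=CLASS_NAMES))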
Figure 6: Confusion Matrix of MobileNetV2 Pretrained Model for Weather Recognition
4. Practical Implications
The research's practical implications are far-reaching, spanning various sectors and applications. By harnessing the
power of transfer learning and Convolutional Neural Networks (CNNs) for accurate weather classification, industries like
autonomous vehicles, intelligent transportation, agriculture, and energy production stand to benefit significantly. Real-time
and precise weather classification can enhance the safety and efficiency of autonomous vehicles, optimize traffic flow in
intelligent transportation systems, improve crop management in agriculture, and enhance energy generation from renewable
sources. Additionally, it can aid emergency response and disaster management, enable targeted marketing and advertising,
and contribute to environmental monitoring and research. Ultimately, this technology not only enhances operational
efficiency but also improves the quality of life and safety for individuals and communities, making it a valuable advancement
for a wide range of industries and applications.
5. Conclusion
In summary, the comparative analysis between MobileNetV2 and VGG-19 models in the context of weather recognition
demonstrates that MobileNetV2 outperforms VGG-19. We conducted an accuracy assessment of these models for weather
recognition, with VGG-19 achieving an accuracy rate of 92.88% and MobileNetV2 surpassing it with an impressive accuracy
of 94.65%. These results unequivocally indicate that MobileNetV2 stands out as the superior model, delivering the highest
level of accuracy when compared to VGG-19. As a result, this study strongly suggests that MobileNetV2 may serve as an
optimal choice for weather recognition tasks. This recommendation is based not only on its exceptional accuracy but also
on its adaptability to resource-constrained devices, making it an excellent candidate for real-world applications requiring
efficient and reliable weather classification.
6. References
1. M. Elhoseiny, S. Huang, and A. Elgammal, "Weather classification with deep convolutional neural networks," in 2015 IEEE
international conference on image processing (ICIP), 2015, pp. 3349-3353.
2. J. C. V. Guerra, Z. Khanam, S. Ehsan, R. Stolkin, and K. McDonald-Maier, "Weather Classification: A new multi-class
dataset, data augmentation approach and comprehensive evaluations of Convolutional Neural Networks," in 2018
NASA/ESA Conference on Adaptive Hardware and Systems (AHS), 2018, pp. 305-310.
3. X. Yan, Y. Luo, and X. Zheng, "Weather recognition based on images captured by vision system in vehicle," in Advances in
Neural Networks–ISNN 2009: 6th International Symposium on Neural Networks, ISNN 2009 Wuhan, China, May 26-29,
2009 Proceedings, Part III 6, 2009, pp. 390-398.
4. H. Kurihata, T. Takahashi, Y. Mekada, I. Ide, H. Murase, Y. Tamatsu, et al., "Raindrop detection from in-vehicle video
camera images for rainfall judgment," in First International Conference on Innovative Computing, Information and Control-
Volume I (ICICIC'06), 2006, pp. 544-547.
5. M. Roser and F. Moosmann, "Classification of weather situations on single color images," in 2008 IEEE intelligent vehicles
symposium, 2008, pp. 798-803.
6. W. E. K. Middleton, Vision through the atmosphere: Springer, 1957.
7. N. Hautiere, J.-P. Tarel, J. Lavenant, and D. Aubert, "Automatic fog detection and estimation of visibility distance through
use of an onboard camera," Machine vision and applications, vol. 17, pp. 8-20, 2006.
8. M. Pavlic, G. Rigoll, and S. Ilic, "Classification of images in fog and fog-free scenes for use in vehicles," in 2013 IEEE
Intelligent Vehicles Symposium (IV), 2013, pp. 481-486.
9. M. Pavlić, H. Belzner, G. Rigoll, and S. Ilić, "Image based fog detection in vehicles," in 2012 IEEE Intelligent Vehicles
Symposium, 2012, pp. 1132-1137.
10. S. Bronte, L. M. Bergasa, and P. F. Alcantarilla, "Fog detection system based on computer vision techniques," in 2009 12th
International IEEE conference on intelligent transportation systems, 2009, pp. 1-6.
11. R. Gallen, A. Cord, N. Hautière, and D. Aubert, "Towards night fog detection through use of in-vehicle multipurpose
cameras," in 2011 IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 399-404.
12. L. Shen and P. Tan, "Photometric stereo and weather estimation using internet images," in 2009 IEEE Conference on
Computer Vision and Pattern Recognition, 2009, pp. 1850-1857.
13. H. Song, Y. Chen, and Y. Gao, "Weather condition recognition based on feature extraction and K-NN," in Foundations and
Practical Applications of Cognitive Systems and Information Processing: Proceedings of the First International Conference
on Cognitive Systems and Information Processing, Beijing, China, Dec 2012 (CSIP2012), 2014, pp. 199-210.
14. Q. Li, Y. Kong, and S.-m. Xia, "A method of weather recognition based on outdoor images," in 2014 International Conference
on Computer Vision Theory and Applications (VISAPP), 2014, pp. 510-516.
15. C. Lu, D. Lin, J. Jia, and C.-K. Tang, "Two-class weather classification," in Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, 2014, pp. 3718-3725.
16. Z. Zhang and H. Ma, "Multi-class weather classification on single images," in 2015 IEEE International Conference on Image
Processing (ICIP), 2015, pp. 4396-4400.
17. Z. Zhang, H. Ma, H. Fu, and C. Zhang, "Scene-free multi-class weather classification on single images," Neurocomputing,
vol. 207, pp. 365-373, 2016.
18. L.-W. Kang, K.-L. Chou, and R.-H. Fu, "Deep learning-based weather image recognition," in 2018 International Symposium
on Computer, Consumer and Control (IS3C), 2018, pp. 384-387.
19. J. Xia, D. Xuan, L. Tan, and L. Xing, "ResNet15: weather recognition on traffic road with deep convolutional neural
network," Advances in Meteorology, vol. 2020, pp. 1-11, 2020.
20. N. M. Notarangelo, K. Hirano, R. Albano, and A. Sole, "Transfer learning with convolutional neural networks for rainfall
detection in single images," Water, vol. 13, p. 588, 2021.
21. C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, "A survey on deep transfer learning," in Artificial Neural Networks
and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October
4-7, 2018, Proceedings, Part III 27, 2018, pp. 270-279.
22. W.-T. Chu, X.-Y. Zheng, and D.-S. Ding, "Camera as weather sensor: Estimating weather information from single images,"
Journal of Visual Communication and Image Representation, vol. 46, pp. 233-249, 2017.
23. P. Młodzianowski, "Weather Classification with Transfer Learning-InceptionV3, MobileNetV2 and ResNet50," in Digital
Interaction and Machine Intelligence: Proceedings of MIDI’2021–9th Machine Intelligence and Digital Interaction
Conference, December 9-10, 2021, Warsaw, Poland, 2022, pp. 3-11.
24. J. Senior-Williams, F. Hogervorst, E. Platen, A. Kuijt, J. Onderwaater, R. Tervo, et al., "The Classification of Tropical Storm Systems in Infrared Geostationary Weather Satellite Images Using Transfer Learning," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 5234-5244, 2024.
25. V. Kukreja, R. Sharma, and R. Yadav, "Multi-Weather Classification using Deep Learning: A CNN-SVM Amalgamated Approach," in 2023 World Conference on Communication & Computing (WCONF), 2023, pp. 1-5.
26. M. A. Arshed, A. Shahzad, K. Arshad, D. Karim, S. Mumtaz, and M. Tanveer, “Multiclass Brain Tumor Classification from
MRI Images using Pre-Trained CNN Model,” VFAST Trans. Softw. Eng., vol. 10, no. 4, pp. 22–28, Nov. 2022, doi:
10.21015/VTSE.V10I4.1182.
27. H. Younis, M. Asad Arshed, F. ul Hassan, M. Khurshid, H. Ghassan, and M. Haseeb, "Tomato Disease Classification using
Fine-Tuned Convolutional Neural Network,” Int. J. Innov. Sci. Technol., vol. 4, no. 1, pp. 123–134, Feb. 2022, doi:
10.33411/IJIST/2022040109.
28. A. Shahzad, M. A. Arshed, F. Liaquat, M. Tanveer, M. Hussain, and R. Alamdar, “Pneumonia Classification from Chest X-
ray Images Using Pre-Trained Network Architectures,” VAWKUM Trans. Comput. Sci., vol. 10, no. 2, pp. 34–44, Dec.
2022, doi: 10.21015/VTCS.V10I2.1271.
29. M. A. Arshed, H. Ghassan, M. Hussain, M. Hassan, A. Kanwal, and R. Fayyaz, “A Light Weight Deep Learning Model for
Real World Plant Identification,” 2022 2nd Int. Conf. Distrib. Comput. High Perform. Comput. DCHPC 2022, pp. 40–45,
2022, doi: 10.1109/DCHPC55044.2022.9731841.