X-DenseNet Deep Learning For Garbage Classification
Abstract. To solve the problem of garbage classification effectively, this paper designs a
garbage classification model based on a deep convolutional neural network. Taking the Xception
network as the base structure and combining it with the ideas of dense connection and
multi-scale feature fusion from DenseNet, X-DenseNet is constructed to classify garbage
images obtained by visual sensors. The experiments follow the pipeline of obtaining the
dataset, preprocessing the data, building the X-DenseNet model, and training and testing the
model; the accuracy of the model on the test set reaches 94.1%, exceeding several classic
classification networks. The X-DenseNet automatic garbage classification model based on
visual images proposed in this paper can effectively reduce manual effort and improve the
garbage recovery rate, and has significant scientific and application value.
1. Introduction
With economic development, the total amount of garbage in the world is increasing year by year,
making garbage classification necessary. Automatic classification, which saves a great deal of
manpower, is becoming a development trend in garbage sorting. However, traditional vision-based
automatic garbage classification systems rely on simple hand-crafted features, and their
generalization performance is unsatisfactory; many problems remain in the automatic
classification of complex and diverse garbage.
In recent years, deep learning has developed explosively. Compared with traditional visual
feature extraction methods, its advantage is that it does not require features to be selected or
designed in advance; instead, the model learns them from large-scale data. Deep learning
therefore has stronger learning ability and adaptability.
This paper proposes a visual sensor-based method, built on deep learning, to achieve
automatic garbage classification, which greatly improves the garbage collection rate.
2. Related Work
Among traditional methods, Zhan Xiang et al. [1] used an Arduino UNO R3 as the main control
board and a color recognition module to realize the classification function. In this method,
however, the recognized features are relatively simple and the recognition targets are limited, so
it is difficult to perform efficient automatic classification of complex garbage.
Among deep learning methods, Ning Kai et al. [2] used YOLOv2 as the main network module and
embedded deep dense modules to design a smart sweeping robot that divides garbage into 25
subcategories, including toilet paper, cans, milk, etc., according to shape and volume. Chen
Yuchao et al. [3] used OpenCV to implement a background-difference algorithm, cut the object to
be detected from the picture, and then used MobileNet to classify the image into syringes,
hemostatic forceps, infusion bags, and gloves.
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd
ISAI 2020 IOP Publishing
Journal of Physics: Conference Series 1575 (2020) 012139 doi:10.1088/1742-6596/1575/1/012139
In summary, garbage classification methods based on deep learning have clear advantages.
Applying deep learning to garbage classification is therefore of practical significance and
research value for solving the classification problem and improving resource utilization.
3. Task Definition
This paper studies the problem of automatic classification of different types of garbage images in the
case of using a color camera to obtain color images in the actual garbage classification scenario. This
paper expects to generate the object category directly from the color image, and defines the function
M as the mapping relationship between the input image I and the output category C:
M(I) = C (1)
This paper uses a deep convolutional neural network to approximate the complex function M:
I → C. Let Mθ represent the neural network, where θ is the weight of the network.
So that Mθ(I) = C ≈ M(I), the training set input IT and the corresponding output CT can be
used to learn θ by minimizing the cross-entropy loss function L:

θ = argmin θ L(CT, Mθ(IT)) (2)
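The loss in Eq. (2) can be illustrated with a minimal numerical sketch. This is not the paper's training code; the six-way logits, the class order, and the single-sample setup are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift by max for numerical stability
    return e / e.sum()

def cross_entropy(probs, label):
    """L(CT, Mθ(IT)) for one sample: negative log-probability of the true class."""
    return -np.log(probs[label])

# Hypothetical 6-class logits for one garbage image
# (assumed class order: cardboard, glass, metal, paper, plastic, other).
logits = np.array([2.0, 0.5, 0.1, 0.3, 0.2, 0.1])
probs = softmax(logits)
loss = cross_entropy(probs, label=0)  # suppose the true class is cardboard
print(round(float(loss), 4))
```

Minimizing this quantity over all training pairs (IT, CT) with gradient descent yields the weights θ.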
4. X-DenseNet Network
First of all, the Xception [4] network structure is used as the basic structure. With higher
accuracy and relatively fewer parameters, Xception is a further improvement on InceptionV3 [5],
and its ResNet-style [6] residual connections also speed up the convergence of the
network.
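Xception's parameter efficiency comes from replacing standard convolutions with depthwise separable ones. A short arithmetic sketch (illustrative channel counts, biases omitted) shows the saving:

```python
# Parameter count of a standard 3x3 convolution versus the depthwise
# separable factorization used in Xception (depthwise 3x3 + pointwise 1x1).
def standard_conv_params(c_in, c_out, k=3):
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k=3):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out = 128, 256  # assumed channel sizes for illustration
print(standard_conv_params(c_in, c_out))   # 294912
print(separable_conv_params(c_in, c_out))  # 33920
```

For these channel sizes the separable form needs roughly 11% of the parameters of a standard convolution, which is why the full network stays comparatively small.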
In order to make full use of image information features, this paper combines the idea of dense
connection and multi-scale feature fusion in DenseNet [7] in the network structure. Among them,
Dense Block is one of the main components of DenseNet. In a Dense Block, any layer is directly
connected to all subsequent layers. Therefore, layer l receives the feature maps of all
previous layers, x0, …, xl-1, as its input:

xl = Hl([x0, x1, …, xl-1]) (3)

where [x0, x1, …, xl-1] denotes the concatenation of the feature maps generated in layers 0
through l − 1.
A Dense Block does not need to re-learn redundant feature maps. In addition, each layer can
directly access the gradient from the loss function to the original input signal, which
effectively alleviates the problem of vanishing gradients. The structure of the Dense Block
adopted in this paper is shown in Figure 1.
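The channel bookkeeping implied by Eq. (3) can be sketched as follows. This is not the paper's implementation: the growth rate, input channel count, and the random stand-in for Hl are assumptions for illustration only.

```python
import numpy as np

# Dense connectivity sketch: layer l takes the concatenation
# [x_0, ..., x_{l-1}] along the channel axis as input, and each H_l
# contributes a fixed number of new channels (the growth rate).
growth_rate = 12                 # channels produced by each H_l (assumed)
h, w = 8, 8                      # spatial size of a feature map
x0 = np.random.rand(16, h, w)    # initial input with 16 channels (assumed)

features = [x0]
for l in range(1, 5):
    inp = np.concatenate(features, axis=0)   # [x_0, ..., x_{l-1}]
    print(f"layer {l} input channels: {inp.shape[0]}")
    # Stand-in for H_l (BN-ReLU-Conv in DenseNet): a random map to
    # `growth_rate` channels, just to show how the input width grows.
    x_l = np.random.rand(growth_rate, h, w)
    features.append(x_l)
```

The input width grows linearly (16, 28, 40, 52 channels here), so every layer reuses all earlier features instead of recomputing them.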
5. Experiment Preparation
6. Experimental Results
The prediction and label are shown under each picture. The categories corresponding to the
values in the list are cardboard, glass, metal, paper, plastic, and other trash.
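Mapping the model's output list to a category name is a simple argmax over that fixed order. A small sketch (the scores below are made-up values, not model output):

```python
# Fixed category order matching the output list described above.
CATEGORIES = ["cardboard", "glass", "metal", "paper", "plastic", "other trash"]

def predict_label(scores):
    """Return the category whose score is largest."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return CATEGORIES[best]

print(predict_label([0.05, 0.02, 0.80, 0.06, 0.04, 0.03]))  # metal
```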
7. Conclusion
Considering the low efficiency of manual garbage classification and the poor generalization
performance of traditional garbage classification algorithms, this paper designs a garbage
classification model, X-DenseNet, based on a deep convolutional neural network. By classifying
domestic garbage images obtained by visual sensors, it can effectively sort garbage
automatically, dividing it into six categories: cardboard, glass, metal, paper, plastic, and
other garbage. Its recognition accuracy is higher than that of other advanced image
classification networks.
8. References
[1] Zhan Xiang, Xue Peng, Guo Huanping. Intelligent sorting trash can based on Arduino
[J]. Electronic World, 2020 (04): 160-161.
[2] Ning Kai, Zhang Dongbo, Yin Feng, et al. Rubbish detection and classification of intelligent
sweeping robot based on visual perception [J]. Journal of Image and Graphics, 2019, 24 (8):
1358-1368.
[3] Chen Yuchao, Bian Xiaoxiao. Medical garbage classification system based on machine vision
and deep learning [J]. Computer Programming Skills and Maintenance, 2019, (5): 108-110. DOI:
10.3969/j.issn.1006-4052.2019.05.042.
[4] Chollet F. Xception: Deep Learning with Depthwise Separable Convolutions [C]// 2017
IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017.
[5] Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception
architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[6] He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition [C]// IEEE
Conference on Computer Vision & Pattern Recognition. IEEE Computer Society, 2016.
[7] Huang G, Liu Z, Maaten L V D, et al. Densely Connected Convolutional Networks [C]// CVPR.
IEEE Computer Society, 2017.
[8] Krizhevsky A, Sutskever I, Hinton G. ImageNet Classification with Deep Convolutional
Neural Networks [C]// NIPS. Curran Associates Inc., 2012.
[9] Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale
Image Recognition [J]. Computer Science, 2014.
[10] garythung, "garythung/trashnet," GitHub. [Online].
Available: https://github.com/garythung/trashnet.