
Hindawi

Journal of Sensors
Volume 2019, Article ID 3175848, 12 pages
https://doi.org/10.1155/2019/3175848

Research Article
Fuzzy Classification of the Maturity of the Tomato Using a
Vision System

Marcos J. Villaseñor-Aguilar,1 J. Enrique Botello-Álvarez,1 F. Javier Pérez-Pinal,1 Miroslava Cano-Lara,2 M. Fabiola León-Galván,3 Micael-G. Bravo-Sánchez,1 and Alejandro I. Barranco-Gutierrez1,4

1 Instituto Tecnológico de Celaya, Celaya 38010, Mexico
2 Departamento de Mecatrónica del ITESI, Irapuato 36698, Mexico
3 Departamento de Alimentos, Universidad de Guanajuato, Mexico
4 Cátedras Conacyt, Mexico

Correspondence should be addressed to Marcos J. Villaseñor-Aguilar; [email protected]

Received 29 December 2018; Revised 5 March 2019; Accepted 14 March 2019; Published 4 July 2019

Guest Editor: Jesus R. Millan-Almaraz

Copyright © 2019 Marcos J. Villaseñor-Aguilar et al. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work
is properly cited.

Artificial vision systems (AVS) have become very important in precision agriculture, applied to produce high-quality, low-cost foods with high functional characteristics generated through environmental care practices. This article reports the design and implementation of a new fuzzy classification architecture based on the RGB color model with descriptors. Three inputs were used, associated with the average values of the color components of four views of the tomato; the number of triangular membership functions associated with the R and B components was three, and four for the G component. Forty tomato samples were used in training and twenty for testing; the training was done using the Matlab© ANFISEDIT tool. The tomato samples were divided into six categories according to the US Department of Agriculture (USDA). This study focused on optimizing the descriptors of the color space to achieve high precision in the prediction results of the final classification task, with an error of 536.995 × 10⁻⁶. The Computer Vision System (CVS) is integrated by an image isolation system with lighting; the image capture system uses a Raspberry Pi 3 and the Camera Module Raspberry Pi 2 at a fixed distance and with a black background. In the implementation of the CVS, three different color description methods for tomato classification were analyzed and their respective fuzzy systems were designed, two of them using descriptors described in the literature.

1. Introduction

Tomato is one of the main vegetables consumed by humans for its antioxidant content, vitamins (A, B1, B2, B6, C, and E), and minerals such as potassium, magnesium, manganese, zinc, copper, sodium, iron, and calcium [1]. This fruit provides health benefits in the prevention of chronic diseases such as cancer, osteoporosis, and cataracts. One of the main indicators of the internal composition of the tomato is its degree of maturity. This characteristic is very important to determine the logistic processes of harvest, transport, commercialization, and food consumption. In this respect, the Department of Agriculture of the United States (USDA) establishes six states of maturity: Green, Breaker, Turning, Pink, Light red, and Red [2]; these are shown in Figure 1.

In the literature, there is research on artificial vision reporting methodologies to estimate the maturity states of the tomato using color as the main characteristic. Tomato maturity estimation models have been proposed based on different color space models. For example, the L∗a∗b∗ model allows identifying the six stages of maturity of the tomato using the Minolta a∗/b∗ ratio [3–5]. Reference [6] conducted a study of the firmness and color of the tomato; they reported that the recommended firmness for commercialization was 1.46 N mm⁻¹; they also determined that in
Figure 1: Maturity classification according to the US Department of Agriculture (USDA 2018).

the stage of pink maturity of the tomato, the Minolta a∗/b∗ values change from negative to positive magnitude. When the Minolta a∗/b∗ ratio of the tomatoes reaches 0.6–0.95, they can be easily marketed. On the other hand, [4] estimated the lycopene content in the different stages of maturity of the tomato by means of the foliar area and the color parameters (L∗, a∗, b∗, and hue). This model was built using an artificial neural network (ANN).

On the other hand, the use of the RGB color model has allowed the identification of the maturity of the tomato. As reported by [7], who proposed a methodology to identify red tomatoes for automatic cutting by a robot, RGB images were analyzed using the relationship between the red-blue (RB) and red-green (RG) components, which allowed formulating the inequalities B ≤ 0.8972R and G ≤ 0.8972R; when these conditions are met, the fruit can be harvested. A similar investigation was carried out by [8], where they compared RGB images with hyperspectral images (in the range of 396–736 nm with a spectral resolution of 1.3 nm using a spectrograph). A linear discriminant analysis was applied to both groups for the classification of the tomatoes in five stages of maturity, which was weighted by a majority-vote strategy over the individual pixels. The authors document that hyperspectral images were more discriminant than RGB in tomato maturity analysis.

In 2018, [9] developed a maturity classification system for tomatoes; the system used two types of tomatoes: with defects and without defects. For the fruit classification, an artificial backpropagation neural network (BPNN) implemented in Matlab© was used. This system identified the degrees of maturity red, orange, turning, and green. The architecture of the neural network had thirteen inputs, associated with six color functions and seven shape functions, twenty neurons in the hidden layers, and one in the output. Reference [10] proposed a method using a BPNN to detect maturity levels (green, orange, and red) of tomatoes of the Roma and Pera varieties. The color characteristics were extracted from five concentric circles of the fruit, and the average shade values of each subregion were used to predict the level of maturity of the samples; these values were the inputs of the BPNN. The average precision in detecting the three maturity levels of the tomato samples with this method was 99.31%, with a standard deviation of 1.2%. Reference [11] implemented a classification system based on convolutional neural networks (CNN). The proposed classification architecture was composed of three stages; the first stage managed color images of three channels, 200 pixels in height and width. In the second part, five CNN layers extracted the main characteristics. The convolution kernels are of sizes 9 × 9, 5 × 5, and 3 × 3 in order to conserve characteristics, reduce unnecessary parameters, and improve the speed of calculations. In addition, there are two max-pooling layers among the CNN layers. The last part performs the classification of results with a fully connected layer. The experimental results showed an average accuracy of 91.9% with a prediction time of less than 0.01 s. Another research effort was that of [3], who proposed an algorithm in Matlab© Simulink that employed a 4-megapixel camera, with a resolution of 640 × 480 and a frame rate of 30, for the capture of the images; the images received a processing that consisted of erosion and dilation. The classification and identification of the maturity of the tomato used the red chroma of the YCbCr color model, which was between 135 and 180. Reference [6] developed a maturity classification system for cherry tomatoes based on artificial vision; in this proposal, they used color, texture, and shape with K-nearest-neighbor and support vector machine classifiers to classify the ripened tomatoes.

Currently, with Computer Vision Systems (CVS) and Fuzzy Logic (FL), applications of maturity classification of tomatoes, guavas, apples, mangoes, and watermelons have been developed [12]. FL is an artificial intelligence technique that models human reasoning from the linguistics of an expert to solve a problem. Therefore, the logical processing of the variables is qualitative, based on quantitative membership functions [13]. References [14, 4] argue that the classification of the maturity of the elements of study is composed of two systems: the identification of color and its labeling. For color representation, they used image histograms based on the RGB, HSI, and CIELab color space models; for the automatic labeling of the fruits, they designed a fuzzy system that handled the knowledge base transferred by an expert. On the other hand, the proposal made by [15] estimated the level of maturity in apples using the RGB color space; their methodology used four images of different views of the fruit. They proposed four maturity classes, based on a fuzzy system, defined as mature, low mature, near to mature, or too mature. The inputs of the fuzzy system were the average values of each color map of the segmented images. Reference [13] developed an image classification system for apple, sweet lime, banana, guava, and orange; the system was implemented in Matlab©. The characteristics extracted from each fruit image were the area and the major and minor axes of each sample; these were used as inputs to the fuzzy system for classification. Another similar study was reported by [16], which
Table 1: Tomato sample division used for the training and detection sets.

Maturity level     Training   Test
Green (G)              3        2
Breaker (B)            3        2
Turning (T)            2        4
Pink (P)              11        5
Light red (LR)         8        5
Red (R)               13        2

implemented a fuzzy system to classify guavas in the maturity stages raw, ripe, and overripe. The proposed classification was based on the apparent color, and it considered three inputs: hue value, saturation, and luminosity.

Following this trend, this paper reports the behavior of tomato maturity based on color in the RGB model, which is the model most commercial digital cameras work with, because they are mostly built with an optical Bayer filter on the photosensors. A fuzzy system was used in the classification stage. The main contribution of this work focuses on the comparison of color models for the description of tomato maturity stages. In addition, a Raspberry Pi was used for the capture and estimation of the output variables.

2. Materials and Methods

2.1. Sample Preparation. In the proposed method, sixty tomato samples were used (acquired in a local trade) and classified in six stages of maturity (Green, Breaker, Turning, Pink, Light red, and Red). The classification was based on the criteria of the United States Department of Agriculture, USDA (1997). The samples were divided into two groups, the training and validation sets, as shown in Table 1.

2.2. Artificial Vision System. Artificial vision systems (AVS) are intended to emulate the functionality of human vision to describe elements in captured images. Some AVS advantages compared with other proposals are a reduction of cost, improvement of accuracy, increase of precision, and good reliability estimation [14]. Figure 2 shows the AVS, which is integrated by three sections: (a) the image capture, (b) the lighting subsystem, and (c) the processing subsystem. The first one obtains spatial information and fruit characteristics, the second one maintains the experimental conditions, and the third one obtains several characteristics such as equalizing histograms, highlighting edges, segmenting, labeling components, and tomato maturity [17–19].

Figure 2: Elements of the artificial vision system (AVS): (a) subsystem of capture of images, (b) subsystem of illumination, and (c) image processing subsystem.

The images were acquired with the AVS, which was installed in a black box of dimensions 38 cm × 38 cm × 43 cm to prevent the influence of external lighting, as shown in Figure 2. The AVS was integrated by the Raspberry Pi 3 camera (8 megapixels) placed vertically at 30 cm from the sample at an angle of 28.8°. The lighting had a ring geometry [20] with a power of 5.4 W and a diameter of 23 cm, and it was placed 30 cm above the samples, where the intensity was 200 lux. The processing subsystem was implemented on a Raspberry Pi 3 card that features a quad-core 1.2 GHz Broadcom BCM2837 64-bit processor with 1 GB of RAM. This device has the flexibility to be used in the solution of versatile problems [21].

The proposed system is shown in Figure 3; in the first stage, the RGB images of the samples were acquired. After that, the images were segmented to create a vector with the averages of the red, green, and blue components, which worked as the input to the fuzzy system.

2.3. Image Acquisition. Four images of each fruit were acquired, one per view of the tomato, obtaining a total of 240 images corresponding to 60 fruits. Figure 4 shows the four views of a sample in the green maturity state. The captured images have a resolution of (1050 × 1680 × 3); they were scaled to a size of (600 × 900 × 3) with an intensity of 200 lux.

2.4. Image Segmentation. Figures 5(a)–5(c) show the process performed on the samples using Python 3.7 and OpenCV. The first step captured the images and assigned a maturity level. The second step binarized them in HSV space by using 100 <= H <= 156, 90 <= S <= 255, and 0 <= V <= 255, with each channel ranging between 0 and 255. The fourth step was to segment each tomato image and label it by using a connected-components algorithm. The fifth step was to discard image segments under 500 pixels; finally, the respective masks were used to obtain the areas of interest for each sample.

2.5. Attribute Selection. The attributes were selected based on the methodology proposed by [15]. The mean channel values of the segmented images were used, and it was also considered that in the initial stages of maturity, the studied tomatoes had a high green content and their red content was very low. As the fruit reached full maturity, the behavior was inverse [14]. The segment mean behavior was mapped by using the image channels of the 40 training samples. It also used the RGB color model, CIELab 1976, and the Minolta a∗/b∗ ratio, as shown in Figures 6, 7, and 8. By using the process previously described, the identification of the six
Figure 3: Workflow of the proposed system.

Figure 4: Sample views in four different directions.

Figure 5: Segmentation of one sample: (a) capture and scaling of the sample image, (b) binarization of the image and noise removal, and (c) image segmentation by means of the minor area discrimination.
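The segmentation steps of Section 2.4 (binarization by HSV thresholds, connected-component labeling, and discarding of segments under 500 pixels) can be sketched in plain Python. The paper uses OpenCV for this, so the helpers below are only an illustrative stand-in; function names are not from the paper:

```python
from collections import deque

# HSV bounds used in Section 2.4 (all channels expressed in the 0-255 range)
def is_foreground(h, s, v):
    return 100 <= h <= 156 and 90 <= s <= 255 and 0 <= v <= 255

def connected_components(mask, min_area):
    """Label 4-connected foreground regions and keep those with
    area >= min_area (the paper discards segments under 500 pixels)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    kept = []
    for y in range(rows):
        for x in range(cols):
            if mask[y][x] and not seen[y][x]:
                seen[y][x] = True
                queue, pixels = deque([(y, x)]), []
                while queue:  # breadth-first flood fill of one component
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    kept.append(pixels)
    return kept
```

With the real images, the masks of the kept components are then used to average the R, G, and B values that feed the fuzzy system.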

tomato maturity stages was possible, basically due to the direct relationship between the orthogonality of the axes and the data classes.

2.6. Fuzzification. In this stage, fuzzification had the main purpose of translating the input values into linguistic variables [22]. In the proposed system, a vector created by the average values of the RGB components is used as the input variable. The input fuzzification was done using triangular membership functions, as shown in Figure 9. These functions were selected for their easy hardware implementation.

It is well known that in the first three maturity stages, a greater sensitivity is required to identify the changes compared with the rest of them. Therefore, in this paper, the membership function related to the green variable entries consisted of four sections. On the other hand, three membership functions were proposed for the blue and red cases, which resulted in six maturity states. Finally, the range value for the most significant input and output stage was determined by selecting the linguistic states for each variable, i.e., very, medium, and less.

2.7. Fuzzy System Implementation. The fuzzy system was implemented with the Matlab ANFISEDIT tool, with image capture using the Raspberry Pi camera; a data set was formed by the means of the RGB channels of each image, and the output was the label of the sample.

Four variants of the fuzzy system were designed to classify the state of maturity of the tomato. In these, several
Figure 6: Mapping of the means of the segments of the RGB channels of the training set.

Figure 7: Mapping of the means of the segments of the channel CIELab 1976 of the training set.

parameters were maintained, which were the inputs of the system, the number of training epochs, and the type of membership functions. Table 2 shows the architectures used for each fuzzy system and the error obtained after training, where it can be seen that the designs that presented the least errors were Models 3 and 4. The selected membership function is triangular because of its easy implementation.

The programming was carried out using the methodology proposed by [23]. The description of each function is given below, where the variables are LR (Low Red), MR (Middle Red), HR (High Red), LG (Low Green), MLG (Medium Low Green), MHG (Medium High Green), HG (High Green), LB (Low Blue), MB (Middle Blue), and HB (High Blue).

\[
\mathrm{LR} = \begin{cases} \dfrac{16.48 - R}{16.48}, & 0 < R \le 16.48, \\[4pt] 0, & 16.48 < R \le 34.73, \end{cases} \tag{1}
\]
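Membership functions of this form, together with the weighted-average defuzzification of equation (11), can be sketched in code. In this sketch the rising branch of the middle function is reconstructed as R/16.48 so that the three functions form a standard triangular partition over the red range; function names and the sample rule values are illustrative, not from the paper:

```python
def low_red(r):
    """Equation (1): shoulder falling from 1 at R=0 to 0 at R=16.48."""
    return (16.48 - r) / 16.48 if 0 < r <= 16.48 else 0.0

def middle_red(r):
    """Triangular function peaking at R=16.48 and reaching 0 at R=34.73."""
    if 0 < r <= 16.48:
        return r / 16.48
    if 16.48 < r <= 34.73:
        return (34.73 - r) / 18.25
    return 0.0

def high_red(r):
    """Shoulder rising from 0 at R=16.48 to 1 at R=34.73."""
    return (r - 16.48) / 18.25 if 16.48 < r <= 34.73 else 0.0

def sugeno_output(weights, outputs):
    """Equation (11): Takagi-Sugeno weighted average of the rule outputs."""
    return sum(w * z for w, z in zip(weights, outputs)) / sum(weights)
```

For example, a mean red value of 8.24 fires both low_red and middle_red with degree 0.5, and the crisp output is the firing-strength-weighted mean of the rule outputs.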
Figure 8: Mapping of the channel segment mean CIELab 1976 of the training set using the Minolta (a∗/b∗) relation.

\[
\mathrm{MR} = \begin{cases} \dfrac{R}{16.48}, & 0 < R \le 16.48, \\[4pt] \dfrac{34.73 - R}{18.25}, & 16.48 < R \le 34.73, \end{cases} \tag{2}
\]

\[
\mathrm{HR} = \begin{cases} 0, & 0 < R \le 16.48, \\[4pt] \dfrac{R - 16.48}{18.25}, & 16.48 < R \le 34.73, \end{cases} \tag{3}
\]

\[
\mathrm{LG} = \begin{cases} \dfrac{16.76 - G}{16.76}, & 0 < G \le 16.76, \\[4pt] 0, & 16.76 < G \le 23.91, \end{cases} \tag{4}
\]

\[
\mathrm{LMG} = \begin{cases} \dfrac{G}{16.76}, & 0 < G \le 16.76, \\[4pt] \dfrac{20.32 - G}{3.56}, & 16.76 < G \le 20.32, \\[4pt] 0, & 20.32 < G \le 23.9, \end{cases} \tag{5}
\]

\[
\mathrm{HMG} = \begin{cases} 0, & 0 < G \le 16.76, \\[4pt] \dfrac{G - 16.76}{3.56}, & 16.76 < G \le 20.32, \\[4pt] \dfrac{23.9 - G}{3.57}, & 20.32 < G \le 23.9, \end{cases} \tag{6}
\]

\[
\mathrm{HG} = \begin{cases} 0, & 0 < G \le 20.32, \\[4pt] \dfrac{G - 20.32}{7.57}, & 20.32 < G \le 27.9, \end{cases} \tag{7}
\]

\[
\mathrm{LB} = \begin{cases} \dfrac{16.48 - B}{16.48}, & 0 < B \le 16.48, \\[4pt] 0, & 16.48 < B \le 45.61, \end{cases} \tag{8}
\]

\[
\mathrm{MB} = \begin{cases} \dfrac{B}{16.8}, & 0 < B \le 16.8, \\[4pt] \dfrac{45.61 - B}{28.81}, & 16.8 < B \le 45.61, \end{cases} \tag{9}
\]

\[
\mathrm{HB} = \begin{cases} 0, & 0 < B \le 16.8, \\[4pt] \dfrac{B - 16.8}{28.81}, & 16.8 < B \le 45.61. \end{cases} \tag{10}
\]

2.8. Inferential Logic. The inferential logic was determined by identifying the ranges of the maximum and minimum averages of the RGB components of the training set images. Table 3 shows the maximum and minimum averages of each maturity state according to the USDA. By using this procedure, it was possible to determine a set of 36 rules that were used in the fuzzy system; the linguistic variables used were low, medium low, middle, medium high, and high (Table 4).

2.9. Defuzzification. Defuzzification was done by equation (11), with the 36 inference rules obtained for the modeling of maturity. The Takagi-Sugeno fuzzy model is illustrated in Figure 10; Z_i represents the output level of the i-th fuzzy rule, w_i is the weight (firing strength) given by the membership functions, and N is the number of inference rules.

\[
\text{Final output} = \frac{\sum_{i=1}^{N} w_i Z_i}{\sum_{i=1}^{N} w_i}. \tag{11}
\]

2.10. Fuzzy System Proposal. Three proposed architectures of the fuzzy systems were evaluated for fruit maturity identification, as shown in Figure 11. These used the means of the RGB channels of the segments associated with the image. In the first architecture, it uses the R, G,
Figure 9: Membership functions of the fuzzy system for tomato maturity classification.

Table 2: Fuzzy system training results.

Fuzzy system   Inputs                Membership functions   Type         Epochs   Error
Model 1        Mean RGB components   3, 3, 3                Triangular   100      0.70536
Model 2        Mean RGB components   3, 4, 3                Triangular   100      0.53892
Model 3        Mean RGB components   7, 7, 7                Triangular   100      0.01044
Model 4        Mean RGB components   10, 10, 10             Triangular   100      8.49 × 10⁻⁵

and B channels as inputs; the second one uses the difference of the R and G channels, which allows identifying the maturity according to the methodology proposed by [7]; and the last construction was a change of color model from RGB to CIELab 1976, with the inputs L∗, a∗, and b∗ and the Minolta a∗/b∗ relation proposed by [4].

To perform the ANFIS training, forty samples across the six stages of maturity were used. Table 5 shows the results of the training using 100 epochs for the three proposed models. It can be observed that Model 1 has the lowest training error, 0.046; this model uses the entries R, G, and B with 3, 4, and 3 membership functions, respectively.
Table 3: Maximum and minimum range of the averages of the RGB channels for each state of maturity.

Maturity level    Min red mean   Max red mean   Min green mean   Max green mean   Min blue mean   Max blue mean
Green (G)         21.5402641     23.4607073     21.4570773       22.9846567       17.4503043      20.5893361
Breaker (B)       19.1914739     25.6090892     19.7009942       23.9158162       19.788143       24.2440957
Turning (T)       8.29093793     25.4734785     13.1834743       22.9724406       19.0287402      29.0377504
Pink (P)          17.9856155     24.126667      17.6915075       21.1138724       17.4557533      19.9197693
Light red (LR)    7.38083985     24.0121635     15.451138        21.1648058       3.99488988      20.7513841
Red (R)           7.35927338     24.064192      15.9823308       21.1106179       5.10223285      20.3244826

Table 4: Inference rules (range of linguistic values per channel).

Class               Red mean            Green mean                 Blue mean
(1) Green (G)       Middle to Middle    Medium high to High        Low to Middle
(2) Breaker (B)     Middle to High      Medium low to High         Middle to High
(3) Turning (T)     Low to High         Low to High                Middle to High
(4) Pink (P)        Low to High         Medium low to Medium high  Low to Middle
(5) Light red (LR)  Low to Middle       Low to Medium high         Low to High
(6) Red (R)         Low to High         Low to Medium low          Low to High
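The rule base in Table 4 derives from the numeric ranges in Table 3. A minimal sketch that checks which classes' Table 3 ranges contain a given mean RGB triple (values rounded to two decimals; the helper name is illustrative, not from the paper):

```python
# (min, max) of the mean R, G, B per maturity level, from Table 3 (rounded)
RANGES = {
    "Green (G)":      ((21.54, 23.46), (21.46, 22.98), (17.45, 20.59)),
    "Breaker (B)":    ((19.19, 25.61), (19.70, 23.92), (19.79, 24.24)),
    "Turning (T)":    ((8.29, 25.47),  (13.18, 22.97), (19.03, 29.04)),
    "Pink (P)":       ((17.99, 24.13), (17.69, 21.11), (17.46, 19.92)),
    "Light red (LR)": ((7.38, 24.01),  (15.45, 21.16), (3.99, 20.75)),
    "Red (R)":        ((7.36, 24.06),  (15.98, 21.11), (5.10, 20.32)),
}

def candidate_classes(r, g, b):
    """Classes whose Table 3 ranges contain the sample's mean RGB triple."""
    return [name for name, ((r0, r1), (g0, g1), (b0, b1)) in RANGES.items()
            if r0 <= r <= r1 and g0 <= g <= g1 and b0 <= b <= b1]
```

Because the ranges overlap heavily, a crisp range check alone often matches several classes at once; this is precisely why the fuzzy rules are needed to weight the overlapping memberships and produce a single maturity level.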

Figure 10: Operation of the Takagi-Sugeno rules to classify the maturity of the tomato; each rule's output level is Z = a·(red) + b·(green) + c·(blue) + d, weighted by its firing strength.
Figure 11: Architecture of the fuzzy models: (a) model that uses the mean RGB channels of the tomato image segment, (b) model that uses the R−G mean of the tomato image segment, and (c) model that uses the L∗, a∗, b∗, and a∗/b∗ means of the tomato image segment.
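The three descriptor sets of Figure 11 can all be computed from a mean RGB triple. A sketch of the color model change used by the third architecture, assuming the camera delivers sRGB and using the standard D65 conversion (the paper does not state which RGB-to-CIELab transform it used, so this is only one plausible choice):

```python
def srgb_to_lab(r8, g8, b8):
    """Convert an 8-bit sRGB triple to CIELab (D65 white point)."""
    def linear(c):
        c /= 255.0  # gamma-expand the sRGB component
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = linear(r8), linear(g8), linear(b8)
    # linear sRGB -> CIE XYZ (D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white

    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def descriptors(r8, g8, b8):
    """The descriptor sets of Figure 11: (R, G, B), R-G, and (L, a, b, a/b)."""
    L, a, b = srgb_to_lab(r8, g8, b8)
    return (r8, g8, b8), r8 - g8, (L, a, b, a / b if b else float("inf"))
```

Pure white maps to L ≈ 100 with a and b near zero, a quick sanity check on the conversion.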

3. Results

The results were obtained from the models using a set of 20 samples that were not part of the training set, and they are shown in Table 6. Looking at Model 1, it can be noticed that it presented an error of 536.995 × 10⁻⁶, which is the smallest value compared with the other two. Models 1 and 3 managed to correctly classify the entire test set; however, Model 2 did not correctly classify twelve samples of the test set. The classification error is lower in Model 1 because its descriptor, the mean of the R, G, and B channel components, can identify the increase in the mean of the red channel, the decrease in the mean value of the green channel, and the nonlinear behavior of the average values of the blue channel [14].

4. Discussion

According to the results, Models 1 and 3 correctly classified the set of test samples. Additionally, these presented the lowest sums of squared errors, with the fuzzy system designed for RGB components reaching an average value of 536.995 × 10⁻⁶. Its architecture used ten membership functions: three for red, four for green, and three for blue, which gave a reliable performance.

Additionally, Model 3 was a fuzzy system that used the averages (L∗, a∗, b∗, and a∗/b∗) of the tomato as inputs; its architecture integrated twelve membership functions, i.e., each input used three. The sum of errors of this system was 12825.86 × 10⁻⁶. On the other hand, the fuzzy system with the (R−G) data entry had 10 membership functions, and the sum of its quadratic classification errors was 32.8434. It can be inferred that by using the subtraction (R−G) as a descriptor, the fuzzy classifier hid the information of the R and G components while discarding the blue component. This system presented difficulties in classifying classes 3, 4, and 5; consequently, its efficiency was very low compared with the others. The color representation with the components (L∗, a∗, b∗, and a∗/b∗) seems to help, because few membership functions were used in each entry, but when four entries were considered, the work area was divided into 81 sectors. The color representation with the RGB model collected the direct values of the sensor image, generally of the Bayer type, so that it had the complete information to classify without noise due to data transformation; this is a clear theoretical advantage of this work. Therefore, the fuzzy system
Table 5: Proposed fuzzy systems.

Model   Reference               Inputs         Membership functions (triangular)   Training error (100 epochs)
1       Proposed in this work   R, G, B        3, 4, 3                             0.046
2       [7]                     (R − G)        10                                  1.16
3       [4]                     L, a, b, a/b   3, 3, 3, 3                          0.81

Table 6: Output and error of different classification systems.

Test class   Model 1 output   Model 1 MSE (×10⁻⁶)   Model 2 output   Model 2 MSE   Model 3 output   Model 3 MSE (×10⁻⁶)
1            0.9848           230.0                 4.0274           9.1653        0.9883           135.1337
5            5.0015           2.380                 4.9622           0.0014        5.0002           0.0831
6            5.9993           0.420                 6.0460           0.0014        6.0003           0.15688
3            3.0000           0.00042               3.5879           0.3457        2.9999           0.0003
3            2.9995           0.24200               3.5936           0.3524        2.9995           0.1605
5            5.0142           203.00                4.0408           0.9200        4.9626           1394.9366
2            1.9989           1.1500                2.6855           0.4699        2.0091           83.24365
4            4.0041           17.400                3.9967           1.08 × 10⁻⁵   3.9806           372.6216
4            3.9857           17.400                4.1620           0.0262        4.0371           1383.8290
1            0.9848           230.00                4.0274           9.1653        0.9883           135.1337
6            5.9998           0.0109                5.2491           0.5637        5.9863           187.4693
2            1.9882           139.00                3.9156           3.6697        1.9989           1.0575
3            2.9999           0.00185               4.1862           1.4070        2.9999           0.0021
5            4.9954           20.600                3.0531           3.7902        5.0375           1407.2835
5            5.0158           251.00                4.0636           0.8767        4.9669           1093.2020
3            2.9045           9110.0                3.8458           0.7154        2.9958           17.0728
5            4.9946           28.600                4.2339           0.5867        5.0047           22.7437
4            3.9909           81.200                4.1841           0.0339        4.0414           1721.7774
4            3.9857           204.00                3.1609           0.7040        3.9314           4702.4978
4            4.0167           204.00                4.2202           0.0485        4.0129           167.4634
Sum          —                10739.9               —                32.8434       —                12825.86
Average      —                536.995               —                1.64217       —                641.293
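The error figures in Table 6 follow from the squared difference between the crisp fuzzy output and the integer class label, scaled to 10⁻⁶ units, and the reported average is the column sum divided by the 20 test samples. A small sketch (the helper name is illustrative; the first row reproduces the order of magnitude of the reported 230.0, since the published outputs are rounded):

```python
def squared_error_micro(target_class, output):
    """Squared error between the fuzzy output and the class label, in 1e-6 units."""
    return (target_class - output) ** 2 * 1e6

# First row of Table 6: class 1, Model 1 output 0.9848
err = squared_error_micro(1, 0.9848)

# The reported average error is the Model 1 column sum over the 20 test samples
average_error = 10739.9 / 20
```

This is where the headline figure of 536.995 × 10⁻⁶ quoted in the abstract and the conclusions comes from.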

designed with RGB averages used only 36 sectors, generated by three membership functions in R, four in G, and three in B; this is a practical advantage.

In other words, the six-stage classification of tomato maturity can be reliably done in an RGB color space, mainly due to the nonlinear surfaces created by the fuzzy system or other mathematical functions, which separate each stage. However, the main limitation of the proposed system is that the overall experimentation was carried out in a controlled environment (fixed lighting, fixed distance from the camera to the sample, and a matt black background). This weakness is already being considered by the research team, and a proposal will be reported in an upcoming paper.

5. Conclusion

In this work, a CVS was designed using a Raspberry Pi 3, which classified tomato maturity degrees according to the USDA criteria with an average error of 536.995 × 10⁻⁶. The acquisition of the CVS images was done with the camera module Raspberry Pi 2 in a controlled environment with an illumination intensity of 200 lux, with the aim of reducing the noise in the fruit's segmentation. Subsequently, several fuzzy systems were evaluated while maintaining the use of the four views, to optimize the number of triangular membership functions and reduce the classification error. Based on the results, the system obtained a good classification, surpassing the systems that use the CIELab color space model and the R−G descriptor. Together with this study, the reported relationship of a∗/b∗ for the identification of tomato maturity is confirmed [3–6].

One aspect that can be highlighted is the use of the Raspberry Pi 3 and the camera module Raspberry Pi 2, which allowed creating applications of easy technology transfer and rapid implementation focused on the classification of fruit and vegetable maturity. This system can be extended to the CVS estimation of soluble solids, vitamins, and antioxidants in tomato.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.
Journal of Sensors 11

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors' Contributions

Marcos Jesús Villaseñor Aguilar contributed to the implementation of the image acquisition system for the tomato samples; he also developed the capture and processing software for the determination of tomato maturity levels. J. Enrique Botello Alvarez contributed to the conceptualization, the design of the vision system experiment, the tutoring, and the supply of study materials, laboratory samples, and equipment. F. Javier Pérez-Pinal contributed to the preparation and creation of the published work, the writing of the initial draft, and the validation of the results of the vision system. Miroslava Cano-Lara focused on the validation of the acquisition vision system and of the algorithms. M. Fabiola León Galván focused on the revision of the results of the classification system and on the conceptualization. Micael-Gerardo Bravo-Sánchez contributed to the methodology design, the tutoring, and the design of the vision system experiment. Alejandro Israel Barranco Gutierrez led the supervision, the planning, the execution of the research activity, the technical validation, and the follow-up of the publication of the manuscript.

Acknowledgments

The authors greatly appreciate the support of TecNM, CONACyT, PRODEP, UG, ITESI, and ITESS.