Journal of Sensors
Volume 2019, Article ID 3175848, 12 pages
https://doi.org/10.1155/2019/3175848
Research Article
Fuzzy Classification of the Maturity of the Tomato Using a
Vision System
Received 29 December 2018; Revised 5 March 2019; Accepted 14 March 2019; Published 4 July 2019
Copyright © 2019 Marcos J. Villaseñor-Aguilar et al. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work
is properly cited.
Artificial vision systems (AVS) have become very important in precision agriculture applied to produce high-quality and low-cost foods with high functional characteristics generated through environmental care practices. This article reports the design and implementation of a new fuzzy classification architecture based on the RGB color model with descriptors. Three inputs were used, associated with the average values of the color components of four views of the tomato; three triangular membership functions were associated with the R and B components and four with the G component. Forty tomato samples were used for training and twenty for testing; the training was done using the Matlab© ANFISEDIT tool. The tomato samples were divided into six categories according to the US Department of Agriculture (USDA). This study focused on optimizing the descriptors of the color space to achieve high precision in the prediction results of the final classification task, with an error of 536.995 × 10⁻⁶. The Computer Vision System (CVS) is integrated by an image isolation system with lighting; the image capture system uses a Raspberry Pi 3 and the Camera Module Raspberry Pi 2 at a fixed distance against a black background. In the implementation of the CVS, three different color description methods for tomato classification were analyzed and their respective fuzzy systems were designed, two of them using descriptors described in the literature.
the stage of pink maturity of the tomato, the values of Minolta a*/b* change from negative to positive magnitude. When the ratio of Minolta a*/b* of the tomatoes reached 0.6-0.95, those can be easily marketed. On the other hand, [4] estimated the lycopene content in the different stages of maturity of the tomato by means of the foliar area and the color parameters (L*, a*, b*, and hue). This model was done using an artificial neural network (ANN).

On the other hand, the use of the RGB color model has allowed the identification of the maturity of the tomato. As reported by [7], which proposed a methodology to identify red tomatoes for automatic cutting through the use of a robot, this used RGB images analyzed using the relationship between the red-blue component (RB) and red-green (RG), which allowed formulating the inequalities B ≤ 0.8972R and G ≤ 0.8972R; when these conditions are met, the fruit can be harvested. A similar investigation was carried out by [8], where they compared RGB images with hyperspectral images (in the range of 396-736 nm with a spectral resolution of 1.3 nm using a spectrograph). A linear discriminant analysis was applied to both groups for the classification of the tomatoes in five stages of maturity, which was weighted by a majority vote strategy of the analysis of the individual pixels. The authors document that hyperspectral images were more discriminant than RGB in tomato maturity analysis.

In 2018, [9] developed a system of maturity classification of tomatoes; the system used two types of tomatoes: with defects and without defects. For the fruit's classification, an artificial backpropagation neural network (BPNN) was used, which was implemented in Matlab©. This system identified the degrees of maturity: red, orange, turning, and green. The architecture of the neural network had thirteen inputs that were associated with six functions of color and seven functions of form, twenty neurons in the hidden layer, and one in the output. Reference [10] proposed a method using a BPNN to detect maturity levels (green, orange, and red) of tomatoes of the Roma and Pera varieties. The color characteristics were extracted from five concentric circles of the fruit, and the average shade values of each subregion were used to predict the level of maturity of the samples; these values were the inputs of the BPNN. The average precision to detect the three maturity levels of the tomato samples in this method was 99.31%, and the standard deviation was 1.2%. Reference [11] implemented a classification system based on convolutional neural networks (CNN). The proposed classification architecture was composed of three stages; the first stage managed color images of three channels that are 200 pixels in height and width. In the second part, it used five layers of CNN that extracted the main characteristics. The convolution kernels are of sizes 9 × 9, 5 × 5, and 3 × 3 in order to conserve characteristics, reduce unnecessary parameters, and improve the speed of calculations. Together, it has two layers of max-pooling in the CNN layers. The last part is the classification of results by a fully connected layer. The experimental results showed an average accuracy of 91.9% with a prediction time of less than 0.01 s. Another research was that of [3], who proposed an algorithm in Matlab© Simulink, which employed a 4-megapixel camera, with a resolution of 640 × 480 and a frame rate of 30 for the capture of the images; these received a processing that consisted of an erosion and an expansion. The classification and identification of the maturity of the tomato were done by obtaining the red chroma of the YCbCr color model, which was between 135 and 180. Reference [6] developed a system of classification of maturity of cherry tomatoes based on artificial vision; in this proposal, they used color, texture, and shape features with K-nearest-neighbor and support vector machine classifiers to classify the ripened tomatoes.

Currently, with Computer Vision Systems (CVS) and Fuzzy Logic (FL), applications of maturity classification of tomatoes, guavas, apples, mangoes, and watermelons have been developed [12]. FL is an artificial intelligence technique that models human reasoning from the linguistics of an expert to solve a problem. Therefore, the logical processing of the variables is qualitative, based on quantitative membership functions [13]. References [14, 4] argue that the classification of the maturity of the elements of study is composed of two systems: the identification of color and its labeling. For color representation, they used image histograms based on the RGB, HSI, and CIELab color space models; for the automatic labeling of the fruits, they designed a fuzzy system that handled the knowledge base that was transferred by an expert. On the other hand, the proposal made by [15] estimated the level of maturity in apples using the RGB color space; their methodology used four images of different views of the fruit. They proposed four maturity classes, based on a fuzzy system, which were defined as mature, low mature, near to mature, or too mature. The inputs of the fuzzy system were the average values of each color map of the segmented images. Reference [13] developed an image classification system of apple, sweet lime, banana, guava, and orange; the system was implemented in Matlab©. The characteristics extracted from each fruit's image were the area and the major and minor axes of each sample; these were used as inputs in the fuzzy system for their classification. Another similar study was reported by [16], which
(Figure: Sugeno fuzzy classifier "Fuzzy tomatoes" with Red, Green, and Blue inputs and a maturity-state output; the class labels shown are Green, Breaker, Turning, Light Red, and Red.)
Figure 5: Segmentation of one sample: (a) capture and scaling of the sample image, (b) binarization of the image and noise removal, and (c) image segmentation by means of minor-area discrimination.
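A minimal sketch of the segmentation steps named in Figure 5 is given below, assuming OpenCV and NumPy are available; the threshold value and the strategy of keeping the largest connected component are illustrative choices, not the authors' exact parameters.

```python
# Sketch of the Figure 5 steps: binarize the scaled sample image and keep only
# the largest connected component (minor-area discrimination). Assumes a dark
# background; threshold and area handling are illustrative, not the paper's.
import cv2
import numpy as np

def segment_tomato(bgr_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)  # black background
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if n_labels <= 1:                      # nothing but background was found
        return np.zeros_like(binary)
    # discard small (noise) regions by keeping only the largest foreground blob
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255
```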
direct relationship between the axis's orthogonality with the data classes.

2.6. Fuzzification. In this stage, fuzzification had the main purpose of translating the input values into linguistic variables [22]. In this proposed system, a vector created by the average values of the RGB components is used as the input variable. The input fuzzification was done using triangular membership functions as shown in Figure 9. These functions were selected for their easy hardware implementation.

It is well known that in the first three maturity stages, a greater sensitivity is required to identify the changes compared with the rest of them. Therefore, in this paper, the membership function related to the green variable entries consisted of four sections. On the other hand, three membership functions were proposed for the blue and red cases, which resulted in six maturity states. Finally, the range value, for the most significant input and output stage, was determined by selecting the linguistic states for each variable, i.e., very, medium, and less.

2.7. Fuzzy System Implementation. The fuzzy system was implemented with the Matlab ANFISEDIT tool and image capture using the Raspberry Pi camera, where a data set was formed by the means of the RGB channels of the image and the output label of each sample.

Four variants of the fuzzy system were designed to classify the state of maturity of the tomato.
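A small sketch of how the fuzzy input vector described in Sections 2.6 and 2.7 could be assembled, i.e., the mean R, G, and B of the tomato segment averaged over the four captured views, is shown below; the function names and array conventions are assumptions for illustration, not the authors' Matlab code.

```python
# Sketch: build the fuzzy-system input vector as the mean R, G, B of the
# segmented tomato, averaged over the four captured views. Assumes NumPy;
# the image/mask arrays are placeholders for the real captures.
import numpy as np

def rgb_mean_of_view(rgb_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean R, G, B over the tomato pixels of one view (HxWx3 image, HxW mask)."""
    return rgb_image[mask.astype(bool)].mean(axis=0)

def fuzzy_input_vector(views) -> np.ndarray:
    """views: list of (rgb_image, mask) pairs for the four sides of the fruit."""
    return np.mean([rgb_mean_of_view(img, m) for img, m in views], axis=0)
```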
Figure 6: Mapping of the means of the segments of the RGB channels of the training set.
Figure 7: Mapping of the means of the segments of the channel CIELab 1976 of the training set.
In these, several parameters were maintained, which were the inputs of the system, the number of training epochs, and the type of membership functions. Table 2 shows the architectures used for each fuzzy system and the error obtained after training, where it can be seen that the designs that presented the least errors were Models 3 and 4. The selected membership function is triangular because of its easy implementation.

The programming was carried out using the methodology proposed by [23]. The description of each function is shown as follows, where the variables are LR (Low Red), MR (Middle Red), HR (High Red), LG (Low Green), LMG (Medium Low Green), HMG (Medium High Green), HG (High Green), LB (Low Blue), MB (Middle Blue), and HB (High Blue):

LR = \begin{cases} \frac{16.48 - R}{16.48}, & 0 < R \le 16.48 \\ 0, & 16.48 < R \le 34.73 \end{cases} \quad (1)
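To make the piecewise definitions concrete, the sketch below evaluates a generic triangular membership function and the LR function of equation (1) in Python; the breakpoints are taken from the printed equation, while the function names are chosen here for illustration and are not the authors' Matlab/ANFISEDIT code.

```python
# Minimal sketch: a generic triangular membership function and the LR
# ("Low Red") function of equation (1), with breakpoints from the paper.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def low_red(r_mean: float) -> float:
    """LR, equation (1): ramps down from 1 at R = 0 to 0 at R = 16.48,
    and stays at 0 for 16.48 < R <= 34.73."""
    if 0 < r_mean <= 16.48:
        return (16.48 - r_mean) / 16.48
    return 0.0

print(low_red(8.0))  # ~0.51 for a mean red value of 8
```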
Figure 8: Mapping of the channel segment means CIELab 1976 of the training set using the Minolta (a*/b*) relation.
MR = \begin{cases} \frac{16.48 - R}{16.48}, & 0 < R \le 16.48 \\ \frac{R - 34.73}{18.25}, & 16.48 < R \le 34.37 \end{cases} \quad (2)

HR = \begin{cases} 0, & 0 < R \le 16.48 \\ \frac{R - 16.48}{18.25}, & 16.48 < R \le 34.53 \end{cases} \quad (3)

LG = \begin{cases} \frac{16.76 - G}{16.76}, & 0 < G \le 16.76 \\ 0, & 16.76 < G \le 23.91 \end{cases} \quad (4)

LMG = \begin{cases} \frac{G - 16.76}{16.76}, & 0 < G \le 16.76 \\ \frac{20.32 - G}{3.56}, & 16.76 < G \le 20.32 \\ 0, & 20.32 < G \le 23.9 \end{cases} \quad (5)

HMG = \begin{cases} 0, & 0 < G \le 16.76 \\ \frac{G - 20.34}{3.56}, & 16.76 < G \le 20.32 \\ \frac{23.9 - G}{3.57}, & 20.32 < G \le 23.9 \end{cases} \quad (6)

HG = \begin{cases} 0, & 0 < G \le 20.37 \\ \frac{G - 27.9}{7.57}, & 20.37 < G \le 27.9 \end{cases} \quad (7)

LB = \begin{cases} \frac{16.48 - B}{16.48}, & 0 < B \le 16.48 \\ 0, & 16.48 < B \le 41.61 \end{cases} \quad (8)

MB = \begin{cases} \frac{B - 16.8}{16.8}, & 0 < B \le 16.8 \\ \frac{16.8 - B}{8.81}, & 16.52 < B \le 45.61 \end{cases} \quad (9)

HB = \begin{cases} 0, & 0 < B \le 16.8 \\ \frac{B - 16.8}{25}, & 16.8 < B \le 45.61 \end{cases} \quad (10)

2.8. Inferential Logic. The inferential logic was determined by identifying the maximum and minimum ranges of the averages of the RGB components of the training set images. Table 3 shows the maximum and minimum averages of each maturity state according to the USDA. By using the last procedure, it was possible to determine a set of 36 rules that were used in the fuzzy system; the linguistic variables used were low, medium, low average, high, and high average (Table 4).

2.9. Defuzzification. Defuzzification was done by equation (11), with the 36 inference rules obtained for the modeling of maturity. The Takagi-Sugeno fuzzy model is illustrated in Figure 10; Z_i represents the weight of the fuzzy rule in the output, w_i is the weight of the membership function, and N is the number of rule inferences.

\text{Final output} = \frac{\sum_{i=1}^{N} w_i Z_i}{\sum_{i=1}^{N} w_i} \quad (11)

2.10. Fuzzy System Proposal. Three proposed architectures of the fuzzy systems were evaluated for the identification of fruit maturity, as shown in Figure 11. These used the means of the RGB channels of the segments associated with the image.
Figure 9: Membership functions of the fuzzy system for tomato maturity classification.
Table 2: Architectures used for each fuzzy system and the error obtained after training.

Fuzzy system | Inputs | Number of membership functions | Type of membership functions | Epochs | Error
Model 1 | Mean RGB components | 3, 3, 3 | Triangular | 100 | 0.70536
Model 2 | Mean RGB components | 3, 4, 3 | Triangular | 100 | 0.53892
Model 3 | Mean RGB components | 7, 7, 7 | Triangular | 100 | 0.01044
Model 4 | Mean RGB components | 10, 10, 10 | Triangular | 100 | 8.49 × 10⁻⁵
In the first architecture, it uses the R, G, and B channels as inputs; the second one uses the difference of the R and G channels, which allows identifying the maturity according to the methodology proposed by [7]; and the last construction was a change of color model from RGB to CIELab 1976, with the inputs L*, a*, and b* and the Minolta a*/b* relation proposed by [4].

To perform the ANFIS's training, forty samples in the six stages of maturity were used. Table 5 shows the results of the training using 100 epochs of the three proposed models. It can be observed that Model 1 has the lowest training error, that is, 0.046; this model uses the inputs R, G, and B with 3, 4, and 3 membership functions, respectively.
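The three input descriptors compared here (the mean R, G, B; the R-G difference; and the CIELab values with the Minolta a*/b* ratio) could be computed as in the following sketch, which assumes NumPy and scikit-image are available; it is an illustration, not the authors' implementation.

```python
# Sketch of the three input descriptors compared in this work, computed from
# a segmented tomato image. Variable names are illustrative only.
import numpy as np
from skimage.color import rgb2lab

def descriptors(rgb_image: np.ndarray, mask: np.ndarray):
    """rgb_image: HxWx3 array in [0, 1]; mask: HxW boolean tomato segment."""
    pixels = rgb_image[mask]                      # N x 3 tomato pixels
    r_mean, g_mean, b_mean = pixels.mean(axis=0)  # Model 1 inputs: mean R, G, B

    rg_diff = r_mean - g_mean                     # Model 2 input: (R - G), after [7]

    lab = rgb2lab(rgb_image)[mask]                # CIELab 1976 conversion
    l_mean, a_mean, b_star_mean = lab.mean(axis=0)
    minolta_ratio = a_mean / b_star_mean          # Model 3 input: a*/b*, after [4]

    return (r_mean, g_mean, b_mean), rg_diff, (l_mean, a_mean, b_star_mean, minolta_ratio)
```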
Table 3: Maximum and minimum range of the averages of the RGB channels for each state of maturity.
Maturity level | Minimum red mean | Maximum red mean | Minimum green mean | Maximum green mean | Minimum blue mean | Maximum blue mean
Green (G) | 21.5402641 | 23.4607073 | 21.4570773 | 22.9846567 | 17.4503043 | 20.5893361
Breaker (B) | 19.1914739 | 25.6090892 | 19.7009942 | 23.9158162 | 19.788143 | 24.2440957
Turning (T) | 8.29093793 | 25.4734785 | 13.1834743 | 22.9724406 | 19.0287402 | 29.0377504
Pink (P) | 17.9856155 | 24.126667 | 17.6915075 | 21.1138724 | 17.4557533 | 19.9197693
Light red (LR) | 7.38083985 | 24.0121635 | 15.451138 | 21.1648058 | 3.99488988 | 20.7513841
Red (R) | 7.35927338 | 24.064192 | 15.9823308 | 21.1106179 | 5.10223285 | 20.3244826
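As a rough illustration of the Section 2.8 procedure, the sketch below extracts the per-class minimum and maximum of the mean R, G, and B values from a labeled training set, the kind of ranges reported in Table 3; the sample data are placeholders, and the actual 36-rule base was obtained with the Matlab ANFISEDIT tool.

```python
# Sketch of the Section 2.8 procedure: per-maturity-class min/max ranges of
# the mean R, G, B values of the training images (cf. Table 3). The training
# data shown here are placeholders, not the authors' data set.
from collections import defaultdict

# (maturity_label, (mean_R, mean_G, mean_B)) for each training sample
training_means = [
    ("Green",   (22.1, 22.0, 18.3)),   # illustrative values only
    ("Breaker", (20.4, 21.2, 21.0)),
    ("Red",     (23.5, 17.0,  6.2)),
]

ranges = defaultdict(lambda: [[float("inf")] * 3, [float("-inf")] * 3])
for label, means in training_means:
    lo, hi = ranges[label]
    for k, value in enumerate(means):
        lo[k] = min(lo[k], value)
        hi[k] = max(hi[k], value)

for label, (lo, hi) in ranges.items():
    print(label, "min RGB:", lo, "max RGB:", hi)
```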
Figure 10: Operation of Takagi-Sugeno rules to classify the maturity of the tomato.
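As a compact sketch of the rule aggregation illustrated in Figure 10 and formalized in equation (11), the code below computes the Takagi-Sugeno weighted average over the fired rules; the firing strengths and output levels used are illustrative placeholders, not values from the paper.

```python
# Sketch of the Takagi-Sugeno weighted average of equation (11).
# w[i] is the firing strength of rule i and z[i] its output level.

def sugeno_output(w, z):
    """Final output = sum(w_i * z_i) / sum(w_i) over the fired rules."""
    total_weight = sum(w)
    if total_weight == 0:
        raise ValueError("no rule fired for this input")
    return sum(wi * zi for wi, zi in zip(w, z)) / total_weight

# Example: three fired rules pointing at maturity classes 1 (Green),
# 2 (Breaker), and 3 (Turning); the strongest rule dominates the output.
firing_strengths = [0.1, 0.7, 0.2]
output_levels = [1.0, 2.0, 3.0]
print(sugeno_output(firing_strengths, output_levels))  # 2.1
```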
Figure 11: Architecture of the fuzzy models: (a) model that uses the mean RGB channels of the tomato image segment, (b) model that uses the R-G mean of the tomato image segment, and (c) model that uses the L*, a*, b*, and a*/b* means of the tomato image segment.
3. Results

The results were obtained from the models using a set of 20 samples that were not part of the training set, and they are shown in Table 6. Looking at Model 1, it can be noticed that it presented an error of 536.995 × 10⁻⁶, which is the smallest value compared with the other two. On the other hand, Models 1 and 3 managed to correctly classify the entire test sample. However, Model 2 did not classify twelve samples of the test set; those are marked in italic. The classification error is lower in Model 1 because the descriptor mean of the components of the channels R, G, and B can identify the increase in the mean of the red channel, the decrease in the mean value of the green channel, and the nonlinear behavior of the average values of the blue channel [14].

4. Discussion

According to the results, Models 1 and 3 correctly classified the set of test samples. Additionally, these presented the lowest sums of squared errors, the fuzzy system designed for RGB components with an averaged value of 536.995 × 10⁻⁶. Its architecture used ten membership functions, three for red, four for green, and three for blue, which had a reliable performance.

Additionally, Model 3 was a fuzzy system that used the averages (L*, a*, b*, and a*/b*) of the tomato as inputs; its architecture integrated twelve membership functions, i.e., each input used three. The sum of errors of this system was 12825.86 × 10⁻⁶. On the other hand, the fuzzy system with an RGB data entry (R-G) had 10 membership functions, and the sum of its quadratic classification error was 32.8434. It can be inferred that, using the subtraction (R-G) as a descriptor, the fuzzy classifier hid the information of the R and G components while discarding the blue component. This system presented difficulties in classifying classes 3, 4, and 5; consequently, its efficiency was very low compared with the others. The color representation with the components (L*, a*, b*, and a*/b*) seems to help, because few membership functions were used in each entry, but when four entries were considered, the work area was divided into 81 sectors. The color representation with the RGB model collected the direct values of the image sensor, generally of the Bayer type, so that it had the complete information to classify without noise due to the data transformation; this is a clear theoretical advantage of this work.
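As a small illustration of how a sum-of-squared-errors comparison over the test set can be computed, a short sketch follows; the class encodings and values are placeholders, and the exact error definition reported by ANFIS may differ.

```python
# Sketch of a sum-of-squared-errors comparison between predicted and labeled
# maturity classes (1 = Green ... 6 = Red). Values below are placeholders,
# not the paper's test data; the ANFIS error definition may differ.
predicted = [1, 2, 3, 3, 5, 6]
labeled = [1, 2, 3, 4, 5, 6]

sse = sum((p - t) ** 2 for p, t in zip(predicted, labeled))
print("sum of squared classification errors:", sse)  # 1 for this example
```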
Table 5: Results of the training of the three proposed fuzzy models.

Model | Reference | Input | Number of membership functions (triangular) | Training error (100 epochs)
1 | Proposed in this work | R, G, B | 3, 4, 3 | 0.046
2 | [7] | (R-G) | 10 | 1.16
3 | [4] | L, a, b, and a/b | 3, 3, 3, 3 | 0.81
Therefore, the fuzzy system designed with RGB averages used only 36 sectors generated by three membership functions in R, four in G, and three in B; this is a practical advantage.

In other words, the six-class tomato maturity classification can be reliably done in an RGB color space, mainly due to the nonlinear surfaces created by the fuzzy system or other mathematical functions, which separate each stage. However, the main limitation of the proposed system is that the overall experimentation was carried out in a controlled environment (fixed lighting, fixed distance from the camera to the sample, and a matt black background). This weakness is already being considered by the research team, and a proposal will be reported in an upcoming paper.

5. Conclusion

In this work, a CVS was designed using a Raspberry Pi 3, which classified tomato maturity degrees according to the USDA criteria with an average error of 536.995 × 10⁻⁶. The acquisition of the CVS images was done with the camera module Raspberry Pi 2 in a controlled environment with an illumination intensity of 200 lux, with the aim of reducing the noise on the fruit's segmentation. Subsequently, several fuzzy systems were evaluated while maintaining the use of the four views to optimize the number of triangular membership functions and reduce the classification error. Based on the results, the system obtained a good classification, surpassing the systems that use the CIELab color space model and the R-G color space model. Together with this study, the reported relationship of a*/b* for the identification of tomato maturity is confirmed [3–6].

One aspect that can be highlighted is the use of the Raspberry Pi 3 and the camera module Raspberry Pi 2, which allowed creating applications of easy technology transfer and rapid implementation focused on the classification of fruit and vegetable maturity. This system can be extended to the CVS estimation of soluble solids, vitamins, and antioxidants in tomato.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors' Contributions

Marcos Jesús Villaseñor Aguilar contributed to the implementation of the image acquisition system of the tomato samples. Also, he developed the capture and processing system software for the determination of tomato maturity levels. J. Enrique Botello Alvarez contributed to the conceptualization, the design of the vision system experiment, the tutoring, and the supply of study materials, laboratory samples, and equipment. F. Javier Pérez-Pinal contributed to the preparation, creation of the published work, writing of the initial draft, and validation of the results of the vision system. Miroslava Cano-Lara focused on the validation of the vision system of acquisition and of the algorithms. M. Fabiola León Galván focused on the revision of the results of the classification system and on the conceptualization. Micael-Gerardo Bravo-Sánchez contributed to the methodology design, the tutoring, and the establishment of the design of the vision system experiment. Alejandro Israel Barranco Gutierrez led the supervision and responsibility of the leadership for the planning, the execution of the research activity, the technical validation, and the follow-up of the publication of the manuscript.

Acknowledgments

The authors greatly appreciate the support of TecNM, CONACyT, PRODEP, UG, ITESI, and ITESS.

References

[1] A. Gastélum-Barrios, R. A. Bórquez-López, E. Rico-García, M. Toledano-Ayala, and G. M. Soto-Zarazúa, "Tomato quality evaluation with image processing: a review," African Journal of Agricultural Research, vol. 6, no. 14, pp. 3333–3339, 2011.
[2] K. Choi, G. Lee, Y. J. Han, and J. M. Bunn, "Tomato maturity evaluation using color image analysis," Transactions of the ASAE, vol. 38, no. 1, pp. 171–176, 1995.
[3] S. R. Rupanagudi, B. S. Ranjani, P. Nagaraj, and V. G. Bhat, "A cost effective tomato maturity grading system using image processing for farmers," in Proceedings of 2014 International Conference on Contemporary Computing and Informatics (IC3I), pp. 7–12, Mysore, India, 2014.
[4] M. A. Vazquez-Cruz, S. N. Jimenez-Garcia, R. Luna-Rubio et al., "Application of neural networks to estimate carotenoid content during ripening in tomato fruits (Solanum lycopersicum)," Scientia Horticulturae, vol. 162, pp. 165–171, 2013.
[5] R. Arias, T.-C. Lee, L. Logendra, and H. Janes, "Correlation of lycopene measured by HPLC with the L*, a*, b* color readings of a hydroponic tomato and the relationship of maturity with color and lycopene content," Journal of Agricultural and Food Chemistry, vol. 48, no. 5, pp. 1697–1702, 2000.
[6] V. Pavithra, R. Pounroja, and B. Sathya Bama, "Machine vision based automatic sorting of cherry tomatoes," in 2015 2nd International Conference on Electronics and Communication Systems (ICECS), pp. 271–275, Coimbatore, India, 2015.
[7] Y. Takahashi, J. Ogawa, and K. Saeki, "Automatic tomato picking robot system with human interface using image processing," in IECON'01. 27th Annual Conference of the IEEE Industrial Electronics Society (Cat. No.37243), pp. 433–438, Denver, CO, USA, 2001.
[8] G. Polder, G. W. A. M. van der Heijden, and I. T. Young, "Spectral image analysis for measuring ripeness of tomatoes," Transactions of the ASAE, vol. 45, no. 4, pp. 1155–1161, 2002.
[9] S. Kaur, A. Girdhar, and J. Gill, "Computer vision-based tomato grading and sorting," in Advances in Data and Information Sciences, pp. 75–84, Springer, 2018.
[10] P. Wan, A. Toudeshki, H. Tan, and R. Ehsani, "A methodology for fresh tomato maturity detection using computer vision," Computers and Electronics in Agriculture, vol. 146, pp. 43–50, 2018.
[11] L. Zhang, J. Jia, G. Gui, X. Hao, W. Gao, and M. Wang, "Deep learning based improved classification system for designing tomato harvesting robot," IEEE Access, vol. 6, pp. 67940–67950, 2018.
[12] A. R. Mansor, M. Othman, M. Nazari, and A. Bakar, "Regional conference on science, technology and social sciences (RCSTSS 2014)," in Business and Social Sciences, p. 288, Springer, Malaysia, 2016.
[13] H. G. Naganur, S. S. Sannakki, V. S. Rajpurohit, and R. Arunkumar, "Fruits sorting and grading using fuzzy logic," International Journal of Advanced Research in Computer Engineering and Technology, vol. 1, no. 6, pp. 117–122, 2012.
[14] N. Goel and P. Sehgal, "Fuzzy classification of pre-harvest tomatoes for ripeness estimation – an approach based on automatic rule learning using decision tree," Applied Soft Computing, vol. 36, pp. 45–56, 2015.
[15] M. Dadwal and V. K. Banga, "Estimate ripeness level of fruits using RGB color space and fuzzy logic technique," International Journal of Engineering and Advanced Technology (IJEAT), vol. 2, no. 1, 2012.
[16] R. Hasan, S. Muhammad, and G. Monir, "Fruit maturity estimation based on fuzzy classification," in Proceedings of the 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), pp. 27–32, Kuching, Malaysia, 2017.
[17] M. S. Acosta-Navarrete, J. A. Padilla-Medina, J. E. Botello-Alvarez et al., "Instrumentation and control to improve the crop yield," in Biosystems Engineering: Biofactories for Food Production in the Century XXI, R. Guevara-Gonzalez and I. Torres-Pacheco, Eds., pp. 363–400, Springer, 2014.
[18] A. K. Seema and G. S. Gill, "Automatic fruit grading and classification system using computer vision: a review," in 2015 Second International Conference on Advances in Computing and Communication Engineering, pp. 598–603, Dehradun, India, 2015.
[19] B. Zhang, W. Huang, J. Li et al., "Principles, developments and applications of computer vision for external quality inspection of fruits and vegetables: a review," Food Research International, vol. 62, pp. 326–343, 2014.
[20] D. Wu and D.-W. Sun, "Colour measurements by computer vision for food quality control – a review," Trends in Food Science & Technology, vol. 29, no. 1, pp. 5–20, 2013.
[21] M. Pagnutti, R. E. Ryan, G. Cazenavette et al., "Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes," Journal of Electronic Imaging, vol. 26, no. 1, article 013014, 2017.