Understanding Color Image Processing by Machine Vision for Biological Materials
http://dx.doi.org/10.5772/50796
1. Introduction
Post-harvest handling of fruits is completed in several steps: washing, sorting, grading, packing, transporting, and storage. Sorting and grading are considered the most important steps of handling. Product quality and quality evaluation are important aspects of fruit and vegetable production. Sorting and grading are major processing tasks associated with the production of fresh-market fruit types. Considerable effort and time have been invested in the area of automation.
Suitable post-harvest handling of fruits and vegetables is considered the most important process for conserving fruit quality until the produce reaches the consumer, improving the quality of processed food products, and decreasing fruit losses, which are estimated at 30% of crops in Egypt (Reyad, 1999).
Sorting is a separation based on a single measurable property of raw material units, while
grading is “the assessment of the overall quality of a food using a number of attributes”.
Grading of fresh product may also be defined as ‘sorting according to quality’, as sorting
usually upgrades the product (Brennan, 2006).
In the past ten years, operations in grading systems for fruits and vegetables have become highly automated through mechatronics and robotics technologies. Machine vision systems and near-infrared inspection systems have been introduced in many grading facilities, with mechanisms for inspecting all sides of fruits and vegetables (Kondo, 2009).
Machine vision and image processing techniques have been found increasingly useful in the fruit industry, especially for quality inspection and defect sorting. Research in this area indicates the feasibility of using machine vision systems to improve product quality while freeing people from the traditional hand-sorting of agricultural materials.
The use of machine vision for the inspection of fruits and vegetables has increased during
recent years. Nowadays, several manufacturers around the world produce sorting machines
capable of pre-grading fruits by size, color and weight. Nevertheless, the market constantly
requires higher quality products and consequently, additional features have been developed
to enhance machine vision inspection systems (e.g. to locate stems, to determine the main
and secondary color of the skin, to detect blemishes).
Automated sorting has undergone substantial growth in the food industries of developed and developing nations because of the availability of infrastructure. Computer applications in the agriculture and food industries include sorting and grading of fresh products and detection of defects such as cracks, dark spots, and bruises on fresh fruits and seeds. The new technologies of image analysis and machine vision have not yet been fully explored in the development of automated machines for the agricultural and food industries (Locht et al., 1997).
Rapid advances in artificial intelligence have made automated inspection of orange and tomato fruits by computer vision feasible. An intelligent vision system that evaluates fruit quality (size, color, shape, extent of blemishes, and maturity) and assigns a grade would significantly improve the economic benefits to the orange and tomato industries. It would also potentially increase consumer confidence in fruit quality.
The specific objectives are to quantify the following attributes for the inspection of orange and tomato fruits (a code sketch quantifying attributes 1-4 follows the list):
1. Color,
2. Texture (homogeneity or non-homogeneity),
3. Size (projected area),
4. External blemishes (defect detection),
5. Develop image processing techniques to sort orange and tomato fruits into quality classes based on size, color, and texture analysis,
6. Evaluate the performance of the system using orange and tomato fruit samples, and
7. Evaluate the accuracy of the techniques by comparison with manual inspection.
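Below is a minimal sketch, assuming OpenCV and NumPy, of how the four measurable attributes above could be quantified from a single fruit image against a dark background. The file name, thresholds, and window sizes are illustrative assumptions, not values from this chapter.

```python
import cv2
import numpy as np

img = cv2.imread("orange.png")                       # hypothetical input image (BGR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Segment the fruit from a dark background; close holes left by dark blemishes.
_, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
fruit = mask > 0

# 1. Color: mean hue over the fruit region.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mean_hue = hsv[..., 0][fruit].mean()

# 2. Texture: grey-level standard deviation as a crude (non-)homogeneity measure.
texture = gray[fruit].std()

# 3. Size: projected area in pixels (convert with a mm/pixel calibration).
area_px = int(fruit.sum())

# 4. External blemishes: count dark connected regions inside the fruit.
thr = gray[fruit].mean() - 2 * gray[fruit].std()     # assumed defect threshold
dark = ((gray < thr) & fruit).astype(np.uint8)
n_blemishes = cv2.connectedComponents(dark)[0] - 1   # drop the background label

print(mean_hue, texture, area_px, n_blemishes)
```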
Product quality and quality evaluation are important aspects of fruit and vegetable
production. Sorting and grading are major processing tasks associated with the production
of fresh-market fruit types. Considerable effort and time have been invested in the area of
automation, but the complexity of fruit sorting and required sorting rates have forced the
sorting of most fruit types to be performed manually. Although they currently achieve the
best performance, human graders are inconsistent and represent large labor costs.
Machine vision is the study of the principles underlying human visual perception, and it attempts to provide the computer-camera system with the visual capabilities easily accomplished by humans. In the human eye-brain system, the eye receives light from an object and converts the light into electrical signals; it neither interprets these signals nor makes decisions based upon the nature of the image. Image interpretation and decision-making are performed by the brain. Similarly, a machine vision system has an eye, which may be a camera or a sensor, while image interpretation and decision-making are done by appropriate software and hardware. Machine vision, often referred to as computer vision, can be defined as a process of producing a description of an object from its image.
McRae (1985) mentioned that the term "grading" can be applied to two distinct operations: (1) sizing, in which grades are segregated according to their dimensions, and (2) inspection, in which grades are based on the proportion of undesirable characteristics, such as greening, cuts, or other blemishes, that are allowed to remain with the sound tubers, and which involves the elimination of unwanted material.
Leemans and Destain (2004) mentioned that fresh-market fruits like apples are graded into quality categories according to their size, color, and shape and to the presence of defects. The first two quality criteria are already automated on industrial graders, but grading fruits according to the presence of defects is not yet efficient and consequently remains a manual operation that is repetitive, expensive, and unreliable.
Brennan (2006) stated that sorting and grading are terms that are frequently used interchangeably in the food processing industry. Sorting is a separation based on a single measurable property of raw material units, while grading is "the assessment of the overall quality of a food using a number of attributes". Grading of fresh product may also be defined as 'sorting according to quality', as sorting usually upgrades the product.
Kondo (2009) reported that, in the past ten years, operations in grading systems for fruits and vegetables have become highly automated through mechatronics and robotics technologies. Machine vision systems and near-infrared inspection systems have been introduced in many grading facilities, with mechanisms for inspecting all sides of fruits and vegetables.
Automated sorting has undergone substantial growth in the food industries of developed and developing nations because of the availability of infrastructure. Computer applications in the agriculture and food industries include sorting and grading of fresh products and detection of defects such as cracks, dark spots, and bruises on fresh fruits and seeds. The new technologies of image analysis and machine vision have not yet been fully explored in the development of automated machines for the agricultural and food industries. There is increasing evidence that machine vision is being adopted at the commercial level, but the slow pace of technological development in Egypt, where such systems are not widely available, is among the factors that limit processes requiring computer vision and image analysis (Locht et al., 1997).
3. Manual inspection
The method used by farmers and distributors to sort agricultural products is traditional quality inspection and hand-picking, which is time-consuming, laborious, and inefficient.
The maximum manual sorting rate depends on numerous factors, including the workers' experience and training, the duration of the tasks, and the work environment (temperature, humidity, noise levels, and the ergonomics of the work station). More fundamentally, viewing conditions (illumination, defect contrast, and viewing distance) must be optimal to achieve maximum sorting rates.
Attempts to develop automatic produce sorters have been justified mostly by the inadequacies of manual sorters, but few authors provide results that demonstrate the degree of manual sorting inefficiency. Flaws were more accurately identified when the inspector knew that only one type of flaw was present in the sample, and the detectability of each flaw decreased when the sample contained more than one type of flaw. The authors indicated that different flaws must be mentally processed separately in a limited amount of time, and that these separate decisions may interfere with each other when more than one flaw is present in the sample. It was also proposed that a speed-accuracy relationship existed.
Geyer and Perry (1982) showed that samples with more than one flaw required a longer inspection time to achieve accuracy similar to that for samples with only one flaw type. It was thought that inspectors would have to search for different types of flaws, and this may have contributed to the longer inspection time. The increased inspection time improved correct rejection. The rejection of sound items was attributed to the increased false alarm rate caused by more decision cycles.
More than the ability to discern a defect is required for optimal defect detection. Meyers et al. (1990) indicated that inspection tasks are complicated by the fact that acceptable defect limits change periodically. Also, individuals must apply absolute limits to continuous variables, such as color. In addition to interpreting the allowable limits, inspectors must be able to see a defect if they are to reject the produce. Using a standard peach grading line with uniform spherical balls, theoretically only 88.7% of the surface area was presented to an inspector standing at the side of the conveyor. Actual tests showed that only 82% of the defects on the balls were visible to the inspector. The amount of surface area inspected can be increased by placing multiple manual graders on both sides of a conveyor.
Many of the decisions made during manual inspection are based on qualitative measurements, and Muir et al. (1989) illustrated that individual "human sensors" are quite variable and difficult to calibrate. When qualified inspectors were asked to quantify the amount of surface defect on a potato (as a percentage of the total tuber surface), the values for a single sample ranged from 10 to 70%. The repeatability of individual inspectors was also very poor: differences between two consecutive readings were as high as 40 percentage points in some cases. Appropriate imaging sensors are more accurate, with a maximum variation of 15 percentage points.
Rehkugler and Throop (1976) indicated that a manual sorter was able to remove bruised apples from sound fruit with acceptable sorting efficiency at a rate of approximately 1 fruit/s. Similarly, Stephenson (1976) showed that rates for sorting tomatoes into immature and mature lots should not exceed 1 fruit/s per inspector. A slightly faster rate, 1.2 fruit/s, was identified as the maximum rate at which an inspector could reject 72% of serious defects in oranges. These results demonstrate the shortfalls of manual inspection and reinforce the need for a more consistent grading system. Implementation of automated sorting machines may improve accuracy, decrease labor costs, and result in a final product free of defects.
Sun et al. (2003) observed that the basis of quality assessment is often subjective, with attributes such as appearance, smell, texture, and flavour frequently examined by human inspectors, and Francis (1980) found that human perception can easily be fooled. It is therefore pertinent to explore faster and more accurate systems for sorting crops. One such reliable method is the automated computer vision sorting system.
The following process steps are common to all machine vision applications:
Image acquisition
An optical system gathers an image, which is then converted to a digital format and placed
into computer memory.
Image processing
A computer processor uses various algorithms to enhance elements of the image that are of
specific importance to the process.
Feature extraction
The processor identifies and quantifies critical features in the image (e.g., the position of
holes on a printed circuit board, the number of pins in a connector, the orientation of a
component on a conveyor) and sends the data to a control program.
The processor’s control program makes decisions based upon the data. Are the holes within
specification? Is a pin missing? How must a robot move to pick up the component? Machine
vision technology is used extensively in the automotive, agricultural, consumer product,
semiconductor, pharmaceutical, and packaging industries, to name but a few. Some of the
hundreds of applications include vision-guided circuit-board assembly, and gauging of
components, razor blades, bottles and cans, and pharmaceuticals.
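As a skeletal illustration of the four-step loop just described (acquire, process, extract features, decide), the following Python sketch uses OpenCV; the camera index and the pass/fail rule are illustrative assumptions, not part of any system cited here.

```python
import cv2

cap = cv2.VideoCapture(0)      # image acquisition: optical system to digital memory

def process(frame):
    """Image processing: enhance the elements that matter (smooth, then threshold)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def extract_features(binary):
    """Feature extraction: locate and quantify critical features (here, object areas)."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.contourArea(c) for c in contours]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    areas = extract_features(process(frame))
    # Decision: the control program acts on the data (hypothetical size criterion).
    print("accept" if any(a > 5000 for a in areas) else "reject")
cap.release()
```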
The acquisition of an image that is both focused and well illuminated is one of the most important parts of any machine vision system. Figure 1 shows the general steps required to obtain results from an image of an object (Sun et al., 2003).
Originally, image capture and digitization were accomplished using a combination of a video camera and a frame-grabber. This method has been almost entirely replaced by CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor) chips. These chips use electrical circuits to convert light intensities directly into a digital image. They combine the video camera and frame grabber into one device that can operate faster and with less distortion of the image. These chips also have the advantage of producing images at a much higher resolution than the frame-grabber method (Mummert, 2004).
Machine vision and image processing techniques have been found increasingly useful in the fruit industry, especially for quality inspection and defect sorting.
Research in this area indicates the feasibility of using machine vision systems to improve
product quality while freeing people from the traditional hand-sorting of agricultural
materials (Tao, 1996 a,b; Heinemann et al.,1995; Crowe and Delwiche, 1996; Throop et al., 1993;
Yang, 1993; Upchurch et al., 1991). However, automating fruit defect sorting is still a challenging subject due to the complexity of the process. From the fruit industry's perspective, the fundamental requirements for an imaging-based fruit sorting system include: (1) 100% total inspection, so that each piece of fruit is checked; (2) high-speed on-line operation and adaptation to existing packing lines; (3) sorting accuracy comparable to human sorters; and (4) the flexibility to adapt to fruits' natural variations in shape, size, brightness, and various defects (Tao, 1998; Wen and Tao, 1997; Rigney et al., 1992). Machine-vision systems distinguish between good and defective fruit by contrasting the differences in light reflectance off the fruit surfaces (Miller, 1995; Thai et al., 1992; Guyer et al., 1994). Machine vision is increasingly used for automated inspection of agricultural commodities (Brosnan and Sun, 2004; Chen et al., 2002).
Research results suggest that it is feasible to use machine vision systems to inspect fruit for quality-related problems (Bennedsen and Peterson, 2005; Brosnan and Sun, 2004). For fruit such as apples, commercial systems are available that allow sorting based on physical characteristics like weight, size, shape, and color. Automated fruit grading, in which standards are assigned to fruit based on exterior quality, is also possible with machine vision (Leemans et al., 2002).
Commercial sorters frequently use a conveyor system with either shallow cups (each cup
holding one apple as it is moved) or bicone rollers that allow apples to rotate while moving
along the conveyor (Figure 2). To be considered commercially applicable, automated systems
must be able to handle fruit at rates of at least 6-10 fruit per second (Throop et al., 2001).
A camera or cameras above the conveyor are commonly used to capture images in these systems, sometimes in conjunction with mirrors below the fruit. The rotation of apples produced by bi-cone rollers allows multiple aspects of each apple's surface to be imaged by two or more cameras spaced along the conveyor. This approach has not been proven viable for defect detection for a number of reasons, including non-uniform rotation due to differences in apple sizes and frequent bouncing due to non-uniform shapes.
The main components of a typical vision system are described in this study. Several tasks, such as image acquisition, processing, segmentation, and pattern recognition, are involved. The role of the image-acquisition sub-system in a vision system is to transform the optical image data into an array of numerical data that may be manipulated by a computer. Fig. 3 shows a simple block diagram of such a machine vision system. It includes systems and sub-systems for different processes: the large rectangles show the sub-systems, while the parts for gathering information are shown as small rectangles. As can be seen in Fig. 3, light from a source illuminates the scene (which can be an industrial environment), and an optical image is generated by image sensors. Image arrays, digital cameras, or other means are used to convert the optical image into an electrical signal that can then be converted into a digital image.

Typically, cameras incorporating either line-scan or area-scan elements are used, which offer significant advantages. The camera system may use either a charge-coupled device (CCD) sensor or a vidicon for light detection. Preprocessing, segmentation, feature extraction, and other tasks can be performed on this digitized image. Classification and interpretation of the image can be done at this stage and, considering the scene description, an actuation operation can be performed in order to interact with the scene. The actuation sub-system therefore provides an interaction loop with the original scene, adjusting or modifying any given condition to obtain a better image (Golnabi and Asadpour, 2007).
The automated strawberry grading system (Liming and Yanchao, 2010) was developed based on three characteristics: shape, size, and color. The system (Fig. 4) mainly consists of a mechanical part, an image processing part, a detection part, and a control part. The mechanical part mainly consists of a conveyor belt, a platform, a leading screw, a gripper, and two motors to implement strawberry transport and grading. The image processing part consists of a camera (WV-CP470, Panasonic), an image collecting card (DH-CG300, Daheng company), a closed image box, and a computer (PCM9575) to implement image preprocessing and segmentation, extract the grading characteristics, and grade the strawberry by these characteristics.
The detection part consists of two photoelectrical sensors and two limit switches. The photoelectrical sensors are used to detect the strawberry position, while the limit switches are used to protect the slider on the leading screw during detection. The control part uses a single-chip microcomputer (SCM) to receive the signals from the photoelectrical sensors, the limit switches, and the computer, and finally to control the motors.
The results show that the strawberry classification algorithm is viable and accurate: the strawberry size error is less than 5%, the color grading accuracy is 88.8%, and the shape classification accuracy is over 90%. The average time to grade one strawberry is no more than 3 s.
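In the same spirit as the strawberry grader's size and color characteristics, the sketch below grades a single segmented fruit image; the saturation mask, area boundaries, and red-hue window are assumptions, not the published algorithm.

```python
import cv2

img = cv2.imread("strawberry.png")                    # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Fruit mask from saturation (background assumed dull/grey).
fruit = hsv[..., 1] > 60

# Size grade from projected area (pixel counts; calibrate to mm^2 in practice).
area = int(fruit.sum())
size_grade = "large" if area > 60000 else "medium" if area > 30000 else "small"

# Color grade from the fraction of fruit pixels whose hue lies in a red window.
hue = hsv[..., 0]
red = ((hue < 10) | (hue > 170)) & fruit
ripeness = red.sum() / max(area, 1)
color_grade = "A" if ripeness > 0.9 else "B" if ripeness > 0.7 else "C"

print(size_grade, color_grade, round(float(ripeness), 2))
```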
Blasco et al. (2009) developed an engineering solution for the automatic sorting of pomegranate arils. The prototype (Fig. 5) basically consisted of three major elements corresponding to the feeding, inspection, and sorting units, described below. The prototype used two progressive-scan cameras to acquire 512 × 384 pixel RGB (red, green, and blue) images with a resolution of 0.70 mm/pixel. Both cameras were connected to a computer, the so-called "vision computer" (Pentium 4 at 3.0 GHz), by means of a single frame grabber that digitized the images and stored them in the computer's memory.
The illumination system consisted of two 40 W daylight compact fluorescent tubes located on both sides of each conveyor belt. The scene captured by each camera had a length of approximately 360 mm along the direction of movement of the objects and a width that allowed the system to inspect three conveyor belts at the same time. The entire system was housed in a stainless steel chamber.
The sorting area followed the inspection chamber. Three outlets were placed on one side of
each of the conveyor belts. In front of each outlet air ejectors were suitably placed to expel
the product. The separation of the arils was monitored by the control computer, in which a
board with 32 digital outputs was mounted. This board was used to manage the air ejectors.
The computer tracked the movement of the objects on the conveyor belts by reading the
signals produced by the optical encoder attached to the shaft of the carrier roller.
The authors concluded that the prototype for inspecting and sorting the arils was developed and successfully commissioned, and could handle a maximum throughput of 75 kg/h. The inspection unit, with its two cameras connected to a computer vision system, had enough capacity to achieve real-time specifications and enough accuracy to fulfill the commercial requirements. The sorting unit was able to classify the product into four categories.
A machine vision system has also been developed for citrus inspection, including a parallel hardware and software architecture able to determine the external quality of the fruit in real time at a speed of 10 fruits/s.
The vision system was placed on a commercial fruit sorter having four independent inspection lines. As the first step, the sorter singulates the fruit by means of bi-conic rollers before they enter the inspection site. In principle, each individual fruit is located in a space between two rollers (called a cup), although sometimes, when there is excessive loading, two or more fruits are located in the same cup, or a fruit is located between two filled cups. The inspection site (Fig. 6) provides adequate lighting to the scene through fluorescent tubes, incandescent lamps, and polarising filters that remove reflections from the surface of the fruit. The scene is composed of three complete fruits, imaged with a multispectral camera that simultaneously captures four bands: the three conventional color bands (R, G, and B) and another centred at 750 nm (near infrared, denoted I). The camera (Fig. 7) has two CCDs: one a color CCD that provides the RGB information, and the other monochromatic, to which an infrared filter centred on 750 nm (±10 nm) has been coupled to provide the I information. The light coming from the scene reaches a semi-transparent mirror that transmits 50% of the light towards the infrared (A) CCD and reflects the other 50% to a second mirror (B), which reflects all of that light towards the color CCD. The system guarantees at least three whole fruits in each image, with a resolution of 0.7 mm/pixel.
The fruit rotates while passing below the camera due to a forced rotation of the rollers. To singulate the fruits and estimate their size and shape, the system uses only the I information, but for color estimation and defect detection it must also work with the color bands. This fact has been used to set up a parallel strategy based on dividing the inspection tasks between two digital signal processors (DSPs): during on-line work, two image analysis procedures are performed by the two DSPs running in parallel in a master/slave architecture. The master processor calculates the geometrical and morphological features of the fruit using only the I band, and the slave processor estimates the fruit color and detects the skin defects using all four RGBI bands. After the image processing, the master processor collects the information from the slave and sends the result to a control computer. The system was tested under laboratory conditions at two common sizer speeds: 300 and 600 fruits/min (5-10 fruits/s).
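The division of labor described above, geometry from the I band on one processor and color plus defects from all four bands on another, can be mimicked in software. A rough sketch, with Python multiprocessing standing in for the two DSPs and all thresholds assumed:

```python
import numpy as np
from multiprocessing import Pool

def geometry_task(i_band):
    """'Master' role: size/shape features from the I (near-infrared) band only."""
    fruit = i_band > 50                        # hypothetical segmentation threshold
    return {"area": int(fruit.sum())}

def color_defect_task(rgbi):
    """'Slave' role: color estimate plus a crude dark-spot defect count."""
    r, g, b, i = rgbi
    fruit = i > 50
    lum = (r.astype(np.int32) + g + b) // 3
    return {"mean_rgb": [float(c[fruit].mean()) for c in (r, g, b)],
            "defect_px": int(((lum < 40) & fruit).sum())}

if __name__ == "__main__":
    rgbi = np.random.randint(0, 256, (4, 384, 512), dtype=np.uint8)  # stand-in image
    with Pool(2) as pool:
        geo = pool.apply_async(geometry_task, (rgbi[3],))
        col = pool.apply_async(color_defect_task, (rgbi,))
        print(geo.get(), col.get())
```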
An image-processing-based technique was developed by Omid et al. (2010) to measure the volume and mass of citrus fruits such as lemons, limes, oranges, and tangerines. The technique uses two cameras to give perpendicular views of the fruit, as shown in Figure 8. An efficient algorithm was designed and implemented in the Visual Basic (VB) language. The product volume was calculated by dividing the fruit image into a number of elementary elliptical frustums, the volume being the sum of the volumes of the individual frustums.
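A worked sketch of the frustum idea: two perpendicular silhouettes give the two diameters of an elliptical cross-section at every image row, and the volume is accumulated as thin elliptical disks. The mask alignment and mm/pixel scale are assumptions (the original was implemented in VB).

```python
import numpy as np

def volume_from_silhouettes(mask_front, mask_side, mm_per_px):
    """Boolean silhouettes (rows x cols) from two perpendicular views, row-aligned."""
    d1 = mask_front.sum(axis=1) * mm_per_px     # width per row in view 1 (mm)
    d2 = mask_side.sum(axis=1) * mm_per_px      # width per row in view 2 (mm)
    dh = mm_per_px                              # slice thickness (mm)
    # Elementary elliptical disk: V = pi/4 * d1 * d2 * dh, summed over rows.
    return np.pi / 4.0 * np.sum(d1 * d2) * dh

# Check against a synthetic sphere of radius 50 px (both views identical):
y, x = np.ogrid[-60:60, -60:60]
disk = x**2 + y**2 <= 50**2
v = volume_from_silhouettes(disk, disk, mm_per_px=0.5)
print(v)   # close to (4/3)*pi*25**3 mm^3 for this sphere
```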
The computed volumes showed good agreement with the actual volumes determined by the water displacement method. The coefficients of determination (R²) for lemon, lime, orange, and tangerine were 0.962, 0.970, 0.985, and 0.959, respectively. The Bland-Altman 95% limits of agreement for comparing the volumes obtained with the two methods were (−1.62; 1.74), (−7.20; 7.57), (−6.54; 6.84), and (−4.83; 6.15), respectively. The results indicated that citrus fruit size has no effect on the accuracy of the computed volume. The characterization results for various citrus fruits showed that volume and mass are highly correlated. Hence, a simple procedure based on the computed volume of an assumed ellipsoidal shape was also proposed for estimating the mass of citrus fruits. This information can be used to design and develop sizing systems.
Computer vision is the construction of explicit and meaningful descriptions of physical objects from images. Some authors state that it encompasses the capturing, processing, and analysis of two-dimensional images, with others noting that it aims to duplicate the effect of human vision by electronically perceiving and understanding an image. The basic principle of computer vision is described in Fig. 9. Image processing and image analysis are the core of computer vision, with numerous algorithms and methods available to achieve the required classification and measurements.
Computer vision systems have been used increasingly in the food and agricultural industry
for inspection and evaluation purposes as they provide suitably rapid, economic, consistent
and objective assessment. They have proved to be successful for the objective measurement
and assessment of several agricultural products. Over the past decade advances in hardware
and software for digital image processing have motivated several studies on the
development of these systems to evaluate the quality of diverse and processed foods.
Computer vision has long been recognized as a potential technique for the guidance or
control of agricultural and food processes. Therefore, over the past 20 years, extensive
studies have been carried out, thus generating many publications.
Computer vision is a rapid, economic, consistent and objective inspection technique, which
has expanded into many diverse industries. Its speed and accuracy satisfy ever-increasing
production and quality requirements, hence aiding in the development of totally automated
processes. This non-destructive method of inspection has found applications in the
agricultural and food industry, including the inspection and grading of fruit and vegetables.
It has also been used successfully in the analysis of grain characteristics and in the
evaluation of foods such as meats, cheese and pizza (Brosnan and Sun, 2002).
Jarimopas and Jaisin (2008) developed an efficient experimental machine vision sorting system for sweet tamarind pods based on image processing techniques. Relevant sorting parameters included shape (straight, slightly curved, and curved), size (small, medium, and large), and defects. The variables defining the shape and size of the sweet tamarind pods were the shape index and pod length. A pod was said to have defects if it contained cracks.
The sorting system involved the use of a CCD camera adapted to work with a TV card, microcontrollers, sensors, and a microcomputer, as shown in Figure 10.

Figure 10. An experimental machine vision system for sorting sweet tamarind pods (1 is conveyor; 2 is power drive; 3 is light source and CCD camera; 4 is pneumatic segregator and compressed air tank; 5 is control unit; and 6 is microcomputer).
The conveyor belt was 30 cm wide and 180 cm long, with four receivers for the sorted sweet tamarind. On the right side of the belt was a box with a CCD camera mounted on the top and four 14-watt energy-saving lamps, one at each corner of the box, to give uniform light intensity with minimal shadows. The camera, which was mounted about 41 cm above the belt, had a focal length of 38-72 mm and provided a resolution of 520 vertical TV lines. A cylinder of compressed air was used to drive the three pneumatic segregators. The sorting system was designed to sort sweet tamarind into three sizes (large, medium, and small).
The defective pods were rejected at the left hand end of the conveyor. The control unit
components were assembled in a box and placed under the sorting system.
The results showed that the three control factors did not significantly affect shape, size, or defect sorting at a significance level of 5%. The average shape indexes of the straight, slightly curved, and curved pods were 51.1%, 61.6%, and 75.8%, respectively. Pod length was found to be influenced by size and cultivar, with Sitong and Srichompoo pods ranging from 10.0 to 14.0 cm and from 8.5 to 12.4 cm, respectively. The vision sorting system could separate Sitong tamarind pods with an average sorting efficiency (EW) of 89.8% and a mean contamination ratio (CR) of 10.2% at a capacity of 1517 pods/h.
Orange grading operations have been mechanized for a couple of decades. At the first stage of mechanization, plates with holes matching orange fruit sizes were used for sorting. For about the past ten years, machine vision and near-infrared (NIR) technologies have been utilized, together with engineering designs that convey the fruit, to detect fruit size, shape, color, sugar content, and acidity. The system inspects fruit with color CCD cameras installed at six different positions on a line, with lighting devices, to provide images of all sides of the fruit. The lighting devices are made of halogen lamps or LEDs fitted with PL (polarizing) filters to eliminate halation on glossy fruit surfaces. The near-infrared inspection systems consist of halogen lamps and a spectrophotometer to analyze the absorption bands of light transmitted through the fruit. Furthermore, an X-ray imaging system is sometimes installed on each line to find internal defects such as rind-puffing.
Fig. 11 shows a whole inspection system on an orange grading line. After containers filled with oranges are dumped, the fruits are singulated by a singulating conveyor. Singulated fruits are sent to the NIR inspection system (transmissive type) to measure sugar content (brix equivalent) and acidity.
In addition, it can measure the granulation level of the fruit, which indicates the internal water content. The second inspection is X-ray imaging for internal structural quality; rind-puffing, a biological defect, is detected from the image. In the external inspection stage, color images from six machine vision sets operating under random-trigger mode are copied to the image grabber boards fitted in the image processing computers whenever a trigger occurs. Four of the cameras acquire side images, while two view the fruit from the top. The final camera acquires a top image of each fruit after the fruit is turned over, because both the top and bottom sides are inspected. All the images are processed using specific algorithms to detect image features of color, size, shape, and external defects. Output signals from image processing are transmitted to the judgment computer, where the final grading decision (usually into several grades and several sizes) is made based on the fruit features and internal quality measurements.

Figure 11. A whole orange fruit grading system on a line manufactured by SI Seiko Co., Ltd., Japan.
Fig. 12 shows a fruit grading robot system installed at JA Shimoina, Japan. The robot system consists of two 3-DOF manipulators: one is a providing robot, while the other is a grading robot with 12 machine vision systems. After a container comes under the providing robot (1), 12 fruits are sucked up by suction pads at a time (2) and are transported to an intermediate stage, creating vertical space between the fruits (3).
Figure 12. A fruit grading robot system manufactured by SI Seiko Co., Ltd., Japan (Left: front view,
Right: side view).
The grading robot picks the 12 fruits up again (4), and 12 bottom images of the fruits are acquired while the manipulator moves to trays on a conveyor line (5). Just before the fruits are released to the trays (7), four side images of each fruit are acquired by rotating the suction pads through 270° (6). The fruits are then pushed out onto a line (8), and top images are acquired by another color camera stationed on each line. The machine vision software algorithms are similar to those of the orange grading system: fruit color, size, shape, and defects are measured.
It can be concluded that the roles of automated grading systems are as follows: 1) efficient sorting and labor saving; 2) uniformization of fruit quality; 3) enhancement of the market value of products; 4) fair payment to producers based not only on the quantity but also on the quality of each product; 5) farming guidance from grading results and GIS (Geographical Information System); and 6) contribution to the traceability system for food safety and security. The most important difference between the automated systems and conventional machines is their ability to handle a large amount of precise information. To handle comprehensive data on agricultural products and foods, an understanding of the diversity and complexity of biomaterial properties is required, and the sensors that collect the data should often be designed based on those properties. Through a traceability system in which all the data of producers, distributors, and consumers are linked and open to them, it is expected that mutual information exchange among them will make procedures at each stage more effective and produce safer, higher quality products (Kondo, 2010).
Identification of apple stem-ends and calyxes, and their discrimination from defects, is a challenging task on grading lines due to the complexity of the process. An in-line apple defect detection method has been developed. First, a computer-controlled system using three color cameras is placed on the line. In this system, the apples placed on rollers rotate while moving, and each camera captures three images of each apple; in total, nine images are obtained for each apple, allowing the total surface to be scanned. Second, the apple image is segmented from the black background by multi-threshold methods. The defects, including the stem-ends and calyxes, called regions of interest (ROIs), are segmented and counted in each of the nine images. Third, since a calyx and a stem-end cannot appear in the same image, an apple is considered defective if any one of the nine images has two or more ROIs. There are no complex image processing or pattern recognition algorithms in this method, because it is only necessary to know how many ROIs there are in a given apple image. Good separation between normal and defective apples was obtained. The classification error of unjustified acceptance of blemished apples was reduced from 21.8% for a single camera to 4.2% for the three-camera system, at the expense of rejecting a higher proportion of good apples. Averaged over false positives and false negatives, the classification error was reduced from 15 to 11%. The disadvantage of this method is that it cannot distinguish different defect types: defects such as bruising, scab, fungal growth, and disease are all treated the same.
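The counting rule lends itself to a very small implementation. A minimal sketch, assuming each view is already segmented so the background is white and dark regions are candidate ROIs; the grey threshold and minimum ROI area are assumptions:

```python
import cv2
import numpy as np

def count_rois(gray_view):
    """Count dark regions (candidate defects, stem-ends, calyxes) in one view."""
    _, dark = cv2.threshold(gray_view, 60, 255, cv2.THRESH_BINARY_INV)
    n, _, stats, _ = cv2.connectedComponentsWithStats(dark)
    # Skip label 0 (background) and ignore specks smaller than 30 px.
    return sum(1 for k in range(1, n) if stats[k, cv2.CC_STAT_AREA] >= 30)

def is_defective(nine_views):
    """Defective if any single view shows two or more ROIs."""
    return any(count_rois(v) >= 2 for v in nine_views)

views = [np.full((100, 100), 200, np.uint8) for _ in range(9)]  # synthetic views
print(is_defective(views))   # False: no dark ROIs present
```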
The lighting and image acquisition system was designed to be fitted to an existing single-row grading machine (a prototype from Jiangsu University, China). Six lighting tubes (18 W, type 33, Philips, Netherlands) were placed on the inner side of a lighting box, while three cameras (3CCD color UC610, Uniq, USA) observed the grading line in the box: two had their optical axes in a plane perpendicular to the fruit movement and inclined at 60° with respect to the vertical, and one was mounted above, as shown in Figs. 13 and 14. The lighting box is 1000 mm in length and 1000 mm in width. The distance between apple and camera is 580 mm; thus there are three apples in the field of view of each camera, with a resolution of 0.4456 mm per pixel. The images were captured using three Matrox Meteor-II frame grabbers (Matrox, Canada) loaded in three separate computers. The standard image treatment functions were based on the Matrox libraries (Matrox, Canada), with the remaining algorithms implemented in C++. A local network was built among the computers in order to communicate the result data.
Figure 14. Trigger grab of nine images for an apple by three cameras at three positions.
The central processing unit of each computer was a Pentium 4 (Intel, USA) clocked at 3.0 GHz. The fruits placed on cone-shaped rollers rotate while moving: the friction between the rollers and the belt on the conveyor rack makes each roller rotate as it moves through the field of view of the cameras. This was adjusted in such a way that a spherical object with a diameter of 80 mm made one full rotation in exactly three images when passing through the field of view of a camera. The moving speed, in the range of 0-15 apples per second, could be adjusted by the stepping motor (Xiao-bo et al., 2010).
One of the main problems in the post-harvest processing of citrus is the detection of visual defects in order to classify the fruit according to appearance. Citrus species and cultivars present a high degree of unpredictability in texture and color, which makes it difficult to develop a general, unsupervised method capable of performing this task. The authors studied the use of a general approach originally developed for the detection of defects in random color textures. It is based on a Multivariate Image Analysis (MIA) strategy and uses Principal Component Analysis (PCA) to extract a reference eigenspace from a matrix built by unfolding color and spatial data from samples of defect-free peel. Test images are also unfolded and projected onto the reference eigenspace, and the result is a score matrix which is used to compute defect maps based on the T² statistic. In addition, a multiresolution scheme was introduced into the original method to speed up the process. Unlike the techniques commonly used for the detection of defects in fruits, this is an unsupervised method that needs only a few samples for training. It is also a simple approach that is suitable for real-time compliance. Experimental work was performed on 120 samples of oranges and mandarins from four different cultivars: Clemenules, Marisol, Fortune, and Valencia. The success ratio for the detection of individual defects was 91.5%, while the classification ratio of damaged/sound samples was 94.2%. These results show that the studied method can be suitable for the task of citrus inspection.
The method performs novelty detection and is thus able to identify new, unpredicted defects by using a model of sound color textures and considering locations that do not fit this model as defective. It also needs only a few samples to carry out the unsupervised training. For this reason, it is suitable for citrus inspection, as these systems need frequent tuning to adjust to the inspection of new cultivars and even to the features of each batch of fruit within the same cultivar.
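A hedged sketch of the MIA idea, using per-pixel RGB only (the original also unfolds spatial neighbourhoods) and scikit-learn's PCA in place of the authors' implementation; all data here are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_reference(sound_pixels, n_components=2):
    """sound_pixels: (n, 3) RGB rows sampled from defect-free peel."""
    return PCA(n_components=n_components).fit(sound_pixels)

def t2_map(pca, image):
    """Hotelling T^2 score for every pixel of an (H, W, 3) image."""
    scores = pca.transform(image.reshape(-1, 3))
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    return t2.reshape(image.shape[:2])

rng = np.random.default_rng(0)
sound = rng.normal([200, 120, 30], 8, size=(5000, 3))     # orange-like sound peel
pca = fit_reference(sound)

test = np.tile([200.0, 120.0, 30.0], (64, 64, 1))
test[20:30, 20:30] = [80, 60, 40]                          # synthetic defect patch
thr = np.percentile(t2_map(pca, sound.reshape(50, 100, 3)), 99)  # percentile threshold
print(int((t2_map(pca, test) > thr).sum()), "pixels flagged as defective")
```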
Experimental work was performed using 120 samples (images) of randomly selected oranges and mandarins belonging to four different cultivars: Marisol, Clemenules, Fortune, and Valencia. First, a set of experiments was carried out to tune the parameters of the method for each cultivar. These included the number of principal eigenvectors used to define the reference eigenspace, the T² threshold (a percentile in the T² cumulative histogram) used to determine whether locations in test samples were sound or defective, and finally the set of scales used in the multiresolution framework. Once the parameters were tuned, the results for the detection of individual defects were compiled, achieving 91.5% correct detections and 3.5% false detections. By using chromatic and textural features, the main contribution of this method is its capability of detecting external defects in different cultivars of citrus presenting different textures, after only a single prior unsupervised training. The method achieved a performance rate of 94.2% successful classification of complete fruit samples as either damaged or sound. These results show that the MIA approach studied here can be adequate for the task of citrus inspection (Fernando et al., 2010).
Oftentimes, when tackling complex classification problems, a single feature descriptor is not enough to capture the classes' separability. Therefore, efficient and effective feature fusion policies may become necessary. Although normal feature fusion is quite effective for some problems, it can yield unexpected classification results when not properly normalized and preprocessed. Additionally, it has the drawback of increasing dimensionality, which might require more training data.
This work approaches multi-class classification as a set of binary problems, in such a way that one can assemble diverse features and classifier approaches custom-tailored to parts of the problem. It presents a unified solution that can combine many features and classifiers. This technique requires less training and performs better when compared with a naïve method in which all features are simply concatenated and fed independently to each classification algorithm. The results show that the introduced solution is able to reduce the classification error by up to 15 percentage points with respect to the baseline. A second contribution of this work is the introduction to the community of a complete and well-documented fruit/vegetable image data set suitable for content-based image retrieval, object recognition, and image categorization tasks, which the authors hope will be used as a common comparison set by researchers working in this space.
Although feature and classifier fusion can be worthwhile, it seems inadvisable to combine weak features having high classification errors with features having low classification errors; in this case, the system will most likely not take advantage of the combination.
The feature and classifier fusion based on binary base learners presented in that work represents a basic framework for solving the more complex problem of determining not only the species of a produce item but also its variety. Since it requires only partial training for the added features and classifiers, its extension is straightforward. Given that the introduced solution is general enough to be used in other problems, the authors hope it will endure beyond their paper. Whether more complex approaches, such as appearance-based descriptors, provide good classification results is still an open problem; it would be unfair to conclude that they do not help (Anderson et al., 2010).
Color is an important quality attribute that dictates the quality and value of many fruit products. Accurately measuring and describing heterogeneous fruit color changes during ripening is difficult with the instrumentation available (chromometers and colorimeters) due to the small viewing area of the equipment. Calibrated computer vision systems (CVS) provide another technique that allows the capture and quantitative description of whole-fruit color characteristics. Published research has demonstrated errors in CVS due to product curvature. In this work, it was confirmed that, of the a* and b* color values measured on a curved surface, 55% and 69% respectively were within the range measured for the same flat surface. This measurement deviation results in descriptions of hue angle and chroma with average errors of 2° and 2.5, respectively. The system developed allows the capture of hue angle data for whole fruit of heterogeneous color. The usefulness of the device for capturing descriptive color data during fruit maturation was demonstrated with 'B74' mangoes (Kang et al., 2008).
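For reference, hue angle and chroma follow directly from a* and b*. A small worked sketch (the sRGB-to-Lab conversion via scikit-image is a tooling assumption, not the authors' calibrated CVS):

```python
import numpy as np
from skimage import color

def mean_hue_chroma(rgb, mask):
    """rgb: (H, W, 3) floats in [0, 1]; mask: boolean fruit region."""
    lab = color.rgb2lab(rgb)
    a, b = lab[..., 1][mask], lab[..., 2][mask]
    hue = np.degrees(np.arctan2(b, a)) % 360    # hue angle h_ab in degrees
    chroma = np.hypot(a, b)                     # chroma C*_ab
    return hue.mean(), chroma.mean()

rgb = np.full((10, 10, 3), [0.9, 0.6, 0.1])     # a mango-like orange patch
print(mean_hue_chroma(rgb, np.ones((10, 10), bool)))
```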
Hyperspectral images of apples (normal and injured) were acquired using a lab-scale hyperspectral imaging system (Fig. 15) that consisted of a charge-coupled device (CCD) camera (PCO-1600, PCO Imaging, Germany) connected to a spectrograph (ImSpector V10E, Optikon Co., Canada) coupled with a standard C-mount zoom lens. The optics of this imaging system allowed fruit properties associated with the spectral range of 400-1000 nm to be studied.

The camera faced downward at a distance of 400 mm from the target. The sample was illuminated through a cubic tent made of white nylon fabric to provide uniform lighting conditions. The light source consisted of two 50 W halogen lamps mounted at a 45° angle from the horizontal, fixed 500 mm above the sample and spaced 900 mm apart on two opposite sides of the sample. The sample was placed in a position corresponding to the center of the camera's field of view (300 mm × 300 mm), with the calyx-stem axis perpendicular to the camera lens to avoid any confusion between the normal surface and the stem or calyx. The camera-spectrograph assembly was provided with a stepper motor to move the unit through the camera's field of view and scan the apple line by line.
The spectral images were collected in a dark room where only the halogen light source was used. The exposure time was set to 200 ms throughout the test. Each collected spectral image was stored as a three-dimensional image (x, y, λ). The spatial components (x, y) comprised 400 × 400 pixels, and the spectral component (λ) comprised 826 bands within the 400-1000 nm range.

Figure 15. The hyperspectral imaging system: (a) a CCD camera; (b) a spectrograph with a standard C-mount zoom lens; (c) an illumination unit; (d) a light tent; and (e) a PC supported with the image acquisition software.

The hyperspectral imaging system was controlled by a laptop Pentium
M computer (processor speed: 2.0 GHz; RAM: 2.0 GB) preloaded and configured with the
Hypervisual Image Analyzer® software program (ProVision Technologies, Stennis Space
Center, MO, USA). All spectral images acquired were processed and analyzed using the
Environment for Visualizing Images software program (ENVI 4.2, Research Systems Inc.,
Boulder, CO, USA).
The hyperspectral images were calibrated with white and dark references. The dark reference was used to remove the dark-current effect of the CCD detectors, which are thermally sensitive.
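The white/dark correction implied here is the standard flat-field formula, relative reflectance R = (raw − dark) / (white − dark), applied band by band:

```python
import numpy as np

def calibrate(raw, white, dark):
    """raw, white, dark: arrays of identical shape (lines x pixels x bands)."""
    return (raw.astype(np.float32) - dark) / np.maximum(white - dark, 1e-6)

raw, white, dark = np.array([[120.0]]), np.array([[240.0]]), np.array([[20.0]])
print(calibrate(raw, white, dark))   # (120-20)/(240-20) = 0.4545 reflectance
```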
Hyperspectral imaging (400-1000 nm) and artificial neural network (ANN) techniques were investigated for the detection of chilling injury in Red Delicious apples. A hyperspectral imaging system was established to acquire and pre-process apple images, as well as to extract apple spectral properties. Feed-forward back-propagation ANN models were developed to select the optimal wavelength(s), classify the apples, and detect firmness changes due to chilling injury. The five optimal wavelengths selected by the ANN were 717, 751, 875, 960, and 980 nm. The ANN models were trained, tested, and validated using different groups of fruit in order to evaluate the robustness of the models. With the spectral and spatial responses at the five selected optimal wavelengths, an average classification accuracy of 98.4% was achieved for distinguishing between normal and injured fruit. The correlation coefficients between measured and predicted firmness values were 0.93, 0.91, and 0.92 for the training, testing, and validation sets, respectively (Elmasry et al., 2009).
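A small sketch of the classification step: five reflectance values (one per selected wavelength) in, a normal/injured label out. The synthetic data and MLP hyper-parameters are assumptions; the original used purpose-built feed-forward back-propagation models:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
normal = rng.normal(0.60, 0.05, (200, 5))    # reflectance at the 5 bands (fabricated)
injured = rng.normal(0.45, 0.05, (200, 5))   # assume injury lowers reflectance
X = np.vstack([normal, injured])
y = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```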
Naoshi et al. (2008) mentioned that there are many types of citrus fruit grading machines with machine vision capability. While most of them sort fruit by size, shape, and color, detection of rotten fruit remains challenging because its appearance is similar to that of normal fruit. The objectives of this research were to investigate whether fluorescence would be a good indicator of fruit rot, and to develop an economical solution to add rot inspection capability to an existing machine vision fruit inspection station. A machine vision system consisting of a pair of white and ultraviolet (UV) LED lighting devices and a color CCD camera was proposed for the citrus fruit grading task. Since the time lag between the color and fluorescence image captures was short (14 ms), it was possible to inspect the color, shape, size, and rot of a fruit on the move, before it left an existing industrial inspection chamber.
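A hedged sketch of the fluorescence cue: under the UV LEDs, rotten peel fluoresces more strongly, so unusually bright regions in the fluorescence frame are flagged. The intensity threshold and area limit are assumptions:

```python
import cv2
import numpy as np

def rot_suspect(fluor_gray, min_area=100):
    """Flag a fruit when a sufficiently large bright (fluorescing) region appears."""
    _, bright = cv2.threshold(fluor_gray, 180, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(bright)
    return any(stats[k, cv2.CC_STAT_AREA] >= min_area for k in range(1, n))

frame = np.zeros((100, 100), np.uint8)
frame[40:60, 40:60] = 220                    # synthetic fluorescing patch
print(rot_suspect(frame))                    # True
```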
Cheng et al. (2003) presented a near-infrared (NIR) and mid-infrared (MIR) dual-camera imaging approach for online apple stem-end/calyx detection. How to distinguish the stem-end/calyx from a true defect is a persistent problem in apple defect sorting systems. In a single-camera NIR approach, the stem-end/calyx of an apple is usually confused with true defects and is often mistakenly sorted. To solve this problem, a dual-camera NIR/MIR imaging method was developed. The MIR camera can identify only the stem-end/calyx parts of the fruit, while the NIR camera can identify both the stem-end/calyx portions and the true defects on the apple. A fast algorithm was developed to process the NIR and MIR images. Online test results showed that a 100% recognition rate for good apples and a 92% recognition rate for defective apples were achieved using this method. The dual-camera imaging system has great potential for reliable online sorting of apples for defects.
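The dual-camera logic reduces to a set difference: NIR candidate regions minus MIR stem-end/calyx regions leave the true defects. A minimal sketch, assuming the two masks are already co-registered:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def true_defects(nir_candidates, mir_stem_calyx, grow_px=2):
    """Both inputs are boolean masks; returns a defect-only mask."""
    # Grow the MIR mask slightly to absorb registration error (assumption).
    stem = binary_dilation(mir_stem_calyx, iterations=grow_px)
    return nir_candidates & ~stem

nir = np.zeros((50, 50), bool); nir[10:15, 10:15] = True; nir[30:35, 30:35] = True
mir = np.zeros((50, 50), bool); mir[10:15, 10:15] = True    # the calyx region
print(true_defects(nir, mir).sum(), "defect pixels remain")  # only the second blob
```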
Sunil et al. (2009) noted that identification of insect damage is critical in pecan processing, as insect damage is positively linked to the production of carcinogenic toxins in many food products. Previously, X-ray images were used for pecan defect identification, but the feature extraction was done manually. The objective of their work was to automate the feature extraction. Three energy levels (30 kV and 1 mA, 35 kV and 0.5 mA, and 40 kV and 0.75 mA) were used to acquire images of good pecans, pecans with insect exit holes, and pecans with eaten nutmeat. After thresholding, three features were extracted: the area ratio (the ratio of the area of the nutmeat and shell to the area of the total nut), the mean local intensity variation, and the average pixel intensity. The local adaptive methods performed well for the selected energy levels. The results indicate that it is feasible to distinguish between good pecans and pecans with eaten nutmeat. However, the selected features were not able to distinguish between good pecans and pecans with one or two insect exit holes.
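The three named features can be sketched as follows; the threshold, the local window size, and the use of the image size as the 'total nut' area are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pecan_features(xray, thr=80, win=9):
    """Area ratio, mean local intensity variation, average intensity of an X-ray."""
    nut = xray > thr                                    # nutmeat + shell pixels
    area_ratio = nut.sum() / xray.size                  # proxy for the total-nut area
    f = xray.astype(float)
    local_var = uniform_filter(f**2, win) - uniform_filter(f, win)**2
    mean_local_var = float(local_var[nut].mean()) if nut.any() else 0.0
    avg_intensity = float(f[nut].mean()) if nut.any() else 0.0
    return area_ratio, mean_local_var, avg_intensity

xray = np.random.default_rng(2).integers(0, 256, (128, 128)).astype(np.uint8)
print(pecan_features(xray))
```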
Jun et al. (2004) developed a mobile fruit grading robot for information-added products in precision agriculture. A prototype robot was made, consisting of a manipulator, an end-effector, a machine vision system, and a mobile mechanism. The robot could acquire five fruit images, from four sides and the top, while its manipulator transported the fruit received from the operator. A preliminary experiment was conducted in the laboratory with 372 samples of sweet pepper of the "Tosahikari" variety. A fruit mass prediction method was developed using the five images.
A high spatial resolution (0.5–1.0 mm) hyperspectral imaging system is presented as a tool
for selecting better multispectral methods to detect defective and contaminated foods and
agricultural products. Examples of direct linear or non-linear analysis of the spectral bands
of hyperspectral images that resulted in more efficient multispectral imaging techniques are
given. Various image analysis methods for the detection of defects and/or contaminations
on the surfaces of Red Delicious, Golden Delicious, Gala, and Fuji apples are compared.
Surface defects/contaminations studied include side rots, bruises, flyspecks, scabs and
molds, fungal diseases (such as black pox), and soil contaminations. Differences in spectral
responses within the 430–900 nm spectral range are analyzed using monochromatic images
and second difference analysis methods for sorting wholesome and contaminated apples.
An asymmetric second difference method using a chlorophyll absorption waveband at 685
nm and two bands in the near-infrared region is shown to provide excellent detection of the
defective/contaminated portions of apples, independent of the apple color and cultivar.
Simple and requiring less computation than other methods such as principal component
analysis, the asymmetric second difference method can be easily implemented as a
multispectral imaging technique.
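A worked sketch of an asymmetric second difference combining the 685 nm chlorophyll band with two near-infrared bands; the exact band positions, weighting, and sign convention are assumptions, not the published coefficients:

```python
def second_difference(r685, r_nir1, r_nir2):
    """Per-pixel second difference across three unevenly spaced bands."""
    return r_nir1 - 0.5 * (r685 + r_nir2)    # hypothetical weighting

# Healthy peel absorbs at 685 nm but reflects strongly in the NIR;
# a defective spectrum is flatter and therefore scores lower.
print(second_difference(0.15, 0.60, 0.65))   # healthy: 0.20
print(second_difference(0.30, 0.35, 0.40))   # defect:  0.00
```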
The hyperspectral imaging system was based on a camera (Pixel-Vision, Inc., Tigard, OR, USA) equipped with a SPECIM ImSpector version 1.7 imaging spectrograph from Spectral Imaging Ltd. (Oulu, Finland). The ImSpector has a fixed-size internal slit to define the field of view for the spatial line, and a prism/grating/prism system for the separation of the spectra along the spatial line. To improve the spatial resolution of the hyperspectral images, an external adjustable slit is placed between the sample and the camera optics; this better defines the field of view and increases the spatial resolution. Image acquisition and recording are performed with a Pentium-based PC using a general-purpose imaging software package, PixelView 3.10 Beta 4.0, from Pixel-Vision, Inc. (Tigard, OR, USA).
A C-mount set with a focus lens and an aperture diaphragm allows focusing and aperture adjustments: the circular aperture is opened to its maximum, and the external slit is adjusted with micrometer actuators to optimize light flow and resolution. The light source consists of two 21 V, 150 W halogen lamps powered by a regulated DC voltage power supply (Fiber-Lite A-240P, Dolan-Jenner Industries, Inc., Lawrence, MA, USA). The light is transmitted through two optical fibers towards a line-light reflector. The sample is placed on a conveyor belt with an adjustable-speed AC motor control (Speedmaster, Leeson Electric Motors, Denver, CO, USA). The sample is scanned line by line at an adjustable scanning rate, illuminated by the two line sources as it passes through the camera's field of view (Patrick et al., 2004).
Naoshi et al. (2008) noted that a complete fruit quality inspection system should be able to
examine two opposite sides of each fruit. In automating such an inspection system, it is a
well-known challenge to mechanically manipulate fruits of irregular shapes and sizes. An
innovatively designed rotary tray was therefore developed for use in an eggplant fruit
grading system. The rotary tray enables the presentation of two opposite sides of each fruit
for inspection by machine vision systems. Designed for handling baby eggplants, the tray
mainly consists of two cover plates and six side plates.
It is capable of performing five tasks on a fruit: receiving, presenting, holding, rotating, and
releasing. The sequence of stages that a rotary tray goes through while moving along an
inspection line is: 1. receiving a fruit, 2. presenting the fruit during the first image
acquisition, 3. holding the fruit by closing one cover plate, 4. turning the fruit to its opposite
side by rotating the entire tray and opening the other cover plate, 5. presenting the opposite
side of the fruit during the second image acquisition, 6. holding the fruit while the decision
on its quality is being made by the machine vision algorithms, and 7. releasing the fruit to a
particular location according to the inspection result.
The motions of a rotary tray are activated along a grading line by lifting guides, rotary
pushers, clicks, and cams. The actions at stages 1 through 6 are performed by mechanical
devices strategically placed along a motor driven grading conveyor. The releasing action is
triggered by a rotary solenoid when the fruit arrives at a proper location. Six eggplant
grading lines, each containing a series of the rotary trays, are being operated at an
agricultural cooperative facility in Japan.
Jiangsheng and Yibin (2006) proposed a novel approach for fruit shape detection based on a
multi-scale level set framework. An image is first decomposed from coarse to fine by
wavelet analysis, forming a series of images. Region homogeneity is then used in a level set
approach to extract the fruit shape boundary at the coarse scale. At the finer scale, these
coarse boundaries initialize boundary detection and serve as a priori shape knowledge to
guide contour evolution. This algorithm needs no noise-removal preprocessing and can find
an accurate shape boundary in a noisy image without any assumptions. The proposed
method was applied to fruit shape detection with more promising results than traditional
methods.
Color is important in evaluating quality and maturity level of many agricultural products.
Color grading is an essential step in the processing and inventory control of fruits and
vegetables that directly affects profitability. Dates are harvested at different levels of maturity
that require different processing before the dates can be packed. Maturity evaluation is crucial
to processing control, but conventional methods are slow and labor-intensive. Because date
maturity level correlates strongly with color, automated color grading can be used. A novel
and robust color space conversion and color index distribution analysis technique for
automated date maturity evaluation, well suited for commercial production, was presented.
In contrast with more complex color grading techniques, the proposed method makes it
easy for a human operator to specify and adjust color preference settings for different color
groups representing distinct maturity levels. The performance of this robust color grading
technique was demonstrated using date samples collected from field testing.
The authors concluded that the new color space conversion method and color index
distribution analysis technique is well suited for automated date maturity evaluation. The
proposed approach uses a third-order polynomial to convert 3D RGB values into a simple 1D
color space. Unlike other color grading techniques, this approach makes the selection and
adjustment of color preferences easy and intuitive. Moreover, it allows a more complicated
distribution analysis of fruit surface colors. The user can change color and consistency cutoff
points in a manner consistent with human color perception, simply sliding a cutoff point to
include fruit that is ‘‘slightly darker” or ‘‘lighter red”. Moreover, changes in preferred color
ranges can be completed without reference to precise color values. Furthermore, by converting
3D colors to a linear color space, color distribution analysis required for date maturity
evaluation is much more straightforward. The implementation of this new color space
conversion method and the results presented demonstrate the simplicity and accuracy of the
proposed technique. To calibrate the system, an experienced grader specifies a set of colors of
interest, each accompanied by a preferred index value on a linear scale. Provided that the
selected color samples cover the complete range of expected colors, accurate color grading will
result. This new technique can be applied to other color grading applications that require the
setting and adjustment of color preferences.
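A minimal sketch of the underlying calibration idea, assuming a full third-order polynomial in R, G, B fitted by least squares to grader-assigned index values; the sample colors and the maturity scale below are invented for illustration, and real use would need calibration samples covering the complete range of expected colors:

    import numpy as np

    def cubic_features(rgb):
        # Complete third-order polynomial basis in R, G, B (20 terms).
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return np.stack([np.ones_like(r), r, g, b,
                         r*r, g*g, b*b, r*g, r*b, g*b,
                         r**3, g**3, b**3, r*r*g, r*r*b,
                         g*g*r, g*g*b, b*b*r, b*b*g, r*g*b], axis=-1)

    # Calibration: a grader supplies colors of interest and a preferred
    # 1-D index value for each (all values here are hypothetical).
    samples = np.array([[0.9, 0.8, 0.2], [0.7, 0.4, 0.2],
                        [0.4, 0.2, 0.1], [0.2, 0.1, 0.05]])
    index = np.array([0.0, 0.33, 0.66, 1.0])
    coef, *_ = np.linalg.lstsq(cubic_features(samples), index, rcond=None)

    def maturity_index(rgb):
        # Map any 3-D RGB color onto the 1-D maturity scale.
        return cubic_features(np.asarray(rgb, float)) @ coef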
6. Light
6.1. Electromagnetic spectrum
Radiation energy travels in space at the speed of light in the form of sinusoidal waves with
known wavelengths. Arranged from shorter to longer wavelengths, the electromagnetic
spectrum provides information on the frequency as well as the energy distribution of the
electromagnetic radiation.
When electromagnetic radiation strikes an object, the resulting interaction is affected by the
properties of an object such as color, physical damage, and presence of foreign material on
the surface. Different types of electromagnetic radiation can be used for quality control of
foods. For example, near-infrared radiation can be used for measuring moisture content,
and internal defects can be detected by X-rays.
Figure 17. The electromagnetic spectrum comprises the visible and non-visible range.
Referring to Figure 17, the gamma rays with wavelengths of less than 0.1 nm constitute the
shortest wavelengths of the electromagnetic spectrum. At the other end of the spectrum, the
longest waves are radio waves, which have wavelengths of many kilometers. The well-
known ground-probing radar (GPR) and other microwave-based imaging modalities
operate in this frequency range.
Traditionally, gamma radiation is important for medical and astronomical imaging, leading
to the development of various anatomical imaging modalities such as computed
tomography (CT), magnetic resonance imaging (MRI), nuclear magnetic resonance (NMR),
single photon emission computed tomography (SPECT), and positron emission tomography
(PET), which operate at shorter wavelengths ranging from 10⁻⁸ m to 10⁻¹³ m.
Located in the middle of the electromagnetic spectrum is the visible range, consisting of
narrow portion of the spectrum with wavelengths ranging from 400 nm (blue) to 700 nm
(red). The popular charge-coupled device or CCD camera operates in this range.
Infrared (IR) light lies between the visible and microwave portions of the electromagnetic
band. As with visible light, infrared has wavelengths that range from near (shorter) infrared
to far (longer) infrared.
Ultraviolet (UV) light is of shorter wavelength than visible light. Similar to IR, the UV part
of the spectrum can be divided, this time into three regions: near ultraviolet (NUV, about
300 nm), far ultraviolet (FUV, about 30 nm), and extreme ultraviolet (EUV, about 3 nm).
NUV is closest to the visible band, while EUV is closest to the X-ray region and is therefore
the most energetic of the three types. FUV lies between the near and extreme ultraviolet
regions and is the least explored of the three.
Electromagnetic waves travel at the speed of light and are characterized by their frequency
(f) and wavelength (λ). These two properties are related by:

c = λf (1)

Radiation can exhibit properties of both waves and particles. Visible light acts as if it is
carried in discrete units called photons. Each photon has an energy, E, that can be calculated
by:

E = hf (2)

where h is Planck's constant (6.626 × 10⁻³⁴ J·s) (Sahin & Sumnu, 2006; Sun, 2008).
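For example, red light of wavelength λ = 700 nm has a frequency f = c/λ = (3 × 10⁸ m/s)/(700 × 10⁻⁹ m) ≈ 4.3 × 10¹⁴ Hz, so each photon carries an energy E = hf ≈ 2.8 × 10⁻¹⁹ J.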
6.2. Illumination
The provision of correct and high-quality illumination, in many vision applications, is
absolutely decisive. Engineers and machine vision practitioners have long recognized
lighting as being an important piece of the machine vision system. However, choosing the
right lighting strategy remains a difficult problem because there is no specific guideline for
integrating lighting into machine vision applications.
Therefore, the illuminant is an important factor that must be taken into account when
considering machine vision integration. Frequently, knowledgeable selection of an
illuminant is necessary for specific vision applications.
For detection of differences in color under diffuse illumination, both natural daylight and
artificial simulated daylight are commonly used. A window facing north that is free of direct
sunshine is the natural illuminant normally employed for visual color examination.
However, natural daylight varies greatly in spectral quality with direction of view, time of
day and year, weather, and geographical location. Therefore, simulated daylight is
commonly used in industrial testing. Artificial light sources can be standardized and remain
stable in quality. The Commission Internationale de l’Eclairage (CIE) (The International
Commission on Illumination) recommended three light sources reproducible in the
laboratory in 1931. Illuminant A defines light typical of that from an incandescent lamp,
illuminant B represents direct sunlight, and illuminant C represents average daylight from
the total sky. Based on measurements of daylight, CIE recommended a series of illuminants
D in 1966 to represent daylight. These illuminants represent daylight more completely and
accurately than illuminants B and C do. In addition, they are defined for a complete series of
yellow-to-blue color temperatures. The D illuminants are usually identified by the first two
digits of their color temperature (Sahin & Sumnu, 2006; Sun, 2008).
Traditionally, the two most common illuminants are fluorescent and incandescent bulbs,
even though other light sources (such as light-emitting diodes (LEDs) and
electroluminescent sources) are also useful.
Like the human eye, computer vision systems are affected by the level and quality of
illumination. The performance of the illumination system greatly influences the quality of
the image and plays an important role in the overall efficiency and accuracy of the system.
The illumination system comprises the light sources focused on the material under
inspection. Lighting type, location and color quality play an important role in bringing out
a clear image of the object. Lighting arrangements are grouped into front- or back-lighting.
Front lighting illuminates the object for better detection of external surface features of the
product, while back-lighting illuminates the object from behind and is used to enhance its
outline against the background. Light sources used include incandescent lamps, fluorescent
lamps, lasers, X-ray tubes and infra-red lamps (Narendra and Hareesh, 2010).
7. Color
Color is one of the important quality attributes in foods. Although it does not necessarily
reflect nutritional, flavor, or functional values, it determines the acceptability of a product
by consumers. Sometimes color measurement may be used in place of chemical analysis,
when a correlation exists between the color of the food and the chemical component of
interest, since color measurement is simpler and quicker than chemical analysis.
It may be desirable to follow the changes in color of a product during storage, maturation,
processing, and so forth. Color is often used to determine the ripeness of fruits. Color of
potato chips is largely controlled by the reducing sugar content, storage conditions of the
potatoes, and subsequent processing. Color of flour reflects the amount of bran. In addition,
freshly milled flour is yellow because of the presence of xanthophylls.
Color is a perceptual phenomenon that depends on the observer and the conditions in
which the color is observed. It is a characteristic of light, which is measurable in terms of
intensity and wavelength. Color of a material becomes visible only when light from a
luminous object or source illuminates or strikes the surface.
Light is defined as visually evaluated radiant energy having a frequency from about 3.9 × 10¹⁴
Hz to 7.9 × 10¹⁴ Hz in the electromagnetic spectrum. Light of different wavelengths is
perceived as having different colors. Many light sources emit electromagnetic radiation that
is relatively balanced in all of the wavelengths contained in the visible region. Therefore,
light appears white to the human eye. However, when light interacts with matter, only
certain wavelengths within the visible region may be transmitted or reflected. The resulting
radiations at different wavelengths are perceived by the human eye as different colors, and
some wavelengths are visibly more intense than others. That is, the color arises from the
presence of light in greater intensities at some wavelengths than at the others.
The selective absorption of different amounts of the wavelengths within the visible region
determines the color of the object. Wavelengths not absorbed but reflected by or transmitted
through an object are visible to observers.
Physically, the color of an object is measured and represented by spectrophotometric curves,
which are plots of fractions of incident light (reflected or transmitted) as a function of
wavelength throughout the visible spectrum (Figure 18).
Leaves look green because they reflect green wavelengths and absorb the red and blue ones.
The source of light determines what colors can be reflected: sunlight combines all visible
wavelengths, so objects appear in their colors in daylight, whereas under a light source of a
single wavelength objects can reflect only that wavelength and no other.
The trichromatic theory states that the magnitudes of three stimuli determine the perception
of a color, not the detailed distribution of light energy across the visible spectrum. The
concept is illustrated in Figure 19. If these stimuli are the same for two different light
distributions, then the color appearance of the lights will be the same, irrespective of their
spectra. The trichromatic theory is important since it forms the basis of most methods of
expressing color in terms of numbers and of the methods of reproduction of colored images.
The idea that three different types of photoreceptors participate in a population code for
color is what gives the theory its name.
Figure 19. Signals from the eye's cone cells.
Therefore, any light can be matched by a combination of three others; the three receptor
types are the retinal cones sensitive to short, middle, and long wavelengths.
Red and green are not only unique hues but are also psychologically opponent color
sensations. A color will never be described as having both the properties of redness and
greenness at the same time; there is no such color as a reddish green. In the same way,
yellow and blue are an opponent pair of color perceptions.
The six properties can be grouped into two chromatic opponent pairs, red/green and
yellow/blue, plus the achromatic pair white/black. The second stage of color vision is
thought to arise from the action of neurons, in particular from inhibitory synapses. Figure 21
illustrates the signal pathways and the processing required to account for the properties
described in the opponent theory. The human eye has receptors for short (S), middle (M),
and long (L) wavelengths, also known as blue, green, and red receptors.
The three cone types are combined to form three opponent process channels:
S vs. (M + L) = blue/yellow
(L + S) vs. M = red/green
M + L = black/white
In addition to the existence of the three different classes of cone photopigments,
considerable support for the trichromatic theory comes from observations of human color
perception. For example, experiments in which subjects are shown different colors and
asked to match them by mixing only three pure wavelengths of light in various proportions
show that humans can indeed match any color using only three wavelengths of light: red,
green and blue (Colour4Free, 2010).
Figure 21. A set of signal paths consistent with the two stages of color vision.
The three dimensional color space CIE XYZ is the basis for all color management systems.
This color space contains all perceivable colors - the human gamut. The two dimensional
CIE chromaticity diagram xyY (below) shows a special projection of the three dimensional
CIE color space XYZ. Some interpretations are possible in xyY, others require the three
dimensional space XYZ or the related three dimensional space CIELab.
The new color-matching functions x̄(λ), ȳ(λ), z̄(λ) have non-negative values, as expected,
and can be understood as weight factors. For a spectrally pure color C with a fixed
wavelength λ, the three values can be read from the diagram, as shown in Figure 23. The
color can then be mixed from the three standard primaries. Generally we write

C = X·X + Y·Y + Z·Z (4)

where each tristimulus value X, Y, Z weights the corresponding standard primary, and a
given spectral color distribution P(λ) delivers the three coordinates X, Y, Z by the following
integrals over the range from 380 nm to 700 nm (or 800 nm):

X = k ∫ P(λ) x̄(λ) dλ (5)

Y = k ∫ P(λ) ȳ(λ) dλ (6)

Z = k ∫ P(λ) z̄(λ) dλ (7)

where k is a normalization constant (680 lumens/watt for a CRT) and x̄(λ), ȳ(λ), z̄(λ) are the
color-matching functions.
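Numerically, Eqs. (5)–(7) reduce to weighted sums over a sampled wavelength grid. A sketch, assuming the CIE 1931 color-matching functions and the measured spectrum are available as arrays; the all-ones arrays below are placeholders, and the normalization (here chosen so that Y = 1) is one possible choice:

    import numpy as np

    wl = np.arange(380, 781, 5.0)      # wavelength grid, nm
    # Placeholders for xbar, ybar, zbar and the spectral distribution P;
    # substitute tabulated CIE 1931 data and a measured spectrum.
    xbar = ybar = zbar = np.ones_like(wl)
    P = np.ones_like(wl)

    k = 1.0 / np.trapz(P * ybar, wl)   # normalization so that Y = 1
    X = k * np.trapz(P * xbar, wl)     # Eq. (5)
    Y = k * np.trapz(P * ybar, wl)     # Eq. (6)
    Z = k * np.trapz(P * zbar, wl)     # Eq. (7)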
The chromaticity values x, y, z depend only on the hue (dominant wavelength) and the
saturation; they are independent of the luminance:

x = X / (X + Y + Z) (8)

y = Y / (X + Y + Z) (9)

z = Z / (X + Y + Z) (10)
Obviously, x + y + z = 1. If we draw XYZ and xyz in one diagram, all the xyz values lie on a
triangular plane, obtained by projecting the arbitrary color XYZ along a line through the
origin; this is a planar projection with the center of projection at the origin, as shown in
Figure 24. The vertical projection onto the xy-plane gives the chromaticity diagram xyY
(view direction). To reconstruct a color triple XYZ from the chromaticity values x, y, one
piece of additional information is needed: the luminance Y.
z = 1 − x − y (11)

X = (x / y) · Y (12)

Z = (z / y) · Y (13)
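Eqs. (8)–(13) translate directly into a pair of conversion routines; a minimal sketch in Python:

    def xyz_to_xyY(X, Y, Z):
        # Eqs. (8)-(10): chromaticity coordinates; luminance Y is kept.
        s = X + Y + Z
        return X / s, Y / s, Y

    def xyY_to_XYZ(x, y, Y):
        # Eqs. (11)-(13): recover the tristimulus values from x, y and Y.
        z = 1.0 - x - y
        return (x / y) * Y, Y, (z / y) * Y

    # Round trip for an arbitrary color:
    x, y, Y = xyz_to_xyY(0.4, 0.3, 0.2)
    print(xyY_to_XYZ(x, y, Y))   # -> (0.4, 0.3, 0.2)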
The interior and boundary of the diagram represent all visible chromaticity values. The
boundary of the diagram represents the 100 percent pure colors of the spectrum. The line
joining the red and violet spectral points, called the purple line, is not part of the spectrum.
The center point E of the diagram represents a standard white light, which approximates
sunlight. Luminance values are not available in the chromaticity diagram because of
normalization. Colors with different luminance but the same chromaticity have the same
point. The chromaticity diagram is useful, among other things, for comparing the color
gamuts of different devices and systems.
Each color model uses a different color representation. The term color gamut is used to
denote the universe of colors that can be created or displayed by a given color system or
technology. The colors that are perceivable by the human visual system fall within the
boundaries of the horse-shoe shape derived from the CIE-XYZ color space diagram, while
the RGB colors (that can be displayed on an RGB monitor) fall within the red triangle that
connects the RGB primary dots.
Clearly, the full range of colors perceptible by humans is not available within the RGB color
model, and transformations from one space to another may create colors that fall outside
the target color gamut.
The choice of a color model is based on the application. Some equipment has limiting factors
that dictate the size and type of color model that can be used; for example, the RGB color
model is used with color CRT monitors, the YIQ color model is used with the broadcast TV
color system, and the CMY color model is used with some color-printing devices.
Unfortunately, none of these models is particularly easy to use compared with human
perception. According to intuitive human color concepts, it is easy to describe a color in
terms of shade, tint, and tone, or of hue, saturation, and brightness. Color models which
attempt to describe colors in this way include HSV, HLS, CIEL*a*b*, CIEL*C*H*, and
CIEL*u*v* (Shen, 2003; Fairchild, 1997; Findling, 1996).
The RGB color space can be defined by mapping the red, green, and blue intensity
components into the Cartesian coordinate system. The dynamic range of the intensity values
is scaled from 0 to 255 counts, and each primary color is represented by eight bits. The RGB
color space shown in (Figure 25) displays 16.77 million discrete colors. The red, green, and
blue corners of the cube indicate 100 percent color saturation.
An imaginary line can be drawn from the origin of the cube to the furthest opposite corner.
Along this line are 256 achromatic colors representing possible shades of gray. Black resides
at the origin of the color cube, and white is at the opposite corner. The RGB system enables
the reproduction of any color within the color space by using an additive mixture of the
primary colors. For example, white is the sum of 255 counts each of red, green, and blue,
usually expressed as RGB (255, 255, 255).
The CMY values are obtained from RGB values normalized to the range [0, 1] by:

C = 1 − R
M = 1 − G (14)
Y = 1 − B

This equation reiterates the subtractive nature of the CMY model.
Although equal parts of cyan, magenta, and yellow should produce black, it has been found
that in printing applications this leads to muddy results.
Thus in printing applications a fourth component of true black is added to create the CMYK
color model. Four-color printing refers to using this CMYK model. As with the RGB model,
point distances in the CMY space do not truly correspond to perceptual color differences.
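A short sketch of the conversion, using one common convention for extracting the black (K) component; conventions for undercolor removal vary between implementations:

    def rgb_to_cmyk(r, g, b):
        # Eq. (14): C = 1 - R, M = 1 - G, Y = 1 - B (RGB normalized to [0, 1]).
        c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
        k = min(c, m, y)            # true black replaces the muddy CMY mixture
        if k == 1.0:                # pure black: avoid division by zero
            return 0.0, 0.0, 0.0, 1.0
        return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

    print(rgb_to_cmyk(1.0, 0.0, 0.0))   # red -> (0.0, 1.0, 1.0, 0.0)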
The YIQ model is used in U.S. commercial color television broadcasting and is closely
related to color raster graphics, being historically suited to monochrome as well as color
CRT displays. The parameter Y is luminance, the same as in the XYZ model. Parameters I
and Q carry the chromaticity, with I containing orange-cyan hue information and Q
containing green-magenta hue information. There are two peculiarities of the YIQ color
model. The first is that the human visual system is more sensitive to changes in luminance
than to changes in chromaticity; the second is that the usable color gamut is quite small, so
chromaticity can often be specified adequately with one rather than two color dimensions.
These properties are very convenient for the transfer of TV signals. An approximate linear
transformation from a given set of RGB coordinates to the YIQ space is:

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R − 0.274 G − 0.322 B
Q = 0.211 R − 0.523 G + 0.312 B
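The same transform written as a matrix product; the Python standard library's colorsys.rgb_to_yiq implements a slightly rounded variant of these coefficients:

    import numpy as np

    RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                           [0.596, -0.274, -0.322],
                           [0.211, -0.523,  0.312]])

    def rgb_to_yiq(rgb):
        return RGB_TO_YIQ @ np.asarray(rgb, float)

    print(rgb_to_yiq([1.0, 1.0, 1.0]))   # white -> Y = 1, I = Q = 0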
In the HSV color model, colors are arranged in a hexcone: the boundary of the hexcone
represents the various hues, saturation is measured along a horizontal axis, and value along
the vertical axis through the centre of the hexcone. This arrangement of the color wheel
mirrors human color perception.
Hue is represented by the angle around the vertical axis, starting with red at 0° and
followed by yellow, green, cyan, blue, and magenta at 60° intervals. Any two colors 180°
apart are complementary. Saturation (S) varies from 0 to 1 and is the fraction of the distance
from the center to the edge of the hexcone; at S = 0 the colors reduce to the gray scale, while
at S = 1 they are pure hues. Value (V) also varies from 0 to 1: at the origin (V = 0) the color is
black, and at the top of the hexcone colors have their maximum intensity.
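The hexcone model is implemented in the Python standard library's colorsys module, which scales H, S and V each to [0, 1] (multiply H by 360 to obtain degrees):

    import colorsys

    print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))      # pure red   -> (0.0, 1.0, 1.0)
    print(colorsys.rgb_to_hsv(0.5, 0.5, 0.5))      # mid gray   -> (0.0, 0.0, 0.5), S = 0
    print(colorsys.hsv_to_rgb(120/360, 1.0, 1.0))  # pure green -> (0.0, 1.0, 0.0)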
The HSL color model is similar to the HSV system. A double hexcone, with apexes at both
pure white and pure black rather than a single apex at pure black, is used to visualize the
subspace in three dimensions, as shown in Figure 28.
In HSL, the saturation component always goes from a fully saturated color to the
corresponding gray value; whereas in HSV, with V at its maximum, saturation goes from a
fully saturated color to white, which may not be considered intuitive to some. Additionally,
in HSL the intensity component always spans the entire range from black through the
chosen hue to white. In HSV, the intensity component only goes from black to the chosen
hue. Because of the separation of chromaticity from intensity in both the HSV and HSL color
spaces, it is possible to process images based on intensity only, leaving the original color
information untouched. Because of this, HSV and HSL have found widespread use in
computer vision research.
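A sketch of such intensity-only processing, assuming a float RGB image with values in [0, 1] and using matplotlib's vectorized HSV conversion; a simple contrast stretch stands in for whatever intensity operation is needed:

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    rng = np.random.default_rng(1)
    img = rng.random((64, 64, 3))        # stand-in for a real RGB image

    hsv = rgb_to_hsv(img)
    v = hsv[..., 2]
    hsv[..., 2] = (v - v.min()) / (v.max() - v.min())  # stretch V only
    out = hsv_to_rgb(hsv)                # hue and saturation are untouched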
The CIELAB color measurement method was developed in 1976 and offers advantages over
the system developed in 1931: it is more uniform and is based on a more useful and widely
accepted theory of opponent colors.
CIEL*a*b* defines L* as lightness; a* and b* are defined as the color axes to describe the hue
and saturation. The color axes are based on the fact that a color can’t be red and green, or
both blue and yellow, because these colors oppose each other. The a* axis runs from red (+ a)
to green (- a) and the b* axis from yellow (+ b) to blue (- b) as shown in Figure 29. Hue
values do not have the same angular distribution in CIEL*a*b* color model as the hue value
in HSV. In fact, CIEL*a*b* is intended to mimic the logarithmic response of the human eye,
and it overcomes the limitations of color gamut in the CIE chromaticity diagrams. For
conversion to other color models, L* is defined from 0 (black) to 100 (white), a* from –100
(green) to 100 (red), and b* from –100 (blue) to 100 (yellow).
CIEL*C*H* has the same definition as CIEL*a*b* except that its values are expressed in a
polar coordinate system: L* measures brightness, C* measures saturation (chroma), and H*
measures hue. This model can be preferred over HSV because it is based on CIEL*a*b*
rather than RGB, and hence is device-independent.
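A compact sketch of the XYZ → CIEL*a*b* → CIEL*C*H* chain, using the standard CIE 1976 formulas; the D65 white point used as a default below is an assumption, and the white point of the actual illuminant should be substituted:

    import math

    def xyz_to_lab_lch(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
        def f(t):
            d = 6.0 / 29.0
            return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0

        fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
        L = 116.0 * fy - 16.0                         # lightness
        a = 500.0 * (fx - fy)                         # red/green axis
        b = 200.0 * (fy - fz)                         # yellow/blue axis
        C = math.hypot(a, b)                          # chroma (saturation)
        H = math.degrees(math.atan2(b, a)) % 360.0    # hue angle
        return L, a, b, C, H

    print(xyz_to_lab_lch(95.047, 100.0, 108.883))  # white -> L* = 100, a* = b* = 0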
The color models which are used in computer graphics have been traditionally designed for
specific devices, such as RGB color model for CRT displays and CMY color model for
printers. They are device dependent. Therefore, it becomes meaningless to compare the
colors with different devices or the same device under different conditions.
CIEL*a*b* is a device-independent color model and is used for color management as the
device-independent model of the ICC (International Color Consortium) device profiles
(Shen, 2003; Fairchild, 1997; CIE, 1999; CIE, 1998; Snead, 2005; Findling, 1996; Braun et al., 1998).
Most computer displays are similar in their phosphor chromaticities (primaries) and
transfer function. The RGB color model is native to CRT displays, scanners and digital
cameras, which are the devices with the highest performance constraints. The RGB color
model can be made device-independent in a straightforward way, and it is possible to
describe color gamuts that are large enough for all but a small number of applications.
The accurate handling of color characteristics of digital images is a non-trivial task because
RGB signals generated by digital cameras are ‘device-dependent’, i.e. different cameras
produce different RGB signals for the same scene. In addition, these signals will change over
time as they are dependent on the camera settings and some of these may be scene
dependent, such as the shutter speed and aperture diameter. In other words, each camera
defines a custom device-dependent RGB color space for each picture taken. As a
consequence, the term RGB (as in RGB-image) is clearly ill-defined and meaningless for
anything other than trivial purposes. As measurements of colors and color differences here
are based on a standard colorimetric observer as defined by the CIE (Commission
Internationale de l'Eclairage), the international standardizing body in the field of color
science, such measurements cannot be made on RGB images unless the relationship
between the varying camera RGB color spaces and the colorimetric color spaces (the color
spaces based on that standard observer) is determined. However, there is a standard RGB
CIE colorimetric color spaces. Furthermore, sRGB should more or less display realistically
on most modern display devices without extra manipulation or calibration.
The sRGB tristimulus values are linear combinations of the 1931 CIE XYZ values as
measured on the faceplate of the display, which assumes the absence of any significant
veiling glare. A linear portion of the transfer function of the dark end signal is integrated
into the encoding specification to optimise encoding implementations.
A calibrated, nonlinear standard RGB color space called sRGB has been proposed by
Microsoft and Hewlett-Packard. Benefits of sRGB are easier portability of RGB color images
(especially on the Internet) and faster computational performance than in the uniform CIE
spaces. The white point of sRGB is D65 as in the ITU-R BT.709 standard. The phosphor
chromaticities are also from BT.709. The sRGB color space is large enough to fit in most
device RGB spaces.
The suggested CRT gamma is 2.2 which complies with most monitors. The sRGB color space
is computationally fast enough for interactive video and is becoming the future de facto
Internet standard (Shen, 2003) (CIE, 1999) (CIE, 1998).
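The sRGB transfer function mentioned above (a power-law section approximating gamma 2.2 plus the linear segment integrated at the dark end) can be written directly; a minimal sketch:

    def srgb_encode(v):
        # Linear light -> sRGB signal; the 12.92 segment handles the dark end.
        return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

    def srgb_decode(v):
        # sRGB signal -> linear light (inverse of srgb_encode).
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    x = 0.5
    print(srgb_encode(x), srgb_decode(srgb_encode(x)))  # round trip returns 0.5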
Author details
Ayman H. Amer Eissa*
Department of Agricultural Engineering, Faculty of Agriculture, Minoufiya University, Egypt
Department of Agriculture Systems Engineering, College of Agricultural and Food Sciences,
King Faisal University, Saudi Arabia
* Corresponding Author
8. References
Alonge, A. F., & Adigun, Y. J. (1999). Some physical and aerodynamic properties of sorghum
as related to cleaning. In Paper presented at the 21st Annual Conference of the Nigerian
Society of Agricultural Engineers (NSAE) at Federal Polytechnic, Bauchi, Nigeria.
Anderson, R., Daniel, C.H., Jacques, W., and Siome, G. (2010). Automatic fruit and
vegetable classification from images. Computers and Electronics in Agriculture, 70:
96–104.
Bassiou, N., and C. Kotropoulos. (2007). Color image histogram equalization by absolute
discounting back-off. Computer Vision and Image Understanding, 107: 108–122.
Bennedsen, B.S., D.L. Peterson, and A. Tabb. (2005). Identifying defects in images of rotating
apples. Comput. Electron. Agr. 48(2): 92-102.
Bennedsen, B.S. and D.L. Peterson, (2005). Performance of a system for apple surface defect
identification in near-infrared images. Biosyst. Eng. 90(4): 419-431.
Bennedsen, B.S. and D.L. Peterson, (2004). Identification of apple stem and calyx using
unsupervised feature extraction. Trans. ASAE 47(3): 889-894.
Braun, G. J., F. Ebner, and M. D. Fairchild.1998. Color Gamut Mapping in a Hue-Linearized
CIELAB Color Space, IS&T/SID 6th Color Imaging Conference, Scottsdale, 163-168.
Brennan J. G., (2006). Food Processing Handbook. Chapter(1) Postharvest Handling and
Preparation of Foods for Processing. WILEY-VCH Verlag GmbH & Co. KGaA,
Weinheim
Brosnan, T., and D.W. Sun, (2004). Improving quality inspection of food products by
computer vision - a review. J. Food Engin. 61(1): 3-16.
Castleman, K. (1996). Digital image processing. Englewood Cliffs, NJ: Prentice-Hall, 667p.
Chakespari, A. G; Rajabipour, A and H. Mobli (2010). Post Harvest Physical and Nutritional
Properties of Two Apple Varieties. Journal of Agricultural Science Vol. 2, No. 3; (61-68).
Chen, Y.R., K. Chao, and M.S. Kim, (2002). Machine vision technology for agricultural
applications. Comput. Electron. Agr. 33(2/3): 173-191.
Cheng, X., Y. Tao, Y.R. Chen, and Y. Luo. (2003). NIR/MIR dual-sensor machine vision
system for online apple stem-end/calyx recognition. Transactions of the ASAE, 46(2):
551–558.
CIE. 1998. Color Measurement and Management in Multimedia Systems and Equipment. Part
2-1: Default RGB Color Space – sRGB. http://www.colour.org/tc8-05/Docs/colorspace/.
CIE. 1999. Multimedia Systems and Equipment Color Measurement and Management. Part
2-2: Color management Extended RGB Color Space – sRGB64.
http://www.colour.org/tc8-05/Docs/colorspace/.
Colour4Free. 2010. H14 color vision theory V01.doc page 1 of 4.
http://colour4free.org.uk/Books/H14ColourVisionTheoryV01.pdf.
Crowe, T.G., and M.J. Delwiche. (1996). Real-time defect detection in fruit-part II: An
algorithm and performance of a prototype system. Transactions of the ASAE 39(6):
2309-2317.
Dah, J. L., S. Robert., A. James and M.C. Steve. (2008). Development of a machine vision
system for automatic date grading using digital reflective near-infrared imaging.
Journal of Food Engineering 86 p.p 388–398.
ElMasry, G., Wang, N., and Vigneault, C. (2009). Detecting chilling injury in Red Delicious
apples using hyperspectral imaging and neural networks. Postharvest Biology and
Technology, 52: 1–8.
Fairchild, M.D. 1997. Status of CIE Color Appearance Models. http://www.colour.org/tc8-
01/MDF_AICpaper.pdf.
Fernando, L, G., Gabriela, A, G., José. B., Nuria, A., and José, M, V. (2010). Automatic
detection of skin defects in citrus fruits using a multivariate image analysis approach.
Computers and Electronics in Agriculture 71. pp, 189–197.
Findling, H. 1996. Fuzzy Algorithm for the Enhancement of Noise degraded Images. Master
thesis of science. School of Florida Institute of Technology. Melbourne, Florida.
Francis, F.J. 1980. Colour quality evaluation of horticultural crops. Hort Science, 15(1): 14-15.
Geyer, Lewis H and Ranald F. Perry. (1982). Variation in detectability of multiple flaws with
allowed inspection time. Human Factors 24(3): 361-365.
Guyer, D., G. Brown, E. Timm, R. Brook, and E. Marshall. (1994). Lighting systems for fruit
and vegetable sorting. Extension Bulletin E-2559. East Lansing, Mich.: Cooperative
Extension Service, Michigan State University.
Heinemann P.H., Hughes R., C.T.Morrow, H.J. Sommer, R.B. Beelman and P.J. Wuest (1994).
Grading of mushrooms using a machine vision system. Transactions of the ASAE 37 (5):
1671–1677.
Heinemann, P.H., Z.A. Varghese, C.T. Morrow, H.J. Sommer III, and R.M. Crassweller.
(1995). Machine vision inspection of "Golden Delicious" apples. Applied Engineering in
Agriculture 11 (6): 901-906.
Hoffmann, G. 2000. CIE color space. http://www.fho-emden.de/~hoffmann/ciexyz29082000.pdf
Jiangsheng, G., and Y., Yibin. (2006). Fruit Shape Detection Based on Multi-scale Level Set
Framework. ASABE Paper No 063088.
Jun, Q., S. Akira, S. Sakae, and K. Naoshi. (2004). Mobile Fruit Grading Robot -Concept and
prototype. American Society of Agricultural and Biological Engineers, St. Joseph,
Michigan www.asabe.org.
Kang, S.P., East, A.R., and Trujillo, F.J. (2008). Colour vision system evaluation of bicolour
fruit: A case study with ‘B74’ mango. Postharvest Biology and Technology 49. pp, 77–
85.
Kheiralipour K., Tabatabaeefar, H., Mobli, A., Rafiee, S., Sharifi, M., Jafari, A. and
Rajabipour, A. (2008). Some physical and hydrodynamic properties of two varieties of
apple (Malus domestica Borkh L.).Int. Agrophysics, vol.22, pp, 225-229.
Kondo, N. (2009). Automation on fruit and vegetable grading system and food traceability.
Trends in Food Science & Technology, doi: 10.1016/j.tifs.
Kondo,N. (2010). Automation on fruit and vegetable grading system and food traceability.
Trends in Food Science & Technology 21. pp, 145-152.
Leemans, V., Magein, H., and Destain, M.F. (2002). On-line fruit grading according to their
external quality using machine vision. Biosyst. Eng. 83(4): 397-404.
Mathworks (2005). Image processing toolbox for use with Matlab. Natick, USA: The
Mathworks Inc.
Mendoza, F., & Aguilera, J. M. (2004). Application of image analysis for classification of
ripening bananas. Journal of Food Science, 69(9), E471–E477.
Mendoza, F., Dejmek, P., & Aguilera, J. M. (2006). Calibrated color measurements of
agricultural foods using image analysis. Postharvest Biology and Technology, 41,285–
295.
Mery, D., & Pedreschi, F. (2005). Segmentation of color food images using a robust
algorithm. Journal of Food Engineering, 66, 353–360.
Meyers, J.B., S.E. Prussia, C.N. Thai, T.L. Sadosky, and D.T. Campbell. (1990). Visual
inspection of agricultural products moving along sorting conveyors. Transactions of the
ASAE 33 (2): 367-372.
Miller, W.M. 1995. Optical defect analysis of Florida citrus. Applied Engineering in
Agriculture 11 (6): 855-860.
Mirasheh, R. (2006). Designing and making procedure for a machine determining olive
image dimensions. Master of Science Thesis, Tehran University.
Mohsenin, N.N. (1970). Physical properties of plant and animal materials. Vol.1. Structure
physical characteristics and mechanical properties. Gordon and Breach Science
Publications, New York.
Mohsenin, N. N. (1986). Physical properties of plant and animal materials. Gordon of Breach
science publishers, New York.
Muir, A.Y., I.D.G. Shirlaw, and D.C. Mckae. (1989). Machine vision using spectral imaging
techniques. Agricultural Engineering 44(3): 79-81.
Mummert, C.N. (2004). The development of a machine vision system to measure the shape
of a sweetpotato root. Master of thesis of science. Fac, of Agric. North Carolina State
University.
Naoshi, K., N. Takahisa, M. Yoshihid, L. Peter, K. Mitsutaka, K. Makoto, D.F. Paolo, and O.
Yuichi. (2008). A double image acquisition system with visible and UV LEDs for citrus
fruit. American Society of Agricultural and Biological Engineers, ASABE.
Narendra, V.G. and Hareesh, K.S. 2010. Prospects of Computer Vision Automated Grading
and Sorting Systems in Agricultural and Food Products for Quality Evaluation.
International Journal of Computer Applications. V (1), No. 4.
Nielsen, H.M. and W. Paul, (1998). Modeling image processing parameters and consumer
aspects for tomato quality grading. In: Mathematical and Control Application in
Agriculture and Horticulture. Proceedings of the Third IFAC Workshop,
Pergamon/Elsevier, Oxford, UK.
Onder, K., Aziz, O., and Ibrahim, A. (2006). Physical properties of cactus pear (Opuntia
ficus-indica L.) grown wild in Turkey. Journal of Food Engineering, 73: 198–202.
Patrick, M, M., Yud, R, C., Moon, S, K., and Diane, E, C. (2004). Development of
hyperspectral imaging technique for the detection of apple surface defects and
contaminations. Journal of Food Engineering 61. pp, 67–81.
Paulus, I., R. De Busscher, and E. Schrevens, (1997). Use of image analysis to investigate
human quality classification of apples. Journal of Agricultural Engineering Research 68,
pp. 341-353.
Peschel S., Franke R., Schreiber L., and Knoche M., 2007. Composition of cuticle of
developing sweet cherry fruit. Phytochemistry, 68, 1017-1025.
Rehkugler, G.E. and J.A. Throop. (1976). Optical-mechanical bruised apple sorters. pp. 185–188,
192. In: J.J. Gaffney (ed.), Quality Detection in Foods. ASAE, St. Joseph, MI.
Rigney M.P., G.H. Brusewitz, and G.A. Kranzler. (1992). Asparagus defect inspection with
machine vision. Transactions of the ASAE 35(6): 1873-1878.
Sahin, S. and G. S. Sumnu. 2006. Physical Properties of Foods. Library of Congress Control
Number: 2005937128. Springer Science+Business Media, LLC.
Shen, Z. 2003. Color differentiation in digital images. Master thesis of science. School of
Computer Science and Mathematics. Victoria University of Technology, Australia.
Snead, M.C.2005. A method of content-based image retrieval for the generation of image
mosaics. Master thesis of science. School of Electrical Engineering and Computer
Science. University of Central Florida .
Stephenson, K.Q. (1976). Color sorting system for tomatoes. ASAE. PP. 199-201.
Sun, D.W. 2008. Computer Vision Technology for Food Quality Evaluation.Food Science
and Technology, International Series. ISBN: 978-0-12-373642-0.
Sun, D.W., and T. Brosnan. (2003). Improving quality inspection of food products by
computer vision: A Review. Journal of Food Engineering, 61: 3-16.
Sunil K. M., R. W. Paul, W. Ning, B. Timothy, and O. M. Niels. (2009). Adaptive
Thresholding of Pecan X-ray Images using Water Flow Models and Feature Extraction.
ASABE.
Tao, Y. (1996a). Methods and apparatus for sorting objects including stable color
transformation. U.S. Patent No. 5,533,628.
Tao, Y. (1996b). Spherical transform of fruit images for on-line defect extraction of mass
objects. Opt. Engng. 35(2):344-350.
Tao, Y. (1998). Defective object inspection and separation system. U.S. Patent No. 5,732,147.
Throop., J.A., Aneshansley, D.J., Upchurch, B.L., and Anger, B. (2001). Apple orientation on
two conveyors: performance and predictability based on fruit shape characteristics.
Trans. ASABE. 44(1): 99-109.
Throop, J.A., D.J. Aneshansley, and B.L. Upchurch. (1993). Investigation of texture analysis
features to apple bruises. ASAE Paper No. 93-3527.
Upchurch, B.L., H.A. Affeldt, W.R. Hruschka, and J.A. Throop. (1991). Optical detection of
bruises and early frost damage on apples. Transactions of the ASAE, 34(3): 1004–1009.
Wen, Z., and Y. Tao. (1997). Adaptive spherical transform of fruit images for high-speed
defect detection. ASAE Paper No. 97-3076.
Xiao-bo, Z., Jie-wen, Z., Yanxiao, L., and Holmes, M. (2010). In-line detection of apple
defects using three color cameras system. Computers and Electronics in Agriculture
70.pp, 129–134.
Yang, Q. (1993). Finding stalk and calyx of apples using structured lighting. Comput.
Electron. Agric. 8: 31-42.