
2020 International Conference on Intelligent Engineering and Management (ICIEM)

A Machine Vision System for Manual Assembly Line Monitoring
Paola Pierleoni, Alberto Belli, Lorenzo Palma
Information Engineering Department, Università Politecnica delle Marche
Via Brecce Bianche 12, 60131 Ancona (An), Italy
[email protected], [email protected], [email protected]

Michela Palmucci
Logistic Centre, iGuzzini Illuminazione S.p.A.
Via Mariano Guzzini 37, 62019 Recanati (Mc), Italy
[email protected]

Luisiana Sabbatini
Information Engineering Department, Università Politecnica delle Marche
Via Brecce Bianche 12, 60131 Ancona (An), Italy
[email protected]

Abstract—Customers are asking for more and more customized products and expect to receive them in very short times. That is only reachable if there is horizontal and vertical integration, together with high information availability and transparency inside a company. When production is not fully automatized, i.e. in those companies where assembly or production still relies on the manual work of people, monitoring the line production in terms of number of pieces produced may be tricky due to the inevitable variability that operators add to the process, thus making essential the creation of smart systems able to deal with such complex environments and autonomously monitor them. Computer vision systems can be customized and made very smart in various contexts, if properly modeled. In this paper we describe a Machine Vision algorithm that, building upon BLOB (Binary Large OBjects) analysis, is able to detect and count objects produced in a complex industrial context of manual assembly. The developed algorithm is then compared to a more robust one taken as reference point. The comparison between our algorithm and the Machine Learning-based detector aims at showing the comparable Accuracy, Specificity and Sensitivity of our method, together with its higher versatility and processing speed, thus making it applicable to the plant-wide real-time monitoring of manual assembly lines.

Keywords—Machine Vision System, Smart Workstation, Flexible Assembly, Industry 4.0, ACF Object Detector, Machine Learning.

I. INTRODUCTION

It is well known that the market has been shifting toward customer orientation for many years. This easily recognizable shift has fostered the emergence of many practical and operative challenges for competing companies, given that customers increasingly ask for the right product in the right place at the right time and right price [1]. Extreme product customization, short lead times, low volumes and a high mix in a cost-efficient way are the market characteristics to be met, and in order to succeed in this complex and competitive context companies are forced to massively embrace Information and Communication Technologies (ICT) in their daily production processes to become members of the so-called Industry 4.0 paradigm [2] [3]. The general characteristics and pillars of Industry 4.0 should be implemented in every part of a company in order to be flexible and fast enough for the market. In order to do so, transparency is pursued through remote monitoring to facilitate information integration, inter-operability, data analysis, decentralized decision making and virtualization of the smart factory in the so-called Cyber-Physical Production System (CPPS), essential components for achieving an efficient and flexible manufacturing capability [4] [5]. Transparency and real-time control over the production process, which is necessary for satisfying customer requirements, can be reached through the proper implementation of smart data collection at the bottom level, i.e. the physical layer, and the subsequent integration of the collected data into the higher levels, until reaching the terminal layer where decision makers, customers, maintainers and implementers can exploit the data gathered at lower levels [6]. To be reactive to market demand, more and more companies are embracing Lean Manufacturing methodologies together with Industry 4.0 technologies in order to be fast and flexible. From a practical point of view, the ability to monitor the production in place is of extreme importance, given that most manufacturing systems are reconfigurable and flexible in order to efficiently manage the high mix produced. Often, materials and spare components are supplied as soon as they are required for production, to avoid the storage of raw materials, which is a well-known cost and a locked investment. Obviously, this is only feasible if production is monitored in real time, something that can be trivial in automatized production lines but tricky in a context where production is still grounded on the manual work of operators. In such situations it is hard to collect real-time data from the production facility, and so it is hard to properly and efficiently manage material replenishment and production line setups. Fortunately, Computer Vision technologies offer a highly flexible and smart solution in many contexts of use, at the cost of a tailored and customized adaptation. There are several examples of application of Computer Vision in the industrial world, also referred to as Machine Vision [7] [8] [9] [10], specifically

33
978-1-7281-4097-1/20/$31.00 ©2020 IEEE

Authorized licensed use limited to: Cornell University Library. Downloaded on September 04,2020 at 23:01:50 UTC from IEEE Xplore. Restrictions apply.
2020 International Conference on Intelligent Engineering and Management (ICIEM)
targeted at increasing manufacturing flexibility and improving efficient inventory management, but the vast majority work in contexts without human interaction. Besides, there are several contributions on Machine Vision systems able to count using a video input [11] [12] [13], but again none of them manages contexts with human interaction. For a manual assembly where pieces are passed from one operator to the next, it is hard to keep track of how many pieces out of the order's target have already been produced. In this paper we develop a Machine Vision System able to count pieces passed by an operator to the following one. The developed solution aims at being as flexible as possible, envisioning its scalability to a plant-wide level, where the product mix produced is extremely higher (thousands of product types) compared to the mix produced in one line only (all the variations of one product type). This solution is grounded on image pre-processing, enhancement, binarization and BLOB analysis. This simple, fast yet versatile solution is then compared to a Machine Learning-based one, an Aggregated Channel Features (ACF) Object Detector [14], which is our reference point in terms of accuracy of the detection and counting. The Detector has been trained on the product type variations produced on one assembly line, the one where we carried out the testing phase for both algorithms, and for this reason it performs very well in this restricted context, while it fails if we extend its application to other assembly lines where product shapes and dimensions differ from the training set used. On the other hand, the developed BLOB-based solution is not product-type dependent and works in a manner that allows its plant-wide application, to other assembly lines also.

The remainder of the paper is organized as follows. In Section 2 we provide the description of the problem to be solved together with the description of the two algorithms written for counting using a simple video streaming data source. Section 3 presents the test settings and the resulting performances of the two solutions in terms of specificity, sensitivity and accuracy, and qualitatively evaluates the versatility and processing speed of the two solutions. The last Section is dedicated to the conclusions of this work, as well as to the following steps we are going to take in order to improve the performances of the algorithm and of the system inside the broader ICT architecture of the company.

II. MATERIALS AND METHODS

Our aim is to monitor a manual assembly line by counting the pieces produced, in order to support the toolmaker in the setups that are necessary between orders composed of different product types that require different materials and/or different tools. Four workstations are placed in line, with intermediate white rectangular tables of 30 cm by 65 cm in between. An operator places on the subsequent intermediate table only pieces ready for the following step, as soon as he finishes working on each of them (they are always added one at a time). The next operator picks one workpiece from the previous intermediate table and, after working on it, places the processed workpiece on the following intermediate table, and so on until the fourth operator, who outputs packaged products onto a roller conveyor. The main source of complexity is the fact that there isn't a fixed position for the piece placed on the intermediate table and sometimes, due to slight misalignments between workstation processing times, more than one piece lies on the intermediate table.

An important contextual characteristic is that the objects are mostly black, while the tables and the gloves worn by the operators are white. This eases the distinction between the objects to be counted and the rest of the environment, that is the background and the operators' hands, which must not interfere in the counting. Every kind of non-vision technology may fail in such a variable context, where hands and products repetitively go back and forth between the workstations and the intermediate tables, which is the reason why the adoption of a camera is necessary for data acquisition and solution development. For gathering data for the algorithm development phase, we mounted on an intermediate table, with a top-down view perpendicular to the table center, a SVPRO USB camera with a 2.8-12 mm varifocal lens, minimum illumination of 0.01 lux, a Sony IMX322 sensor and 1920 x 1080 resolution, which reaches a rate of 30 fps and uses the H.264 compression standard [15]. The camera lens is 80 cm from the table plane, as depicted in Fig. 1. It is observable from the picture that the camera has been connected directly through the USB cable to a laptop where, using Matlab ver. R2018b - 9.5.0.1049112, with the Image Acquisition Toolbox and Image Processing Toolbox installed, all the video acquisitions and the algorithm development process have been carried out. The Region of Interest (ROI) of the camera has been set through the laptop interface. In this way we are able to focus our attention within the intermediate table limits only. According to these settings, we are able to discern 4 main phases inside a typical continuous video acquired using the camera:

• the empty table;
• a hand that holds a piece, places it on the table and then goes away from the Region Of Interest (ROI) of the camera;
• the piece alone on the table;
• a hand that comes into the field of view of the camera and picks the piece to take it away from the field of view.

Given the operators' processing time variability previously described, it may happen that a piece is added on the table while the previous one hasn't yet been picked up by the following operator, which is the reason why the solution should be smart enough to take this eventuality into consideration.

Fig. 1. Camera positioning over the intermediate table center for video acquisition, and the configured Region of Interest depicted in red.

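The ROI restriction and the frame sub-sampling later used in the tests (analysing 15 of the 30 fps) can be sketched as follows; `roi_frames`, the `(x0, y0, x1, y1)` ROI layout and the row-major frame representation are illustrative assumptions, not the paper's Matlab code:

```python
def roi_frames(frames, roi, step=2):
    """Yield every `step`-th frame cropped to the configured Region of
    Interest, given as (x0, y0, x1, y1) in pixel coordinates.
    step=2 mirrors the 30 fps -> 15 fps sub-sampling used offline."""
    x0, y0, x1, y1 = roi
    for i, frame in enumerate(frames):
        if i % step == 0:                      # keep one frame out of `step`
            yield [row[x0:x1] for row in frame[y0:y1]]

# four dummy 4x6 frames, with pixel value = frame index
video = [[[i] * 6 for _ in range(4)] for i in range(4)]
clips = list(roi_frames(video, roi=(1, 1, 4, 3)))
print(len(clips))                        # -> 2 (frames 0 and 2 survive)
print(len(clips[0]), len(clips[0][0]))   # -> 2 3 (rows x columns of the crop)
```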
Moreover, another possibility to bear in mind is that, at the beginning of the assembly session, one piece from the previous day, and so already counted, may still be on the table, but shouldn't be counted again.

Going forward into the algorithm development process, according to [12] the Machine Vision techniques that can be adopted for object detection and counting are:

• segmentation-based, using texture identification or pixel variability measures and contour information;
• blob-based, using morphological techniques;
• classifier-based, using machine learning or deep learning methodologies.

In this paper we developed one that is a mixture of color-based segmentation with binary morphology and BLOB analysis, and a more advanced one, based on Machine Learning, specifically an Aggregated Channel Features Detector trained to detect the custom objects of interest. Both algorithms analyse a video frame by frame, whether it is a file or a real-time stream, and are able to update the count whenever a piece is laid on the framed intermediate table.

A. BLOB-based Algorithm

Hereafter, we describe in detail the developed solution based on simple image processing techniques like binary morphology. This algorithm compares the number of dark objects in the current frame with the number in the previous frame, thus being able to understand if a piece has been added to or taken out of the ROI. In case of an added piece it updates the count; otherwise, it goes forward with the next frame. As can be expected, the tricky part of this algorithm is the correct isolation of the BLOBs (Binary Large OBjects) corresponding to the products and the treatment of the operators' hands as part of the white background. We are facilitated by the fact that operators in the line use white gloves for safety purposes; nevertheless, our solution is able to manage also other colours different from total black, for example the natural skin colour and other light-coloured gloves. The image pre-processing and processing steps forming the algorithm, synthetically depicted in Fig. 2, are:

1) Read the first video frame F_0 (RGB format);
2) Make the frame single-channel by selecting, for each pixel, the highest value among the three R, G and B channels;
3) Make the single-channel image binary by imposing a specific threshold (properly defined for discerning black from other lighter colours);
4) Fill the holes in the complement of the binary image;
5) Apply morphological opening using as structuring element a disk with a radius of 8 pixels, which is adequate for eliminating a small appendix of the piece that could otherwise interfere with the BLOB counting;
6) Apply morphological closing using as structuring element a disk with a radius of 50 pixels, which is adequate to restore the original object size;
7) Detect relevant BLOBs (with Minimum and Maximum Area parameters specified according to the contextual dimensions of the objects to be counted) in this binary image and compute the number of centroids (CN_0);
8) Read the following frame (F_1);
9) Iterate steps 2-7 for F_1;
10) Is the number of centroids in F_0 equal to zero?
   a) If yes, is the number of centroids in F_1 higher than zero?
      i) If yes, add one to the piece count, update CN_0 to the value of CN_1 and then go to step 8
      ii) If no, go directly to step 8
   b) If the answer to step 10 is no, is the number of centroids in F_0 equal to or higher than the number in F_1?
      i) If yes, update the number of centroids CN_0 to the value of CN_1 and go to step 8
      ii) If no, add one to the piece count, update CN_0 to the value of CN_1 and then go to step 8
11) Continue as long as there are video frames available at step 8, then stop the iterations.

Fig. 2. Flowchart of the proposed BLOB-based solution.
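Steps 2-7 above can be sketched in plain Python on a toy frame. This is an illustrative reconstruction, not the paper's Matlab implementation: hole filling is omitted, the disk structuring elements of radius 8 and 50 are replaced by a single 3x3 square (adequate at toy scale), and `count_blobs` with its parameters is a hypothetical name:

```python
from collections import deque

def count_blobs(rgb, threshold=60, min_area=2, max_area=10_000):
    """Steps 2-7 at toy scale: channel maximum, binarization of the dark
    pixels, morphological opening then closing (3x3 square here; the paper
    uses disks of radius 8 and 50 at full resolution), then 4-connected
    BLOB labelling with an area filter."""
    h, w = len(rgb), len(rgb[0])
    gray = [[max(px) for px in row] for row in rgb]          # step 2
    mask = [[v < threshold for v in row] for row in gray]    # step 3

    def erode(m):
        return [[all(m[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w)
                 for x in range(w)] for y in range(h)]

    def dilate(m):
        return [[any(m[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w)
                 for x in range(w)] for y in range(h)]

    mask = dilate(erode(mask))   # step 5: opening removes small appendices
    mask = erode(dilate(mask))   # step 6: closing restores the object body

    # Step 7: count BLOBs whose area falls inside [min_area, max_area].
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if min_area <= area <= max_area:
                    blobs += 1
    return blobs

white, black = (255, 255, 255), (10, 10, 12)
frame = [[white] * 10 for _ in range(10)]
for y in range(2, 6):
    for x in range(2, 6):
        frame[y][x] = black      # one dark piece on the white table
frame[8][8] = black              # isolated dark speck, removed by the opening
print(count_blobs(frame))        # -> 1
```

The isolated speck disappears during erosion, while the 4x4 piece shrinks to 2x2 and is restored by the subsequent dilations, which is exactly the role the paper assigns to the opening and closing pair.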

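The frame-to-frame counting logic of steps 8-11 reduces to a small state machine over the per-frame centroid counts; a sketch in Python (illustrative, not the paper's Matlab code):

```python
def count_added_pieces(centroids_per_frame):
    """Steps 8-11: compare the BLOB count of each frame (CN_1) with the
    previous one (CN_0); count a piece only when the count grows, or when
    something appears on a previously empty table."""
    pieces, cn_0 = 0, None
    for cn_1 in centroids_per_frame:
        if cn_0 is None:                 # steps 1-7 handled the first frame
            cn_0 = cn_1
            continue
        if cn_0 == 0:
            if cn_1 > 0:                 # empty table, a piece appeared
                pieces += 1
        elif cn_1 > cn_0:                # a piece added next to existing ones
            pieces += 1
        # cn_1 <= cn_0: a piece was picked up, or nothing changed
        cn_0 = cn_1
    return pieces

# empty table, piece placed, picked up, a new one placed:
print(count_added_pieces([0, 0, 1, 1, 0, 1]))   # -> 2
# a second piece added before the first is picked up:
print(count_added_pieces([0, 1, 2, 1, 0]))      # -> 2
```

The second example shows why the comparison in step 10b matters: a piece added while the previous one is still on the table is counted, whereas a piece being picked up never is.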
In Fig. 3 the image processing results are shown for one emblematic frame where both the object and the hand, with the white glove, that is bringing it onto the table are present. It is evident how, thanks to the defined sequence of steps, it is possible to isolate the object alone (white area) and discard as irrelevant background (black area) everything else, that is the hand together with the table. Taking the maximum over the three channels, every pixel that is not black reduces its darkness in the resulting gray-scale image. Thresholding then eliminates all of the pixels that are not totally black. Next, it is necessary to compute the complement image and fill eventual holes in it. The following step is the opening of the binary image using as structuring element a disk of 8 pixel radius, which is able to eliminate all the small noise and the object's appendix that may interfere in product counting. The last step is image closing using as structuring element a disk of 50 pixel radius, with the aim of restoring the original dimensions of the main body of the object, which has been previously reduced by the image opening. In the last frame, corresponding to this last step, the isolation of the object body from everything else is evident. Therefore, this kind of image is suitable as input for the BLOB analysis and finally for the reasoning aimed at counting objects.

Fig. 3. The processing steps: (a) the original RGB image; (b) after the maximum selection among the three RGB channels for each pixel; (c) after the binarization; (d) after filling the complement image; (e) after the morphological opening; (f) after the morphological closing (i.e. the binary image used as input for the BLOB analysis).

The critical issues of this algorithm are the definition of the threshold for the binarization (step 3), the dimensions of the structuring elements for the opening and closing operations, and the selection of the minimum and maximum values for the BLOB areas (step 7), so as to exclude from the BLOB count eventual small noise in the image. Despite everything, these criticalities are easily solvable: the threshold, once defined empirically, would be the same for all of the other assembly lines that are inside the same department and have the same configuration and characteristics. Similarly, the structuring elements aimed at the elimination of a small appendix and at the restoration of the object size would be the same for all of the other product types assembled in the plant. The last criticality, the minimum and maximum BLOB area, has been solved by developing an automatic parameter calculator. Given the easy resolution of all of the criticalities, we can conclude that this first algorithm is highly flexible and qualitatively eligible for plant-wide application.

B. Object Detector-based Algorithm

The second algorithm developed relies on a custom object detector, trained using 1300 different pictures of the objects assembled in the production line of interest. This is the major criticality of this method, which makes it hardly scalable to a plant-wide level due to time and resource constraints: the training of the detector is based on the provisioning, during the training phase, of a collection of product images (at least hundreds). The creation of such collections, one for each product type and for each assembly step, is extremely time-consuming. Furthermore, the training of a detector is itself a time-consuming procedure: the training of our detector in 5 stages using the 1300 object images took about one hour and a half of processing on a dedicated laptop, whose specifications are described in the following section. In spite of everything, this second algorithm has been developed to compare the performances of the BLOB-based one with another, obviously considering one product type only for simplicity, given that testing on other product types would

Fig. 4. Flowchart of the proposed Detector-based solution.

require the training of additional detectors. All the steps of this second algorithm, summarized in Fig. 4, are:

1) Read the first video frame F_0 (RGB format);
2) Apply the detector to the image, obtaining as output the scores and the bounding boxes;
3) Select, for each group of overlapping boxes, only the strongest;
4) Count these strongest boxes to understand how many objects there are in the image;
5) Read the following frame (F_1);
6) Repeat steps 2-4 for F_1;
7) Is the number of detected objects in F_1 higher than the number of objects detected in F_0?
   a) If yes, add one to the piece count and update the number of detected objects N_0 to the value of N_1;
   b) If no, update the number of detected objects N_0 to the value of N_1;
8) Continue as long as there are video frames available at step 5, then stop iterating.

As previously said, the critical part of this algorithm is the detector training phase, for two reasons: the time necessary for the creation of the training set and the time necessary for the training phase to be accomplished. A really big difference with respect to the criticalities of the BLOB-based methodology is that all of the BLOB difficulties can be solved in an automatised way, for example by developing a parameter setting tool that even non-experienced people can use, while the issues related to the Detector are only solvable case by case, through the direct involvement of an expert in Computer Vision.

III. RESULTS

Once the two solutions were developed, we collected videos in the real context, relative to the production of 580 pieces. The main intention of this data collection has been to compare the two algorithms under identical working conditions, something that real-time testing couldn't let us do. Through the Matlab interface we have been able to save several videos in MPEG-4 and later use them for testing both solutions on each of them. The possible behaviours of our solution are the following:

• True Positive (TP) if it counts when a piece has been added;
• True Negative (TN) if it does not count whenever a piece has not been added (when one product is taken off the table);
• False Positive (FP) if it counts but no piece has been added;
• False Negative (FN) if it does not count but a piece has been added.

Through these definitions we are able to summarize the complete results of the testing into Table I, which is a confusion matrix for the BLOB-based solution on the left and for the ACF Detector-based solution on the right. Afterward, we computed three emblematic metrics for each of the two solutions, and resumed them in Table II:

• Sensitivity (Se) - the number of TP divided by the sum of the number of TP and FN;
• Accuracy (A) - the sum of TP and TN divided by the sum of TP, FP, TN and FN;
• Specificity (Sp) - the number of TN divided by the sum of TN and FP.

TABLE I. OFFLINE TESTING RESULTS

               BLOB                   ACF Detector
         Positives  Negatives   Positives  Negatives
  True      560        20          561        19
  False      18       574            6       586

TABLE II. OFFLINE TESTING PERFORMANCE METRICS

        BLOB    ACF Detector
  Se    96.6%      96.7%
  A     97.8%      97.9%
  Sp    97%        99%

For debugging purposes, all of the errors (False Negatives and False Positives) have been analysed in detail, in order to understand the possible causes of the failures. We found that all of the errors were associated with particularly critical and weird behaviours of the workers, which is the reason why we expect that the performances in terms of metrics can be improved with a minimum of education on the conducts to avoid. Going in depth into the metrics, the Sensitivity of both solutions is close to 97%, highlighting their comparable capability to sense added pieces. The Accuracy of the two solutions is comparable too, while if we focus on the Specificity, the ACF Detector-based algorithm performs better than the BLOB-based solution, as we expected. Indeed, the detector is better able to discern between objects and other dark stuff, while the BLOB algorithm indiscriminately perceives an added piece if something dark comes into the field of view of the camera. The two algorithms have been tested analysing 15 frames per second from the original videos, which have a 30 fps rate. The machine used for making the tests has an i7 at 2.8 GHz, 4 GB RAM, a 750 GB SATA HDD and iOS 10.3, and both algorithms took more time than the video length to perform the analyses, which is the reason why a processing time reduction is necessary for an eventual real-time implementation of the solution. Nonetheless, it is important to highlight that the lag reached by the BLOB algorithm is lower compared to the ACF Detector algorithm, showing the higher simplicity of the proposed algorithm compared to the adoption of a detector, which is a computing-resource-intensive task.

In short, the BLOB algorithm has shown comparable performances in terms of the three quantitative metrics; it is more flexible in slightly different contexts where the assumptions of a very bright background and a very dark object of interest are met; it is quickly adaptable and modifiable; and it can process up to 5 fps in real time. On the other hand, the ACF Detector-based algorithm, while still performing very well in terms of quantitative metrics, is absolutely not flexible, given that it is grounded on a training phase where it has been trained to recognize the specific object of interest, with its particular shape, colour and dimension. Moreover, the training for other product types is not as easy as the adaptation of the BLOB-based procedure, given that it requires a preliminary image-collection phase.

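The metrics in Table II can be reproduced from the confusion matrices in Table I; a quick check in Python (an illustrative helper, not part of the paper's code), here applied to the ACF Detector column:

```python
def metrics(tp, fn, fp, tn):
    """Sensitivity, Accuracy and Specificity as defined in Section III,
    returned as percentages rounded to one decimal."""
    se = tp / (tp + fn)
    a = (tp + tn) / (tp + fn + fp + tn)
    sp = tn / (tn + fp)
    return round(se * 100, 1), round(a * 100, 1), round(sp * 100, 1)

# ACF Detector column of Table I:
print(metrics(tp=561, fn=19, fp=6, tn=586))   # -> (96.7, 97.9, 99.0)
```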
A collection of hundreds or thousands of images of the object has to be created, covering all of its inclinations and positionings on the table, and a secondary phase then uses this collection for the training. Both phases are time-consuming tasks compared to the BLOB algorithm adaptations. To conclude the comparison between the two algorithms, the detector also requires more computing effort and so more processing time, which reflects on its ability to process at most 2 fps in real time. Summing up all these results and the counting performances, the proposed BLOB-based solution is the suggested one for counting dark pieces produced and manually placed on a white table and then picked from it, or in reversed-colour situations. This solution, being reliable (97.8% Accuracy, 96.6% Sensitivity and 97% Specificity), flexible, adaptable and fast, is suitable for plant-wide application for real-time counting of the pieces produced.

CONCLUSIONS

In this paper we proposed and compared two Machine Vision algorithms able to count the number of pieces assembled by an operator. From a practical point of view, a key requisite for this solution is its capability to be scalable and versatile in slightly different contexts. While the BLOB-based algorithm satisfies this important requisite, the ACF Detector-based one is completely specialized and for this reason fails when we use it in other, slightly different contexts. Going into the counting performances of the two solutions, the quantitative metrics are comparable. The real-time capability of counting products finished by an operator is essential in an environment where there is a high need for transparency and information integration, in order to be flexible and fast enough in low-volume and high-mix manufacturing. For this reason, our intent is to carry on this work by specifically focusing on the real-time implementation of the BLOB-based algorithm, in order to check its capacity to work aligned with a streaming video source. Another key point for the future steps would be the improvement of the BLOB algorithm performance in terms of Se, A and Sp, in order to create a system able to monitor the produced pieces in real time with the minimum possible percentage of errors under normal working conditions of the workers. This work is a preliminary attempt at solving the needs of today's companies, and aims at being the starting point for the further improvements just described.

ACKNOWLEDGMENT

Thanks to iGuzzini Illuminazione S.p.A., and in particular to Michela Palmucci (Logistic Centre Senior Manager), for making available to us a real assembly line where we could install the camera, perform the algorithm improvement and all the tests.

REFERENCES

[1] P. M. Swamidass, Ed., Encyclopedia of Production and Manufacturing Management. Springer US, 2000.
[2] K. Feldmann and S. Slama, "Highly flexible assembly – scope and justification," CIRP Annals, vol. 50, no. 2, pp. 489–498, 2001.
[3] L. Barreto, A. Amaral, and T. Pereira, "Industry 4.0 implications in logistics: an overview," Procedia Manufacturing, vol. 13, pp. 1245–1252, 2017.
[4] L. Monostori, "Cyber-physical production systems: Roots, expectations and R&D challenges," Procedia CIRP, vol. 17, pp. 9–13, 2014.
[5] G. Lanza, B. Haefner, and A. Kraemer, "Optimization of selective assembly and adaptive manufacturing by means of cyber-physical system based matching," CIRP Annals, vol. 64, no. 1, pp. 399–402, 2015.
[6] B. Chen, J. Wan, L. Shu, P. Li, M. Mukherjee, and B. Yin, "Smart factory of Industry 4.0: Key technologies, application case, and challenges," IEEE Access, vol. 6, pp. 6505–6519, 2018.
[7] L. W. Teck, M. Sulaiman, H. N. M. Shah, and R. Omar, "Implementation of shape-based matching vision system in flexible manufacturing system," Journal of Engineering Science and Technology Review, vol. 3, no. 1, pp. 128–135, Jun. 2010.
[8] N. Kejriwal, S. Garg, and S. Kumar, "Product counting using images with application to robot-based retail stock assessment," in 2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA). IEEE, May 2015.
[9] P. Nerakae, P. Uangpairoj, and K. Chamniprasart, "Using machine vision for flexible automatic assembly system," Procedia Computer Science, vol. 96, pp. 428–435, 2016.
[10] N. K. Verma, T. Sharma, and A. Salour, "Vision based object counting using speeded up robust features for inventory control," in 2016 IEEE International Conference on Computational Science and Computational Intelligence (CSCI), 2016.
[11] C. Pornpanomchai, F. Stheitsthienchai, and S. Rattanachuen, "Object detection and counting system," in 2008 Congress on Image and Signal Processing. IEEE, 2008.
[12] J. G. A. Barbedo, "A review on methods for automatic counting of objects in digital images," IEEE Latin America Transactions, vol. 10, no. 5, pp. 2112–2124, Sep. 2012.
[13] M. Baygin, M. Karakose, A. Sarimaden, and E. Akin, "An image processing based object counting approach for machine vision application," ArXiv, 2018.
[14] P. Dollár, R. Appel, S. Belongie, and P. Perona, "Fast feature pyramids for object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1532–1545, Aug. 2014.
[15] S. Pasqualini, F. Fioretti, A. Andreoli, and P. Pierleoni, "Comparison of H.264/AVC, H.264 with AIF, and AVS based on different video quality metrics," in 2009 International Conference on Telecommunications, May 2009, pp. 190–195.

