
Comparative Analysis of Deep Neural Networks for the Detection and Decoding of Data Matrix Landmarks in Cluttered Indoor Environments

  • Regular Paper
  • Open access
  • Published: 11 August 2021
  • Volume 103, article number 13, (2021)
  • Tiago Almeida (ORCID: orcid.org/0000-0001-9059-6175) 1,2
  • Vitor Santos 1
  • Oscar Martinez Mozos 2
  • Bernardo Lourenço 1

Abstract

Data Matrix patterns imprinted as passive visual landmarks have been shown to be a valid solution for the self-localization of Automated Guided Vehicles (AGVs) on shop floors. However, existing Data Matrix decoding applications take a long time to detect and segment the markers in the input image. This paper therefore proposes a pipeline in which the detector is a real-time Deep Learning network and the decoder is a conventional method, namely the implementation in libdmtx. To this end, several types of Deep Neural Networks (DNNs) for object detection were studied, trained, compared, and assessed. The architectures range from region-proposal methods (Faster R-CNN) to single-shot methods (SSD and YOLO). The study focused on performance and processing time to select the best Deep Learning (DL) model for detecting the visual markers. Additionally, a dedicated data set was created to evaluate these networks. This test set includes demanding situations, such as strong illumination gradients within the same scene and Data Matrix markers placed on skewed planes. The proposed approach outperformed the best-known and most widely used Data Matrix decoder available in libraries such as libdmtx.
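The detect-then-decode pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation (their code is linked under Availability of Data and Material): `Box`, `detect_then_decode`, and the callable parameters are hypothetical names, and the detector/decoder are stand-ins for a real DNN (e.g. YOLO or SSD) and a conventional decoder (e.g. libdmtx run on each crop).

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass(frozen=True)
class Box:
    """Axis-aligned bounding box in pixel coordinates (hypothetical type)."""
    x: int
    y: int
    w: int
    h: int


def detect_then_decode(
    image: object,
    detector: Callable[[object], List[Box]],      # fast DL detector over the full frame
    decoder: Callable[[object], Optional[str]],   # slow conventional decoder on one crop
    crop: Callable[[object, Box], object],        # extracts the image region for a box
) -> List[Tuple[Box, str]]:
    """Run the cheap real-time detector once, then apply the expensive
    decoder only to the proposed crops instead of the whole image."""
    decoded: List[Tuple[Box, str]] = []
    for box in detector(image):
        payload = decoder(crop(image, box))
        if payload is not None:  # the decoder may fail, e.g. on heavily skewed markers
            decoded.append((box, payload))
    return decoded
```

In a real system the detector would be one of the trained networks compared in the paper and the decoder a libdmtx call on the cropped region; the point of the split is that the decoder's cost is then proportional to the number of proposals, not the frame size.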



Funding

Open access funding provided by Örebro University. This work was partially supported by Project SeaAI-FA_02_2017_011, Project PRODUTECH II SIF- POCI-01-0247-FEDER-024541, by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by the Spanish Ministerio de Ciencia, Innovación y Universidades under project RobWell (RTI2018-095599-A-C22).

Author information

Authors and Affiliations

  1. IEETA, DEM, University of Aveiro, 3810-193, Aveiro, Portugal

    Tiago Almeida, Vitor Santos & Bernardo Lourenço

  2. Center for Applied Autonomous Sensor Systems (AASS), Örebro University, 702 81, Örebro, Sweden

    Tiago Almeida & Oscar Martinez Mozos


Contributions

– Tiago Almeida: Coding and writing

– Vitor Santos: Writing and review

– Oscar Martinez Mozos: Review

– Bernardo Lourenço: Test set acquisition and review

Corresponding author

Correspondence to Tiago Almeida.

Ethics declarations

Competing Interests

The authors declare that they have no conflict of interest.

Additional information

Consent to Participate

The authors consent to participate in this work.

Availability of Data and Material

Code at github.com/tmralmeida/data-matrix-detection-benchmark

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Almeida, T., Santos, V., Mozos, O.M. et al. Comparative Analysis of Deep Neural Networks for the Detection and Decoding of Data Matrix Landmarks in Cluttered Indoor Environments. J Intell Robot Syst 103, 13 (2021). https://doi.org/10.1007/s10846-021-01442-x


  • Received: 20 August 2020

  • Accepted: 17 June 2021

  • Published: 11 August 2021

  • DOI: https://doi.org/10.1007/s10846-021-01442-x


Keywords

  • Deep learning
  • Data matrix
  • Detection
  • Decoding
  • Localization