Autonomous Mobile Scanning Systems for the Digitization of Buildings: A Review
Abstract
1. Introduction
2. Autonomous Scanning Platforms
3. Context of the Review in the Process of the Creation of As-Is Models
4. Open Issues
4.1. Utility and Redundancy of the Data
4.2. The Complexity of the Scene
4.2.1. Geometry
4.2.2. Occlusion and Clutter
4.3. The Next Best Scan Position
4.4. Assumptions and Initial Hypotheses
5. Comparison
6. Weaknesses and Strengths
7. Conclusions: Improvements and Future Projects
7.1. What Has Been Achieved?
7.2. What Is Achievable?
7.3. Future Challenging Projects
Funding
Conflicts of Interest
References
- Lehtola, V.V.; Kaartinen, H.; Nüchter, A.; Kaijaluoto, R.; Kukko, A.; Litkey, P.; Honkavaara, E.; Rosnell, T.; Vaaja, M.T.; Virtanen, J.-P.; et al. Comparison of the selected state-of-the-art 3D indoor scanning and point cloud generation methods. Remote Sens. 2017, 9, 796. [Google Scholar] [CrossRef]
- Kostavelis, I.; Gasteratos, A. Semantic mapping for mobile robotics tasks: A survey. Robot. Auton. Syst. 2015, 66, 86–103. [Google Scholar] [CrossRef]
- Almadhoun, R.; Taha, T.; Seneviratne, L.; Dias, J.; Cai, G. A survey on inspecting structures using robotic systems. Int. J. Adv. Robot. Syst. 2016, 13. [Google Scholar] [CrossRef]
- Sequeira, V.; Goncalves, J.G.M.; Ribeiro, M.I. 3D reconstruction of indoor environments. In Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 19 September 1996; Volume 1, pp. 405–408. [Google Scholar]
- Surmann, H.; Nüchter, A.; Hertzberg, J. An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments. Robot. Auton. Syst. 2003, 45, 181–198. [Google Scholar] [CrossRef] [Green Version]
- Strand, M.; Dillmann, R. Using an attributed 2D-grid for next-best-view planning on 3D environment data for an autonomous robot. In Proceedings of the 2008 IEEE International Conference on Information and Automation, ICIA 2008, Changsha, China, 20–23 June 2008; pp. 314–319. [Google Scholar]
- Blaer, P.S.; Allen, P.K. Data acquisition and view planning for 3-D modeling tasks. In Proceedings of the International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 417–422. [Google Scholar]
- Blodow, N.; Goron, L.C.; Marton, Z.; Pangercic, D.; Rühr, T.; Tenorth, M.; Beetz, M. Autonomous semantic mapping for robots performing everyday manipulation tasks in kitchen environments. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 4263–4270. [Google Scholar]
- Potthast, C.; Sukhatme, G.S. A probabilistic framework for next best view estimation in a cluttered environment. J. Vis. Commun. Image Represent. 2014, 25, 148–164. [Google Scholar] [CrossRef] [Green Version]
- Charrow, B.; Kahn, G.; Patil, S.; Liu, S. Information-Theoretic Planning with Trajectory Optimization for Dense 3D Mapping. In Proceedings of the Robotics: Science and Systems, Rome, Italy, 17 July 2015. [Google Scholar]
- Iocchi, L.; Pellegrini, S. Building 3d Maps With Semantic Elements Integrating 2D Laser, Stereo Vision And IMU On A Mobile Robot. In Proceedings of the 2nd ISPRS International Workshop on 3D-ARCH, ETH Zurich, Switzerland, 12–13 July 2007. [Google Scholar]
- Borrmann, D.; Nüchter, A.; Seder, M.; Maurović, I. A mobile robot based system for fully automated thermal 3D mapping. Adv. Eng. Inform. 2014, 28, 425–440. [Google Scholar] [CrossRef]
- Prieto, S.A.; Quintana, B.; Adán, A.; Vázquez, A.S. As-is building-structure reconstruction from a probabilistic next best scan approach. Robot. Auton. Syst. 2017, 94, 186–207. [Google Scholar] [CrossRef]
- Kim, P.; Chen, J.; Cho, Y.K. SLAM-driven robotic mapping and registration of 3D point clouds. Autom. Constr. 2018, 89, 38–48. [Google Scholar] [CrossRef]
- Bircher, A.; Kamel, M.; Alexis, K.; Oleynikova, H.; Siegwart, R. Receding horizon path planning for 3D exploration and surface inspection. Auton. Robots 2018, 42, 291–306. [Google Scholar] [CrossRef]
- Meng, Z.; Qin, H.; Chen, Z.; Chen, X.; Sun, H.; Li, F.; Ang, M.H., Jr. A Two-Stage Optimized Next-View Planning Framework for 3-D Unknown Environment Exploration, and Structural Reconstruction. IEEE Robot. Autom. Lett. 2017, 2, 1680–1687. [Google Scholar] [CrossRef]
- Kurazume, R.; Oshima, S.; Nagakura, S.; Jeong, Y.; Iwashita, Y. Automatic large-scale three dimensional modeling using cooperative multiple robots. Comput. Vis. Image Underst. 2016, 157, 25–42. [Google Scholar] [CrossRef]
- Heng, L.; Gotovos, A.; Krause, A.; Pollefeys, M. Efficient visual exploration and coverage with a micro aerial vehicle in unknown environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 1071–1078. [Google Scholar]
- Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3D Point cloud based object maps for household environments. Robot. Auton. Syst. 2008, 56, 927–941. [Google Scholar] [CrossRef]
- Jung, J.; Hong, S.; Jeong, S.; Kim, S.; Cho, H.; Hong, S.; Heo, J. Productive modeling for development of as-built BIM of existing indoor structures. Autom. Constr. 2014, 42, 68–77. [Google Scholar] [CrossRef]
- Wang, C.; Cho, Y.K.; Kim, C. Automatic BIM component extraction from point clouds of existing buildings for sustainability applications. Autom. Constr. 2015, 56, 1–13. [Google Scholar] [CrossRef]
- Mura, C.; Mattausch, O.; Villanueva, A.J.; Gobbetti, E.; Pajarola, R. Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. Comput. Graph. 2014, 44, 20–32. [Google Scholar] [CrossRef] [Green Version]
- Nüchter, A.; Hertzberg, J. Towards semantic maps for mobile robots. Robot. Auton. Syst. 2008, 56, 915–926. [Google Scholar] [Green Version]
- Lee, Y.-C.; Park, S. 3D map building method with mobile mapping system in indoor environments. In Proceedings of the 2013 16th International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, 25–29 November 2013. [Google Scholar]
- Shen, S.; Michael, N.; Kumar, V. Obtaining Liftoff Indoors: Autonomous Navigation in Confined Indoor Environments. IEEE Robot. Autom. Mag. 2013, 20, 40–48. [Google Scholar] [CrossRef]
- Biswas, J.; Veloso, M. Depth camera based indoor mobile robot localization and navigation. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012. [Google Scholar]
- Borrmann, D.; Heß, R.; Houshiar, H.R.; Eck, D.; Schilling, K.; Nüchter, A. Robotic Mapping of Cultural Heritage Sites. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 9–16. [Google Scholar] [CrossRef]
- Nüchter, A.; Lingemann, K.; Hertzberg, J.; Surmann, H. 6D SLAM-3D Mapping Outdoor Environments. J. Field Robot. 2007, 24, 699–722. [Google Scholar] [CrossRef]
- Ahn, J.; Wohn, K. Interactive scan planning for heritage recording. Multimed. Tools Appl. 2015, 75, 3655–3675. [Google Scholar] [CrossRef]
- Biber, P.; Andreasson, H.; Duckett, T.; Schilling, A. 3D Modeling of Indoor Environments by a Mobile Robot with a Laser Scanner and Panoramic Camera. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; Volume 4, pp. 3430–3435. [Google Scholar]
- Thrun, S.; Hahnel, D.; Ferguson, D.; Montemerlo, M.; Triebel, R.; Burgard, W.; Baker, C.; Omohundro, Z.; Thayer, S.; Whittaker, W. A system for volumetric robotic mapping of abandoned mines. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422), Taipei, Taiwan, 14–19 September 2003; Volume 3, pp. 4270–4275. [Google Scholar]
- Jun, C.; Youn, J.; Choi, J.; Medioni, G.; Doh, N.L. Convex Cut: A realtime pseudo-structure extraction algorithm for 3D point cloud data. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 3922–3929. [Google Scholar]
- Wolf, D.; Howard, A.; Sukhatme, G.S. Towards geometric 3D mapping of outdoor environments using mobile robots. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Edmonton, AB, Canada, 2–6 August 2005. [Google Scholar]
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
- Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156. [Google Scholar] [CrossRef] [Green Version]
- Connolly, C. The Determination of next best views. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; Volume 2, pp. 432–435. [Google Scholar]
- Yamauchi, B. A frontier-based approach for autonomous exploration. In Proceedings of the 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA’97) “Towards New Computational Principles for Robotics and Automation”, Monterey, CA, USA, 10–11 July 1997; pp. 146–151. [Google Scholar]
- Song, S.; Jo, S. Online inspection path planning for autonomous 3D modeling using a micro-aerial vehicle. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 6217–6224. [Google Scholar]
- Quintana, B.; Prieto, S.A.; Adán, A.; Vázquez, A.S. Semantic scan planning for indoor structural elements of buildings. Adv. Eng. Inform. 2016, 30, 643–659. [Google Scholar] [CrossRef]
- Furrer, F.; Burri, M.; Achtelik, M.; Siegwart, R. RotorS—A Modular Gazebo MAV Simulator Framework; Springer: Cham, Switzerland, 2016; pp. 595–625. [Google Scholar]
- Rohmer, E.; Singh, S.P.N.; Freese, M. V-REP: A versatile and scalable robot simulation framework. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Tokyo Big Sight, Japan, 3–8 November 2013; pp. 1321–1326. [Google Scholar]
- Gschwandtner, M.; Kwitt, R.; Uhl, A.; Pree, W. BlenSor: Blender sensor simulation toolbox. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2011; Volume 6939, Part 2; pp. 199–208. [Google Scholar]
- Holz, D.; Basilico, N.; Amigoni, F.; Behnke, S. Evaluating the Efficiency of Frontier-based Exploration Strategies. In Proceedings of the 41st International Symposium on Robotics (ISR) and the 6th German Conference on Robotics (ROBOTIK 2010), Munich, Germany, 7–9 June 2010; Volume 1, p. 8. [Google Scholar]
Method | Year | Tested Environment | Sensors | Transport |
---|---|---|---|---|
Sequeira [4] | 1996 | A part of a single room | Time-of-flight laser range finder (LRF) | Ground robot |
Surmann [5] | 2003 | Corridor | Two 2D LRF | Ground robot |
Blaer [7] | 2007 | Interior and exterior scenes | 3D laser scanner, 2D LRF and RGB camera. | Ground robot |
Iocchi [11] | 2007 | Corridor and several rooms | 2D LRF | Ground robot |
Strand [6] | 2008 | Corridor and several rooms | 2D LRF on a rotating platform and a camera | Ground robot |
Blodow [8] | 2011 | Single room | 2D LRF and a registered colour camera. | Ground robot |
Borrmann [12] | 2014 | Corridor and several rooms | 3D laser scanner, thermal camera and RGB camera | Ground robot |
Potthast [9] | 2014 | Three scenarios. A table top scene, two adjacent rooms and corridor with rooms | 2D LRF and RGB-D camera | Ground robot |
Charrow [10] | 2015 | Long corridor | RGB-D sensor | Ground robot |
Bircher [15] | 2016 | Indoors and outdoors | Stereo camera | Hexacopter |
Prieto [13] | 2017 | Complex configuration of adjacent rooms and corridors | 3D laser scanner and 2D LRF | Ground robot |
Kurazume [17] | 2017 | Indoors and outdoors | 3D laser scanner | Multiple ground robots and quadcopters |
Meng [16] | 2017 | Indoors | Rotating laser module | Quadcopter |
Kim [14] | 2018 | Scans taken in corridors and walkways | Two 2D LRF and DSLR camera | Ground robot |
Ref. | Environment Tested | Goal | 2D/3D NBS Algorithm | Geometry | Occlusion and Clutter | Hypotheses and Assumptions | Output |
---|---|---|---|---|---|---|---|
[5] Surmann 2003 | Indoor. Corridor | Digitalisation of 3D indoor environments | Maximum information gain criterion. Reduction in the robot path length and rotation angles. 2D NBS considering different horizontal planes at different heights. | Rectangular floor and flat walls | Low occlusion | No initial assumptions | 3D model with bounding boxes |
[7] Blaer 2007 | Outdoor. Part of a campus | Data acquisition and view planning for large-scale indoor and outdoor sites | Maximum unseen boundary voxels, 3D NBS | No geometric restrictions | High occlusion. Inhabited cultural heritage sites | Previous 2D map needed | 3D point cloud
[11] Iocchi 2007 | Indoor. Corridor and several rooms | Generation of visually realistic 3D maps formed of semantic structural elements | Frontier-based exploration, 2D NBS | NR | Low occlusion. Inhabited scene with minor obstacles | Previous 2D map needed for the exploration | 3D model generated by creating walls from the 2D map lines
[6] Strand 2008 | Indoor. Corridor and several rooms | 3D scanning of indoor environments | Function of unexplored areas, overlapping, distance and proximity to obstacles. NBS in 2D grid with 3D information using different attributes | Rectangular floor and flat walls | High occlusion. Inhabited scene | Corridors must be bigger than rooms | 3D point cloud with texture superimposed
[8] Blodow 2011 | Indoor. Single room | Semantic representation of a kitchen | Maximum fringe and occupied voxels in a 2D projection with 50% minimum overlapping, NBS with 3D information in 2D costmaps | Rectangular floor and flat walls | High occlusion. Inhabited scene | Furniture and extracted elements must be cuboids | Semantic 3D map with information about the furniture (handles, doors, etc.)
[12] Borrmann 2014 | Indoor. Corridor and several rooms | Generation of 3D thermal models of indoor environments | Maximum amount of unexplored regions (2D) and unseen boundary voxels (3D); combination of 2D NBS and 3D NBS | Rectangular floor and flat walls | High occlusion. Inhabited scene | No initial assumptions | 3D point cloud with thermal images. Reconstructed mesh
[9] Potthast 2014 | Indoor. Three scenarios. A table top scene, two adjacent rooms and corridor with rooms | 3D data acquisition and viewpoint selection for occluded environments | Highest expected knowledge gain using probabilistic methods, 3D NBS | Rectangular floor and flat walls | Low occlusion | No initial assumptions | NR |
[10] Charrow 2015 | Indoor. Long corridor | 3D mapping | Trajectory that maximises an information-theoretic objective based on the Cauchy-Schwarz QMI and locally optimising portions of the trajectory to maximise the CSQMI objective. | Rectangular floor and flat walls | Low occlusion | No initial assumptions | 3D point cloud |
[15] Bircher 2016 | Indoor/Outdoor | 3D exploration and surface inspection | Receding horizon paradigm, 2D NBS | No geometric restrictions | Low occlusion. Inhabited scene | Volume with given bounds | 3D voxel model with occupied and free voxels
[13] Prieto 2017 | Indoor. Complex configuration of adjacent rooms and corridors | 3D scanning of indoor structural elements in complex scenes | Maximum sum of probabilities of visible voxels being a structural element, 3D NBS | Convex and concave floor and flat walls | High occlusion. Inhabited scene | No initial assumptions | 3D labelled voxel model, 3D point cloud and 3D CAD single model of the scene
[17] Kurazume 2017 | Indoor/Outdoor | 3D scan planning | Frontier-based approach, 2D NBS | No geometric restrictions | High occlusion. Inhabited scene | No initial assumptions | 3D point cloud
[16] Meng 2017 | Indoor. Office corridor environment | 3D exploration | Frontier-based approach, 2D NBS | No geometric restrictions | Low occlusion. Inhabited scene | No initial assumptions | 3D voxel model and raw point cloud
[14] Kim 2018 | Indoor. Corridor and walkway | Mapping and registration | Maximum visible area along a predetermined robot trajectory, 2D NBS | Rectangular floor and flat walls | Low occlusion | No initial assumptions | 3D point cloud
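Several of the systems in the table above (Iocchi, Kurazume, Meng) use Yamauchi's frontier-based exploration [58] as their 2D next-best-scan criterion: the next scan position is chosen on the boundary between explored free space and unknown space. A minimal sketch of that idea follows; the grid convention (0 = free, 1 = occupied, -1 = unknown) and the greedy nearest-frontier rule are illustrative assumptions, not taken from any of the reviewed systems.

```python
# Frontier-based 2D next-best-scan sketch on a toy occupancy grid.
# Convention (assumed for illustration): 0 = free, 1 = occupied, -1 = unknown.
import numpy as np

def frontier_cells(grid):
    """Free cells that border at least one unknown cell (4-connectivity)."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue  # only free cells can be frontiers
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

def next_best_scan(grid, robot_pos):
    """Greedy rule: go to the frontier cell nearest the robot."""
    frontiers = frontier_cells(grid)
    if not frontiers:
        return None  # no boundary with unknown space: exploration is done
    return min(frontiers,
               key=lambda f: (f[0] - robot_pos[0]) ** 2 + (f[1] - robot_pos[1]) ** 2)

# Toy map: left half explored free space, right half unknown, one wall cell.
grid = np.full((5, 6), -1)
grid[:, :3] = 0
grid[2, 2] = 1
print(next_best_scan(grid, (0, 0)))  # → (0, 2)
```

Real planners replace the greedy distance rule with richer objectives, which is exactly where the methods in the "2D/3D NBS Algorithm" column differ: information gain, visible-voxel probabilities, or path-cost trade-offs.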
Ref. | Preprocessing (Outlier Removal, PC Alignment) | Door Detection | Time Requirements | Simulated/Real Tests | Comparison Report | Quantitative Evaluation Related to the 3D Model Obtained | Time Reports |
---|---|---|---|---|---|---|---|
[5] Surmann 2003 | NR | No | Yes. Time involved in the NBV calculation | R | No | No | 3D scan matching |
[7] Blaer 2007 | NR | No | Yes. Typical runtime, scan time | R | No | Voxel data for the NBV and dimensions of the scene. | NBV time |
[11] Iocchi 2007 | No | No | Yes, acquisition time | R | No | Only the quantity of laser scans recorded | Total scanning time |
[6] Strand 2008 | Yes | Doors | Yes, acquisition time | R | No | Only the quantity of laser scans recorded | Total scanning time |
[8] Blodow 2011 | Yes | No | NR | R | No | No | No |
[12] Borrmann 2014 | NR | Doors | Yes, acquisition time and reconstruction algorithm runtime | R | No | Number of unseen, occupied and potential unseen voxels from the next best position | Mesh model creation. |
[9] Potthast 2014 | NR | No | Yes. Runtime of the NBV computation. | S/R | Experimental | Number of unobserved cells in each scan position | Average NBV time |
[10] Charrow 2015 | No | No | Yes | S/R | Experimental | No | Total scanning time. Planning time |
[15] Bircher 2016 | NR | No | NR | S/R | No | Surface inspected | Exploration and computation time |
[13] Prieto 2017 | Yes | Doors | Yes. Runtime of the NBV computation. | S/R | Theoretical and experimental | Yes. Percentage of the structural element’s sensed area | Scanning and NBV times |
[17] Kurazume 2017 | Yes | No | NR | S/R | Comparison with earlier prototypes | Area coverage rate | No |
[16] Meng 2017 | Yes | No | No | S/R | Yes | No | Exploration and computation time |
[14] Kim 2018 | Yes | No | NR | R | No | Registration accuracy | No |
Ref. | Limitations and Weaknesses | Strengths |
---|---|---|
[5] Surmann 2003 | Reduced movement ability of the robot (simple trajectories). The NBV is based on 2D data | The planning algorithm works in a continuous state space rather than a grid-based space. |
[7] Blaer 2007 | Localisation based on GPS, which cannot be used indoors. Multiple iterations needed to attain the final model. A two-dimensional map of the region is assumed. | The system works in indoor and outdoor scenes. Scanning in a large-scale environment.
[11] Iocchi 2007 | Owing to the way in which the 3D model is obtained, there could be wrong structures and a loss of information in the final model. | Generation of a single 3D model of the building structure |
[6] Strand 2008 | Scene size restrictions. Doors must be open. The 3D information is compressed in a 2D grid and the door detection could lead to failures owing to the loss of information. | Reduction of the planning model representation. |
[8] Blodow 2011 | The objective is focused on mapping objects in the scene. Inefficient 2D/3D NBV for 3D model of buildings. High overlapping between scans is required. | Exhaustive labelling. Detailed semantic 3D model of the scene. The correct registration of point clouds is guaranteed with the overlapping restriction applied. |
[12] Borrmann 2014 | The robot has plenty of space to move, the occlusion and the obstacles are concentrated on the walls. Great loss of information because of the height at which the data is taken. | The system obtains a 3D thermal point cloud. Low computational and memory requirements. |
[9] Potthast 2014 | The NBV might not be reachable for the robot and the final position could be bad for the exploration. The exploration algorithm is evaluated in a simulated and simple scenario. There is an error owing to accumulative registration issues | The system is able to mimic different exploration strategies. The system works in small cluttered scenes (table top) and less cluttered large indoor scenes (simulated office environments). |
[10] Charrow 2015 | The exploration is based on 2D data | Particularly efficient for robots with limited battery life, equipped with noisy sensors with short sensing ranges and limited FOV. |
[15] Bircher 2016 | Low resolution of the 3D model. High level of occlusion is not permitted. Scene with given bounds. | Online planning. Real applicability. Open source. |
[13] Prieto 2017 | The robot’s footprint is too big for inhabited indoor scenes. Excessive time involved in the preprocessing stages. | The system works in complex scenarios composed of furnished concave rooms. The number of scans is reduced. Generation of a single 3D model of the building structure. |
[17] Kurazume 2017 | Planning algorithm in 2D space. High complexity of the overall system. | Scanning in a large-scale environment with no geometric restrictions. |
[16] Meng 2017 | Low resolution of the 3D model. High level of occlusion is not permitted. | Online planning. Real applicability. |
[14] Kim 2018 | Noisy individual dynamic point cloud. The registered point cloud is not sufficiently accurate | The system works without using targets. The number of scans is reduced. |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Adán, A.; Quintana, B.; Prieto, S.A. Autonomous Mobile Scanning Systems for the Digitization of Buildings: A Review. Remote Sens. 2019, 11, 306. https://doi.org/10.3390/rs11030306