Monocular semantic occupancy grid mapping with convolutional variational encoder–decoder networks

C. Lu, M. J. G. van de Molengraft, … - IEEE Robotics and Automation Letters, 2019 - ieeexplore.ieee.org
In this letter, we research and evaluate end-to-end learning of monocular semantic-metric occupancy grid mapping from weak binocular ground truth. The network learns to predict four classes, as well as a camera-to-bird's-eye-view mapping. At its core, it utilizes a variational encoder–decoder network that encodes the front-view visual information of the driving scene and subsequently decodes it into a two-dimensional top-view Cartesian coordinate system. The evaluations on Cityscapes show that the end-to-end learning of semantic-metric occupancy grids outperforms the deterministic mapping approach with a flat-plane assumption by more than 12% mean intersection-over-union. Furthermore, we show that variational sampling with a relatively small embedding vector brings robustness against vehicle dynamic perturbations, and generalizability to unseen KITTI data. Our network achieves real-time inference rates of approx. 35 Hz for an input image with a resolution of 256 × 512 pixels and an output map with 64 × 64 occupancy grid cells using a Titan V GPU.
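The variational bottleneck described above (encode the front view to a small embedding, sample it, decode to a 64 × 64 four-class top-view grid) can be illustrated with a minimal numpy sketch. This is not the letter's implementation: random linear maps stand in for the convolutional encoder and decoder, and the embedding size of 128 is an assumption (the letter only says "relatively small").

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions stated in the letter: 256 x 512 input image, 64 x 64 output
# grid with four semantic classes. Z_DIM and FEAT_DIM are assumed here.
GRID, N_CLASSES = 64, 4
Z_DIM, FEAT_DIM = 128, 512

# Toy linear weights standing in for the convolutional encoder/decoder.
W_enc = rng.standard_normal((FEAT_DIM, 2 * Z_DIM)) * 0.01
W_dec = rng.standard_normal((Z_DIM, GRID * GRID * N_CLASSES)) * 0.01

def encode(features):
    """Map front-view features to the parameters (mu, log-variance) of q(z|x)."""
    stats = features @ W_enc
    return stats[:Z_DIM], stats[Z_DIM:]

def reparameterize(mu, logvar):
    """Variational sampling: z = mu + sigma * eps (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Decode the sampled embedding into per-cell class probabilities."""
    logits = (z @ W_dec).reshape(GRID, GRID, N_CLASSES)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax over the 4 classes

# One forward pass on a random feature vector standing in for the image.
features = rng.standard_normal(FEAT_DIM)
mu, logvar = encode(features)
grid = decode(reparameterize(mu, logvar))  # shape (64, 64, 4)
```

At inference time the embedding can simply be set to `mu`, while during training the sampled `z` together with a KL term on `(mu, logvar)` regularizes the bottleneck; the letter credits this sampling for robustness to vehicle dynamics and for generalization to KITTI.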