ViPNet: An End-to-End 6D Visual Camera Pose Regression Network
Haohao Hu, Aoran Wang, Marc Sons, and Martin Lauer
In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)
In this work, we present a visual pose regression network: ViPNet. It is robust and real-time capable on mobile platforms such as self-driving vehicles. We train a convolutional neural network to estimate the six-degrees-of-freedom camera pose from a single monocular image in an end-to-end manner. To estimate camera poses with uncertainty, we use a Bayesian version of ResNet-50 as our base network. SE blocks are applied in the residual units to increase our model’s sensitivity to informative features. ViPNet is trained using a geometric loss function with trainable parameters, which simplifies the fine-tuning process significantly. We evaluate ViPNet on the Cambridge Landmarks dataset and on our Karl-Wilhelm-Plaza dataset, which was recorded with an experimental vehicle. In our evaluation, ViPNet outperforms other end-to-end monocular camera pose estimation methods. ViPNet requires only 9–15 ms to predict one camera pose, which allows it to run at a very high frequency.
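The abstract mentions a geometric loss with trainable parameters. A common form of such a loss (e.g. in learned-weighting pose regression) balances the translation and rotation error terms with learnable log-variance weights, so the trade-off need not be hand-tuned. The sketch below is an illustrative assumption, not the paper's exact formulation; the function name and plain-float arguments are hypothetical.

```python
import math

def weighted_pose_loss(pos_err, rot_err, s_x, s_q):
    """Combine translation error and rotation error using learnable
    log-variance weights s_x and s_q (homoscedastic-uncertainty style).
    In training, s_x and s_q would be trainable parameters; here they
    are plain floats for illustration."""
    return (pos_err * math.exp(-s_x) + s_x
            + rot_err * math.exp(-s_q) + s_q)

# With both weights at zero, the loss reduces to the plain sum of errors:
loss = weighted_pose_loss(2.0, 0.5, 0.0, 0.0)  # → 2.5
```

Because the weights enter the loss directly, the optimizer adjusts them alongside the network weights, which is what removes the manual balancing step during fine-tuning.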