Adaptive Large-scale Novel View Image Synthesis for Autonomous Driving Datasets

Figure: Illustration of novel view image synthesis for static traffic scenes and the three major technical challenges. The source images are collected by a vehicle camera along the source path, while the novel views are specified by users. These challenges introduce artifacts and missing backgrounds into the synthesized images.

Abstract

Novel view image synthesis for outdoor scenes is challenged by inaccurate depth measurements, moving objects, and wide-angle rendering. In this paper, we propose an adaptive novel view image synthesis pipeline to generate realistic images of large-scale traffic scenes. The novelty of this work is threefold: 1) a set of high-fidelity 3D surfel model reconstruction methods with depth refinement and moving object removal schemes; 2) a self-adaptive rendering scheme that adjusts surfel geometry to suit different novel views; 3) a hyper-parameter tuning scheme based on image quality evaluation to achieve better surfel model construction and adaptation. The removed backgrounds and other occluded regions within the 3D scene geometric models are further inpainted using a Generative Adversarial Network (GAN). The KITTI dataset and the CARLA simulator are used to verify the proposed pipeline. Experimental results show that our method outperforms other approaches for large-scale traffic scene image synthesis in terms of both computational efficiency and the quality of synthesized images.
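To make the pipeline stages concrete, the sketch below illustrates the first stage described above: refining a noisy depth map, masking out moving objects, and back-projecting the remaining pixels into surfels. All function names, the pinhole intrinsics, and the median-filter refinement are illustrative assumptions on our part, not the paper's actual implementation.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class Surfel:
    """A minimal surfel: a 3D disk with position, orientation, and size."""
    position: np.ndarray  # 3D center in camera coordinates
    normal: np.ndarray    # unit surface normal
    radius: float         # disk radius


def refine_depth(depth: np.ndarray, k: int = 3) -> np.ndarray:
    """Hypothetical depth refinement: a k-by-k median filter that
    suppresses isolated noisy measurements (a stand-in for the
    paper's refinement scheme)."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out


def remove_moving_objects(depth: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Invalidate pixels flagged as moving objects (e.g. by a detector);
    the resulting holes are what the GAN later inpaints."""
    cleaned = depth.copy()
    cleaned[object_mask] = np.nan
    return cleaned


def build_surfels(depth: np.ndarray, fx: float = 500.0, fy: float = 500.0) -> list:
    """Back-project every valid depth pixel into a surfel using an
    assumed pinhole camera model with the principal point at the
    image center."""
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    surfels = []
    for v in range(h):
        for u in range(w):
            d = depth[v, u]
            if np.isnan(d):
                continue  # skip removed / invalid pixels
            p = np.array([(u - cx) * d / fx, (v - cy) * d / fy, d])
            # Normal and radius are placeholders; a real system would
            # estimate them from local depth gradients.
            surfels.append(Surfel(p, np.array([0.0, 0.0, -1.0]), d / fx))
    return surfels
```

In this sketch the surfel radius scales with depth, so distant surfels cover more of the scene; the self-adaptive rendering stage of the paper would further adjust such geometry per novel view.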

Video