3D Mesh Generation Through Noised RGB-D Inputstream and Rule Based Denoising with Virtual City Model
TU Darmstadt, Master's Thesis, 2020
3D models are popular for planning in an urban context. Their Levels of Detail (LoDs) can vary from simple cuboid shapes to highly detailed meshes. Acquiring and updating such models is a cost-intensive process requiring aerial footage and manual labor. As a result, often only coarse city models are available, and these frequently do not represent an up-to-date state. Updating a city model through RGB-D mesh generation can be a viable option, since depth-sensing cameras have become cheap and machine learning techniques for predicting depth from a single color image have advanced. However, the depth values from those methods are very noisy. Although good options exist for reconstructing a 3D mesh from a stream of color and depth images, this amount of noise poses a challenge. In this thesis, a 3D mesh reconstruction method is presented that uses the existing virtual city model as a second data input to minimize the influence of noise. To this end, a virtual depth stream is created by rendering the urban model from the same perspective as the noisy RGB-D stream. A set of rules merges both streams by leveraging their depth difference and normal deviation. The approach is implemented as an extension to the reconstruction algorithm of SurfelMeshing. The output is an updated model with more detailed building features. The evaluation is performed in an artificial environment to test against ground truth with fixed noise levels. Quantitative results show that the approach is less prone to errors than using only the noisy depth stream. Artifacts in the reconstruction can still arise, especially at very high noise levels. The denoising capabilities show that salient features are preserved while the overall output error is reduced.
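The per-pixel, rule-based merge of the two depth streams could be sketched as below. This is a minimal illustration only: the function name, thresholds, and the exact rules are assumptions for exposition, not the thesis's actual rule set. The idea follows the abstract: where the noisy sensor depth and the rendered model depth agree (small depth difference, similar normals), the sensor value is kept to capture new detail; otherwise the smoother model depth is used to suppress noise.

```python
import numpy as np

def merge_depth_streams(sensor_depth, model_depth,
                        sensor_normals, model_normals,
                        depth_thresh=0.05, normal_thresh_deg=30.0):
    """Fuse a noisy sensor depth map with a depth map rendered from a
    virtual city model (hypothetical rule set, thresholds in meters/degrees).

    sensor_depth, model_depth: (H, W) depth maps.
    sensor_normals, model_normals: (H, W, 3) unit normal maps.
    """
    depth_diff = np.abs(sensor_depth - model_depth)
    # Angle between normals via their dot product (normals assumed unit length).
    cos_angle = np.clip(np.sum(sensor_normals * model_normals, axis=-1),
                        -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    # Rule: keep the sensor measurement only where both streams agree.
    agree = (depth_diff < depth_thresh) & (angle_deg < normal_thresh_deg)
    return np.where(agree, sensor_depth, model_depth)
```

For example, a pixel whose sensor depth deviates from the rendered model by only 2 cm with matching normals would retain the sensor value, while a pixel deviating by a meter would fall back to the model depth.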