Automated 3D Mass Digitization for the GLAM Sector
Archiving 2020 online. Final Program and Proceedings
Archiving <2020, Online>
The European Cultural Heritage Strategy for the 21st century has led to an increased demand for fast, efficient, and faithful 3D digitization technologies for cultural heritage artefacts. Yet, unlike the digital acquisition of cultural goods in 2D, which is widely used and automated today, 3D digitization often still requires significant manual intervention, time, and money. To overcome this, the authors have developed CultLab3D, the world's first fully automatic 3D mass digitization technology for collections of three-dimensional objects. 3D scanning robots such as the CultArm3D-P are specifically designed to automate the entire 3D digitization process, making it possible to capture and archive objects on a large scale and to produce highly accurate, photo-realistic representations.
Fully Automatic Mechanical Scan Range Extension of a Lens-Shifted Structured Light System
Darmstadt, Hochschule, Master Thesis, 2020
Cultural heritage objects are precious goods that need to be preserved for coming generations. For many reasons, e.g., wars or natural decay, these objects are in danger of destruction. To prevent them from being lost forever, they are digitized as 3D models so that they remain accessible to future generations. For this purpose, the Fraunhofer Institute for Computer Graphics Research offers a fully automatic 3D digitization system called CultLab3D. A fully functional system for large objects already exists. However, it is more difficult to scan small objects like coins or rings. These small objects are often referred to as 2.5D objects because they frequently carry engravings and inscriptions on their surface that cannot even be felt with one's fingers. Scanning such finely detailed objects requires a system that can measure these details. This is accomplished by the MesoScannerV2, an extension of CultLab3D designed for the digitization of 2.5D objects without missing details. The MesoScannerV2 is a structured light system that uses a special variation of the phase shift method to improve the accuracy of the digitized 3D model of the object. It reaches an advanced depth and lateral resolution thanks to its specialty: the extension of state-of-the-art fringe patterns by a mechanical lens-shifted surface encoding method. Imperfect data acquisition and uncertainties of the numerical algorithms generate noise that directly influences the digitized 3D models. Therefore, this thesis aims to reduce the generated noise to obtain cleaner 3D models. Furthermore, the MesoScannerV2 needs to be future-proof, which requires automating the scanning of many objects at the same time. The integration of such an automation procedure into the MesoScannerV2 is another topic discussed in this thesis.
We show that the proposed methods reduce the generated noise significantly and provide a corresponding evaluation. Furthermore, we identify possible solutions for automating the scan process.
Geometry Classification through Feature Extraction and Pattern Recognition in 3D Space
Darmstadt, TU, Master Thesis, 2020
This master thesis attempts to match similar depictions of coats of arms on 3D models of shards. Part of an initial workflow is relief extraction, for which an approach by Zatzarinni et al. is used. To extract information from the object surface, a Local Binary Pattern variant by Thompson et al. is implemented. The resulting feature descriptors are then compared using a distance metric. In the end, the proposed approach does not yield good results, but the challenges encountered are documented and future solutions are discussed.
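As a rough illustration of the matching idea, feature descriptors can be built as Local Binary Pattern histograms and compared with a chi-square distance. The sketch below uses a plain 2D image LBP on hypothetical random patches, not the mesh-surface variant by Thompson et al. that the thesis implements:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbor Local Binary Pattern codes for a 2D grayscale array.
    (Illustrative 2D variant; the thesis adapts LBP to mesh surfaces.)"""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8 neighbors in fixed order, each compared against the center pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes -> feature descriptor."""
    hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Hypothetical stand-ins for two surface patches
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (32, 32)).astype(np.float64)
b = rng.integers(0, 256, (32, 32)).astype(np.float64)
print(chi_square(lbp_histogram(a), lbp_histogram(a)))  # 0.0 for identical patches
print(chi_square(lbp_histogram(a), lbp_histogram(b)))  # > 0 for different patches
```

Low distances would indicate candidate matches between shard reliefs; the thesis reports that this pipeline alone was not sufficient in practice.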
Towards 3D Digitization in the GLAM (Galleries, Libraries, Archives, and Museums) Sector – Lessons Learned and Future Outlook
The IPSI BgD Transactions on Internet Research
The European Cultural Heritage Strategy for the 21st century, part of the Digital Agenda, one of the flagship initiatives of the Europe 2020 Strategy, has led to an increased demand for fast, efficient, and faithful 3D digitization technologies for cultural heritage artefacts. 3D digitization has proven to be a promising approach to enable precise reconstructions of objects. Yet, unlike the digital acquisition of cultural goods in 2D, which is widely used and automated today, 3D digitization often still requires significant manual intervention, time, and money. To enable heritage institutions to make use of large-scale, economic, and automated 3D digitization technologies, the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD has developed CultLab3D, the world's first fully automatic 3D mass digitization technology for collections of three-dimensional objects. 3D scanning robots such as the CultArm3D-P are specifically designed to automate the entire 3D digitization process, making it possible to capture and archive objects on a large scale and to produce highly accurate, photo-realistic representations. The unique setup shortens the time needed for digitization from several hours to several minutes per artefact.
End-to-end Color 3D Reproduction of Cultural Heritage Artifacts: Roseninsel Replicas
Eurographics Workshop on Graphics and Cultural Heritage (GCH) <17, 2019, Sarajevo, Bosnia and Herzegovina>
Planning exhibitions of cultural artifacts is always challenging. Artifacts can be very sensitive to the environment and therefore their display can be risky. One way to circumvent this is to build replicas of these artifacts. Here, 3D digitization and reproduction, either physical via 3D printing or virtual, using computer graphics, can be the method of choice. For this use case we present a workflow, from photogrammetric acquisition in challenging environments to representation of the acquired 3D models in different ways, such as online visualization and color 3D printed replicas. This work can also be seen as a first step towards establishing a workflow for full color end-to-end reproduction of artifacts. Our workflow was applied on cultural artifacts found around the “Roseninsel” (Rose Island), an island in Lake Starnberg (Bavaria), in collaboration with the Bavarian State Archaeological Collection in Munich. We demonstrate the results of the end-to-end reproduction workflow leading to virtual replicas (online 3D visualization, virtual and augmented reality) and physical replicas (3D printed objects). In addition, we discuss potential optimizations and briefly present an improved state-of-the-art 3D digitization system for fully autonomous acquisition of geometry and colors of cultural heritage objects.
Lossless Compression of Multi-View Cultural Heritage Image Data
Eurographics Workshop on Graphics and Cultural Heritage (GCH) <17, 2019, Sarajevo, Bosnia and Herzegovina>
Photometric multi-view 3D geometry reconstruction and material capture are important techniques for cultural heritage digitalization. Capturing images of artifacts with high resolution and high dynamic range, and the possibility to store them losslessly, enables future-proof application of this data. As the images tend to consume immense amounts of storage, compression is essential for long-term archiving. In this paper, we present a lossless image compression approach for multi-view and material reconstruction datasets with a strong focus on data created from cultural heritage digitalization. Our approach achieves compression rates of 2:1 compared against an uncompressed representation and 1.24:1 when compared against Gzip.
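The reported ratios compare input size against compressed size. As a minimal sketch of how such a Gzip baseline measurement could be made, assuming synthetic stand-in data rather than the paper's actual codec or datasets, the standard-library gzip module suffices:

```python
import gzip
import numpy as np

def compression_ratio(raw: bytes, compressed: bytes) -> float:
    """Ratio of uncompressed to compressed size, e.g. 2.0 means 2:1."""
    return len(raw) / len(compressed)

# Hypothetical stand-in for a 16-bit capture: smooth gradients compress
# well losslessly, which is the property such image data exploits.
img = np.outer(np.arange(256, dtype=np.uint16), np.ones(256, dtype=np.uint16))
raw = img.tobytes()
gz = gzip.compress(raw, compresslevel=9)
print(f"gzip vs raw: {compression_ratio(raw, gz):.2f}:1")
```

A real evaluation would run the candidate codec and Gzip over the same multi-view dataset and compare the two ratios.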
Phenomenological Acquisition and Rendering of Optical Material Behavior for Entire 3D Objects
Darmstadt, TU, Bachelor Thesis, 2019
In the last few years, major improvements in 3D scanning and rendering technology have been accomplished. Especially the acquisition of surface appearance information has seen innovation thanks to phenomenological approaches for capturing lighting behavior. In this work, the current Bi-directional Texturing Function (BTF) and Approximate BTF (ABTF) approaches were extended to allow for a greater depth of effects to be captured as well as the ability to reproduce entire 3D objects from different viewing angles. The proposed Spherical Harmonic BTF (SHBTF) is able to model the captured surface appearance of objects by encoding all measured light samples into spherical harmonic coefficients, allowing the surface appearance to be calculated for any given light direction. In contrast to the ABTF, an SHBTF can capture multiple views of the same object, which enables it to efficiently reproduce anisotropic material properties and subsurface scattering in addition to the spatially varying effects captured by an ABTF. The CultArc3D capturing setup used for all measurements is versatile enough to deliver view and light samples from a full hemisphere around an arbitrary object. It is thus possible to capture entire 3D objects, in contrast to many other BTF acquisition techniques. Challenges for the SH-based lighting solution are ringing artifacts, which grow stronger with rising SH bands. Another challenge for a full 3D experience was the re-projection of camera images onto a 3D model, which depends heavily on the camera hardware calibration. The SH-based approach has the potential to produce compelling results given further optimization of the SH and re-projection accuracy.
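The core encoding step can be sketched as a least-squares fit of per-texel light samples to a real spherical-harmonics basis. The sketch below is a minimal illustration with only bands 0-1 and a hypothetical Lambertian response; the actual SHBTF uses more bands and measured data:

```python
import numpy as np

def sh_basis(dirs):
    """Real spherical harmonics, bands 0-1 (4 coefficients), for unit vectors."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0, c1 = 0.28209479177, 0.48860251190  # 1/(2*sqrt(pi)), sqrt(3/(4*pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def fit_sh(light_dirs, samples):
    """Least-squares fit of SH coefficients to per-light radiance samples."""
    coeffs, *_ = np.linalg.lstsq(sh_basis(light_dirs), samples, rcond=None)
    return coeffs

def eval_sh(coeffs, light_dir):
    """Evaluate the fitted response for an arbitrary new light direction."""
    return sh_basis(light_dir[None, :]) @ coeffs

# Hypothetical measurement: Lambertian-like response max(0, n.l) over a
# hemisphere of light directions, fitted per texel in the real system.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(500, 3))
dirs[:, 2] = np.abs(dirs[:, 2])           # keep to the upper hemisphere
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
normal = np.array([0.0, 0.0, 1.0])
samples = np.maximum(dirs @ normal, 0.0)
c = fit_sh(dirs, samples)
print(eval_sh(c, np.array([0.0, 0.0, 1.0])))  # ~1.0 for light along the normal
```

Once the coefficients are stored per texel, any light direction can be shaded by one basis evaluation and dot product, which is what makes the representation suitable for real-time rendering.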
Seamless and Non-repetitive 4D Texture Variation Synthesis and Real-time Rendering for Measured Optical Material Behavior
Computational Visual Media
We show how to overcome the single weakness of an existing fully automatic system for the acquisition of spatially varying optical material behavior of real object surfaces. While the expression of spatially varying material behavior with spherical dependence on incoming light as a 4D texture (an ABTF material model) allows flexible mapping onto arbitrary 3D geometry, with photo-realistic rendering and interaction in real time, this very method of texture-like representation exposes it to common problems of texturing, manifesting in two disadvantages. Firstly, non-seamless textures create visible artifacts at boundaries. Secondly, even a perfectly seamless texture causes repetition artifacts due to its organised placement in large numbers over a 3D surface. We have solved both problems through our novel texture synthesis method that generates a set of seamless texture variations randomly distributed over the surface at shading time. When compared to regular 2D textures, the inter-dimensional coherence of the 4D ABTF material model poses entirely new challenges to texture synthesis, which include maintaining the consistency of material behavior throughout the 4D space spanned by the spatial image domain and the angular illumination hemisphere. In addition, we tackle the increased memory consumption caused by the numerous variations through a fitting scheme specifically designed to reconstruct the most prominent effects captured in the material model.
Automated Acquisition and Real-time Rendering of Spatially Varying Optical Material Behavior
ACM SIGGRAPH 2018 Posters
International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) <45, 2018, Vancouver, BC, Canada>
We created a fully automatic system for acquisition of spatially varying optical material behavior of real object surfaces under a hemisphere of individual incident light directions. The resulting measured material model is flexibly applicable to arbitrary 3D model geometries, can be photorealistically rendered and interacted with in real-time and is not constrained to isotropic materials.
CultArc3D_mini: Fully Automatic Zero-Button 3D Replicator
Eurographics Workshop on Graphics and Cultural Heritage (GCH) <16, 2018, Vienna, Austria>
3D scanning and 3D printing are two rapidly evolving domains, both generating results with a huge and growing spectrum of applications. Especially in Cultural Heritage, a massive and increasing amount of objects awaits digitization for various purposes, one of them being replication. Yet, current approaches to optical 3D digitization are semi-automatic at best and require great user effort whenever high quality is desired. With our solution we provide the missing link between both domains, and present a fully automatic 3D object replicator which does not require user interaction. The system consists of our photogrammetric 3D scanner CultArc3D_mini that captures an optimal image set for 3D geometry and texture reconstruction and even optical material properties of objects in only minutes, a conveyor system for automatic object feed-in and -out, a 3D printer, and our sensor-based process flow software that handles every single process step of the complex sequence from image acquisition, sensor-based object transportation, 3D reconstruction involving different kinds of calibrations, to 3D printing of the resulting virtual replica immediately after 3D reconstruction. Typically, one-button machines require the user to start the process by interacting over a user interface. Since positioning and pickup of objects is automatically registered, the only thing left for the user to do is placing an object at the entry and retrieving it from the exit after scanning. Shortly after, the 3D replica can be picked up from the 3D printer. Technically, we created a zero-button 3D replicator that provides high-throughput digitization in 3D, requiring only minutes per object, and it is publicly showcased in action at 3IT Berlin.
Example-based Synthesis of Seamless Texture Variations and Application to the Acquisition of Optical Material-Properties
Darmstadt, TU, Master Thesis, 2018
This work extends an existing workflow for the acquisition, synthesis, and rendering of Approximate Bi-directional Texturing Functions (ABTF), which represent a lower-dimensional alternative to Bi-directional Texturing Functions (BTF). In the first steps, image corrections and registration are presented to optimize the current setup. Texture synthesis needs to consider the surface geometry and reflection parameters to generate consistent images for all illumination directions. Furthermore, a method for generating seamless texture tiles and randomly spreading them on a target surface to create renderings with less noticeable texture repetition is discussed in detail. To handle the enormous memory consumption of multiple ABTF datasets, a fitting scheme is proposed that is specifically designed to reconstruct the most prominent effects captured in the ABTF model.
Extension of the ABTF Material Acquisition and Rendering Process to CultArc3D Image Data
Darmstadt, Hochschule, Bachelor Thesis, 2018
The importance of photorealistic 3D rendering of different materials is increasing, as there are various domains of application, such as the textile and 3D games industries. In order to do real-time rendering involving a physical material, a method for its acquisition and realistic rendering on 3D geometry is required. So far, a single-camera system called the ABTF Scanner has been able to acquire flat materials that are anisotropic (appearance dependent on rotation around the surface normal) using a turntable, which makes it possible to map the acquired material onto arbitrary 3D geometry during real-time rendering. Another scanner system consisting of multiple cameras, called CultArc3D, can also be used for this purpose. Due to its structure, which allows lighting over a hemisphere by turning an arc equipped with light sources, there is no need for a turntable to acquire materials, as opposed to the single-camera system, which achieves hemispheric illumination by combining a fixed quarter light arc with a rotary. In order to make images acquired by CultArc3D usable for real-time rendering, this thesis extends the software implemented for the ABTF Scanner. The extension makes the software multi-functional in that it is now able to do real-time rendering for materials acquired by either of the above-mentioned scanner systems. For the first time, images taken by CultArc3D can be used for renderings of ABTF material samples that capture material behavior for a comparable set of virtual light directions. Additionally, a new shader (a computer program used for 3D rendering) is implemented to provide real-time rendering with respect to the different data structure imposed by the concept of CultArc3D.
The experimental evaluation shows that real-time rendering with the images acquired by CultArc3D can lead to better results compared to images taken by the ABTF Scanner, because the back-rotation of images introduced by the rotary in the ABTF Scanner setup is not required by CultArc3D. Thus, a number of calibration and alignment steps that can introduce visual artifacts if not performed correctly are avoided. As a result, CultArc3D can now be used for ABTF real-time rendering in addition to its capability of acquiring geometry, texture, and a number of different optical material models. Due to the hemispherical distribution of camera perspectives around the object, the software can be extended in future work to support different viewing perspectives during rendering.
Synthesis and Rendering of Seamless and Non-Repetitive 4D Texture Variations for Measured Optical Material Properties
SIGGRAPH Asia 2018 Technical Briefs
Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH ASIA) <11, 2018, Tokyo, Japan>
We have lifted the one weakness of an existing fully automatic acquisition system for spatially varying optical material behavior of real object surfaces. While its expression of spatially varying material behavior with spherical dependence on incoming light as a 4D texture (ABTF material model) allows flexible mapping onto arbitrary 3D geometries, photo-realistic rendering and interaction in real time, this very method of texture-like representation exposed it to common problems of texturing, striking at two levels. First, non-seamless textures create visible border artifacts. Second, even a perfectly seamless texture causes repetition artifacts due to side-by-side distribution in large numbers over the 3D surface. We solved both problems through our novel texture synthesis that generates a set of seamless texture variations randomly distributed on the surface at shading time. When compared to regular 2D textures, the inter-dimensional coherence of the 4D ABTF material model poses entirely new challenges to texture synthesis, which include maintaining the consistency of material behavior throughout the space spanned by the spatial image domain and the angular illumination hemisphere. In addition, we tackle the increased memory consumption caused by the numerous variations through a fitting scheme specifically designed to reconstruct the most prominent effects captured in the material model.
3D Mass Digitization: A Milestone for Archeological Documentation
VAR. Virtual Archaeology Review [online]
In the heritage field, the demand for fast and efficient 3D digitization technologies for historic remains is increasing. Moreover, 3D digitization has proved to be a promising approach to enable precise reconstructions of objects. Yet, unlike the digital acquisition of cultural goods in 2D, which is widely used today, 3D digitization often still requires a significant investment of time and money. To make it more widely available to heritage institutions, the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD has developed CultLab3D, the world's first fully automatic 3D mass digitization facility for collections of three-dimensional objects. CultLab3D is specifically designed to automate the entire 3D digitization process, thus allowing users to scan and archive objects on a large scale. Moreover, scanning and lighting technologies are combined to capture the exact geometry, texture, and optical material properties of artefacts to produce highly accurate photo-realistic representations. The unique setup shortens the time needed for digitization to several minutes per artefact instead of hours, as required by conventional 3D scanning methods.
Acceleration of 3D Mass Digitization Processes: Recent Advances and Challenges
Mixed Reality and Gamification for Cultural Heritage
In the heritage field, the demand for fast and efficient 3D digitization technologies for historic remains is increasing. Moreover, 3D digitization has proven to be a promising approach to enable precise reconstructions of cultural heritage objects. Even though 3D technologies and post-processing tools are widespread, and approaches to semantic enrichment and storage of 3D models are just emerging, only a few approaches enable mass capture and computation of 3D virtual models from zoological and archeological findings. To illustrate how future 3D mass digitization systems may look, we introduce CultLab3D, a recent approach to 3D mass digitization, annotation, and archival storage by the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD. CultLab3D can be regarded as one of the first feasible approaches worldwide to enable fast, efficient, and cost-effective 3D digitization. It is specifically designed to automate the entire process and thus allows large amounts of heritage objects to be scanned and archived for documentation and preservation in the best possible quality, taking advantage of integrated 3D visualization and annotation within regular Web browsers using technologies such as WebGL and X3D.
c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources
Eurographics Workshop on Graphics and Cultural Heritage (GCH) <14, 2016, Genova, Italy>
We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, available to everyone. Our novel technique solves the problem of fusing all sources asynchronously captured from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set, the frames are sorted along a common time axis, and the ordered frame set is finally discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene evolution in 3D at a specific point in time. Just like a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured and dynamically changing 3D geometry of an event over time, thus enabling the user to interact with it in the very same way as with a static 3D model. We apply image analysis to automatically maximize the quality of results in the presence of challenging, heterogeneous, and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module, available to mobile end-users.
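The fuse-sort-discretize step described above can be sketched as follows; the device names, timestamps, and fixed-width time window are hypothetical simplifications of the paper's approach:

```python
from itertools import chain

def discretize_frames(sources, window):
    """Fuse frames from multiple devices into time-ordered subsets.

    sources: list of per-device frame lists, each frame a (timestamp, frame_id)
    window:  length of each reconstruction time slot (seconds)
    Returns a dict mapping slot index -> frame ids falling into that slot;
    each subset is the input to one photogrammetric 3D reconstruction.
    """
    frames = sorted(chain.from_iterable(sources))   # common time axis
    t0 = frames[0][0]
    slots = {}
    for t, fid in frames:
        slots.setdefault(int((t - t0) // window), []).append(fid)
    return slots

# Hypothetical input: three devices with asynchronous capture times
cam_a = [(0.0, "a0"), (1.1, "a1"), (2.3, "a2")]
cam_b = [(0.4, "b0"), (1.6, "b1")]
cam_c = [(0.9, "c0"), (2.1, "c1")]
print(discretize_frames([cam_a, cam_b, cam_c], window=1.0))
# {0: ['a0', 'b0', 'c0'], 1: ['a1', 'b1'], 2: ['c1', 'a2']}
```

Each resulting slot then yields one 3D model, and the sequence of models forms the 4D stream.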
CultLab3D - On the Verge of 3D Mass Digitization
Eurographics Symposium on Graphics and Cultural Heritage (GCH) <12, 2014, Darmstadt, Germany>
Acquisition of 3D geometry, texture and optical material properties of real objects still consumes a considerable amount of time, and forces humans to dedicate their full attention to this process. We propose CultLab3D, an automatic modular 3D digitization pipeline, aiming for efficient mass digitization of 3D geometry, texture, and optical material properties. CultLab3D requires minimal human intervention and reduces processing time to a fraction of today's efforts for manual digitization. The final step in our digitization workflow involves the integration of the digital object into enduring 3D Cultural Heritage Collections together with the available semantic information related to the object. In addition, a software tool facilitates virtual, location-independent analysis and publication of the virtual surrogates of the objects, and encourages collaboration between scientists all around the world. The pipeline is designed in a modular fashion and allows for further extensions to incorporate newer technologies. For instance, by switching scanning heads, it is possible to acquire coarser or more refined 3D geometry.
Synthesis of Periodic Textures Using Image Decomposition and Neighborhood Synthesis
Darmstadt, TU, Master Thesis, 2013
This thesis deals with the automatic generation of texture tiles based on a given two-dimensional texture sample. By simply stitching the tiles together, arbitrarily sized planes can be tessellated in little time with low storage requirements. To ensure satisfying results, several demands have to be considered: most essential for a successful tiling is good boundary handling to prevent visible seams. Besides color adjustment, it is particularly challenging to maintain features across tile borders. Moreover, the tile should contain enough information about the texture's topology to be able to reconstruct it realistically. Since there is a variety of textures with different properties and demands, two approaches are proposed that together cover the whole spectrum: the first one initially corrects the input image regarding rotation and warping and then decomposes it into a periodic part and a smooth residual, where the actual decomposition depends on several parameters. The periodic part can then be used as a tile. The second approach reconstructs a transition between opposing borders by extracting matching patches from the input image and stitching them along the borders. Since unwanted repetitions might occur when simply stitching the tiles together, an additional re-synthesis step is proposed at the end of this work that re-synthesizes the inner part of a tile in different ways, generating a stack from which tiles can be randomly chosen and stitched together until the desired space is filled.
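The final re-synthesis idea, drawing tiles from a stack of interchangeable variations so that no variation repeats directly next to itself, can be sketched as a toy grid placement (illustrative only, not the thesis's actual stitching implementation):

```python
import random

def tile_plane(tile_stack, rows, cols, seed=0):
    """Fill a rows x cols grid by randomly drawing tile indices from a stack
    of interchangeable, seamless variations, avoiding obvious repetition."""
    rng = random.Random(seed)
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            choices = list(range(len(tile_stack)))
            # avoid placing the same variation directly left of or above itself
            if row:
                choices = [i for i in choices if i != row[-1]] or choices
            if grid:
                choices = [i for i in choices if i != grid[-1][c]] or choices
            row.append(rng.choice(choices))
        grid.append(row)
    return grid

# Hypothetical stack of four tile variations synthesized from one sample
grid = tile_plane(tile_stack=["t0", "t1", "t2", "t3"], rows=4, cols=6)
for row in grid:
    print(row)
```

With at least three variations in the stack, the adjacency constraint can always be satisfied, so the fallback branch never degrades to a repeat.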
High Resolution Acquisition of Detailed Surfaces with Lens-Shifted Structured Light
Computers & Graphics
We present a novel 3D geometry acquisition technique at high resolution based on structured light reconstruction with a low-cost projector-camera system. Using a 1D mechanical lens-shifter extension in the projector light path, the projected pattern is shifted in subpixel scale steps with a granularity of up to 2048 steps per projected pixel, which opens up novel possibilities in depth accuracy and smoothness for the acquired geometry. Combining the mechanical lens-shifter extension with a multiple phase shifting technique yields a measuring range of 120×80 mm while at the same time providing a high depth resolution of better than 100µm. Reaching beyond depth resolutions achieved by conventional structured light scanning approaches with projector-camera systems, depth layering effects inherent to conventional techniques are fully avoided. Relying on low-cost consumer products only, we reach an area resolution of down to 55µm (limited by the camera). We see two main benefits. First, our acquisition setup can reconstruct finest details of small cultural heritage objects such as antique coins and thus digitally preserve them in appropriate precision. Second, our accurate height fields are a viable input to physically based rendering in combination with measured material BRDFs to reproduce compelling spatially varying, material-specific effects.
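For context, the multiple phase shifting technique mentioned above builds on the standard N-step phase decoding formula, sketched below on synthetic 1D data (the mechanical lens-shifted subpixel encoding itself is not reproduced here):

```python
import numpy as np

def decode_phase(images):
    """Recover the wrapped fringe phase from N phase-shifted images.

    images[n] is the camera signal under a sinusoidal pattern shifted by
    2*pi*n/N; the standard N-step formula is
        phi = atan2( sum_n I_n * sin(2*pi*n/N), sum_n I_n * cos(2*pi*n/N) )
    """
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return np.arctan2(num, den)

# Synthetic check: a known phase ramp, re-rendered as 4 shifted sinusoids
true_phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 64)
imgs = [0.5 + 0.4 * np.cos(true_phi - 2 * np.pi * k / 4) for k in range(4)]
phi = decode_phase(imgs)
print(np.max(np.abs(phi - true_phi)))  # numerically ~0
```

The recovered phase maps to depth after projector-camera calibration; the paper's contribution is to refine this phase via subpixel mechanical pattern shifts, pushing depth resolution below 100 µm.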
Interactive Semantic Enrichment of 3D Cultural Heritage Collections
International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <13, 2012, Brighton, UK>
Virtual surrogates of Cultural Heritage (CH) objects are seriously being considered in professional activities such as conservation and preservation, exhibition planning, packing, and scholarly research, among many other activities. Although this is a very positive development, a bare 3D digital representation is insufficient for supporting the full range of professional activities. In this paper, we present the first interactive semantic enrichment tool for 3D CH collections that is fully based on the CIDOC-CRM schema and that fully supports its sophisticated annotation model. The tool eases user interaction, allowing inexperienced users without prior knowledge of semantic models or 3D modeling to employ it and to integrate it into professional 3D annotation workflows. We illustrate the capabilities of our tool in the context of the Saalburg, a Roman fort (2nd century AD) built to protect the Limes on the Taunus hills in Germany.
Iterative Scan Planning for Automated 3D Reconstruction
Dijon, Univ. de Bourgogne, Master Thesis, 2012
3D scanning is one of the most important and interesting areas in modern computer vision research. It mainly deals with creating a virtual CAD or polygonal mesh model of a real object. For the proper preservation and restoration of cultural objects, it is very important to have detailed information about the object in 3D. The current methods available to scan and represent the digital 3D model of such objects are very cumbersome and expensive. In this thesis, we present a method for iterative scan planning for the automated 3D reconstruction of cultural heritage objects. Our main focus is automated 3D scanning; the primary goal is to implement automated scanning with minimal human intervention throughout the process. The intended method is similar to the concept of an assembly line in production, where the automation and separation of individual tasks help to speed up the process of 3D scanning. We have implemented a method similar to the mass vector chain approach for iterative scanning. Our algorithm predicts the evolution of the 3D surface based on previously scanned surfaces. This idea, coupled with several approaches from graph theory, provides a robust estimation of the surface evolution, which can be used to determine the next pose. Another part of the thesis focuses on geometric lossy compression of 3D meshes using convex hulls. This method, proposed by the author, is entirely new. It is based on the use of ellipsoidal convex hulls to achieve lossy compression of a given 3D mesh. The compressed version of the mesh can later be used for multi-level or multi-resolution viewing and easy transportation over the web. The use of ellipsoidal convex hulls combined with ray-casting provides at least 35 percent compression for the same number of vertices, although the original vertices are not retained. Instead, they are replaced by new vertices obtained by ray-casting onto the surface of the given 3D mesh.
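The mass vector idea behind such next-best-view planning can be sketched as follows: summing the area-weighted normals of the surface scanned so far gives a direction the scanned data collectively faces, and its opposite points toward the unscanned side. This is a minimal toy illustration, not the thesis's full graph-theoretic estimator:

```python
import numpy as np

def next_view_direction(vertices, faces):
    """Mass-vector heuristic for next-best-view planning: sum the
    area-weighted normals of all faces scanned so far; the unscanned
    surface is expected to face roughly opposite that sum."""
    v = vertices
    mass = np.zeros(3)
    for a, b, c in faces:
        # cross product of edge vectors = face normal scaled by 2 * area
        mass += np.cross(v[b] - v[a], v[c] - v[a]) / 2.0
    n = np.linalg.norm(mass)
    return -mass / n if n > 0 else mass

# Hypothetical partial scan: one square facing +z, so the suggested next
# viewing direction points toward -z (the unseen back side).
verts = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
tris = [(0, 1, 2), (0, 2, 3)]
print(next_view_direction(verts, tris))  # points toward -z
```

In an iterative loop, each new scan is merged into the mesh and the heuristic is re-evaluated until the mass vector shrinks, indicating the surface is closed.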
Removing the Example from Example-based Photometric Stereo
Trends and Topics in Computer Vision
Reconstruction and Modeling of Large-Scale 3D Virtual Environments (RMLE) <2010, Heraklion, Greece>
We introduce an example-based photometric stereo approach that does not require explicit reference objects. Instead, we use a robust multi-view stereo technique to create a partial reconstruction of the scene which serves as scene-intrinsic reference geometry. Similar to the standard approach, we then transfer normals from reconstructed to unreconstructed regions based on robust photometric matching. In contrast to traditional reference objects, the scene-intrinsic reference geometry is neither noise free nor does it necessarily contain all possible normal directions for given materials. We therefore propose several modifications that allow us to reconstruct high quality normal maps. During integration, we combine both normal and positional information yielding high quality reconstructions. We show results on several datasets including an example based on data solely collected from the Internet.
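The normal-transfer step can be sketched as a nearest-neighbor lookup in intensity space; the toy Lambertian data and the simple cosine matching below are illustrative assumptions, not the paper's exact robust matching:

```python
import numpy as np

def transfer_normals(ref_intensities, ref_normals, query_intensities):
    """Example-based photometric stereo step: each query pixel (a vector of
    intensities under L lights) borrows the normal of the reference pixel
    with the most similar intensity vector. In the paper, the 'reference'
    is the scene-intrinsic partial reconstruction, not a separate object."""
    def unit(rows):
        # normalize rows so matching is robust to per-pixel albedo scaling
        return rows / np.maximum(np.linalg.norm(rows, axis=1, keepdims=True), 1e-12)
    r, q = unit(ref_intensities), unit(query_intensities)
    idx = np.argmax(q @ r.T, axis=1)       # best cosine match per query pixel
    return ref_normals[idx]

# Hypothetical Lambertian toy data: intensities = max(0, n.l) under 4 lights
lights = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [-1, 0, 1]], float)
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
ref_n = np.array([[0, 0, 1], [0.6, 0, 0.8], [0, 0.6, 0.8]])
ref_i = np.maximum(ref_n @ lights.T, 0)
query_i = 2.0 * ref_i[[2, 0]]              # albedo-scaled copies of known pixels
print(transfer_normals(ref_i, ref_n, query_i))
```

The transferred normals would then be fused with the positional data during integration, as the abstract describes.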
Dense 3D Reconstruction and Object Recognition using a Minimum Set of Inside-Out Images
Pilani, BITS Univ., Bachelor Thesis, 2011
Dense 3D reconstruction of environments is important for various applications such as augmented reality, artefact digitization and object classification; object classification in particular allows for scene understanding. This work proposes a pipeline for image-based 3D reconstruction and object recognition. The 2D images under consideration are inside-out images of the interior of a room. A dense 3D reconstruction describes the room as point clouds on which the object recognition algorithms operate. To allow flexibility in image acquisition, the algorithm is robust to both the type and the number of input images. Matching algorithms such as the Scale Invariant Feature Transform (SIFT) provide accurate correspondences, Structure from Motion (SfM) algorithms use these correspondences to estimate precise camera poses, and Multi-view Stereo (MVS) methods take the posed images as input and produce dense 3D models. The reconstructed scenes are then processed by a 3D feature extractor, and the features are compared with pre-trained classifiers from a database to carry out object recognition. The pipeline supports different types of Multi-view Stereo input images: while planar images allow for cheap equipment, spherical and cylindrical panoramic images ease image acquisition. We have used a modified version of the existing Open Street Map (OSM) Bundler for Structure from Motion and an overlapping-view clustering approach for the Multi-view Stereo. The Iterative Closest Point algorithm integrates the depth maps to generate the mesh model. The pipeline also accepts input generated with the Microsoft Xbox Kinect. We have used the Clustered Viewpoint Feature Histogram (CVFH) algorithm for object recognition and also propose the use of Normal Aligned Radial Features (NARF).
We also study the prospect of using 2D-to-3D feature correlation to locate objects in the generated 3D model of the room from a 2D image of that room. This work further presents the results of a comparative study of the different possible methods for the task, and examines different image geometries to explore invariance to camera models. Finally, the pipeline has been integrated into the Rapid Prototyping Environment (RPE) framework of the Department Interactive Engineering Technologies as a plug-in providing additional functionality.
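The correspondence step that feeds Structure from Motion can be illustrated with Lowe's classic ratio test for descriptor matching. A minimal sketch, assuming brute-force search over plain descriptor lists (the function name and the 0.8 threshold are illustrative, not taken from the thesis):

```python
def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Ratio-test descriptor matching, as commonly used to establish
    SIFT correspondences before Structure from Motion.
    desc_a, desc_b: lists of equal-length descriptor vectors
    (desc_b must contain at least two descriptors).
    Returns (index_in_a, index_in_b) pairs that pass the test."""
    def dist2(u, v):
        # Squared Euclidean distance between two descriptors
        return sum((x - y) ** 2 for x, y in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        # Rank candidates in the other image by distance
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Accept only if the best match is clearly better than the runner-up
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

Ambiguous features (those with two nearly equally good candidates) are rejected, which is what keeps the subsequent camera-pose estimation robust.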
A Full HDR Pipeline from Acquisition to Projection
Siggraph 2010. Full Conference DVD-ROM
International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) <37, 2010, Los Angeles, CA, USA>
In this publication we present one of the first full HDR visualization systems: it starts with HDR material and light acquisition, provides an HDR light simulation and rendering pipeline, and finally displays maximum-fidelity image quality with color-gamut-enhanced HDR projection technology, bringing the total dynamic range to over 5,000,000:1. We demonstrate these capabilities in the fields of car design and architecture.
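To put the quoted contrast ratio in perspective, it can be converted into photographic stops and orders of magnitude. A small helper sketch (function names are illustrative):

```python
import math

def contrast_to_stops(ratio):
    """Convert a display contrast ratio to photographic stops;
    each stop corresponds to a doubling of luminance."""
    return math.log2(ratio)

def contrast_to_orders(ratio):
    """Express the dynamic range in orders of magnitude (log10)."""
    return math.log10(ratio)
```

A ratio of 5,000,000:1 works out to roughly 22 stops, or about 6.7 orders of magnitude of luminance.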
High Resolution Acquisition of Detailed Surfaces with Lens-Shifted Structured Light
International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <11, 2010, Paris, France>
We present a novel 3D geometry acquisition technique at high resolution based on structured light reconstruction with a low-cost projector-camera system. Using a 1D mechanical lens-shifter extension in the projector light path, the projected pattern is shifted in fine steps at sub-pixel scale with a granularity of down to 2048 steps per projected pixel, which opens up novel possibilities in depth accuracy and smoothness for the acquired geometry. Combining the mechanical lens-shifter extension with a multiple phase-shifting technique yields a measuring range of 120x80 mm while at the same time providing a high depth resolution of better than 100 micron. Our approach reaches far beyond the depth resolutions achieved by conventional structured light scanning with projector-camera systems and fully avoids the depth-layering effects inherent to those techniques. Relying on low-cost consumer products only, we reach an area resolution of down to 55 micron (limited by the camera). We see two fields of benefit. Firstly, our acquisition setup can reconstruct the finest details of small Cultural Heritage objects such as antique coins and thus digitally preserve them with appropriate precision. Secondly, our accurate height fields can serve as input to physically based rendering in combination with measured material BRDFs to reproduce compelling, spatially varying, material-specific effects.
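The phase-shifting principle underlying such structured light systems can be illustrated with the standard N-step phase-recovery formula. A minimal per-pixel sketch, assuming N equally spaced shifts of a sinusoidal fringe (this shows the textbook formula, not the paper's lens-shifted extension):

```python
import math

def phase_from_shifts(intensities):
    """Recover the wrapped fringe phase at one pixel from N equally
    spaced phase-shifted samples I_n = A + B*cos(phi + 2*pi*n/N).
    Standard N-step phase-shifting formula; returns phi in (-pi, pi].
    """
    n_steps = len(intensities)
    # Correlate the samples against sine and cosine of the shift angle
    s = sum(i * math.sin(2 * math.pi * k / n_steps)
            for k, i in enumerate(intensities))
    c = sum(i * math.cos(2 * math.pi * k / n_steps)
            for k, i in enumerate(intensities))
    # The sums reduce to -(N*B/2)*sin(phi) and (N*B/2)*cos(phi)
    return math.atan2(-s, c)
```

The ambient term A and the modulation B cancel out, which is why the recovered phase, and hence depth, is insensitive to local albedo and background illumination.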
Scene Reconstruction from Community Photo Collections
The literally billions of images available from online photo-sharing sites offer an unprecedented wealth of information, but also introduce additional layers of complexity for reconstruction applications.
Surface Geometry Acquisition Using an Analog Lens-Shifting Device
Darmstadt, TU, Bachelor Thesis, 2010
With the development of ever more powerful and flexible graphics hardware, increasingly complex rendering algorithms can be realized. In addition, the high-resolution displays that are widespread today demand very data-intensive content. As a result, the creation and acquisition of data for realistic rendering is becoming ever more important. The faithful modeling and illumination of surfaces plays a particularly important role here. Many efforts concentrate on measuring and rendering reflectance properties, but geometric aspects of the surface structure are also decisive for computing realistic images. Although a number of algorithms and systems for geometry acquisition exist, most of them are not suited to resolving fine details sufficiently. This thesis therefore deals with the design and implementation of a reconstruction algorithm for a 3D scanner which, by means of a special projector extension, can meet these high accuracy requirements.
A Combined Multi-View Stereo and Photometric Stereo Approach
Darmstadt, TU, Master Thesis, 2009
Two approaches to 3D reconstruction from images, Multi-view Stereo and Photometric Stereo, are combined in order to complete missing areas in the partial reconstruction returned by the first approach. This is realized by exploiting the Photometric Stereo principle of orientation consistency, which states that image positions with similar appearance have similar surface orientation on the original object surface. The reconstruction is thus completed by finding pairs of image positions with similar appearance: one with available reconstruction serving as the source, the other with missing reconstruction as the target for the transfer of normals.
Satisfaction of Continuity Constraints Controlling the Transition Between Free-form Surfaces for Virtual Styling
Darmstadt, TU, Bachelor Thesis, 2006
This Bachelor thesis realizes a consistency mechanism in the domain of Virtual Styling. It checks the feasibility of free-form creation and deformation operations by testing whether newly introduced continuity constraints conform to existing ones, and it propagates deformations through the constraint graph by applying the necessary surface modifications so that the continuity constraints at the transitions between adjacent free-form surfaces are satisfied, leading the system into a consistent state. Several measures ensure that the consistency mechanism and constraint satisfaction perform well enough for virtual conceptual design. One is to represent the constraint satisfaction problem in an extended constraint hypergraph; together with the rules defined for the consistent growth and modification of the free-form model, this allows for efficient consistency verification and propagation of changes throughout the constraint graph. The consistency mechanism is further optimized by exploiting the geometric properties of adjacent NURBS surfaces with continuous transitions: the affected area of a shape deformation is limited to the 2-neighbourhood of the modified shape node in the constraint graph, reducing the constraint satisfaction problem to a well-defined sub-graph of the given constraint graph.
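The 2-neighbourhood locality described above maps naturally onto a depth-limited breadth-first search over the constraint graph. A minimal sketch under simplifying assumptions (a plain adjacency-dict graph rather than the extended constraint hypergraph; all names are illustrative):

```python
from collections import deque

def affected_nodes(graph, modified, max_depth=2):
    """Collect the nodes whose continuity constraints must be
    re-satisfied after one shape node is modified.

    graph: dict mapping each node to a list of adjacent nodes.
    Propagation is cut off at the 2-neighbourhood, mirroring the
    locality of adjacent NURBS surfaces with continuous transitions,
    so constraint satisfaction runs on a small sub-graph only.
    """
    depths = {modified: 0}
    queue = deque([modified])
    while queue:
        node = queue.popleft()
        if depths[node] == max_depth:
            continue  # do not expand beyond the 2-neighbourhood
        for neigh in graph.get(node, []):
            if neigh not in depths:
                depths[neigh] = depths[node] + 1
                queue.append(neigh)
    return set(depths)
```

On a chain of surfaces a-b-c-d, modifying a would require re-satisfying constraints only on a, b and c, leaving d untouched.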