Acceleration of 3D Mass Digitization Processes: Recent Advances and Challenges
Mixed Reality and Gamification for Cultural Heritage
In the heritage field, the demand for fast and efficient 3D digitization technologies for historic remains is increasing. Moreover, 3D has proven to be a promising approach to enable precise reconstructions of cultural heritage objects. Even though 3D technologies and postprocessing tools are widespread and approaches to semantic enrichment and storage of 3D models are just emerging, only a few approaches enable mass capture and computation of 3D virtual models from zoological and archeological findings. To illustrate what future 3D mass digitization systems may look like, we introduce CultLab3D, a recent approach to 3D mass digitization, annotation, and archival storage by the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD. CultLab3D can be regarded as one of the first feasible approaches worldwide to enable fast, efficient, and cost-effective 3D digitization. It is specifically designed to automate the entire process and thus allows scanning and archiving large amounts of heritage objects for documentation and preservation in the best possible quality, taking advantage of integrated 3D visualization and annotation within regular Web browsers using technologies such as WebGL and X3D.
3DHOG for Geometric Similarity Measurement and Retrieval on Digital Cultural Heritage Archives
Intelligent Interactive Multimedia Systems and Services 2016
KES International Conference on Intelligent Interactive Multimedia Systems and Services (IIMSS) <9, 2016, Puerto de la Cruz, Tenerife, Spain>
Smart Innovation, Systems and Technologies
With projects such as CultLab3D, 3D digital preservation of cultural heritage will become more affordable, and with this the number of 3D models representing scanned artefacts will increase dramatically. However, once mass digitization is possible, the subsequent bottleneck to overcome is the annotation of cultural heritage artefacts with provenance data. Current annotation tools are mostly based on textual input, possibly able to link an artefact to documents, pictures, and videos; only some tools already support 3D models. Therefore, we envisage the need to aid curators by allowing for fast, web-based, semi-automatic, 3D-centered annotation of artefacts with metadata. In this paper we give an overview of various technologies we are currently developing to address this issue. On the one hand, we want to store 3D models with similarity descriptors that are applicable independently of the different 3D model quality levels of the same artefact. The goal is to retrieve the metadata of already annotated similar artefacts and suggest it to the curator for a new artefact to be annotated, so that they can reuse and adapt it to the current case. In addition, we describe our web-based, 3D-centered annotation tool with metadata and object repositories supporting various databases and ontologies such as CIDOC-CRM.
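The descriptor-based metadata suggestion described above can be sketched as a nearest-neighbour search over geometric similarity descriptors. The descriptor dimensionality, the artefact labels, and the function name below are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def suggest_metadata(query_desc, archive_descs, archive_metadata, k=3):
    """Return the metadata of the k archived artefacts whose descriptors
    are closest (Euclidean distance) to the query descriptor.
    Hypothetical helper for illustration only."""
    dists = np.linalg.norm(archive_descs - query_desc, axis=1)
    nearest = np.argsort(dists)[:k]
    return [archive_metadata[i] for i in nearest]

# Toy archive: 4 artefacts with 8-dimensional shape descriptors.
rng = np.random.default_rng(0)
descs = rng.normal(size=(4, 8))
meta = ["amphora", "coin", "bust", "vase"]

# A query very close to artefact 2 should rank "bust" first,
# so its existing annotations can be suggested for reuse.
query = descs[2] + 0.01 * rng.normal(size=8)
print(suggest_metadata(query, descs, meta, k=2))
```

In a real archive the descriptors would come from a shape descriptor such as 3DHOG and the search would use an index structure rather than a brute-force scan.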
A Modular Architecture for a Driving Simulator Based on the FDMU Approach
IJIDeM - International Journal on Interactive Design and Manufacturing
The present paper describes the development of a modular and easily configurable simulation platform for ground vehicles. This platform should be usable for the implementation of driving simulators employed both for training purposes and for testing vehicle components. In particular, the paper presents a first architectural model for the implementation of a simulation platform based on the Functional Digital Mock-Up (FDMU) approach. This platform will allow engineers to implement different kinds of simulators that integrate both physical and virtual components, making it possible to quickly reconfigure the architecture depending on the hardware and software used and on the needs of a specific test case. The platform has been tested by developing a case study that integrates a motion platform, several I/O devices, and a simple dynamic ground vehicle model implemented in OpenModelica.
CultLab3D - On the Verge of 3D Mass Digitization
Eurographics Symposium on Graphics and Cultural Heritage (GCH) <12, 2014, Darmstadt, Germany>
Acquisition of 3D geometry, texture and optical material properties of real objects still consumes a considerable amount of time, and forces humans to dedicate their full attention to this process. We propose CultLab3D, an automatic modular 3D digitization pipeline, aiming for efficient mass digitization of 3D geometry, texture, and optical material properties. CultLab3D requires minimal human intervention and reduces processing time to a fraction of today's efforts for manual digitization. The final step in our digitization workflow involves the integration of the digital object into enduring 3D Cultural Heritage Collections together with the available semantic information related to the object. In addition, a software tool facilitates virtual, location-independent analysis and publication of the virtual surrogates of the objects, and encourages collaboration between scientists all around the world. The pipeline is designed in a modular fashion and allows for further extensions to incorporate newer technologies. For instance, by switching scanning heads, it is possible to acquire coarser or more refined 3D geometry.
Interactive Semantic Enrichment of 3D Cultural Heritage Collections
International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <13, 2012, Brighton, UK>
Virtual surrogates of Cultural Heritage (CH) objects are seriously being considered in professional activities such as conservation and preservation, exhibition planning, packing, and scholarly research, among many others. Although this is a very positive development, a bare 3D digital representation is insufficient for fulfilling the full range of professional activities. In this paper, we present the first interactive semantic enrichment tool for 3D CH collections that is fully based on the CIDOC-CRM schema and that fully supports its sophisticated annotation model. The tool eases user interaction, allowing inexperienced users without previous knowledge of semantic models or 3D modeling to employ it and to adopt it in professional workflows on 3D annotations. We illustrate the capabilities of our tool in the context of the Saalburg, a Roman fort (2nd century AD) built to protect the Limes on the Taunus hills in Germany.
Dense 3D Reconstruction and Object Recognition using a Minimum Set of Inside-Out Images
Pilani, BITS Univ., Bachelor Thesis, 2011
Dense 3D reconstruction of environments is important for various applications such as augmented reality, artefact digitization, and object classification. Object classification in particular allows for scene understanding. This work proposes the development of a pipeline for image-based 3D reconstruction and object recognition. The 2D images under consideration are inside-out images of the interior of a room. A dense 3D reconstruction allows the description of the room as point clouds, on which the object recognition algorithms are applied. To allow for flexibility in terms of image acquisition methods, the algorithm is robust to the type of image input as well as to the number of images. Matching algorithms such as the Scale Invariant Feature Transform (SIFT) provide accurate correspondences, Structure from Motion (SfM) algorithms use these correspondences to estimate precise camera poses, and Multi-view Stereo (MVS) methods take the posed images as input and produce dense 3D models. The reconstructed scenes are then processed by a 3D feature extractor, and the features are compared with pre-trained classifiers from a database to carry out object recognition. The pipeline has been developed to allow for different types of Multi-view Stereo input images: while planar images allow for cheap equipment, spherical and cylindrical panoramic images ease image acquisition. We have used a modified version of the existing Open Street Map (OSM) Bundler for Structure from Motion and an overlapping-view clustering approach for the Multi-view Stereo. The Iterative Closest Point (ICP) algorithm is used to integrate the depth maps and generate the mesh model. The pipeline also accepts input generated with the Microsoft Xbox Kinect. We have used the Clustered Viewpoint Feature Histogram (CVFH) algorithm for object recognition and also proposed the use of Normal Aligned Radial Features (NARF).
We also study the prospect of using 2D-to-3D feature correlation to find objects in the generated 3D model of the room from a 2D image of that room. This work also presents the results of a comparative study of the different possible methods to complete the task. We further study different image geometries to explore invariance to camera models. Finally, the pipeline has been integrated as a plug-in into the Rapid Prototyping Environment (RPE) framework of the Interactive Engineering Technologies department to provide additional functionality.
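The depth-map integration step mentioned above relies on the Iterative Closest Point algorithm. A minimal point-to-point ICP, with brute-force nearest-neighbour matching and a Kabsch solve per iteration, might look as follows; this is a didactic sketch on synthetic data, not the pipeline's actual implementation:

```python
import numpy as np

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: repeatedly match each source point to
    its nearest destination point, then solve for the rigid transform
    (Kabsch algorithm) that best aligns the matched pairs."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Kabsch: optimal rotation between the centred point sets.
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Synthetic test: slightly rotate/translate a cloud, then recover the transform.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(50, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.03, 0.02])
moved = cloud @ R_true.T + t_true
R_est, t_est = icp(cloud, moved)
print(np.abs(cloud @ R_est.T + t_est - moved).max())
```

A production system would use a spatial index (e.g. a k-d tree) for the correspondence search and would have to cope with partial overlap and sensor noise between depth maps.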
LIS3D: Low-Cost 6DOF Laser Interaction for Outdoor Mixed Reality
Virtual and Mixed Reality - New Trends: Part I
International Conference on Virtual and Mixed Reality (VMR) <4, 2011, Orlando, FL, USA>
This paper introduces a new low-cost, laser-based 6DOF interaction technology for outdoor mixed reality applications. It can be used in a variety of outdoor mixed reality scenarios for making 3D annotations or correctly placing 3D virtual content anywhere in the real world. In addition, it can also be used with virtual back-projection displays for scene navigation purposes. Applications can range from design review in the architecture domain to cultural heritage experiences on location. Previous laser-based interaction techniques only yielded 2D or 3D intersection coordinates of the laser beam with a real world object. The main contribution of our solution is that we are able to reconstruct the full pose of an area targeted by our laser device in relation to the user. In practice, this means that our device can be used to navigate any scene in 6DOF. Moreover, we can place any virtual object or any 3D annotation anywhere in a scene, so it correctly matches the user's perspective.
Supporting Outdoor Mixed Reality Applications for Architecture and Cultural Heritage
2010 Proceedings of the Symposium on Simulation for Architecture and Urban Design
Symposium on Simulation for Architecture and Urban Design (SimAUD) <1, 2010, Orlando, FL, USA>
This paper introduces new approaches to enable collaborative outdoor mixed reality design review in the architectural domain as well as outdoor mixed reality experiences in the cultural heritage domain. For this purpose we present the results of three closely related European projects: IMPROVE and CINeSPACE, which are currently succeeded by MAXIMUS, continuing the development of technologies relevant to the two domains. The paper focuses on the base technologies needed to develop usable outdoor mixed reality applications, such as marker-less optical tracking combined with sensor fusion for accurate pose estimation outdoors. Furthermore, the paper presents a new visualization system developed within the project, one of the few daylight-blocking head-mounted displays available. In addition to pose estimation and visualization devices, the paper presents a VR framework adapted to the architecture and cultural heritage domains, featuring high dynamic range image acquisition combined with pre-computed radiance transfer rendering to photo-realistically render urban content. For the cultural heritage domain this framework has been extended to allow accurate display and generation of multimedia content superimposed on a real environment, as well as city navigation.
The Hybrid Outdoor Tracking Extension for the Daylight Blocker Display
SIGGRAPH Asia 2009
International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH Asia) <2, 2009, Yokohama, Japan>
From UMPCs to smartphones, we witness the emergence of highly integrated mobile computing platforms that boast higher performance than any of their preceding systems. However, due to the equally growing demand for ever more complex applications, such as mixed reality applications for outdoor scenarios, the need for an efficient distribution of resources among the different application tasks remains. In particular, pose estimation in outdoor environments still presents a major ongoing challenge. Since mobile platforms have limited performance, the best approach for real-time pose estimation is sensor fusion combining optical and inertial sensors. However, different algorithms using different sensors require varying amounts of processing power (e.g. there are peaks when processing key frames for feature point detection). In this paper we introduce a novel sensor fusion pose estimation approach, which we combine with the first compact daylight-blocking optical stereo see-through display for mixed reality, presented a year ago [Santos et al. 2008a] [Santos et al. 2008b]. Through two new feature matching algorithms and appropriate sequencing of tracking algorithms using different sensors, we attempt to achieve time-constant tracking update rates while keeping the effort for pose estimation at a fixed share of the overall available computing performance. By doing so we are able to guarantee the main mixed reality application a fixed share of the remaining computing performance on the mobile platform while preserving high tracking stability.
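The optical/inertial fusion idea can be illustrated with a simple one-dimensional complementary filter: the fast but drifting gyro rate is integrated, while the slow but drift-free optical measurement corrects the accumulated drift. The signal shapes, bias value, and blending weight below are illustrative assumptions, not the paper's actual fusion algorithm:

```python
import math

def complementary_filter(gyro_rates, optical_angles, dt, alpha=0.98):
    """1-D complementary filter: integrate the (fast, drifting) gyro rate
    and pull the estimate toward the (slow, noisy but drift-free)
    optical measurement with weight (1 - alpha)."""
    angle = optical_angles[0]
    out = []
    for rate, meas in zip(gyro_rates, optical_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * meas
        out.append(angle)
    return out

# Simulated signals: a slow oscillation, a biased gyro, noisy optical fixes.
dt, n = 0.01, 500
true = [0.5 * math.sin(k * dt) for k in range(n)]
gyro = [0.5 * math.cos(k * dt) + 0.05 for k in range(n)]   # +0.05 rad/s bias
optic = [a + 0.02 * math.sin(37 * k * dt) for k, a in enumerate(true)]

est = complementary_filter(gyro, optic, dt)
print(abs(est[-1] - true[-1]))
```

Pure gyro integration would accumulate the 0.05 rad/s bias into a 0.25 rad error over these five seconds; the optical correction keeps the estimate close to the true angle at negligible per-step cost, which is why such fusion suits resource-constrained mobile platforms.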