AR Tracking with Hybrid, Agnostic And Browser Based Approach
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR 2019). Proceedings
IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) <2, 2019, San Diego, CA>
Mobile platforms are desirable when it comes to practical augmented reality applications. With the convenience and portability that the form factor has to offer, it provides an ideal foundation for feasible use cases in industrial and commercial applications. Here, we present a novel approach that uses the monocular Simultaneous Localization and Mapping (SLAM) information provided by a Cross-Reality (XR) device to augment linked 3D CAD models. The main objective is to use this tracking technology for an augmented and mixed reality experience by tracking a 3D model and superimposing its respective 3D CAD model data over the images received from the camera feed of the XR device, without any scene preparation (e.g., markers or feature maps). The intent is to conduct visual analyses and evaluations based on the intrinsics and extrinsics of the model in the instant3Dhub visualization system. To achieve this, we make use of Apple's ARKit to obtain the images, sensor data and SLAM heuristics of the client XR device, remote marker-less model-based 3D object tracking from monocular RGB image data, and a hybrid client-server architecture. Our approach is agnostic of any particular SLAM system or Augmented Reality (AR) framework. We use Apple's ARKit because of its ease of use, affordability, stability and maturity as a platform and as an integrated system.
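The core of such an overlay is the pinhole projection of CAD model points into the camera image using the per-frame pose (extrinsics) and intrinsics reported by the SLAM system. The following is a minimal numpy sketch of that projection step; the function name and the world-to-camera matrix convention are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def project_points(points_world, extrinsics, intrinsics):
    """Project 3D model points (world frame) into pixel coordinates
    using a per-frame camera pose and intrinsic matrix, as a SLAM
    framework such as ARKit would report them per frame.
    NOTE: hypothetical helper, not part of any real ARKit/instant3Dhub API."""
    n = points_world.shape[0]
    # Homogeneous world points, shape (N, 4)
    pts_h = np.hstack([points_world, np.ones((n, 1))])
    # World -> camera: extrinsics assumed to be a 4x4 world-to-camera matrix
    pts_cam = (extrinsics @ pts_h.T).T[:, :3]
    # Camera -> image: pinhole projection with 3x3 intrinsic matrix K
    pix = (intrinsics @ pts_cam.T).T
    return pix[:, :2] / pix[:, 2:3]

# Toy values: identity pose, fx = fy = 500, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pose = np.eye(4)
# A point on the optical axis projects onto the principal point
uv = project_points(np.array([[0.0, 0.0, 2.0]]), pose, K)
```

In the hybrid client-server setup described above, the client would send the image plus this pose and intrinsic data to the server, which refines the object pose and returns it for rendering the CAD overlay.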
Extending Mixed Reality Interaction Using External Sensor Systems
Darmstadt, TU, Bachelor Thesis, 2019
Microsoft's HoloLens is a Mixed Reality head-mounted display (HMD) which has been available on the market since 2016. Its first-generation model supports some fundamental hand gestures by itself; however, permanent tracking of finger and joint positions is not supported. This drastically limits the user's ability to interact with the Mixed Reality environment. Phenomena like hand occlusion further restrict the user's feeling of immersion. A combination of the HoloLens with modern hand-tracking systems like the Leap Motion Controller (LMC), however, could remove these limitations. The final goal of this work was to develop a generic framework for the registration of one or more external sensors with an MR device. The framework was demonstrated with an implementation for the LMC and the HoloLens. Based on it, a prototype was developed which renders the finger joint coordinates tracked by the LMC at their actual real-world position, i.e. where the user's hand is located, in a holographic application. For this purpose, a point transformation pipeline was worked out which maps the LMC's tracking data onto the corresponding coordinates in holographic space. As a prerequisite, a setup with the LMC mounted on top of the HoloLens was used. In order to obtain accurate results, precisely computing the pose of both devices relative to each other is crucial. In this work, the pose estimation was achieved using a two-step method based on standard camera calibration with a chessboard pattern as the calibration object. Due to the different fields of view of the two devices' cameras, their joint calibration turned out to be a major challenge that needed to be solved. This thesis describes how the prototype was developed, how transformation accuracy was ensured and in which ways it can be improved.
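The point transformation pipeline boils down to applying the rigid transform obtained from the chessboard calibration to each tracked joint. Below is a minimal numpy sketch of that mapping; the rotation, translation and function name are placeholder assumptions standing in for the thesis' calibrated values.

```python
import numpy as np

def to_holographic(p_lmc, R, t):
    """Map a point tracked in the LMC coordinate frame into the
    holographic (HoloLens) frame using a rigid transform, as the
    chessboard-based calibration described in the thesis would
    produce. R (3x3 rotation) and t (3-vector translation) are
    placeholders for the real calibration result."""
    return R @ p_lmc + t

# Illustrative stand-in for a calibration result: a 90-degree
# rotation about the z-axis plus a small mounting offset.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.05, 0.10])
# One tracked fingertip position in LMC coordinates (meters)
p_holo = to_holographic(np.array([1.0, 0.0, 0.0]), R, t)
```

In the actual prototype this transform would be applied per frame to every finger joint the LMC reports, before rendering them in the holographic application.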
Depth Image Based Composition in Distributed Rendering Environments
Darmstadt, TU, Master Thesis, 2017
This thesis presents an approach based on Depth Image Based Rendering (DIBR) that aims to enable the smooth display of scenes even on less powerful devices. It builds on a client-server architecture in which the server provides images on request, which the client then adapts to its needs using DIBR. In addition, a camera predictor is used to optimize the client's requests. The quality of the generated images is evaluated in three different simulations.
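The DIBR step on the client consists of back-projecting each pixel of a server-rendered depth image to 3D and reprojecting it into the new client viewpoint. A minimal numpy sketch of that warp is shown below; the function name and the assumption that both views share the same intrinsics are illustrative, not taken from the thesis.

```python
import numpy as np

def reproject(depth, K, T_new_old):
    """Warp pixels from a server-rendered view into a new client view
    using the depth image (depth image based rendering). Returns the
    target pixel coordinates (u_new, v_new) for every source pixel.
    Hypothetical sketch; assumes both views use intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to 3D camera coordinates of the old view
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # (h, w, 4)
    # Rigidly transform into the new view, then project again
    pts_new = pts @ T_new_old.T
    u_new = K[0, 0] * pts_new[..., 0] / pts_new[..., 2] + K[0, 2]
    v_new = K[1, 1] * pts_new[..., 1] / pts_new[..., 2] + K[1, 2]
    return u_new, v_new

# Sanity check: with an identity view change, pixels map onto themselves
K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0,   0.0, 1.0]])
depth = np.full((4, 4), 1.0)
u_new, v_new = reproject(depth, K, np.eye(4))
```

A full implementation would additionally handle occlusions and disocclusion holes, which is where the camera predictor mentioned above helps by requesting server images close to the anticipated client viewpoint.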
Comparative Local Quality Assessment of 3D Medical Image Segmentations with Focus on Statistical Shape Model-based Algorithms
IEEE Transactions on Visualization and Computer Graphics
The quality of automatic 3D medical segmentation algorithms needs to be assessed on test datasets comprising several 3D images (i.e., instances of an organ). Experts need to compare the segmentation quality across the dataset in order to detect systematic segmentation problems. However, such comparative evaluation is not well supported by current methods. We present a novel system for assessing and comparing segmentation quality in a dataset with multiple 3D images. The data is analyzed and visualized in several views. We detect and show regions with systematic segmentation quality characteristics. For this purpose, we extended a hierarchical clustering algorithm with a connectivity criterion. We combine quality values across the dataset to determine regions with characteristic segmentation quality across instances. Using our system, experts can also identify 3D segmentations with extraordinary quality characteristics. While we focus on algorithms based on statistical shape models, our approach can also be applied to other cases where landmark correspondences among instances can be established. We applied our approach to three real datasets: liver, cochlea and facial nerve. The segmentation experts were able to identify organ regions with systematic segmentation characteristics as well as to detect outlier instances.
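The connectivity criterion means that only vertices adjacent on the organ mesh may be grouped into the same quality region. The following is a strongly simplified stand-in sketch (union-find over a thresholded adjacency graph rather than the paper's extended hierarchical clustering); all names and the toy data are assumptions for illustration.

```python
import numpy as np

def connected_quality_regions(values, edges, threshold):
    """Group mesh vertices into connected regions of low quality values.
    Simplified stand-in for hierarchical clustering with a connectivity
    criterion: `edges` lists (i, j) pairs from the mesh adjacency graph,
    and only adjacent low-quality vertices may join the same region."""
    low = values < threshold
    parent = list(range(len(values)))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Union adjacent vertices that both satisfy the quality criterion
    for i, j in edges:
        if low[i] and low[j]:
            parent[find(i)] = find(j)

    regions = {}
    for i in np.flatnonzero(low):
        regions.setdefault(find(i), []).append(int(i))
    return list(regions.values())

# Toy chain mesh: vertices 0..4; vertices 0, 1 and 3 have low quality,
# yielding two separate connected regions ({0, 1} and {3}).
vals = np.array([0.1, 0.2, 0.9, 0.1, 0.8])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
regions = connected_quality_regions(vals, edges, 0.5)
```

The real system goes further by combining such per-instance quality values across the whole dataset to expose regions with systematic segmentation problems.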
Visual Analysis of Local Correspondence in Segmentation Quality
Darmstadt, TU, Bachelor Thesis, 2013
This bachelor thesis presents a new interactive system for visual and exploratory analysis of local correspondence in segmentation quality. Segmentations of several samples of one organ are analyzed on the basis of pairwise distances between a reference mesh and a test mesh, which is extracted from the organ segmentation. The tool features several views on the data (coloring, threshold-based highlighting, average mesh visualization) and a set of analysis methods (clustering, cluster quality evaluation, dimension reduction) to extract new information, such as recurring regions or patterns of low-quality segmentation. Segmentation algorithm developers can use the visual information to gain knowledge of how their algorithms work. This insight can be beneficial for improving the algorithms. The software is optimized for analyzing medical image segmentations, but can also be transferred to other domains, as it simply operates on the extracted mesh data.
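The basic quantity driving the coloring and threshold-based highlighting is the per-vertex distance between the reference mesh and the test mesh. A minimal numpy sketch is given below; it assumes the two meshes share vertex correspondence, and the function name is an illustrative invention.

```python
import numpy as np

def vertex_distances(ref_vertices, test_vertices):
    """Per-vertex Euclidean distance between a reference mesh and a
    test mesh, assuming one-to-one vertex correspondence. These values
    can drive mesh coloring or threshold-based highlighting.
    Hypothetical helper, not the thesis' actual code."""
    return np.linalg.norm(test_vertices - ref_vertices, axis=1)

# Two toy meshes with two corresponding vertices each
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
test = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
d = vertex_distances(ref, test)
```

The analysis methods mentioned above (clustering, dimension reduction) would then operate on such per-vertex distance vectors collected over many organ samples.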