List of Publications
Using Dashboard Networks to Visualize Multiple Patient Histories: A Design Study on Post-operative Prostate Cancer
IEEE Transactions on Visualization and Computer Graphics
In this design study, we present a visualization technique that segments patients' histories instead of treating them as raw event sequences, aggregates the segments using criteria such as the whole history or treatment combinations, and then visualizes the aggregated segments as static dashboards that are arranged in a dashboard network to show longitudinal changes. The static dashboards were developed in nine iterations to show 15 important attributes from the patients' histories. The final design was evaluated with five non-experts, five visualization experts, and four medical experts, who successfully used it to gain an overview of a 2,000-patient dataset and to make observations about longitudinal changes and differences between two cohorts. The research represents a step-change in the detail of large-scale data that may be successfully visualized using dashboards, and provides guidance about how the approach may be generalized.
3D Printing Spatially Varying Color and Translucency
ACM Transactions on Graphics
We present an efficient and scalable pipeline for fabricating full-color objects with spatially varying translucency from practical and accessible input data via multi-material 3D printing. Observing that the costs associated with BSSRDF measurement and processing are high, that the range of 3D-printable BSSRDFs is severely limited, and that the human visual system relies only on simple high-level cues to perceive translucency, we propose a method based on reproducing perceptual translucency cues. The input to our pipeline is an RGBA signal defined on the surface of an object, making our approach accessible and practical for designers. We propose a framework for extending standard color management and profiling to combined color and translucency management using a gamut correspondence strategy we call opaque relative processing. We present an efficient streaming method to compute voxel-level material arrangements, achieving both realistic reproduction of measured translucent materials and artistic effects involving multiple fully or partially transparent geometries.
Box Cutter: Atlas Refinement for Efficient Packing via Void Elimination
ACM Transactions on Graphics
Packed atlases, consisting of 2D parameterized charts, are ubiquitously used to store surface signals such as texture or normals. Tight packing is similarly used to arrange and cut out 2D panels for fabrication from sheet materials. Packing efficiency, or the ratio between the areas of the packed atlas and its bounding box, significantly impacts downstream applications. We propose Box Cutter, a new method for optimizing packing efficiency suitable for both settings. Our algorithm improves packing efficiency without changing distortion by strategically cutting and repacking the atlas charts or panels. It preserves the local mapping between the 3D surface and the atlas charts and retains global mapping continuity across the newly formed cuts. We balance packing efficiency improvement against increase in chart boundary length and enable users to directly control the acceptable amount of boundary elongation. While the problem we address is NP-hard, we provide an effective practical solution by iteratively detecting large rectangular empty spaces, or void boxes, in the current atlas packing and eliminating them by first refining the atlas using strategically placed axis-aligned cuts and then repacking the refined charts. We repeat this process until no further improvement is possible, or until the desired balance between packing improvement and boundary elongation is achieved. Packed chart atlases are only useful for the applications we address if their charts are overlap-free; yet many popular parameterization methods, used as-is, produce atlases with global overlaps. Our pre-processing step eliminates all input overlaps while explicitly minimizing the boundary length of the resulting overlap-free charts. We demonstrate our combined strategy on a large range of input atlases produced by diverse parameterization methods, as well as on multiple sets of 2D fabrication panels. Our framework dramatically improves the output packing efficiency on all inputs; for instance, with boundary length increase capped at 50%, we improve packing efficiency by 68% on average.
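The packing-efficiency metric used in this abstract (packed area over bounding-box area) can be illustrated with a minimal pure-Python sketch. The axis-aligned rectangle representation and the function name are illustrative simplifications; real atlas charts are arbitrarily shaped polygons.

```python
def packing_efficiency(rects):
    """Ratio of total rectangle area to the area of their common
    axis-aligned bounding box. rects: list of (x, y, w, h) tuples."""
    if not rects:
        return 0.0
    used = sum(w * h for _, _, w, h in rects)
    min_x = min(x for x, _, _, _ in rects)
    min_y = min(y for _, y, _, _ in rects)
    max_x = max(x + w for x, _, w, _ in rects)
    max_y = max(y + h for _, y, _, h in rects)
    bbox = (max_x - min_x) * (max_y - min_y)
    return used / bbox

# Two unit squares side by side fill their 2x1 bounding box completely.
print(packing_efficiency([(0, 0, 1, 1), (1, 0, 1, 1)]))  # 1.0
# Placed diagonally, they leave two unit voids: efficiency drops to 0.5.
print(packing_efficiency([(0, 0, 1, 1), (1, 1, 1, 1)]))  # 0.5
```

The second case is exactly the kind of "void box" the method targets: cutting and repacking would recover the wasted half of the bounding box.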
Cinematic Narration in VR – Rethinking Film Conventions for 360 Degrees
Virtual Augmented and Mixed Reality: Applications in Health, Cultural Heritage, and Industry
International Conference Virtual Augmented and Mixed Reality (VAMR) <10, 2018, Las Vegas, NV, USA>
The rapid development of VR technology in the past three years has allowed artists, filmmakers, and other media producers to create great experiences in this new medium. Filmmakers, however, face major challenges when it comes to cinematic narration in VR. The old, established rules of filmmaking do not apply to VR films, and important techniques of cinematography and editing must be completely rethought; possibly, a new filmic language will emerge. Yet even though filmmakers are already eagerly experimenting with the new medium, relatively few scientific studies exist on the differences between classical filmmaking and filmmaking in 360 degrees and VR. We therefore present this study on cinematic narration in VR, in which we give a comprehensive overview of techniques and concepts applied in current VR films and games. We place previous research on narration, film, games, and human perception into the context of VR experiences and deduce consequences for cinematic narration in VR. We base our conclusions on an empirical test with 50 participants and an additional online survey. In the empirical study, we showed selected 360-degree videos to a test group while the viewers' behavior and attention were observed and documented. As a result of this paper, we present guidelines that suggest methods of guiding the viewers' attention as well as approaches to cinematography, staging, and editing in VR.
Comparing Visual-Interactive Labeling with Active Learning: An Experimental Study
IEEE Transactions on Visualization and Computer Graphics
Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (in particular active learning) follows a rather model-centered approach, while visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time and quantify the impact on efficiency. We systematically compare the performance of visual-interactive labeling with that of active learning. Our main finding is that visual-interactive labeling can outperform active learning, provided that dimension reduction separates the class distributions well. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling.
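For readers unfamiliar with the model-centered side of this comparison, a common active-learning query strategy is least-confidence uncertainty sampling: query the instance whose predicted top-class probability is lowest. The following is a generic sketch with hypothetical names (`least_confident`, `toy_proba`); the paper's experiment uses its own learners and strategies, which are not reproduced here.

```python
def least_confident(unlabeled, predict_proba):
    """Return the instance whose top class probability is lowest
    (least-confidence uncertainty sampling)."""
    def confidence(x):
        return max(predict_proba(x))
    return min(unlabeled, key=confidence)

# Toy binary model: probability of class 1 grows linearly with x,
# so instances near the implied decision boundary (x = 5) are most uncertain.
def toy_proba(x):
    p = min(max(x / 10.0, 0.0), 1.0)
    return [1.0 - p, p]

pool = [1, 3, 5, 9]
print(least_confident(pool, toy_proba))  # 5, with probabilities [0.5, 0.5]
```

The queried instance is then labeled by the user and added to the training set, and the loop repeats; visual-interactive labeling replaces this query rule with human judgment over a visual encoding of the data.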
CrazyFaces: Unassisted Circumvention of Watchlist Face Identification
IEEE 9th International Conference on Biometrics: Theory, Applications and Systems
IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS) <9, 2018, Redondo Beach, CA, USA>
Once upon a time, there was a blacklisted criminal who usually avoided appearing in public. He was surfing the Web when he noticed what had to be a targeted advertisement announcing a concert of his favorite band. The concert was in a nearby town, and the only way to get there was by train. He was worried, because he had heard in the news about the new face identification system installed at the train station. From his last stay with the police, he remembered that they had taken these special face images with the white background. He thought about what he could do to avoid being identified, and an idea popped into his mind: “What if I can make a crazy-face, as the kids call it, to make my face look different? What exactly do I have to do? And will it work?” He called his geeky childhood friend and asked him whether he could build him a face recognition application to tinker with. The geeky friend was always interested in such small projects where he could use open-source resources and, as usual, didn’t really care about the goal. The criminal tested the application and played around, trying to figure out how he could make a crazy-face that wouldn’t be identified as himself. On the day of the concert, he took off to the train station with some doubt in his mind and fear in his soul. To know what happened next, you should read the rest of this paper.
GPU-based Polynomial Finite Element Matrix Assembly for Simplex Meshes
Computer Graphics Forum
Pacific Conference on Computer Graphics and Applications (PG) <26, 2018, Hong Kong, China>
In this paper, we present a matrix assembly technique for arbitrary polynomial order finite element simulations on simplex meshes for graphics processing units (GPUs). Compared to the current state of the art in GPU-based matrix assembly, we avoid the need for an intermediate sparse matrix and perform assembly directly into the final, GPU-optimized data structure. Thereby, we avoid the resulting 180% to 600% memory overhead, depending on polynomial order, and the associated allocation time, while simplifying the assembly code and using a more compact mesh representation. We compare our method with existing algorithms and demonstrate significant speedups.
Copyright: This is the accepted version of the following article: Mueller‐Roemer, J. S., and A. Stork. "GPU-based Polynomial Finite Element Matrix Assembly for Simplex Meshes." Computer Graphics Forum 37, no. 7 (2018): 443-454, which has been published in final form at http://onlinelibrary.wiley.com. This article may be used for non-commercial purposes in accordance with the Wiley Self-Archiving Policy [http://olabout.wiley.com/WileyCDA/Section/id-820227.html].
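The core idea of skipping the intermediate sparse matrix can be illustrated on the CPU in pure Python: when the sparsity pattern is known up front, element contributions can be scattered directly into the final CSR arrays. Everything below (1D Poisson problem, linear elements, function names) is an illustrative simplification and not the paper's GPU method, which handles arbitrary polynomial order on simplex meshes.

```python
def assemble_1d_stiffness_csr(n_elems, h=1.0):
    """Assemble the 1D linear-FEM stiffness matrix for -u'' on a uniform
    mesh directly into CSR arrays, with no intermediate triplet list."""
    n = n_elems + 1  # number of nodes
    # The tridiagonal sparsity pattern is known a priori, so build
    # (row_ptr, col_idx) first and allocate the value array once.
    row_ptr, col_idx = [0], []
    for i in range(n):
        for j in range(max(0, i - 1), min(n, i + 2)):
            col_idx.append(j)
        row_ptr.append(len(col_idx))
    vals = [0.0] * len(col_idx)

    def add(i, j, v):
        # Scatter directly into CSR: locate column j inside row i's slice.
        k = col_idx.index(j, row_ptr[i], row_ptr[i + 1])
        vals[k] += v

    # Element stiffness for linear elements: (1/h) * [[1, -1], [-1, 1]].
    for e in range(n_elems):
        for a in (0, 1):
            for b in (0, 1):
                add(e + a, e + b, (1.0 if a == b else -1.0) / h)
    return row_ptr, col_idx, vals

# Two elements, three nodes: rows [1,-1,0], [-1,2,-1], [0,-1,1].
print(assemble_1d_stiffness_csr(2))
```

On a GPU the same precomputed-pattern idea applies, but the scatter must additionally be organized to avoid write conflicts between threads.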
Planning Nonlinear Access Paths for Temporal Bone Surgery
International Journal of Computer Assisted Radiology and Surgery
Purpose: Interventions at the otobasis operate in the narrow region of the temporal bone, where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. Methods: We define a specific motion planning problem in SE(3) = R3 × SO(3) with notable constraints on computation time and goal pose that reflect the requirements of temporal bone surgery. We then present k-RRT-Connect: two suitable motion planners based on the bidirectional Rapidly exploring Random Tree (RRT) to solve this problem efficiently. Results: The benefits of k-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bézier splines when applied to this specific problem. Conclusion: With this work, we demonstrate that preoperative and intraoperative planning of nonlinear access paths is possible for minimally invasive surgeries at the otobasis.
UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss
32nd AAAI Conference on Artificial Intelligence
AAAI Conference on Artificial Intelligence <32, 2018, New Orleans, Louisiana, USA>
In the era of end-to-end deep learning, many advances in computer vision are driven by large amounts of labeled data. In the optical flow setting, however, obtaining dense per-pixel ground truth for real scenes is difficult and thus such data is rare. Therefore, recent end-to-end convolutional networks for optical flow rely on synthetic datasets for supervision, but the domain mismatch between training and test scenarios continues to be a challenge. Inspired by classical energy-based optical flow methods, we design an unsupervised loss based on occlusion-aware bidirectional flow estimation and the robust census transform to circumvent the need for ground truth flow. On the KITTI benchmarks, our unsupervised approach outperforms previous unsupervised deep networks by a large margin, and is even more accurate than similar supervised methods trained on synthetic datasets alone. By optionally fine-tuning on the KITTI training data, our method achieves competitive optical flow accuracy on the KITTI 2012 and 2015 benchmarks, thus in addition enabling generic pre-training of supervised networks for datasets with limited amounts of ground truth.
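The census transform underlying the loss encodes each pixel by comparing it with its neighbors, which makes photometric matching robust to illumination changes. Below is a minimal hard-binary 3x3 sketch in pure Python; UnFlow's actual loss uses a soft, differentiable variant of the transform, and the function name here is illustrative.

```python
def census_transform(img, y, x):
    """3x3 census descriptor at (y, x): one bit per neighbor,
    set when the neighbor is darker than the center pixel."""
    center = img[y][x]
    bits = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # the center pixel itself carries no bit
            bits = (bits << 1) | (1 if img[y + dy][x + dx] < center else 0)
    return bits

img = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]
# Neighbors scanned row by row: 10,20,30,40 are darker than the center 50,
# while 60,70,80,90 are not, giving the bit pattern 0b11110000 = 240.
print(census_transform(img, 1, 1))  # 240
```

Because the descriptor depends only on intensity *orderings*, adding a constant brightness offset to the whole image leaves it unchanged, which is exactly why a census-based data term transfers better to real scenes than raw intensity differences.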