The “Selected Readings in Computer Graphics 2020” consist of 40 articles selected from a total of 120 scientific publications.

They were contributed by the Fraunhofer Institute for Computer Graphics Research IGD, with offices in Darmstadt as well as in Rostock, Singapore, and Graz, and by the partner institutes at the respective universities: the Interactive Graphics Systems Group of Technische Universität Darmstadt, the Computer Graphics and Communication Group of the Institute of Computer Science at Rostock University, Nanyang Technological University (NTU), Singapore, and the Visual Computing Cluster of Excellence of Graz University of Technology. These institutions cooperate closely in projects and in research and development in the field of Computer Graphics.

All articles previously appeared in various scientific books, journals, conference proceedings, and workshop proceedings, and are reprinted with the permission of the respective copyright holders. The publications underwent a thorough review process by internationally leading experts and established technical societies. The Selected Readings should therefore give a good and detailed overview of the scientific developments in Computer Graphics in the year 2020. They were compiled by Professor Dieter W. Fellner, director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, who is also a professor at the Department of Computer Science of Technische Universität Darmstadt and at the Faculty of Computer Science of Graz University of Technology.

The Selected Readings in Computer Graphics 2020 touch on aspects and trends in Computer Graphics research and development in the areas of

List of Publications

Schufrin, Marija; Reynolds, Steven Lamarr; Kuijper, Arjan; Kohlhammer, Jörn

A Visualization Interface to Improve the Transparency of Collected Personal Data on the Internet

2021

IEEE Transactions on Visualization and Computer Graphics

Online services are used for all kinds of activities, such as news, entertainment, publishing content, or connecting with others. But information technology enables new threats to privacy by means of global mass surveillance, vast databases, and fast distribution networks. Current news is full of reports about misuse and data leaks. In most cases, users are powerless in such situations and develop an attitude of neglect regarding their online behaviour. On the other hand, the GDPR (General Data Protection Regulation) gives users the right to request a copy of all their personal data stored by a particular service, but the received data is hard for the average internet user to understand or analyze. This paper presents TransparencyVis - a web-based interface to support the visual and interactive exploration of data exports from different online services. With this approach, we aim at increasing the awareness of personal data stored by such online services and of the effects of online behaviour. This design study provides an online accessible prototype and a best practice for unifying data exports from different sources.

Baumgartl, Tom; Petzold, Markus; Wunderlich, Marcel; Höhn, Markus; Archambault, Daniel; Lieser, M.; Dalpke, Alexander; Scheithauer, Simone; Marschollek, Michael; Eichel, V. M.; Mutters, Nico T.; Landesberger, Tatiana von

In Search of Patient Zero: Visual Analytics of Pathogen Transmission Pathways in Hospitals

2021

IEEE Transactions on Visualization and Computer Graphics

Pathogen outbreaks (i.e., outbreaks of bacteria and viruses) in hospitals can cause high mortality rates and increase costs for hospitals significantly. An outbreak is generally noticed when the number of infected patients rises above an endemic level or the usual prevalence of a pathogen in a defined population. Reconstructing transmission pathways back to the source of an outbreak – the patient zero or index patient – requires the analysis of microbiological data and patient contacts. This is often manually completed by infection control experts. We present a novel visual analytics approach to support the analysis of transmission pathways, patient contacts, the progression of the outbreak, and patient timelines during hospitalization. Infection control experts applied our solution to a real outbreak of Klebsiella pneumoniae in a large German hospital. Using our system, our experts were able to scale the analysis of transmission pathways to longer time intervals (i.e., several years of data instead of days) and across a larger number of wards. Also, the system is able to reduce the analysis time from days to hours. In our final study, feedback from twenty-five experts from seven German hospitals provides evidence that our solution brings significant benefits for analyzing outbreaks.
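The search for an index patient described in this abstract can be sketched in a few lines: given time-stamped patient contacts, a candidate "patient zero" is any patient from which a forward-in-time walk over contacts reaches every infected patient. This is a toy illustration under invented data and function names; the paper's system additionally combines microbiological typing with contact data.

```python
from collections import defaultdict

def index_patient_candidates(contacts, infected):
    """Toy patient-zero search: `contacts` is a list of (day, a, b)
    patient contacts; a candidate index patient is one from which
    infection can spread forward in time to all `infected` patients."""
    adj = defaultdict(list)
    for day, a, b in contacts:
        adj[a].append((day, b))
        adj[b].append((day, a))

    def reachable(src):
        # earliest possible infection day per patient, spreading from src
        infected_on = {src: 0}
        frontier = [src]
        while frontier:
            p = frontier.pop()
            for day, q in adj[p]:
                # a contact can transmit only on/after the carrier's
                # own infection day; keep the earliest day found
                if day >= infected_on[p] and day < infected_on.get(q, day + 1):
                    infected_on[q] = day
                    frontier.append(q)
        return set(infected_on)

    return sorted(p for p in adj if infected <= reachable(p))
```

On a chain of contacts A-B (day 1), B-C (day 2), C-D (day 3) with A and D infected, only A and B can have seeded both infections; the real analysis scales this idea to years of hospital data.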

Cao, Min; Chen, Chen; Dou, Hao; Hu, Xiyuan; Peng, Silong; Kuijper, Arjan

Progressive Bilateral-Context Driven Model for Post-Processing Person Re-Identification

2021

IEEE Transactions on Multimedia

Most existing person re-identification methods compute pairwise similarity by extracting robust visual features and learning a discriminative metric. Owing to visual ambiguities, these content-based methods, which determine the pairwise relationship based only on the similarity between two samples, inevitably produce a suboptimal ranking list. Instead, the pairwise similarity can be estimated more accurately along the geodesic path of the underlying data manifold by exploring the rich contextual information of the sample. In this paper, we propose a lightweight post-processing person re-identification method in which the pairwise measure is determined by the relationship between the sample and the counterpart's context in an unsupervised way. We translate the point-to-point comparison into a bilateral point-to-set comparison. The sample's context is composed of its neighbor samples, defined in two different ways: the first-order context and the second-order context, which are used to compute the pairwise similarity in sequence, resulting in a progressive post-processing model. The experiments on four large-scale person re-identification benchmark datasets indicate that (1) the proposed method can consistently achieve higher accuracies by serving as a post-processing procedure after content-based person re-identification methods, achieving state-of-the-art results, and (2) the proposed lightweight method only needs about 6 milliseconds to optimize the ranking results of one sample, demonstrating its high efficiency. Code is available at: https://github.com/123ci/PBCmodel.
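The point-to-set idea in the abstract can be illustrated with a minimal sketch: instead of scoring a query against a gallery sample directly, score it against the sample's first-order context (the sample plus its nearest gallery neighbours). The function name and data layout below are invented for illustration and do not reproduce the paper's progressive model.

```python
def contextual_similarity(sim, q, g, k=2):
    """Point-to-set re-ranking sketch: compare query q to gallery
    sample g's first-order context (g itself plus its k most similar
    gallery samples).  `sim` is a dict of dicts holding precomputed
    pairwise similarities."""
    # g's k nearest gallery neighbours, by descending similarity
    neighbours = sorted((s for s in sim[g] if s != g),
                        key=lambda s: sim[g][s], reverse=True)[:k]
    context = [g] + neighbours
    # average similarity of the query to the whole context set
    return sum(sim[q][s] for s in context) / len(context)
```

A sample that merely looks similar to the query but whose neighbourhood does not is thereby pushed down the ranking, which is the effect the paper exploits.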

Lengauer, Stefan; Komar, Alexander; Labrada, Arniel; Karl, Stephan; Trinkl, Elisabeth; Preiner, Reinhold; Bustos, Benjamin; Schreck, Tobias

A Sketch-aided Retrieval Approach for Incomplete 3D Objects

2020

Computers & Graphics

With the growing amount of digital collections of visual cultural heritage (CH) data being available across different repositories, it becomes increasingly important to provide archaeologists with means to find relations and cross-correspondences between different digital records. In principle, existing shape- and image-based similarity search methods can aid such domain analysis tasks. However, in practice, visual object data are given in different modalities, and often only in an incomplete or fragmented state, posing a particular challenge for conventional similarity search approaches. In this paper we introduce a methodology and system for cross-modal visual search in CH object data that addresses these challenges. Specifically, we propose a new query modality based on 3D views enhanced by user sketches (3D+sketch). This allows for adding new context to the search, which is useful, e.g., for searching based on incomplete query objects, or for testing hypotheses on the existence of certain shapes in a collection. We present an appropriately designed workflow for constructing query views from incomplete 3D objects enhanced by a user sketch, based on shape completion and texture inpainting. Visual cues additionally help users compare retrieved objects with the query. The proposed approach extends a previously presented retrieval system by introducing improved retrieval methods, an extended evaluation including retrieval in a larger and richer data collection, and enhanced interactive search weight specification. We demonstrate the feasibility and potential of our approach to support analysis of domain experts in Archaeology and the field of CH in general.

Mueller-Roemer, Johannes; Stork, André; Fellner, Dieter W.

Analysis of Schedule and Layout Tuning for Sparse Matrices With Compound Entries on GPUs

2020

Computer Graphics Forum

Large sparse matrices with compound entries, i.e. complex and quaternionic matrices as well as matrices with dense blocks, are a core component of many algorithms in geometry processing, physically based animation and other areas of computer graphics. We generalize several matrix layouts and apply joint schedule and layout autotuning to improve the performance of the sparse matrix-vector product on massively parallel graphics processing units. Compared to schedule tuning without layout tuning, we achieve speedups of up to 5.5×. In comparison to cuSPARSE, we achieve speedups of up to 4.7×.
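The "compound entries" mentioned above can be made concrete with a block-CSR layout, where each stored entry is a small dense b×b block (e.g. a complex or quaternionic coefficient). The pure-Python sparse matrix-vector product below is only a sketch of the layout idea, not the tuned GPU kernels or the autotuning the paper studies; all names are illustrative.

```python
def bcsr_spmv(row_ptr, col_idx, blocks, x, b):
    """Sparse matrix-vector product for a block-CSR matrix whose
    entries are dense b x b blocks, each stored row-major as a flat
    list in `blocks`.  One compound entry occupies one block."""
    n_block_rows = len(row_ptr) - 1
    y = [0.0] * (n_block_rows * b)
    for i in range(n_block_rows):
        # entries of block row i live in blocks[row_ptr[i]:row_ptr[i+1]]
        for e in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[e]           # block column index
            blk = blocks[e]          # b*b values, row-major
            for r in range(b):
                acc = 0.0
                for c in range(b):
                    acc += blk[r * b + c] * x[j * b + c]
                y[i * b + r] += acc
    return y
```

On a GPU, the interesting question, which the paper addresses, is how to lay out `blocks` in memory and schedule the loops across threads; the arithmetic itself is exactly this.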

Preiner, Reinhold; Schmidt, Johanna; Krösl, Katharina; Schreck, Tobias; Mistelbauer, Gabriel

Augmenting Node-Link Diagrams with Topographic Attribute Maps

2020

EuroVis 2020. Eurographics / IEEE VGTC Conference on Visualization 2020

Eurographics / IEEE VGTC Conference on Visualization (EuroVis) <22, 2020, online>

We propose a novel visualization technique for graphs that are attributed with scalar data. In many scenarios, these attributes (e.g., birth date in a family network) provide ambient context information for the graph structure, whose consideration is important for different visual graph analysis tasks. Graph attributes are usually conveyed using different visual representations (e.g., color, size, shape) or by reordering the graph structure according to the attribute domain (e.g., timelines). While visual encodings allow graphs to be arranged in a readable layout, assessing contextual information such as the relative similarities of attributes across the graph is often cumbersome. In contrast, attribute-based graph reordering serves the comparison task of attributes, but typically strongly impairs the readability of the structural information given by the graph’s topology. In this work, we augment force-directed node-link diagrams with a continuous ambient representation of the attribute context. This way, we provide a consistent overview of the graph’s topological structure as well as its attributes, supporting a wide range of graph-related analysis tasks. We resort to an intuitive height field metaphor, illustrated by a topographic map rendering using contour lines and suitable color maps. Contour lines visually connect nodes of similar attribute values, and depict their relative arrangement within the global context. Moreover, our contextual representation supports visualizing attribute value ranges associated with graph nodes (e.g., lifespans in a family network) as trajectories routed through this height field. We discuss how user interaction with both the structural and the contextual information fosters exploratory graph analysis tasks. The effectiveness and versatility of our technique are confirmed in a user study and case studies from various application domains.

Kügler, David; Uecker, Marc; Kuijper, Arjan; Mukhopadhyay, Anirban

AutoSNAP: Automatically Learning Neural Architectures for Instrument Pose Estimation

2020

Medical Image Computing and Computer Assisted Intervention - MICCAI 2020

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) <23, 2020, Online>

Lecture Notes in Computer Science (LNCS), 12263

Despite recent successes, the advances in Deep Learning have not yet been fully translated to Computer Assisted Intervention (CAI) problems such as pose estimation of surgical instruments. Currently, neural architectures for classification and segmentation tasks are adopted, ignoring significant discrepancies between CAI and these tasks. We propose an automatic framework (AutoSNAP) for instrument pose estimation problems, which discovers and learns architectures for neural networks. We introduce 1) an efficient testing environment for pose estimation, 2) a powerful architecture representation based on novel Symbolic Neural Architecture Patterns (SNAPs), and 3) an optimization of the architecture using an efficient search scheme. Using AutoSNAP, we discover an improved architecture (SNAPNet) which outperforms both the hand-engineered i3PosNet and the state-of-the-art architecture search method DARTS.

Krämer, Michel

Capability-based Scheduling of Scientific Workflows in the Cloud

2020

Proceedings of the 9th International Conference on Data Science, Technology and Applications

International Conference on Data Science, Technology and Applications (DATA) <9, 2020>

We present a distributed task scheduling algorithm and a software architecture for a system executing scientific workflows in the Cloud. The main challenges we address are (i) capability-based scheduling, which means that individual workflow tasks may require specific capabilities from highly heterogeneous compute machines in the Cloud, (ii) a dynamic environment where resources can be added and removed on demand, (iii) scalability in terms of scientific workflows consisting of hundreds of thousands of tasks, and (iv) fault tolerance because in the Cloud, faults can happen at any time. Our software architecture consists of loosely coupled components communicating with each other through an event bus and a shared database. Workflow graphs are converted to process chains that can be scheduled independently. Our scheduling algorithm collects distinct required capability sets for the process chains, asks the agents which of these sets they can manage, and then assigns process chains accordingly. We present the results of four experiments we conducted to evaluate if our approach meets the aforementioned challenges. We finish the paper with a discussion, conclusions, and future research opportunities. An implementation of our algorithm and software architecture is publicly available with the open-source workflow management system “Steep”.
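The capability-matching step described above, collecting distinct required capability sets, asking agents which sets they can manage, and assigning process chains accordingly, can be sketched as follows. This is a simplified, invented illustration; Steep's actual scheduler additionally handles dynamically joining/leaving agents and fault tolerance.

```python
def assign_process_chains(chains, agents):
    """Capability-based scheduling sketch.  `chains` maps a process
    chain id to its set of required capabilities; `agents` maps an
    agent id to the set of capabilities it offers."""
    # step 1: collect distinct required capability sets
    distinct = {frozenset(req) for req in chains.values()}
    # step 2: ask which agents can manage each distinct set
    can_manage = {caps: [a for a, offered in agents.items()
                         if caps <= offered]
                  for caps in distinct}
    # step 3: assign each chain to a capable agent (None if none exists)
    assignment = {}
    for chain, req in chains.items():
        candidates = can_manage[frozenset(req)]
        assignment[chain] = candidates[0] if candidates else None
    return assignment
```

Grouping chains by distinct capability set (step 1) is what keeps the scheduler scalable: the agents are queried once per set, not once per chain, even with hundreds of thousands of tasks.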

Kraft, Dimitri; Srinivasan, Karthik; Bieber, Gerald

Deep Learning Based Fall Detection Algorithms for Embedded Systems, Smartwatches, and IoT Devices Using Accelerometers

2020

Technologies

A fall of an elderly person often leads to serious injuries or even death. Many falls occur in the home environment and remain unrecognized. Reliable fall detection is therefore absolutely necessary for fast help. Wrist-worn accelerometer-based fall detection systems have been developed, but their accuracy and precision are not standardized, comparable, or sometimes even known. In this work, we present an overview of existing public databases with sensor-based fall datasets and harmonize existing wrist-worn datasets for a broader and more robust evaluation. Furthermore, we analyze the currently achievable recognition rate of fall detection using deep learning algorithms for mobile and embedded systems. The presented results and databases can be used for further research and optimization in order to increase the recognition rate and enhance the independent life of the elderly. Finally, we give an outlook on a convenient application and wrist device.

Dong, Jiangxin; Roth, Stefan; Schiele, Bernt

Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring

2020

Advances in Neural Information Processing Systems

Annual Conference on Neural Information Processing Systems (NeurIPS) <34, 2020, Online>

We present a simple and effective approach for non-blind image deblurring, combining classical techniques and deep learning. In contrast to existing methods that deblur the image directly in the standard image space, we propose to perform an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features. A multi-scale feature refinement module then predicts the deblurred image from the deconvolved deep features, progressively recovering detail and small-scale structures. The proposed model is trained in an end-to-end manner and evaluated on scenarios with both simulated and real-world image blur. Our extensive experimental results show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts. Moreover, our approach quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
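The classical building block named in this abstract, Wiener deconvolution in the frequency domain, is compact enough to show directly. The sketch below applies it to a plain 1D signal rather than to learned deep features as the paper does; the function name and the SNR constant are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=100.0):
    """Classical Wiener deconvolution in the frequency domain:
    X_hat = conj(H) / (|H|^2 + 1/SNR) * Y, where H is the blur
    kernel's spectrum and Y the blurred signal's spectrum."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)
    Y = np.fft.fft(blurred, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft(G * Y))

# blur an impulse with a box kernel (circular convolution), then restore it
x = np.zeros(32)
x[10] = 1.0
k = np.ones(4) / 4.0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, 32)))
restored = wiener_deconvolve(y, k, snr=1e6)
```

The 1/SNR term regularizes frequencies where the kernel's spectrum vanishes, which is exactly where naive inverse filtering amplifies noise; the paper replaces the fixed SNR and image-space signals with learned feature-space quantities.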

Wang, Yu; Yu, Weidong; Liu, Xiuqing; Wang, Chunle; Kuijper, Arjan; Guthe, Stefan

Demonstration and Analysis of an Extended Adaptive General Four-Component Decomposition

2020

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

The overestimation of volume scattering is an essential shortcoming of the model-based polarimetric synthetic aperture radar (PolSAR) target decomposition method. It is likely to affect the measurement accuracy and result in mixed ambiguity of the scattering mechanism. In this paper, an extended adaptive four-component decomposition method (ExAG4UThs) is proposed. First, the orientation angle compensation (OAC) is applied to the coherency matrix and artificial areas are extracted as the basis for selecting the decomposition method. Second, for the decomposition of artificial areas, one of the two complex unitary transformation matrices of the coherency matrix is selected according to the wave anisotropy (Aw). In addition, the branch condition that is used as a criterion for the hierarchical implementation of the decomposition is the ratio of the correlation coefficient (Rcc). Finally, the selected unitary transformation matrix and a discriminative threshold are used to determine the structure of the selected volume scattering models, which adapt more effectively to various scattering mechanisms. In this paper, the performance of the proposed method is evaluated on GaoFen-3 full PolSAR data sets for various time periods and regions. The experimental results demonstrate that the proposed method can effectively represent the scattering characteristics of the ambiguous regions, and that oriented building areas can be well discriminated as dihedral or odd-bounce structures.

Sahu, Manish; Strömsdörfer, Ronja; Mukhopadhyay, Anirban; Zachow, Stefan

Endo-Sim2Real: Consistency Learning-Based Domain Adaptation for Instrument Segmentation

2020

Medical Image Computing and Computer Assisted Intervention - MICCAI 2020

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) <23, 2020, Online>

Lecture Notes in Computer Science (LNCS), 12263

Surgical tool segmentation in endoscopic videos is an important component of computer-assisted intervention systems. The recent success of image-based solutions using fully-supervised deep learning approaches can be attributed to the collection of big labeled datasets. However, the annotation of a big dataset of real videos can be prohibitively expensive and time consuming. Computer simulations could alleviate the manual labeling problem; however, models trained on simulated data do not generalize to real data. This work proposes a consistency-based framework for joint learning of simulated and real (unlabeled) endoscopic data to bridge this generalization gap. Empirical results on two data sets (15 videos of the Cholec80 and EndoVis’15 dataset) highlight the effectiveness of the proposed Endo-Sim2Real method for instrument segmentation. We compare the segmentation of the proposed approach with state-of-the-art solutions and show that our method improves segmentation both in terms of quality and quantity.

Fu, Biying; Jarms, Lennart; Kirchbuchner, Florian; Kuijper, Arjan

ExerTrack - Towards Smart Surfaces to Track Exercises

2020

Technologies

The concept of the quantified self has gained popularity in recent years with the hype of miniaturized gadgets to monitor vital fitness levels. Smartwatches, smartphone apps, and other fitness trackers are overwhelming the market. Most aerobic exercises such as walking, running, or cycling can be accurately recognized using wearable devices. However, whole-body exercises such as push-ups, bridges, and sit-ups are performed on the ground and thus cannot be precisely recognized by wearing only one accelerometer. Thus, a floor-based approach is preferred for recognizing whole-body activities. Computer vision techniques on image data also report high recognition accuracy; however, the presence of a camera tends to raise privacy issues in public areas. Therefore, we focus on combining the advantages of ubiquitous proximity sensing with non-optical sensors to preserve privacy in public areas and maintain low computation cost with a sparse sensor implementation. Our solution is ExerTrack, an off-the-shelf sports mat equipped with eight sparsely distributed capacitive proximity sensors to recognize eight whole-body fitness exercises, with a user-independent recognition accuracy of 93.5% and a user-dependent recognition accuracy of 95.1% based on a test study with 9 participants each performing 2 full sessions. We adopt a template-based approach to count repetitions and reach a user-independent counting accuracy of 93.6%. The final model can run on a Raspberry Pi 3 in real time. This work covers the data processing of our proposed system, model selection to improve the recognition accuracy, and a data augmentation technique to regularize the network.
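The template-based repetition counting mentioned in this abstract can be sketched in a few lines: slide an exercise template over the sensor signal, compute a normalized correlation at each offset, and count separated high-correlation regions. This is a toy illustration with invented names and a hypothetical threshold, not ExerTrack's actual pipeline.

```python
import math

def count_reps(signal, template, threshold=0.9):
    """Count repetitions by counting separated peaks of the
    normalized cross-correlation between signal windows and the
    template."""
    t_mean = sum(template) / len(template)
    t = [v - t_mean for v in template]
    t_norm = math.sqrt(sum(v * v for v in t)) or 1.0
    reps, inside = 0, False
    for i in range(len(signal) - len(template) + 1):
        w = signal[i:i + len(template)]
        w_mean = sum(w) / len(w)
        w0 = [v - w_mean for v in w]
        w_norm = math.sqrt(sum(v * v for v in w0)) or 1.0
        score = sum(a * b for a, b in zip(t, w0)) / (t_norm * w_norm)
        if score > threshold and not inside:
            reps += 1          # entered a new high-correlation region
            inside = True
        elif score <= threshold:
            inside = False     # left the region; ready for the next rep
    return reps
```

The `inside` flag is what prevents one repetition from being counted once per sample while the correlation stays above the threshold.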

Ceneda, Davide; Andrienko, Natalia; Andrienko, Gennady; Gschwandtner, Theresia; Miksch, Silvia; Piccolotto, Nikolaus; Schreck, Tobias; Streit, Marc; Suschnigg, Josef; Tominski, Christian

Guide Me in Analysis: A Framework for Guidance Designers

2020

Computer Graphics Forum

Guidance is an emerging topic in the field of visual analytics. Guidance can support users in pursuing their analytical goals more efficiently and help in making the analysis successful. However, it is not clear how guidance approaches should be designed and what specific factors should be considered for effective support. In this paper, we approach this problem from the perspective of guidance designers. We present a framework comprising requirements and a set of specific phases designers should go through when designing guidance for visual analytics. We relate this process to a set of quality criteria that our framework aims to support and that are necessary for obtaining a suitable and effective guidance solution. To demonstrate the practical usability of our methodology, we apply our framework to the design of guidance in three analysis scenarios and a design walk-through session. Moreover, we list the emerging challenges and report how the framework can be used to design guidance solutions that mitigate these issues.

Zhang, Alex; Chen, Kan; Johan, Henry; Erdt, Marius

High Performance Texture Streaming and Rendering of Large Textured 3D Cities

2020

2020 International Conference on Cyberworlds. Proceedings

International Conference on Cyberworlds (CW) <19, 2020, online>

We introduce a novel, high-performing, bandwidth-aware texture streaming system for progressive texturing of buildings in large 3D cities, with optional texture pre-processing. We seek to maintain high and consistent texture streaming performance across different city datasets, and to address the high memory binding latency in hardware virtual textures. We adopt the sparse partially-resident image to cache mesh textures at runtime and propose to allocate memory persistently, based on mesh visibility weightings and estimated GPU bandwidth. We also retain high quality rendering by minimizing texture pop-ins when transitioning between texture mipmaps. We evaluate our texture streaming system on large city datasets, including a tile-based dataset with 56K large atlases and a dataset containing 5.7M individual textures. Results indicate fast and robust streaming and rendering performance with minimal pop-in artifacts suitable for real-time rendering of large 3D cities.

Liu, Yisi; Lan, Zirui; Tschoerner, Benedikt; Virdi, Satinder Singh; Cui, Jian; Li, Fan; Sourina, Olga; Zhang, Daniel; Chai, David; Müller-Wittig, Wolfgang K.

Human Factors Assessment in VR-based Firefighting Training in Maritime: A Pilot Study

2020

2020 International Conference on Cyberworlds. Proceedings

International Conference on Cyberworlds (CW) <19, 2020, online>

Virtual Reality (VR) has been used for training aircraft pilots, maritime seafarers, operators, etc., as it provides an immersive environment with realistic lifelike quality. We developed and implemented a VR-based Liquefied Natural Gas (LNG) firefighting simulation system with head-mounted displays (HMD) and a novel human factors evaluation that can train and assess both technical and non-technical skills in firefighting scenarios. The proposed human factors evaluation is based on a competence model, and non-technical skills of seafarers such as situation awareness, vigilance, and decision making can be assessed. An experiment was carried out with 6 trainees and 2 trainers using the implemented LNG firefighting simulation system. The results show that the maritime trainees felt the VR scene was realistic, that it evoked emotions (such as fear and stress) during the demanding events similar to those in the real world, and that it kept them attentive during the experience.

Kügler, David; Sehring, Jannik Matthias; Stefanov, Andrei; Stenin, Igor; Kristin, Julia; Klenzner, Thomas; Schipper, Jörg; Mukhopadhyay, Anirban

i3PosNet: Instrument Pose Estimation from X-ray in Temporal Bone Surgery

2020

International Journal of Computer Assisted Radiology and Surgery

International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) <11, 2020, Munich, Germany>

PURPOSE: Accurate estimation of the position and orientation (pose) of surgical instruments is crucial for delicate minimally invasive temporal bone surgery. Current techniques either lack accuracy, suffer from line-of-sight constraints (conventional tracking systems), or expose the patient to prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the instrument with a C-arm at irregular intervals and recover the pose from the image. METHODS: i3PosNet infers the position and orientation of instruments from images using a pose estimation network. The framework considers localized patches and outputs pseudo-landmarks. The pose is reconstructed from the pseudo-landmarks by geometric considerations. RESULTS: We show that i3PosNet reaches errors of [Formula: see text] mm. It outperforms conventional image registration-based approaches, reducing average and maximum errors by at least two thirds. i3PosNet trained on synthetic images generalizes to real X-rays without any further adaptation. CONCLUSION: The translation of deep learning-based methods to surgical applications is difficult, because large representative datasets for training and testing are not available. This work empirically shows sub-millimeter pose estimation trained solely on synthetic training data.
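The "geometric considerations" step, reconstructing a pose from predicted pseudo-landmarks, can be reduced to a minimal two-point case: with a landmark on the instrument tip and one on its tail, the position is the tip and the in-plane orientation is the axis angle. This is an invented simplification; i3PosNet predicts several pseudo-landmarks per patch and recovers more pose parameters.

```python
import math

def pose_from_pseudo_landmarks(tip, tail):
    """Minimal geometric pose recovery: tip and tail are (x, y)
    pseudo-landmarks in image coordinates; returns the instrument
    position (the tip) and its in-plane orientation in degrees."""
    dx, dy = tip[0] - tail[0], tip[1] - tail[1]
    angle = math.degrees(math.atan2(dy, dx))
    return tip, angle
```

Predicting landmarks and solving geometry, instead of regressing pose parameters directly, is what lets the network stay local to small image patches.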

Kloiber, Simon; Settgast, Volker; Schinko, Christoph; Weinzerl, Martin; Fritz, Johannes; Schreck, Tobias; Preiner, Reinhold

Immersive Analysis of User Motion in VR Applications

2020

The Visual Computer

With the rise of virtual reality experiences for applications in entertainment, industry, science and medicine, the evaluation of human motion in immersive environments is becoming more important. By analysing the motion of virtual reality users, design choices and training progress in the virtual environment can be understood and improved. Since the motion is captured in a virtual environment, performing the analysis in the same environment provides a valuable context and guidance for the analysis. We have created a visual analysis system that is designed for immersive visualisation and exploration of human motion data. By combining suitable data mining algorithms with immersive visualisation techniques, we facilitate the reasoning and understanding of the underlying motion. We apply and evaluate this novel approach on a relevant VR application domain to identify and interpret motion patterns in a meaningful way.

Zhou, Wei; Hao, Xingxing; Wang, Kaidi; Zhang, Zhenyang; Yu, Yongxiang; Su, Haonan; Li, Kang; Cao, Xin; Kuijper, Arjan

Improved Estimation of Motion Blur Parameters for Restoration from a Single Image

2020

PLOS ONE

This paper presents an improved method to estimate the blur parameters of a motion deblurring algorithm for single image restoration based on the point spread function (PSF) in the frequency spectrum. We then introduce a modification to the Radon transform in the blur angle estimation scheme with our proposed difference value vs. angle curve. Subsequently, the auto-correlation matrix is employed to estimate the blur angle by measuring the distance between the conjugated-correlated troughs. Finally, we evaluate the accuracy, robustness and time efficiency of our proposed method against existing algorithms on public benchmarks and natural real motion-blurred images. The experimental results demonstrate that the proposed PSF estimation scheme not only obtains higher accuracy for the blur angle and blur length, but also demonstrates stronger robustness and higher time efficiency under different circumstances.
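The core observation behind spectrum-based PSF estimation can be shown in 1D: a box motion blur of length L places zeros in the magnitude spectrum at multiples of N/L, so the blur length can be read off the first spectral zero. The sketch below covers only this length cue under invented names; the paper's 2D method additionally recovers the blur angle (modified Radon transform and auto-correlation), which is omitted here.

```python
import numpy as np

def estimate_blur_length(blurred, eps=1e-8):
    """Estimate the length of a 1D box motion blur from the
    positions of zeros in the blurred signal's magnitude spectrum:
    L = N / (index of the first spectral zero)."""
    spectrum = np.abs(np.fft.fft(blurred))
    spectrum /= spectrum.max()
    zeros = np.where(spectrum < eps)[0]
    first = int(zeros[zeros > 0].min())
    return len(blurred) / first

# blur a random signal with a length-8 box PSF (circular convolution)
rng = np.random.default_rng(0)
x = rng.random(64)
psf = np.ones(8) / 8.0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf, 64)))
```

In 2D the zeros form parallel dark stripes whose orientation encodes the blur angle, which is why angle estimation reduces to detecting line structures in the spectrum.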

Liu, Yisi; Lan, Zirui; Cui, Jian; Sourina, Olga; Müller-Wittig, Wolfgang K.

Inter-subject Transfer Learning for EEG-based Mental Fatigue Recognition

2020

Advanced Engineering Informatics

Mental fatigue is one of the major factors leading to human errors. To avoid failures caused by mental fatigue, researchers are working on ways to detect/monitor fatigue using different types of signals. The electroencephalography (EEG) signal is one of the most popular methods to recognize mental fatigue since it directly measures the neurophysiological activities in the brain. Current EEG-based fatigue recognition algorithms are usually subject-specific, which means a classifier needs to be trained per subject. However, as fatigue may need a relatively long period to induce, collecting training data from each new user could be time-consuming and troublesome. Calibration-free methods are desired but also challenging since significant variability of physiological signals exists among different subjects. In this paper, we propose algorithms using inter-subject transfer learning for EEG-based mental fatigue recognition, which do not need calibration. To explore the influence of the number of EEG channels on the algorithms’ accuracy, we also compared the cases of using one channel only and multiple channels. Random forest was applied to choose the channel that has the most distinguishable features. A public EEG fatigue dataset recorded during driving was used to validate the algorithms. EEG data from 11 subjects were selected from the dataset and leave-one-subject-out cross-validation was employed. The channel from the occipital lobe is selected when only one channel is desired. The proposed transfer learning-based algorithms using Maximum Independence Domain Adaptation (MIDA) achieved an accuracy of 73.01% with all thirty channels, and using Transfer Component Analysis (TCA) achieved 68.00% with the one selected channel.
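The leave-one-subject-out protocol used in this evaluation is worth making explicit, since it is what makes the result calibration-free: each subject serves once as the test set while all remaining subjects form the training set, so the classifier never sees data from the test subject. The sketch below uses an invented toy 1D nearest-centroid model as the plug-in classifier, not MIDA or TCA.

```python
def leave_one_subject_out(data, train_fn, predict_fn):
    """Leave-one-subject-out cross-validation.  `data` maps a subject
    id to a list of (features, label) pairs; returns per-subject
    accuracy when that subject is held out."""
    accuracies = {}
    for held_out in data:
        train = [s for subj, samples in data.items()
                 if subj != held_out for s in samples]
        model = train_fn(train)
        test = data[held_out]
        correct = sum(predict_fn(model, x) == y for x, y in test)
        accuracies[held_out] = correct / len(test)
    return accuracies

# toy 1D nearest-centroid classifier as the plug-in model
def nearest_centroid_train(samples):
    groups = {}
    for x, y in samples:
        groups.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in groups.items()}

def nearest_centroid_predict(model, x):
    return min(model, key=lambda y: abs(model[y] - x))
```

Averaging the per-subject accuracies then yields the kind of cross-subject figure the paper reports (e.g. 73.01% with MIDA).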

Show publication details

Chegini, Mohammad; Bernard, Jürgen; Cui, Jian; Chegini, Fatemeh; Sourin, Alexei; Andrews, Keith; Schreck, Tobias

Interactive Visual Labelling versus Active Learning: an Experimental Comparison

2020

Frontiers of Information Technology & Electronic Engineering

Methods from supervised machine learning allow the classification of new data automatically and are tremendously helpful for data analysis. The quality of supervised machine learning depends not only on the type of algorithm used, but also on the quality of the labelled dataset used to train the classifier. Labelling instances in a training dataset is often done manually, relying on selections and annotations by expert analysts, and is often a tedious and time-consuming process. Active learning algorithms can automatically determine a subset of data instances for which labels would provide useful input to the learning process. Interactive visual labelling techniques are a promising alternative, providing effective visual overviews from which an analyst can simultaneously explore data records and select items to label. By putting the analyst in the loop, higher accuracy can be achieved in the resulting classifier. While initial results of interactive visual labelling techniques are promising in the sense that user labelling can improve supervised learning, many aspects of these techniques are still largely unexplored. This paper presents a study conducted using the mVis tool to compare three interactive visualisations, similarity map, scatterplot matrix (SPLOM), and parallel coordinates, with each other and with active learning for the purpose of labelling a multivariate dataset. The results show that all three interactive visual labelling techniques surpass active learning algorithms in terms of classifier accuracy, and that users subjectively prefer the similarity map over SPLOM and parallel coordinates for labelling. Users also employ different labelling strategies depending on the visualisation used.

Show publication details

Boutros, Fadi; Damer, Naser; Raja, Kiran; Ramachandra, Raghavendra; Kirchbuchner, Florian; Kuijper, Arjan

Iris and Periocular Biometrics for Head Mounted Displays: Segmentation, Recognition, and Synthetic Data Generation

2020

Image and Vision Computing

Augmented and virtual reality deployment is finding increasing use in novel applications. Some of these emerging and foreseen applications allow the users to access sensitive information and functionalities. Head Mounted Displays (HMD) are used to enable such applications, and they typically include eye-facing cameras to facilitate advanced user interaction. Such integrated cameras capture the iris and partial periocular region during interaction. This work investigates the possibility of using the ocular images captured by the integrated cameras of HMD devices for biometric verification, taking into account the expected limited computational power of such devices. Such an approach can allow users to be verified in a manner that does not require any special or explicit user action. In addition to our comprehensive analyses, we present a lightweight yet accurate segmentation solution for the ocular region captured from HMD devices. Further, we benchmark a number of well-established iris and periocular verification methods, along with an in-depth analysis of the impact of iris sample selection and its effect on iris recognition performance for HMD devices. To this end, we also propose and validate an identity-preserving synthetic ocular image generation mechanism that can be used for large-scale data generation for training or attack generation purposes. We establish the realistic image quality of the generated images, with high fidelity and identity-preserving capabilities, by benchmarking them for iris and periocular verification.

Show publication details

Krumb, Henry John; Hofmann, Sofie; Kügler, David; Ghazy, Ahmed; Dorweiler, Bernhard; Bredemann, Judith; Schmitt, Robert; Sakas, Georgios; Mukhopadhyay, Anirban

Leveraging spatial uncertainty for online error compensation in EMT

2020

International Journal of Computer Assisted Radiology and Surgery

PURPOSE: Electromagnetic tracking (EMT) can potentially complement fluoroscopic navigation, reducing radiation exposure in a hybrid setting. Due to the susceptibility to external distortions, systematic error in EMT needs to be compensated algorithmically. Compensation algorithms for EMT in guidewire procedures are only practical in an online setting. METHODS: We collect positional data and train a symmetric artificial neural network (ANN) architecture for compensating navigation error. The results are evaluated in both online and offline scenarios and are compared to polynomial fits. We assess spatial uncertainty of the compensation proposed by the ANN. Simulations based on real data show how this uncertainty measure can be utilized to improve accuracy and limit radiation exposure in hybrid navigation. RESULTS: ANNs compensate unseen distortions by more than 70%, outperforming polynomial regression. Working on known distortions, ANNs outperform polynomials as well. We empirically demonstrate a linear relationship between tracking accuracy and model uncertainty. The effectiveness of hybrid tracking is shown in a simulation experiment. CONCLUSION: ANNs are suitable for EMT error compensation and can generalize across unseen distortions. Model uncertainty needs to be assessed when spatial error compensation algorithms are developed, so that training data collection can be optimized. Finally, we find that error compensation in EMT reduces the need for X-ray images in hybrid navigation.

Show publication details

Fang, Meiling; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan

Micro Stripes Analyses for Iris Presentation Attack Detection

2020

IJCB 2020. IEEE/IARP International Joint Conference on Biometrics

IEEE/IARP International Joint Conference on Biometrics (IJCB) <2020, online>

Iris recognition systems are vulnerable to presentation attacks, such as textured contact lenses or printed images. In this paper, we propose a lightweight framework to detect iris presentation attacks by extracting multiple micro-stripes of expanded normalized iris textures. In this procedure, a standard iris segmentation is modified. For our Presentation Attack Detection (PAD) network to better model the classification problem, the segmented area is processed to provide lower-dimensional input segments and a higher number of learning samples. Our proposed Micro Stripes Analyses (MSA) solution samples the segmented areas as individual stripes. Then, a majority vote over those micro-stripes makes the final classification decision. Experiments are demonstrated on five databases, where two databases (IIITD-WVU and Notre Dame) are from the LivDet-2017 Iris competition. An in-depth experimental evaluation of this framework reveals a superior performance compared with state-of-the-art (SoTA) algorithms. Moreover, our solution minimizes the confusion between textured (attack) and soft (bona fide) contact lens presentations.
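The final decision step, a majority vote over per-stripe classifications, can be sketched in a few lines. This is a generic illustration of the voting scheme, not the paper's PAD network; the stripe classifier is a hypothetical stand-in.

```python
def classify_by_majority_vote(stripes, stripe_classifier):
    """Classify each micro-stripe independently and return the label that
    receives the most votes (ties resolve to the label that first reached
    the maximum count)."""
    votes = {}
    for stripe in stripes:
        label = stripe_classifier(stripe)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

In the paper's setting each "stripe" would be a horizontal band of the normalized iris texture and the per-stripe classifier a CNN; here any callable returning a label works.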

Show publication details

Ströter, Daniel; Mueller-Roemer, Johannes; Stork, André; Fellner, Dieter W.

OLBVH: Octree Linear Bounding Volume Hierarchy for Volumetric Meshes

2020

The Visual Computer

We present a novel bounding volume hierarchy for GPU-accelerated direct volume rendering (DVR) as well as volumetric mesh slicing and inside-outside intersection testing. Our novel octree-based data structure is laid out linearly in memory using space-filling Morton curves. As our new data structure results in tightly fitting bounding volumes, boundary markers can be associated with nodes in the hierarchy. These markers can be used to speed up all three use cases that we examine. In addition, our data structure is memory-efficient, reducing memory consumption by up to 75%. Tree depth and memory consumption can be controlled using a parameterized heuristic during construction. This allows for significantly shorter construction times compared to the state of the art. For GPU-accelerated DVR, we achieve a performance gain of 8.4×–13×. For 3D printing, we present an efficient conservative slicing method that results in a 3×–25× speedup when using our data structure. Furthermore, we improve volumetric mesh intersection testing speed by 5×–52×.
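The linear memory layout rests on 3D Morton (Z-order) codes, which interleave the bits of the three cell coordinates so that sorting cells by code places spatially adjacent octree cells near each other in memory. The following is a standard bit-dilation sketch (10 bits per axis, as commonly used for 30-bit codes), not code from the paper:

```python
def part1by2(n):
    """Spread the lowest 10 bits of n so that two zero bits separate each
    original bit (classic magic-number bit dilation)."""
    n &= 0x000003FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3d(x, y, z):
    """Interleave three 10-bit coordinates into one 30-bit Morton code:
    bit i of x, y and z lands at bit 3i, 3i+1 and 3i+2 respectively."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```

Sorting octree nodes by `morton3d` of their cell coordinates yields the linear order along the Z-order curve that the data structure exploits.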

Show publication details

Berndt, René; Tuemmler, Carl; Kehl, Christian; Aehnelt, Mario; Grasser, Tim; Franek, Andreas; Ullrich, Torsten

Open Problems in 3D Model and Data Management

2020

Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP) <15, 2020, Valetta, Malta>

In interdisciplinary, cooperative projects that involve different representations of 3D models (such as CAD data and simulation data), a version problem can occur: different representations and parts have to be merged to form a holistic view of all relevant aspects. The individual partial models may be exported by and modified in different software environments. These modifications are a recurring activity and may be carried out again and again during the progress of the project. This position paper investigates the version problem; furthermore, this contribution is intended to stimulate discussion on how the problem can be solved.

Show publication details

Terhörst, Philipp; Riehl, Kevin; Damer, Naser; Rot, Peter; Bortolato, Blaz; Kirchbuchner, Florian; Struc, Vitomir; Kuijper, Arjan

PE-MIU: A Training-Free Privacy-Enhancing Face Recognition Approach Based on Minimum Information Units

2020

IEEE Access

Research on soft biometrics showed that privacy-sensitive information can be deduced from biometric data. Utilizing biometric templates only, information about a person's gender, age, ethnicity, sexual orientation, and health state can be deduced. For many applications, these templates are expected to be used for recognition purposes only. Thus, extracting this information raises major privacy issues. Previous work proposed two kinds of learning-based solutions for this problem. The first provide strong privacy enhancements, but are limited to pre-defined attributes. The second achieve more comprehensive but weaker privacy improvements. In this work, we propose a Privacy-Enhancing face recognition approach based on Minimum Information Units (PE-MIU). PE-MIU, as we demonstrate in this work, is a privacy-enhancement approach for face recognition templates that achieves strong privacy improvements and is not limited to pre-defined attributes. We exploit the structural differences between face recognition and facial attribute estimation by creating templates in a mixed representation of minimal information units. These representations contain patterns of privacy-sensitive attributes in a highly randomized form. Therefore, the estimation of these attributes becomes hard for function creep attacks. During verification, the units of a probe template are assigned to the units of a reference template by solving an optimal best-matching problem. This allows our approach to maintain a high recognition ability. The experiments are conducted on three publicly available datasets and with five state-of-the-art approaches. Moreover, we conduct experiments simulating an attacker that knows and adapts to the system's privacy mechanism. The experiments demonstrate that PE-MIU is able to suppress privacy-sensitive information to a significantly higher degree than previous work in all investigated scenarios. At the same time, our solution is able to achieve a verification performance close to that of the unmodified recognition system. Unlike previous works, our approach offers a strong and comprehensive privacy enhancement without the need for training.

Show publication details

Terhörst, Philipp; Kolf, Jan Niklas; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan

Post-comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization

2020

Pattern Recognition Letters

Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Consequently, an easily integrable solution is needed to reduce the discriminatory effect of these biased systems. Previous work mainly focused on learning less biased face representations, which comes at the cost of a strongly degraded overall recognition performance. In this work, we propose a novel unsupervised fair score normalization approach that is specifically designed to reduce the effect of bias in face recognition and subsequently lead to a significant overall performance boost. Our hypothesis is built on the notion of individual fairness, designing a normalization approach that leads to treating “similar” individuals “similarly”. Experiments were conducted on three publicly available datasets captured under controlled and in-the-wild circumstances. Results demonstrate that our solution reduces demographic biases, e.g. by up to 82.7% in the case when gender is considered. Moreover, it mitigates bias more consistently than existing works. In contrast to previous works, our fair normalization approach enhances the overall performance by up to 53.2% at a false match rate of 10⁻³ and up to 82.9% at a false match rate of 10⁻⁵. Additionally, it is easily integrable into existing recognition systems and not limited to face biometrics.
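One way to read the individual-fairness idea is as a per-group alignment of decision thresholds. The sketch below illustrates that general principle, not the paper's method: for each demographic group it computes the additive score shift that moves the group's threshold at a target false-match rate (FMR) onto the pooled global threshold, so that a single operating point treats all groups alike. The function name and the empirical-quantile threshold rule are assumptions for illustration.

```python
def fair_score_shifts(impostor_scores_by_group, fmr=0.001):
    """Return an additive comparison-score shift per group such that,
    after shifting, every group reaches the target false-match rate at
    the same global decision threshold."""
    def threshold_at(scores, fmr):
        # Empirical threshold: at most `fmr` of impostor scores exceed it.
        ranked = sorted(scores, reverse=True)
        k = max(0, min(len(ranked) - 1, int(fmr * len(ranked))))
        return ranked[k]
    pooled = [s for scores in impostor_scores_by_group.values() for s in scores]
    global_thr = threshold_at(pooled, fmr)
    return {group: global_thr - threshold_at(scores, fmr)
            for group, scores in impostor_scores_by_group.items()}
```

A group whose impostor scores run systematically high receives a negative shift, pulling it onto the shared operating point instead of forcing a group-specific threshold.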

Show publication details

Fauser, Johannes; Bohlender, Simon Peter; Stenin, Igor; Kristin, Julia; Klenzner, Thomas; Schipper, Jörg; Mukhopadhyay, Anirban

Retrospective in Silico Evaluation of Optimized Preoperative Planning for Temporal Bone Surgery

2020

International Journal of Computer Assisted Radiology and Surgery

Purpose: Robot-assisted surgery at the temporal bone utilizing a flexible drilling unit would allow safer access to clinical targets such as the cochlea or the internal auditory canal by navigating along nonlinear trajectories. One key sub-step for clinical realization of such a procedure is automated preoperative surgical planning that incorporates both segmentation of risk structures and optimized trajectory planning. Methods: We automatically segment risk structures using 3D U-Nets with probabilistic active shape models. For nonlinear trajectory planning, we adapt bidirectional rapidly exploring random trees on Bézier Splines followed by sequential convex optimization. Functional evaluation, assessing segmentation quality based on the subsequent trajectory planning step, shows the suitability of our novel segmentation approach for this two-step preoperative pipeline. Results: Based on 24 data sets of the temporal bone, we perform a functional evaluation of preoperative surgical planning. Our experiments show that the automated segmentation provides safe and coherent surface models that can be used in collision detection during motion planning. The source code of the algorithms will be made publicly available. Conclusion: Optimized trajectory planning based on shape-regularized segmentation leads to safe access canals for temporal bone surgery. Functional evaluation shows promising results for both 3D U-Net and Bézier Spline trajectories.

Show publication details

Schoosleitner, Michael; Ullrich, Torsten

Scene Understanding and 3D Imagination: A Comparison between Machine Learning and Human Cognition

2020

Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP) <15, 2020, Valetta, Malta>

Spatial perception and three-dimensional imagination are important characteristics for many construction tasks in civil engineering. In order to support people in these tasks, worldwide research is being carried out on assistance systems based on machine learning and augmented reality. In this paper, we examine the machine learning component and compare it to human performance. The test scenario is to recognize a partly-assembled model, identify its current status, i.e. the current instruction step, and to return the next step. Thus, we created a database of 2D images containing the complete set of instruction steps of the corresponding 3D model. Afterwards, we trained the deep neural network RotationNet with these images. Usually, the machine learning approaches are compared to each other; our contribution evaluates the machine learning results with human performance tested in a survey: in a clean-room setting the survey and RotationNet results are comparable and neither is significantly better. The real-world results show that the machine learning approaches need further improvements.

Show publication details

Bülow, Maximilian von; Tausch, Reimar; Knauthe, Volker; Wirth, Tristan; Guthe, Stefan; Santos, Pedro; Fellner, Dieter W.

Segmentation-Based Near-Lossless Compression of Multi-View Cultural Heritage Image Data

2020

GCH 2020

Eurographics Workshop on Graphics and Cultural Heritage (GCH) <18, 2020, online>

Cultural heritage preservation using photometric approaches has gained increasing significance in recent years. These datasets are usually captured with high-end cameras at maximum image resolution, enabling high-quality reconstruction results but leading to immense storage consumption. In order to maintain archives of these datasets, compression is mandatory for storing them at reasonable cost. In this paper, we make use of the mostly static background of the capturing environment, which does not directly contribute information to 3D reconstruction algorithms and therefore may be approximated using lossy techniques. We use a near-lossless image compression algorithm based on superpixel and figure-ground segmentation that transparently decides whether regions are relevant for later photometric reconstructions. This ensures that the actual artifact and structured background parts are compressed with lossless techniques. Our algorithm achieves compression rates compared to the PNG image compression standard ranging from 1:2 to 1:4, depending on the artifact size.

Show publication details

Fu, Biying; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan

Sensing Technology for Human Activity Recognition: a Comprehensive Survey

2020

IEEE Access

Sensors are devices that quantify physical aspects of the world around us. This ability is important for gaining knowledge about human activities. Human activity recognition plays an important role in people's everyday lives. In order to solve many human-centered problems, such as health care and individual assistance, the need to infer various simple to complex human activities is prominent. Therefore, having a well-defined categorization of sensing technology is essential for the systematic design of human activity recognition systems. By extending the sensor categorization proposed by White, we survey the most prominent research works that utilize different sensing technologies for human activity recognition tasks. To the best of our knowledge, there is no thorough sensor-driven survey that considers all sensor categories in the domain of human activity recognition with respect to the sampled physical properties, including a detailed comparison across sensor categories. Thus, our contribution is to close this gap by providing an insight into the state-of-the-art developments. We identify the limitations with respect to the hardware and software characteristics of each sensor category and draw comparisons based on benchmark features retrieved from the research works introduced in this survey. Finally, we conclude with general remarks and provide future research directions for human activity recognition within the presented sensor categorization.

Show publication details

Terhörst, Philipp; Kolf, Jan Niklas; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan

SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness

2020

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) <2020, online>

Face image quality is an important factor in enabling high-performance face recognition systems. Face quality assessment aims at estimating the suitability of a face image for the purpose of recognition. Previous work proposed supervised solutions that require artificially or human-labelled quality values. However, both labelling mechanisms are error-prone, as they do not rely on a clear definition of quality and may not know the best characteristics for the utilized face recognition system. Avoiding the use of inaccurate quality labels, we propose a novel concept to measure face quality based on an arbitrary face recognition model. By determining the embedding variations generated from random subnetworks of a face model, the robustness of a sample representation, and thus its quality, is estimated. The experiments are conducted in a cross-database evaluation setting on three publicly available databases. We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry. The results show that our unsupervised solution outperforms all other approaches in the majority of the investigated scenarios. In contrast to previous works, the proposed solution shows a stable performance over all scenarios. Utilizing the deployed face recognition model for our face quality assessment methodology avoids the training phase completely and further outperforms all baseline approaches by a large margin. Our solution can be easily integrated into current face recognition systems, and can be adapted to tasks beyond face recognition.
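The core idea, quality as the robustness of stochastic embeddings, can be sketched in miniature. Instead of dropout inside a real network, this toy version applies random dropout masks directly to an embedding vector and maps the mean pairwise Euclidean distance between the resulting variants through a sigmoid: stable embeddings give small distances and hence high quality. Purely illustrative; `serfiq_quality` and its parameters are assumptions, not the paper's code.

```python
import math
import random

def serfiq_quality(embed_fn, image, n_subnetworks=10, drop_rate=0.5, seed=0):
    """Approximate a SER-FIQ-style quality score: generate several
    stochastic variants of the embedding via random dropout masks, then
    score robustness as a sigmoid of the negative mean pairwise distance."""
    rng = random.Random(seed)
    base = embed_fn(image)
    variants = []
    for _ in range(n_subnetworks):
        # Dropout with inverse scaling, emulating a random subnetwork.
        mask = [0.0 if rng.random() < drop_rate else 1.0 / (1.0 - drop_rate)
                for _ in base]
        variants.append([b * m for b, m in zip(base, mask)])
    dists = []
    for i in range(len(variants)):
        for j in range(i + 1, len(variants)):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(variants[i], variants[j])))
            dists.append(d)
    mean_d = sum(dists) / len(dists)
    return 2.0 / (1.0 + math.exp(mean_d))  # in (0, 1]; higher = more robust
```

In the actual method the variants come from different dropout patterns applied inside the recognition network itself, so no extra training or quality labels are needed.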

Show publication details

Araslanov, Nikita; Roth, Stefan

Single-Stage Semantic Segmentation From Image Labels

2020

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) <2020, online>

Recent years have seen a rapid growth in new approaches improving the accuracy of semantic segmentation in a weakly supervised setting, i.e. with only image-level labels available for training. However, this has come at the cost of increased model complexity and sophisticated multi-stage training procedures. This is in contrast to earlier work that used only a single stage -- training one segmentation network on image labels -- which was abandoned due to inferior segmentation accuracy. In this work, we first define three desirable properties of a weakly supervised method: local consistency, semantic fidelity, and completeness. Using these properties as guidelines, we then develop a segmentation-based network model and a self-supervised training scheme to train for semantic masks from image-level annotations in a single stage. We show that despite its simplicity, our method achieves results that are competitive with significantly more complex pipelines, substantially outperforming earlier single-stage methods.

978-1-7281-7168-5

Show publication details

Rojtberg, Pavel; Pöllabauer, Thomas Jürgen; Kuijper, Arjan

Style-transfer GANs for Bridging the Domain Gap in Synthetic Pose Estimator Training

2020

2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). Proceedings

IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) <2020, online>

Given the dependency of current CNN architectures on a large training set, the possibility of using synthetic data is alluring, as it allows generating a virtually infinite amount of labeled training data. However, producing such data is a nontrivial task, as current CNN architectures are sensitive to the domain gap between real and synthetic data. We propose to adopt general-purpose GAN models for pixel-level image translation, allowing us to formulate the domain gap itself as a learning problem. The obtained models are then used either during training or inference to bridge the domain gap. Here, we focus on training the single-stage YOLO6D [20] object pose estimator on synthetic CAD geometry only, where not even approximate surface information is available. When employing paired GAN models, we use an edge-based intermediate domain and introduce different mappings to represent the unknown surface properties. Our evaluation shows a considerable improvement in model performance when compared to a model trained with the same degree of domain randomization, while requiring only very little additional effort.

Show publication details

Damer, Naser; Grebe, Jonas Henry; Chen, Cong; Boutros, Fadi; Kirchbuchner, Florian; Kuijper, Arjan

The Effect of Wearing a Mask on Face Recognition Performance: an Exploratory Study

2020

BIOSIG 2020

Conference on Biometrics and Electronic Signatures (BIOSIG) <19, 2020, Online>

GI-Edition - Lecture Notes in Informatics (LNI), P-306

Face recognition has become essential in our daily lives as a convenient and contactless method of accurate identity verification. Processes such as identity verification at automatic border control gates or the secure login to electronic devices are increasingly dependent on such technologies. The recent COVID-19 pandemic has increased the value of hygienic and contactless identity verification. However, the pandemic led to the wide use of face masks, essential to keep the pandemic under control. The effect of wearing a mask on face recognition in a collaborative environment is a currently sensitive yet understudied issue. We address that by presenting a specifically collected database containing three sessions, each with three different capture instructions, to simulate realistic use cases. We further study the effect of masked face probes on the behaviour of three top-performing face recognition systems: two academic solutions and one commercial off-the-shelf (COTS) system.

Show publication details

Tödtmann, Helmut; Vahl, Matthias; Lukas, Uwe von; Ullrich, Torsten

Time-unfolding Object Existence Detection in Low-quality Underwater Videos using Convolutional Neural Networks

2020

Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP) <15, 2020, Valetta, Malta>

Monitoring the environment for early recognition of changes is necessary for assessing the success of renaturation measures on a factual basis. It is also used in fisheries and livestock production for monitoring and quality assurance. The goal of the presented system is to count sea trouts annually over the course of several months. Sea trouts are detected with underwater camera systems triggered by motion sensors. Such a scenario generates many videos that have to be evaluated manually. This article describes the techniques used to automate the image evaluation process. An effective method has been developed to classify videos and determine the times of occurrence of sea trouts, while significantly reducing the annotation effort. A convolutional neural network has been trained via supervised learning. The underlying images are frame compositions automatically extracted from videos in which sea trouts are to be detected. The accuracy of the resulting detection system reaches values of up to 97.7%.

Show publication details

Haescher, Marian; Höpfner, Florian; Chodan, Wencke; Kraft, Dimitri; Aehnelt, Mario; Urban, Bodo

Transforming Seismocardiograms Into Electrocardiograms by Applying Convolutional Autoencoders

2020

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings

International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020) <45, 2020, online>

Electrocardiograms constitute the key diagnostic tool for cardiologists. While their diagnostic value is as yet unparalleled, electrode placement is prone to errors, and sticky electrodes pose a risk of skin irritation and may detach in long-term measurements. Heart.AI presents a fundamentally new approach, transforming motion-based seismocardiograms into electrocardiograms interpretable by cardiologists. Measurements are conducted simply by placing a sensor on the user's chest. To generate the transformation model, we trained a convolutional autoencoder with the publicly available CEBS dataset. The transformed ECG strongly correlates with the ground truth (r=.94, p<.01), and important features (number of R-peaks, QRS-complex durations) are modeled realistically (Bland-Altman analyses, p>0.12). On a 5-point Likert scale, 15 cardiologists rated the morphological and rhythmological validity as high (4.63/5 and 4.8/5, respectively). Our electrodeless approach solves crucial problems of ECG measurements while being scalable, accessible and inexpensive. It contributes to telemedicine, especially in low-income and rural regions worldwide.

Show publication details

Müller, Martin; Petzold, Markus; Wunderlich, Marcel; Baumgartl, Tom; Höhn, Markus; Eichel, Vanessa; Mutters, Nico T.

Visual Analysis for Hospital Infection Control using a RNN Model

2020

EuroVA 2020

International EuroVis Workshop on Visual Analytics (EuroVA) <2020, Norrköping, Sweden>

Bacteria and viruses are transmitted among patients in the hospital. Infection control experts develop strategies for infection control. Currently, this is done mostly manually, which is time-consuming and error-prone. Visual analysis approaches mainly focus on disease spread at the population level. We learn an RNN model for the detection of potential infections, transmissions and infection factors. We present a novel interactive visual interface to explore the model results. Together with infection control experts, we apply our approach to real hospital data. The experts could identify factors for infections and derive infection control measures.

Show publication details

Oyarzun Laura, Cristina; Hartwig, Katrin; Hertlein, Anna-Sophia; Jung, Florian; Burmeister, Jan; Kohlhammer, Jörn; Wesarg, Stefan; Sauter, Guido

Web-based Prostate Visualization Tool

2020

Proceedings of the 2020 Annual Meeting of the German Society of Biomedical Engineering

Jahrestagung der Deutschen Gesellschaft für Biomedizinische Technik im VDE (BMT) <54, 2020, online>

Current Directions in Biomedical Engineering

Proper treatment of prostate cancer is essential to increase the survival chance. In this sense, numerous studies show how important the communication between all stakeholders in the clinic is. This communication is difficult because of the lack of conventions when referring to the location where a biopsy for diagnosis was taken. This becomes even more challenging taking into account that experts from different fields work on the data and have different requirements. In this paper a web-based communication tool is proposed that incorporates a visualization of the prostate divided into 27 segments according to the PI-RADS protocol. The tool provides two working modes that consider the requirements of radiologists and pathologists while keeping the visualization consistent. The tool comprises all relevant information given by pathologists and radiologists, such as severity grades of the disease or tumor length. Everything is visualized using a colour code for better understanding.