The »Selected Readings in Computer Graphics 2018« comprise 40 selected articles out of a total of 124 scientific publications.

The contributions come from the Fraunhofer Institute for Computer Graphics Research IGD, with sites in Darmstadt as well as in Rostock, Singapore, and Graz, and from its partner institutes at the respective universities: the Interactive Graphics Systems group at Technische Universität Darmstadt, the Computer Graphics and Communication group at the Institute of Computer Science of the University of Rostock, Nanyang Technological University (NTU) in Singapore, and the Visual Computing excellence cluster at Technische Universität Graz. They all cooperate closely in projects as well as in research and development in the field of computer graphics.

All articles previously appeared in various scientific books, journals, conference proceedings, and workshops. The publications had to pass a thorough review process conducted by internationally leading experts and established technical societies. The Selected Readings therefore provide a rather good and detailed overview of the scientific developments in computer graphics in the year 2018. They are compiled by Professor Dieter W. Fellner, director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt. He is also a professor in the Department of Computer Science at Technische Universität Darmstadt and a professor at the Faculty of Computer Science at Technische Universität Graz.

The Selected Readings in Computer Graphics 2018 address aspects and trends of research and development in the various fields of computer graphics.

List of Publications


Hiemenz, Benedikt; Krämer, Michel

Dynamic Searchable Symmetric Encryption for Storing Geospatial Data in the Cloud

2019

International Journal of Information Security

We present a dynamic searchable symmetric encryption scheme allowing users to securely store geospatial data in the cloud. Geospatial data sets often contain sensitive information, for example, about urban infrastructures. Since clouds are usually provided by third parties, these data need to be protected. Our approach allows users to encrypt their data in the cloud and make them searchable at the same time. It does not require an initialization phase, which enables users to dynamically add new data and remove existing records. We design multiple protocols differing in their level of security and performance. All of them support queries containing boolean expressions, as well as geospatial queries based on bounding boxes, for example. Our findings indicate that although the search in encrypted data requires more runtime than in unencrypted data, our approach is still suitable for real-world applications. We focus on geospatial data storage, but our approach can also be applied to applications from other areas dealing with keyword-based searches in encrypted data. We conclude the paper with a discussion on the benefits and drawbacks of our approach.
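
As a loose illustration of the kind of scheme described above, the following Python sketch shows a keyword-searchable encrypted index; it is not the paper's protocol. The class name, the HMAC-based trapdoors, and the grid-cell keywords used to emulate bounding-box queries are assumptions made for this example, and record payloads are assumed to be encrypted separately.

```python
# Minimal sketch of a dynamic searchable keyword index (illustrative, not the
# paper's protocol): keywords are mapped to deterministic HMAC tokens so the
# server can match queries without seeing plaintext keywords.
import hmac, hashlib

class EncryptedKeywordIndex:
    def __init__(self, key: bytes):
        self._key = key
        self._index = {}          # token -> set of record ids (server-side view)

    def _token(self, keyword: str) -> bytes:
        # Deterministic trapdoor for a keyword; the server only ever sees this.
        return hmac.new(self._key, keyword.lower().encode(), hashlib.sha256).digest()

    def add(self, record_id: str, keywords: list[str]) -> None:
        for kw in keywords:
            self._index.setdefault(self._token(kw), set()).add(record_id)

    def remove(self, record_id: str) -> None:
        for ids in self._index.values():
            ids.discard(record_id)

    def search(self, keyword: str) -> set[str]:
        return set(self._index.get(self._token(keyword), set()))

# Usage: geospatial records could additionally be indexed under grid-cell
# keywords (hypothetical names like "cell_12_7"), so that a bounding-box query
# becomes a union of keyword searches over the covered cells.
idx = EncryptedKeywordIndex(key=b"\x00" * 32)
idx.add("rec1", ["bridge", "cell_12_7"])
idx.add("rec2", ["tunnel", "cell_12_8"])
print(idx.search("bridge"))   # {'rec1'}
```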


Zhou, Wei; Ma, Caiwen; Yao, Tong; Chang, Peng; Zhang, Qi; Kuijper, Arjan

Histograms of Gaussian Normal Distribution for 3D Feature Matching in Cluttered Scenes

2019

The Visual Computer

3D feature descriptors provide essential information to find given models in captured scenes. In practical applications, these scenes often contain clutter. This imposes severe challenges on 3D object recognition, leading to feature mismatches between scenes and models. As such errors are not fully addressed by the existing methods, 3D feature matching still remains a largely unsolved problem. We therefore propose our Histograms of Gaussian Normal Distribution (HGND) for capturing salient feature information on a local reference frame (LRF) that enables us to solve this problem. We define an LRF on each local surface patch by using the eigenvectors of the scatter matrix. Different from the traditional local LRF-based methods, our HGND descriptor is based on the combination of geometrical and spatial information without calculating the distribution of every point and its geometrical information in a local domain. This makes it both simple and efficient. We encode the HGND descriptors in a histogram by the geometrical projected distribution of the normal vectors. These vectors are based on the spatial distribution of the points. We use three public benchmarks, the Bologna, the UWA and the Ca’ Foscari Venezia dataset, to evaluate the speed, robustness, and descriptiveness of our approach. Our experiments demonstrate that the HGND is fast and obtains a more reliable matching rate than state-of-the-art approaches in cluttered situations.
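
The construction of a local reference frame from the scatter matrix, as mentioned above, can be sketched in a few lines of numpy. This is an illustrative approximation, not the authors' exact procedure; in particular, the sign disambiguation and the planar test patch are assumptions.

```python
# Illustrative sketch: build a local reference frame (LRF) for a surface patch
# from the eigenvectors of its scatter matrix; neighbor normals would then be
# expressed in this frame before histogramming.
import numpy as np

def local_reference_frame(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) coordinates of a local surface patch."""
    centered = points - points.mean(axis=0)
    scatter = centered.T @ centered                 # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)      # ascending eigenvalues
    frame = eigvecs[:, ::-1]                        # columns: x, y, z (z = least spread)
    # Resolve the sign ambiguity so that z points towards +Z (assumed viewpoint).
    if frame[2, 2] < 0:
        frame[:, 2] *= -1
    frame[:, 0] = np.cross(frame[:, 1], frame[:, 2])  # keep the frame right-handed
    return frame                                      # columns are the LRF axes

patch = np.random.rand(100, 3) * [1.0, 1.0, 0.05]     # a roughly planar patch
print(local_reference_frame(patch).round(2))
```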


Bernard, Jürgen; Sessler, David; Kohlhammer, Jörn; Ruddle, Roy A.

Using Dashboard Networks to Visualize Multiple Patient Histories: A Design Study on Post-operative Prostate Cancer

2019

IEEE Transactions on Visualization and Computer Graphics

In this design study, we present a visualization technique that segments patients' histories instead of treating them as raw event sequences, aggregates the segments using criteria such as the whole history or treatment combinations, and then visualizes the aggregated segments as static dashboards that are arranged in a dashboard network to show longitudinal changes. The static dashboards were developed in nine iterations, to show 15 important attributes from the patients' histories. The final design was evaluated with five non-experts, five visualization experts and four medical experts, who successfully used it to gain an overview of a 2,000 patient dataset, and to make observations about longitudinal changes and differences between two cohorts. The research represents a step-change in the detail of large-scale data that may be successfully visualized using dashboards, and provides guidance about how the approach may be generalized.


Brunton, Alan; Arikan, Can Ates; Tanksale, Tejas Madan; Urban, Philipp

3D Printing Spatially Varying Color and Translucency

2018

ACM Transactions on Graphics

We present an efficient and scalable pipeline for fabricating full-colored objects with spatially-varying translucency from practical and accessible input data via multi-material 3D printing. Observing that the costs associated with BSSRDF measurement and processing are high, the range of 3D printable BSSRDFs are severely limited, and that the human visual system relies only on simple high-level cues to perceive translucency, we propose a method based on reproducing perceptual translucency cues. The input to our pipeline is an RGBA signal defined on the surface of an object, making our approach accessible and practical for designers. We propose a framework for extending standard color management and profiling to combined color and translucency management using a gamut correspondence strategy we call opaque relative processing. We present an efficient streaming method to compute voxel-level material arrangements, achieving both realistic reproduction of measured translucent materials and artistic effects involving multiple fully or partially transparent geometries.


Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan

A Light and Faster Regional Convolutional Neural Network for Object Detection in Optical Remote Sensing Images

2018

ISPRS Journal of Photogrammetry and Remote Sensing

Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Regional CNN framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation, and various combinations of improvement schemes to enhance the structure of the base VGG16-Net for improving the precision. We propose an approach to reduce the test-time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.


Ma, Jingting; Lin, Feng; Wesarg, Stefan; Erdt, Marius

A Novel Bayesian Model Incorporating Deep Neural Network and Statistical Shape Model for Pancreas Segmentation

2018

Medical Image Computing and Computer Assisted Intervention – MICCAI 2018: Part IV

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) <21, 2018, Granada, Spain>

Lecture Notes in Computer Science (LNCS), 11073

Deep neural networks have achieved significant success in medical image segmentation in recent years. However, poor contrast to surrounding tissues and the high flexibility of the anatomical structure of the object of interest are still challenges. On the other hand, statistical shape model-based approaches have demonstrated promising performance on exploiting complex shape variabilities, but they are sensitive to localization and initialization. This motivates us to leverage the rich shape priors learned from statistical shape models to improve the segmentation of deep neural networks. In this work, we propose a novel Bayesian model incorporating the segmentation results from both the deep neural network and the statistical shape model. In evaluation, experiments are performed on 82 CT datasets of the challenging public NIH pancreas dataset. We report a mean DSC of 85.32%, which outperforms the state of the art and represents an improvement of approximately 12% over the segmentation predicted by the deep neural network alone.
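
For reference, the Dice similarity coefficient (DSC) used as the accuracy measure above can be computed as follows; this is a generic sketch on binary masks, not code from the paper.

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(round(dice(a, b), 3))   # 0.8
```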

978-3-030-00936-6


Wientapper, Folker; Schmitt, Michael; Fraissinet-Tachet, Matthieu; Kuijper, Arjan

A Universal, Closed-form Approach for Absolute Pose Problems

2018

Computer Vision and Image Understanding

We propose a general approach for absolute pose problems including the well-known perspective-n-point (PnP) problem, its generalized variant (GPnP) with and without scale, and the pose from 2D line correspondences (PnL). These have received tremendous attention in the computer vision community during the last decades. However, it was only recently that efficient, globally optimal, closed-form solutions have been proposed, which can handle arbitrary numbers of correspondences including minimal configurations as well as over-constrained cases with linear complexity. We follow the general scheme by eliminating the linear parameters first, which results in a least squares error function that only depends on the non-linear rotation and a small symmetric coefficient matrix of fixed size. Then, in a second step the rotation is solved with algorithms which are derived using methods from algebraic geometry such as the Gröbner basis method. We propose a unified formulation based on a representation with orthogonal complements which allows us to combine different types of constraints elegantly in one single framework. We show that with our unified formulation existing polynomial solvers can be interchangeably applied to problem instances other than those they were originally proposed for. It becomes possible to compare them on various registration problems with respect to accuracy, numerical stability, and computational speed. Our compression procedure not only preserves linear complexity, it is even faster than previous formulations. For the second step we also derive our own algebraic equation solver, which can additionally handle the registration from 3D point-to-point correspondences, where other rotation solvers fail. Finally, we also present a marker-based SLAM approach with automatic registration to a target coordinate system based on partial and distributed reference information. It represents an application example that goes beyond classical camera pose estimation from image measurements and also serves for evaluation on real data.


Smith, Neil; Moehrle, Nils; Goesele, Michael; Heidrich, Wolfgang

Aerial Path Planning for Urban Scene Reconstruction: A Continuous Optimization Method and Benchmark

2018

ACM Transactions on Graphics

Small unmanned aerial vehicles (UAVs) are ideal capturing devices for high-resolution urban 3D reconstructions using multi-view stereo. Nevertheless, practical considerations such as safety usually mean that access to the scan target is often only available for a short amount of time, especially in urban environments. It therefore becomes crucial to perform both view and path planning to minimize flight time while ensuring complete and accurate reconstructions. In this work, we address the challenge of automatic view and path planning for UAV-based aerial imaging with the goal of urban reconstruction from multi-view stereo. To this end, we develop a novel continuous optimization approach using heuristics for multi-view stereo reconstruction quality and apply it to the problem of path planning. Even for large scan areas, our method generates paths in only a few minutes, and is therefore ideally suited for deployment in the field. To evaluate our method, we introduce and describe a detailed benchmark dataset for UAV path planning in urban environments which can also be used to evaluate future research efforts on this topic. Using this dataset and both synthetic and real data, we demonstrate survey-grade urban reconstructions with ground resolutions of 1 cm or better on large areas (30,000 m²).


Getto, Roman; Kuijper, Arjan; Fellner, Dieter W.

Automatic Procedural Model Generation for 3D Object Variation

2018

The Visual Computer

3D objects are used for numerous applications. In many cases not only single objects but also variations of objects are needed. Procedural models can be represented in many different forms, but generally excel in content generation. Therefore this representation is well suited for variation generation of 3D objects. However, the creation of a procedural model can be time-consuming on its own. We propose an automatic generation of a procedural model from a single exemplary 3D object. The procedural model consists of a sequence of parameterizable procedures and represents the object construction process. Changing the parameters of the procedures changes the surface of the 3D object. By linking the surface of the procedural model to the original object surface, we can transfer the changes and enable the possibility of generating variations of the original 3D object. The user can adapt the derived procedural model to easily and intuitively generate variations of the original object. We allow the user to define variation parameters within the procedures to guide a process of generating random variations. We evaluate our approach by computing procedural models for various object types, and we generate variations of all objects using the automatically generated procedural model.


Wirtz, Andreas; Mirashi, Sudesh Ganapati; Wesarg, Stefan

Automatic Teeth Segmentation in Panoramic X-Ray Images Using a Coupled Shape Model in Combination with a Neural Network

2018

Medical Image Computing and Computer Assisted Intervention – MICCAI 2018: Part IV

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) <21, 2018, Granada, Spain>

Lecture Notes in Computer Science (LNCS), 11073

Dental panoramic radiographs depict the full set of teeth in a single image and are used by dentists as a popular first tool for diagnosis. In order to provide the dentist with automatic diagnostic support, a robust and accurate segmentation of the individual teeth is required. However, poor image quality of panoramic x-ray images like low contrast or noise as well as teeth variations in between patients make this task difficult. In this paper, a fully automatic approach is presented that uses a coupled shape model in conjunction with a neural network to overcome these challenges. The network provides a preliminary segmentation of the teeth region which is used to initialize the coupled shape model in terms of position and scale. Then the 28 individual teeth (excluding wisdom teeth) are segmented and labeled using gradient image features in combination with the model’s statistical knowledge about their shape variation and spatial relation. The segmentation quality of the approach is assessed by comparing the generated results to manually created gold-standard segmentations of the individual teeth. Experimental results on a set of 14 test images show average precision and recall values of 0.790 and 0.827, respectively, and a Dice overlap of 0.744.

978-3-030-00936-6


Limper, Max; Vining, Nicolas; Sheffer, Alla

Box Cutter: Atlas Refinement for Efficient Packing via Void Elimination

2018

ACM Transactions on Graphics

Packed atlases, consisting of 2D parameterized charts, are ubiquitously used to store surface signals such as texture or normals. Tight packing is similarly used to arrange and cut-out 2D panels for fabrication from sheet materials. Packing efficiency, or the ratio between the areas of the packed atlas and its bounding box, significantly impacts downstream applications. We propose Box Cutter, a new method for optimizing packing efficiency suitable for both settings. Our algorithm improves packing efficiency without changing distortion by strategically cutting and repacking the atlas charts or panels. It preserves the local mapping between the 3D surface and the atlas charts and retains global mapping continuity across the newly formed cuts. We balance packing efficiency improvement against increase in chart boundary length and enable users to directly control the acceptable amount of boundary elongation. While the problem we address is NP-hard, we provide an effective practical solution by iteratively detecting large rectangular empty spaces, or void boxes, in the current atlas packing and eliminating them by first refining the atlas using strategically placed axis-aligned cuts and then repacking the refined charts. We repeat this process until no further improvement is possible, or until the desired balance between packing improvement and boundary elongation is achieved. Packed chart atlases are only useful for the applications we address if their charts are overlap-free; yet many popular parameterization methods, used as-is, produce atlases with global overlaps. Our pre-processing step eliminates all input overlaps while explicitly minimizing the boundary length of the resulting overlap-free charts. We demonstrate our combined strategy on a large range of input atlases produced by diverse parameterization methods, as well as on multiple sets of 2D fabrication panels. Our framework dramatically improves the output packing efficiency on all inputs; for instance with boundary length increase capped at 50% we improve packing efficiency by 68% on average.
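
The packing-efficiency measure referenced above can be illustrated with a small sketch: the ratio between the summed chart areas and the area of the packed atlas' bounding box. Representing charts as 2D polygons is an assumption made for this example.

```python
# Illustrative computation of packing efficiency for a set of 2D charts,
# each given as a polygon (counter-clockwise list of vertices).
import numpy as np

def polygon_area(poly: np.ndarray) -> float:
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def packing_efficiency(charts: list[np.ndarray]) -> float:
    pts = np.vstack(charts)
    bbox_area = np.prod(pts.max(axis=0) - pts.min(axis=0))
    return sum(polygon_area(c) for c in charts) / bbox_area

charts = [np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float),
          np.array([[1.2, 0], [2.2, 0], [1.2, 1]], float)]
print(round(packing_efficiency(charts), 3))   # chart area / bounding-box area
```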


Gödde, Michael; Gabler, Frank; Siegmund, Dirk; Braun, Andreas

Cinematic Narration in VR – Rethinking Film Conventions for 360 Degrees

2018

Virtual Augmented and Mixed Reality: Applications in Health, Cultural Heritage, and Industry

International Conference Virtual Augmented and Mixed Reality (VAMR) <10, 2018, Las Vegas, NV, USA>

The rapid development of VR technology in the past three years allowed artists, filmmakers and other media producers to create great experiences in this new medium. Filmmakers are, however, facing big challenges when it comes to cinematic narration in VR. The old, established rules of filmmaking do not apply to VR films, and important techniques of cinematography and editing must be completely rethought. Possibly, a new filmic language will be found. But even though filmmakers eagerly experiment with the new medium already, there exist relatively few scientific studies about the differences between classical filmmaking and filmmaking in 360° video and VR. We therefore present this study on cinematic narration in VR, in which we give a comprehensive overview of techniques and concepts that are applied in current VR films and games. We place previous research on narration, film, games and human perception into the context of VR experiences and we deduce consequences for cinematic narration in VR. We base our assumptions on an empirical test with 50 participants and on an additional online survey. In the empirical study, we selected 360-degree videos and showed them to a test group, while the viewers’ behavior and attention were observed and documented. As a result of this paper, we present guidelines which suggest methods of guiding the viewers’ attention as well as approaches to cinematography, staging and editing in VR.


Bernard, Jürgen; Hutter, Marco; Zeppelzauer, Matthias; Fellner, Dieter W.; Sedlmair, Michael

Comparing Visual-Interactive Labeling with Active Learning: An Experimental Study

2018

IEEE Transactions on Visualization and Computer Graphics

Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling.
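
As a point of reference for the model-centered baseline discussed above, the sketch below shows a plain pool-based active learning loop with least-confidence sampling; the synthetic data, the scikit-learn classifier, and the query budget are stand-ins for illustration, not the study's setup.

```python
# Pool-based active learning with least-confidence (uncertainty) sampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(10))                   # small seed set of labeled instances
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                         # 20 labeling iterations
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])
    # Least confidence: query the instance whose top-class probability is lowest.
    query = unlabeled[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)                   # the "oracle" reveals y[query]
    unlabeled.remove(query)

print("accuracy after active learning:", model.score(X, y))
```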


Damer, Naser; Wainakh, Yaza; Boller, Viola; Berken von den, Sven; Terhörst, Philipp; Braun, Andreas; Kuijper, Arjan

CrazyFaces: Unassisted Circumvention of Watchlist Face Identification

2018

IEEE 9th International Conference on Biometrics: Theory, Applications and Systems

IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS) <9, 2018, Redondo Beach, CA, USA>

Once upon a time, there was a blacklisted criminal who usually avoided appearing in public. He was surfing the Web, when he noticed, what had to be a targeted advertisement announcing a concert of his favorite band. The concert was in a near town, and the only way to get there was by train. He was worried, because he heard in the news about the new face identification system installed at the train station. From his last stay with the police, he remembers that they took these special face images with the white background. He thought about what can he do to avoid being identified and an idea popped in his mind “what if I can make a crazy-face, as the kids call it, to make my face look different? What do I exactly have to do? And will it work?”. He called his childhood geeky friend and asked him if he can build him a face recognition application he can tinker with. The geeky friend was always interested in such small projects where he can use open-source resources and didn’t really care about the goal, as usual. The criminal tested the application and played around, trying to figure out how can he make a crazy-face that won’t be identified as himself. On the day of the concert, he took off to the train station with some doubt in his mind and fear in his soul. To know what happened next, you should read the rest of this paper.

978-1-5386-7180-1


Saeedan, Faraz; Weber, Nicolas; Goesele, Michael; Roth, Stefan

Detail-Preserving Pooling in Deep Networks

2018

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) <2018, Salt Lake City, Utah, USA>

Most convolutional neural networks use some method for gradually downscaling the size of the hidden layers. This is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size. Since pooling by nature is a lossy process, it is crucial that each such layer maintains the portion of the activations that is most important for the network’s discriminability. Yet, simple maximization or averaging over blocks, max or average pooling, or plain downsampling in the form of strided convolutions are the standard. In this paper, we aim to leverage recent results on image downscaling for the purposes of deep learning. Inspired by the human visual system, which focuses on local spatial changes, we propose detail-preserving pooling (DPP), an adaptive pooling method that magnifies spatial changes and preserves important structural detail. Importantly, its parameters can be learned jointly with the rest of the network. We analyze some of its theoretical properties and show its empirical benefits on several datasets and networks, where DPP consistently outperforms previous pooling approaches.
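
A rough sketch of the underlying idea (weighting activations by how strongly they deviate from the local mean, so that detail dominates the pooled value) is given below; the actual DPP formulation and its learnable parameters differ, and the fixed exponent used here is an assumption.

```python
# Detail-rewarding weighted pooling (illustrative, not the exact DPP operator).
import numpy as np

def detail_weighted_pool(x: np.ndarray, block: int = 2, lam: float = 2.0,
                         eps: float = 1e-6) -> np.ndarray:
    h, w = x.shape
    out = np.empty((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = x[i:i + block, j:j + block]
            weights = np.abs(patch - patch.mean()) ** lam + eps
            out[i // block, j // block] = (weights * patch).sum() / weights.sum()
    return out

x = np.array([[0., 0., 1., 1.],
              [0., 9., 1., 1.],
              [2., 2., 3., 3.],
              [2., 2., 3., 3.]])
print(detail_weighted_pool(x))   # the 9 (a strong local change) dominates its block
```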


Lan, Zirui; Sourina, Olga; Wang, Lipo; Scherer, Reinhold; Müller-Putz, Gernot

Domain Adaptation Techniques for EEG-based Emotion Recognition: A Comparative Study on Two Public Datasets

2018

IEEE Transactions on Cognitive and Developmental Systems

Affective brain-computer interface (aBCI) introduces personal affective factors to human-computer interaction. The state-of-the-art aBCI tailors its classifier to each individual user to achieve accurate emotion classification. A subject-independent classifier that is trained on pooled data from multiple subjects generally leads to inferior accuracy, due to the fact that electroencephalogram (EEG) patterns vary from subject to subject. Transfer learning or domain adaptation techniques have been leveraged to tackle this problem. Existing studies have reported successful applications of domain adaptation techniques on the SEED dataset. However, little is known about the effectiveness of the domain adaptation techniques on other affective datasets or in a cross-dataset application. In this paper, we focus on a comparative study on several state-of-the-art domain adaptation techniques on two datasets: DEAP and SEED. We demonstrate that domain adaptation techniques can improve the classification accuracy on both datasets, but are not as effective on DEAP as on SEED. Then, we explore the efficacy of domain adaptation in a cross-dataset setting when the data are collected under different environments using different devices and experimental protocols. Here, we propose to apply domain adaptation to reduce the intersubject variance as well as technical discrepancies between datasets, and then train a subject-independent classifier on one dataset and test on the other. Experimental results show that using a domain adaptation technique in a transductive adaptation setting can improve the accuracy significantly by 7.25%–13.40% compared to the baseline accuracy where no domain adaptation technique is used.


Berkei, Sarah; Limper, Max; Hörr, Christian; Kuijper, Arjan

Efficient Global Registration for Nominal/Actual Comparisons

2018

VMV 2018

International Symposium on Vision, Modeling and Visualization (VMV) <23, 2018, Stuttgart, Germany>

We investigate global registration methods for Nominal/Actual comparisons, using precise, high-resolution 3D scans. First we summarize existing approaches and requirements for this field of application. We then demonstrate that a basic RANSAC strategy, along with a slightly modified version of basic building blocks, can lead to a high global registration performance at moderate registration times. Specifically, we introduce a simple feedback loop that exploits the fast convergence of the ICP algorithm to efficiently speed up the search for a valid global alignment. Using the example of 3D printed parts and range images acquired by two different high-precision 3D scanners for quality control, we show that our method can be efficiently used for Nominal/Actual comparison. For this scenario, the proposed algorithm significantly outperforms the current state of the art, with regards to registration time and success rate.
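
A schematic version of this strategy might look as follows: a RANSAC loop over crude rigid-transform hypotheses, each refined by a few cheap ICP iterations before scoring. The random triple sampling, thresholds, and synthetic smoke test are assumptions made for illustration and do not reflect the feature-based hypotheses or data of the paper.

```python
# Hedged sketch: RANSAC over rigid-transform hypotheses, each refined by short ICP.
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Best rigid transform (R, t) with dst ~= R @ src + t, both (N, 3)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp_refine(src, dst, dst_tree, R, t, iters=5):
    """A few cheap ICP iterations: re-associate nearest neighbours, re-fit."""
    for _ in range(iters):
        _, idx = dst_tree.query(src @ R.T + t)
        R, t = kabsch(src, dst[idx])
    return R, t

def ransac_icp(src, dst, trials=200, inlier_dist=0.05, seed=0):
    rng = np.random.default_rng(seed)
    tree = cKDTree(dst)
    best, best_inliers = (np.eye(3), np.zeros(3)), -1
    for _ in range(trials):
        s = rng.choice(len(src), 3, replace=False)
        d = rng.choice(len(dst), 3, replace=False)
        R, t = kabsch(src[s], dst[d])            # crude random hypothesis
        R, t = icp_refine(src, dst, tree, R, t)  # feedback loop: let ICP pull it in
        dists, _ = tree.query(src @ R.T + t)
        inliers = int((dists < inlier_dist).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best, best_inliers

# Tiny smoke test on synthetic data: a 90-degree rotation plus translation.
pts = np.random.default_rng(1).random((200, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
(R, t), n = ransac_icp(pts, pts @ Rz.T + 0.3)
print("inlier count:", n)
```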

978-3-03868-072-7


Zhou, Wei; Ma, Caiwen; Liao, Shenghui; Shi, Jinjing; Yao, Tong; Chang, Peng; Kuijper, Arjan

Feature Fusion Information Statistics for Feature Matching in Cluttered Scenes

2018

Computers & Graphics

Object recognition in cluttered scenes remains a largely unsolved problem; in particular, when applying feature matching to cluttered scenes, there are many feature mismatches between the scenes and models. We propose our Feature Fusion Information Statistics (FFIS) as the calculation framework for extracting salient information from a Local Surface Patch (LSP) by a Local Reference Frame (LRF). Our LRF is defined on each LSP by projecting the scatter matrix’s eigenvectors to a plane which is perpendicular to the normal of the LSP. Based on this, our FFIS descriptor of each LSP is calculated, for which we use the combined distribution of mesh and point information in a local domain. Finally, we evaluate the speed, robustness and descriptiveness of our FFIS against state-of-the-art methods on several public benchmarks. Our experiments show that our FFIS is fast and obtains a more reliable matching rate than other approaches in cluttered situations.


Fu, Biying; Kirchbuchner, Florian; Kuijper, Arjan; Braun, Andreas; Gangatharan, Dinesh Vaithyalingam

Fitness Activity Recognition on Smartphones Using Doppler Measurements

2018

Informatics

Quantified Self has seen an increased interest in recent years, with devices including smartwatches, smartphones, or other wearables that allow you to monitor your fitness level. This is often combined with mobile apps that use gamification aspects to motivate the user to perform fitness activities, or increase the amount of sports exercise. Thus far, most applications rely on accelerometers or gyroscopes that are integrated into the devices. They have to be worn on the body to track activities. In this work, we investigated the use of a speaker and a microphone that are integrated into a smartphone to track exercises performed close to it. We combined active sonar and Doppler signal analysis in the ultrasound spectrum that is not perceivable by humans. We wanted to measure the body weight exercises bicycles, toe touches, and squats, as these consist of challenging radial movements towards the measuring device. We have tested several classification methods, ranging from support vector machines to convolutional neural networks. We achieved an accuracy of 88% for bicycles, 97% for toe-touches and 91% for squats on our test set.
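
The Doppler analysis described above can be illustrated with a minimal numpy sketch: a simulated 20 kHz carrier plus a frequency-shifted echo, whose off-carrier energy in the spectrum indicates motion towards the device. The carrier frequency, the synthetic signal, and the band limits are assumptions made for this example.

```python
# Toy Doppler analysis: emit an (inaudible) carrier, inspect microphone spectrum
# around it; motion shows up as energy shifted away from the carrier frequency.
import numpy as np

fs, carrier = 48_000, 20_000          # sample rate and ultrasonic carrier (Hz)
t = np.arange(fs) / fs                # one second of synthetic microphone signal
echo = 0.2 * np.sin(2 * np.pi * (carrier + 40) * t)   # simulated +40 Hz Doppler shift
mic = np.sin(2 * np.pi * carrier * t) + echo + 0.01 * np.random.randn(fs)

spectrum = np.abs(np.fft.rfft(mic * np.hanning(fs)))
freqs = np.fft.rfftfreq(fs, 1 / fs)
band = (freqs > carrier + 5) & (freqs < carrier + 200)   # energy above the carrier
print("approaching-motion energy:", round(float(spectrum[band].sum()), 1))
```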


Mueller-Roemer, Johannes; Stork, André

GPU-based Polynomial Finite Element Matrix Assembly for Simplex Meshes

2018

Computer Graphics Forum

Pacific Conference on Computer Graphics and Applications (PG) <26, 2018, Hong Kong, China>

In this paper, we present a matrix assembly technique for arbitrary polynomial order finite element simulations on simplex meshes for graphics processing units (GPU). Compared to the current state of the art in GPU-based matrix assembly, we avoid the need for an intermediate sparse matrix and perform assembly directly into the final, GPU-optimized data structure. Thereby, we avoid the resulting 180% to 600% memory overhead, depending on polynomial order, and associated allocation time, while simplifying the assembly code and using a more compact mesh representation. We compare our method with existing algorithms and demonstrate significant speedups.


Zhou, Wei; Ma, Caiwen; Kuijper, Arjan

Hough-space-based Hypothesis Generation and Hypothesis Verification for 3D Object Recognition and 6D Pose Estimation

2018

Computers & Graphics

Hypothesis Generation (HG) and Hypothesis Verification (HV) play an important role in 3D object recognition. However, performing 3D object recognition in cluttered scenes using HG and HV still remains a largely unsolved problem. High false positive (FP) rates in the HG and HV stages occur due to clutter and occlusion, which further affect the final recognition accuracy. To address these problems, we propose a novel Hough-space-based HG approach for extracting hypotheses. Different from existing methods, our approach is based on a Hough space which adopts a self-adapted measure to generate hypotheses. Based on this, a novel HV-based method is proposed to verify the hypotheses obtained from HG procedures. The proposed method is evaluated on four public benchmark datasets to verify its performance. Experiments show that our approach outperforms state-of-the-art methods, and obtains a higher recognition rate without sacrificing precision both at high FP rates and high occlusion rates.


Noll, Matthias; Noa-Rudolph, Werner; Wesarg, Stefan; Kraly, Michael; Stoffels, Ingo; Klode, Joachim; Spass, Cédric; Spass, Gerrit

ICG based Augmented-Reality-System for Sentinel Lymph Node Biopsy

2018

Eurographics Workshop on Visual Computing for Biology and Medicine

Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM) <8, 2018, Granada, Spain>

In this paper we introduce a novel augmented-reality (AR) system for the sentinel lymph node (SLN) biopsy. The AR system consists of a cubic recording device with integrated stereo near-infrared (NIR) and stereo color cameras, a head-mounted display (HMD) for visualizing the SLN information directly in the physician's view, and a controlling software application. The labeling of the SLN is achieved using the fluorescent dye indocyanine green (ICG). The dye accumulates in the SLN where it is excited to fluorescence by applying infrared light. The fluorescence is recorded from two directions by the NIR stereo cameras using appropriate filters. Applying the known rigid camera geometry, an ICG depth map can be generated from the camera images, thus creating a live 3D representation of the SLN. The representation is then superimposed onto the physician's field of view by applying a series of coordinate system transformations that are determined in four separate system calibration steps. To compensate for the head motion, the recording system is continuously tracked by a single camera on the HMD using fiducial markers. Because the system does not require additional monitors, the physician's attention is kept solely on the operation site. This can potentially decrease the intervention time and render the procedure safer for the patient.

978-3-03868-056-7


Chegini, Mohammad; Shao, Lin; Gregor, Robert; Lehmann, Dirk J.; Schreck, Tobias

Interactive Visual Exploration of Local Patterns in Large Scatterplot Spaces

2018

Computer Graphics Forum

Eurographics / IEEE VGTC Conference on Visualization (EuroVis) <20, 2018, Brno, Czech Republic>

Analysts often use visualisation techniques like a scatterplot matrix (SPLOM) to explore multivariate datasets. The scatterplots of a SPLOM can help to identify and compare two-dimensional global patterns. However, local patterns which might only exist within subsets of records are typically much harder to identify and may go unnoticed among larger sets of plots in a SPLOM. This paper explores the notion of local patterns and presents a novel approach to visually select, search for, and compare local patterns in a multivariate dataset. Model-based and shape-based pattern descriptors are used to automatically compare local regions in scatterplots to assist in the discovery of similar local patterns. Mechanisms are provided to assess the level of similarity between local patterns and to rank similar patterns effectively. Moreover, a relevance feedback module is used to suggest potentially relevant local patterns to the user. The approach has been implemented in an interactive tool and demonstrated with two real-world datasets and use cases. It supports the discovery of potentially useful information such as clusters, functional dependencies between variables, and statistical relationships in subsets of data records and dimensions.


Braun, Andreas; Zander-Walz, Sebastian; Majewski, Martin; Kuijper, Arjan

Investigating Large Curved Interaction Devices

2018

Personal and Ubiquitous Computing

Large interactive surfaces enable novel forms of interaction for their users, particularly in terms of collaborative interaction. During longer interactions, the ergonomic factors of interaction systems have to be taken into consideration. Using the full interaction space may require considerable motion of the arms and upper body over a prolonged period of time, potentially causing fatigue. In this work, we present Curved, a large-surface interaction device, whose shape is designed based on the natural movement of an outstretched arm. It is able to track one or two hands above or on its surface by using 32 capacitive proximity sensors. Supporting both touch and mid-air interaction can enable more versatile modes of use. We use image processing methods for tracking the user's hands and classify gestures based on their motion. Virtual reality is a potential use case for such interaction systems and was chosen for our demonstration application. We conducted a study with ten users to test the gesture tracking performance, as well as user experience and user preference for the adjustable system parameters.


Tack, A.; Mukhopadhyay, Anirban; Zachow, S.

Knee Menisci Segmentation Using Convolutional Neural Networks: Data from the Osteoarthritis Initiative

2018

Osteoarthritis and Cartilage

Objective: To present a novel method for automated segmentation of knee menisci from MRIs. To evaluate quantitative meniscal biomarkers for osteoarthritis (OA) estimated thereof. Method: A segmentation method employing convolutional neural networks in combination with statistical shape models was developed. Accuracy was evaluated on 88 manual segmentations. Meniscal volume, tibial coverage, and meniscal extrusion were computed and tested for differences between groups of OA, joint space narrowing (JSN), and WOMAC pain. Correlation between computed meniscal extrusion and MRI Osteoarthritis Knee Score (MOAKS) experts' readings was evaluated for 600 subjects. Suitability of biomarkers for predicting incident radiographic OA from baseline to 24 months was tested on a group of 552 patients (184 incident OA, 386 controls) by performing conditional logistic regression. Results: Segmentation accuracy measured as Dice similarity coefficient was 83.8% for medial menisci (MM) and 88.9% for lateral menisci (LM) at baseline, and 83.1% and 88.3% at 12-month follow-up. Medial tibial coverage was significantly lower for arthritic cases compared to non-arthritic ones. Medial meniscal extrusion was significantly higher for arthritic knees. A moderate correlation between automatically computed medial meniscal extrusion and experts' readings was found (r = 0.44). Mean medial meniscal extrusion was significantly greater for incident OA cases compared to controls (1.16 ± 0.93 mm vs 0.83 ± 0.92 mm; P < 0.05). Conclusion: Especially for medial menisci an excellent segmentation accuracy was achieved. Our meniscal biomarkers were validated by comparison to experts' readings as well as analysis of differences w.r.t. groups of OA, JSN, and WOMAC pain. It was confirmed that medial meniscal extrusion is a predictor for incident OA.


Silva, Nelson; Schreck, Tobias; Veas, Eduardo; Sabon, Vedran; Eggeling, Eva; Fellner, Dieter W.

Leveraging Eye-gaze and Time-series Features to Predict User Interests and Build a Recommendation Model for Visual Analysis

2018

ETRA '18

ACM Symposium on Eye Tracking Research & Applications (ETRA) <10, 2018, Warsaw, Poland>

We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features and eye-gaze interests, captured via an eye-tracker. Mouse selections are also considered. The system provides an overlay visualization with recommended patterns, and an eye-history graph, that supports the users in the data exploration process. We conducted an experiment with 5 tasks where 30 participants explored sensor data of a wind turbine. This work presents results on pre-attentive features, and discusses the precision/recall of our model in comparison to final selections made by users. Our model helps users to efficiently identify interesting time-series patterns.

978-1-4503-5706-7


Haescher, Marian; Matthies, Denys J.C.; Srinivasan, Karthik; Bieber, Gerald

Mobile Assisted Living: Smartwatch-based Fall Risk Assessment for Elderly People

2018

iWOAR 2018

International Workshop on Sensor-based Activity Recognition (iWOAR) <5, 2018, Rostock, Germany>

ACM International Conference Proceedings Series

We present a novel smartwatch-based approach to enable Mobile Assisted Living (MAL) for users with special needs. A major focus group for this approach are elderly people. We developed a tool for caregivers applicable in home environments, nursing care, and hospitals, to assess the vitality of their patients. Hereby, we particularly focus on the prediction of falls, because falls are a major reason for serious injuries and premature death among elderly people. Therefore, we propose a multi-parametric score based on standardized fall risk assessment tests, as well as on sleep quality, medication, patient history, motor skills, and environmental factors. The resulting total fall risk score reflects individual changes in behavior and vitality, which consequently enables fall-preventing interventions. Our system has been deployed and evaluated in a pilot study among 30 elderly patients over a period of four weeks.

978-1-4503-6487-4


Damer, Naser; Moseguí Saladié, Alexandra; Braun, Andreas; Kuijper, Arjan

MorGAN: Recognition Vulnerability and Attack Detectability of Face Morphing Attacks Created by Generative Adversarial Network

2018

IEEE 9th International Conference on Biometrics: Theory, Applications and Systems

IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS) <9, 2018, Redondo Beach, CA, USA>

Face morphing attacks aim at creating face images that are verifiable to be the face of multiple identities, which can lead to building faulty identity links in operations like border crossing. Research has been focused on creating more accurate attack detection approaches by considering different image properties. However, all the attacks considered so far are based on manipulating facial landmarks localized in the morphed face images. In contrast, this work presents novel face morphing attacks based on images generated by generative adversarial networks. We present the MorGAN structure that considers the representation loss to successfully create realistic morphing attacks. Based on that, we present a novel face morphing attack database (MorGAN database) that contains 1000 morph images for both the proposed MorGAN and landmark-based attacks. We present a vulnerability analysis of two face recognition approaches facing the proposed attacks. Moreover, the detectability of the proposed MorGAN attacks is studied in scenarios where this type of attack is known and unknown. We conclude by pointing out the challenge of detecting such unknown novel attacks and an analysis of detection performances of different features in detecting such attacks.

978-1-5386-7180-1


Riffnaller-Schiefer, Andreas; Augsdörfer, Ursula H.; Fellner, Dieter W.

Physics-based Deformation of Subdivision Surfaces for Shared Virtual Worlds

2018

Computers & Graphics

International Conference on Cyberworlds (CW) <2017, Chester, UK>

Creating immersive interactive virtual worlds requires not only plausible visuals; it is also important to allow the user to interact with the virtual scene in a natural way. While rigid-body physics simulations are widely used to provide basic interaction, realistic soft-body deformations of virtual objects are challenging and therefore typically not offered in multi-user environments. We present a web service for interactive deformation which can accurately replicate real-world material behavior. Its architecture is highly flexible, can be used from any web-enabled client, and facilitates synchronization of computed deformations across multiple users and devices at different levels of detail.


Fauser, Johannes; Sakas, Georgios; Mukhopadhyay, Anirban

Planning Nonlinear Access Paths for Temporal Bone Surgery

2018

International Journal of Computer Assisted Radiology and Surgery

Purpose: Interventions at the otobasis operate in the narrow region of the temporal bone where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. Methods: We define a specific motion planning problem in SE(3) = R³ × SO(3) with notable constraints in computation time and goal pose that reflect the requirements of temporal bone surgery. We then present k-RRT-Connect: two suitable motion planners based on bidirectional Rapidly exploring Random Tree (RRT) to solve this problem efficiently. Results: The benefits of k-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bézier splines when applied to this specific problem. Conclusion: With this work, we demonstrate that preoperative and intra-operative planning of nonlinear access paths is possible for minimally invasive surgeries at the otobasis.
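
For orientation, a very reduced bidirectional RRT ("RRT-Connect") sketch in plain R³ with a single spherical obstacle is shown below; the paper's k-RRT-Connect planners additionally handle orientations in SE(3), goal-pose constraints, and clinical runtime limits. All parameters and geometry here are illustrative assumptions.

```python
# Reduced bidirectional RRT sketch in R^3 with spherical obstacles.
import numpy as np

rng = np.random.default_rng(0)
OBSTACLES = [(np.array([0.5, 0.5, 0.5]), 0.2)]   # illustrative (center, radius) spheres
STEP = 0.1

def collision_free(p):
    return all(np.linalg.norm(p - c) > r for c, r in OBSTACLES)

def extend(tree, target):
    """Grow `tree` (list of (point, parent_index)) by one STEP towards `target`."""
    pts = np.array([p for p, _ in tree])
    i = int(np.argmin(np.linalg.norm(pts - target, axis=1)))
    direction = target - pts[i]
    dist = np.linalg.norm(direction)
    new = target if dist < STEP else pts[i] + STEP * direction / dist
    if collision_free(new):
        tree.append((new, i))
        return new
    return None

def rrt_connect(start, goal, iters=3000):
    ta, tb = [(np.array(start, float), -1)], [(np.array(goal, float), -1)]
    for _ in range(iters):
        new = extend(ta, rng.random(3))          # grow one tree towards a random sample
        if new is not None:
            other = extend(tb, new)              # grow the other tree towards the new node
            if other is not None and np.linalg.norm(other - new) < STEP:
                return ta, tb                    # trees met: a collision-free path exists
        ta, tb = tb, ta                          # alternate the roles of the two trees
    return None

print("path found:", rrt_connect([0.1, 0.1, 0.1], [0.9, 0.9, 0.9]) is not None)
```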


Behrisch, Michael; Blumenschein, M.; Kim, N. W.; Shao, Lin; El-Assady, M.; Fuchs, Johannes; Seebacher, Daniel; Diehl, Alexandra; Brandes, U.; Pfister, Hanspeter; Schreck, Tobias; Weiskopf, Daniel; Keim, Daniel A.

Quality Metrics for Information Visualization

2018

Computer Graphics Forum

Eurographics / IEEE VGTC Conference on Visualization (EuroVis) <20, 2018, Brno, Czech Republic>

The visualization community has developed to date many intuitions and understandings of how to judge the quality of views in visualizing data. The computation of a visualization’s quality and usefulness ranges from measuring clutter and overlap, up to the existence and perception of specific (visual) patterns. This survey attempts to report, categorize and unify the diverse understandings and aims to establish a common vocabulary that will enable a wide audience to understand their differences and subtleties. For this purpose, we present a commonly applicable quality metric formalization that should detail and relate all constituting parts of a quality metric. We organize our corpus of reviewed research papers along the data types established in the information visualization community: multi- and high-dimensional, relational, sequential, geospatial and text data. For each data type, we select the visualization subdomains in which quality metrics are an active research field and report their findings, reason on the underlying concepts, describe goals and outline the constraints and requirements. One central goal of this survey is to provide guidance on future research opportunities for the field and outline how different visualization communities could benefit from each other by applying or transferring knowledge to their respective subdomain. Additionally, we aim to motivate the visualization community to compare computed measures to the perception of humans.


Tang, Chong; Wang, Rui; Wang, Yu; Wang, Shuo; Lukas, Uwe von; Tan, Min

RobCutt: A Framework of Underwater Biomimetic Vehicle-Manipulator System for Autonomous Interventions

2018

2018 14th IEEE International Conference on Automation Science and Engineering (CASE)

IEEE International Conference on Automation Science and Engineering (CASE) <14, 2018, Munich, Germany>

This paper presents a general concept framework of the underwater biomimetic vehicle-manipulator system (UBVMS) for autonomous interventions in terms of objectives, as well as technologies and methodologies. With full consideration of the autonomous cruise and intervention, the RobCutt system’s configuration and methodology are designed to promote the levels of autonomy of the autonomous underwater vehicle-manipulator system (UVMS). The second-generation UBVMS (RobCutt II) is introduced, including the design and principle of the cuttlefish-inspired biomimetic propulsor and the lightweight manipulator, and its advantages are summarized. Moreover, technologies and methodologies of underwater localization, object detection and coordination control are designed and implemented, respectively. Finally, pool tests have been carried out to verify the feasibility and effectiveness of the developed framework and methodology.

978-1-5386-2514-9


Matthies, Denys J.C.; Daza Parra, Laura Milena; Urban, Bodo

Scaling Notifications Beyond Alerts: From Subtly Drawing Attention up to Forcing the User to Take Action

2018

UIST 2018 Adjunct

Research has been done on sophisticated notifications; still, devices today mainly stick to a binary level of information: they are either attention-drawing or silent. We propose scalable notifications, which adjust the intensity level, ranging from subtle to obtrusive, and even go beyond that level by forcing the user to take action. To illustrate the technical feasibility and validity of this concept, we developed three prototypes. The prototypes provided mechano-pressure, thermal, and electrical feedback, which were evaluated in different lab studies. Our first prototype provides subtle poking through to high and frequent pressure on the user’s spine, which significantly improves back posture. In a second scenario, the user is able to perceive the overuse of a drill by an increased temperature on the palm of a hand until the heat is intolerable, forcing the user to eventually put down the tool. The last application comprises a speed control in a driving simulation, in which electric muscle stimulation on the users’ legs conveys information on changing the car’s speed through a perceived tingling, until the system forces the foot to move involuntarily. In conclusion, all studies’ findings support the feasibility of our concept of a scalable notification system, including the system forcing an intervention.

978-1-4503-5949-8


Sacha, Dominik; Kraus, Matthias; Bernard, Jürgen; Behrisch, Michael; Schreck, Tobias; Asano, Yuki; Keim, Daniel A.

SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance

2018

IEEE Transactions on Visualization and Computer Graphics

Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.
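
The Self-Organizing Map update that such an approach builds on can be sketched compactly: each sample pulls its best-matching unit and that unit's grid neighbours towards itself, with a shrinking radius and learning rate. Grid size, schedules, and the random stand-in data below are assumptions, not SOMFlow's configuration.

```python
# Plain Self-Organizing Map training loop (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 6, 6, 4
weights = rng.random((grid_h, grid_w, dim))                       # SOM prototypes
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)            # unit positions on the grid
data = rng.random((200, dim))                                     # stand-in feature vectors

epochs, lr0, radius0 = 20, 0.5, 3.0
for e in range(epochs):
    lr = lr0 * (1 - e / epochs)                                   # shrinking learning rate
    radius = max(radius0 * (1 - e / epochs), 0.5)                 # shrinking neighbourhood
    for x in data:
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (grid_h, grid_w))
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * radius ** 2))[..., None]         # Gaussian neighbourhood kernel
        weights += lr * h * (x - weights)                         # pull units towards the sample

print("trained SOM prototype grid:", weights.shape)
```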


Ritz, Martin; Breitfelder, Simon; Santos, Pedro; Kuijper, Arjan; Fellner, Dieter W.

Synthesis and Rendering of Seamless and Non-Repetitive 4D Texture Variations for Measured Optical Material Properties

2018

SIGGRAPH Asia 2018 Technical Briefs

Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH ASIA) <11, 2018, Tokyo, Japan>

We have lifted the one weakness of an existing fully automatic acquisition system for spatially varying optical material behavior of real object surfaces. While its expression of spatially varying material behavior with spherical dependence on incoming light as 4D texture (ABTF material model) allows flexible mapping on arbitrary 3D geometries, photo-realistic rendering and interaction in real-time, this very method of texture-like representation exposed it to common problems of texturing, striking in two levels. First, non-seamless textures create visible border artifacts. Second, even a perfectly seamless texture causes repetition artifacts due to side-by-side distribution in large numbers over the 3D surface. We solved both problems through our novel texture synthesis that generates a set of seamless texture variations randomly distributed on the surface at shading time. When compared to regular 2D textures, the inter-dimensional coherence of the 4D ABTF material model poses entirely new challenges to texture synthesis, which includes maintaining the consistency of material behavior throughout the space spanned by the spatial image domain and the angular illumination hemisphere. In addition, we tackle the increased memory consumption caused by the numerous variations through a fitting scheme specifically designed to reconstruct the most prominent effects captured in the material model.

978-1-4503-6062-3


Samartzidis, Timotheos; Siegmund, Dirk; Gödde, Michael; Damer, Naser; Braun, Andreas; Kuijper, Arjan

The Dark Side of the Face: Exploring the Ultraviolet Spectrum for Face Biometrics

2018

2018 International Conference on Biometrics (ICB)

IAPR International Conference on Biometrics (ICB) <11, 2018, Gold Coast, Australia>

Facial recognition in the visible spectrum is a widely used application, but it is also still a major field of research. In this paper we present melanin face pigmentation (MFP) as a new modality to be used to extend classical face biometrics. Melanin pigmentation consists of sun-damaged cells that occur as revealed and/or unrevealed patterns on human skin. Most MFP can be found in the faces of some people when using ultraviolet (UV) imaging. To prove the relevance of this feature for biometrics, we present a novel image dataset of 91 multiethnic subjects in both the visible and the UV spectrum. We show a method to extract the MFP features from the UV images using the well-known SURF features and compare it with other techniques. In order to prove its benefits, we use weighted score-level fusion and evaluate the performance in a one-against-all comparison. As a result, we observed a significant amplification of performance where traditional face recognition in the visible spectrum is extended with MFP from UV images. We conclude with a future perspective on the use of these features for future research and discuss observed issues and limitations.
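
Weighted score-level fusion, as used above, can be sketched in a few lines; the min-max normalization and the specific weights are illustrative assumptions rather than the paper's values.

```python
# Weighted score-level fusion of two comparators after min-max normalization.
import numpy as np

def minmax(s):
    s = np.asarray(s, float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(face_scores, mfp_scores, w_face=0.7, w_mfp=0.3):
    return w_face * minmax(face_scores) + w_mfp * minmax(mfp_scores)

print(fuse([0.2, 0.8, 0.5], [10, 40, 25]).round(2))
```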


Meister, Simon; Hur, Junhwa; Roth, Stefan

UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss

2018

32nd AAAI Conference on Artificial Intelligence

AAAI Conference on Artificial Intelligence <32, 2018, New Orleans, Louisiana, USA>

In the era of end-to-end deep learning, many advances in computer vision are driven by large amounts of labeled data. In the optical flow setting, however, obtaining dense per-pixel ground truth for real scenes is difficult and thus such data is rare. Therefore, recent end-to-end convolutional networks for optical flow rely on synthetic datasets for supervision, but the domain mismatch between training and test scenarios continues to be a challenge. Inspired by classical energy-based optical flow methods, we design an unsupervised loss based on occlusion-aware bidirectional flow estimation and the robust census transform to circumvent the need for ground truth flow. On the KITTI benchmarks, our unsupervised approach outperforms previous unsupervised deep networks by a large margin, and is even more accurate than similar supervised methods trained on synthetic datasets alone. By optionally fine-tuning on the KITTI training data, our method achieves competitive optical flow accuracy on the KITTI 2012 and 2015 benchmarks, thus in addition enabling generic pre-training of supervised networks for datasets with limited amounts of ground truth.
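
The census transform underlying the loss described above can be sketched as follows: each pixel is encoded by sign comparisons against its 3x3 neighbourhood, and two images are compared via the Hamming distance of these bit patterns, which makes the comparison robust to brightness changes. The hard binary variant shown here is an assumption; in practice a soft/ternary census is typically used.

```python
# 3x3 census transform and per-pixel Hamming distance between two images.
import numpy as np

def census_3x3(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    bits = np.zeros((h - 2, w - 2, 8), dtype=bool)
    center = img[1:-1, 1:-1]
    k = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            bits[..., k] = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj] > center
            k += 1
    return bits

def census_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return (census_3x3(a) != census_3x3(b)).sum(-1)    # per-pixel Hamming distance

img = np.random.rand(8, 8)
print(census_distance(img, img + 0.3).sum())   # constant brightness shift: distance stays 0
```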

978-1-57735-800-8


Bernard, Jürgen; Zeppelzauer, Matthias; Sedlmair, Michael; Aigner, Wolfgang

VIAL: A Unified Process for Visual Interactive Labeling

2018

The Visual Computer

The assignment of labels to data instances is a fundamental prerequisite for many machine learning tasks. Moreover, labeling is a frequently applied process in visual interactive analysis approaches and visual analytics. However, the strategies for creating labels usually differ between these two fields. This raises the question whether synergies between the different approaches can be attained. In this paper, we study the process of labeling data instances with the user in the loop, from both the machine learning and visual interactive perspective. Based on a review of differences and commonalities, we propose the "visual interactive labeling" (VIAL) process that unifies both approaches. We describe the six major steps of the process and discuss their specific challenges. Additionally, we present two heterogeneous usage scenarios from the novel VIAL perspective, one on metric distance learning and one on object detection in videos. Finally, we discuss general challenges to VIAL and point out necessary work for the realization of future VIAL approaches.


Ballweg, Kathrin; Pohl, Margit; Wallner, Günter; Landesberger, Tatiana von

Visual Similarity Perception of Directed Acyclic Graphs: A Study on Influencing Factors and Similarity Judgment Strategies

2018

Journal of Graph Algorithms and Applications

Visual comparison of directed acyclic graphs (DAGs) is commonly encountered in various disciplines (e.g., finance, biology). Still, knowledge about humans' perception of their similarity is currently quite limited. By similarity perception, we mean how humans perceive commonalities and differences of DAGs and herewith come to a similarity judgment. To fill this gap, we strive to identify factors influencing the DAG similarity perception. Therefore, we conducted a card sorting study employing a quantitative and qualitative analysis approach to identify (1) groups of DAGs the participants perceived as similar and (2) the reasons behind their groupings. We also did an extended analysis of our collected data to (1) reveal specifics of the influencing factors and (2) investigate which strategies are employed to come to a similarity judgment. Our results suggest that DAG similarity perception is mainly influenced by the number of levels, the number of nodes on a level, and the overall shape of the DAG. We also identified three strategies used by the participants to form groups of similar DAGs: divide and conquer, respecting the entire dataset and considering the factors one after the other, and considering a single factor. Factor specifics are, e.g., that humans on average consider four factors while judging the similarity of DAGs. Building an understanding of these processes may inform the design of comparative visualizations and strategies for interacting with them. The interaction strategies must allow the user to apply her similarity judgment strategy to the data. The considered factors bear information on, e.g., which factors are overlooked by humans and thus need to be highlighted by the visualization.


Terhörst, Philipp; Damer, Naser; Braun, Andreas; Kuijper, Arjan

What Can a Single Minutia Tell about Gender?

2018

2018 International Workshop on Biometrics and Forensics (IWBF)

International Workshop on Biometrics and Forensics (IWBF) <2018, Sassari, Italy>

Since fingerprints are one of the most widely deployed biometrics, several applications can benefit from an accurate fingerprint gender estimation. Previous work mainly tackled the task of gender estimation based on complete fingerprints. However, partial fingerprint captures are frequently occurring in many applications including forensics and consumer electronics, with the considered ratio of the fingerprint is variable. Therefore, this work investigates gender estimation on a small, detectable, and well-defined partition of a fingerprint. It investigates gender estimation on the level of a single minutia. Working on this level, we propose a feature extraction process that is able to deal with the rotation and translation invariance problems of fingerprints. This is evaluated on a publicly available database and with five different binary classifiers. As a result, the information of a single minutia achieves a comparable accuracy on the gender classification task as previous work using quarters of aligned fingerprints with an average of more than 25 minutiae.