List of scientific publications

Results 1 - 75 of 75
Abb, Benjamin; Kuijper, Arjan [1. Review]; Gutbell, Ralf [2. Review]

3D Mesh Generation Through Noised RGB-D Inputstream and Rule Based Denoising with Virtual City Model

2020

Darmstadt, TU, Master Thesis, 2020

3D models are popular for planning in an urban context. The Levels of Detail (LoDs) can vary from cuboid shapes to highly detailed meshes. The acquisition and updating of those models is a cost-intensive process requiring aerial footage and manual labor. This is why often only low-detail city models are available, which do not represent an up-to-date state. Updating a city model with RGB-D mesh generation can be a viable option, since depth-sensing cameras have become cheap and machine learning techniques for predicting depth from a single color image have advanced. But depth values from those methods are very noisy. Although there are good options available for reconstructing a 3D mesh from a stream of color and depth images, this amount of noise represents a challenge. In this thesis a 3D mesh reconstruction method is presented that uses the existing virtual city model as a second data input to minimize the influence of noise. To this end, a virtual depth stream is created by rendering the urban model from the same perspective as the noised RGB-D stream. A set of rules merges both streams by leveraging their depth difference and normal deviation. The approach is implemented as an extension to the reconstruction algorithm of SurfelMeshing. The output is an updated model with more detailed building features. The evaluation is done in an artificial environment to test against ground truth with fixed noise levels. Quantitative results show that the approach is less prone to errors than using just the noised depth stream. Artifacts in the reconstruction can still arise, especially with a very high noise level. The denoising capabilities show that salient features are kept while the overall output error is reduced.
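
The rule-based merging described above can be illustrated with a minimal sketch. It assumes per-pixel depth and normal maps for both the sensor stream and the rendered city model; the function name, thresholds, and keep/replace rule are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def merge_depth_streams(d_sensor, d_virtual, n_sensor, n_virtual,
                        depth_tol=0.15, normal_tol=0.8):
    """Merge a noisy sensor depth map with a rendered virtual depth map.

    Illustrative rule set: where sensor and virtual depth agree within
    depth_tol metres and the per-pixel normals point in a similar
    direction, keep the (detail-rich) sensor value; otherwise fall back
    to the virtual city-model depth to suppress noise.
    """
    depth_close = np.abs(d_sensor - d_virtual) < depth_tol
    # cosine similarity between per-pixel unit normals
    normal_close = np.sum(n_sensor * n_virtual, axis=-1) > normal_tol
    keep_sensor = depth_close & normal_close
    return np.where(keep_sensor, d_sensor, d_virtual)
```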

Kuban, Katharina; Kuijper, Arjan [1. Review]; Schufrin, Marija [2. Review]

A Gamified Information Visualization for the Exploration of Home Network Traffic Data

2020

Darmstadt, TU, Bachelor Thesis, 2020

Internet users today are exposed to a variety of cyberthreats. Therefore, it is necessary to make even non-expert users aware of the importance of cybersecurity. In this thesis, an approach to address this problem was developed based on the User Centered Design process. The development focused on visualizing the user's home network data to improve security in private usage. Existing approaches either focus on visualizing network data for more experienced users or on teaching cybersecurity with gamified solutions. Combining both, the visualization of the data was embedded in a game to motivate the user. A user study was conducted to identify the user requirements. It could be shown that the main reasons for not dealing with cybersecurity and network data are the user's lack of motivation and the difficulty of the topic. While following information visualization and game design principles, a prototype was implemented based on the user requirements. The prototype was evaluated from the user perspective, revealing that the game strengthens general awareness of the communication of devices in one's home network and makes the topic of network data more accessible. Additionally, it was found that the quality of player experience design is a crucial factor for motivating the user in the context of the presented approach. It should therefore receive more attention in future steps.

Neumann, Kai Alexander; Kuijper, Arjan [1. Review]; Domajnko, Matevz [2. Review]; Tausch, Reimar [3. Review]

Adaptive Camera View Clustering for Fast Incremental Image-based 3D Reconstruction

2020

Darmstadt, TU, Bachelor Thesis, 2020

Photogrammetry, more precisely image-based 3D reconstruction, is an established method for digitizing cultural heritage sites and artifacts. This method utilizes images from different perspectives to reconstruct the geometry and texture of an object. Which images are necessary for a successful reconstruction depends on the size, shape, and complexity of the object. Therefore, an autonomous scanning system for 3D reconstruction requires some kind of feedback during acquisition. In this thesis, we present an evaluation of different state-of-the-art photogrammetry solutions to identify which of them is most capable of providing feedback that predicts the quality of the final 3D reconstruction during acquisition. For this, we focused on the open-source incremental reconstruction solutions COLMAP, AliceVision Meshroom and MVE. Additionally, we included the commercial solution Agisoft Metashape to evaluate how it compares against the open-source solutions. While we were able to identify some characteristic behaviors, the accuracy and runtime of all four reconstruction solutions vary based on the input dataset. Because of this, and the fact that all four solutions compute very similar results under the same conditions, our tests were not conclusive. Nevertheless, we chose COLMAP as the back-end for further use as it provided good results on the real dataset as well as an extensive command-line interface (CLI). Based on these results, we introduce an iterative image-based reconstruction pipeline that uses a cluster-based acceleration structure to deliver more robust and efficient 3D reconstructions. The photogrammetry solution used for reconstruction is exchangeable. In this pipeline, images that portray common parts of an object are assigned to clusters based on their camera frustums. Each cluster can be reconstructed separately. The pipeline was implemented as a C++ module and tested on the autonomous robotic scanner CultArm3D®. For this system, we embedded the pipeline in a feedback loop with a density-based Next-Best-View (NBV) algorithm to assist during autonomous acquisition.
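
As a rough illustration of grouping views by camera pose, the following sketch shows a greedy clustering of camera views. It stands in for the frustum-based assignment described above; the angle and distance thresholds as well as the greedy strategy are assumptions for illustration only, not the thesis pipeline.

```python
import numpy as np

def assign_to_clusters(camera_centers, view_dirs,
                       angle_thresh=np.deg2rad(30), dist_thresh=0.5):
    """Greedy clustering of camera views: a new view joins the first
    cluster whose representative looks in a similar direction and sits
    nearby; otherwise it starts a new cluster. Illustrative stand-in
    for a full frustum-overlap test.
    """
    clusters = []  # list of (center, direction, member_indices)
    for i, (c, d) in enumerate(zip(camera_centers, view_dirs)):
        d = d / np.linalg.norm(d)
        for center, direction, members in clusters:
            if (np.dot(d, direction) > np.cos(angle_thresh)
                    and np.linalg.norm(c - center) < dist_thresh):
                members.append(i)
                break
        else:
            clusters.append((c, d, [i]))
    return clusters
```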

Chen, Cong; Kuijper, Arjan [1. Review]; Damer, Naser [2. Review]

Advanced analyses of CrazyFaces attacks on face identification systems

2020

Darmstadt, TU, Bachelor Thesis, 2020

After 5 years in prison, the greedy criminal was released. He never gave up the idea of sinning again, but he did not want to spend another 5 years in prison. So he began to summarize the lessons of his last arrest. Five years ago, he was arrested at a bank because surveillance cameras identified him. This was a bit of a surprise to him, because this clever criminal had repeatedly escaped the pursuit of surveillance cameras by changing his facial expressions. After investigating, he learned that the face recognition system in that bank was a different one. Therefore, his previously trained facial expressions failed. So a new idea now comes to his mind: "Can I just find one or more facial expressions that can disable most state-of-the-art face recognition systems?" To learn how this story ends, please read the rest of this thesis.

Strassheim, Konstantin; Rus, Silvia [Supervisor]; Kuijper, Arjan [Supervisor]

Ambient Respiratory Rate Detection Using Capacitive Sensors Inside Seats

2020

Darmstadt, TU, Bachelor Thesis, 2020

The purpose of this research is to develop an application to measure the human respiratory rate using capacitive sensors while seated. If the respiratory rate can be filtered out, it provides a way to monitor health status in daily life and can also be used to raise alerts or take action when an abnormal respiratory rate is measured. The sensors could be integrated into everyday objects such as car seats, chairs and sofas, but could also serve as medical instruments in hospitals to measure the respiratory rate ambiently.


Analysis of Schedule and Layout Tuning for Sparse Matrices With Compound Entries on GPUs

2020

Computer Graphics Forum

Large sparse matrices with compound entries, i.e. complex and quaternionic matrices as well as matrices with dense blocks, are a core component of many algorithms in geometry processing, physically based animation and other areas of computer graphics. We generalize several matrix layouts and apply joint schedule and layout autotuning to improve the performance of the sparse matrix-vector product on massively parallel graphics processing units. Compared to schedule tuning without layout tuning, we achieve speedups of up to 5.5×. In comparison to cuSPARSE, we achieve speedups of up to 4.7×.
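
As background for the layouts discussed above, the sketch below shows a plain sparse matrix-vector product over a CSR layout with complex (compound) entries. It is a NumPy reference only, not the tuned GPU kernels or layouts from the paper; a GPU implementation would parallelise the row loop.

```python
import numpy as np

def csr_spmv_complex(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x for a CSR matrix whose
    entries are complex (compound) values. The outer loop over rows is
    what a massively parallel kernel would distribute across threads.
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows, dtype=np.complex128)
    for r in range(n_rows):
        start, end = row_ptr[r], row_ptr[r + 1]
        y[r] = np.dot(values[start:end], x[col_idx[start:end]])
    return y
```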

Grebe, Jonas Henry; Kuijper, Arjan [1. Review]; Terhörst, Philipp [2. Review]

Anomaly-based Face Search

2020

Darmstadt, TU, Bachelor Thesis, 2020

Biometric face identification refers to the use of face images for the automatic identification of individuals. Due to the high performance achieved by current face search algorithms, these algorithms are useful tools, e.g. in criminal investigations. Based on the facial description of a witness, the number of suspects can be significantly reduced. However, while modern face image retrieval approaches require either an accurate verbal description or an example image of the suspect's face, eyewitness testimonies can seldom provide this level of detail. Moreover, while an eyewitness's recall is one of the most convincing pieces of evidence, it is also one of the most unreliable. Hence, exploiting the more reliable, but vague memories about distinctive facial features directly, such as obvious tattoos, scars or birthmarks, should be considered to filter potential suspects in a first step. This might reduce the risk of wrongful convictions caused by retroactively inferred details in the witness's recall for subsequent steps. Therefore, this thesis proposes an anomaly-based face search solution that aims at enabling a reduction of the search space solely based on locations of anomalous facial features. We developed an unsupervised image anomaly detection approach based on a cascaded image completion network that allows anomalous regions in face images to be roughly localized. (1) This completion model is assumed to fill in deleted regions with probable values conditioned on all the remaining parts of the face image. (2) The reconstruction errors of this model were used as an anomaly signal to create a grid of potential anomaly locations in a given face image. (3) These grids, in the form of a thresholded matrix, were then subsequently used to search for the most relevant images. We evaluated the respective retrieval model on a preprocessed subset of 17,855 images of the VGGFace2 dataset. The three main contributions of this work are (1) a cascaded face image completion approach, (2) an unsupervised inpainting-based anomaly localization approach, and (3) a query-by-anomaly face image retrieval approach. The face inpainting achieved promising results when compared to other recent completion approaches, even though we did not leverage any adversarial component, in order to simplify the entire training procedure. These inpaintings made it possible to roughly localize anomalies in face images. The proposed retrieval model achieved a 60% hit rate at a penetration rate of about 20% over a gallery of 17,855 images. Despite the limitations of the proposed searching approach, the results revealed the potential benefits of using the more reliable anomaly information to reduce the search space, instead of entirely relying on the elicitation of detailed perpetrator descriptions, either in textual or in visual form.
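
A minimal sketch of step (2), building a grid of potential anomaly locations from reconstruction errors. The completion network is abstracted as a placeholder callable `inpaint_fn`; the patch size and threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def anomaly_grid(image, inpaint_fn, patch=16, threshold=0.1):
    """Build a coarse boolean grid of potential anomaly locations.

    For every grid cell, mask it out, let the completion model
    inpaint_fn(image, mask) fill it in, and record the mean absolute
    reconstruction error; cells whose error exceeds `threshold` are
    flagged as anomalous.
    """
    h, w = image.shape[:2]
    grid = np.zeros((h // patch, w // patch), dtype=bool)
    for gy in range(grid.shape[0]):
        for gx in range(grid.shape[1]):
            mask = np.zeros((h, w), dtype=bool)
            mask[gy * patch:(gy + 1) * patch, gx * patch:(gx + 1) * patch] = True
            completed = inpaint_fn(image, mask)
            err = np.abs(completed[mask] - image[mask]).mean()
            grid[gy, gx] = err > threshold
    return grid
```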

Bergmann, Tim Alexander; Kuijper, Arjan [1. Review]; Noll, Matthias [2. Review]

AR-Visualisierung von Echtzeitbildgebung für ultraschallgestützte Leberbiopsien

2020

Darmstadt, TU, Master Thesis, 2020

This thesis presents an augmented reality system for displaying ultrasound images directly on the patient. The overlay is performed with the help of optical see-through head-mounted displays. The correctly positioned display of the ultrasound images is based on an external optical tracking system, the NDI Polaris Vicra. To ensure a correct overlay, the wearer's field of view is determined using an adapted Single Point Active Alignment Method. The pose of the ultrasound images relative to the tracking markers of the ultrasound probe is determined with an adapted pivot calibration. For objective testing of the system, a wearer dummy was used that simulates a wearer's vision through cameras. The pose of tracking markers in the field of view of the wearer dummy could be determined with an RMSE of 1.1480 mm. In tests of overlaying the ultrasound images onto the structures they represent, the system achieves a Dice coefficient of 88.33%. To improve the scaling of the computation time with the number of devices used, matrix operators for the transformation matrices employed were optimized. On average, the computations are performed more than three times as fast as with the general implementation of the operators. The system enables treating physicians to view ultrasound images correctly positioned over the structures they represent, making the display of the images on an external monitor superfluous.


Automatic Procedural Model Generation for 3D Object Variation

2020

The Visual Computer

3D objects are used for numerous applications. In many cases not only single objects but also variations of objects are needed. Procedural models can be represented in many different forms, but generally excel in content generation. Therefore this representation is well suited for variation generation of 3D objects. However, the creation of a procedural model can be time-consuming on its own. We propose an automatic generation of a procedural model from a single exemplary 3D object. The procedural model consists of a sequence of parameterizable procedures and represents the object construction process. Changing the parameters of the procedures changes the surface of the 3D object. By linking the surface of the procedural model to the original object surface, we can transfer the changes and enable the possibility of generating variations of the original 3D object. The user can adapt the derived procedural model to easily and intuitively generate variations of the original object. We allow the user to define variation parameters within the procedures to guide a process of generating random variations. We evaluate our approach by computing procedural models for various object types, and we generate variations of all objects using the automatically generated procedural model.

Mertz, Tobias; Guthe, Stefan [1. Review]; Kuijper, Arjan [2. Review]

Automatic View Planning for 3D Reconstruction of Objects with Thin Features

2020

Darmstadt, TU, Master Thesis, 2020

View planning describes the process of planning viewpoints from which to record an object or environment for digitization. This thesis examines the applicability of view planning to the 3D reconstruction of insect specimens from extended depth of field images and depth maps generated with a focus stacking method. Insect specimens contain very thin features, such as legs and antennae, while the depth maps generated during the focus stacking contain large levels of uncertainty. Since focus stacking is usually not used for 3D reconstruction, there are no state-of-the-art view planning systems that deal with the unique challenges of this data. Within this thesis, a view planning system with two components is designed to deal with the uncertainty explicitly. The first component utilizes volumetric view planning methods from well-established research along with a novel sensor model to represent the synthetic camera generated from the focus stack. The second component is a novel 2D feature tracking module, which is designed to capture small details that cannot be recorded within a volumetric representation. The evaluation of the system shows that the application of view planning can still significantly reduce the time required for scene exploration and provide similar amounts of detail as an unplanned approach. Some future improvements are suggested, which may enable the system to capture even more detail.

Iffland, Dominik; Kuijper, Arjan [Advisor]; Efremov, Anton [Advisor]

Bildsegmentierung und Erkennung von Farben auf Buchcovern

2020

Darmstadt, TU, Bachelor Thesis, 2020

The automatic recognition of colors in images is difficult in general, since human color perception is highly individual and therefore cannot be computed universally. In most cases, individual color values are grouped into classes such as red, green, or yellow, although the elements within one group often differ. In the book industry in particular, the most expressive color groups are specified for each book cover, since this color information offers great added value in areas such as marketing and is used for various promotional measures. The goal of this thesis is to investigate and evaluate different algorithms for segmentation and color recognition, followed by the extraction of the strongest colors in an image. To this end, the theoretical part introduces and discusses various methods for color recognition, such as k-means. The algorithms used for segmentation, such as various edge detection or thresholding methods, are also discussed and evaluated. Subsequently, the color recognition is compared with a variant in which the image is segmented into foreground and background. Dividing the images into foreground and background segments imitates the human perception of images and allows these regions to be analyzed separately. With an adapted weighting of foreground and background, this segmentation improves the results of the color recognition. The observations show that a combination of color recognition and weighted segmentation leads to the most profitable results. The segmentation algorithms are assessed using different data sets and quality criteria. A subsequent user study is intended to show whether the results of the chosen algorithm correspond to human perception. For this, the color recognition is applied to previously selected images and rated by different users with regard to its quality. Building on the results of the segmentation with a corresponding weighting of the individual colors, the resulting product can be used in practice, and in the book industry in particular, to significantly increase the quality of color recognition. In the practical part of the thesis, a program is therefore developed that automatically extracts the strongest colors in an image.

Vaniet, Emmanuelle; Wesarg, Stefan

Blick in den Körper

2020

Spektrum der Wissenschaft

The discovery of X-rays ushered in clinical imaging - and made possible the extremely powerful technique of computed tomography.

Venkatesh, Sushma; Zhang, Haoyu; Ramachandra, Raghavendra; Raja, Kiran; Damer, Naser; Busch, Christoph

Can GAN Generated Morphs Threaten Face Recognition Systems Equally as Landmark Based Morphs? - Vulnerability and Detection

2020

IWBF 2020. Proceedings

International Workshop on Biometrics and Forensics (IWBF) <8, 2020, Porto, Portugal>

The primary objective of face morphing is to combine face images of different data subjects (e.g. a malicious actor and an accomplice) to generate a face image that can be equally verified for both contributing data subjects. In this paper, we propose a new framework for generating face morphs using a newer Generative Adversarial Network (GAN) - StyleGAN. In contrast to earlier works, we generate realistic morphs of both high quality and high resolution (1024 × 1024 pixels). With the newly created morphing dataset of 2500 morphed face images, we pose a critical question in this work: (i) Can GAN-generated morphs threaten Face Recognition Systems (FRS) equally as landmark-based morphs? Seeking an answer, we benchmark the vulnerability of a Commercial-Off-The-Shelf FRS (COTS) and a deep learning-based FRS (ArcFace). This work also benchmarks detection approaches for both GAN-generated and landmark-based morphs using established Morphing Attack Detection (MAD) schemes.


Capability-based Scheduling of Scientific Workflows in the Cloud

2020

Proceedings of the 9th International Conference on Data Science, Technology and Applications

International Conference on Data Science, Technology and Applications (DATA) <9, 2020>

We present a distributed task scheduling algorithm and a software architecture for a system executing scientific workflows in the Cloud. The main challenges we address are (i) capability-based scheduling, which means that individual workflow tasks may require specific capabilities from highly heterogeneous compute machines in the Cloud, (ii) a dynamic environment where resources can be added and removed on demand, (iii) scalability in terms of scientific workflows consisting of hundreds of thousands of tasks, and (iv) fault tolerance because in the Cloud, faults can happen at any time. Our software architecture consists of loosely coupled components communicating with each other through an event bus and a shared database. Workflow graphs are converted to process chains that can be scheduled independently. Our scheduling algorithm collects distinct required capability sets for the process chains, asks the agents which of these sets they can manage, and then assigns process chains accordingly. We present the results of four experiments we conducted to evaluate if our approach meets the aforementioned challenges. We finish the paper with a discussion, conclusions, and future research opportunities. An implementation of our algorithm and software architecture is publicly available with the open-source workflow management system “Steep”.
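
A minimal sketch of the capability-based assignment idea described above, assuming process chains and agents are plain dictionaries mapping ids to capability sets. The round-robin tie-breaking and all names are illustrative assumptions, not the Steep implementation.

```python
def assign_process_chains(process_chains, agents):
    """Capability-based assignment in the spirit of the algorithm above.

    process_chains: {chain_id: set of required capabilities}
    agents:         {agent_id: set of offered capabilities}

    Distinct required capability sets are collected once, each agent is
    "asked" which sets it can handle, and chains are then assigned to a
    matching agent, balancing load round-robin over the candidates.
    """
    distinct_sets = {frozenset(caps) for caps in process_chains.values()}
    # which agents can serve which required capability set
    candidates = {s: [a for a, offered in agents.items() if s <= offered]
                  for s in distinct_sets}
    assignment, load = {}, {a: 0 for a in agents}
    for chain_id, caps in process_chains.items():
        agents_for_set = candidates[frozenset(caps)]
        if not agents_for_set:
            assignment[chain_id] = None  # no capable agent available
            continue
        agent = min(agents_for_set, key=lambda a: load[a])
        load[agent] += 1
        assignment[chain_id] = agent
    return assignment
```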


Comparison-Level Mitigation of Ethnic Bias in Face Recognition

2020

IWBF 2020. Proceedings

International Workshop on Biometrics and Forensics (IWBF) <8, 2020, online>

Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Previous works introduced approaches that aim at learning less biased representations. However, applying these approaches in real applications requires a complete replacement of the templates in the database. This replacement procedure further requires that a face image of each enrolled individual is stored as well. In this work, we propose the first bias-mitigating solution that works on the comparison level of a biometric system. We propose a fairness-driven neural network classifier for the comparison of two biometric templates to replace the system's similarity function. This fair classifier is trained with a novel penalization term in the loss function to introduce the criteria of group and individual fairness to the decision process. This penalization term forces the score distributions of different ethnicities to be similar, leading to a reduction of the intra-ethnic performance differences. Experiments were conducted on two publicly available datasets and evaluated the performance of four different ethnicities. The results showed that for both fairness criteria, our proposed approach is able to significantly reduce the ethnic bias, while it preserves a high recognition ability. Our model, built on individual fairness, achieves bias reduction rates between 15.35% and 52.67%. In contrast to previous work, our solution is easy to integrate into existing systems by simply replacing the system's similarity functions with our fair template comparison approach.
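
To make the penalization idea concrete, here is a minimal sketch of a loss term that pulls per-group mean scores towards the overall mean so that score distributions become more similar across ethnicities. It is an illustrative assumption of how such a term could look, not the exact formulation used in the paper.

```python
import torch

def fairness_penalty(scores, group_labels):
    """Illustrative penalization term added (with a weighting factor) to
    the usual classification loss: penalizes the squared deviation of
    each demographic group's mean comparison score from the overall
    mean, pushing per-group score distributions closer together.
    """
    overall_mean = scores.mean()
    penalty = 0.0
    for g in torch.unique(group_labels):
        group_mean = scores[group_labels == g].mean()
        penalty = penalty + (group_mean - overall_mean) ** 2
    return penalty
```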


Data augmentation for time series: traditional vs generative models on capacitive proximity time series

2020

Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments

ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA) <13, 2020, Corfu, Greece>

ACM International Conference Proceedings Series (ICPS)

Large quantities and diversities of labeled training data are often needed for supervised, data-based modelling. The data distribution should cover a rich representation to support the generalizability of the trained end-to-end inference model. However, this is often hindered by limited labeled data and the expensive data collection process, especially for human activity recognition tasks, where extensive manual labeling is required. Data augmentation is thus a widely used regularization method for deep learning, especially applied to image data to increase the classification accuracy, but it is less researched for time series. In this paper, we investigate the data augmentation task on continuous capacitive time series with the example of exercise recognition. We show that traditional data augmentation can enrich the source distribution and thus make the trained inference model more generalized. This further increases the recognition performance for unseen target data by around 21.4 percentage points compared to an inference model without data augmentation. Generative models such as the variational autoencoder or the conditional variational autoencoder can further reduce the variance on the target data.
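
Typical examples of the traditional, transformation-based augmentations referred to above are additive jitter, amplitude scaling, and time shifting. The sketch below assumes a recording shaped (timesteps, channels); the specific transforms and parameters are illustrative and not necessarily those used in the paper.

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Additive Gaussian noise - a classic augmentation for sensor series."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Random per-channel amplitude scaling."""
    factors = np.random.normal(1.0, sigma, size=(1, x.shape[-1]))
    return x * factors

def time_shift(x, max_shift=10):
    """Circular shift along the time axis."""
    return np.roll(x, np.random.randint(-max_shift, max_shift + 1), axis=0)

# x: array of shape (timesteps, channels) holding one capacitive recording
# augmented = time_shift(scale(jitter(x)))
```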


Deep Learning Multi-layer Fusion for an Accurate Iris Presentation Attack Detection

2020

FUSION 2020

International Conference on Information Fusion (FUSION) <23, 2020, Online>

Iris presentation attack detection (PAD) algorithms are developed to address the vulnerability of iris recognition systems to presentation attacks. Taking into account that deep features have successfully improved computer vision performance in various fields including iris recognition, it is natural to use features extracted from deep neural networks for iris PAD. Each layer in a deep learning network carries features of a different level of abstraction. The features extracted from the first layer to the higher layers become more complex and more abstract. This might point out complementary information in these features that can collaborate towards an accurate PAD decision. Therefore, we propose an iris PAD solution based on multi-layer fusion. The information extracted from the last several convolutional layers is fused on two levels, feature-level and score-level. We conducted experiments on both an off-the-shelf pre-trained network and a network trained from scratch. An extensive experiment also explores the complementarity between different layer combinations of deep features. Our experimental results show that the feature-level based multi-layer fusion method performs better than the best single-layer feature extractor in most cases. In addition, our fusion results achieve similar or better results than the state-of-the-art algorithms on the Notre Dame and IIITD-WVU databases of the Iris Liveness Detection Competition 2017 (LivDet-Iris 2017).

Drozdowski, Pawel; Rathgeb, Christian; Dantcheva, Antitza; Damer, Naser; Busch, Christoph

Demographic Bias in Biometrics: A Survey on an Emerging Challenge

2020

IEEE Transactions on Technology and Society

Systems incorporating biometric technologies have become ubiquitous in personal, commercial, and governmental identity management applications. Both cooperative (e.g. access control) and non-cooperative (e.g. surveillance and forensics) systems have benefited from biometrics. Such systems rely on the uniqueness of certain biological or behavioural characteristics of human beings, which enable individuals to be reliably recognised using automated algorithms. Recently, however, there has been a wave of public and academic concerns regarding the existence of systemic bias in automated decision systems (including biometrics). Most prominently, face recognition algorithms have often been labelled as “racist” or “biased” by the media, non-governmental organisations, and researchers alike. The main contributions of this article are: (1) an overview of the topic of algorithmic bias in the context of biometrics, (2) a comprehensive survey of the existing literature on biometric bias estimation and mitigation, (3) a discussion of the pertinent technical and social matters, and (4) an outline of the remaining challenges and future work items, both from technological and social points of view.


Demographic Bias in Presentation Attack Detection of Iris Recognition Systems

2020

With the widespread use of biometric systems, the problem of demographic bias is attracting more attention. Although many studies have addressed bias issues in biometric verification, there are no works that analyse the bias in presentation attack detection (PAD) decisions. Hence, we investigate and analyze the demographic bias in iris PAD algorithms in this paper. To enable a clear discussion, we adapt the notions of differential performance and differential outcome to the PAD problem. We study the bias in iris PAD using three baselines (hand-crafted, transfer-learning, and training from scratch) on the NDCLD-2013 database. The experimental results point out that female users will be significantly less protected by the PAD, in comparison to males.

Wang, Yu; Yu, Weidong; Liu, Xiuqing; Wang, Chunle; Kuijper, Arjan; Guthe, Stefan

Demonstration and Analysis of an Extended Adaptive General Four-Component Decomposition

2020

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

The overestimation of volume scattering is an essential shortcoming of the model-based polarimetric synthetic aperture radar (PolSAR) target decomposition method. It is likely to affect the measurement accuracy and result in mixed ambiguity of the scattering mechanism. In this paper, an extended adaptive four-component decomposition method (ExAG4UThs) is proposed. First, the orientation angle compensation (OAC) is applied to the coherency matrix and artificial areas are extracted as the basis for selecting the decomposition method. Second, for the decomposition of artificial areas, one of the two complex unitary transformation matrices of the coherency matrix is selected according to the wave anisotropy (Aw). In addition, the branch condition that is used as a criterion for the hierarchical implementation of the decomposition is the ratio of the correlation coefficient (Rcc). Finally, the selected unitary transformation matrix and a discriminative threshold are used to determine the structure of the selected volume scattering models, which can adapt more effectively to various scattering mechanisms. In this paper, the performance of the proposed method is evaluated on GaoFen-3 full PolSAR data sets for various time periods and regions. The experimental results demonstrate that the proposed method can effectively represent the scattering characteristics of the ambiguous regions and that the oriented building areas can be well discriminated as dihedral or odd-bounce structures.


Designing smart home controls for elderly

2020

Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments

ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA) <13, 2020, Corfu, Greece>

ACM International Conference Proceedings Series (ICPS)

Technology is evolving by the day and with it the devices to control it. Sophisticated systems, like Smart Homes, are currently controlled in most cases via a smartphone app. While this may be acceptable for younger and middle-aged people, elders, however, have trouble keeping up with new devices and might not want to use a smartphone. Most modern-day control schemes like touch screens and menus are regarded as too complicated. However, Smart Homes provide many opportunities to reduce the every-day burden on elderly people and people with special needs. Providing elderly people easy access to advanced and helpful technology via familiar interface types immensely improves their quality of life. We propose a Smart Home control designed especially for use by the elderly. Our contribution ranges from evaluating existing systems to designing and building the Smart Home control for the elderly based on their special requirements. Moreover, we involve elderly people in the design process and evaluate the proposed prototype in a qualitative study with 10 elderly users. The results show that, when presented with the scenario of already owning the required Smart Home technology, the participants were quick to accept the cube as more user-friendly than smartphone controls or touchscreen controls in general.

Krispel, Ulrich; Fellner, Dieter W.; Ullrich, Torsten

Distance Measurements of CAD Models in Boundary Representation

2020

Transactions on Computational Science XXXVI
Lecture Notes in Computer Science (LNCS), Transactions on Computational Science
12060

The need to analyze and visualize distances between objects arises in many use cases. Although the problem of calculating the distance between two polygonal objects may sound simple, real-world scenarios with large models will always be challenging, but optimization techniques – such as space partitioning – can reduce the complexity of the average case significantly. Our contribution to this problem is a publicly available benchmark to compare distance calculation algorithms. To illustrate its usage, we investigated and evaluated a grid-based distance measurement algorithm.

Reiß-Wöltche, Dominik Bernd; Kuijper, Arjan [1. Review]; Bielinski, Lukas [2. Review]

Effiziente Softwareentwicklung in kleinen Teams am Beispiel der Entwicklung einer Android App für WLAN-Onboarding mithilfe von Bluetooth Beacons

2020

Darmstadt, TU, Bachelor Thesis, 2020

Increasing digitalization and networking are among the reasons for the growing demand for individual software solutions. One challenge that comes with this is finding enough software developers. Many software development teams are small or understaffed and, because of this scarcity of resources, have to find a compromise between quality assurance measures and the development of functionality - often to the detriment of quality. The goal of this thesis is to design a software development process that is optimized for small teams and enables the efficient development of high-quality software. To this end, the thesis deals with the following questions: • How are software development processes fundamentally structured? • Which requirements from the team and project context influence the design of a software development process? • What does efficiency mean in software development? To answer these questions, research was conducted on the fundamental phases of software development processes, an analysis was made of which aspects of software projects are the most laborious, and the challenges that arise in projects with small development teams were examined. The results were used to highlight and optimize the phases of a process that are decisive for efficient development. The result is a modular process that can be adapted, on the one hand, to changes in the development team over the entire life cycle of a product and, on the other hand, to specific project requirements. For this purpose, a process template is completed by the development team at the beginning of a project using a defined methodology. Based on team fluctuation and project scope, the methodology determines the relevance and degree of implementation of process activities. To evaluate the defined methodology and the designed process, they were used to develop a mobile application. It was assessed to what extent the requirements of the team and project context, as well as further criteria from the problem statement, were implemented by the defined process, and which effects on development performance could be observed.

Becker, Hagen; Kuijper, Arjan [1. Review]; Tazari, Mohammad-Reza [2. Review]

Einführung von Serious-Gaming Techniken in die Digitale Physiotherapie der Zukunft

2020

Darmstadt, TU, Master Thesis, 2020

As part of this thesis, the game PDDanceCity, which serves to promote physical activity but is also intended to train cognitive abilities, was extended in several respects. At the beginning of this work, the game was still at the proof-of-concept stage. It is designed primarily for older people: the player has to move physically on a dance floor in order to steer their character through a maze to the goal. The extensions include, on the one hand, the possibility of using the game in modern physiotherapy. With the profile management system implemented in this work, physiotherapists can observe and accompany a patient's development and adapt the therapy if necessary. Further points were the improvement and automation of the game's control unit and the development of an algorithm for generating the individual game maps based on the settings in the players' profiles. Initially, the dance mat used to control the game had to be reconfigured for each player, and map generation was random and did not take player profiles into account. This work produced an algorithm that generates the game maps individually based on the settings of the respective player. The communication between the dance mat and the game was also improved, so that in the future the calibration of the dance mat for each individual player is no longer necessary. In addition, it is now possible to log into one's player profile via face recognition. This is intended to improve acceptance among older people, since with this technology they do not have to interact with a mouse and keyboard and can start the game more easily and quickly. Furthermore, a study was conducted in a facility for older people to investigate relationships between a participant's fitness and their style of play. The fitness level of each player was determined by means of an independent fitness test. After playing PDDanceCity, the participants' movement data were evaluated with various machine learning algorithms. This will make it possible in the future to draw conclusions about participants' fitness based on their style of play.

Yurukova, Veronika; Kuijper, Arjan [1. Review]; Rus, Silvia [2. Review]

Emotion Recognition Technologies – Review of Current Approaches and Future Developments

2020

Darmstadt, TU, Bachelor Thesis, 2020

Recognising emotion is an integral part of being human and an essential part of human communication; the ability to express and understand emotions is universal for all human beings, and if human-computer interaction is to progress to a new level, machines will need to detect and express emotions with similar skill. If we are to achieve artificial intelligence that can replace human labour, especially in fields like education, healthcare, security, and client services, we need such agents. The field of affective computing, sometimes called emotion AI, still a relatively young branch of computer science, is dedicated to this task and encompasses knowledge and techniques from many other fields – psychology, physiology, machine learning, artificial intelligence and robotics, natural language processing, pattern recognition, computer vision, statistics, etc. This thesis will look into the approaches implemented so far and discuss them with respect to the affect model systems used, the range of emotions to be detected, the types of data gathered, and the limitations this poses on the validity of the results. In addition, different classification techniques and their effectiveness will be compared. An approach that combines successful practices under the condition of unobtrusive observation that aims to protect sensitive personal data will be proposed.

Kraft, Dimitri; Bader, Rainer; Bieber, Gerald

Enhancing Vibroarthrography by using Sensor Fusion

2020

Proceedings of the 9th International Conference on Sensor Network

International Joint Conference on Computer Vision and Computer Graphics Theory and Applications (VISIGRAPP) <15, 2020, Valetta, Malta>

Natural and artificial joints of the human body emit vibration and sound during movement. The sound and vibration pattern of a joint is characteristic and changes due to damage, uneven tread wear, injuries, or other influences. Hence, vibration and sound analysis enables an estimation of the joint condition. This kind of analysis, vibroarthrography (VAG), allows the analysis of diseases like arthritis or osteoporosis and might determine trauma, inflammation, or misalignment. The classification of the vibration and sound data is very challenging and needs a comprehensive annotated database. Currently existing databases are very limited and insufficient for deep learning or artificial intelligence approaches. In this paper, we describe a new design concept for a vibroarthrography system using a sensor network. We discuss the possible improvements and give an outlook on future work and application fields of VAG.


ExerTrack—Towards Smart Surfaces to Track Exercises

2020

Technologies

The concept of the quantified self has gained popularity in recent years with the hype around miniaturized gadgets to monitor vital fitness levels. Smartwatches, smartphone apps and other fitness trackers are flooding the market. Most aerobic exercises such as walking, running, or cycling can be accurately recognized using wearable devices. However, whole-body exercises such as push-ups, bridges, and sit-ups are performed on the ground and thus cannot be precisely recognized by wearing only one accelerometer. Thus, a floor-based approach is preferred for recognizing whole-body activities. Computer vision techniques on image data also report high recognition accuracy; however, the presence of a camera tends to raise privacy issues in public areas. Therefore, we focus on combining the advantages of ubiquitous proximity sensing with non-optical sensors to preserve privacy in public areas and maintain low computation cost with a sparse sensor implementation. Our solution is ExerTrack, an off-the-shelf sports mat equipped with eight sparsely distributed capacitive proximity sensors to recognize eight whole-body fitness exercises with a user-independent recognition accuracy of 93.5% and a user-dependent recognition accuracy of 95.1%, based on a test study with 9 participants each performing 2 full sessions. We adopt a template-based approach to count repetitions and reach a user-independent counting accuracy of 93.6%. The final model can run on a Raspberry Pi 3 in real time. This work covers the data processing of our proposed system, the model selection used to improve the recognition accuracy, and a data augmentation technique to regularize the network.
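
The template-based repetition counting mentioned above can be sketched as sliding a single-repetition template over the sensor signal and counting well-separated correlation peaks. This is an illustrative assumption of such a counter, not the ExerTrack implementation; the threshold and peak-spacing values are made up.

```python
import numpy as np

def count_repetitions(signal, template, threshold=0.7):
    """Count exercise repetitions in a 1-D sensor channel by sliding a
    single-repetition template over the signal and counting peaks of
    the normalized cross-correlation that lie above `threshold` and are
    at least one template length apart.
    """
    t = (template - template.mean()) / (template.std() + 1e-8)
    n = len(template)
    scores = []
    for start in range(len(signal) - n + 1):
        w = signal[start:start + n]
        w = (w - w.mean()) / (w.std() + 1e-8)
        scores.append(np.dot(w, t) / n)
    scores = np.asarray(scores)
    count, last_peak = 0, -n
    for i in range(1, len(scores) - 1):
        if (scores[i] > threshold and scores[i] >= scores[i - 1]
                and scores[i] >= scores[i + 1] and i - last_peak >= n):
            count, last_peak = count + 1, i
    return count
```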

Kutlu, Hasan; Weinmann, Andreas [Referee]; Ritz, Martin [Co-Referee]

Fully Automatic Mechanical Scan Range Extension of a Lens-Shifted Structured Light System

2020

Darmstadt, Hochschule, Master Thesis, 2020

Cultural heritage objects are precious goods which need to be preserved for coming generations. For many reasons, e.g. wars or natural decay, these objects are in danger of destruction. In order to prevent them from being lost forever, such objects are digitized as 3D models so that they remain accessible to future generations of mankind; for this purpose, the Fraunhofer Institute for Computer Graphics Research offers a fully automatic 3D digitization system called CultLab3D. There is already a fully functional system for big objects. However, it is more difficult to scan small objects like coins or rings. These small objects are often referred to as 2.5D objects because they often have engravings and inscriptions on their surface which cannot even be felt with one's fingers. Scanning such finely detailed objects needs a system that can measure such details. This is accomplished by the MesoScannerV2, an extension of the CultLab3D. It is designed for the digitization of these 2.5D objects without missing details. The MesoScannerV2 is a structured light system which uses a special variation of the phase shift method in order to improve the accuracy of the digitized 3D model of the object. The structured-light-based MesoScannerV2 reaches an advanced depth and lateral resolution due to its specialty, the extension of state-of-the-art fringe patterns by a mechanical lens-shifted surface encoding method. Due to imperfect data acquisition and possible uncertainties of numerical algorithms, noise is generated which directly influences the digitized 3D models. Therefore, this thesis aims to reduce the generated noise to get cleaner 3D models. Furthermore, the MesoScannerV2 needs to be future-proof, which requires an automation of the scan process for many objects at the same time. The integration of an automation procedure into the MesoScannerV2 is another topic discussed in this thesis. We show that methods can be found to reduce the generated noise significantly; in particular, we provide a corresponding evaluation. Furthermore, possible solutions to automate the scan process could be found.

Boutros, Fadi; Damer, Naser; Raja, Kiran; Ramachandra, Raghavendra; Kirchbuchner, Florian; Kuijper, Arjan

Fusing Iris and Periocular Region for User Verification in Head Mounted Displays

2020

FUSION 2020

International Conference on Information Fusion (FUSION) <23, 2020, Online>

The growing popularity of Virtual Reality and Augmented Reality (VR/AR) devices in many applications also demands authentication of users. As the devices inherently capture the eye image while capturing the user interaction, authentication can be devised using iris and periocular recognition. Since both iris and periocular data are non-ideal, unlike the data captured from standard biometric sensors, the authentication performance is expected to be lower. In this work, we present and evaluate a fusion framework for improving the biometric authentication performance. Specifically, we employ score-level fusion for two independent biometric systems of iris and periocular region to avoid expensive feature-level fusion. With a detailed evaluation of three different score-level fusion approaches after score normalization on a dataset of 12,579 images, we report the performance gain in authentication using score-level fusion for iris and periocular recognition.
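
Score-level fusion of the kind evaluated above can be illustrated by normalizing each subsystem's comparison score and combining them with a weighted sum. The normalization bounds, weights, and score ranges below are illustrative assumptions; the paper evaluates several normalization and fusion rules.

```python
import numpy as np

def min_max_norm(scores, lo, hi):
    """Map raw comparison scores to [0, 1] using bounds estimated on a training set."""
    return np.clip((scores - lo) / (hi - lo), 0.0, 1.0)

def fuse_scores(iris_score, periocular_score, w_iris=0.5):
    """Weighted-sum score-level fusion of two normalized comparison scores."""
    return w_iris * iris_score + (1.0 - w_iris) * periocular_score

# Example with hypothetical raw score ranges for the two subsystems.
s_iris = min_max_norm(np.array([0.42]), lo=0.0, hi=1.0)
s_peri = min_max_norm(np.array([310.0]), lo=0.0, hi=600.0)
fused = fuse_scores(s_iris, s_peri, w_iris=0.6)
```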

Stockhause, Simon; Ritz, Harald [Referee]; Bormann, Pascal [Co-Referee]

Generierung und Ordnung von Events in verteilten Systemen mit asynchroner Kommunikation

2020

Giessen, Technische Hochschule Mittelhessen, Bachelor Thesis, 2020

The trend towards service-oriented architectures creates the need to grasp the complexity of distributed systems. Many existing tools use logs and metrics to draw conclusions about the application. However, these offer only limited possibilities for capturing causally related events. This thesis presents concepts for representing events and their ordering in distributed systems. These concepts are applied in practice-oriented applications, and it is shown to what extent they fulfill the specified requirements and goals. To ensure event generation and the subsequent ordering of events, a data model is described. Two prototypes for context propagation are presented. In addition, visualization approaches are presented that can display the collected data in an appealing form. The implemented context propagation provides practical experience that can be used for future work. The visualization forms of the frame gallery and the three-dimensional flame graph offer new perspectives for displaying tracing data.

Wälde, Simone; Kuijper, Arjan [1. Review]; Ritz, Martin [2. Review]

Geometry Classification through Feature Extraction and Pattern Recognition in 3D Space

2020

Darmstadt, TU, Master Thesis, 2020

This master's thesis attempts to match similar coat-of-arms depictions on 3D models of pottery sherds. Part of an initial workflow is relief extraction, for which an approach by Zatzarinni et al. [30] is used. To extract information about the object surface, a Local Binary Pattern variant by Thompson et al. [24] is implemented. The resulting feature descriptors are then compared using a distance metric. In the end, the proposed approach does not lead to good results, but the challenges encountered are documented and future solutions are discussed.


GeoRocket: A scalable and cloud-based data store for big geospatial files

2020

SoftwareX

We present GeoRocket, a software for the management of very large geospatial datasets in the cloud. GeoRocket employs a novel way to handle arbitrarily large datasets by splitting them into chunks that are processed individually. The software has a modern reactive architecture and makes use of existing services including Elasticsearch and storage back ends such as MongoDB or Amazon S3. GeoRocket is schema-agnostic and supports a wide range of heterogeneous geospatial file formats. It is also format-preserving and does not alter imported data in any way. The main benefits of GeoRocket are its performance, scalability, and usability, which make it suitable for a number of scientific and commercial use cases dealing with very high data volumes, complex datasets, and high velocity (Big Data). GeoRocket also provides many opportunities for further research in the area of geospatial data management.

Mueller-Roemer, Johannes Sebastian; Fellner, Dieter W. [1. Review]; Stork, André [2. Review]; Müller, Heinrich [3. Review]

GPU Data Structures and Code Generation for Modeling, Simulation, and Visualization

2020

Darmstadt, TU, Diss., 2019

Virtual prototyping, the iterative process of using computer-aided (CAx) modeling, simulation, and visualization tools to optimize prototypes and products before manufacturing the first physical artifact, plays an increasingly important role in the modern product development process. Especially due to the availability of affordable additive manufacturing (AM) methods (3D printing), it is becoming increasingly possible to manufacture customized products or even for customers to print items for themselves. In such cases, the first physical prototype is frequently the final product. In this dissertation, methods to efficiently parallelize modeling, simulation, and visualization operations are examined with the goal of reducing iteration times in the virtual prototyping cycle, while simultaneously improving the availability of the necessary CAx tools. The presented methods focus on parallelization on programmable graphics processing units (GPUs). Modern GPUs are fully programmable massively parallel manycore processors that are characterized by their high energy efficiency and good price-performance ratio. Additionally, GPUs are already present in many workstations and home computers due to their use in computer-aided design (CAD) and computer games. However, specialized algorithms and data structures are required to make efficient use of the processing power of GPUs. Using the novel GPU-optimized data structures and algorithms as well as the new applications of compiler technology introduced in this dissertation, speedups between approximately one (10×) and more than two orders of magnitude (> 100×) are achieved compared to the state of the art in the three core areas of virtual prototyping. Additionally, memory use and required bandwidths are reduced by up to nearly 86%. As a result, not only can computations on existing models be executed more efficiently but larger models can be created and processed as well. In the area of modeling, efficient discrete mesh processing algorithms are examined with a focus on volumetric meshes. In the field of simulation, the assembly of the large sparse system matrices resulting from the finite element method (FEM) and the simulation of fluid dynamics are accelerated. As sparse matrices form the foundation of the presented approaches to mesh processing and simulation, GPU-optimized sparse matrix data structures and hardware- and domain-specific automatic tuning of these data structures are developed and examined as well. In the area of visualization, visualization latencies in remote visualization of cloud-based simulations are reduced by using an optimizing query compiler. By using hybrid visualization, various user interactions can be performed without network round trip latencies.

Giebel, Andreas; Stork, André [Advisor]; Grasser, Tim [Advisor]

GPU-beschleunigte Finite Elemente Modalanalyse mit Projektionsansatz

2020

Darmstadt, TU, Bachelor Thesis, 2020

This thesis deals with the numerical determination of the natural oscillation parameters of an oscillatory system. Using finite element discretizations, a generalized eigenvalue problem can be formulated for this system, which can be solved with iterative, approximating eigenvalue methods such as the generalized Lanczos method. This method can be parallelized efficiently by parallelizing the underlying linear algebra operations. Solving linear systems of equations is particularly time-consuming; for massively parallel hardware such as graphics cards, this can be implemented in a memory-efficient and scalable way with the modified PCG method. By using a projection approach, boundary conditions can be applied implicitly without performing matrix manipulations. The GPU version achieved a maximum speedup of about 24.71 over the CPU implementations, with a memory consumption of only about one seventh. Finally, the results of the developed modal analysis are compared with analytical results and with the simulation results of Ansys using a beam example.

Schall, Oliver; Stork, André [Advisor]; Grasser, Tim [Advisor]

Implicit Contact Handling for Deformable Objects

2020

Darmstadt, TU, Bachelor Thesis, 2020

Physics simulation is a complex and active field of research that includes many issues like the accurate detection and resolution of collisions. Such simulations of realistic physical behaviour are needed in many industries like engineering, movies and games. Usually a physical simulation is composed of many different algorithms that each solve a part of the simulation problem, like collision detection and collision resolution. The focus of this thesis is the resolution of collisions using an approach based on mathematical optimization. This is generally not a simple task, since the resolution of one collision might result in new collisions somewhere else in the simulation. While the simulation of rigid bodies is overall a well understood issue and solid algorithms exist to solve the task, the simulation of deformable objects, and especially the resolution of occurring collisions, is a lot more complicated, and the implementation of a robust solution can be significantly more challenging than its rigid body counterpart. There have been different approaches to solve the issue of collision resolution for deformable objects. This thesis uses an already implemented collision detection algorithm and aims to implement and evaluate an accurate algorithm to resolve all collisions between the deformable bodies in the simulation, as well as all collisions of deformable bodies with static geometry. The approach used in this thesis is to formulate a mathematical optimization problem that applies constraints on contact points to prevent collisions. The optimization problem is then solved by a quadratic program solver, which calculates the velocity changes needed to resolve all collisions.

Show publication details
Zhou, Wei; Hao, Xingxing; Wang, Kaidi; Zhang, Zhenyang; Yu, Yongxiang; Su, Haonan; Li, Kang; Cao, Xin; Kuijper, Arjan

Improved estimation of motion blur parameters for restoration from a single image

2020

PLOS ONE

This paper presents an improved method to estimate the blur parameters of a motion deblurring algorithm for single-image restoration based on the point spread function (PSF) in the frequency spectrum. We then introduce a modification to the Radon transform in the blur angle estimation scheme with our proposed difference value vs. angle curve. Subsequently, the auto-correlation matrix is employed to estimate the blur length by measuring the distance between the conjugated-correlated troughs. Finally, we evaluate the accuracy, robustness and time efficiency of our proposed method against existing algorithms on public benchmarks and natural real motion-blurred images. The experimental results demonstrate that the proposed PSF estimation scheme not only obtains a higher accuracy for the blur angle and blur length, but also shows stronger robustness and higher time efficiency under different circumstances.
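
To illustrate the spectral relationship such PSF estimation exploits (a hedged toy example, not the proposed method): a linear motion blur of length L acts like a box filter whose spectrum has zeros spaced N/L apart, so locating the spectral zeros reveals the blur length.

```python
import numpy as np

# A linear motion blur of length L behaves like a box filter; its spectrum is
# a sinc with zeros every N/L samples. Toy 1-D illustration:
N, L = 256, 16
psf = np.zeros(N)
psf[:L] = 1.0 / L                          # box kernel of length L

H = np.abs(np.fft.fft(psf))
first_zero = np.argmax(H[1:] < 1e-6) + 1   # index of the first spectral zero
L_est = N / first_zero
print(first_zero, L_est)                   # 16, 16.0

# On a real blurred image the zeros appear as periodic dark stripes in the log
# spectrum; the paper estimates the stripe orientation (blur angle) with a
# modified Radon transform and the spacing (blur length) via autocorrelation.
```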

Show publication details
Knauthe, Volker; Ballweg, Kathrin; Wunderlich, Marcel; Landesberger, Tatiana von; Guthe, Stefan

Influence of Container Resolutions on the Layout Stability of Squarified and Slice-And-Dice Treemaps

2020

EuroVis 2020. Eurographics / IEEE VGTC Conference on Visualization 2020. Short Papers

Eurographics / IEEE VGTC Conference on Visualization (EuroVis) <22, 2020, Norrköping, Sweden>

In this paper, we analyze the layout stability of the squarify and slice-and-dice treemap layout algorithms when changing the visualization container's resolution. We also explore how rescaling a finished layout to another resolution compares to a recalculated layout, i.e. fixed layout versus changing layout. For our evaluation, we examine a real-world use case and use a total of 240,000 random data treemap visualizations. Rescaling slice-and-dice or squarify layouts affects the aspect ratios. Recalculating slice-and-dice layouts is equivalent to rescaling, since the layout is not affected by changing the container resolution. Recalculating squarify layouts, on the other hand, yields stable aspect ratios but results in potentially huge layout changes. Finally, we provide guidelines for using rescaling, recalculation and the choice of algorithm.
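
A minimal sketch of the effect studied here, assuming hypothetical leaf weights and a one-level slice layout (not the authors' implementation): recalculating a slice-and-dice layout at a new container resolution produces the same strips as rescaling, and the strip aspect ratios change with the container.

```python
def slice_layout(weights, width, height, vertical=True):
    """One-level slice-and-dice: split the container into parallel strips
    proportional to the weights (vertical strips if `vertical`)."""
    total = sum(weights)
    rects, offset = [], 0.0
    for w in weights:
        frac = w / total
        if vertical:
            rects.append((offset, 0.0, frac * width, height))
            offset += frac * width
        else:
            rects.append((0.0, offset, width, frac * height))
            offset += frac * height
    return rects

def aspect_ratios(rects):
    return [round(max(w / h, h / w), 2) for (_, _, w, h) in rects]

weights = [6, 6, 4, 3, 2, 2, 1]
low  = slice_layout(weights, 400, 300)    # layout computed at 400 x 300
high = slice_layout(weights, 1600, 300)   # recalculated at 1600 x 300

print(aspect_ratios(low))    # strip aspect ratios at the low resolution
print(aspect_ratios(high))   # recalculating == rescaling here, but ratios change
```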

Show publication details
Boutros, Fadi; Damer, Naser; Raja, Kiran; Ramachandra, Raghavendra; Kirchbuchner, Florian; Kuijper, Arjan

Iris and periocular biometrics for head mounted displays: Segmentation, recognition, and synthetic data generation

2020

Image and Vision Computing

Augmented and virtual reality deployment is finding increasing use in novel applications. Some of these emerging and foreseen applications allow the users to access sensitive information and functionalities. Head Mounted Displays (HMD) are used to enable such applications and they typically include eye-facing cameras to facilitate advanced user interaction. Such integrated cameras capture the iris and partial periocular region during the interaction. This work investigates the possibility of using the captured ocular images from integrated cameras of HMD devices for biometric verification, taking into account the expected limited computational power of such devices. Such an approach can allow users to be verified in a manner that does not require any special and explicit user action. In addition to our comprehensive analyses, we present a lightweight, yet accurate, segmentation solution for the ocular region captured from HMD devices. Further, we benchmark a number of well-established iris and periocular verification methods along with an in-depth analysis on the impact of iris sample selection and its effect on iris recognition performance for HMD devices. To this end, we also propose and validate an identity-preserving synthetic ocular image generation mechanism that can be used for large-scale data generation for training or attack generation purposes. We establish the realistic image quality of the generated images, with high fidelity and identity-preserving capabilities, by benchmarking them for iris and periocular verification.

Show publication details
Fellner, Dieter W. [Hg.]; Sihn, Wilfried [Advisor]

Jahresbericht 2019

2020

Show publication details
Bortolato, Blaz; Ivanovska, Marija; Rot, Peter; Križaj, Janez; Terhörst, Philipp; Damer, Naser; Peer, Peter; Struc, Vitomir

Learning Privacy-Enhancing Face Representations through Feature Disentanglement

2020

15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020). Proceedings

International Conference on Automatic Face and Gesture Recognition (FG) <15, 2020, Buenos Aires, Argentina>

Convolutional Neural Networks (CNNs) are today the de-facto standard for extracting compact and discriminative face representations (templates) from images in automatic face recognition systems. Due to the characteristics of CNN models, the generated representations typically encode a multitude of information ranging from identity to soft-biometric attributes, such as age, gender or ethnicity. However, since these representations were computed for the purpose of identity recognition only, the soft-biometric information contained in the templates represents a serious privacy risk. To mitigate this problem, we present in this paper a privacy-enhancing approach capable of suppressing potentially sensitive soft-biometric information in face representations without significantly compromising identity information. Specifically, we introduce a Privacy-Enhancing Face-Representation learning Network (PFRNet) that disentangles identity from attribute information in face representations and consequently allows to efficiently suppress soft-biometrics in face templates. We demonstrate the feasibility of PFRNet on the problem of gender suppression and show through rigorous experiments on the CelebA, Labeled Faces in the Wild (LFW) and Adience datasets that the proposed disentanglement-based approach is highly effective and improves significantly on the existing state-of-the-art.

Show publication details

Measurement Based AR for Geometric Validation Within Automotive Engineering and Construction Processes

2020

Virtual, Augmented and Mixed Reality. Industrial and Everyday Life Applications

International Conference Virtual Augmented and Mixed Reality (VAMR) <12, 2020, Copenhagen, Denmark>

Lecture Notes in Computer Science (LNCS)
12191

We look at the final stages of the automobile design process, during which the geometric validation process for a design, in particular for the vehicle front end, is examined. A concept is presented showing how this process can be improved using augmented reality. Since the application poses high accuracy requirements, the augmented reality also needs to be highly accurate and of measurable quality. We present a Measurement Based AR approach to overlaying 3D information onto images, which extends the existing process and is particularly suited to the application in question. We also discuss how the accuracy of this new approach can be validated using computer vision methods employed under the appropriate conditions. The results of an initial study are presented, where the overlay accuracy is expressed in image pixels as well as millimeters, followed by a discussion on how this validation can be improved to meet the requirements posed by the application.

Show publication details
Rasheed, Muhammad Irtaza Bin; Kuijper, Arjan [1. Prüfer]; Burkhardt, Dirk [2. Prüfer]

Name Disambiguation

2020

Darmstadt, TU, Master Thesis, 2020

Name ambiguity is a challenging and critical problem in many applications, such as scientific literature management, trend analysis, etc. The main reason for this is different name abbreviations, identical names, and name misspellings in publications and bibliographies. An author may have multiple names and multiple authors may have the same name. So when we look for a particular name, many documents containing that person's name may be returned or missed because of the author's different styles of writing their name. This can produce name ambiguity which affects the performance of document retrieval, web search, and database integration, and may result in improper classification of authors. Previously, many clustering-based algorithms have been proposed, but the problem still remains largely unsolved for both research and industry communities, especially with the fast growth of the information available. The aim of this thesis is the implementation of a universal name disambiguation approach that considers almost any existing property to identify authors. After an author of a paper is identified, the normalized name writing form on the paper is used to refine the author model and even give an overview of the different writing forms of the author's name. This is achieved by first examining research on Human-Computer Interaction, specifically with a focus on (visual) trend analysis, as well as research on different name disambiguation techniques. Based on this, a concept is built and a generalized method for author name and affiliation disambiguation is implemented, while evaluating different properties.

Show publication details
Ulmer, Alex; Sessler, David; Kohlhammer, Jörn

NetCapVis: Web-based Progressive Visual Analytics for Network Packet Captures

2020

VizSec 2019

IEEE Symposium on Visualization for Cyber Security (VizSec) <16, 2019>

Network traffic log data is a key data source for forensic analysis of cybersecurity incidents. Packet Captures (PCAPs) are the raw information directly gathered from the network device. As the bandwidth and the number of connections to other hosts rise, this data becomes very large quickly. Malware analysts and administrators use this data frequently for their analysis. However, the currently most used tool, Wireshark, displays the data as a table, making it difficult to get an overview and focus on the significant parts. Also, the process of loading large files into Wireshark takes time and has to be repeated each time the file is closed. We believe that this problem poses an optimal setting for a client-server infrastructure with a progressive visual analytics approach. The processing can be outsourced to the server while the client is progressively updated. In this paper we present NetCapVis, a web-based progressive visual analytics system where the user can upload PCAP files, set initial filters to reduce the data before uploading, and then instantly interact with the data while the rest is progressively loaded into the visualizations.

Show publication details

Neue Normen für biometrische Datenaustauschformate

2020

Datenschutz & Datensicherheit

This article gives an overview of the new, extensible biometric data interchange formats in the ISO/IEC 39794 series of standards. After the required preparation period, these could be used in a few years for biometric reference data in long-lived machine-readable travel documents.

Show publication details

OLBVH: octree linear bounding volume hierarchy for volumetric meshes

2020

The Visual Computer

We present a novel bounding volume hierarchy for GPU-accelerated direct volume rendering (DVR) as well as volumetric mesh slicing and inside-outside intersection testing. Our novel octree-based data structure is laid out linearly in memory using space filling Morton curves. As our new data structure results in tightly fitting bounding volumes, boundary markers can be associated with nodes in the hierarchy. These markers can be used to speed up all three use cases that we examine. In addition, our data structure is memory-efficient, reducing memory consumption by up to 75%. Tree depth and memory consumption can be controlled using a parameterized heuristic during construction. This allows for significantly shorter construction times compared to the state of the art. For GPU-accelerated DVR, we achieve a performance gain of 8.4×–13×. For 3D printing, we present an efficient conservative slicing method that results in a 3×–25× speedup when using our data structure. Furthermore, we improve volumetric mesh intersection testing speed by 5×–52×.
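
The linear memory layout along a Morton space-filling curve mentioned above rests on bit interleaving of quantized coordinates. The following is a generic 30-bit Morton encoding sketch of the kind that linear octrees and linear BVHs commonly use; it is not the paper's implementation.

```python
def expand_bits_3d(v: int) -> int:
    """Spread the lower 10 bits of v so that there are two zero bits
    between consecutive bits (standard 30-bit Morton expansion)."""
    v &= 0x3FF
    v = (v | (v << 16)) & 0x030000FF
    v = (v | (v << 8))  & 0x0300F00F
    v = (v | (v << 4))  & 0x030C30C3
    v = (v | (v << 2))  & 0x09249249
    return v

def morton3d(x: int, y: int, z: int) -> int:
    """Interleave three 10-bit coordinates into a 30-bit Morton code.
    Sorting primitives by this code lays them out along a space-filling
    curve, the basis of linear octree / linear BVH construction."""
    return (expand_bits_3d(x) << 2) | (expand_bits_3d(y) << 1) | expand_bits_3d(z)

print(bin(morton3d(1, 0, 0)))  # 0b100
print(bin(morton3d(0, 1, 0)))  # 0b10
print(bin(morton3d(3, 3, 3)))  # 0b111111
```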

Show publication details
Henniger, Olaf; Fu, Biying; Chen, Cong

On the assessment of face image quality based on handcrafted features

2020

BIOSIG 2020

Conference on Biometrics and Electronic Signatures (BIOSIG) <19, 2020, Online>

GI-Edition - Lecture Notes in Informatics (LNI)

This paper studies the assessment of the quality of face images, predicting the utility of face images for automated recognition. The utility of frontal face images from a publicly available dataset was assessed by comparing them with each other using commercial off-the-shelf face recognition systems. Multiple face image features delineating face symmetry and characteristics of the capture process were analysed to find features predictive of utility. The selected features were used to build system-specific and generic random forest classifiers.
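
As a hedged sketch of the overall pipeline (handcrafted features feeding a random forest), the snippet below computes two made-up features, a mirror-symmetry score and a gradient-based sharpness score, and fits a random forest on synthetic utility labels; the paper's actual features, labels and classifiers differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def symmetry_feature(gray):
    """Mean absolute difference between the (aligned) face image and its
    horizontal mirror. Lower = more symmetric."""
    return float(np.mean(np.abs(gray - gray[:, ::-1])))

def sharpness_feature(gray):
    """Mean gradient magnitude as a crude proxy for focus/sharpness."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

# Random stand-ins for aligned face crops and for the comparison-score-derived
# utility labels used in the paper.
rng = np.random.default_rng(0)
images = rng.random((50, 64, 64))
utility = rng.random(50)

X = np.array([[symmetry_feature(im), sharpness_feature(im)] for im in images])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, utility)
print(model.predict(X[:3]))
```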

Show publication details
Berndt, René; Tuemmler, Carl; Kehl, Christian; Aehnelt, Mario; Grasser, Tim; Franek, Andreas; Ullrich, Torsten

Open Problems in 3D Model and Data Management

2020

Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

International Joint Conference on Computer Vision and Computer Graphics Theory and Applications (VISIGRAPP) <15, 2020, Valetta, Malta>

In interdisciplinary, cooperative projects that involve different representations of 3D models (such as CAD data and simulation data), a version problem can occur: different representations and parts have to be merged to form a holistic view of all relevant aspects. The individual partial models may be exported by and modified in different software environments. These modifications are a recurring activity and may be carried out again and again during the progress of the project. This position paper investigates the version problem; furthermore, this contribution is intended to stimulate discussion on how the problem can be solved.

Show publication details
Tamellini, Lorenzo; Chiumenti, Michele; Altenhofen, Christian; Attene, Marco; Barrowclough, Oliver; Livesu, Marco; Marini, Federico; Martinelli, Massimiliano; Skytt, Vibeke

Parametric Shape Optimization for Combined Additive–Subtractive Manufacturing

2020

JOM: The Journal of The Minerals, Metals & Materials Society

In industrial practice, additive manufacturing (AM) processes are often followed by post-processing operations such as heat treatment, subtractive machining, milling, etc., to achieve the desired surface quality and dimensional accuracy. Hence, a given part must be 3D-printed with extra material to enable this finishing phase. This combined additive/subtractive technique can be optimized to reduce manufacturing costs by saving printing time and reducing material and energy usage. In this work, a numerical methodology based on parametric shape optimization is proposed for optimizing the thickness of the extra material, allowing for minimal machining operations while ensuring the finishing requirements. Moreover, the proposed approach is complemented by a novel algorithm for generating inner structures to reduce the part distortion and its weight. The computational effort induced by classical constrained optimization methods is alleviated by replacing both the objective and constraint functions by their sparse grid surrogates. Numerical results showcase the effectiveness of the proposed approach.

Show publication details

PAVED: Pareto Front Visualization for Engineering Design

2020

EuroVis 2020. Eurographics / IEEE VGTC Conference on Visualization 2020

Eurographics / IEEE VGTC Conference on Visualization (EuroVis) <22, 2020, Norrköping, Sweden>

Design problems in engineering typically involve a large solution space and several potentially conflicting criteria. Selecting a compromise solution is often supported by optimization algorithms that compute hundreds of Pareto-optimal solutions, thus informing a decision by the engineer. However, the complexity of evaluating and comparing alternatives increases with the number of criteria that need to be considered at the same time. We present a design study on Pareto front visualization to support engineers in applying their expertise and subjective preferences for selection of the most-preferred solution. We provide a characterization of data and tasks from the parametric design of electric motors. The requirements identified were the basis for our development of PAVED, an interactive parallel coordinates visualization for exploration of multi-criteria alternatives. We reflect on our user-centered design process that included iterative refinement with real data in close collaboration with a domain expert as well as a summative evaluation in the field. The results suggest a high usability of our visualization as part of a real-world engineering design workflow. Our lessons learned can serve as guidance to future visualization developers targeting multi-criteria optimization problems in engineering design or alternative domains
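
The input to such a tool is the set of non-dominated (Pareto-optimal) solutions returned by the optimizer. A minimal sketch of non-dominated filtering, with hypothetical motor designs and all criteria to be minimized:

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated points, assuming every criterion
    is to be minimized. O(n^2) is fine for the few hundred solutions that a
    multi-criteria optimizer typically returns."""
    n = len(points)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(points[j] <= points[i]) and np.any(points[j] < points[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical motor designs: (cost, losses, weight) -- all to be minimized.
designs = np.array([
    [3.0, 0.20, 12.0],
    [2.5, 0.25, 13.0],
    [3.5, 0.18, 12.5],
    [3.0, 0.22, 12.5],   # dominated by the first design
])
print(pareto_front(designs))   # [0, 1, 2]
```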

Show publication details
Terhörst, Philipp; Riehl, Kevin; Damer, Naser; Rot, Peter; Bortolato, Blaz; Kirchbuchner, Florian; Struc, Vitomir; Kuijper, Arjan

PE-MIU: A Training-Free Privacy-Enhancing Face Recognition Approach Based on Minimum Information Units

2020

IEEE Access

Research on soft-biometrics showed that privacy-sensitive information can be deduced from biometric data. Utilizing biometric templates only, information about a person's gender, age, ethnicity, sexual orientation, and health state can be deduced. For many applications, these templates are expected to be used for recognition purposes only. Thus, extracting this information raises major privacy issues. Previous work proposed two kinds of learning-based solutions for this problem. The first ones provide strong privacy-enhancements, but are limited to pre-defined attributes. The second ones achieve more comprehensive but weaker privacy-improvements. In this work, we propose a Privacy-Enhancing face recognition approach based on Minimum Information Units (PE-MIU). PE-MIU, as we demonstrate in this work, is a privacy-enhancement approach for face recognition templates that achieves strong privacy-improvements and is not limited to pre-defined attributes. We exploit the structural differences between face recognition and facial attribute estimation by creating templates in a mixed representation of minimal information units. These representations contain patterns of privacy-sensitive attributes in a highly randomized form. Therefore, the estimation of these attributes becomes hard for function creep attacks. During verification, these units of a probe template are assigned to the units of a reference template by solving an optimal best-matching problem. This allows our approach to maintain a high recognition ability. The experiments are conducted on three publicly available datasets and with five state-of-the-art approaches. Moreover, we conduct the experiments simulating an attacker that knows and adapts to the system's privacy mechanism. The experiments demonstrate that PE-MIU is able to suppress privacy-sensitive information to a significantly higher degree than previous work in all investigated scenarios. At the same time, our solution is able to achieve a verification performance close to that of the unmodified recognition system. Unlike previous works, our approach offers a strong and comprehensive privacy-enhancement without the need of training.
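
A conceptual sketch of the verification step described above, assuming random stand-in templates split into blocks: each probe block is assigned to exactly one reference block by solving an optimal best-matching (assignment) problem, and the matched similarities are averaged. Block counts and dimensions are illustrative, not those of PE-MIU.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def block_matching_score(probe_blocks, reference_blocks):
    """Match every probe block to exactly one reference block so that the
    total cosine distance is minimal, then score the pair by the mean
    similarity of the matched blocks."""
    def normalize(b):
        return b / np.linalg.norm(b, axis=1, keepdims=True)
    p, r = normalize(probe_blocks), normalize(reference_blocks)
    similarity = p @ r.T                      # cosine similarity per block pair
    rows, cols = linear_sum_assignment(1.0 - similarity)  # Hungarian algorithm
    return float(similarity[rows, cols].mean())

# Toy templates: 8 "minimum information units" of 16 dimensions each.
rng = np.random.default_rng(1)
template = rng.normal(size=(8, 16))
probe_same = template + 0.1 * rng.normal(size=(8, 16))
probe_other = rng.normal(size=(8, 16))

print(block_matching_score(probe_same, template))    # high similarity
print(block_matching_score(probe_other, template))   # low similarity
```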

Show publication details
Boutros, Fadi; Damer, Naser; Raja, Kiran; Ramachandra, Raghavendra; Kirchbuchner, Florian; Kuijper, Arjan

Periocular Biometrics in Head-Mounted Displays: A Sample Selection Approach for Better Recognition

2020

IWBF 2020. Proceedings

International Workshop on Biometrics and Forensics (IWBF) <8, 2020, online>

Virtual and augmented reality technologies are increasingly used in a wide range of applications. Such technologies employ a Head Mounted Display (HMD) that typically includes an eye-facing camera and is used for eye tracking. As some of these applications require accessing or transmitting highly sensitive private information, a trusted verification of the operator's identity is needed. We investigate the use of the HMD setup to perform verification of the operator using the periocular region captured from the inbuilt camera. However, the uncontrolled nature of the periocular capture within the HMD results in images with a high variation in relative eye location and eye opening due to varied interactions. Therefore, we propose a new normalization scheme to align the ocular images and then a new reference sample selection protocol to achieve higher verification accuracy. The applicability of our proposed scheme is exemplified using two handcrafted feature extraction methods and two deep learning strategies. We conclude by stating the feasibility of such a verification approach despite the uncontrolled nature of the captured ocular images, especially when a proper alignment and sample selection strategy is employed.

  • 978-1-7281-6232-4
Show publication details
Nottebaum, Moritz; Kuijper, Arjan [1. Review]; Rus, Silvia [2. Review]

Person Re-identification in a Car Seat

2020

Darmstadt, TU, Bachelor Thesis, 2020

In this thesis, I enhanced a car seat with 16 capacitive sensors, which collect data from the person sitting on it; this data is then used to train a machine learning algorithm to re-identify the person from a group of other already trained persons. In practice, the car seat recognizes the person when he/she sits on the car seat and greets the person by their own name, enabling various customisations in the car unique to the user, like seat configurations, to be applied. Many researchers have done similar things with car seats or seats in general, though focusing on other topics like posture classification. Other interesting use cases of capacitive sensor enhanced seats involved measuring emotions or focusing on general activity recognition. One major challenge in capacitive sensor research is the inconstancy of the received data, as the sensors are not only affected by objects or persons near them, but also by changing conditions like humidity and temperature. My goal was to make the re-identification robust and to use a learning algorithm which can quickly learn the patterns of new persons and is able to achieve satisfactory results even after getting only a few training instances to learn from. Another important property was to have a learning algorithm which can operate independently and fast enough to be applicable in cars. Both points were achieved by using a shallow convolutional neural network which learns an embedding and is trained with triplet loss, resulting in computationally cheap inference. In the evaluation, results showed that neural networks are definitely not always the best choice, even though the computation time difference is insignificant. Without enough training data, they often lack generalisation over the training data. Therefore an ensemble-learning approach with majority voting proved to be the best choice for this setup.
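
A minimal sketch of the training pattern reported above, a shallow CNN embedding trained with triplet loss, using made-up layer sizes, window lengths and random data rather than the thesis's actual architecture:

```python
import torch
import torch.nn as nn

# A shallow 1D-CNN embedding over 16 capacitive channels, trained with a
# triplet loss so that samples of the same person are pulled together.
class SeatEmbedder(nn.Module):
    def __init__(self, channels=16, embedding_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, embedding_dim),
        )
    def forward(self, x):                       # x: (batch, channels, time)
        return nn.functional.normalize(self.net(x), dim=1)

model = SeatEmbedder()
criterion = nn.TripletMarginLoss(margin=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random stand-ins for (anchor, positive, negative) sensor windows.
anchor, positive, negative = (torch.randn(8, 16, 50) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
loss.backward()
optimizer.step()
print(float(loss))
```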

Show publication details
Grimm, Niklas; Damer, Naser [1. Gutachten]; Kuijper, Arjan [2. Gutachten]

Poseninvariante Handerkennung mit generativer Bildkorrektur

2020

Darmstadt, TU, Bachelor Thesis, 2020

Hand-based biometric methods are accepted by large parts of the population and can be applied contactlessly. For these contactless authentication methods, varying hand poses are one of the biggest problems. This thesis investigates whether it is possible to synthesize, from images with varying hand poses, images that correspond to a normalized hand pose. Based on this normalized hand pose, a better comparison with the reference image would then be possible. The lack of a large dataset and the varying scale of contactlessly acquired hand images pose many challenges. Motivated by these challenges, this thesis describes several experiments for synthesizing realistic-looking images with normalized hand poses from a limited amount of training data and varying scales of the input images. The synthesized images are compared with the original images in verification procedures. In the end, the experiments showed that it is not possible to synthesize realistic-looking hand images with normalized poses with this experimental setup.

Show publication details
Morrissey, John P.; Totoo, Prabhat; Hanley, Kevin J.; Papanicolopulos, Stefanos-Aldo; Ooi, Jin Y.; Gonzalez, Ivan Cores; Raffin, Bruno; Mostajabodaveh, Seyedmorteza; Gierlinger, Thomas

Post-processing and visualization of large-scale DEM simulation data with the open-source VELaSSCo platform

2020

Simulation

Regardless of its origin, in the near future the challenge will not be how to generate data, but rather how to manage big and highly distributed data to make it more easily handled and more accessible by users on their personal devices. VELaSSCo (Visualization for Extremely Large-Scale Scientific Computing) is a platform developed to provide new visual analysis methods for large-scale simulations serving the petabyte era. The platform adopts Big Data tools/architectures to enable in-situ processing for analytics of engineering and scientific data and hardware-accelerated interactive visualization. In large-scale simulations, the domain is partitioned across several thousand nodes, and the data (mesh and results) are stored on those nodes in a distributed manner. The VELaSSCo platform accesses this distributed information, processes the raw data, and returns the results to the users for local visualization by their specific visualization clients and tools. The global goal of VELaSSCo is to provide Big Data tools for the engineering and scientific community, in order to better manipulate simulations with billions of distributed records. The ability to easily handle large amounts of data will also enable larger, higher resolution simulations, which will allow the scientific and engineering communities to garner new knowledge from simulations previously considered too large to handle. This paper shows, by means of selected Discrete Element Method (DEM) simulation use cases, that the VELaSSCo platform facilitates distributed post-processing and visualization of large engineering datasets.

Show publication details
González, Camila; Kuijper, Arjan [1. Gutachten]; Mukhopadhyay, Anirban [2. Gutachten]

Preventing Catastrophic Forgetting in Deep Learning Classifiers

2020

Darmstadt, TU, Master Thesis, 2020

Deep neural networks suffer from the problem of catastrophic forgetting. When a model is trained sequentially with batches of data coming from different domains, it adapts too strongly to properties present in the last batch. This causes a catastrophic fall in performance for data similar to that in the initial batches of training. Regularization-based methods are a popular way to reduce the degree of forgetting, as they have an array of desirable properties. However, they perform poorly when no information about the data origin is present at inference time. We propose a way to improve the performance of such methods which comprises introducing insulating noise into unimportant parameters so that the model grows robust against changes to them. Additionally, we present a way to bypass the need for source information. We propose using an oracle to decide which of the previously seen domains a new instance belongs to. The oracle's prediction is then used to select the model state. In this work, we introduce three such oracles. Two of these select the model which is most confident for the instance. The first, the cross-entropy oracle, chooses the model with the least cross-entropy between the prediction and the one-hot form of the prediction. The second, the MC dropout oracle, chooses the model with the lowest standard deviation between predictions resulting from performing an array of forward passes while applying dropout. Finally, the domain identification oracle extracts information about the data distribution for each task using the training data. At inference time, it assesses which task the instance is likeliest to belong to, and applies the corresponding model. For all of our three different datasets, at least one oracle performs better than all regularization-based methods. Furthermore, we show that the oracles can be combined with a sparsification-based approach that significantly reduces the memory requirements.
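
A small NumPy sketch of the two confidence-based oracles as described, with synthetic softmax outputs standing in for the task-specific model states:

```python
import numpy as np

def cross_entropy_oracle(prob_per_model):
    """Pick the model whose prediction has the least cross-entropy to the
    one-hot version of its own prediction (i.e. the most confident model)."""
    scores = [-np.log(p.max() + 1e-12) for p in prob_per_model]
    return int(np.argmin(scores))

def mc_dropout_oracle(mc_prob_per_model):
    """Pick the model with the lowest standard deviation across several
    stochastic (dropout-enabled) forward passes."""
    scores = [p.std(axis=0).mean() for p in mc_prob_per_model]
    return int(np.argmin(scores))

# Hypothetical softmax outputs of two model states for one instance.
probs = [np.array([0.90, 0.05, 0.05]), np.array([0.40, 0.35, 0.25])]
print(cross_entropy_oracle(probs))      # 0: the first model is more confident

# Hypothetical 10 MC-dropout passes x 3 classes per model.
rng = np.random.default_rng(0)
mc = [
    np.clip(probs[0] + 0.01 * rng.normal(size=(10, 3)), 0, 1),  # stable model
    np.clip(probs[1] + 0.10 * rng.normal(size=(10, 3)), 0, 1),  # unstable model
]
print(mc_dropout_oracle(mc))            # 0: lower prediction variance
```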

Show publication details
Terhörst, Philipp; Huber, Marco; Damer, Naser; Rot, Peter; Kirchbuchner, Florian; Struc, Vitomir; Kuijper, Arjan

Privacy Evaluation Protocols for the Evaluation of Soft-Biometric Privacy-Enhancing Technologies

2020

BIOSIG 2020

Conference on Biometrics and Electronic Signatures (BIOSIG) <19, 2020, Online>

GI-Edition - Lecture Notes in Informatics (LNI)
P-306

Biometric data includes privacy-sensitive information, such as soft-biometrics. Soft-biometric privacy enhancing technologies aim at limiting the possibility of deducing such information. Previous works proposed several solutions to this problem using several different evaluation processes, metrics, and attack scenarios. The absence of a standardized evaluation protocol makes a meaningful comparison of these solutions difficult. In this work, we propose privacy evaluation protocols (PEPs) for privacy-enhancing technologies (PETs) dealing with soft-biometric privacy. Our framework evaluates PETs in the most critical scenario of an attacker that knows and adapts to the system's privacy mechanism. Moreover, our PEPs differentiate between PETs of learning-based or training-free nature. To ensure that our protocol meets the highest standards in both cases, it is based on Kerckhoffs's principle of cryptography.

Show publication details
Cao, Min; Chen, Chen; Dou, Hao; Hu, Xiyuan; Peng, Silong; Kuijper, Arjan

Progressive Bilateral-Context Driven Model for Post-Processing Person Re-Identification

2020

IEEE Transactions on Multimedia

Most existing person re-identification methods compute pairwise similarity by extracting robust visual features and learning the discriminative metric. Owing to visual ambiguities, these content-based methods that determine the pairwise relationship only based on the similarity between them, inevitably produce a suboptimal ranking list. Instead, the pairwise similarity can be estimated more accurately along the geodesic path of the underlying data manifold by exploring the rich contextual information of the sample. In this paper, we propose a lightweight post-processing person re-identification method in which the pairwise measure is determined by the relationship between the sample and the counterpart's context in an unsupervised way. We translate the point-to-point comparison into the bilateral point-to-set comparison. The sample's context is composed of its neighbor samples with two different definition ways: the first order context and the second order context, which are used to compute the pairwise similarity in sequence, resulting in a progressive post-processing model. The experiments on four large-scale person re-identification benchmark datasets indicate that (1) the proposed method can consistently achieve higher accuracies by serving as a post-processing procedure after the content-based person re-identification methods, showing its state-of-the-art results, (2) the proposed lightweight method only needs about 6 milliseconds for optimizing the ranking results of one sample, showing its high-efficiency. Code is available at: https://github.com/123ci/PBCmodel.

Show publication details
Jansen, Nils; Kuijper, Arjan [Advisor]; Siegmund, Dirk [Advisor]

Rapid Depth from Multi-view Images

2020

Darmstadt, TU, Master Thesis, 2020

Show publication details
Fina, Kenten; Kuijper, Arjan [1. Review]; Urban, Philipp [2. Review]; Dennstädt, Marco [3. Review]

Real-time rendering of CSG-operations on high resolution data for preview of 3D-prints

2020

Darmstadt, TU, Master Thesis, 2020

In this thesis various optimizations for the ray-marching algorithm are introduced to efficiently render CSG operations on high-resolution meshes. By using a 2-pass render method and a CSG-node memory method, speed-ups by a factor of 2 to 3 can be achieved in contrast to standard ray marching. We implement an octree-based data structure to compress the high-resolution SDF (signed distance function) as well as color data. For raw data at a resolution of 1024^3, our compressed data requires on average 1.69% of the raw data. Lastly we compare our performance against the OpenCSG implementations of the well-known Goldfeather and SCS algorithms.
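
For context, ray marching CSG over signed distance functions relies on the standard per-sample CSG operators (union = min, intersection = max, difference = max(a, −b)). The sketch below evaluates them on analytic SDFs; the thesis instead operates on compressed, high-resolution discrete SDF data.

```python
import numpy as np

# Signed distance functions: negative inside, positive outside.
def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center, axis=-1) - radius

def box_sdf(p, center, half_extent):
    q = np.abs(p - center) - half_extent
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(np.max(q, axis=-1), 0.0)
    return outside + inside

# CSG on SDFs: union = min, intersection = max, difference = max(a, -b).
csg_union        = lambda a, b: np.minimum(a, b)
csg_intersection = lambda a, b: np.maximum(a, b)
csg_difference   = lambda a, b: np.maximum(a, -b)

# Sample points along a ray, as a ray marcher / sphere tracer would.
points = np.stack([np.linspace(-2, 2, 9), np.zeros(9), np.zeros(9)], axis=-1)
a = box_sdf(points, np.zeros(3), np.array([1.0, 1.0, 1.0]))
b = sphere_sdf(points, np.array([1.0, 0.0, 0.0]), 0.8)
print(csg_difference(a, b))   # distance to "box minus sphere" at each sample
```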

Show publication details
Stenger, Pascal; Stork, André [Advisor]; Grasser, Tim [Advisor]

Rigid Multi-Point Constraint Finite Elements Using an Iterative Solver and a Projection Approach

2020

Darmstadt, TU, Bachelor Thesis, 2020

In this thesis we present a way to incorporate multi-point constraints (MPCs) into a given finite element system by using a projection approach. The solver into which we incorporate this is a conjugate gradient (CG) algorithm. For this we first explain what MPCs actually are and what the general idea of a projection approach is. Next we present efficient algorithms to obtain the projection matrices and show how we use them in the conjugate gradient solver. Our main framework is already given, and since it is written in C++ and CUDA, the final implementation is also in C++/CUDA. Finally we take a look at the results and the performance hit we get when using this approach. We also take a look at how the convergence speed is affected when using a modified preconditioner.
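
A dense NumPy sketch of the projection approach, assuming homogeneous multi-point constraints C x = 0: the projector P = I − Cᵀ(CCᵀ)⁻¹C keeps the CG iterates in the constraint null space without modifying the system matrix. The thesis's C++/CUDA implementation would apply this matrix-free; the sizes here are toy values.

```python
import numpy as np

def projected_cg(A, b, C, iters=200, tol=1e-10):
    """Conjugate gradients restricted to the subspace {x : C x = 0}.

    P = I - C^T (C C^T)^{-1} C projects onto the null space of the constraint
    matrix C, so constraints are enforced implicitly (projection approach)."""
    P = np.eye(len(b)) - C.T @ np.linalg.solve(C @ C.T, C)
    x = np.zeros_like(b)
    r = P @ b
    d = r.copy()
    for _ in range(iters):
        Ad = P @ (A @ d)
        alpha = (r @ r) / (d @ Ad)
        x += alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x

# Small SPD system with one multi-point constraint x0 - x1 = 0.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
C = np.array([[1.0, -1.0, 0.0]])
x = projected_cg(A, b, C)
print(x, C @ x)   # constraint residual ~0
```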

Show publication details
Krämer, Michel; Gutbell, Ralf; Würz, Hendrik Martin; Weil, Jannis

Scalable processing of massive geodata in the cloud: generating a level-of-detail structure optimized for web visualization

2020

Full paper Proceedings of the 23rd AGILE Conference on Geographic Information Science

Conference on Geographic Information Science (AGILE) <23, 2020, Chania, Crete, Greece>

We present a cloud-based approach to transform arbitrarily large terrain data to a hierarchical level-of-detail structure that is optimized for web visualization. Our approach is based on a divide-and-conquer strategy. The input data is split into tiles that are distributed to individual workers in the cloud. These workers apply a Delaunay triangulation with a maximum number of points and a maximum geometric error. They merge the results and triangulate them again to generate less detailed tiles. The process repeats until a hierarchical tree of different levels of detail has been created. This tree can be used to stream the data to the web browser. We have implemented this approach in the frameworks Apache Spark and GeoTrellis. Our paper includes an evaluation of our approach and the implementation. We focus on scalability and runtime but also investigate bottlenecks, possible reasons for them, as well as options for mitigation. The results of our evaluation show that our approach and implementation are scalable and that we are able to process massive terrain data.

Show publication details

Sensing Technology for Human Activity Recognition: a Comprehensive Survey

2020

IEEE Access

Sensors are devices that quantify the physical aspects of the world around us. This ability is important to gain knowledge about human activities. Human activity recognition plays an important role in people's everyday life. In order to solve many human-centered problems, such as health care and individual assistance, the need to infer various simple to complex human activities is prominent. Therefore, having a well-defined categorization of sensing technology is essential for the systematic design of human activity recognition systems. By extending the sensor categorization proposed by White, we survey the most prominent research works that utilize different sensing technologies for human activity recognition tasks. To the best of our knowledge, there is no thorough sensor-driven survey that considers all sensor categories in the domain of human activity recognition with respect to the sampled physical properties, including a detailed comparison across sensor categories. Thus, our contribution is to close this gap by providing an insight into the state-of-the-art developments. We identify the limitations with respect to the hardware and software characteristics of each sensor category and draw comparisons based on benchmark features retrieved from the research works introduced in this survey. Finally, we conclude with general remarks and provide future research directions for human activity recognition within the presented sensor categorization.

Show publication details

SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness

2020

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) <2020, virtual>

Face image quality is an important factor to enable high-performance face recognition systems. Face quality assessment aims at estimating the suitability of a face image for the purpose of recognition. Previous work proposed supervised solutions that require artificially or human labelled quality values. However, both labelling mechanisms are error prone as they do not rely on a clear definition of quality and may not know the best characteristics for the utilized face recognition system. Avoiding the use of inaccurate quality labels, we proposed a novel concept to measure face quality based on an arbitrary face recognition model. By determining the embedding variations generated from random subnetworks of a face model, the robustness of a sample representation and thus, its quality is estimated. The experiments are conducted in a cross-database evaluation setting on three publicly available databases. We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry. The results show that our unsupervised solution outperforms all other approaches in the majority of the investigated scenarios. In contrast to previous works, the proposed solution shows a stable performance over all scenarios. Utilizing the deployed face recognition model for our face quality assessment methodology avoids the training phase completely and further outperforms all baseline approaches by a large margin. Our solution can be easily integrated into current face recognition systems, and can be modified to other tasks beyond face recognition.
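
A conceptual sketch of the quality measure described above, with random vectors standing in for the embeddings produced by random subnetworks of a face model; the mapping of the mean pairwise distance to a (0, 1] score is one plausible choice, not necessarily the paper's exact formula.

```python
import numpy as np

def stochastic_robustness_quality(embeddings):
    """Quality from the agreement of embeddings produced by random
    subnetworks (e.g. different dropout patterns) of the same face model:
    the smaller the mean pairwise distance, the more robust the sample
    representation and the higher the estimated quality."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    m = len(e)
    dists = [np.linalg.norm(e[i] - e[j]) for i in range(m) for j in range(i + 1, m)]
    return float(2.0 / (1.0 + np.exp(np.mean(dists))))   # map mean distance to (0, 1]

rng = np.random.default_rng(0)
base = rng.normal(size=128)
stable   = np.stack([base + 0.01 * rng.normal(size=128) for _ in range(16)])
unstable = np.stack([base + 0.80 * rng.normal(size=128) for _ in range(16)])
print(stochastic_robustness_quality(stable))     # close to 1
print(stochastic_robustness_quality(unstable))   # noticeably lower
```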

Show publication details
Pöllabauer, Thomas Jürgen; Rojtberg, Pavel [1. Prüfer]; Kuijper, Arjan [2. Prüfer]

STYLE: Style Transfer for Synthetic Training of a YoLo6D Pose Estimator

2020

Darmstadt, TU, Master Thesis, 2020

Supervised training of deep neural networks requires a large amount of training data. Since labeling is time-consuming and error-prone and many applications lack data sets of adequate size, research soon became interested in generating this data synthetically, e.g. by rendering images, which makes the annotation free and allows utilizing other sources of available data, for example, CAD models. However, unless much effort is invested, synthetically generated data usually does not exhibit the exact same properties as real-world data. In the context of images, there is a difference in the distribution of image features between synthetic and real imagery, a domain gap. This domain gap reduces the transferability of synthetically trained models, hurting their real-world inference performance. Current state-of-the-art approaches trying to mitigate this problem concentrate on domain randomization: overwhelming the model's feature extractor with enough variation to force it to learn more meaningful features, effectively rendering real-world images nothing more but one additional variation. The main problem with most domain randomization approaches is that they require the practitioner to decide on the amount of randomization required, a fact research calls "blind" randomization. Domain adaptation, in contrast, directly tackles the domain gap without the assistance of the practitioner, which makes this approach seem superior. This work deals with training of a DNN-based object pose estimator in three scenarios: First, a small amount of real-world images of the objects of interest is available, second, no images are available, but object-specific texture is given, and third, no images and no textures are available. Instead of copying successful randomization techniques, these three problems are tackled mainly with domain adaptation techniques. The main proposition is the adaptation of general-purpose, widely-available, pixel-level style transfer to directly tackle the differences in features found in images from different domains. To that end several approaches are introduced and tested, corresponding to the three different scenarios. It is demonstrated that in scenarios one and two, conventional conditional GANs can drastically reduce the domain gap, thereby improving performance by a large margin when compared to non-photo-realistic renderings. More importantly: ready-to-use style transfer solutions improve performance significantly when compared to a model trained with the same degree of randomization, even when there is no real-world data of the target objects available (scenario three), thereby reducing the reliance on domain randomization.

Show publication details
Damer, Naser; Grebe, Jonas Henry; Chen, Cong; Boutros, Fadi; Kirchbuchner, Florian; Kuijper, Arjan

The Effect of Wearing a Mask on Face Recognition Performance: an Exploratory Study

2020

BIOSIG 2020

Conference on Biometrics and Electronic Signatures (BIOSIG) <19, 2020, Online>

GI-Edition - Lecture Notes in Informatics (LNI)
P-306

Face recognition has become essential in our daily lives as a convenient and contactless method of accurate identity verification. Processes such as identity verification at automatic border control gates or the secure login to electronic devices are increasingly dependent on such technologies. The recent COVID-19 pandemic has increased the value of hygienic and contactless identity verification. However, the pandemic led to the wide use of face masks, essential to keep the pandemic under control. The effect of wearing a mask on face recognition in a collaborative environment is a currently sensitive yet understudied issue. We address that by presenting a specifically collected database containing three sessions, each with three different capture instructions, to simulate realistic use cases. We further study the effect of masked face probes on the behaviour of three top-performing face recognition systems, two academic solutions and one commercial off-the-shelf (COTS) system.

Show publication details

Time-unfolding Object Existence Detection in Low-quality Underwater Videos using Convolutional Neural Networks

2020

Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

International Joint Conference on Computer Vision and Computer Graphics Theory and Applications (VISIGRAPP) <15, 2020, Valetta, Malta>

Monitoring the environment for early recognition of changes is necessary for assessing the success of renaturation measures on a factual basis. It is also used in fisheries and livestock production for monitoring and for quality assurance. The goal of the presented system is to count sea trout annually over the course of several months. Sea trout are detected with underwater camera systems triggered by motion sensors. Such a scenario generates many videos that have to be evaluated manually. This article describes the techniques used to automate the image evaluation process. An effective method has been developed to classify videos and determine the times of occurrence of sea trout, while significantly reducing the annotation effort. A convolutional neural network has been trained via supervised learning. The underlying images are frame compositions automatically extracted from videos on which sea trout are to be detected. The accuracy of the resulting detection system reaches values of up to 97.7 %.

Show publication details

Towards 3D Digitization in the GLAM (Galleries, Libraries, Archives, and Museums) Sector – Lessons Learned and Future Outlook

2020

The IPSI BgD Transactions on Internet Research

The European Cultural Heritage Strategy for the 21st century, within the Digital Agenda, one of the flagship initiatives of the Europe 2020 Strategy, has led to an increased demand for fast, efficient and faithful 3D digitization technologies for cultural heritage artefacts. 3D digitization has proven to be a promising approach to enable precise reconstructions of objects. Yet, unlike the digital acquisition of cultural goods in 2D which is widely used and automated today, 3D digitization often still requires significant manual intervention, time and money. To enable heritage institutions to make use of large scale, economic, and automated 3D digitization technologies, the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD has developed CultLab3D, the world’s first fully automatic 3D mass digitization technology for collections of three-dimensional objects. 3D scanning robots such as the CultArm3D-P are specifically designed to automate the entire 3D digitization process thus allowing to capture and archive objects on a large-scale and produce highly accurate photo-realistic representations. The unique setup allows to shorten the time needed for digitization from several hours to several minutes per artefact.

Show publication details

Transforming Seismocardiograms Into Electrocardiograms by Applying Convolutional Autoencoders

2020

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings

45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020) <45, 2020, Barcelona, Spain>

Electrocardiograms constitute the key diagnostic tool for cardiologists. While their diagnostic value is yet unparalleled, electrode placement is prone to errors, and sticky electrodes pose a risk for skin irritations and may detach in long-term measurements. Heart.AI presents a fundamentally new approach, transforming motion-based seismocardiograms into electrocardiograms interpretable by cardiologists. Measurements are conducted simply by placing a sensor on the user's chest. To generate the transformation model, we trained a convolutional autoencoder with the publicly available CEBS dataset. The transformed ECG strongly correlates with the ground truth (r=.94, p<.01), and important features (number of R-peaks, QRS-complex durations) are modeled realistically (Bland-Altman analyses, p>0.12). On a 5-point Likert scale, 15 cardiologists rated the morphological and rhythmological validity as high (4.63/5 and 4.8/5, respectively). Our electrodeless approach solves crucial problems of ECG measurements while being scalable, accessible and inexpensive. It contributes to telemedicine, especially in low-income and rural regions worldwide.

Show publication details

Unconstrained workout activity recognition on unmodified commercial off-the-shelf smartphones

2020

Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments

ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA) <13, 2020, Corfu, Greece>

ACM International Conference Proceedings Series (ICPS)

Smartphones have become an essential part of our lives. Its computing power and current specifications make a modern smartphone even more powerful than the computers NASA used to send astronauts to the moon. Equipped with various integrated sensors, a modern smartphone can be leveraged for lots of smart applications. In this paper, we investigate the possibility of using an unmodified commercial off-the-shelf (COTS) smartphone to recognize 8 different workout exercises. App-based workout has become popular in the last few years. People do not need to go to the gym to practice. The advantage of using a mobile device is that you can practice anywhere at any time. In this work, we turned a COTS smartphone into an active sonar device to leverage the echo reflected from exercising movement close to the device. By conducting a test study with 14 participants performing these eight exercises, we show first results for cross-person evaluation and the generalization ability of our inference models on unseen participants. A bidirectional LSTM model achieved an overall F1 score of 88.86 % for the cross-subject case and 79.52 % for the evaluation on held-out participants. Similarly good results can be achieved by a fine-tuned VGG16 model in comparison to a 2D-CNN architecture trained from scratch.

Show publication details
Fellner, Dieter W. [Hrsg.]; Welling, Daniela [Red.]; Ackeren, Janine van [Red.]; Bergstedt, Bettina [Red.]; Krüger, Kathrin [Red.]; Prasche, Svenja [Advisor]; Bornemann, Heidrun [Red.]; Roth, Anahit [Red.]

Unser Jahr 2019

2020

The visual computing applications of Fraunhofer IGD rely on true-to-life visualization and combine it with important domain knowledge in order to convey complex issues already in the planning phase. We offer both experts and citizens an interactive 3D web application that places project development in a comprehensible, realistic context, which leads to significantly higher acceptance of the results. The city of Hamburg is already implementing such a scenario. Citizens can suggest new planting locations for trees and receive direct feedback on whether all guidelines are met, while the municipal planning software informs them about possible alternative locations. Transparency and fast feedback allow citizens to participate actively and productively in urban planning processes. The principle is transferable. Whether it concerns infrastructure for broadband expansion, traffic, or renewable energies: all parties involved meet independently of time and place, everyone has the same comprehensive information, and the discussion takes place on a virtual level. Fraunhofer IGD has also created new possibilities in the education sector: economical, ecological, efficient. Those who learn through multiple sensory channels, for example through language and images, can retain knowledge better. Since 2019, volunteer helpers at the German Red Cross have been practicing in virtual training worlds what an operation in an ambulance looks like in detail, because sometimes no ambulance is available for practice. Or: what happens when apprentices at Heidelberger Druckmaschinen AG are supposed to understand, maintain, and repair complex equipment? Stop production and take the machine apart and put it back together? Thanks to virtual learning spaces, the apprentices can "see", recognize, and understand the processes inside the machine. Visual computing with virtual reality (VR) and augmented reality (AR) is and remains exciting, not only for science: according to a study by PricewaterhouseCoopers, VR and AR have great potential. In 2030, 400,000 people in Germany alone will work with them at their workplace; currently, the number is 15,000.

Show publication details

Vibroarthrography using Convolutional Neural Networks

2020

Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments

ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA) <13, 2020, Corfu, Greece>

ACM International Conference Proceedings Series (ICPS)

Knees, hips, and other human joints generate noise and vibration while they move. The vibration and sound pattern is characteristic not only for the type of joint but also for its condition. The patterns vary due to abrasion, damage, injury, and other causes. Therefore, vibration and sound analysis, also known as vibroarthrography (VAG), provides information and possible conclusions about the joint condition, age and health state. The analysis of the patterns is very sophisticated and complex, and so machine learning techniques have been applied before. In this paper, we use convolutional neural networks for the analysis of vibroarthrographic signals and compare the results with already known machine learning techniques.

Show publication details
Kluge, Sven; Gladisch, Stefan; Lukas, Uwe von; Staadt, Oliver; Tominski, Christian

Virtual Lenses as Embodied Tools for Immersive Analytics

2020

Virtuelle und Erweiterte Realität

Workshop der GI-Fachgruppe VR/AR: Virtuelle und Erweiterte Realität <17, 2020, online>

Interactive lenses are useful tools for supporting the analysis of data in different ways. Most existing lenses are designed for 2D visualization and are operated using standard mouse and keyboard interaction. On the other hand, research on virtual lenses for novel 3D immersive visualization environments is scarce. Our work aims to narrow this gap in the literature. We focus particularly on the interaction with lenses. Inspired by natural interaction with magnifying glasses in the real world, our lenses are designed as graspable tools that can be created and removed as needed, manipulated and parameterized depending on the task, and even combined to flexibly create new views on the data. We implemented our ideas in a system for the visual analysis of 3D sonar data. Informal user feedback from more than 100 people suggests that the designed lens interaction is easy to use for the task of finding a hidden wreck in sonar data.

Show publication details
Metzler, Simon Konstantin; Kuijper, Arjan [1. Review]; Yeste Magdaleno, Javier [2. Review]

Visually-aware Recommendation System for Interior Design

2020

Darmstadt, TU, Bachelor Thesis, 2020

Suitable recommendations are critical for a successful e-commerce experience, especially for product categories such as furniture. A well thought-out choice of furniture is decisive for the visual appearance and the comfort of a room. Interior design can take much time and not everyone is capable of doing it. Some furniture stores offer recommendation systems on their websites, which are usually based on collaborative filters that are very restrictive, can be inaccurate, and require a lot of data at first. This work aims to develop a method to provide set recommendations that adhere to a cohesive visual style. The method can automatically advise the user on what set of furniture to choose for a room around one seed piece. The proposed system uses a database in which learned attributes of the dataset are stored beforehand. Once the user selects a seed, the system extracts the attributes from the image to execute a query in the database. Finally, a visual search performed in the filtered subset returns the best candidates. This approach has the advantage of returning results faster and of reducing the search space, thereby improving efficiency. The presented system is both powerful and efficient enough to give useful, user-specific recommendations in real time.

Show publication details
Zouhar, Florian; Senner, Ivo

Web-Based Visualization of Big Geospatial Vector Data

2020

Geospatial Technologies for Local and Regional Development

Conference on Geographic Information Science (AGILE) <22, 2019, Limassol, Cyprus>

Lecture Notes in Geoinformation and Cartography (LNGC)

Today, big data is one of the most challenging topics in computer science. To give customers, developers or domain experts an overview of their data, it needs to be visualized. In case the data contains geospatial information, this becomes more difficult, because most users have well-trained expectations of how to explore geographic information. A common map interface allows users to zoom and pan to explore the whole dataset. This paper focuses on an approach to visualize huge sets of geospatial data in modern web browsers along with maintaining a dynamic tile tree. The contribution of this work is to make it possible to render over one million polygons integrated in a modern web application by using 2D Vector Tiles. A major challenge is the map interface providing interaction features such as data-driven filtering and styling of vector data for intuitive data exploration. A web application requests, handles and renders the vector tiles. Such an application has to keep its responsiveness for a better user experience. Our approach to building and maintaining the tile tree database provides an interface to import new data and, more valuably, a flexible way to request Vector Tiles. This is important to address the issues regarding memory allocation in modern web applications.

Show publication details
Kraft, Dimitri; Srinivasan, Karthik; Bieber, Gerald

Wrist-worn Accelerometer based Fall Detection for Embedded Systems and IoT devices using Deep Learning Algorithms

2020

Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments

ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA) <13, 2020, Corfu, Greece>

ACM International Conference Proceedings Series (ICPS)

With increasing age, elderly persons fall more often. While a third of people over 65 years fall once a year, hospitalized people over 80 years fall multiple times per year. Reliable fall detection is absolutely necessary for fast help. Therefore, wrist-worn accelerometer-based fall detection systems are being developed, but their accuracy and precision are not standardized, comparable, or sometimes even known. In this paper, we present an overview of existing public databases with sensor-based fall datasets and harmonize existing wrist-worn datasets for a broader and more robust evaluation. Furthermore, we analyze the currently achievable recognition rate of fall detection using deep learning algorithms for mobile and embedded systems. The presented results and databases can be used for further research and optimization in order to increase the recognition rate and enhance the independent life of the elderly.