The »Selected Readings in Computer Graphics 2016« consist of 40 articles selected from a total of 117 scientific publications.
The contributions come from the Fraunhofer Institute for Computer Graphics Research IGD with its locations in Darmstadt as well as in Rostock, Singapore and Graz, and from the partner institutes at the respective universities: the Interactive Graphics Systems group at Technische Universität Darmstadt, the Computer Graphics and Communication group at the Institute for Computer Science of the University of Rostock, Nanyang Technological University (NTU), Singapore, and the Visual Computing cluster of excellence at Graz University of Technology. They all cooperate closely in projects as well as in research and development in the field of computer graphics.
All articles previously appeared in various scientific books, journals, conference proceedings and workshops. The publications had to pass a thorough review process by internationally leading experts and established technical societies. The Selected Readings therefore give a fairly good and detailed overview of the scientific developments in computer graphics in the year 2016. They are compiled by Professor Dieter W. Fellner, director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt. He is also a professor at the Department of Computer Science of Technische Universität Darmstadt and a professor at the Faculty of Computer Science of Graz University of Technology.
The Selected Readings in Computer Graphics 2016 address aspects and trends of research and development in computer graphics in the areas of
- Digital Society
- Virtual Engineering
- Visual Decision Support
- Visual Computing Research
List of Publications
IEEE Computer Graphics and Applications
We introduce deferred warping, a novel approach for real-time deformation of 3D objects attached to an animated or manipulated surface. Our target application is virtual prototyping of garments, where 2D pattern modeling is combined with 3D garment simulation, allowing an immediate validation of the design. The technique works in two steps: First, the surface deformation of the target object is determined and the resulting transformation field is stored as a matrix texture. Then the matrix texture is used as a look-up table to transform a given geometry onto the deformed surface. Splitting the process into two steps yields great flexibility, since different attachment types can be realized by simply defining specific mapping functions. Our technique can directly handle complex topology changes within the surface. We demonstrate a fast implementation in the vertex shading stage, allowing the use of highly decorated surfaces with millions of triangles in real time.
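The two-step structure described above can be sketched on the CPU (an illustrative sketch with hypothetical names and data layouts; the paper performs the look-up in the vertex shading stage):

```python
import numpy as np

def bake_matrix_texture(width, height, deform):
    """Step 1: sample the surface deformation into a 'matrix texture'.
    `deform(u, v)` returns a 4x4 transform for surface coordinates (u, v)."""
    tex = np.empty((height, width, 4, 4))
    for j in range(height):
        for i in range(width):
            u, v = i / max(width - 1, 1), j / max(height - 1, 1)
            tex[j, i] = deform(u, v)
    return tex

def warp_vertices(tex, uvs, positions):
    """Step 2: look up each vertex's matrix via its (u, v) mapping and apply it."""
    h, w = tex.shape[:2]
    out = np.empty_like(positions)
    for k, ((u, v), p) in enumerate(zip(uvs, positions)):
        i, j = int(round(u * (w - 1))), int(round(v * (h - 1)))
        out[k] = (tex[j, i] @ np.append(p, 1.0))[:3]  # homogeneous transform
    return out
```

Because step 2 only reads the matrix texture, swapping the mapping function (here, nearest-neighbour look-up by (u, v)) is what realizes different attachment types.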
3DHOG for Geometric Similarity Measurement and Retrieval on Digital Cultural Heritage Archives
Intelligent Interactive Multimedia Systems and Services 2016
KES International Conference on Intelligent Interactive Multimedia Systems and Services (IIMSS) <9, 2016, Puerto de la Cruz, Tenerife, Spain>
Smart Innovation, Systems and Technologies, 55
With projects such as CultLab3D, 3D digital preservation of cultural heritage will become more affordable, and with this the number of 3D models representing scanned artefacts will increase dramatically. However, once mass digitization is possible, the subsequent bottleneck to overcome is the annotation of cultural heritage artefacts with provenance data. Current annotation tools are mostly based on textual input and can at best link an artefact to documents, pictures and videos; only some tools already support 3D models. Therefore, we envisage the need to aid curators by allowing for fast, web-based, semi-automatic, 3D-centered annotation of artefacts with metadata. In this paper we give an overview of various technologies we are currently developing to address this issue. On the one hand, we want to store 3D models with similarity descriptors that are applicable independently of the different quality levels of 3D models of the same artefact. The goal is to retrieve and suggest to the curator the metadata of already annotated similar artefacts for a new artefact to be annotated, so that they can reuse and adapt it for the current case. In addition, we describe our web-based, 3D-centered annotation tool with metadata and object repositories supporting various databases and ontologies such as CIDOC-CRM.
A Direct Method for Robust Model-Based 3D Object Tracking from a Monocular RGB Image
Computer Vision - ECCV 2016 Workshops. Proceedings Part I
European Conference on Computer Vision (ECCV) <14, 2016, Amsterdam, The Netherlands>
This paper proposes a novel method for robust 3D object tracking from a monocular RGB image when an object model is available. The proposed method is based on direct image alignment between consecutive frames over a 3D target object. Unlike conventional direct methods that rely only on image intensity, we additionally model intensity variations using the surface normal of the object under the Lambertian assumption. Based on this model's prediction of image intensity, we also employ a constrained objective function, which significantly alleviates degradation of the tracking performance. In experiments, we evaluate our method on datasets consisting of test sequences under challenging conditions and demonstrate its benefits compared to other methods.
A Fast, Massively Parallel Solver for Large, Irregular Pairwise Markov Random Fields
High Performance Graphics 2016
High-Performance Graphics (HPG) <8, 2016, Dublin, Ireland>
Given the increasing availability of high-resolution input data, today's computer vision problems tend to grow beyond what has been considered tractable in the past. This is especially true for Markov Random Fields (MRFs), which have expanded beyond millions of variables with thousands of labels. Such MRFs pose new challenges for inference, requiring massively parallel solvers that can cope with large-scale problems and support general, irregular input graphs. We propose a block coordinate descent based solver for large MRFs designed to exploit many-core hardware such as recent GPUs. We identify tree-shaped subgraphs as a block coordinate scheme for irregular topologies and optimize them efficiently using dynamic programming. The resulting solver supports arbitrary MRF topologies efficiently and can handle arbitrary, dense or sparse label sets as well as label cost functions. Together with two additional heuristics for further acceleration, our solver performs favorably even compared to modern specialized solvers in terms of speed and solution quality, especially when solving very large MRFs.
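As a minimal illustration of the tree-based block coordinate idea (our own sketch, not the paper's massively parallel GPU solver), exact min-sum dynamic programming on a chain, the simplest tree-shaped subgraph, can be written as:

```python
import numpy as np

def optimize_chain(unary, pairwise):
    """Exact min-sum DP on a chain MRF.
    unary: list of (L,) cost arrays, one per node;
    pairwise: (L, L) cost matrix between consecutive nodes.
    Returns the cost-minimising label per node."""
    n = len(unary)
    cost = unary[0].astype(float)
    back = []
    for t in range(1, n):
        # total[i, j]: best cost of reaching label j at node t from label i
        total = cost[:, None] + pairwise + unary[t][None, :]
        back.append(np.argmin(total, axis=0))
        cost = np.min(total, axis=0)
    # Backtrack from the cheapest final label
    labels = [int(np.argmin(cost))]
    for bp in reversed(back):
        labels.append(int(bp[labels[-1]]))
    return labels[::-1]
```

In the paper's setting, many such tree-shaped subproblems would be optimized in parallel as block coordinate steps over an irregular graph; the chain above only shows the per-tree dynamic program.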
A Novel Implementation Approach for Resource Holons in Reconfigurable Product Manufacturing Cell
Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics
International Conference on Informatics in Control, Automation and Robotics (ICINCO) <13, 2016, Lisbon, Portugal>
Holonic Control Architecture is a successful solution model for reconfigurable manufacturing problems. Two well-known technologies have been used separately to implement the holonic control model: the IEC 61499 standard and autonomous reactive agents. Both of these technologies have their own pros and cons. This research therefore merges the two technologies into one solution in order to magnify their pros and reduce their cons. Ultimately, it provides a novel implementation model for manufacturing holons that can be followed in similar reconfigurable manufacturing problems. A human worker cooperating with a safe industrial robot has been selected as a case study of a reconfigurable manufacturing problem. The proposed holonic control solution has been applied to this case study to evaluate its ability to satisfy the case study's requirements. The results show the ability of the proposed control solution to provide a flexible physical and logical interaction framework that can be scaled to more workers cooperating with more industrial robots.
A Software Tool for Planning and Evaluation of Non-Linear Trajectories for Minimally Invasive Lateral Skull Base Surgery
Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie (CURAC) <15, 2016, Bern, Switzerland>
The research project MUKNO II investigates the feasibility of non-linear access paths for minimally invasive lateral skull base surgery in order to optimize the safety distance to risk structures and the direction of insertion vectors. For this purpose, a new surgical planning tool for manual as well as automatic nonholonomic path planning was developed. In ten 3D surface models of the temporal bone region, trajectories to specific target points were created manually. The distance to critical structures and the curvature were evaluated along the course of these trajectories. First experiments with automatic nonholonomic planning showed the applicability of the implemented motion planner in this complex, dense environment.
Adaptive UW Image Deblurring via Sparse Representation
Eurographics 2016. Short Papers
Annual Conference of the European Association for Computer Graphics (Eurographics) <37, 2016, Lisbon, Portugal>
We present an adaptive underwater (UW) image deblurring algorithm based on sparse representation, where a blur estimation guides the algorithm towards the best image reconstruction. The strong blur in this medium is caused by forward scatter and is challenging because it increases with camera-scene distance. It is common practice to use methods such as the dark channel prior to estimate the depth map and use this information to improve image quality. However, we found this unsuccessful in the case of blur, since these methods are based on the haze phenomenon. We propose a simple but effective algorithm via sparse representation that establishes a blur strength estimate and uses this information for adaptive deblurring. Extensive experiments demonstrate the effectiveness of our method in the case of small but challenging blur changes.
Addressing Inaccuracies in BLOSUM Computation Improves Homology Search Performance
Background: BLOSUM matrices belong to the most commonly used substitution matrix series for protein homology search and sequence alignment since their publication in 1992. In 2008, Styczynski et al. discovered miscalculations in the clustering step of the matrix computation. Still, the RBLOSUM64 matrix based on the corrected BLOSUM code was reported to perform worse, at a statistically significant level, than BLOSUM62. Here, we present a further correction of the (R)BLOSUM code and provide a thorough performance analysis of BLOSUM-, RBLOSUM- and the newly derived CorBLOSUM-type matrices. We assess the homology search performance of these matrix types, derived from three different BLOCKS databases, on all versions of the ASTRAL20, ASTRAL40 and ASTRAL70 subsets, resulting in 51 different benchmarks in total. Our analysis focuses on two of the most popular BLOSUM matrices, BLOSUM50 and BLOSUM62. Results: Our study shows that fixing small errors in the BLOSUM code results in substantially different substitution matrices with a beneficial influence on homology search performance when compared to the original matrices. The CorBLOSUM matrices introduced here performed at least as well as their BLOSUM counterparts in ~75% of all test cases. On up-to-date ASTRAL databases, BLOSUM matrices were even outperformed by CorBLOSUM matrices in more than 86% of the cases. In contrast to the study by Styczynski et al., the tested RBLOSUM matrices also outperformed the corresponding BLOSUM matrices in most of the cases. Comparing the CorBLOSUM with the RBLOSUM matrices revealed no general performance advantage for either on older ASTRAL releases. On up-to-date ASTRAL databases, however, CorBLOSUM matrices performed better than their RBLOSUM counterparts in ~74% of the test cases.
Conclusions: Our results imply that CorBLOSUM-type matrices outperform the BLOSUM matrices at a statistically significant level in most of the cases, especially on up-to-date databases such as ASTRAL ≥ 2.01. Additionally, CorBLOSUM matrices are conceptually closer to those originally intended by Henikoff and Henikoff. Hence, we encourage the use of CorBLOSUM over (R)BLOSUM matrices for the task of homology search.
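For context, the heart of every BLOSUM-style matrix is a scaled log-odds score computed from observed and expected pair frequencies; a minimal sketch of that final scoring step (our own simplification, not the corrected clustering code the paper discusses) could look like:

```python
import math

def log_odds_score(q_ij, p_i, p_j, same_residue, half_bits=True):
    """Scaled log-odds substitution score, rounded to the nearest integer.
    q_ij: observed pair frequency; p_i, p_j: background frequencies."""
    # Expected pair frequency under independence; an off-diagonal pair can
    # arise in two orders, hence the factor 2.
    e_ij = p_i * p_j if same_residue else 2.0 * p_i * p_j
    scale = 2.0 if half_bits else 1.0  # BLOSUM62 is given in half-bit units
    return round(scale * math.log2(q_ij / e_ij))
```

The corrections discussed in the paper concern how sequences are clustered before the frequencies q_ij are counted, not this scoring formula itself.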
An Automatic Hypothesis of Electrical Lines from Range Scans and Photographs
International Conference on Computing in Civil and Building Engineering (ICCCBE) <16, 2016, Osaka, Japan>
Building information models (BIM) with a high level of detail and semantic information on buildings throughout their lifetime are becoming more and more important for stakeholders in the building domain. Currently, such models do not yet exist for the majority of today's building stock. With the increasing speed and precision of laser scans and photogrammetry, geometric data can be acquired at reasonable cost. Unfortunately, these data are unstructured and do not provide the high-level semantic information that stakeholders require for non-trivial workflows. A current research topic is the extraction of non-visible structures from visible geometric entities. This work uses domain-specific geometric and semantic constraints to automatically deduce information that is not directly observable in architectural objects: electrical power supply lines. It utilizes as-built BIM data from scans of indoor spaces in order to provide a hypothesis of the paths of electrical lines. The system assumes that legal requirements and standards exist that define the placement of power supply lines. This prior knowledge is formalized in a set of rules, using a 2D shape grammar that yields installation zones for a given room. Observable endpoints (sockets and switches) are detected in indoor scenes of buildings using methods from computer vision. The information from the reconstructed BIM model as well as the detections and the generated installation zones are combined in a graph that represents all likely paths the power lines could take. Using this graph and a discrete optimization approach, the subgraph corresponding to a probable hypothesis is generated. Our approach has been tested against synthetic and measured data and shows promising first results. Application possibilities include generating a probable wiring for an as-built, optically acquired building model, or suggesting cable ducts for a building reorganization or during the planning of a new building.
Benchmarking Sensors in Smart Environments - Method and Use Cases
Journal of Ambient Intelligence and Smart Environments
Smart environment applications can be based on a large variety of different sensors that may support the same use case but have specific advantages or disadvantages. Benchmarking makes it possible to determine the most suitable sensor systems for a given application by calculating a single benchmarking score based on a weighted evaluation of features that are relevant in smart environments. This set of features has to represent the complexity of applications in smart environments. In this work we present a benchmarking model that calculates a benchmarking score based on nine selected features covering aspects of performance, the environment and the pervasiveness of the application. Extensions are presented that normalize the benchmarking score if required and compensate central tendency bias if necessary. We outline how this model is applied to capacitive proximity sensors, which measure properties of conductive objects over a distance. The model is used to identify existing and find potential new application domains for this upcoming technology in smart environments.
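At its core, the scoring scheme described above is a weighted mean over feature ratings; a minimal sketch (the feature names and weights here are invented for illustration and are not the paper's nine features):

```python
def benchmark_score(ratings, weights):
    """Single benchmarking score as the weighted mean of per-feature ratings.
    ratings: dict feature -> rating; weights: dict feature -> weight."""
    total_weight = sum(weights[f] for f in ratings)
    return sum(ratings[f] * weights[f] for f in ratings) / total_weight

# Hypothetical example: two features rated on a 1-5 scale, accuracy weighted 3x
score = benchmark_score({"accuracy": 4, "range": 2}, {"accuracy": 3, "range": 1})
```

Normalization and central-tendency compensation, as mentioned in the abstract, would be applied to the ratings before this aggregation.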
Building a Driving Simulator with Parallax Barrier Displays
Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Volume 1
International Joint Conference on Computer Vision and Computer Graphics Theory and Applications (VISIGRAPP) <11, 2016, Rome, Italy>
In this paper, we present an optimized 3D stereoscopic display based on parallax barriers for a driving simulator. The overall purpose of the simulator is to enable user studies in a reproducible environment under controlled conditions to test and evaluate advanced driver assistance systems. Our contribution and the focus of this article is a visualization based on parallax barriers with (I) a-priori optimized barrier patterns and (II) an iterative calibration algorithm to further reduce visualization errors introduced by production inaccuracies. The result is an optimized 3D stereoscopic display perfectly integrated into its environment such that a single user in the simulator environment sees a stereoscopic image without having to wear specialized eyewear.
c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources
Eurographics Workshop on Graphics and Cultural Heritage (GCH) <14, 2016, Genova, Italy>
We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, available to everyone. Our novel technique solves the problem of fusing all sources captured asynchronously by multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set, the frames are sorted along a common time axis, and the ordered frame set is finally discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene evolution in 3D at a specific point in time. Just as a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured and dynamically changing 3D geometry of an event over time, thus enabling the user to interact with it in the very same way as with a static 3D model. We perform image analysis to automatically maximize the quality of results in the presence of challenging, heterogeneous and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module, available to mobile end users.
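The frame ordering and discretization step described above can be sketched as follows (a simplified illustration with hypothetical names; the actual system additionally analyses image quality before reconstruction):

```python
def discretize_frames(frames, window):
    """Sort an unordered frame set along a common time axis and split it into
    fixed-width time windows.
    frames: list of (timestamp, frame_id) pairs; window: window width in seconds.
    Returns a list of frame-id subsets, one per consecutive time window."""
    frames = sorted(frames)  # order along the time axis
    if not frames:
        return []
    t0 = frames[0][0]
    buckets = {}
    for t, fid in frames:
        buckets.setdefault(int((t - t0) // window), []).append(fid)
    return [buckets[k] for k in sorted(buckets)]
```

Each returned subset would then be handed to photogrammetric 3D reconstruction, yielding one 3D model per time window and thus the timeline of "4D frames".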
CapTap - Combining Capacitive Gesture Recognition and Acoustic Touch Detection
International Workshop on Sensor-based Activity Recognition (iWOAR) <3, 2016, Rostock, Germany>
Capacitive sensing is a common technology for finger-controlled touch screens. The variety of proximity sensors extends the range, supporting mid-air gesture interaction and operation beneath any non-conductive material. However, this comes at the cost of limited resolution for touch detection. In this paper, we present CapTap, which uses capacitive proximity and acoustic sensing to create an interactive surface that combines mid-air and touch gestures while being invisibly integrated into living room furniture. We introduce capacitive imaging, investigate the use of computer vision methods to track hand and arm positions, and present several use cases for CapTap. In a user study we found that the system has average localization errors of 1.5 cm at touch distance and 5 cm at an elevation of 20 cm above the table. The users found the system intuitive and interesting to use.
Computational Geometry in the Context of Building Information Modeling
Energy and Buildings
Building energy analysis has gained attention in recent years, as awareness of energy efficiency is rising in order to reduce greenhouse gas emissions. At the same time, the building information modeling paradigm aims to develop comprehensive digital representations of building characteristics based on semantic 3D models. Most of the data required for energy performance calculation can be found in such models; however, extracting the relevant data is not a trivial problem. This article presents an algorithm to prepare input data for energy analysis based on building information models. The crucial aspect is geometric simplification according to semantic constraints: the building element geometries are reduced to a set of surfaces representing the thermal shell as well as the internal boundaries. These boundary parts are then associated with material layers and thermally relevant data. The presented approach, previously discussed at the International Academic Conference on Places and Technologies (Ladenhauf et al., 2014), significantly reduces the time needed for energy analysis.
Constructive Roofs from Solid Building Primitives
Transactions on Computational Science XXVI
International Conference on Cyberworlds (CW) <13, 2014, Santander, Spain>
The creation of building models is of high importance due to the demand for detailed buildings in virtual worlds, games, movies and geo-information systems. Due to the high complexity of such models, especially in the urban context, their creation is often very demanding in resources. Procedural methods have been introduced to lessen these costs; they allow a building (or a class of buildings) to be specified by a higher-level approach, leaving the geometry generation to the system. While these systems allow buildings to be specified in immense detail, roofs still pose a problem. Fully automatic roof generation algorithms might not yield the desired results (especially for reconstruction purposes), and complete manual specification can get very tedious due to complex geometric configurations. We present a new method for an abstract building specification that allows complex buildings to be specified from simpler parts, with an emphasis on assisting the blending of roofs.
From Sensor to Situational Awareness
OCEANS 2016 MTS/IEEE Monterey
MTS/IEEE Oceans Conference and Exhibition (OCEANS) <2016, Monterey, CA, USA>
This paper describes the integrated FlexMoT data management and visualization software system for marine sensor data. Unlike most existing solutions, this software platform is designed to be neither data-centric nor visualization-centric, but to seamlessly integrate both the management and the visualization of measurement data. Within the project FlexMoT (Flexible Monitoring Tool), a modular, easy-to-use and quickly deployable environmental monitoring solution with a high temporal and depth resolution for underwater environments has been developed. This monitoring system was designed for use in the underwater surroundings of offshore installations such as oil rigs or gas production platforms and other critical and sensitive marine environments. It enables the exact measurement of the concentration of dissolved gases such as methane in water and of other environmental parameters that may be indicators of leakages. The FlexMoT software stack offers situational awareness and advanced decision support in environmental monitoring by organizing, preparing and presenting the gathered sensor data in such a way that (1) operator personnel of offshore installations and marine researchers can draw the right conclusions and (2) appropriate actions can be initiated swiftly and well-informed. Similar to the modular hardware approach, the software uses a plugin approach to foster simple reconfiguration for different use cases. The designed visualization system utilizes human perception to transform large amounts of data quickly and intuitively into helpful information, to draw attention to critical events or striking data, and to support explorative data analysis with interactive displays. The implemented visualization solution combines recent web technologies and linked interactive 2D and 3D data presentations utilizing a direct-touch interaction metaphor.
In this paper, we present the complete software stack of FlexMoT for data management, operational near real-time monitoring, visual data analytics of marine environmental data and visual forecast of gas leakage situations. It is proposed as a universal approach to improve visualization-based work with heterogeneous sensor data in environmental monitoring and marine research.
Guiding the Exploration of Scatter Plot Data Using Motif-based Interest Measures
Journal of Visual Languages & Computing
Finding interesting patterns in large scatter plot spaces is a challenging problem and becomes even more difficult with an increasing number of dimensions. Previous approaches for exploring large scatter plot spaces, such as the well-known Scagnostics approach, mainly focus on ranking scatter plots based on their global properties. However, local patterns often contribute significantly to the interestingness of a scatter plot. We propose a novel approach for the automatic determination of interesting views in scatter plot spaces based on the analysis of local scatter plot segments. Specifically, we automatically classify similar local scatter plot segments, which we call scatter plot motifs. Inspired by the well-known tf × idf approach from information retrieval, we compute local and global quality measures based on certain frequency properties of the local motifs. We show how these can be used to filter, rank and compare scatter plots and their incorporated motifs. We demonstrate the usefulness of our approach with synthetic and real-world data sets and showcase our corresponding data exploration tool, which visualizes the distribution of local scatter plot motifs in relation to a large overall scatter plot space.
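The tf × idf analogy mentioned above can be made concrete with a small sketch (our own minimal formulation; the paper's actual interest measures differ in detail):

```python
import math

def motif_scores(plots):
    """tf x idf style weighting of motifs: a motif scores high in a plot if it
    is frequent there but rare across the plot space.
    plots: dict plot_id -> list of motif labels occurring in that plot."""
    n = len(plots)
    # Document frequency: in how many plots does each motif occur?
    df = {}
    for motifs in plots.values():
        for m in set(motifs):
            df[m] = df.get(m, 0) + 1
    scores = {}
    for pid, motifs in plots.items():
        for m in set(motifs):
            tf = motifs.count(m) / len(motifs)       # local frequency
            scores[(pid, m)] = tf * math.log(n / df[m])  # downweight ubiquitous motifs
    return scores
```

A motif occurring in every plot gets an idf of log(1) = 0 and is thus suppressed, which matches the intuition that only locally distinctive patterns make a view interesting.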
Interactive Screenspace Stream-Compaction Fragment Rendering of Direct Illumination from Area Lights
2016 International Conference on Cyberworlds
International Conference on Cyberworlds (CW) <2016, Chongqing, China>
Interactive rendering of illumination from area lights in virtual worlds has always proved to be challenging. In this paper, we extend the work on multi-resolution rendering for direct illumination from area lights. We propose a deferred shading method for direct illumination that subdivides screen space into multi-resolution 2D fragments, in which higher-resolution fragments are created to represent geometric and depth discontinuities as well as shadow boundaries. To detect shadow boundaries, our subdivision scheme, the sub-fragment visibility test (SFVT), performs a visibility discontinuity check within each fragment and subdivides the fragment to a higher resolution level if a discontinuity is found. In addition, our proposed gradient-aware screen-space subdivision (GASS) algorithm accelerates the refinement by increasing the number of subdivisions based on gradient differences. Our technique utilizes the stream-compaction feature of the transform feedback shader (TFS) in the graphics shading pipeline to filter out fragments for soft shadow refinement. A single-pass screen-space irradiance upsampling scheme using radial basis functions (RBF) is proposed for interpolating scattered fragments. This reduces artifacts caused by large fragments. Our technique does not require precomputation and is able to run at interactive rates.
Mesh Saliency Analysis via Local Curvature Entropy
Eurographics 2016. Short Papers
Annual Conference of the European Association for Computer Graphics (Eurographics) <37, 2016, Lisbon, Portugal>
We present a novel approach for estimating mesh saliency. Our method is fast, flexible, and easy to implement. By applying the well-known concept of Shannon entropy to 3D mesh data, we obtain an efficient method to determine mesh saliency. Comparing our method to the most recent, state-of-the-art approach, we show that results of at least similar quality can be achieved within a fraction of the original computation time. We present saliency-guided mesh simplification as a possible application.
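The entropy idea can be illustrated in a few lines (a sketch under our own reading of the approach; the neighbourhood selection and binning of a real mesh pipeline are simplified away):

```python
import math

def curvature_entropy(curvatures, bins=8):
    """Shannon entropy of the curvature distribution in a vertex neighbourhood:
    flat or uniformly curved regions score low, varied regions score high."""
    lo, hi = min(curvatures), max(curvatures)
    if hi == lo:
        return 0.0  # uniform curvature carries no information
    counts = [0] * bins
    for c in curvatures:
        counts[min(int((c - lo) / (hi - lo) * bins), bins - 1)] += 1
    n = len(curvatures)
    return -sum(k / n * math.log2(k / n) for k in counts if k)
```

Evaluating this per vertex over its one-ring (or a larger geodesic) neighbourhood yields a scalar saliency field that can drive applications such as the saliency-guided simplification mentioned above.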
Multi-Camera Piecewise Planar Object Tracking with Mutual Information
Journal of Mathematical Imaging and Vision
Real-time and robust tracking of 3D objects based on a 3D model with multiple cameras is still an unsolved problem, albeit relevant in many practical and industrial applications. Major problems are caused by appearance changes of the object. We present a template-based tracking algorithm for piecewise planar objects. It is robust against changes in the appearance of the object (occlusion, illumination variation, specularities). The version we propose supports multiple cameras. The method consists in minimizing the error between the observed images of the object and the warped images of the planes. We use mutual information as the registration function, combined with an inverse composition approach to reduce the computational cost and obtain a near-real-time algorithm. We discuss different hypotheses that can be made for the optimization algorithm.
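Mutual information as a registration function can be sketched from a joint intensity histogram (an illustrative sketch only; the paper's formulation additionally handles the plane warps and their inverse-composition optimization):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between the intensities of two images of equal size,
    estimated from their joint histogram. Higher means better aligned."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())
```

Because mutual information only measures statistical dependence between intensities, not their equality, it tolerates the illumination changes and specularities that defeat plain intensity-difference costs.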
Neuroscience Based Design: Fundamentals and Applications
2016 International Conference on Cyberworlds
International Conference on Cyberworlds (CW) <2016, Chongqing, China>
Neuroscience-based or neuroscience-informed design is a new application area of Brain-Computer Interaction (BCI). It has its roots in the study of human well-being in architecture and in human factors studies in engineering and manufacturing, including neuroergonomics. In traditional human factors or well-being studies, mental workload, stress and emotion are obtained through questionnaires that are administered upon completion of a task and/or the whole experiment. Recent advances in BCI research allow Electroencephalogram (EEG) based brain state recognition algorithms to be used to assess the interaction between brain and human performance. We propose and develop an EEG-based system, CogniMeter, to monitor and analyze human factors measurements of newly designed software/hardware systems and/or working places. Machine learning techniques are applied to the EEG data to recognize levels of mental workload, stress and emotion during each task. The EEG is used as a tool to monitor and record the brain states of subjects during human factors study experiments. We describe two applications of the CogniMeter system: human performance assessment in a maritime simulator and EEG-based human factors evaluation in an Air Traffic Control (ATC) workplace. By utilizing the proposed EEG-based system, a true understanding of subjects' working patterns can be obtained. Based on the analysis of the objective real-time EEG-based data together with the subjective feedback from the subjects, we are able to reliably evaluate current systems/hardware and/or working place designs and refine new concepts and designs of future systems.
new/s/leak - Information Extraction and Visualization for Investigative Data Journalists
The 54th Annual Meeting of the Association for Computational Linguistics
Annual Meeting of the Association for Computational Linguistics <54, 2016, Berlin, Germany>
We present new/s/leak, a novel tool developed for and with the help of journalists, which enables the automatic analysis and discovery of newsworthy stories from large textual datasets. We rely on different NLP preprocessing steps such as named entity tagging and the extraction of time expressions, entity networks, relations and metadata. The system features an intuitive web-based user interface based on network visualization combined with data exploration methods and various search and faceting mechanisms. We report the current state of the software and exemplify it with the WikiLeaks PlusD (Cablegate) data.
Platypus - Indoor Localization and Identification through Sensing Electric Potential Changes in Human Bodies
Mobile Systems, Applications, and Services
International Conference on Mobile Systems, Applications, and Services (MobiSys) <14, 2016, Singapore>
Platypus is the first system to localize and identify people by remotely and passively sensing the changes in their body electric potential that occur naturally during walking. While it uses three or more electric potential sensors with a maximum range of 2 m, as a tag-free system it does not require the user to carry any special hardware. We describe the physical principles behind body electric potential changes and a predictive mathematical model of how they affect a passive electric field sensor. By inverting this model and combining data from several sensors, we derive a method for localizing people and experimentally demonstrate a median localization error of 0.16 m. We also use the model to remotely infer the change in body electric potential with a mean error of 8.8% compared to direct contact-based measurements. We show how the reconstructed body electric potential differs from person to person and thereby how to perform identification. Based on short walking sequences of 5 s, we identify four users with an accuracy of 94%, and 30 users with an accuracy of 75%. We demonstrate that the identification features are valid over multiple days, though they change with footwear.
Playing for Data: Ground Truth from Computer Games
Computer Vision - ECCV 2016. Proceedings Part II
European Conference on Computer Vision (ECCV) <14, 2016, Amsterdam, The Netherlands>
Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set.
Practical View on Face Presentation Attack Detection
Proceedings of the British Machine Vision Conference 2016 [Online]
British Machine Vision Conference (BMVC) <27, 2016, York, UK>
Face recognition is one of the most socially accepted forms of biometric recognition. The recent availability of very accurate and efficient face recognition algorithms leaves the vulnerability to presentation attacks as the major challenge to face recognition solutions. Previous works have shown high-performing presentation attack detection (PAD) solutions under controlled evaluation scenarios. This work analyzes the practical use of PAD by investigating the more realistic scenario of cross-database evaluation and presenting a state-of-the-art performance comparison. The work also investigates the relation between video duration and PAD performance. This is done along with presenting an optical-flow-based approach that outperforms state-of-the-art solutions in most experimental settings.
Procedural Mesh Features Applied to Subdivision Surfaces using Graph Grammars
Computers & Graphics
Shape Modeling International (SMI) <2016, Berlin, Germany>
A typical industrial design modelling scenario involves defining the overall shape of a product followed by adding detail features. Procedural features are well-established in computer aided design (CAD) involving regular forms, but are less applicable to free-form modelling involving subdivision surfaces. Current approaches do not generate sparse subdivision control meshes as output, which is why free-form features are manually modelled into subdivision control meshes by domain experts. Domain experts change the local topology of the subdivision control mesh to incorporate features into the surface, without increasing the mesh density unnecessarily and carefully avoiding the appearance of artefacts. In this paper we show how to translate this expert knowledge to grammar rules. The rules may then be invoked in an interactive system to automatically apply features to subdivision surfaces.
Rapid, Detail-Preserving Image Downscaling
ACM Transactions on Graphics
Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH ASIA) <9, 2016, Macao>
Image downscaling is arguably the most frequently used image processing tool. We present an algorithm based on convolutional filters in which input pixels contribute more to the output image the more their color deviates from their local neighborhood, which preserves visually important details. In a user study we verify that users prefer our results over related work. Our efficient GPU implementation works in real time when downscaling images from 24M to 70k pixels. Further, we demonstrate empirically that our method can be successfully applied to videos.
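The weighting principle can be sketched for grayscale images; this is only the core idea, not the authors' actual filter, which shapes the weights more carefully:

```python
import numpy as np

def detail_weighted_downscale(img, s, eps=1e-6):
    """Downscale a grayscale image by factor s: within each s x s block,
    a pixel's weight grows with its deviation from the block mean, so
    outliers (details) dominate the output pixel."""
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    blocks = img[:h, :w].reshape(h // s, s, w // s, s).swapaxes(1, 2)
    mean = blocks.mean(axis=(2, 3), keepdims=True)
    weight = np.abs(blocks - mean) + eps   # deviation from local neighborhood
    return (weight * blocks).sum(axis=(2, 3)) / weight.sum(axis=(2, 3))

# A flat block stays flat; a block with one bright outlier pulls toward it,
# unlike a box filter, which would average the outlier away.
flat = np.full((4, 4), 0.5)
spike = np.zeros((4, 4)); spike[0, 0] = 1.0
img = np.hstack([flat, spike])
out = detail_weighted_downscale(img, 4)
```

For the spike block a plain box filter would return 1/16 ≈ 0.06, while the deviation weighting keeps the output near 0.5, which is the detail-preserving effect the abstract describes.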
Reducing Over- and Undersegmentations of the Liver in Computed Tomographies Using Anatomical Knowledge
XIV Mediterranean Conference on Medical and Biological Engineering and Computing
Mediterranean Conference on Medical and Biological Engineering and Computing (MEDICON) <14, 2016, Paphos, Cyprus>
In recent decades, several liver segmentation methods have been proposed, ranging from region growing to more complex statistical shape models. Despite the robustness of those algorithms, liver segmentation is still a challenging task, especially in areas where neighboring organs have similar intensities, e.g., the heart and ribcage. In addition, pathological organs that contain tumors near their surface present additional difficulties. This paper presents a solution to increase the accuracy of those algorithms in the aforementioned areas. The effect of the improvement using the generated heart and ribcage walls (7% and 1%, respectively) is evaluated on 9 clinical computed tomography (CT) scans. The improvement (12%) when tumors are near the surface, by contrast, is tested on 7 clinical CT images.
Rixels: Towards Secure Interactive 3D Graphics in Engineering Clouds
The IPSI BgD Transactions on Internet Research
Cloud computing rekindles old and imposes new challenges on remote visualization especially for interactive 3D graphics applications, e.g., in engineering and/or in entertainment. In this paper we present and discuss an approach entitled 'rich pixels' (short 'rixels') that balances the requirements concerning security and interactivity with the possibilities of hardware accelerated post-processing and rendering, both on the server side as well as on the client side using WebGL.
SeismoTracker: Upgrade Any Smart Wearable to Enable a Sensing of Heart Rate, Respiration Rate, and Microvibrations
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing
Conference on Human Factors in Computing Systems (CHI) <34, 2016, San Jose, CA, USA>
In this paper we present a method to enable any smart wearable to sense vital data in resting states. These resting states (e.g., sleeping, sitting calmly) imply the presence of low-amplitude body motions. Our approach relies on seismocardiography (SCG), which only requires a built-in accelerometer. Compared to commonly applied technologies such as photoplethysmography (PPG), our approach tracks not only heart rate (HR), but also respiration rate (RR) and microvibrations (MV) of the muscles, while also being computationally inexpensive. In addition, we can calculate several other parameters, such as HR variability and RR variability. Our extracted vital parameters match the vital data gathered with clinical state-of-the-art technology. These data allow us to gain an impression of the user's activity, quality of sleep, arousal, and stress level over a whole day, week, month, or year. Moreover, we can detect whether a device is actually worn or doffed, which is crucial when connecting such data with health services. We implemented our method on two current smartwatches, a Simvalley AW420 RX as well as an LG G Watch R, and recorded user data for several months. A web platform enables users to keep track of their data.
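The basic SCG signal path can be sketched on synthetic data; the pipeline below (moving-average detrending plus peak counting) is a simplified illustration, not the authors' published algorithm:

```python
import numpy as np

def heart_rate_bpm(acc, fs):
    """Estimate heart rate from a 1-D accelerometer trace: remove the
    slow (postural) trend with a moving average, then count prominent
    local maxima, enforcing a 0.4 s refractory period between beats."""
    trend = np.convolve(acc, np.ones(fs) / fs, mode="same")
    x = acc - trend
    thresh = 0.5 * x.max()
    peaks, last = [], -fs
    for i in range(1, len(x) - 1):
        if x[i] > thresh and x[i] >= x[i - 1] and x[i] > x[i + 1] \
                and i - last > 0.4 * fs:
            peaks.append(i)
            last = i
    return 60.0 * len(peaks) * fs / len(acc)

fs = 100
t = np.arange(0, 10, 1 / fs)              # 10 s at 100 Hz
beats = np.sin(2 * np.pi * 1.0 * t) ** 21  # sharp pulse once per second
drift = 0.3 * np.sin(2 * np.pi * 0.1 * t)  # slow body movement
bpm = heart_rate_bpm(beats + drift, fs)
# roughly 60 bpm for this synthetic 1 Hz pulse train
```

Respiration rate can be extracted analogously from the low-frequency band that this sketch discards as "trend".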
Shading-Aware Multi-View Stereo
Computer Vision - ECCV 2016. Proceedings Part III
European Conference on Computer Vision (ECCV) <14, 2016, Amsterdam, The Netherlands>
We present a novel multi-view reconstruction approach that effectively combines stereo and shape-from-shading energies into a single optimization scheme. Our method uses image gradients to transition between stereo-matching (which is more accurate at large gradients) and Lambertian shape-from-shading (which is more robust in flat regions). In addition, we show that our formulation is invariant to spatially varying albedo without explicitly modeling it. We show that the resulting energy function can be optimized efficiently using a smooth surface representation based on bicubic patches, and demonstrate that this algorithm outperforms both previous multi-view stereo algorithms and shading based refinement approaches on a number of datasets.
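The gradient-driven transition can be written as a per-pixel convex blend of the two energies; the symbols and the threshold `tau` below are illustrative, not the paper's actual formulation:

```python
import numpy as np

def blended_energy(E_stereo, E_sfs, image, tau=0.1):
    """Blend per-pixel stereo and shape-from-shading energy terms:
    weight leans toward stereo at strong image gradients and toward
    shape-from-shading in flat regions."""
    gy, gx = np.gradient(image)
    grad = np.hypot(gx, gy)
    w = np.clip(grad / tau, 0.0, 1.0)  # 1 where textured, 0 where flat
    return (w * E_stereo + (1.0 - w) * E_sfs).sum()

# Toy energy maps: on a flat image only the SfS term contributes,
# on a strongly textured (ramp) image only the stereo term does.
E_stereo, E_sfs = np.full((4, 4), 5.0), np.ones((4, 4))
e_flat = blended_energy(E_stereo, E_sfs, np.zeros((4, 4)))
e_ramp = blended_energy(E_stereo, E_sfs, np.tile(np.arange(4.0), (4, 1)))
```

Minimizing such a blended energy over a surface parameterization (bicubic patches in the paper) is what combines the two cues in a single optimization.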
Stereo-Image Normalization of Voluminous Objects Improves Textile Defect Recognition
Advances in Visual Computing. 12th International Symposium, ISVC 2016
International Symposium on Visual Computing (ISVC) <12, 2016, Las Vegas, NV, USA>
The visual detection of defects in textiles is an important application in the textile industry. Existing systems require textiles to be spread flat, so they appear as 2D surfaces, in order to detect defects. In contrast, we show classification of textiles and textile feature extraction methods which can be used when textiles are in an inhomogeneous, voluminous shape. We present a novel approach to image normalization to be used in stain-defect recognition. The acquired database consists of images of piles of textiles, taken using stereo vision. The results show that a simple classifier using normalized images outperforms other machine learning approaches in classification accuracy.
Supporting Collaborative Political Decision Making - An Interactive Policy Process Visualization System
VINCI 2016. The 9th International Symposium on Visual Information Communication and Interaction
International Symposium on Visual Information Communication and Interaction (VINCI 2016) <9, 2016, Dallas, Texas>
The process of political decision making is often complex and tedious. The policy process consists of multiple steps, most of which are highly iterative. In addition, different stakeholder groups are involved in political decision making and contribute to the process. A series of textual documents accompanies the process. Examples are official documents, discussions, scientific reports, external reviews, newspaper articles, or economic white papers. Experts from the political domain report that this plethora of textual documents often exceeds their ability to keep track of the entire policy process. We present PolicyLine, a visualization system that supports different stakeholder groups in overview-and-detail tasks for large sets of textual documents in the political decision-making process. In a longitudinal design study conducted together with domain experts in political decision making, we identified missing analytical functionality on the basis of a problem and domain characterization. In an iterative design phase, we created PolicyLine in close collaboration with the domain experts. Finally, we present the results of three evaluation rounds and reflect on our collaborative visualization system.
The Cityscapes Dataset for Semantic Urban Scene Understanding
29th IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) <2016, Las Vegas, Nevada, USA>
Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes comprises a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high-quality pixel-level annotations; 20,000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.
The IQmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds
XXIII ISPRS Congress Prague 2016, Commission III
International Society for Photogrammetry and Remote Sensing Congress (ISPRS) <23, 2016, Prague, Czech Republic>
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI-B3
Current 3D data capturing, as implemented on, for example, airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
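The PCA-driven local dimensionality analysis mentioned in the workflow can be sketched as follows: the eigenvalues of a neighborhood's covariance matrix indicate whether points scatter in 1, 2, or 3 directions (linear, planar, or volumetric, the latter being characteristic of foliage). The feature definitions below are a common formulation, assumed here rather than taken from the IQmulus implementation:

```python
import numpy as np

def local_dimensionality(neighbors):
    """Return (linearity, planarity, scattering) features from the sorted
    eigenvalues l1 >= l2 >= l3 of the neighborhood covariance."""
    evals = np.linalg.eigvalsh(np.cov(neighbors.T))[::-1]
    l1, l2, l3 = np.maximum(evals, 1e-12)
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

rng = np.random.default_rng(0)
blob = rng.normal(size=(500, 3))            # volumetric, tree-like scatter
plane = blob * np.array([1.0, 1.0, 0.01])   # planar, facade/ground-like
dim_blob = local_dimensionality(blob)       # scattering dominates
dim_plane = local_dimensionality(plane)     # planarity dominates
```

Thresholding the dominant feature per neighborhood is what assigns points "scattering in 3 directions" to the tree class before the instance separation step.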
Trichromatic Reflectance Capture Using a Tunable Light Source: Setup, Characterization and Reflectance Estimation
Measuring, Modeling, and Reproducing Material Appearance 2016
Measuring, Modeling, and Reproducing Material Appearance (MMRMA) <2016, San Francisco, CA, USA>
Electronic Imaging, 9
A research project is underway to develop a gonio imager particularly dedicated to sampling the Bidirectional Reflectance Distribution Function (BRDF) of materials and material compositions employed and created by multi-material 3D printers. It comprises an almost colorimetric RGB camera and a spectrally tunable light source. In this paper, we investigate an important part of this system, particularly the approach to estimating reflectances from RGB values acquired under multiple illuminants. We first characterize the system by estimating the spectral sensitivities of the camera. Then we use the sensitivities, a set of illuminants produced by the tunable light source, and the corresponding sensor responses to estimate reflectances. For evaluating this approach, we measure the Neugebauer primary reflectances of a polyjet printer employing highly translucent photo-polymer printing materials colored in cyan, magenta, yellow, and white. Spectral and colorimetric deviations from spectroradiometric comparison measurements (average 0.67 CIEDE2000 units / 0.0286 spectral RMS) are within the inter-instrument variability of hand-held spectrophotometers used in the graphic arts for prints on paper.
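The estimation step has a natural linear-algebra sketch: under each illuminant l_k the camera responds with c_k = S·diag(l_k)·r, so stacking all illuminants gives an overdetermined system for the reflectance r. The sensitivities and illuminants below are random placeholders, whereas the real system estimates S by characterization first:

```python
import numpy as np

n = 16                                   # number of spectral bands
rng = np.random.default_rng(1)
S = rng.uniform(size=(3, n))             # assumed RGB spectral sensitivities
L = rng.uniform(size=(8, n))             # 8 assumed tunable-source illuminants
r_true = 0.5 + 0.4 * np.sin(np.linspace(0, np.pi, n))  # smooth test reflectance

# Each S * l equals S @ diag(l); stacking yields a (24, 16) linear system.
A = np.vstack([S * l for l in L])
c = A @ r_true                           # simulated noise-free sensor responses
r_est, *_ = np.linalg.lstsq(A, c, rcond=None)
```

With noisy responses one would add regularization (e.g. a Tikhonov term or a smoothness prior on r), but the noise-free sketch already shows why eight illuminants suffice to pin down sixteen spectral bands.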
Visual Analytics for Concept Exploration in Subspaces of Patient Groups
Medical doctors and researchers in bio-medicine are increasingly confronted with complex patient data, posing new and difficult analysis challenges. These data often comprise high-dimensional descriptions of patient conditions and measurements of the success of certain therapies. An important analysis question in such data is to compare and correlate patient conditions and therapy results along combinations of dimensions. As the number of dimensions is often very large, one needs to map them to a smaller number of relevant dimensions that are more amenable to expert analysis. This is because irrelevant, redundant, and conflicting dimensions can negatively affect the effectiveness and efficiency of the analytic process (the so-called curse of dimensionality). However, the possible mappings from high- to low-dimensional spaces are ambiguous. For example, the similarity between patients may change when considering different combinations of relevant dimensions (subspaces). We demonstrate the potential of subspace analysis for the interpretation of high-dimensional medical data. Specifically, we present SubVIS, an interactive tool to visually explore subspace clusters from different perspectives, introduce a novel analysis workflow, and discuss future directions for high-dimensional (medical) data analysis and its visual exploration. We apply the presented workflow to a real-world dataset from the medical domain and show its usefulness with a domain expert evaluation.
Visual-Interactive Search for Soccer Trajectories to Identify Interesting Game Situations
Visualization and Data Analysis 2016
Visualization and Data Analysis (VDA) <2016, San Francisco, CA, USA>
Electronic Imaging, 1
Recently, sports analytics has turned into an important research area of visual analytics and may provide interesting findings, such as the best player of the season, for various kinds of sports. Soccer is a very popular and tactical game which has also attracted great attention in the last few years. However, the search for complex game movements is a crucial and challenging task. We present a system for searching trajectory data in soccer matches by means of an interactive search interface that enables the user to sketch a situation of interest. Furthermore, we apply a domain-specific prefiltering process to extract a set of local movement segments which are similar to a given sketch. Our approach comprises single-trajectory, multi-trajectory, and event-specific search functions based on two different similarity measures. To demonstrate the usefulness of our approach, we define a domain-specific task analysis and conduct a case study together with a domain expert from FC Bayern München by investigating a real-world soccer match. Finally, we show that multi-trajectory search in combination with event-specific filtering is needed to describe and retrieve complex moves in soccer matches.
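A minimal version of sketch-based trajectory search can be illustrated with a simple similarity measure (arc-length resampling plus mean point-to-point distance); this is an assumed stand-in, not one of the paper's two measures:

```python
import numpy as np

def resample(traj, n=16):
    """Resample a 2D polyline to n points equally spaced in arc length."""
    traj = np.asarray(traj, float)
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(traj, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    return np.c_[np.interp(t, d, traj[:, 0]), np.interp(t, d, traj[:, 1])]

def distance(a, b):
    """Mean point-to-point distance between two resampled trajectories."""
    return np.linalg.norm(resample(a) - resample(b), axis=1).mean()

# Hypothetical sketch and candidate player runs (pitch coordinates in metres)
sketch = [(0, 0), (10, 0)]                       # straight run down the wing
runs = {"straight": [(0, 0), (5, 0), (10, 0)],
        "diagonal": [(0, 0), (10, 10)]}
best = min(runs, key=lambda k: distance(sketch, runs[k]))
# best == "straight"
```

Ranking prefiltered movement segments by such a distance is what turns a user's sketch into a retrieval query.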
Visualization of Composer Relationships Using Implicit Data Graphs
Human Interface and the Management of Information: Applications and Services
International Conference on Human Interface and the Management of Information (HIMI) <2016, Toronto, ON, Canada>
Relationships between classical music composers are known from explicit historic material, for instance the friendship between Joseph Haydn and Wolfgang Amadeus Mozart, as well as the influence of the latter on Ludwig van Beethoven. While Haydn and Mozart were critics of each other's work, Mozart and Beethoven probably never met in person. In spite of that, there is an influence especially on Beethoven's early music. While relationships between well-known composers like the ones mentioned are investigated, it can also be of historic interest to know the roles less-known composers played. Some of them might have played a part in a famous person's work but were not further analyzed, given that there have been many composers and researchers had no hints indicating which person would be worth studying. In this work we develop an approach to visually hint at possible relationships among a large number of composers. Detailed historic knowledge is not taken into account; the hints are based only on the composers' works as well as their lifetimes, in order to guess directions of influence.
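A toy version of the lifetime heuristic: two composers whose lives overlap may have influenced each other, and the direction is guessed from who was born first. The rule is an assumed simplification of the paper's approach (which also considers the works themselves):

```python
def possible_influences(composers):
    """Emit (a, b) hints meaning 'a possibly influenced b': a was born
    first and was still alive when b was born."""
    hints = []
    for a, (ba, da) in composers.items():
        for b, (bb, db) in composers.items():
            if a != b and ba < bb and da > bb:
                hints.append((a, b))
    return sorted(hints)

composers = {"Haydn": (1732, 1809), "Mozart": (1756, 1791),
             "Beethoven": (1770, 1827)}
hints = possible_influences(composers)
# [('Haydn', 'Beethoven'), ('Haydn', 'Mozart'), ('Mozart', 'Beethoven')]
```

Visualizing such directed hints as a graph is what points researchers toward composer pairs worth a closer historical look.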
Visualization System Requirements for Data Processing Pipeline Design and Optimization
IEEE Transactions on Visualization and Computer Graphics
Eurographics Conference on Visualization (EuroVis) <19, 2017, Barcelona, Spain>
The rising quantity and complexity of data creates a need to design and optimize data processing pipelines - the set of data processing steps, parameters and algorithms that perform operations on the data. Visualization can support this process but, although there are many examples of systems for visual parameter analysis, there remains a need to systematically assess users' requirements and match those requirements to exemplar visualization methods. This article presents a new characterization of the requirements for pipeline design and optimization. This characterization is based on both a review of the literature and first-hand assessment of eight application case studies. We also match these requirements with exemplar functionality provided by existing visualization tools. Thus, we provide end-users and visualization developers with a way of identifying functionality that addresses data processing problems in an application. We also identify seven future challenges for visualization research that are not met by the capabilities of today's systems.