
Santos, Pedro; Ritz, Martin; Fuhrmann, Constanze; Fellner, Dieter W.

3D Mass Digitization: A Milestone for Archeological Documentation

2017

VAR. Virtual Archaeology Review [online], Vol.8 (2017), 16, pp. 1-11

In the heritage field, the demand for fast and efficient 3D digitization technologies for historic remains is increasing. Moreover, 3D digitization has proved to be a promising approach to enable precise reconstructions of objects. Yet, unlike the 2D digital acquisition of cultural goods that is widely used today, 3D digitization often still requires a significant investment of time and money. To make it more widely available to heritage institutions, the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD has developed CultLab3D, the world's first fully automatic 3D mass digitization facility for collections of three-dimensional objects. CultLab3D is specifically designed to automate the entire 3D digitization process, thus allowing users to scan and archive objects on a large scale. In addition, scanning and lighting technologies are combined to capture the exact geometry, texture, and optical material properties of artefacts and to produce highly accurate, photo-realistic representations. The unique setup shortens the time needed for digitization to several minutes per artefact instead of hours, as required by conventional 3D scanning methods.


Getto, Roman; Merz, Johannes; Kuijper, Arjan; Fellner, Dieter W.

3D Meta Model Generation with Application in 3D Object Retrieval

2017

Mao, Xiaoyang (Ed.) et al.: CGI 2017. Proceedings of the Computer Graphics International Conference. New York: ACM, 2017. (ACM International Conference Proceedings Series (ICPS) 1368), 6 p.

Computer Graphics International (CGI) <34, 2017, Yokohama, Japan>

In the application of 3D object retrieval, we search for 3D objects similar to a given query object. When a user searches for a certain class of objects, such as 'planes', the results can be unsatisfying: many object variations are possible for a single class, and not all of them are covered by one or a few example objects. We propose a meta model representation which corresponds to a procedural model with meta-parameters. Changing the meta-parameters leads to different variations of a 3D object. For the meta model generation, a single object is constructed with a modeling tool. We automatically extract a procedural representation of the object. By inserting meta-parameters we generate our meta model. The meta model defines a whole object class. The user can choose a meta model and search for all objects similar to any instance of the meta model to retrieve all objects of a certain class from a 3D object database. We show that the retrieval precision is significantly improved using the meta model as retrieval query.
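The meta-parameter idea can be illustrated with a toy sketch. The object class, part names, and parameters below are invented for illustration; the paper extracts the procedural representation automatically from an object built in a modeling tool.

```python
def plane_meta_model(wing_span=10.0, fuselage_len=12.0, n_engines=2):
    """Hypothetical procedural 'plane' description: each meta-parameter
    setting yields one variant of the same object class."""
    parts = [("fuselage", fuselage_len), ("wing", wing_span)]
    parts += [("engine", 1.5)] * n_engines
    return parts

# Sampling the meta-parameters generates many instances of the class;
# any of them can then serve as a retrieval query.
variants = [plane_meta_model(wing_span=s) for s in (8.0, 10.0, 14.0)]
```

Searching for objects similar to *any* instance of the meta model, rather than to one fixed example, is what covers the intra-class variation described above.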


Santos, Pedro; Ritz, Martin; Fuhrmann, Constanze; Monroy Rodriguez, Rafael; Schmedt, Hendrik; Tausch, Reimar; Domajnko, Matevz; Knuth, Martin; Fellner, Dieter W.

Acceleration of 3D Mass Digitization Processes: Recent Advances and Challenges

2017

Ioannides, Marinos (Ed.) et al.: Mixed Reality and Gamification for Cultural Heritage. Springer International Publishing, 2017, pp. 99-128

In the heritage field, the demand for fast and efficient 3D digitization technologies for historic remains is increasing. Moreover, 3D has proven to be a promising approach to enable precise reconstructions of cultural heritage objects. Even though 3D technologies and post-processing tools are widespread, and approaches to semantic enrichment and storage of 3D models are just emerging, only a few approaches enable mass capture and computation of 3D virtual models from zoological and archeological findings. To illustrate what future 3D mass digitization systems may look like, we introduce CultLab3D, a recent approach to 3D mass digitization, annotation, and archival storage by the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD. CultLab3D can be regarded as one of the first feasible approaches worldwide to enable fast, efficient, and cost-effective 3D digitization. It is specifically designed to automate the entire process and thus allows users to scan and archive large amounts of heritage objects for documentation and preservation in the best possible quality, taking advantage of integrated 3D visualization and annotation within regular Web browsers using technologies such as WebGL and X3D.


Bernard, Jürgen; Vögele, Anna; Klein, Reinhard; Fellner, Dieter W.

Approaches and Challenges in the Visual-interactive Comparison of Human Motion Data

2017

Linsen, Lars (Ed.) et al.: IVAPP 2017. Proceedings : 8th International Conference on Information Visualization Theory and Applications (VISIGRAPP 2017 Volume 3). SciTePress, 2017, pp. 217-224

International Conference on Information Visualization Theory and Applications (IVAPP) <8, 2017, Porto, Portugal>

Many analysis goals involving human motion capture (MoCap) data require the comparison of motion patterns. Pioneering work in visual analytics has recently recognized visual comparison as substantial for visual-interactive analysis. This work reflects on the design space of visual-interactive systems facilitating the visual comparison of human MoCap data, and presents a taxonomy comprising three primary factors, following the general visual analytics process: algorithmic models, visualizations for motion comparison, and back-propagation of user feedback. Based on a literature review, relevant visual comparison approaches are discussed. We outline remaining challenges and point to inspiring works on MoCap data, information visualization, and visual analytics.


Bernard, Jürgen; Dobermann, Eduard; Sedlmair, Michael; Fellner, Dieter W.

Combining Cluster and Outlier Analysis with Visual Analytics

2017

Sedlmair, Michael (Ed.) et al.: EuroVA 2017 : EuroVis Workshop on Visual Analytics. Goslar: Eurographics Association, 2017, pp. 19-23

International EuroVis Workshop on Visual Analytics (EuroVA) <8, 2017, Barcelona, Spain>

Cluster and outlier analysis are two important tasks. Due to their nature, these tasks seem to be opposed to each other: data objects either belong to a cluster structure or to a sparsely populated outlier region. In this work, we present a visual analytics tool that allows the combined analysis of clusters and outliers. Users can add multiple clustering and outlier analysis algorithms, compare results visually, and combine the algorithms' results. The usefulness of the combined analysis is demonstrated using the example of labeling unknown data sets. The usage scenario also shows that identified clusters and outliers can share joint areas of the data space.


Altenhofen, Christian; Schuwirth, Felix; Stork, André; Fellner, Dieter W.

Implicit Mesh Generation Using Volumetric Subdivision

2017

Jaillet, Fabrice (Ed.) et al.: VRIPHYS 17: 13th Workshop in Virtual Reality Interactions and Physical Simulations. Goslar: Eurographics Association, 2017, pp. 9-19

International Workshop in Virtual Reality Interaction and Physical Simulations (VRIPHYS) <13, 2017, Lyon, France>

In this paper, we present a novel approach for a tighter integration of 3D modeling and physically-based simulation. Instead of modeling 3D objects as surface models, we use a volumetric subdivision representation. Volumetric modeling operations allow designing 3D objects in similar ways as with surface-based modeling tools. Encoding the volumetric information already in the design mesh drastically simplifies and speeds up the mesh generation process for simulation. The transition between design, simulation, and back to design is consistent and computationally cheap. Since the subdivision and mesh generation can be expressed as a precomputable matrix-vector multiplication, iteration times can be greatly reduced compared to common modeling and simulation setups. Therefore, this approach is especially well suited for early-stage modeling or optimization use cases, where many geometric changes are made in a short time and their physical effect on the model has to be evaluated frequently. To test our approach, we created, simulated, and adapted several 3D models. Additionally, we measured and evaluated the timings for generating and applying the matrices for different subdivision levels. For comparison, we also measured the tetrahedral meshing functionality offered by CGAL for similar numbers of elements. When the topology changes, our implicit meshing approach proves to be up to 70 times faster than creating the tetrahedral mesh based only on the outer surface. Without topology changes, and by precomputing the matrices, we achieve a speed-up factor of up to 2800.
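The precomputed matrix-vector formulation can be sketched with a minimal example. For simplicity it uses Chaikin curve subdivision in place of the paper's volumetric scheme; the principle of composing subdivision levels into one precomputed matrix, so that each design change only costs a matrix-vector product, is the same.

```python
def chaikin_matrix(n):
    """Subdivision matrix for one Chaikin step on a closed polygon of n points."""
    S = [[0.0] * n for _ in range(2 * n)]
    for i in range(n):
        j = (i + 1) % n
        S[2 * i][i], S[2 * i][j] = 0.75, 0.25          # point near p_i
        S[2 * i + 1][i], S[2 * i + 1][j] = 0.25, 0.75  # point near p_{i+1}
    return S

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Precompute two subdivision levels as a single matrix: S2 = S(2n) @ S(n).
n = 4
S2 = matmul(chaikin_matrix(2 * n), chaikin_matrix(n))

# Applying a design change is now one matrix-vector product instead of
# re-running the subdivision procedure from scratch.
pts = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
refined = matmul(S2, pts)  # 16 refined points from 4 control points
```

Because each row of the composed matrix is an affine combination of control points, moving a control point and re-evaluating the refined mesh is cheap, which mirrors the iteration-time argument in the abstract.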


Fellner, Dieter W.; Baier, Konrad; Ackeren, Janine van; Alexandrin, Max; Barth, Anna; Bockholt, Ulrich; Kopold, Franziska; Löwer, Chris; May, Thorsten; Peters, Wiebke; Wehner, Detlef; Gollnast, Anja; Bumke, Carina

Jahresbericht 2016: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2017

Darmstadt, 2017

Fraunhofer IGD has recently bundled its research activities into four guiding themes, which form the basis of its work and connect various topics across departments. One of these guiding themes is "Visual Computing as a Service - the platform for applied visual computing". The foundation of this universal platform for visual computing solutions has been laid and is being continuously extended. This technological approach forms the basis for the other guiding themes. "Individual Health - digital solutions for healthcare" addresses the data generated in personalized medicine, using the institute's visual computing technologies. The guiding theme "Smart City - innovative, digital, and sustainable" asks how the life cycle of urban processes can be supported. And the guiding theme "Digitalized Work - humans in Industrie 4.0" is primarily about supporting people in a production environment transformed by digitalization.


Edelsbrunner, Johannes; Havemann, Sven; Sourin, Alexei; Fellner, Dieter W.

Procedural Modeling of Architecture with Round Geometry

2017

Computers & Graphics, Vol.64 (2017), pp. 14-25

International Conference on Cyberworlds (CW) <2016, Chongqing, China>

Creation of procedural 3D building models can significantly reduce the costs of modeling, since it allows for generating a variety of similar shapes from one procedural description. The common field of application for procedural modeling is the modeling of straight building facades, which are very well suited for shape grammars, a special kind of procedural modeling system. In order to generate round building geometry, we present a way to set up different coordinate systems in shape grammars. Besides Cartesian, these are primarily cylindrical and spherical coordinate systems for generating structures such as towers or domes that can procedurally adapt to different dimensions and parameters. Users can apply common splitting idioms from shape grammars in their familiar way to create round instead of straight geometry. The second enhancement we propose is a way for users to give high-level inputs that are used to automatically arrange and adapt parts of the models.
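A minimal sketch of an angular split in a cylindrical coordinate system, as one might use for a tower facade. The function name and parameters are illustrative assumptions; real shape grammars express such splits declaratively rather than as plain functions.

```python
import math

def split_angular(radius, n_segments, height):
    """Split the lateral surface of a cylinder into n equal facade segments,
    returning each segment's four corners in Cartesian coordinates."""
    segments = []
    for k in range(n_segments):
        a0 = 2 * math.pi * k / n_segments
        a1 = 2 * math.pi * (k + 1) / n_segments
        corners = [(radius * math.cos(a), radius * math.sin(a), z)
                   for z in (0.0, height) for a in (a0, a1)]
        segments.append(corners)
    return segments

# Eight facade segments around a tower; downstream rules could refine each
# segment further, exactly like a split on a straight facade.
tower_facades = split_angular(radius=3.0, n_segments=8, height=10.0)
```

The point of the coordinate-system abstraction is visible here: the split is specified along the angular axis as if it were a straight axis, and only the final corner positions are mapped back to Cartesian space.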


Altenhofen, Christian; Dietrich, Andreas; Stork, André; Fellner, Dieter W.

Rixels: Towards Secure Interactive 3D Graphics in Engineering Clouds

2017

Lukas, Uwe von (Ed.) et al.: Go-3D 2017: Mit 3D Richtung Maritim 4.0 : Tagungsband zur Konferenz Go-3D 2017. Stuttgart: Fraunhofer Verlag, 2017, pp. 25-43

Go-3D <8, 2017, Rostock, Germany>

Cloud computing rekindles old challenges and imposes new ones on remote visualization, especially for interactive 3D graphics applications, e.g., in engineering and entertainment. In this paper we present and discuss an approach entitled 'rich pixels' (short: 'rixels') that balances the requirements concerning security and interactivity with the possibilities of hardware-accelerated post-processing and rendering, both on the server side and on the client side using WebGL.


Getto, Roman; Kuijper, Arjan; Fellner, Dieter W.

Unsupervised 3D Object Retrieval with Parameter-Free Hierarchical Clustering

2017

Mao, Xiaoyang (Ed.) et al.: CGI 2017. Proceedings of the Computer Graphics International Conference. New York: ACM, 2017. (ACM International Conference Proceedings Series (ICPS) 1368), 6 p.

Computer Graphics International (CGI) <34, 2017, Yokohama, Japan>

In 3D object retrieval, additional knowledge such as user input, classification information, or database-dependent configured parameters is rarely available in real scenarios. For example, meta data about 3D objects is seldom available if the objects are not within a well-known evaluation database. We propose an algorithm which improves the performance of unsupervised 3D object retrieval without using any additional knowledge. For computing the distances in our system, any descriptor can be chosen; we use the Panorama descriptor. Our algorithm uses a precomputed, parameter-free agglomerative hierarchical clustering and combines the information of the cluster hierarchy with the individual distances to improve a single object query. Additionally, we propose an adaptation algorithm for cases where new objects are frequently added to the database. We evaluate our approach on six databases comprising a total of 13,271 objects in 481 classes. We show that our algorithm improves the average precision in an unsupervised scenario without any parameter configuration.
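A simplified sketch of the general idea, not the paper's exact algorithm: build an agglomerative single-linkage hierarchy over pairwise descriptor distances, then bias a query ranking towards objects that join the query's cluster early in the hierarchy. The blending weight and the 1D stand-in for descriptor space are invented for illustration.

```python
def single_linkage(dist):
    """Return merge events (level, members) of a full agglomerative hierarchy."""
    clusters = [{i} for i in range(len(dist))]
    merges = []
    while len(clusters) > 1:
        d, a, b = min((dist[i][j], a, b)
                      for a, ca in enumerate(clusters)
                      for b, cb in enumerate(clusters) if a < b
                      for i in ca for j in cb)
        merged = clusters[a] | clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]
        merges.append((d, frozenset(merged)))
    return merges

def rerank(query, dist, merges, weight=0.5):
    """Blend raw distance with the hierarchy level at which an object joins the query."""
    def join_level(obj):
        return next(d for d, members in merges if query in members and obj in members)
    return sorted((o for o in range(len(dist)) if o != query),
                  key=lambda o: dist[query][o] + weight * join_level(o))

objects = [0.0, 0.1, 1.0, 1.1]                        # 1D stand-in for a descriptor space
dist = [[abs(p - q) for q in objects] for p in objects]
merges = single_linkage(dist)                          # precomputable, parameter-free
ranking = rerank(0, dist, merges)                      # nearest-first, cluster-aware
```

Since the hierarchy is built once over the whole database, only the cheap blending step runs per query, which matches the precomputation argument in the abstract.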


Bernard, Jürgen; Dobermann, Eduard; Vögele, Anna; Krüger, Björn; Kohlhammer, Jörn; Fellner, Dieter W.

Visual-Interactive Semi-Supervised Labeling of Human Motion Capture Data

2017

Wischgoll, Thomas (Ed.) et al.: Visualization and Data Analysis 2017. Springfield: IS&T, 2017. (Electronic Imaging), pp. 34-45

Visualization and Data Analysis (VDA) <2017, Burlingame, CA, USA>

The characterization and abstraction of large multivariate time series data often poses challenges with respect to effectiveness or efficiency. Using the example of human motion capture data, challenges exist in creating compact solutions that still reflect semantics and kinematics in a meaningful way. We present a visual-interactive approach for the semi-supervised labeling of human motion capture data. Users are enabled to assign labels to the data, which can subsequently be used to represent the multivariate time series as sequences of motion classes. The approach combines multiple views supporting the user in the visual-interactive labeling process. Visual guidance concepts further ease the labeling process by propagating the results of supportive algorithmic models. The abstraction of motion capture data to sequences of event intervals allows overview and detail-on-demand visualizations even for large and heterogeneous data collections. The guided selection of candidate data for the extension and improvement of the labeling closes the feedback loop of the semi-supervised workflow. We demonstrate the effectiveness and efficiency of the approach in two usage scenarios, taking visual-interactive learning and human motion synthesis as examples.


Bernard, Jürgen; Ritter, Christian; Sessler, David; Zeppelzauer, Matthias; Kohlhammer, Jörn; Fellner, Dieter W.

Visual-Interactive Similarity Search for Complex Objects by Example of Soccer Player Analysis

2017

Linsen, Lars (Ed.) et al.: IVAPP 2017. Proceedings : 8th International Conference on Information Visualization Theory and Applications (VISIGRAPP 2017 Volume 3). SciTePress, 2017, pp. 75-87

International Conference on Information Visualization Theory and Applications (IVAPP) <8, 2017, Porto, Portugal>

The definition of similarity is a key prerequisite when analyzing complex data types in data mining, information retrieval, or machine learning. However, a meaningful definition is often hampered by the complexity of data objects and particularly by different notions of subjective similarity latent in targeted user groups. Taking the example of soccer players, we present a visual-interactive system that learns users' mental models of similarity. In a visual-interactive interface, users are able to label pairs of soccer players with respect to their subjective notion of similarity. Our proposed similarity model automatically learns the respective concept of similarity using an active learning strategy. A visual-interactive retrieval technique is provided to validate the model and to execute downstream retrieval tasks for soccer player analysis. The applicability of the approach is demonstrated in different evaluation strategies, including usage scenarios and cross-validation tests.


Tausch, Reimar; Schmedt, Hendrik; Santos, Pedro; Schröttner, Martin; Fellner, Dieter W.

3DHOG for Geometric Similarity Measurement and Retrieval on Digital Cultural Heritage Archives

2016

De Pietro, Giuseppe (Ed.) et al.: Intelligent Interactive Multimedia Systems and Services 2016. Switzerland: Springer International Publishing, 2016. (Smart Innovation, Systems and Technologies 55), pp. 459-469

KES International Conference on Intelligent Interactive Multimedia Systems and Services (IIMSS) <9, 2016, Puerto de la Cruz, Tenerife, Spain>

With projects such as CultLab3D, 3D digital preservation of cultural heritage will become more affordable, and with this, the number of 3D models representing scanned artefacts will increase dramatically. However, once mass digitization is possible, the subsequent bottleneck to overcome is the annotation of cultural heritage artefacts with provenance data. Current annotation tools are mostly based on textual input; they may be able to link an artefact to documents, pictures, and videos, but only some tools already support 3D models. Therefore, we envisage the need to aid curators by allowing for fast, web-based, semi-automatic, 3D-centered annotation of artefacts with metadata. In this paper we give an overview of various technologies we are currently developing to address this issue. On the one hand, we want to store 3D models with similarity descriptors which are applicable independently of the different 3D model quality levels of the same artefact. The goal is to retrieve and suggest to the curator the metadata of already annotated similar artefacts for a new artefact to be annotated, so that it can be reused and adapted to the current case. In addition, we describe our web-based, 3D-centered annotation tool with metadata and object repositories supporting various databases and ontologies such as CIDOC-CRM.


El Hakimi, Wissam; Fellner, Dieter W. (Betreuer); Sakas, Georgios (Betreuer); Schipper, Jörg (Betreuer)

Accurate 3D-Reconstruction and -Navigation for High-Precision Minimal-Invasive Interventions

2016

Darmstadt, TU, Diss., 2016

Current lateral skull base surgery is largely invasive, since it requires wide exposure and direct visualization of anatomical landmarks to avoid damaging critical structures. A multi-port approach aiming to reduce this invasiveness has recently been investigated. In this approach, three canals are drilled from the skull surface to the surgical region of interest: the first canal for the instrument, the second for the endoscope, and the third for material removal or an additional instrument. The transition to minimally invasive approaches in lateral skull base surgery requires sub-millimeter accuracy and high outcome predictability, which results in high requirements for image acquisition as well as for navigation. Computed tomography (CT) is a non-invasive imaging technique allowing the visualization of internal patient organs. Planning optimal drill channels based on patient-specific models requires highly accurate three-dimensional (3D) CT images. This thesis focuses on the reconstruction of high-quality CT volumes. To this end, two conventional imaging systems are investigated: spiral CT scanners and C-arm cone-beam CT (CBCT) systems. Spiral CT scanners acquire volumes with typically anisotropic resolution, i.e. the voxel spacing in the slice-selection direction is larger than the in-plane spacing. A new super-resolution reconstruction approach is proposed to recover images with high isotropic resolution from two orthogonal low-resolution CT volumes. C-arm CBCT systems offer CT-like 3D imaging capabilities while being appropriate for interventional suites. A main drawback of these systems is the commonly encountered CT artifacts due to several limitations in the imaging system, such as mechanical inaccuracies.
This thesis contributes new methods to enhance the CBCT reconstruction quality by addressing two main reconstruction artifacts: the misalignment artifacts caused by mechanical inaccuracies, and the metal artifacts caused by the presence of metal objects in the scanned region. CBCT scanners are appropriate for intra-operative image-guided navigation. For instance, they can be used to control the drill process based on intra-operatively acquired 2D fluoroscopic images. For successful navigation, an accurate estimate of the C-arm pose relative to the patient anatomy and the associated surgical plan is required. A new algorithm has been developed to fulfill this task with high precision. The performance of the introduced methods is demonstrated on simulated and real data.


Braun, Andreas; Wichert, Reiner; Kuijper, Arjan; Fellner, Dieter W.

Benchmarking Sensors in Smart Environments - Method and Use Cases

2016

Journal of Ambient Intelligence and Smart Environments, Vol.8 (2016), 6, pp. 645-664

Smart environment applications can be based on a large variety of different sensors that may support the same use case but have specific advantages or disadvantages. Benchmarking allows determining the most suitable sensor systems for a given application by calculating a single benchmarking score, based on a weighted evaluation of features that are relevant in smart environments. This set of features has to represent the complexity of applications in smart environments. In this work we present a benchmarking model that calculates a benchmarking score based on nine selected features covering aspects of performance, the environment, and the pervasiveness of the application. Extensions are presented that normalize the benchmarking score if required and compensate central tendency bias, if necessary. We outline how this model is applied to capacitive proximity sensors that measure properties of conductive objects over a distance. The model is used to identify existing and find potential new application domains for this upcoming technology in smart environments.
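The single-score idea can be sketched as a weighted average of normalized per-feature scores. The feature names, weights, and values below are illustrative assumptions; the paper defines nine specific features and its own normalization extensions.

```python
def benchmark_score(scores, weights):
    """Weighted average of per-feature scores, each expected in [0, 1]."""
    assert scores.keys() == weights.keys()
    total = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total

# Hypothetical feature ratings for one sensor system and one application:
sensor = {"accuracy": 0.8, "range": 0.6, "unobtrusiveness": 0.9}
weights = {"accuracy": 2.0, "range": 1.0, "unobtrusiveness": 1.0}
score = benchmark_score(sensor, weights)  # one comparable number per sensor system
```

Reweighting the same feature ratings for a different application is what lets one benchmark compare sensor systems across use cases.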


Schinko, Christoph; Peer, Markus; Hammer, Daniel; Pirstinger, Matthias; Lex, Cornelia; Koglbauer, Ioana; Eichberger, Arno; Holzinger, Jürgen; Eggeling, Eva; Fellner, Dieter W.; Ullrich, Torsten

Building a Driving Simulator with Parallax Barrier Displays

2016

Magnenat-Thalmann, Nadia (Ed.) et al.: Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Volume 1 : 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. SciTePress, 2016, pp. 283-291

International Joint Conference on Computer Vision and Computer Graphics Theory and Applications (VISIGRAPP) <11, 2016, Rome, Italy>

In this paper, we present an optimized 3D stereoscopic display based on parallax barriers for a driving simulator. The overall purpose of the simulator is to enable user studies in a reproducible environment under controlled conditions to test and evaluate advanced driver assistance systems. Our contribution, and the focus of this article, is a visualization based on parallax barriers with (I) a-priori optimized barrier patterns and (II) an iterative calibration algorithm to further reduce visualization errors introduced by production inaccuracies. The result is an optimized 3D stereoscopic display, perfectly integrated into the simulator environment, such that a single user sees a stereoscopic image without having to wear specialized eyewear.


Ritz, Martin; Knuth, Martin; Domajnko, Matevz; Posniak, Oliver; Santos, Pedro; Fellner, Dieter W.

c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources

2016

Catalano, Chiara Eva (Ed.) et al.: GCH 2016 : Eurographics Workshop on Graphics and Cultural Heritage. Goslar: Eurographics Association, 2016, pp. 12-18

Eurographics Symposium on Graphics and Cultural Heritage (GCH) <14, 2016, Genova, Italy>

We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, available to everyone. Our novel technique solves the problem of fusing all sources captured asynchronously from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set, the frames are sorted along a common time axis, and the ordered frame set is finally discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene evolution in 3D at a specific point in time. Just like a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured and dynamically changing 3D geometry of an event over time, thus enabling the user to interact with it in the very same way as with a static 3D model. We perform image analysis to automatically maximize the quality of the results in the presence of challenging, heterogeneous, and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module, available to mobile end-users.
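The temporal discretization step described above can be sketched as follows; frames from heterogeneous devices are pooled, sorted along a common time axis, and bucketed into windows, each window later fed to photogrammetric 3D reconstruction. Field names and the window size are illustrative assumptions.

```python
def discretize_frames(frames, window):
    """Group (timestamp, frame_id) pairs into consecutive time windows."""
    frames = sorted(frames)                      # common time axis
    if not frames:
        return []
    t0 = frames[0][0]
    buckets = {}
    for t, frame in frames:
        buckets.setdefault(int((t - t0) // window), []).append(frame)
    return [buckets[k] for k in sorted(buckets)]

# Frames from three devices ("a", "b", "c"), asynchronous capture times in seconds:
frames = [(0.1, "a0"), (0.4, "b0"), (1.2, "a1"), (1.3, "c0"), (2.6, "b1")]
groups = discretize_frames(frames, window=1.0)   # one 3D reconstruction per group
```

Each resulting group is one "4D frame": the frames inside a window are treated as a quasi-static view set of the scene at that point in time.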


Ladenhauf, Daniel; Battisti, Kurt; Berndt, Rene; Eggeling, Eva; Fellner, Dieter W.; Gratzl-Michlmair, Markus; Ullrich, Torsten

Computational Geometry in the Context of Building Information Modeling

2016

Energy and Buildings, Vol.115 (2016), pp. 78-84

Building energy analysis has gained attention in recent years, as awareness of energy efficiency is rising in order to reduce greenhouse gas emissions. At the same time, the building information modeling paradigm aims to develop comprehensive digital representations of building characteristics based on semantic 3D models. Most of the data required for energy performance calculation can be found in such models; however, extracting the relevant data is not a trivial problem. This article presents an algorithm to prepare input data for energy analysis based on building information models. The crucial aspect is geometric simplification according to semantic constraints: the building element geometries are reduced to a set of surfaces representing the thermal shell as well as the internal boundaries. These boundary parts are then associated with material layers and thermally relevant data. The presented approach, previously discussed at the International Academic Conference on Places and Technologies (Ladenhauf et al., 2014), significantly reduces the time needed for energy analysis.


Edelsbrunner, Johannes; Krispel, Ulrich; Havemann, Sven; Sourin, Alexei; Fellner, Dieter W.

Constructive Roofs from Solid Building Primitives

2016

Gavrilova, Marina L. (Ed.) et al.: Transactions on Computational Science XXVI : Special Issue on Cyberworlds and Cybersecurity. Berlin, Heidelberg, New York: Springer, 2016. (Lecture Notes in Computer Science (LNCS) 9550), pp. 17-40

International Conference on Cyberworlds (CW) <13, 2014, Santander, Spain>

The creation of building models is highly important, owing to the demand for detailed buildings in virtual worlds, games, movies, and geo-information systems. Due to the high complexity of such models, especially in the urban context, their creation is often very demanding in resources. Procedural methods have been introduced to lessen these costs; they allow specifying a building (or a class of buildings) through a higher-level approach and leave the geometry generation to the system. While these systems allow specifying buildings in immense detail, roofs still pose a problem. Fully automatic roof generation algorithms might not yield the desired results (especially for reconstruction purposes), and complete manual specification can become very tedious due to complex geometric configurations. We present a new method for abstract building specification that allows specifying complex buildings from simpler parts, with an emphasis on assisting the blending of roofs.


Catalano, Chiara Eva; Luca, Livio De; Falcidieno, Bianca; Fellner, Dieter W.

GCH 2016: Eurographics Workshop on Graphics and Cultural Heritage

2016

Goslar : Eurographics Association, 2016

Eurographics Symposium on Graphics and Cultural Heritage (GCH) <14, 2016, Genova, Italy>

The 14th EUROGRAPHICS Workshop on Graphics and Cultural Heritage (GCH 2016) aims to foster an international dialogue between ICT experts and CH scientists to have a better understanding of the critical requirements for processing, managing, and delivering cultural information to a broader audience. The objective of the workshop is to present and showcase new developments within the overall process chain, from data acquisition, analysis and synthesis, 3D documentation, and data management, to new forms of interactive presentations and 3D printing solutions. Interdisciplinary approaches for analysis, classification and interpretation of cultural artefacts are particularly relevant to the event. The intention of GCH 2016 is also to establish a scientific forum for scientists and CH professionals to exchange and disseminate novel ideas and techniques in research, education and dissemination of Cultural Heritage, transfer them in practice, and trace future research and technological directions. Therefore, we seek original, innovative and previously unpublished contributions in the computer graphics area applied to digital cultural heritage, challenging the state of the art solutions and leveraging new ideas for future developments. Specific sessions will be devoted to reports on applications, experiences and projects in this domain. 
Contributions are solicited in (but not limited to) the following areas:
- 2/3/4D data acquisition and processing in Cultural Heritage
- Multispectral imaging and data fusion
- Digital acquisition, representation and communication of intangible heritage
- Material acquisition analysis
- Heterogeneous data collection, integration and management
- 3D printing of cultural assets
- Shape analysis and interpretation
- Similarity and search of digital artefacts
- Visualization and Virtual Museums
- Multi-modal and interactive environments and applications for Cultural Heritage
- Spatial and mobile augmentation of physical collections with digital presentations
- Semantic-aware representation of digital artefacts (metadata, classification schemes, annotation)
- Digital libraries and archiving of 3D documents
- Standards and documentation
- Serious games in Cultural Heritage
- Storytelling and design of heritage communications
- Tools for education and training in Cultural Heritage
- Experiences and projects in Computer Graphics and CH documentation, conservation and dissemination

Show publication details

Riffnaller-Schiefer, A.; Augsdörfer, Ursula H.; Fellner, Dieter W.

Isogeometric Shell Analysis with NURBS Compatible Subdivision Surfaces

2016

Applied Mathematics and Computation, Vol.272 (2016), Part 1, pp. 139-147. Available online 18 July 2015

We present a discretisation of Kirchhoff-Love thin shells based on a subdivision algorithm that generalizes NURBS to arbitrary topology. The isogeometric framework combines the advantages of both subdivision and NURBS, enabling higher-degree analysis on watertight meshes of arbitrary geometry, including conic sections. Because multiple knots are supported, it is possible to exploit symmetries in the geometry for a more efficient subdivision-based analysis. The new subdivision algorithm improves the flexibility of current isogeometric analysis approaches and enables new use cases.

Show publication details

Fellner, Dieter W.; Baier, Konrad; Ackeren, Janine van; Bornemann, Heidrun; Wehner, Detlef; Bumke, Carina; Boysens, Oliver; Egner, Juliane

Jahresbericht 2015: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2016

Darmstadt, 2016

For over 25 years, Fraunhofer IGD has been developing technologies and applications based on visual computing. Technical solutions and market-relevant products emerge in cooperation with its partners. Fraunhofer IGD puts the human user at the center and provides technical solutions that make working with computers easier and more efficient. The institute's solutions build on the human brain's pronounced ability to grasp and process complex information quickly by visual means. Through its numerous innovations, Fraunhofer IGD raises the interaction between humans and machines to a new level. Aided by the computer and the developments of visual computing, people can work in a more results-oriented and effective manner.

Show publication details

Limper, Max; Kuijper, Arjan; Fellner, Dieter W.

Mesh Saliency Analysis via Local Curvature Entropy

2016

Santos, Luis Paulo (Ed.) et al.: Eurographics 2016. Short Papers. The Eurographics Association, 2016, pp. 13-16

Annual Conference of the European Association for Computer Graphics (Eurographics) <37, 2016, Lisbon, Portugal>

We present a novel approach for estimating mesh saliency. Our method is fast, flexible, and easy to implement. By applying the well-known concept of Shannon entropy to 3D mesh data, we obtain an efficient method to determine mesh saliency. Comparing our method to the most recent, state-of-the-art approach, we show that results of at least similar quality can be achieved within a fraction of the original computation time. We present saliency-guided mesh simplification as a possible application.
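The approach applies Shannon entropy to local curvature distributions. A minimal sketch of that idea (a generic illustration, not the paper's implementation; the histogram binning and the neighborhood index lists are assumptions):

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Shannon entropy (in bits) of a histogram over the given values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def mesh_saliency(curvatures, neighborhoods, bins=16):
    """Per-vertex saliency: entropy of the curvature distribution in each
    vertex's local neighborhood (given as index lists into `curvatures`)."""
    return [shannon_entropy(curvatures[idx], bins=bins) for idx in neighborhoods]
```

A flat region (uniform curvature) yields zero entropy, i.e., low saliency, while regions with varied curvature score high, which is what drives the saliency-guided simplification mentioned above.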

Show publication details

Cui, Jian; Fellner, Dieter W.; Kuijper, Arjan; Sourin, Alexei

Mid-Air Gestures for Virtual Modeling with Leap Motion

2016

Streitz, Norbert (Ed.) et al.: Distributed, Ambient, and Pervasive Interactions : DAPI 2016. Switzerland: Springer International Publishing, 2016. (Lecture Notes in Computer Science (LNCS) 9749), pp.221-230

International Conference on Distributed, Ambient and Pervasive Interactions (DAPI) <4, 2016, Toronto, Canada>

We study to what extent the Leap Motion can be used for mid-air interaction in various virtual assembly and shape modeling tasks. First, we outline the conceptual design phase, carried out by studying and classifying how human hands are used for various creative tasks in real life. Then, during the functional design phase, we propose a hypothesis on how to efficiently implement and use natural gestures with the Leap Motion and introduce the ideas behind the algorithms. Next, we describe the implementation of the gestures in the virtual environment. A user study validating our concept follows.

Show publication details

De Stefano, Antonio; Tausch, Reimar; Santos, Pedro; Kuijper, Arjan; Di Gironimo, Giuseppe; Fellner, Dieter W.; Siciliano, Bruno

Modeling a Virtual Robotic System for Automated 3D Digitization of Cultural Heritage Artifacts

2016

Journal of Cultural Heritage, (2016), 19, pp. 531-537

Complete and detailed 3D scanning of cultural heritage artifacts is still a time-consuming process that requires skilled operators. Automating the digitization process is necessary to deal with the growing number of artifacts available. It poses a challenging task because of the uniqueness and variety in size, shape and texture of these artifacts. Scanning devices usually have a limited focus or measurement volume and thus require precise positioning. We propose a robotic system for automated photogrammetric 3D reconstruction. It consists of a lightweight robotic arm with a mounted camera and a turntable for the artifact. In a virtual 3D environment, all relevant parts of the system are modeled and monitored. Here, camera views can be planned in position and orientation with respect to the depth of field of the camera, the size of the object and the preferred coverage density. Given a desired view, solving the inverse kinematics allows for collision-free and stable optimization of joint configurations and turntable rotation. We adopt the closed-loop inverse kinematics (CLIK) algorithm to solve the inverse kinematics on the basis of a particular definition of the orientation error. The design and parameters of the solver are described, including the option to shift the weighting between different parts of the objective function, such as precision or mechanical stability. We then use these kinematic solutions to perform the actual scanning of real objects. We conduct several tests with different kinds of objects, showing reliable and sufficient results in positioning and safety. We present a visual comparison of the real robotic system with its virtual environment, demonstrating how view poses for objects of different sizes are successfully planned, achieved and used for 3D reconstruction.
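A closed-loop IK solver integrates joint velocities that drive the task-space error to zero. A minimal planar sketch of that loop (a generic two-link arm with position error only; the authors' solver additionally handles orientation error, collision avoidance and weighting):

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm: joint angles -> end-effector position."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def clik(q0, target, gain=1.0, dt=0.05, steps=500):
    """Closed-loop inverse kinematics: repeatedly map the task-space error
    through the Jacobian pseudoinverse and integrate the joint velocities."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        e = target - fk(q)                          # task-space position error
        dq = np.linalg.pinv(jacobian(q)) @ (gain * e)
        q = q + dt * dq
    return q
```

For any reachable target the error decays geometrically; the pseudoinverse keeps the update well defined near singular configurations.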

Show publication details

Thaller, Wolfgang; Augsdörfer, Ursula H.; Fellner, Dieter W.

Procedural Mesh Features Applied to Subdivision Surfaces Using Graph Grammars

2016

Computers & Graphics, (2016), 58, pp. 184-192

Shape Modeling International (SMI) <2016, Berlin, Germany>

A typical industrial design modelling scenario involves defining the overall shape of a product followed by adding detail features. Procedural features are well-established in computer aided design (CAD) involving regular forms, but are less applicable to free-form modelling involving subdivision surfaces. Current approaches do not generate sparse subdivision control meshes as output, which is why free-form features are manually modelled into subdivision control meshes by domain experts. Domain experts change the local topology of the subdivision control mesh to incorporate features into the surface, without increasing the mesh density unnecessarily and carefully avoiding the appearance of artefacts. In this paper we show how to translate this expert knowledge to grammar rules. The rules may then be invoked in an interactive system to automatically apply features to subdivision surfaces.

Show publication details

Edelsbrunner, Johannes; Havemann, Sven; Sourin, Alexei; Fellner, Dieter W.

Procedural Modeling of Round Building Geometry

2016

Sourin, Alexei (Ed.) et al.: 2016 International Conference on Cyberworlds : CW 2016. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2016, pp. 81-88

International Conference on Cyberworlds (CW) <2016, Chongqing, China>

Creation of procedural 3D building models can significantly lessen the costs of modeling, since it allows generating a variety of similar shapes from one procedural description. The common field of application for procedural modeling is the modeling of straight building facades, which are very well suited for shape grammars, a special kind of procedural modeling system. In order to generate round building geometry, we present a way to set up different coordinate systems in shape grammars. Besides Cartesian, these are primarily cylindrical and spherical coordinate systems for generating structures such as towers or domes that can procedurally adapt to different dimensions and parameters. Users can apply common splitting idioms from shape grammars in the familiar way to create round instead of straight geometry.
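The core mechanism, evaluating grammar splits along a cylindrical angular axis and mapping the results to Cartesian space, can be illustrated with a small sketch (the helper names are generic assumptions, not the paper's grammar system):

```python
import math

def cylindrical_to_cartesian(r, phi, h):
    """Map grammar coordinates (radius, angle, height) on a cylinder to Cartesian x, y, z."""
    return (r * math.cos(phi), r * math.sin(phi), h)

def split_arc(phi0, phi1, n):
    """A grammar-style 'repeat split' along the angular axis:
    n equal facade segments covering the arc [phi0, phi1]."""
    step = (phi1 - phi0) / n
    return [(phi0 + i * step, phi0 + (i + 1) * step) for i in range(n)]
```

A tower facade split into four bays would use `split_arc(0, 2 * math.pi, 4)` exactly as a straight facade would be split along its x axis; only the final mapping to Cartesian space differs.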

Show publication details

Altenhofen, Christian; Dietrich, Andreas; Stork, André; Fellner, Dieter W.

Rixels: Towards Secure Interactive 3D Graphics in Engineering Clouds

2016

The IPSI BgD Transactions on Internet Research, Vol.12 (2016), 1, pp. 31-38

Cloud computing rekindles old challenges and imposes new ones on remote visualization, especially for interactive 3D graphics applications, e.g., in engineering and entertainment. In this paper we present and discuss an approach entitled 'rich pixels' ('rixels' for short) that balances the requirements of security and interactivity with the possibilities of hardware-accelerated post-processing and rendering, both on the server side and on the client side using WebGL.

Show publication details

Silva, Nelson; Shao, Lin; Schreck, Tobias; Eggeling, Eva; Fellner, Dieter W.

Sense.me - Open Source Framework for the Exploration and Visualization of Eye Tracking Data

2016

IEEE Computer Society, 2016

IEEE Conference on Visualization (VIS) <2016, Baltimore, USA>

We present a new open-source prototype framework to explore and visualize data from eye-tracking experiments. First, standard eye-trackers are used to record raw eye-gaze data points during user experiments. Second, the analyst can configure gaze-analysis parameters, such as the definition of areas of interest, multiple thresholds, or the labeling of special areas, and upload the data to a search server. Third, a faceted web interface is available for exploring and visualizing the users' eye gaze on a large number of areas of interest. Our framework integrates several common visualizations and also includes new combined representations such as an eye-analysis overview and a clustered matrix that shows the strength of attention time between multiple areas of interest. The framework can be readily used for the exploration of eye-tracking experiment data. We make the source code of our prototype framework for eye-tracking data analysis available.
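The second step, mapping raw gaze samples to analyst-defined areas of interest, can be sketched generically (rectangular AOIs and the function name are assumptions, not the framework's actual API):

```python
def gaze_dwell_counts(gaze_points, aois):
    """Count raw gaze samples falling inside each named area of interest.
    AOIs are axis-aligned rectangles given as (x0, y0, x1, y1)."""
    counts = {name: 0 for name in aois}
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts
```

Dividing each count by the sampling rate turns it into a dwell time per AOI, the kind of quantity the clustered attention matrix described above aggregates.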

Show publication details

Cui, Jian; Kuijper, Arjan; Fellner, Dieter W.; Sourin, Alexei

Understanding People's Mental Models of Mid-Air Interaction for Virtual Assembly and Shape Modeling

2016

Magnenat-Thalmann, Nadia (Conference Chair) et al.: Proceedings of the 29th International Conference on Computer Animation and Social Agents : CASA 2016. New York: ACM, 2016, pp. 139-146

International Conference on Computer Animation and Social Agents (CASA) <29, 2016, Geneva, Switzerland>

Naturalness of the mid-air interaction interface is important for virtual assembly and shape modeling. In order to design an interface perceived as "natural" by most people, common behaviors and mental patterns of mid-air interaction have to be recognized, an area barely explored so far. This paper serves the purpose of understanding users' mental interaction models in order to provide standards and recommendations for devising a natural virtual interaction interface. We tested three kinds of tasks - manipulating tasks, deforming tasks and tool-based operating tasks - on 16 participants. We found that: 1) different features of mental models were observed for different types of tasks, and interaction techniques should be designed to match these features; 2) a virtual-hand self-avatar helps estimate the size of virtual objects, as well as plan and visualize the complex processes and procedures of a task, which is especially helpful for tool-based tasks; 3) bimanual interaction is the dominant interaction mode preferred by the majority; 4) natural gestures for deforming tasks always reflect the forces exerted. These suggestions are useful for designing a mid-air interaction interface matching users' mental models.

Show publication details

Berndt, Rene; Silva, Nelson; Caldera, Christian; Krispel, Ulrich; Eggeling, Eva; Sunk, Alexander; Reisinger, Gerhard; Sihn, Wilfried; Fellner, Dieter W.

VASCO - Digging the Dead Man's Chest of Value Streams

2016

International Journal on Advances in Intelligent Systems, Vol.9 (2016), 3, pp. 401-416

Value stream mapping is a lean management method for analyzing and optimizing a series of events in production or services. Even today, the first step in value stream analysis - the acquisition of the current-state map - is still carried out with pen and paper by physically visiting the production line. We capture a digital representation of what manufacturing processes look like in reality. Using a meta description together with a dependency graph, the manufacturing processes can be represented and efficiently analyzed for future production planning as a future-state map. With VASCO we present a tool that contributes to all parts of value stream analysis - from data acquisition through analysis, planning and comparison up to the simulation of alternative future-state maps. We call this a holistic approach to value stream mapping, including detailed analysis of lead time, productivity, space, distance, material disposal, energy and carbon dioxide equivalents, and their effect on the calculated direct product costs.

Show publication details

Berndt, Rene; Silva, Nelson; Caldera, Christian; Krispel, Ulrich; Eggeling, Eva; Sunk, Alexander; Edtmayr, Thomas; Sihn, Wilfried; Fellner, Dieter W.

VASCO - Mastering the Shoals of Value Stream Mapping

2016

Sehring, Hans-Werner (Ed.) et al.: CONTENT 2016 : The Eighth International Conference on Creative Content Technologies [online]. [cited 22 June 2017] Available from: http://www.thinkmind.org/index.php?view=instance&instance=CONTENT+2016: ThinkMind, 2016, pp. 42-47

International Conference on Creative Content Technologies (CONTENT) <8, 2016, Rome, Italy>

Value stream mapping is a lean management method for analyzing and optimizing a series of events in production or services. Even today, the first step in value stream analysis - the acquisition of the current state - is still carried out with pen and paper by physically visiting the production site. We capture a digital representation of what manufacturing processes look like in reality. Using a meta description together with a dependency graph, the manufacturing processes can be represented and efficiently analyzed for future production planning. With our Value Stream Creator and explOrer (VASCO) we present a tool that contributes to all parts of value stream analysis - from data acquisition through planning and comparison with previous realities up to the simulation of possible future states.

Show publication details

Silva, Nelson; Shao, Lin; Schreck, Tobias; Eggeling, Eva; Fellner, Dieter W.

Visual Exploration of Hierarchical Data Using Degree-of-Interest Controlled by Eye-Tracking

2016

Aigner, Wolfgang (Ed.) et al.: FMT 2016 : Proceedings of the 9th Forum Media Technology 2016 and 2nd All Around Audio Symposium 2016. (CEUR Workshop Proceedings 1734), pp. 82-89

Forum Media Technology (FMT) <9, 2016, St. Pölten, Austria>

Effective visual exploration of large data sets is an important problem. A standard technique for mapping large data sets is to use hierarchical data representations (trees, or dendrograms) that users may navigate. If the data sets get large, so do the hierarchies, and effective methods for navigation are required. Traditionally, users navigate visual representations using desktop interaction modalities, including mouse interaction. Motivated by the recent availability of low-cost eye-tracker systems, we investigate possibilities for using eye tracking to control the visual-interactive data exploration process. We implemented a proof-of-concept system for the visual exploration of hierarchical data, exemplified by scatter plot diagrams which are explored for grouping and similarity relationships. The exploration includes degree-of-interest-based distortion controlled by user attention read from eye-movement behavior. We present the basic elements of our system and give an illustrative use-case discussion, outlining the application possibilities. We also identify interesting future developments based on the given data views and the captured eye-tracking information.
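Degree-of-interest distortion commonly follows Furnas' fisheye formulation, DOI(n) = API(n) - distance(n, focus), with a priori interest decreasing with depth. A small self-contained sketch over a tree given as parent links (an illustration of the general technique, not the system's code):

```python
def compute_doi(parents, focus):
    """Furnas-style degree of interest for every node of a tree:
    DOI(n) = -depth(n) - tree_distance(n, focus)."""
    def depth(n):
        d = 0
        while parents[n] is not None:
            n = parents[n]
            d += 1
        return d

    def path_to_root(n):
        path = [n]
        while parents[n] is not None:
            n = parents[n]
            path.append(n)
        return path

    def tree_dist(a, b):
        common = set(path_to_root(a)) & set(path_to_root(b))
        lca = max(common, key=depth)          # lowest (deepest) common ancestor
        return (depth(a) - depth(lca)) + (depth(b) - depth(lca))

    return {n: -depth(n) - tree_dist(n, focus) for n in parents}
```

Nodes near the focus (here fed by the eye-tracker's attention estimate) get high DOI and would be rendered enlarged; distant siblings get low DOI and can be compressed.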

Show publication details

Landesberger, Tatiana von; Fellner, Dieter W.; Ruddle, Roy A.

Visualization System Requirements for Data Processing Pipeline Design and Optimization

2016

IEEE Transactions on Visualization and Computer Graphics, Vol.23 (2016), 8, pp. 2028-2041. Published Online: 25 August 2016

Eurographics Conference on Visualization (EuroVis) <19, 2017, Barcelona, Spain>

The rising quantity and complexity of data creates a need to design and optimize data processing pipelines - the set of data processing steps, parameters and algorithms that perform operations on the data. Visualization can support this process but, although there are many examples of systems for visual parameter analysis, there remains a need to systematically assess users' requirements and match those requirements to exemplar visualization methods. This article presents a new characterization of the requirements for pipeline design and optimization. This characterization is based on both a review of the literature and first-hand assessment of eight application case studies. We also match these requirements with exemplar functionality provided by existing visualization tools. Thus, we provide end-users and visualization developers with a way of identifying functionality that addresses data processing problems in an application. We also identify seven future challenges for visualization research that are not met by the capabilities of today's systems.

Show publication details

Getto, Roman; Fellner, Dieter W.

3D Object Retrieval with Parametric Templates

2015

Pratikakis, Ioannis (Ed.) et al.: Eurographics 2015 Workshop on 3D Object Retrieval : EG 3DOR 2015. Goslar: Eurographics Association, 2015. (Eurographics Workshop and Symposia Proceedings Series), pp. 47-54

Eurographics Workshop on 3D Object Retrieval (EG 3DOR) <8, 2015, Zurich, Switzerland>

We propose a 3D object retrieval system which uses parametric templates as prior knowledge for the retrieval. A parametric template represents an object domain and a semantic concept like 'chair' or 'plane', or a more specific concept like 'dining-chair' or 'biplane'. The template can be specified at a general or specific level and can even equal actual retrieved objects. The parametric template is composed of several input parameters and an operation chain which constructs an object. Different parameter combinations lead to different object instances. We combine and evaluate a parametric template with different descriptors. Our results show that the use of parametric templates can raise the retrieval performance significantly.
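The notion of a parametric template, an operation chain whose input parameters yield different instances of one semantic concept, can be illustrated with a toy 'chair' generator (the parameters and the parts-list representation are entirely hypothetical, not the paper's template format):

```python
def chair_template(seat_height=0.45, seat_size=0.4, leg_count=4):
    """Hypothetical 'chair' template: one parameter combination produces one
    object instance, represented here simply as named parts with dimensions
    (name, width, depth, height)."""
    parts = [("seat", seat_size, seat_size, 0.05)]
    parts += [("leg", 0.04, 0.04, seat_height) for _ in range(leg_count)]
    parts.append(("back", seat_size, 0.05, seat_height))
    return parts
```

Sampling the parameter space (e.g. `chair_template(leg_count=3, seat_height=0.6)`) enumerates variations of the class, which is what lets a single template stand in for many example objects in a retrieval query.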

Show publication details

Fuhrmann, Constanze; Santos, Pedro; Fellner, Dieter W.

3D-Massendigitalisierung - ein Meilenstein für die museale Nutzung

2015

Museumskunde, Vol.80 (2015), 1, pp. 58-61

Extensive efforts to digitally process and visualize museum collections have been under way for more than a decade. Following recommendations of the European Commission within the Digital Agenda, the EU member states are called upon to advance the digitization, online availability, and digital preservation of historical material. The reflection group on the "Digitization of Cultural Heritage in Europe" likewise expects, pointing to its economic potential, a "new Renaissance" through the aggregation of collections online. Against this background, while the digital processing of cultural goods in 2D has long been a core task of libraries, archives, and museums, the 3D digitization of three-dimensional artifacts remains a challenge due to their more complex shape. Because of the high cost and time involved, efforts here concentrate on prestigious individual objects rather than entire collections, as no commercially available technologies for efficient, high-precision mass digitization in 3D exist so far. Moreover, 3D digitization still goes unconsidered in the few existing digitization and preservation strategies of museum institutions. Only recently were the DFG's practical guidelines on digitization revised so that the term no longer refers exclusively to digital photography. The updated version distinguishes between the digital representation of a three-dimensional object and its digital replica. The former is a "photographic capture of all relevant visual properties of the object, usually from more than one viewpoint". A digital replica, by contrast, is a "faithful reconstruction of its shape and surface-light interaction", i.e., a digital 3D model including its shell and appearance.

Show publication details

Eckeren, Katharina van; Tausch, Reimar; Santos, Pedro; Kuijper, Arjan; Fellner, Dieter W.

3DHOG for Geometric Similarity Measurement and Retrieval for Digital Cultural Heritage Archives

2015

Guidi, Gabriele (Ed.) et al.: 2015 Digital Heritage International Congress. Volume 2. New York: The Institute of Electrical and Electronics Engineers (IEEE), 2015, pp. 117-120

Digital Heritage International Congress (DH) <2015, Granada, Spain>

With projects such as CultLab3D, the 3D digital preservation of cultural heritage will become more affordable, and with this the number of 3D models representing scanned artefacts will increase dramatically. However, once mass digitization is possible, the subsequent bottleneck to overcome is the annotation of cultural heritage artefacts with provenance data. Current annotation tools are mostly based on textual input, at best able to link an artefact to documents, pictures and videos; only some tools already support 3D models. Therefore, we envisage the need to aid curators by allowing fast, web-based, semi-automatic, 3D-centered annotation of artefacts with metadata. In this paper we give an overview of various technologies we are currently developing to address this issue. On the one hand, we want to store 3D models with similarity descriptors which are applicable independently of the different 3D model quality levels of the same artefact. The goal is to retrieve and suggest to the curator the metadata of already annotated similar artefacts for a new artefact to be annotated, so that it can be reused and adapted to the current case. In addition, we describe our web-based, 3D-centered annotation tool with meta- and object repositories supporting various databases and ontologies such as CIDOC-CRM.
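The envisaged suggestion mechanism, ranking already annotated artefacts by descriptor similarity to a new scan, can be sketched generically (cosine similarity and the record layout are assumptions; the paper's 3DHOG descriptor itself is not reproduced here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two geometric descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def suggest_metadata(query_desc, annotated):
    """Rank already-annotated artefacts by descriptor similarity to the query,
    so their metadata can be offered to the curator for reuse."""
    ranked = sorted(annotated,
                    key=lambda rec: cosine_similarity(query_desc, rec["descriptor"]),
                    reverse=True)
    return [rec["metadata"] for rec in ranked]
```

The key property mentioned in the abstract, robustness across quality levels of the same artefact, would have to come from the descriptor itself; the ranking step stays the same.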

Show publication details

Weber, Daniel; Mueller-Roemer, Johannes; Stork, André; Fellner, Dieter W.

A Cut-Cell Geometric Multigrid Poisson Solver for Fluid Simulation

2015

Computer Graphics Forum, Vol.34 (2015), 2, pp. 481-491

Annual Conference of the European Association for Computer Graphics (Eurographics) <36, 2015, Zürich, Switzerland>

We present a novel multigrid scheme based on a cut-cell formulation on regular staggered grids which generates compatible systems of linear equations on all levels of the multigrid hierarchy. This geometrically motivated formulation is derived from a finite volume approach and exhibits an improved rate of convergence compared to previous methods. Existing fluid solvers with voxelized domains can directly benefit from this approach by only modifying the representation of the non-fluid domain. The necessary building blocks are fully parallelizable and can therefore benefit from multi- and many-core architectures.

Show publication details

Merz, Johannes; Getto, Roman; Landesberger, Tatiana von; Fellner, Dieter W.

Analysis of 3D Mesh Correspondences Concerning Foldovers

2015

Skala, Vaclav (Ed.): WSCG 2015. Short Papers Proceedings : 23rd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. [cited 16 October 2015] Available from http://wscg.zcu.cz/DL/wscg DL.htm: University of West Bohemia, 2015, pp. 149-158

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <23, 2015, Plzen, Czech Republic>

Foldovers (i.e., the folding over of triangles in a 3D mesh) are artifacts that cause problems for morphing. Mesh morphing uses vertex correspondences between the source and the target mesh to define the morphing path. Although techniques exist for foldover-free mesh morphing, the identification and correction of foldovers in existing correspondences is still an unsolved issue. This paper proposes a new technique for the identification and resolution of foldovers in mesh morphing with predefined 3D mesh correspondences. The technique is evaluated on several different meshes with given correspondences. The mesh examples comprise both real medical data and synthetically deformed meshes. We also present various possible usage scenarios for the new algorithm, showing its benefit for the analysis and comparison of mesh correspondences with respect to foldover problems.
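A common way to flag foldover candidates is to test whether a triangle's orientation flips when its vertices move to their corresponding target positions. A minimal sketch of that normal-flip test (a generic criterion, not necessarily the one used in the paper):

```python
import numpy as np

def triangle_normal(v0, v1, v2):
    """Unit normal of a triangle, or the zero vector for degenerate triangles."""
    n = np.cross(v1 - v0, v2 - v0)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n

def find_foldovers(faces, src_verts, dst_verts):
    """Indices of triangles whose orientation flips when the vertices move
    from their source positions to the corresponding target positions."""
    flipped = []
    for i, (a, b, c) in enumerate(faces):
        n_src = triangle_normal(src_verts[a], src_verts[b], src_verts[c])
        n_dst = triangle_normal(dst_verts[a], dst_verts[b], dst_verts[c])
        if np.dot(n_src, n_dst) < 0:      # normal reversed: foldover candidate
            flipped.append(i)
    return flipped
```

During a linear morph, a triangle whose normal ends up reversed must pass through a degenerate state, which is exactly the artifact the paper's correspondence analysis targets.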

Show publication details

Krispel, Ulrich; Evers, Henrik Leander; Tamke, Martin; Viehauser, Robert; Fellner, Dieter W.

Automatic Texture and Orthophoto Generation from Registered Panoramic Views

2015

Gonzalez-Aguilera, D. (Ed.) et al.: 3D-ARCH 2015 : 3D Virtual Reconstruction and Visualization of Complex Architectures. [cited 18 June 2015] Available from: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5-W4/index.html, 2015. (The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5/W4), pp. 131-137

ISPRS International Workshop 3D-ARCH <6, 2015, Avila, Spain>

Recent trends in 3D scanning aim at the fusion of range data and color information from images. Combining these two outputs makes it possible to extract novel semantic information. The workflow presented in this paper makes it possible to detect objects, such as light switches, that are hard to identify from range data alone. In order to detect these elements, we developed a method that utilizes range data and color information from high-resolution panoramic images of indoor scenes, taken at the scanner's position. A proxy geometry is derived from the point clouds; orthographic views of the scene are automatically identified from the geometry, and an image per view is created via projection. We combine methods from computer vision to train a classifier that detects the objects of interest in these orthographic views. Furthermore, these views can be used for automatic texturing of the proxy geometry.

Show publication details

Schinko, Christoph; Krispel, Ulrich; Ullrich, Torsten; Fellner, Dieter W.

Built by Algorithms - State of the Art on Procedural Modeling

2015

Gonzalez-Aguilera, D. (Ed.) et al.: 3D-ARCH 2015 : 3D Virtual Reconstruction and Visualization of Complex Architectures. [cited 18 June 2015] Available from: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5-W4/index.html, 2015. (The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5/W4), pp. 469-479

ISPRS International Workshop 3D-ARCH <6, 2015, Avila, Spain>

The idea of generative modeling is to allow the generation of highly complex objects based on a set of formal construction rules. Using these construction rules, a shape is described by a sequence of processing steps, rather than just by the result of all applied operations: Shape design becomes rule design. Due to its very general nature, this approach can be applied to any domain and to any shape representation that provides a set of generating functions. The aim of this report is to give an overview of the concepts and techniques of procedural and generative modeling as well as their applications with a special focus on Archaeology and Architecture.

Show publication details

Braun, Andreas; Wichert, Reiner; Kuijper, Arjan; Fellner, Dieter W.

Capacitive Proximity Sensing in Smart Environments

2015

Journal of Ambient Intelligence and Smart Environments, Vol.7 (2015), 4, pp. 483-510

To create applications for smart environments we can select from a huge variety of sensors that measure environmental parameters or detect activities of different actors within the premises. Capacitive proximity sensors use weak electric fields to recognize conductive objects, such as the human body. They can be unobtrusively applied or even provide information when hidden from view. In the past years various research groups have used this sensor category to create singular applications in this domain. On the following pages we discuss the application of capacitive proximity sensors in smart environments, establishing a classification in comparison to other sensor technologies. We give a detailed overview of the background of this sensing technology and identify specific application domains. Based on existing systems from literature and a number of prototypes we have created in the past years we can specify benefits and limitations of this technology and give a set of guidelines to researchers that are considering this technology in their smart environment applications.

Show publication details

Große-Puppendahl, Tobias; Fellner, Dieter W. (Betreuer); Van Laerhoven, Kristof (Betreuer)

Capacitive Sensing and Communication for Ubiquitous Interaction and Environmental Perception

2015

Darmstadt, TU, Diss., 2015

During the last decade, the functionalities of electronic devices within a living environment constantly increased. Besides the personal computer, now tablet PCs, smart household appliances, and smartwatches have enriched the technology landscape. The trend towards an ever-growing number of computing systems has resulted in many highly heterogeneous human-machine interfaces. Users are forced to adapt to technology instead of having the technology adapt to them. Gathering context information about the user is a key factor for improving the interaction experience. Emerging wearable devices show the benefits of sophisticated sensors which make interaction more efficient, natural, and enjoyable. However, many technologies still lack these desirable properties, motivating me to work towards new ways of sensing a user's actions and thus enriching the context. In my dissertation I follow a human-centric approach which ranges from sensing hand movements to recognizing whole-body interactions with objects. This goal can be approached with a vast variety of novel and existing sensing approaches. I focused on perceiving the environment with quasi-electrostatic fields by making use of capacitive coupling between devices and objects. Following this approach, it is possible to implement interfaces that are able to recognize gestures, body movements and manipulations of the environment at typical distances of up to 50 cm. These sensors usually have a limited resolution and can be sensitive to other conductive objects or electrical devices that affect electric fields. The technique allows for designing very energy-efficient and high-speed sensors that can be deployed unobtrusively underneath any kind of non-conductive surface. Compared to other sensing techniques, exploiting capacitive coupling also has a low impact on a user's perceived privacy. In this work, I also aim at enhancing the interaction experience with new perceptional capabilities based on capacitive coupling.
I follow a bottom-up methodology and begin by presenting two low-level approaches for environmental perception. In order to perceive a user in detail, I present a rapid prototyping toolkit for capacitive proximity sensing. The prototyping toolkit shows significant advancements in terms of temporal and spatial resolution. Due to some limitations, namely the inability to determine the identity and fine-grained manipulations of objects, I contribute a generic method for communications based on capacitive coupling. The method allows for designing highly interactive systems that can exchange information through air and the human body. I furthermore show how human body parts can be recognized from capacitive proximity sensors. The method is able to extract multiple object parameters and track body parts in real-time. I conclude my thesis with contributions in the domain of context-aware devices and explicit gesture-recognition systems.

Show publication details

Encarnação, José L.; Fellner, Dieter W.

Computer Graphics "Made in Germany": Darmstadt, the Leading "Computer Graphics and Visual Computing Hub" in Europe: The Way from 1975 to 2014: 40 Years of Computer Graphics in Darmstadt

2015

Computers & Graphics, Vol.53 (2015), Part A, pp. 13-27

The paper reports on the 40 years of development of Computer Graphics and, more recently, Visual Computing (VC) at the Technische Universität Darmstadt in Germany, from its beginning in 1975 to the leading "Computer Graphics and Visual Computing Hub" in Europe as of 2014. This development is described along three axes. First, the institutional development and its rationale to establish Computer Graphics as a discipline of Computer Science and as an enabling technology for developing our Knowledge Society are described. Second, the scientific and technological impact based on the teaching activities and the large number of theses submitted in Darmstadt in the area during these 40 years is addressed. Finally, the research roadmaps of the Computer Graphics and Visual Computing Hub in Darmstadt are presented with respect to the different stages of CG and VC research, with respect to a scientific view of the large number of projects implemented over these 40 years and, finally, also with respect to the project results as seen from the media. In order to manage the quantity as well as the complexity of the information available, the description of these roadmaps is divided into four time periods: 1975-1984, 1985-1994, 1995-2004 and 2004-2015. The paper also gives the view of the authors on how they see the future of Computer Graphics and Visual Computing. At the end, the paper includes an extensive list of references for the reported content.

Show publication details

Weber, Daniel; Mueller-Roemer, Johannes; Altenhofen, Christian; Stork, André; Fellner, Dieter W.

Deformation Simulation using Cubic Finite Elements and Efficient p-multigrid Methods

2015

Computers & Graphics, Vol.53 (2015), PART B, pp. 185-195

We present a novel p-multigrid method for efficient simulation of corotational elasticity with higher-order finite elements. In contrast to other multigrid methods proposed for volumetric deformation, the resolution hierarchy is realized by varying polynomial degrees on a tetrahedral mesh. The multigrid approach can be used either as a direct method or as a preconditioner for a conjugate gradient algorithm. We demonstrate the efficiency of our approach and compare it to commonly used direct sparse solvers and preconditioned conjugate gradient methods. As the polynomial representation is defined w.r.t. the same mesh, the update of the matrix hierarchy necessary for corotational elasticity can be computed efficiently. We introduce the use of cubic finite elements for volumetric deformation and investigate different combinations of polynomial degrees for the hierarchy. We analyze the applicability of cubic finite elements for deformation simulation by comparing against analytical results in a static and a dynamic scenario, and demonstrate our algorithm in dynamic simulations with quadratic and cubic elements. Applying our method to quadratic and cubic finite elements results in a speed-up of up to a factor of 7 for solving the linear system.
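As a rough illustration of the coarse-grid-correction principle behind such multigrid hierarchies (this is not the paper's p-multigrid on tetrahedral meshes; it is a classical two-grid cycle for a 1D Poisson problem, and all names are hypothetical), a minimal sketch in pure Python:

```python
def residual(u, f, h):
    """r = f - A u for the 1D Poisson stencil (-1, 2, -1) / h^2, zero Dirichlet boundaries."""
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return r

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted Jacobi smoother; the diagonal of A is 2 / h^2."""
    for _ in range(sweeps):
        r = residual(u, f, h)
        u = [ui + omega * ri * h * h / 2.0 for ui, ri in zip(u, r)]
    return u

def restrict(r):
    """Full weighting: fine grid (2m + 1 points) -> coarse grid (m points)."""
    return [0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2]
            for i in range(len(r) // 2)]

def prolong(e, n_fine):
    """Linear interpolation: coarse grid -> fine grid."""
    u = [0.0] * n_fine
    for i, v in enumerate(e):
        u[2 * i + 1] = v          # coarse nodes sit at odd fine indices
    for i in range(0, n_fine, 2):  # interpolate the remaining fine nodes
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n_fine - 1 else 0.0
        u[i] = 0.5 * (left + right)
    return u

def two_grid_cycle(u, f, h):
    """One V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)                      # pre-smooth
    r_coarse = restrict(residual(u, f, h))             # restrict residual
    e = jacobi([0.0] * len(r_coarse), r_coarse,
               2.0 * h, sweeps=50)                     # approximate coarse solve
    u = [ui + ei for ui, ei in zip(u, prolong(e, len(u)))]  # correct
    return jacobi(u, f, h, sweeps=3)                   # post-smooth
```

In the paper's p-multigrid, restriction and prolongation transfer between polynomial degrees on the same mesh instead of between mesh resolutions, but the cycle structure (smooth, restrict the residual, correct from the coarser level, smooth again) is the same.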

Show publication details

Limper, Max; Brandherm, Florian; Fellner, Dieter W.; Kuijper, Arjan

Evaluating 3D Thumbnails for Virtual Object Galleries

2015

ACM SIGGRAPH: Proceedings Web3D 2015 : 20th International Conference on 3D Web Technology. New York: ACM, 2015, pp. 17-24

International Conference on 3D Web Technology (WEB3D) <20, 2015, Heraklion, Crete, Greece>

Virtual 3D object galleries on the Web nowadays often use real-time, interactive 3D graphics. However, the same usually does not yet hold for their preview images, sometimes referred to as thumbnails. We provide a technical analysis of the applicability of so-called 3D thumbnails within the context of virtual 3D object galleries. Like a 2D thumbnail for an image, a 3D thumbnail acts as a compact preview for a real 3D model. In contrast to an image series, however, it enables a wider variety of interaction methods and rendering effects. By performing a case study, we show that such true 3D representations are, under certain circumstances, even able to outperform 2D image series in terms of bandwidth consumption. We thus present a complete pipeline for generating compact 3D thumbnails for given meshes in a fully automatic fashion.

Show publication details

Bernard, Jürgen; Fellner, Dieter W. (Betreuer); Schreck, Tobias (Betreuer)

Exploratory Search in Time-Oriented Primary Data

2015

Darmstadt, TU, Diss., 2015

In a variety of research fields, primary data that describes scientific phenomena in an original condition is obtained. Time-oriented primary data, in particular, is an indispensable data type, derived from complex measurements depending on time. Today, time-oriented primary data is collected at rates that exceed the domain experts' abilities to seek valuable information undiscovered in the data. It is widely accepted that the magnitudes of uninvestigated data will disclose tremendous knowledge in data-driven research, provided that domain experts are able to gain insight into the data. Domain experts involved in data-driven research urgently require analytical capabilities. In scientific practice, predominant activities are the generation and validation of hypotheses. In analytical terms, these activities are often expressed in confirmatory and exploratory data analysis. Ideally, analytical support would combine the strengths of both types of activities. Exploratory Search (ES) is a concept that seamlessly includes information-seeking behaviors ranging from search to exploration. ES supports domain experts in both gaining an understanding of huge and potentially unknown data collections and the drill-down to relevant subsets, e.g., to validate hypotheses. As such, ES combines predominant tasks of domain experts applied to data-driven research. For the design of useful and usable ES systems (ESS), data scientists have to incorporate different sources of knowledge and technology. Of particular importance is the state-of-the-art in interactive data visualization and data analysis. Research in these factors is at the heart of Information Visualization (IV) and Visual Analytics (VA). Approaches in IV and VA provide meaningful visualization and interaction designs, allowing domain experts to perform the information-seeking process in an effective and efficient way.
Today, best-practice ESS almost exclusively exist for textual data content, e.g., put into practice in digital libraries to facilitate the reuse of digital documents. For time-oriented primary data, ES mainly remains at a theoretical state. Motivation and Problem Statement: This thesis is motivated by two main assumptions. First, we expect that ES will have a tremendous impact on data-driven research in many research fields. In this thesis, we focus on time-oriented primary data as a complex and important data type for data-driven research. Second, we assume that research conducted in IV and VA will particularly facilitate ES. For time-oriented primary data, however, novel concepts and techniques are required that enhance the design and the application of ESS. In particular, we observe a lack of methodological research in ESS for time-oriented primary data. In addition, the size, the complexity, and the quality of time-oriented primary data hamper content-based access, as well as the design of visual interfaces for gaining an overview of the data content. Furthermore, the question arises how ESS can incorporate techniques for seeking relations between data content and metadata to foster data-driven research. Overarching challenges for data scientists are to create usable and useful designs, urgently requiring the involvement of the targeted user group, and to provide support techniques for choosing meaningful algorithmic models and model parameters. Throughout this thesis, we resolve these challenges from conceptual, technical, and systemic perspectives. In turn, domain experts can benefit from novel ESS as a powerful analytical support to conduct data-driven research. Contribution: In essence, our contributions cover the entire time series analysis process, starting from accessing raw time-oriented primary data, through processing and transforming time series data, to the visual-interactive analysis of time series.
We present visual search interfaces providing content-based access to time-oriented primary data. In a series of novel exploration-support techniques, we facilitate both gaining an overview of large and complex time-oriented primary data collections and seeking relations between data content and metadata. Throughout this thesis, we introduce VA as a means of designing effective and efficient visual-interactive systems. Our VA techniques empower data scientists to choose appropriate models and model parameters, as well as to involve users in the design. With both principles, we support the design of usable and useful interfaces which can be included into ESS. In this way, our contributions bridge the gap between search systems requiring exploration support and exploratory data analysis systems requiring visual querying capability. In the ESS presented in two case studies, we prove that our techniques and systems support data-driven research in an efficient and effective way.

Show publication details

Grabner, Harald; Ullrich, Torsten; Fellner, Dieter W.

Generative Training for 3D-Retrieval

2015

Braz, José (Ed.) et al.: GRAPP 2015 : Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications. SciTePress, 2015, pp. 97-105

International Conference on Computer Graphics Theory and Applications (GRAPP) <10, 2015, Berlin, Germany>

A digital library for non-textual, multimedia documents can be defined by its functionality: markup, indexing, and retrieval. For textual documents, the techniques and algorithms to perform these tasks are well studied. For non-textual documents, these tasks are open research questions: How to mark up a position on a digitized statue? What is the index of a building? How to search and query for a CAD model? If no additional, textual information is available, current approaches cluster, sort and classify non-textual documents using machine learning techniques, which have a cold start problem: they either need a manually labeled, sufficiently large training set or the (automatic) clustering / classification result may not respect semantic similarity. We solve this problem using procedural modeling techniques, which can generate arbitrary training sets without the need for any "real" data. The retrieval process itself can be performed with any method. In this article we describe the histogram of inverted distances in detail and compare it to the salient local visual features method. Both techniques are evaluated using the Princeton Shape Benchmark (Shilane et al., 2004). Furthermore, we improve the retrieval results by diffusion processes.

Show publication details

Oyarzun Laura, Cristina; Fellner, Dieter W. (Betreuer); Sakas, Georgios (Betreuer); Bale, Reto (Betreuer)

Graph-matching and FEM-based Registration of Computed Tomographies for Outcome Validation of Liver Interventions

2015

Darmstadt, TU, Diss., 2015

Liver cancer is one of the leading causes of death worldwide. One of the reasons for this is the high tumor recurrence rate. The only way to reduce the recurrence rate is to ensure that all carcinogenic cells are destroyed after intervention. Unfortunately, the information available to assess the outcome of an intervention is limited. In the clinical routine, a pair of pre- and post-operatively gathered computed tomographies (CT) of the abdomen is typically compared to decide whether the patient needs further treatment. However, the post-operative liver will be deformed due to breathing and intervention, which complicates the comparison task by simple inspection of both images. The results presented in this thesis support the physician during the outcome validation process after minimally invasive interventions and open liver surgeries. To this end, the physician is provided with qualitative measures and visualizations that support the decision making task. The basis of a reliable outcome validation is an accurate non-rigid registration method. This thesis proposes to combine internal correspondences at vessel ramifications with landmarks at the surface of the organ to increase the accuracy of the registration results. The internal correspondences are the result of a novel, efficient, and fully automatic graph matching method. Landmarks at the surface of the liver are given by a method that detects the organs adjacent to it at each surface point. Both types of landmarks are incorporated in an FEM-based registration. The registration method has been tested on 25 pairs of pre- and post-operative clinical CT images, achieving an average accuracy of 1.22 mm and a positive predictive value of 0.95. As a consequence of the accuracy obtained with the proposed methods, the physician is able to determine with certainty whether the outcome of the intervention was satisfactory. Hence, he can decide without delay to re-treat the patient if needed to remove the remnant tumor. This fast response could ultimately reduce the tumor recurrence rate.

Show publication details

Schaller, Andreas; Biedenkapp, Tim; Keil, Jens; Fellner, Dieter W.; Kuijper, Arjan

Immersive Interaction Paradigms for Controlling Virtual Worlds by Customer Devices Exemplified in a Virtual Planetarium

2015

Antona, Margherita (Ed.) et al.: Universal Access in Human-Computer Interaction. Proceedings Part IV : Access to the Human Environment and Culture. Springer International Publishing, 2015. (Lecture Notes in Computer Science (LNCS) 9178), pp. 74-86

International Conference on Universal Access in Human-Computer Interaction (UAHCI) <9, 2015, Los Angeles, CA, USA>

This work provides an insight into the basics of 3D applications in conjunction with various customer devices. In this case, the application is a 3D planetarium of our solar system for a museum. The aim is to create a concept for intuitive and immersive navigation through the virtual planetarium using inexpensive customer devices. Visitors should be able to move freely and easily in the solar system. The visitor should be able to focus on the simulation rather than quickly losing interest because of a complex control application. To this end, similar approaches and previous research are examined and a new approach is described. As low-cost customer devices, the controller of the Nintendo Wii (Wiimote) and current smartphones are considered in this work. A detailed analysis of these devices is an integral part of this work. Based on the selected devices, there are various possibilities for interaction and resulting interaction concepts. For each device, a concept is developed to meet the identified needs.

Show publication details

Weber, Daniel; Stork, André (Betreuer); Fellner, Dieter W. (Betreuer); Goesele, Michael (Betreuer)

Interactive Physically Based Simulation - Efficient Higher-Order Elements, Multigrid Approaches and Massively Parallel Data Structures

2015

Darmstadt, TU, Diss., 2015

This thesis covers interactive physically based simulation for applications such as computer games or virtual environments. Interactivity, i.e., the option that a user can influence a system, imposes challenging requirements on the simulation algorithms. A simple way to meet these requirements is to drastically limit the resolution in order to guarantee low computation times. However, with current methods the number of degrees of freedom will then be rather low, which results in a low degree of realism. This is due to the fact that not every detail that is important for realistically representing the physical system can be resolved. This thesis contributes to interactive physically based simulation by developing novel methods and data structures. These can be associated with the three pillars of this thesis: more accurate discrete representations, efficient methods for linear systems, and data structures and methods for massively parallel computing. The novel approaches are evaluated in two application areas relevant to computer generated animation: simulation of dynamic volumetric deformation and fluid dynamics. The resulting accelerations allow for a higher degree of realism because the number of elements or the resolution can be significantly increased.

Show publication details

Hornung, Christoph; Encarnação, José L.; Fellner, Dieter W.

Introduction: Guest Editor Foreword: With "Words of Welcome" by Peter Liggesmeyer, Anders Ynnerman, Bob Hopgood, David Duce, Andries van Dam, James D. Foley, Henry Fuchs, José Luis Encarnação

2015

Computers & Graphics, Vol.53 (2015), Part A, pp. 1-11

Computer Graphics - today this is a well-established field in science and technology. It offers leading-edge functionality to provide a level of flexibility, adaptability and re-usability not possible even a few years ago. It is an indispensable part of our daily life; the Internet revolution would not have been possible without the easy-to-use interfaces of smartphones and tablets. It is hard to believe that only about 40 years ago Computer Graphics evolved as a self-standing discipline. Technische Universität Darmstadt formed the nucleus of Computer Graphics in Germany by establishing a professorship for "Graphische Datenverarbeitung (GRIS)" in 1975. This special issue is dedicated to 40 Years of Computer Graphics in Darmstadt, its foundation, growth and establishment in research, technology, and a broad range of application fields. It consists of a survey of 40 Years of Computer Graphics in Darmstadt, scientific papers from GRIS alumni, and Words of Welcome from pioneers of computer graphics with long-standing connections to Darmstadt.

Show publication details

Riffnaller-Schiefer, A.; Augsdörfer, Ursula H.; Fellner, Dieter W.

Isogeometric Analysis for Modelling and Design

2015

Bickel, Bernd (Ed.) et al.: Eurographics 2015. Short Papers. The Eurographics Association, 2015, pp. 17-20

Annual Conference of the European Association for Computer Graphics (Eurographics) <36, 2015, Zürich, Switzerland>

We present an isogeometric design and analysis approach based on NURBS-compatible subdivision surfaces. The approach enables the description of watertight free-form surfaces of arbitrary degree, including conic sections and an accurate simulation and analysis based directly on the designed surface. To explore the seamless integration of design and analysis provided by the isogeometric approach, we built a prototype software which combines free-form modelling tools with thin shell simulation tools to offer the designer a wide range of design and analysis instruments.

Show publication details

Fellner, Dieter W.; Baier, Konrad; Ackeren, Janine van; Bornemann, Heidrun; Fraunhoffer, Katrin; Wehner, Detlef

Jahresbericht 2014: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2015

Darmstadt, 2015

The researchers at the Fraunhofer Institute for Computer Graphics Research IGD turn information into images and images into information. This image- and model-based computer science is called "Visual Computing". It comprises computer graphics, computer vision, and virtual and augmented reality. With the help of Visual Computing, images, models, and graphics are captured, processed, and used for all conceivable computer-based applications. In doing so, the researchers correlate graphical application data with non-graphical data, which means that they computationally enrich images, videos, and 3D models with text, sound, and speech. This in turn yields new insights that can be turned into innovative products and services. Correspondingly advanced dialog techniques are designed for this purpose. Through its numerous innovations, Fraunhofer IGD raises the interaction between human and machine to a new level.

Show publication details

Schiffer, Thomas; Fellner, Dieter W.

Multi-kernel Ray Traversal for Graphics Processing Units

2015

Braz, José (Ed.) et al.: Computer Vision, Imaging and Computer Graphics : Theory and Applications. Berlin, Heidelberg, New York: Springer, 2015. (Communications in Computer and Information Science 550), pp. 78-93

International Joint Conference on Computer Vision and Computer Graphics Theory and Applications (VISIGRAPP) <9, 2014, Lisbon, Portugal>

Communications in Computer and Information Science

Ray tracing is a very popular family of algorithms that are used to compute images with high visual quality. One of its core challenges is designing an efficient mapping of ray traversal computations to massively parallel hardware architectures like modern graphics processing units (GPUs). In this paper we investigate the performance of state-of-the-art ray traversal algorithms on GPUs and discuss their potentials and limitations. Based on this analysis, a novel ray traversal scheme called batch tracing is proposed. It subdivides the task into multiple kernels, each of which is designed for efficient parallel execution. Our algorithm achieves comparable performance to current approaches and represents a promising direction for future research.

Show publication details

Zmugg, René; Braun, Andreas; Roelofsma, Peter H.M.P.; Thaller, Wolfgang; Moeskops, Lisette; Havemann, Sven; Reljic, Gabrijela; Fellner, Dieter W.

Personalization of Virtual Coaching Applications using Procedural Modeling

2015

Holzinger, Andreas (Ed.) et al.: ICT 4 AgeingWell 2015 : International Conference on Information and Communication Technologies for Ageing Well and e-Health. SciTePress, 2015, pp. 37-44

International Conference on Information and Communication Technologies for Ageing Well and e-Health (ICT4AgeingWell) <1, 2015, Lisbon, Portugal>

Virtual coaching is an application area that allows individuals to improve existing skills or learn new ones; it ranges from simple textual tutoring tools to fully immersive 3D learning situations. The latter aim at improving the learning experience with realistic 3D environments. In highly individual training scenarios it can be beneficial to provide some level of personalization of the environment. This can be supported using procedural modeling, which makes it easy to modify the shape, look, and contents of an environment. We present the application of personalization using procedural modeling in learning applications in the project V2me. This project combines virtual and social networks to help senior citizens maintain and create meaningful relationships. We present a system that uses a procedurally generated ambient virtual coaching environment that can be adjusted by training subjects themselves or in collaboration. A small user experience study was conducted that gives first insights into the acceptance of such an approach.

Show publication details

Samadzadegan, Sepideh; Fellner, Dieter W. (Betreuer); Dörsam, Edgar (Betreuer); Hardeberg, Jon Yngve (Betreuer)

Printing Beyond Color: Spectral and Specular Reproduction

2015

Darmstadt, TU, Diss., 2015

For accurate printing (reproduction), two important appearance attributes to consider are color and gloss. These attributes are related to two topics focused on in this dissertation: spectral reproduction and specular (gloss) printing. In the conventional printing workflow known as the metameric printing workflow, which we mostly use nowadays, high-quality prints -- in terms of colorimetric accuracy -- can be achieved only under a predefined illuminant (i.e. an illuminant that the printing pipeline is adjusted to; e.g. daylight). While this printing workflow is useful and sufficient for many everyday purposes, in some special cases, such as artwork (e.g. painting) reproduction, security printing, accurate industrial color communication and so on, in which accurate reproduction of an original image under a variety of illumination conditions (e.g. daylight, tungsten light, museum light, etc.) is required, metameric reproduction may produce satisfactory results only with luck. Therefore, in these cases, another printing workflow, known as the spectral printing pipeline, must be used, with the ideal aim of an illuminant-invariant match between the original image and the reproduction. In this workflow, the reproduction of spectral raw data (i.e. reflectances in the visible wavelength range), rather than the reproduction of colorimetric values (colors) alone (under a predefined illuminant), is taken into account. Due to the limitations of existing printing systems, the reproduction of all reflectances is not possible even with multi-channel (multi-colorant) printers. Therefore, practical strategies are required in order to map non-reproducible reflectances into reproducible spectra and to choose appropriate combinations of printer colorants for the reproduction of the mapped reflectances. For this purpose, an approach called Spatio-Spectral Gamut Mapping and Separation, SSGMS, was proposed, which results in almost artifact-free spectral reproduction under a set of various illuminants.
The quality control stage is usually the last stage in any printing pipeline. Nowadays, the quality of the printout is usually controlled only in terms of colorimetric accuracy and common printing artifacts. However, some gloss-related artifacts, such as gloss-differential (inconsistent gloss appearance across an image, caused mostly by variations in deposited ink area coverage on different spots), are ignored, because no strategy to avoid them exists. In order to avoid such gloss-related artifacts and to control the glossiness of the printout locally, three printing strategies were proposed. In general, for perceptually accurate reproduction of color and gloss appearance attributes, understanding the relationship between measured values and perceived magnitudes of these attributes is essential. There has been much research into the reproduction of colors within perceptually meaningful color spaces, but little research from the gloss perspective has been carried out. Most of these studies are based on simulated display-based images (mostly with neutral colors) and do not take real objects into account. In this dissertation, three psychophysical experiments were conducted in order to investigate the relationship between measured gloss values (objective quantities) and perceived gloss magnitudes (subjective quantities), using real colored samples printed with the aforementioned proposed printing strategies. These experiments revealed that this relationship can be explained by a power function, in accordance with Stevens' power law, over almost the entire gloss range. Another psychophysical experiment was conducted in order to investigate the interrelation between perceived surface gloss and texture, using 2.5D samples printed in two different texture types and with various gloss levels and texture elevations.
According to the results of this experiment, different macroscopic texture types and levels (in terms of texture elevation) were found to influence the perceived surface gloss level slightly. No noticeable influence of surface gloss on the perceived texture level was observed, indicating texture constancy regardless of the gloss level printed. The SSGMS approach proposed for the spectral reproduction, the three printing strategies presented for gloss printing, and the results of the psychophysical experiments conducted on gloss printing and appearance can be used to improve the overall print quality in terms of color and gloss reproduction.
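The Stevens-type relation referred to above has the general form (the symbols here are generic placeholders; the fitted constants are the dissertation's empirical results and are not reproduced):

    Psi(G) = k * G^n

where G is the measured gloss value, Psi(G) the perceived gloss magnitude, k a scaling constant, and n the fitted exponent. An exponent n below 1 indicates compressive scaling (equal measured differences are perceived as smaller at high gloss levels), while n above 1 indicates expansive scaling.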

Show publication details

Steiger, Martin; Fellner, Dieter W. (Betreuer); Kohlhammer, Jörn (Betreuer)

Supporting Management of Sensor Networks through Interactive Visual Analysis

2015

Darmstadt, TU, Diss., 2015

With the increasing capabilities of measurement devices and computing machines, the amount of recorded data grows rapidly. It is so high that manual processing is no longer feasible. The Visual Analytics approach is powerful because it combines the strengths of the human recognition and vision system with today's computing power. Different, but strongly linked visualizations and views provide unique perspectives on the same data elements. The views are linked using position on the screen as well as color, which also plays a secondary role in indicating the degree of similarity. This enables the human recognition system to identify trends and anomalies in a network of measurement readings. As a result, the data analyst has the ability to approach more complex questions such as: are there anomalies in the measurement records? What does the network usually look like? In this work we propose a collection of Visual Analytics approaches to support the user in exploratory search and related tasks in graph data sets. One aspect is graph navigation, where we use the information of existing labels to support the user in analyzing the data set. Another consideration is the preservation of the user's mental map, which is supported by smooth transitions between individual keyframes. The later chapters focus on sensor networks, a type of graph data that additionally contains time series data on a per-node basis; this adds an extra dimension of complexity to the problem space. This thesis contributes several techniques to the scientific community in different domains, which we summarize as follows. We begin with an approach for network exploration. This forms the basis for subsequent contributions, as it supports the user in orientation and navigation in any kind of network structure. This is achieved by showing only a small subset of the data (in other words: a local graph view).
The user expresses interest in a certain area by selecting one or more focus nodes that define the visible subgraph. Visual cues in the form of pointing arrows indicate other areas of the graph that could be relevant for the user. Based on this network exploration paradigm, we present a combination of different techniques that stabilize the layout of such local graph views by reducing acting forces. As a result, the movement of nodes in the node-link diagram is reduced, which reduces the mental effort needed to track changes on the screen. However, up to this point the approach suffers from one of the most prominent shortcomings of force-directed graph layouts. Small changes in the initial setup, the force parameters, or the graph topology have a strong impact on the visual representation of the drawing. When the user explores the network, the set of visible nodes continuously changes and therefore the layout will look different when an area of the graph is visited a second time. This makes it difficult to identify differences or to recognize different drawings as equal in terms of topology. We contribute an approach for the deterministic generation of layouts based on pre-computed layout patches that are stitched at runtime. This ensures that even force-directed layouts are deterministic, allowing the analyst to recognize previously explored areas of the graph. In the next step, we apply these rather general-purpose concepts from theory in practical applications. One of the most important network categories is that of sensor networks, a type of graph data structure where every node is annotated with a time series. Such networks exist in the form of electric grids and other supply networks. In the wake of distributed and localized energy generation, the analysis of these networks becomes more and more important. We present and discuss a multi-view and multi-perspective environment for network analysis of sensor networks that integrates different data sources.
It is then extended into a visualization environment that enables the analyst to track the automated analysis of the processing pipeline of an expert system. As a result, the user can verify the correctness of the system and intervene where necessary. One key issue with expert systems, which typically operate on manually written rules, is that they can only deal with explicit statements. They cannot grasp terms such as "uncommon" or "anomalous". Unfortunately, this is often what the domain experts are looking for. We therefore modify and extend the system into an integrated analysis system for the detection of similar patterns in space and in different granularities of time. Its purpose is to obtain an overview of a large system and to identify hot spots and other anomalies. The idea here is to use similar colors to indicate similar patterns in the network. For that, it is vital to be able to rely on the mapping of time series patterns to color. The Colormap-Explorer supports the analysis and comparison of different implementations of 2D color maps to find the best fit for the task. As soon as the domain expert has identified problems in the network, he or she might want to take countermeasures to improve the network stability. We present an approach that integrates simulation in the process to perform "what-if" analysis based on an underlying simulation framework. Subsequent runs can be compared to quickly identify differences and discover the effect of changes in the network. The approaches presented here can be utilized in a large variety of applications and application domains. They enable the domain expert to navigate and explore networks, find key elements such as bridges, and detect spurious trends early.
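The local graph view described in this abstract can be illustrated as a bounded breadth-first expansion around user-selected focus nodes. The helper below is a minimal sketch of that idea, not the thesis implementation; the graph, hop limit, and function name are illustrative assumptions.

```python
from collections import deque

def local_graph_view(adjacency, focus_nodes, max_hops=1):
    """Return the subgraph induced by all nodes within max_hops
    of any focus node (a simple breadth-first expansion)."""
    visible = set(focus_nodes)
    frontier = deque((n, 0) for n in focus_nodes)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbor in adjacency.get(node, ()):
            if neighbor not in visible:
                visible.add(neighbor)
                frontier.append((neighbor, depth + 1))
    # keep only edges whose endpoints are both visible
    edges = {n: [m for m in adjacency.get(n, ()) if m in visible]
             for n in visible}
    return visible, edges

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"],
         "D": ["B", "E"], "E": ["D"]}
nodes, edges = local_graph_view(graph, ["A"], max_hops=1)
```

Selecting more focus nodes or raising `max_hops` enlarges the visible subgraph, which mirrors how the user steers the exploration in the thesis.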

Show publication details

Franke, Tobias; Fellner, Dieter W. (Betreuer); Wimmer, Michael (Betreuer)

The Delta Radiance Field

2015

Darmstadt : Technische Universität, 2015

Darmstadt, TU, Diss., 2015

The wide availability of mobile devices capable of computing high-fidelity graphics in real-time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore convey an impression of being glued onto a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to another, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood to be real-time applications which reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties. Among them, unknown illumination and real scene conditions are the most important. Any kind of reconstruction of real-world properties in an ad-hoc manner must likewise be incorporated into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces in an ad-hoc fashion. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness. Any computation affecting the final image must be computed in real-time. This condition rules out many of the methods used for movie production.
The remaining real-time options face three problems: The shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination due to the introduction of a new object into a scene, and the believable global interaction of real and virtual light. This dissertation presents contributions to answer the problems at hand. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation replacing Differential Rendering. The result not only presents a more efficient competitor to the current state-of-the-art in global illumination relighting, but also advances the field with the ability to simulate effects which have not been demonstrated by contemporary publications until now.

Show publication details

Eggeling, Eva; Settgast, Volker; Silva, Nelson; Poiger, Michael; Zeh, Theodor; Fellner, Dieter W.

The Sixth Sense of an Air Traffic Controller

2015

Schaefer, Dirk (Ed.): Proceedings of the SESAR Innovation Days [online]. [cited 31. March 2016] http://www.sesarinnovationdays.eu/2015/papersandpresentations: EUROCONTROL, 2015, 8 p.

SESAR Innovation Days <5, 2015, Bologna, Italy>

The project Sixth Sense postulates that the user's body language differs between "good" and "bad" decisions. Therefore, in Sixth Sense we look for patterns or hidden data signs that allow us to detect moments of bad and good decisions, which could be incorporated into an automated system in order to detect and eventually predict the next actions of a user. In our case the user is an Air Traffic Controller (ATCO). Specifically, we intend to analyse the correlation between the change in the ATCO's behaviour, expressed through his or her body language, and the quality of his or her decisions. For that, an experiment was set up to collect, explore and analyse data about user behaviour. The results of our work may be used for early warnings of upcoming "bad" situations or as decision aids for ATCOs.

Show publication details

Kim, Hyosun; Schinko, Christoph; Havemann, Sven; Redi, Ivan; Redi, Andrea; Fellner, Dieter W.

Tiled Projection Onto Deforming Screens

2015

Borgo, Rita (Ed.) et al.: Computer Graphics and Visual Computing : CGVC 2015. The Eurographics Association, 2015, pp. 35-42

Computer Graphics and Visual Computing (CGVC) <2015, London, UK>

For the next generation of visual installations it will not be sufficient to surround the visitor by stunning responsive audiovisual experiences - the next step is that space itself deforms in response to the user or user groups. Dynamic reconfigurable spaces are a new exciting possibility to influence the behaviour of groups and individuals; they may have the potential of stimulating various different social interactions and behaviours in a user-adapted fashion. However, some technical hurdles must be overcome. Projecting on larger surfaces, like a ceiling screen of 6 × 8 meters, is typically possible only with a tiled projection, i.e., with multiple projectors creating one large seamless image. This works well with a static ceiling; however, when the ceiling dynamically moves and deforms, the tiling becomes visible since the images no longer match. In this paper we present a method that can avoid such artifacts by dynamically adjusting the tiled projection to the deforming surface. Our method is surprisingly simple and efficient, and it does not require any image processing at runtime, nor any 3D reconstruction of the surface at any point.

Show publication details

Bernard, Jürgen; Daberkow, Debora; Fellner, Dieter W.; Fischer, Katrin; Koepler, Oliver; Kohlhammer, Jörn; Runnwerth, Mila; Ruppert, Tobias; Schreck, Tobias; Sens, Irina

VisInfo: A Digital Library System for Time Series Research Data Based on Exploratory Search - a User-centered Design Approach

2015

International Journal on Digital Libraries, Vol.16 (2015), 1, pp. 37-59

To this day, data-driven science is a widely accepted concept in the digital library (DL) context (Hey et al. in The fourth paradigm: data-intensive scientific discovery. Microsoft Research, 2009). In the same way, domain knowledge from information visualization, visual analytics, and exploratory search has found its way into the DL workflow. This trend is expected to continue, considering future DL challenges such as content-based access to new document types, visual search and exploration of information landscapes, or big data in general. To cope with these challenges, DL actors need to collaborate with external specialists from different domains to complement each other and succeed in given tasks such as making research data publicly available. Through these interdisciplinary approaches, the DL ecosystem may contribute to applications focused on data-driven science and digital scholarship. In this work, we present VisInfo (2014), a web-based digital library system (DLS) with the goal to provide visual access to time series research data. Based on an exploratory search (ES) concept (White and Roth in Synth Lect Inf Concepts Retr Serv 1(1):1-98, 2009), VisInfo at first provides a content-based overview visualization of large amounts of time series research data. Further, the system enables the user to define visual queries by example or by sketch. Finally, VisInfo presents visual-interactive capabilities for the exploration of search results. The development process of VisInfo was based on the user-centered design principle. Experts from computer science, a scientific digital library, usability engineering, and scientists from the earth and environmental sciences were involved in an interdisciplinary approach. We report on comprehensive user studies in the requirement analysis phase based on paper prototyping, user interviews, screen casts, and user questionnaires.
Heuristic evaluations and two usability testing rounds were applied during the system implementation and deployment phases and certified measurable improvements for our DLS. Based on the lessons learned in VisInfo, we suggest a generalized project workflow that may be applied in related, prospective approaches.

Show publication details

Landesberger, Tatiana von; Diel, Simon; Bremm, Sebastian; Fellner, Dieter W.

Visual Analysis of Contagion in Networks

2015

Information Visualization, Vol.14 (2015), 2, pp. 93-110. Published online before print May 28, 2013

Contagion is a process whereby the collapse of a node in a network leads to the collapse of neighboring nodes and thereby sets off a chain reaction in the network. It thus creates a special type of time-dependent network. Such processes are studied in various applications, for example, in financial network analysis, infection diffusion prediction, supply-chain management, or gene regulation. Visual analytics methods can help analysts examine contagion effects. For this purpose, network visualizations need to be complemented with specific features to illustrate the contagion process. Moreover, new visual analysis techniques for comparison of contagion need to be developed. In this paper, we propose a system geared to the visual analysis of contagion. It includes the simulation of contagion effects as well as their visual exploration. We present new tools able to compare the evolution of the different contagion processes. In this way, propagation of disturbances can be effectively analyzed. We focus on financial networks; however, our system can be applied to other use cases as well.
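A contagion process of the kind this system simulates can be sketched as a simple threshold cascade: a node collapses once a sufficient fraction of its neighbors has collapsed. The network, node names, and threshold rule below are illustrative assumptions, not the paper's exact simulation model.

```python
def simulate_contagion(adjacency, initially_collapsed, threshold=0.5):
    """Iteratively collapse any node whose fraction of collapsed
    neighbors reaches the threshold; return the collapse order by round."""
    collapsed = set(initially_collapsed)
    rounds = [set(initially_collapsed)]
    changed = True
    while changed:
        changed = False
        newly = set()
        for node, neighbors in adjacency.items():
            if node in collapsed or not neighbors:
                continue
            fraction = sum(n in collapsed for n in neighbors) / len(neighbors)
            if fraction >= threshold:
                newly.add(node)
        if newly:
            collapsed |= newly
            rounds.append(newly)
            changed = True
    return rounds

# a toy financial network: edges denote mutual exposure
net = {"bank_a": ["bank_b", "bank_c"], "bank_b": ["bank_a", "bank_c"],
       "bank_c": ["bank_a", "bank_b", "bank_d"], "bank_d": ["bank_c"]}
rounds = simulate_contagion(net, {"bank_a"}, threshold=0.5)
```

The per-round output is exactly the kind of time-dependent network state that the visual comparison tools in the paper operate on.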

Show publication details

Nazemi, Kawa; Retz, Reimond; Burkhardt, Dirk; Kuijper, Arjan; Kohlhammer, Jörn; Fellner, Dieter W.

Visual Trend Analysis with Digital Libraries

2015

Lindstaedt, Stefanie (Ed.) et al.: i-KNOW 2015 : Proceedings of the 15th International Conference on Knowledge Technologies and Data-driven Business. New York: ACM, 2015. (ACM International Conference Proceedings Series 1098), pp. 14:1-14:8

International Conference on Knowledge Management and Data-driven Business (I-KNOW) <15, 2015, Graz, Austria>

The early awareness of new technologies and upcoming trends is essential for making strategic decisions in enterprises and research. Trends may signal that technologies or related topics might be of great interest in the future or become obsolete. The identification of such trends requires analytical skills that can be supported through trend mining and visual analytics. Since the earliest trends or signals commonly appear in science, the investigation of digital libraries in this context is indispensable. However, digital libraries do not provide sufficient information for analyzing trends. It is necessary to integrate data, extract information from the integrated data, and provide effective interactive visual analysis tools. In this paper we introduce a model that covers all stages from data integration to interactive visualization for identifying trends and analyzing the market situation through our visual trend analysis environment. Our approach improves the visual analysis of trends by investigating the entire chain of transformation steps from raw and structured data to visual representations.

Show publication details

Hecher, Martin; Traxler, Christoph; Hesina, Gerd; Fuhrmann, Anton; Fellner, Dieter W.

Web-based Visualization Platform for Geospatial Data

2015

Braz, José (Ed.) et al.: IVAPP 2015. Proceedings : 6th International Conference on Information Visualization Theory and Applications. SciTePress, 2015, pp. 311-316

International Conference on Information Visualization Theory and Applications (IVAPP) <6, 2015, Berlin, Germany>

This paper describes a new platform for geospatial data analysis. The main purpose is to explore new ways to visualize and interact with multidimensional satellite data and computed models from various Earth Observation missions. The new V-MANIP platform facilitates a multidimensional exploring approach that allows to view the same dataset in multiple viewers at the same time to efficiently find and explore interesting features within the shown data. The platform provides visual analytics capabilities including viewers for displaying 2D or 3D data representations, as well as for volumetric input data. Via a simple configuration file the system can be configured for different stakeholder use cases, by defining desired data sources and available viewer modules. The system architecture, which will be discussed in this paper in detail, uses Open Geospatial Consortium web service interfaces to allow an easy integration of new visualization modules. The implemented software is based on open source libraries and uses modern web technologies to provide a platform-independent, plugin-free user experience.

Show publication details

Caldera, Christian; Berndt, Rene; Eggeling, Eva; Schröttner, Martin; Fellner, Dieter W.

"Mining Bibliographic Data" - Using Author's Publication History for a Brighter Reviewing Future within Conference Management Systems

2014

International Journal on Advances in Intelligent Systems, Vol.7 (2014), 3 & 4, pp. 609-619

Organizing and managing a conference is a cumbersome and time-consuming task. Electronic conference management systems support reviewers, conference chairs and the International Programme Committee (IPC) members in managing the huge amount of submissions. These systems implement the complete workflow of scientific conferences. One of the most time-consuming tasks within a conference is the assignment of IPC members to the submissions. Finding the best-suited person for reviewing a paper strongly depends on the expertise of the IPC member. There are already various approaches such as "bidding" or "topic matching". However, these approaches allocate a considerable amount of resources on the IPC member side. This article describes the workflow of a conference and the challenges for an electronic conference management system. It takes a close look at the latest version of the Eurographics Submission and Review Management system (SRMv2). Finally, it introduces an extension of SRMv2 called the Paper Rating and IPC Matching Tool (PRIMA), which reduces the workload for both IPC members and chairs to support and improve the assignment process.

Show publication details

Braun, Andreas; Wichert, Reiner; Kuijper, Arjan; Fellner, Dieter W.

A Benchmarking Model for Sensors in Smart Environments

2014

Aarts, Emile (Ed.) et al.: Ambient Intelligence : European Conference, AmI 2014. Berlin, Heidelberg, New York: Springer, 2014. (Lecture Notes in Computer Science (LNCS) 8850), pp. 242-257

European Conference on Ambient Intelligence (AmI) <11, 2014, Eindhoven, The Netherlands>

In smart environments, developers can choose from a large variety of sensors supporting their use case that have specific advantages or disadvantages. In this work we present a benchmarking model that allows estimating the utility of a sensor technology for a use case by calculating a single score, based on a weighting factor for applications and a set of sensor features. This set takes into account the complexity of smart environment systems that are comprised of multiple subsystems and applied in non-static environments. We show how the model can be used to find a suitable sensor for a use case and the inverse option to find suitable use cases for a given set of sensors. Additionally, extensions are presented that normalize differently rated systems and compensate for central tendency bias. The model is verified by estimating technology popularity using a frequency analysis of associated search terms in two scientific databases.
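The single-score calculation described above can be sketched as a weighted sum of feature ratings. The feature names, rating scale, and weights below are hypothetical illustrations; the paper's actual feature set and normalization extensions are not reproduced here.

```python
def benchmark_score(feature_ratings, use_case_weights):
    """Weighted-sum utility score: each sensor feature rating is scaled
    by the weight the use case assigns to that feature."""
    total_weight = sum(use_case_weights.values())
    return sum(use_case_weights[f] * feature_ratings.get(f, 0)
               for f in use_case_weights) / total_weight

# hypothetical feature ratings on a 1-5 scale
capacitive = {"range": 2, "privacy": 5, "cost": 4, "robustness": 3}
camera     = {"range": 5, "privacy": 1, "cost": 3, "robustness": 4}
# a use case that values sensing range and privacy most
weights = {"range": 3, "privacy": 2, "cost": 1, "robustness": 1}

score_cap = benchmark_score(capacitive, weights)
score_cam = benchmark_score(camera, weights)
```

Running the same scoring in both directions, over sensors for a fixed use case or over use cases for a fixed sensor, gives the two lookup modes the abstract mentions.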

Show publication details

Weber, Daniel; Mueller-Roemer, Johannes; Altenhofen, Christian; Stork, André; Fellner, Dieter W.

A p-Multigrid Algorithm using Cubic Finite Elements for Efficient Deformation Simulation

2014

Bender, Jan (Ed.) et al.: VRIPHYS 14: 11th Workshop in Virtual Reality Interactions and Physical Simulations. Goslar: Eurographics Association, 2014, pp. 49-58

International Workshop in Virtual Reality Interaction and Physical Simulations (VRIPHYS) <11, 2014, Bremen, Germany>

We present a novel p-multigrid method for efficient simulation of co-rotational elasticity with higher-order finite elements. In contrast to other multigrid methods proposed for volumetric deformation, the resolution hierarchy is realized by varying polynomial degrees on a tetrahedral mesh. We demonstrate the efficiency of our approach and compare it to commonly used direct sparse solvers and preconditioned conjugate gradient methods. As the polynomial representation is defined w.r.t. the same mesh, the update of the matrix hierarchy necessary for co-rotational elasticity can be computed efficiently. We introduce the use of cubic finite elements for volumetric deformation and investigate different combinations of polynomial degrees for the hierarchy. We analyze the applicability of cubic finite elements for deformation simulation by comparing against analytical results in a static scenario and demonstrate our algorithm in dynamic simulations with quadratic and cubic elements. Applying our method to quadratic and cubic finite elements results in a speedup of up to a factor of 7 for solving the linear system.

Show publication details

Schinko, Christoph; Berndt, Rene; Eggeling, Eva; Fellner, Dieter W.

A Scalable Rendering Framework for Generative 3D Content

2014

Polys, Nicholas F. (General Chair) et al.: Proceedings Web3D 2014 : 19th International Conference on 3D Web Technology. New York: ACM, 2014, pp. 81-87

International Conference on 3D Web Technology (WEB3D) <19, 2014, Vancouver, BC, Canada>

Delivering high-quality 3D content through a web browser is still a challenge, especially when intellectual property (IP) protection is necessary. Thus, the transfer of 3D modeling information to a client should be avoided. In our work we present a solution to this problem by introducing a server-side rendering framework. Only images are transferred to the client; the actual 3D content is not delivered. By providing simple proxy geometry it is still possible to offer direct interaction on the client. Our framework incorporates the Generative Modeling Language (GML) for the description and rendering of generative content. It is then possible not only to interact with the 3D content, but also to modify the actual shape within the possibilities of the generative content. By introducing a control layer and encapsulating processing and rendering of the generative content in a so-called GML Rendering Unit (GRU), it is possible to provide a scalable rendering framework.

Show publication details

Nazemi, Kawa; Fellner, Dieter W. (Betreuer); Wrobel, Stefan (Betreuer)

Adaptive Semantics Visualization

2014

Goslar : Eurographics Association, 2014

Darmstadt, TU, Diss., 2014

Human access to the increasing amount of information and data plays an essential role at the professional level and also in everyday life. While information visualization has developed new and remarkable ways of visualizing data and enabling the exploration process, adaptive systems focus on users' behavior to tailor information for supporting the information acquisition process. Recent research on adaptive visualization shows promising ways of synthesizing these two complementary approaches and making use of the strengths of both disciplines. The emerged methods and systems aim to increase the performance, acceptance, and user experience of graphical data representations for a broad range of users. Although the evaluation results of the recently proposed systems are promising, some important aspects of information visualization are not considered in the adaptation process. The visual adaptation is commonly limited to changing either visual parameters or replacing visualizations entirely. Further, no existing approach adapts the visualization based on both data and user characteristics. Other limitations of existing approaches include the fact that the visualizations require training by experts in the field. In this thesis, we introduce a novel model for adaptive visualization. In contrast to existing approaches, we have focused our investigation on the potentials of information visualization for adaptation. Our reference model for visual adaptation not only considers the entire transformation from data to visual representation, but also enhances it to meet the requirements for visual adaptation. Our model adapts different visual layers that were identified based on various models and studies on human visual perception and information processing. In its adaptation process, our conceptual model considers the impact of both data and user on visualization adaptation.
We investigate different approaches and models and their effects on system adaptation in order to gather implicit information about users and their behavior. These are then transformed and applied to affect the visual representation and to model human interaction behavior with visualizations and data, so as to achieve a more appropriate visual adaptation. Our enhanced user model further makes use of the semantic hierarchy to enable a domain-independent adaptation. To address the problem that such a system requires training by experts, we introduce the canonical user model, which models the average usage behavior with the visualization environment. Our approach learns from the behavior of the average user to adapt the different visual layers and transformation steps. This approach is further enhanced with similarity and deviation analysis for individual users to determine similar behavior on an individual level and to identify behavior that differs from the canonical model. Users with similar behavior get similar visualization and data recommendations, while behavioral anomalies lead to a lower level of adaptation. Our model includes a set of various visual layouts that can be used to compose a multi-visualization interface, a sort of "visualization cockpit". This model facilitates various visual layouts to provide different perspectives and enhance the ability to solve difficult and exploratory search challenges. Data from different data sources can be visualized and compared in a visual manner. These different visual perspectives on the data can be chosen by users or selected automatically by the system. This thesis further introduces the implementation of our model, which includes additional approaches for an efficient adaptation of visualizations, as proof of feasibility. We further conduct a comprehensive user study that aims to prove the benefits of our model and to identify limitations for future work.
The user study, with 53 participants overall and four conditions, focuses on our enhanced reference model to evaluate the adaptation effects of the different visual layers.
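The canonical user model and the similarity/deviation analysis described above can be sketched numerically: average all users' behavior vectors, then measure each user's distance from that average. The behavior vectors and the deviation cutoff below are invented for illustration and do not reflect the thesis' actual feature space.

```python
import math

def canonical_model(user_vectors):
    """Average usage-behavior vector across all users."""
    n = len(user_vectors)
    dim = len(user_vectors[0])
    return [sum(v[i] for v in user_vectors) / n for i in range(dim)]

def deviation(user_vector, canonical):
    """Euclidean distance of one user's behavior from the canonical model."""
    return math.sqrt(sum((u - c) ** 2 for u, c in zip(user_vector, canonical)))

# hypothetical interaction frequencies per visualization layout
users = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]
canon = canonical_model(users)
devs = [deviation(u, canon) for u in users]
# users whose deviation exceeds a cutoff receive a lower level of adaptation
adapt = [d < 0.5 for d in devs]
```

The third user deviates strongly from the canonical behavior, so the system would adapt more cautiously for that user, matching the "behavioral anomalies lead to a lower level of adaptation" rule.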

Show publication details

Braun, Andreas; Fellner, Dieter W. (Betreuer); Mühlhäuser, Max (Betreuer)

Application and Validation of Capacitive Proximity Sensing Systems in Smart Environments

2014

Darmstadt, TU, Diss., 2014

Smart environments feature a number of computing and sensing devices that support occupants in performing their tasks. In the last decades there has been a multitude of advances in miniaturizing sensors and computers, while greatly increasing their performance. As a result new devices are introduced into our daily lives that have a plethora of functions. Gathering information about the occupants is fundamental in adapting the smart environment according to preference and situation. There is a large number of different sensing devices available that can provide information about the user. They include cameras, accelerometers, GPS, acoustic systems, or capacitive sensors. The latter use the properties of an electric field to sense presence and properties of conductive objects within range. They are commonly employed in finger-controlled touch screens that are present in billions of devices. A less common variety is the capacitive proximity sensor. It can detect the presence of the human body over a distance, providing interesting applications in smart environments. Choosing the right sensor technology is an important decision in designing a smart environment application. Apart from looking at previous use cases, this process can be supported by providing more formal methods. In this work I present a benchmarking model that is designed to support this decision process for applications in smart environments. Previous benchmarks for pervasive systems have been adapted towards sensors systems and include metrics that are specific for smart environments. Based on distinct sensor characteristics, different ratings are used as weighting factors in calculating a benchmarking score. The method is verified using popularity matching in two scientific databases. Additionally, there are extensions to cope with central tendency bias and normalization with regards to average feature rating. 
Four relevant application areas are identified by applying this benchmark to applications in smart environments and capacitive proximity sensors. They are indoor localization, smart appliances, physiological sensing and gesture interaction. Any application area has a set of challenges regarding the required sensor technology, layout of the systems, and processing that can be tackled using various new or improved methods. I will present a collection of existing and novel methods that support processing data generated by capacitive proximity sensors. These are in the areas of sparsely distributed sensors, model-driven fitting methods, heterogeneous sensor systems, image-based processing and physiological signal processing. To evaluate the feasibility of these methods, several prototypes have been created and tested for performance and usability. Six of them are presented in detail. Based on these evaluations and the knowledge generated in the design process, I am able to classify capacitive proximity sensing in smart environments. This classification consists of a comparison to other popular sensing technologies in smart environments, the major benefits of capacitive proximity sensors, and their limitations. In order to support parties interested in developing smart environment applications using capacitive proximity sensors, I present a set of guidelines that support the decision process from technology selection to choice of processing methods.

Show publication details

Knöbelreiter, Patrick; Berndt, Rene; Ullrich, Torsten; Fellner, Dieter W.

Automatic Fly-through Camera Animations for 3D Architectural Repositories

2014

Coquillart, Sabine (Ed.) et al.: GRAPP 2014 : Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications. SciTePress, 2014, pp. 335-341

International Conference on Computer Graphics Theory and Applications (GRAPP) <9, 2014, Lisbon, Portugal>

Virtual fly-through animations through computer-generated models are a powerful tool to convey the properties and appearance of these models. In architectural models, for example, the big advantage of such a fly-through animation is that it conveys the structure of the model easily. However, generating a path that results in a good-looking animation is not always trivial. The approach proposed in this paper can handle arbitrary 3D models and extracts a meaningful and good-looking camera path. HTML/X3DOM is used to visualize the path, so the final result can be viewed in any browser with X3DOM support.

Show publication details

Steger, Teena; Fellner, Dieter W. (Betreuer); Sakas, Georgios (Betreuer); Wagner, Manfred (Betreuer)

Bronchoskopische Navigation mittels Pose Estimation des C-Bogens aus musterkodierten Fluoroskopie-Aufnahmen

2014

Darmstadt, TU, Diss., 2014

Bronchoscopy is the most important and safest examination method when lung cancer is suspected. It serves both the visual inspection of the airways and the collection of tissue samples from suspicious lesions. Only on the basis of such a sample can it be decided whether the tissue is malignant. To ensure that the biopsy is performed at the correct location, it is particularly important that the bronchoscopic instruments can be guided precisely within the bronchial tree. To this end, the physician relies on the camera at the tip of the bronchoscope as well as on intraoperative C-arm fluoroscopy. Unfortunately, neither of these visualization techniques provides a 3D view of the bronchial tree or the current 3D position of the instrument. This is exactly the assistance that bronchoscopic navigation systems provide, thereby contributing considerably to the accuracy of instrument guidance and to speeding up the intervention. Bronchoscopic navigation systems mostly use electromagnetic (EM) sensors to track the current position of the instrument within the bronchi. Such systems are not only costly and complex to install; the tracked instruments also have to be replaced at great expense after each use. To avoid this problem, systems are being developed that use only the bronchoscope video images for 2D/3D registration. With these, however, navigation can only be offered as long as the bronchoscope tip can be advanced into the bronchi. Typically, though, navigation support is needed precisely in the peripheral branches that cannot be reached. Therefore, this thesis presents a method that is applicable independently of the bronchoscope's reach and relies exclusively on equipment already available in the operating room. This promises higher clinical applicability and acceptance.
The novel underlying idea is that, given a known acquisition pose of the C-arm, a virtual ray can be generated from the C-arm X-ray source through the patient CT to the instrument tip position on the fluoroscopy image. This 3D ray then intersects the bronchial tree in the CT exactly at the location where the instrument currently resides. The major challenge is to determine the C-arm pose during acquisition. For this purpose, I developed an innovative marker plate that is mounted on the patient table. In every acquisition, a subset of the radio-opaque markers is imaged in the fluoroscopy. To perform C-arm pose estimation, the imaged 2D markers must be unambiguously matched to their corresponding 3D markers on the plate. Therefore, I arranged the markers using, for the first time, the projective invariant cross-ratio. This ensures that the markers can be reliably identified and matched even after projection. The marker plate was subjected to numerous experiments, including phantom and animal cadaver tests. Very good quantitative results were measured for the C-arm pose estimation with respect to success rates and accuracy. This thesis also presents further important components of a bronchoscopic navigation system: bronchial tree segmentation and skeletonization, tumor segmentation, 2D instrument tracking, patient-to-table registration, path computation, and 3D visualization. Existing solutions from the literature were adopted or extended, and new methods were developed. All of these components were examined both individually and in combination. Tests with a bronchial tree phantom yielded very good qualitative results.
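The projective invariant cross-ratio (Doppelverhältnis) used for marker identification can be demonstrated with a short sketch: the cross-ratio of four collinear points is unchanged by any projective (Möbius) transform, which is why marker groups remain identifiable after X-ray projection. The marker coordinates and transform parameters below are made up for illustration.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective_map(x, h=(2.0, 1.0, 0.5, 3.0)):
    """A 1D projective (Moebius) transform x -> (p*x + q) / (r*x + s)."""
    p, q, r, s = h
    return (p * x + q) / (r * x + s)

# hypothetical marker positions along one line of the plate
markers = [0.0, 1.0, 2.5, 4.0]
before = cross_ratio(*markers)
after = cross_ratio(*(projective_map(x) for x in markers))
# the cross-ratio survives the projection, so it can identify the marker group
```

Because each marker group on the plate can be given a distinct cross-ratio, the imaged 2D markers can be matched to their 3D counterparts regardless of the viewing pose.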

Show publication details

Hadjiprocopis, Andreas; Wenzel, Konrad; Rothermel, Mathias; Ioannides, Marinos; Fritsch, Dieter; Klein, Michael; Johnsons, Paul S.; Weinlinger, Guenther; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Fellner, Dieter W.; Stork, André; Santos, Pedro

Cloud-based 3D Reconstruction of Cultural Heritage Monuments using Open Access Image Repositories

2014

Klein, Reinhard (Ed.) et al.: GCH 2014. Short Papers - Posters : Eurographics Workshop on Graphics and Cultural Heritage. Goslar: Eurographics Association, 2014, pp. 5-8

Eurographics Symposium on Graphics and Cultural Heritage (GCH) <12, 2014, Darmstadt, Germany>

A large number of photographs of cultural heritage items and monuments are publicly available in various Open Access Image Repositories (OAIR) and on social media sites. Metadata inserted by the camera, the user and the host site may help to determine a photograph's content, geo-location and date of capture, thus allowing us, with relative success, to localise photos in space and time. Additionally, developments in photogrammetry and computer vision, such as Structure from Motion (SfM), provide a simple and cost-effective method of generating relatively accurate camera orientations and sparse and dense 3D point clouds from 2D images. Our main goal is to provide a software tool able to run on desktop or cluster computers or as the back end of a cloud-based service, enabling historians, architects, archaeologists and the general public to search, download and reconstruct 3D point clouds of historical monuments from hundreds of images from the web in a cost-effective manner. The end products can be further enriched with metadata and published. This paper describes a workflow for searching and retrieving photographs of historical monuments from OAIR, such as Flickr and Picasa, and using them to build dense point clouds using SfM and dense image matching techniques. Computational efficiency is improved by a technique which reduces image matching time by using an image connectivity prior derived from low-resolution versions of the original images. Benchmarks for two large datasets showing the respective efficiency gains are presented.
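The connectivity prior mentioned in the last sentences can be sketched as follows (hypothetical feature sets stand in for real low-resolution image descriptors): a cheap overlap test on low-resolution features prunes the quadratic set of image pairs before expensive full-resolution matching runs.

```python
from itertools import combinations

def overlap(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b)

def candidate_pairs(low_res_feats, threshold=0.2):
    """Cheap pass: keep only image pairs whose low-resolution features overlap."""
    return [(i, j) for i, j in combinations(range(len(low_res_feats)), 2)
            if overlap(low_res_feats[i], low_res_feats[j]) >= threshold]

# Toy "low-resolution feature" sets for four photos; only 0/1 and 2/3 share content.
feats = [{"arch", "dome"}, {"arch", "dome", "sky"}, {"tower"}, {"tower", "gate"}]
pairs = candidate_pairs(feats)
# Expensive full-resolution SfM matching would now run only on `pairs`,
# i.e. 2 of the 6 possible pairs in this toy example.
```

The gain grows quadratically with the collection size, which matches the efficiency benchmarks the paper reports at a qualitative level.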

Show publication details

Edelsbrunner, Johannes; Krispel, Ulrich; Havemann, Sven; Sourin, Alexei; Fellner, Dieter W.

Constructive Roof Geometry

2014

2014 International Conference on Cyberworlds : CW 2014. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2014, pp. 63-70

International Conference on Cyberworlds (CW) <13, 2014, Santander, Spain>

While the growing demand for building models in virtual worlds, games, and movies makes the easy and fast creation of modifiable models increasingly important, 3D modeling of buildings can be a tedious task due to their sometimes complex geometry. For historic buildings, the roofs in particular can be challenging. We present a new method of combining simple building solids to form more complex buildings, with an emphasis on the blending of roof faces. It can be integrated into common pipelines for procedural modeling of buildings and offers more expressiveness than existing methods.

Show publication details

Grabner, Harald; Ullrich, Torsten; Fellner, Dieter W.

Content-based Retrieval of 3D Models using Generative Modeling Techniques

2014

Klein, Reinhard (Ed.) et al.: GCH 2014. Short Papers - Posters : Eurographics Workshop on Graphics and Cultural Heritage. Goslar: Eurographics Association, 2014, pp. 10-12

Eurographics Symposium on Graphics and Cultural Heritage (GCH) <12, 2014, Darmstadt, Germany>

In this paper we present a novel 3D model retrieval approach based on generative modeling techniques. In our approach generative models are created by domain experts in order to describe 3D model classes. These generative models span a shape space, of which a number of training samples is taken at random. The samples are used to train content-based retrieval methods. With a trained classifier, techniques based on semantic enrichment can be used to index a repository. Furthermore, as our method uses solely generative 3D models in the training phase, it eliminates the cold start problem. We demonstrate the effectiveness of our method by testing it against the Princeton shape benchmark.
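A minimal sketch of the training idea described above, with two hypothetical generative models that map meta-parameters to toy shape descriptors (the actual system uses real procedural 3D models and 3D retrieval descriptors):

```python
import random

# Hypothetical generative models: each maps meta-parameters to a tiny
# shape descriptor (height, width, profile flag) for illustration only.
def make_vase(h, w):
    return (h, w, 1.0)

def make_amphora(h, w):
    return (h, w * 0.5, 0.0)

def sample_training_set(model, label, n=50, rng=random.Random(0)):
    """Draw random samples from the shape space spanned by one generative model."""
    return [(model(rng.uniform(1, 2), rng.uniform(0.5, 1)), label) for _ in range(n)]

# The sampled shape spaces replace hand-labelled training data (no cold start).
train = sample_training_set(make_vase, "vase") + sample_training_set(make_amphora, "amphora")

def classify(desc):
    """1-nearest-neighbour retrieval over the sampled shape space."""
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], desc)))[1]
```

A query descriptor is then assigned to the class whose sampled shape space lies closest, e.g. `classify((1.5, 0.8, 1.0))` returns `"vase"`.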

Show publication details

Santos, Pedro; Ritz, Martin; Tausch, Reimar; Schmedt, Hendrik; Monroy Rodriguez, Rafael; Stefano, Antonio; Posniak, Oliver; Fuhrmann, Constanze; Fellner, Dieter W.

CultLab3D - On the Verge of 3D Mass Digitization

2014

Klein, Reinhard (Ed.) et al.: GCH 2014 : Eurographics Workshop on Graphics and Cultural Heritage. Goslar: Eurographics Association, 2014, pp. 65-73

Eurographics Symposium on Graphics and Cultural Heritage (GCH) <12, 2014, Darmstadt, Germany>

Acquisition of 3D geometry, texture and optical material properties of real objects still consumes a considerable amount of time, and forces humans to dedicate their full attention to this process. We propose CultLab3D, an automatic modular 3D digitization pipeline, aiming for efficient mass digitization of 3D geometry, texture, and optical material properties. CultLab3D requires minimal human intervention and reduces processing time to a fraction of today's efforts for manual digitization. The final step in our digitization workflow involves the integration of the digital object into enduring 3D Cultural Heritage Collections together with the available semantic information related to the object. In addition, a software tool facilitates virtual, location-independent analysis and publication of the virtual surrogates of the objects, and encourages collaboration between scientists all around the world. The pipeline is designed in a modular fashion and allows for further extensions to incorporate newer technologies. For instance, by switching scanning heads, it is possible to acquire coarser or more refined 3D geometry.

Show publication details

Fuhrmann, Constanze; Santos, Pedro; Fellner, Dieter W.

CultLab3D: Ein mobiles 3D-Scanning Szenario für Museen und Galerien

2014

Bienert, Andreas (Ed.) et al.: EVA 2014 Berlin. Proceedings : Elektronische Medien & Kunst, Kultur, Historie. Berlin: Gesellschaft zur Förderung angewandter Informatik e.V., 2014, pp. 106-109

Electronic Imaging & the Visual Arts (EVA) <21, 2014, Berlin, Germany>

In the CultLab3D project, cultural heritage objects are captured three-dimensionally and in very high quality. The project develops a novel scanning technology in the form of a mobile digitization laboratory consisting of flexibly deployable modules for the fast and economical acquisition of 3D geometry, texture, and material properties. In the long term, the quality of the data is intended to meet scientific standards that previously required the original artifacts. The system is expected to revolutionize the market in terms of effort (including scanning speed), achievable quality, and cost. Market readiness is expected for 2015.

Show publication details

Schiffer, Thomas; Fellner, Dieter W.

Efficient Multi-kernel Ray Tracing for GPUs

2014

Coquillart, Sabine (Ed.) et al.: GRAPP 2014 : Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications. SciTePress, 2014, pp. 209-217

International Conference on Computer Graphics Theory and Applications (GRAPP) <9, 2014, Lisbon, Portugal>

Images with high visual quality are often generated by a ray tracing algorithm. Despite its conceptual simplicity, designing an efficient mapping of ray tracing computations to massively parallel hardware architectures is a challenging task. In this paper we investigate the performance of state-of-the-art ray traversal algorithms for bounding volume hierarchies on GPUs and discuss their potentials and limitations. Based on this analysis, a novel ray traversal scheme called batch tracing is proposed. It decomposes the task into multiple kernels, each of which is designed for efficient parallel execution. Our algorithm achieves comparable performance to currently prevailing approaches and represents a promising avenue for future research.

Show publication details

Krispel, Ulrich; Ullrich, Torsten; Fellner, Dieter W.

Fast and Exact Plane-based Representation for Polygonal Meshes

2014

Blashki, Katherine (Ed.) et al.: Proceedings of the International Conferences on Interfaces and Human Computer Interaction 2014, Game and Entertainment Technologies 2014 and Computer Graphics, Visualization, Computer Vision and Image Processing 2014 : Part of the Multi Conference on Computer Science and Information Systems, MCCSIS 2014. IADIS Press, 2014, pp. 189-196

IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP) <8, 2014, Lisbon, Portugal>

Boolean operations on meshes tend to be non-robust due to the rounding of newly constructed vertex coordinates. Plane-based mesh representations are known to circumvent this problem for meshes with planar faces: geometric information is stored as face equations, and vertices (including newly constructed ones) are expressed as plane triplets. We first review the properties of plane-based mesh representations, then discuss a variant optimized for fast evaluation using fixed integer precision, and give practical insights on implementing search structures for indexing planes and vertices in this representation.
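The plane-triplet idea can be sketched in a few lines. This version recovers a vertex as the exact intersection of three planes via Cramer's rule using rational arithmetic; the paper instead proposes a fixed-precision integer scheme, so this is only an illustration of the representation, not of their optimization.

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 integer matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def intersect(p1, p2, p3):
    """Exact intersection point of three planes a*x + b*y + c*z + d = 0.
    With integer plane coefficients the result is an exact rational point,
    so no rounding error is ever introduced."""
    A = [list(p[:3]) for p in (p1, p2, p3)]
    rhs = [-p[3] for p in (p1, p2, p3)]
    D = det3(A)
    assert D != 0, "planes do not meet in a single point"
    point = []
    for i in range(3):  # Cramer's rule: replace column i by the right-hand side
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = rhs[r]
        point.append(Fraction(det3(M), D))
    return tuple(point)

# A unit-cube corner stored as the plane triplet x=1, y=1, z=1.
corner = intersect((1, 0, 0, -1), (0, 1, 0, -1), (0, 0, 1, -1))
```

Storing `corner` as the triplet of plane equations (rather than as coordinates) is what keeps repeated Boolean operations exact.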

Show publication details

Klein, Reinhard; Santos, Pedro; Fellner, Dieter W.; Scopigno, Roberto

GCH 2014. Short Papers - Posters: Eurographics Workshop on Graphics and Cultural Heritage

2014

Goslar : Eurographics Association, 2014

Eurographics Symposium on Graphics and Cultural Heritage (GCH) <12, 2014, Darmstadt, Germany>

The focus of this year's forum is to present and showcase new developments within the overall process chain, from data acquisition, 3D documentation, analysis and synthesis, semantic modelling, and data management to virtual museums, new forms of interactive presentation, and 3D printing solutions. GCH 2014 therefore offers scientists, engineers and CH managers the possibility to discuss new ICT technologies applied to data modelling, reconstruction and processing, digital libraries, virtual museums, interactive environments and applications for CH, ontologies and semantic processing, management and archiving, standards and documentation, as well as their transfer into practice. Short papers present preliminary results and work in progress, or focus on ongoing projects, the description of project organization, the use of technology, and lessons learned.

Show publication details

Klein, Reinhard; Santos, Pedro; Fellner, Dieter W.; Scopigno, Roberto

GCH 2014: Eurographics Workshop on Graphics and Cultural Heritage

2014

Goslar : Eurographics Association, 2014

Eurographics Symposium on Graphics and Cultural Heritage (GCH) <12, 2014, Darmstadt, Germany>

The focus of this year's forum is to present and showcase new developments within the overall process chain, from data acquisition, 3D documentation, analysis and synthesis, semantic modelling, and data management to virtual museums, new forms of interactive presentation, and 3D printing solutions. GCH 2014 therefore offers scientists, engineers and CH managers the possibility to discuss new ICT technologies applied to data modelling, reconstruction and processing, digital libraries, virtual museums, interactive environments and applications for CH, ontologies and semantic processing, management and archiving, standards and documentation, as well as their transfer into practice.

Show publication details

Hoßbach, Martin; Sakas, Georgios (Betreuer); Fellner, Dieter W. (Betreuer)

Integrierte miniaturisierte Kameras zur Instrument- und Zielfindung in medizinischen Anwendungen

2014

Darmstadt, TU, Diss., 2014

In the field of microelectronics, rapid technical and technological development has taken place over the past decades, which, in addition to its obvious effects on daily life, has also influenced the tools of physicians. One example is tracking methods, which are applied in medicine in many successful ways and have enabled a number of new treatment techniques. A wide variety of tracking systems are used in medical applications, most commonly magnetic and optical ones. Both have drawbacks in the operating-room environment: magnetic tracking systems are sensitive to metals, which are common in the OR; optical tracking systems are cumbersome to use in the OR because of the line-of-sight problem. In general, these systems are often expensive to acquire and, compared with the cost of the respective intervention, sometimes do not justify their use. In contrast stands the current trend of miniaturization: cameras are becoming ever smaller and cheaper. The thesis is therefore advanced that, in certain medical applications, the disadvantages of existing tracking systems can be compensated by using miniaturized cameras, because these can be positioned much closer to the site of interest. As a result, a potentially lower image quality (compared with precise tracking cameras) is of little consequence. This thesis is examined using two exemplary applications. First, an MRI-compatible optical head-tracking system is developed that follows the head movement of a patient using round, planar, single-colored markers on the patient's forehead. For this purpose, cameras are used that are attached to the head coil inside the scanner with a mount. Algorithms used in infrared tracking systems had to be partially adapted because of the image quality of the cameras, the clinical requirements (inconvenience to the patient and burden on the staff), and the conditions inside the MR scanner. For this tracking system, a cross-calibration method was developed that forms a virtual calibration phantom from water-filled spheres. It thus differs from known methods, in which the structures visible in the MRI image and the structures visible in the camera image differ, so that corresponding calibration phantoms have to be elaborately manufactured or precisely measured. The tracking system was evaluated theoretically, practically in the laboratory, and clinically in volunteer trials. In a clinical project in which low-resolution MRI scans were repeatedly acquired over a very long period, a virtual immobilization could be achieved with the tracking system. Second, a navigation system for ultrasound-guided puncture was developed. The physician is supported during the puncture by a visualization of the needle's course in the ultrasound image. For this purpose, a needle-tracking system was developed consisting of two inexpensive cameras attached to the transducer. The needle is extracted from the camera images based on edges, its course relative to the ultrasound transducer is determined, and the course and intersection point of the needle with the ultrasound image are displayed. The navigation system was evaluated both theoretically and practically in the laboratory on a phantom, with the participation of physicians who perform such interventions in their daily work. It was shown that the accuracy was improved over the state of the art.

Show publication details

Landesberger, Tatiana von; Fiebig, Sebastian; Bremm, Sebastian; Kuijper, Arjan; Fellner, Dieter W.

Interaction Taxonomy for Tracking of User Actions in Visual Analytics Applications

2014

Huang, Weidong (Ed.): Handbook of Human Centric Visualization. Berlin, Heidelberg, New York: Springer, 2014, pp. 653-670

In various application areas (social science, transportation, or medicine) analysts need to gain knowledge from large amounts of data. This analysis is often supported by interactive Visual Analytics tools that combine automatic analysis with interactive visualization. Such a data analysis process is not streamlined, but consists of several steps and feedback loops. In order to be able to optimize the process and identify problems or common problem-solving strategies, recording and reproducibility of this process are needed. This is facilitated by tracking user actions categorized according to a taxonomy of interactions. Visual Analytics includes several means of interaction that are differentiated according to three fields: information visualization, reasoning, and data processing. At present, however, only separate taxonomies for interaction techniques exist in these three fields. Each taxonomy covers only a part of the actions undertaken in Visual Analytics. Moreover, as they use different foundations (user intentions vs. user actions) and employ different terminology, it is not clear to what extent they overlap and cover the whole Visual Analytics interaction space. We therefore first compare them and then elaborate a new integrated taxonomy in the context of Visual Analytics. In order to show the usability of the new taxonomy, we specify it for visual graph analysis and apply it to the tracking of user interactions in this area.

Show publication details

Fellner, Dieter W.; Baier, Konrad; Ackeren, Janine van; Bornemann, Heidrun; Wehner, Detlef

Jahresbericht 2013: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2014

Darmstadt, 2014

The researchers at the Fraunhofer Institute for Computer Graphics Research IGD turn information into images and images into information. This image- and model-based computer science is called "Visual Computing" and comprises computer graphics, computer vision, and virtual and augmented reality. With the help of visual computing, images, models, and graphics are captured, processed, and used for all conceivable computer-based applications. The researchers correlate graphical application data with non-graphical data, computationally enriching images, videos, and 3D models with text, sound, and speech. This in turn yields new insights that can be turned into innovative products and services, for which correspondingly advanced dialog techniques are designed. Through its numerous innovations, Fraunhofer IGD raises the interaction between humans and machines to a new level.

Show publication details

Nazemi, Kawa; Kuijper, Arjan; Hutter, Marco; Kohlhammer, Jörn; Fellner, Dieter W.

Measuring Context Relevance for Adaptive Semantics Visualizations

2014

Lindstaedt, Stefanie (Ed.) et al.: i-KNOW 2014 : Proceedings of the 14th International Conference on Knowledge Technologies and Data-driven Business. New York: ACM, 2014. (ACM International Conference Proceedings Series 889), Article 14, 8 p.

International Conference on Knowledge Technologies and Data-driven Business (I-KNOW) <14, 2014, Graz, Austria>

Semantics visualizations enable the acquisition of information to amplify the acquisition of knowledge. The dramatic increase of semantics in the form of Linked Data and Linked Open Data yields search databases that allow visualizing the entire context of search results. The visualization of this semantic context enables one to gather more information at once, but the complex structures may also confuse and frustrate users. To overcome these problems, adaptive visualizations already provide some useful methods to adapt the visualization to users' demands and skills. Although these methods are very promising, such systems do not investigate the relevance of neighboring semantic entities, which commonly carry most of the information value. We introduce two new measurements for the relevance of neighboring entities: the Inverse Instance Frequency allows weighting the relevance of semantic concepts based on the number of their instances, and the Direct Relation Frequency inverse Relations Frequency measures the relevance of neighboring instances by the type of semantic relations. Both measurements provide a weighting of the neighboring entities of a selected semantic instance and enable an adaptation of retinal variables for the visualized graph. The algorithms can easily be integrated into adaptive visualizations and enhance them with the relevance measurement of neighboring semantic entities. We give a detailed description of the algorithms to enable replication by the adaptive and semantics visualization community. With our method, one can now easily derive the relevance of the neighboring semantic entities of selected instances, and thus gain more information at once, without confusing and frustrating users.
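The Inverse Instance Frequency can be sketched in the spirit of the classic inverse document frequency: concepts with few instances receive a higher relevance weight. The exact normalisation used in the paper may differ; the log form and the toy concept counts below are assumptions for illustration.

```python
import math

def inverse_instance_frequency(instances_per_concept):
    """IDF-style weight per concept: rarer concepts (fewer instances) weigh more.
    Illustrative log normalisation, not necessarily the paper's exact formula."""
    total = sum(instances_per_concept.values())
    return {c: math.log(total / n) for c, n in instances_per_concept.items()}

# Toy instance counts for three semantic concepts in a Linked Data set.
counts = {"Person": 9000, "City": 900, "Dialect": 100}
w = inverse_instance_frequency(counts)
assert w["Dialect"] > w["City"] > w["Person"]  # rare concepts get higher relevance
```

Such weights can then drive retinal variables (size, saturation) of neighboring entities in the visualized graph.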

Show publication details

Stenin, Igor; Hansen, Stefan; Becker, Meike; Sakas, Georgios; Fellner, Dieter W.; Klenzner, Thomas; Schipper, Jörg

Minimally Invasive Multiport Surgery of the Lateral Skull Base

2014

BioMed Research International, Vol.2014 (2014), Article ID 379295, 7 p.

Objective: Minimally invasive procedures minimize iatrogenic tissue damage and lead to a lower complication rate and high patient satisfaction. To date only experimental minimally invasive single-port approaches to the lateral skull base have been attempted. The aim of this study was to verify the feasibility of a minimally invasive multiport approach for advanced manipulation capability and visual control, and to develop a software tool for preoperative planning. Methods: Anatomical 3D models were extracted from twenty regular temporal bone CT scans. Collision-free trajectories, targeting the internal auditory canal, round window, and petrous apex, were simulated with a specially designed planning software tool. A set of three collision-free trajectories was selected by skull base surgeons with regard to maximizing the distance to critical structures and the angles between the trajectories. Results: A set of three collision-free trajectories could be successfully simulated to the three targets in each temporal bone model without violating critical anatomical structures. Conclusion: A minimally invasive multiport approach to the lateral skull base is feasible. The developed software is the first step for preoperative planning. Further studies will focus on cadaveric and clinical translation.
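The collision-free trajectory test at the heart of such planning can be sketched as a segment-versus-sphere clearance check. Here the risk structures are reduced to toy bounding spheres and the safety margin is invented for illustration; the actual planning tool works on segmented anatomical models.

```python
import math

def seg_point_dist(a, b, p):
    """Minimum distance from point p to the segment a-b (all 3D tuples)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(x * x for x in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(closest, p)

def collision_free(entry, target, risk_spheres, margin=0.5):
    """A drill trajectory is admissible if it clears every risk structure
    (modelled here as bounding spheres (center, radius)) by a safety margin."""
    return all(seg_point_dist(entry, target, c) > r + margin for c, r in risk_spheres)

# Hypothetical risk structure as a bounding sphere (units arbitrary).
facial_nerve = ((0.0, 5.0, 0.0), 1.0)
assert collision_free((0, 0, 0), (10, 0, 0), [facial_nerve])      # clears the sphere
assert not collision_free((0, 0, 0), (0, 10, 0), [facial_nerve])  # drills through it
```

Enumerating entry points on the skull surface and keeping only admissible segments yields the candidate set from which the surgeons select trajectory triples.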

Show publication details

Schinko, Christoph; Ullrich, Torsten; Fellner, Dieter W.

Modeling with High-level Descriptions and Low-level Details

2014

Blashki, Katherine (Ed.) et al.: Proceedings of the International Conferences on Interfaces and Human Computer Interaction 2014, Game and Entertainment Technologies 2014 and Computer Graphics, Visualization, Computer Vision and Image Processing 2014 : Part of the Multi Conference on Computer Science and Information Systems, MCCSIS 2014. IADIS Press, 2014, pp. 328-332

IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP) <8, 2014, Lisbon, Portugal>

Procedural modeling techniques can be used to encode a geometric shape on a high and abstract level: each class of objects and shapes is represented by one algorithm; and each artifact is one set of high-level parameters. In this paper, we use a generative object description and register it to real-world data e.g. laser scans. Afterwards, we can use the fitted procedural model to modify existing 3D shapes. The high-level description can be used to resemble real-world objects or create new ones. In this way, we can design shapes using both low-level details and high-level shape parameters at the same time.

Show publication details

Becker, Meike; Sakas, Georgios (Betreuer); Fellner, Dieter W. (Betreuer); Schipper, Jörg (Betreuer)

Patientenspezifische Planung für die Multi-Port Otobasischirurgie

2014

Darmstadt, TU, Diss., 2014

To date, operations in the region of the lateral skull base (otobasis) have been performed in a highly invasive manner. To reduce the trauma for the patient, a multi-port approach has recently been investigated in which up to three thin drill canals are created from the skull surface to the surgical target. Owing to the minimal invasiveness of the new intervention, visual control by the surgeon is no longer possible, so precise patient-specific planning based on image data is imperative. The focus of this thesis is therefore the planning of a multi-port intervention based on patient-specific models. To generate these models, I first developed methods for segmenting the risk structures of the otobasis in computed tomography data. The challenges here are the small size of the structures, the lack of contrast to the surrounding tissue, and the partly varying shape and image intensity. I therefore propose the use of a model-based approach, the Probabilistic Active Shape Model, which I adapted to the risk structures of the otobasis and evaluated extensively, showing that the segmentation accuracy is in the range of manual segmentation accuracy. Furthermore, I developed methods for the automatic planning of the drill canals based on the patient-specific models obtained by segmentation. The challenge here is that the multi-port intervention is not yet in clinical use, so experience with the new strategy is lacking. A planning tool was therefore first developed that computes a set of admissible drill canals and allows the manual selection of a drill canal combination, with which two physicians carried out an initial feasibility analysis. I formalized the experience and data thus gained and derived a model for the automatic planning of a drill canal combination. The evaluation shows that drill canal combinations comparable to the physicians' manual choice can be computed in this way. This makes computer-assisted planning of a multi-port intervention at the otobasis possible for the first time.

Show publication details

Kahn, Svenja; Fellner, Dieter W. (Betreuer); Stricker, Didier (Betreuer)

Precise Depth Image Based Real-Time 3D Difference Detection

2014

Darmstadt, TU, Diss., 2014

3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. Detecting differences between a real object and a 3D model of this object is for example required for industrial tasks such as prototyping, manufacturing and assembly control. State of the art approaches for 3D difference detection have the drawback that the difference detection is restricted to a single viewpoint from a static 3D position and that the differences cannot be detected in real time. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous works, with the proposed approach, geometric differences can be detected in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the scan position closer to the object to inspect details or to bypass occlusions. The main research questions addressed by this thesis are: Q1 How can 3D differences be detected in real time and from arbitrary viewpoints using a single depth camera? Q2 Extending the first question, how can 3D differences be detected with a high precision? Q3 Which accuracy can be achieved with concrete setups of the proposed concept for real time, depth image based 3D difference detection? This thesis answers Q1 by introducing a real-time approach for depth image based 3D difference detection. The real-time difference detection is based on an algorithm which maps the 3D measurements of a depth camera onto an arbitrary 3D model in real time by fusing computer vision (depth imaging and pose estimation) with a computer graphics based analysis-by-synthesis approach. Then, this thesis answers Q2 by providing solutions for enhancing the 3D difference detection accuracy, both by precise pose estimation and by reducing depth measurement noise.
A precise variant of the 3D difference detection concept is proposed, which combines two main aspects. First, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a very precise coordinate measuring machine. Second, measurement noise of the captured depth images is reduced and missing depth information is filled in by extending the 3D difference detection with 3D reconstruction. The accuracy of the proposed 3D difference detection is quantified by a ground-truth based, quantitative evaluation. This provides an answer to Q3. The accuracy is evaluated both for the basic setup and for the variants that focus on a high precision. The quantitative evaluation using real-world data covers both the accuracy which can be achieved with a time-of-flight camera (SwissRanger 4000) and with a structured light depth camera (Kinect). With the basic setup and the structured light depth camera, differences of 8 to 24 millimeters can be detected from one meter measurement distance. With the enhancements proposed for precise 3D difference detection, differences of 4 to 12 millimeters can be detected from one meter measurement distance using the same depth camera. By solving the challenges described by the three research questions, this thesis provides a solution for precise real-time 3D difference detection based on depth images. With the approach proposed in this thesis, dense 3D differences can be detected in real time and from arbitrary viewpoints using a single depth camera. Furthermore, by coupling the depth camera with a coordinate measuring machine and by integrating 3D reconstruction in the 3D difference detection, 3D differences can be detected in real time and with a high precision.
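The analysis-by-synthesis core of the approach can be sketched as a per-pixel comparison between a captured depth image and a depth image rendered from the 3D model at the estimated camera pose. The 2x2 images, the threshold, and the `None` convention for missing depth below are toy assumptions for illustration.

```python
def detect_differences(measured, rendered, threshold=0.012):
    """Per-pixel geometric difference: compare a captured depth image with a
    synthetic depth image rendered from the 3D model at the estimated camera
    pose. Values are in metres; None marks pixels without a depth measurement."""
    diffs = []
    for y, (row_m, row_r) in enumerate(zip(measured, rendered)):
        for x, (m, r) in enumerate(zip(row_m, row_r)):
            if m is None or r is None:
                continue  # no measurement here, or model not visible
            if abs(m - r) > threshold:
                diffs.append((x, y, m - r))  # deviation exceeds the noise threshold
    return diffs

measured = [[1.000, 1.002], [1.030, None]]   # captured depth image (toy, metres)
rendered = [[1.001, 1.001], [1.000, 1.000]]  # depth rendered from the 3D model
deviations = detect_differences(measured, rendered)  # flags the 30 mm deviation at (0, 1)
```

Running this comparison for every incoming depth frame, at the current estimated pose, is what makes the difference detection viewpoint-independent and real-time capable.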

Show publication details

Caldera, Christian; Berndt, Rene; Eggeling, Eva; Schröttner, Martin; Fellner, Dieter W.

PRIMA - Towards an Automatic Review/Paper Matching Score Calculation

2014

Sehring, Hans-Werner (Ed.) et al.: CONTENT 2014 : The Sixth International Conference on Creative Content Technologies [online]. [cited 18 June 2015] Available from: http://www.thinkmind.org/index.php?view=instance&instance=CONTENT+2014: ThinkMind, 2014, pp. 71-75

International Conference on Creative Content Technologies (CONTENT) <6, 2014, Venice, Italy>

Programme chairs of scientific conferences face tremendous time pressure. One of the most time-consuming steps in the conference workflow is assigning members of the international programme committee (IPC) to the received submissions. Finding the best-suited reviewers depends strongly on how well a paper matches the expertise of each IPC member. While various approaches such as "bidding" or "topic matching" exist to make this expertise explicit, they allocate a considerable amount of resources on the IPC member side. This paper introduces the Paper Rating and IPC Matching Tool (PRIMA), which reduces the workload for both IPC members and chairs in order to support and improve the assignment process.
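A paper/reviewer matching score of the kind the abstract describes can be sketched as a topic-coverage measure. The actual scoring used in PRIMA is not specified here, so the coverage formula and the toy topic sets below are purely illustrative.

```python
def match_score(paper_topics, reviewer_topics):
    """Toy matching score: fraction of the paper's topics covered by the
    reviewer's declared expertise (illustrative, not the PRIMA formula)."""
    if not paper_topics:
        return 0.0
    return len(paper_topics & reviewer_topics) / len(paper_topics)

def rank_reviewers(paper_topics, ipc):
    """Return IPC members sorted by descending match with the submission."""
    return sorted(ipc, key=lambda name: match_score(paper_topics, ipc[name]), reverse=True)

# Hypothetical IPC with declared expertise topics.
ipc = {"A": {"rendering", "gpu"}, "B": {"retrieval", "shape"}, "C": {"gpu"}}
order = rank_reviewers({"gpu", "rendering"}, ipc)
assert order[0] == "A"  # full topic coverage ranks first
```

Computing such scores automatically is what removes the per-submission effort that bidding places on each IPC member.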

Show publication details

Zmugg, René; Thaller, Wolfgang; Krispel, Ulrich; Edelsbrunner, Johannes; Havemann, Sven; Fellner, Dieter W.

Procedural Architecture Using Deformation-aware Split Grammars

2014

The Visual Computer, Vol.30 (2014), 9, pp. 1009-1019. Published online: 29 December 2013

With video games growing in scale, manual content creation may no longer be feasible in the future. Split grammars are a promising technology for the large-scale procedural generation of urban structures, which are very common in video games. Buildings with curved parts, however, can currently only be approximated by static pre-modelled assets, and rules apply only to planar surface parts. We present an extension to split grammar systems that allows the creation of curved architecture through the integration of free-form deformations at any level of a grammar. Subsequent split rules can then proceed in two different ways: they can either adapt to these deformations, so that repetitions adjust to more or less space while maintaining length constraints, or they can split the deformed geometry with straight planes to introduce straight structures on deformed geometry.
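The first of the two rule behaviours, a repeat split that adapts to deformed space while maintaining length constraints, can be sketched as choosing a repetition count whose module width stays within given bounds (all numbers below are illustrative, not taken from the paper).

```python
def adaptive_repeat(length, preferred, min_w, max_w):
    """Deformation-aware repeat split: pick a repetition count whose module
    width stays within [min_w, max_w] and lies closest to the preferred width."""
    best = None
    for n in range(max(1, int(length // max_w)), int(length // min_w) + 1):
        w = length / n
        if min_w <= w <= max_w:
            if best is None or abs(w - preferred) < abs(best[1] - preferred):
                best = (n, w)
    return best  # (repetitions, module width), or None if the constraints fail

# A facade strip stretched by a free-form deformation from 10 m to 12.6 m:
# window modules preferred at 2 m, allowed between 1.5 m and 2.5 m.
split = adaptive_repeat(12.6, preferred=2.0, min_w=1.5, max_w=2.5)
```

After the deformation, the rule settles on six modules of about 2.1 m each instead of the five 2 m modules that fit the undeformed strip, which is exactly the "repetitions adjust to more or less space" behaviour.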

Show publication details

Havemann, Sven; Wagener, Olaf; Fellner, Dieter W.

Procedural Shape Modeling in Digital Humanities: Potentials and Issues

2014

Ioannides, Marinos (Ed.) et al.: 3D Research Challenges in Cultural Heritage : A Roadmap in Digital Heritage Preservation. Berlin, Heidelberg, New York: Springer, 2014. (Lecture Notes in Computer Science (LNCS) 8355), pp. 64-77

Procedural modeling is a technology that has great potential to make the abundant variety of shapes that have to be dealt with in Digital Humanities accessible and understandable. There is a gap, however, between the technology on the one hand and the needs and requirements of the Humanities community on the other. In this paper we analyze the reasons for the limited uptake of procedural modeling and sketch possible ways to circumvent the problem. The key insight is that we have to find matching concepts in both fields which are grounded in the way shape is explained, e.g., in art history, but which can also be formalized to make them accessible to digital computers.

Show publication details

Silva, Nelson; Settgast, Volker; Eggeling, Eva; Grill, Florian; Zeh, Theodor; Fellner, Dieter W.

Sixth Sense - Air Traffic Control Prediction Scenario Augmented by Sensors

2014

Lindstaedt, Stefanie (Ed.) et al.: i-KNOW 2014 : Proceedings of the 14th International Conference on Knowledge Technologies and Data-driven Business. New York: ACM, 2014. (ACM International Conference Proceedings Series 889), Article 34, 4 p.

International Conference on Knowledge Technologies and Data-driven Business (I-KNOW) <14, 2014, Graz, Austria>

This paper focuses on the fault tolerance of human-machine interfaces in the field of air traffic control (ATC) by accepting the user's overall body language as input. We describe ongoing work in the Sixth Sense project. Interaction patterns are inferred from the combination of a recommendation and inference engine, the analysis of several graph database relationships, and aggregations of raw data from multiple sensors. Together, these techniques allow us to judge the different possible meanings of the user's current interaction and cognitive state. The results obtained from applying different machine learning techniques are used to make recommendations and predictions about the user's actions; they are currently monitored and rated by a human supervisor.

Show publication details

Limper, Max; Thöner, Maik; Behr, Johannes; Fellner, Dieter W.

SRC - A Streamable Format for Generalized Web-based 3D Data Transmission

2014

Polys, Nicholas F. (General Chair) et al.: Proceedings Web3D 2014 : 19th International Conference on 3D Web Technology. New York: ACM, 2014, pp. 35-43

International Conference on 3D Web Technology (WEB3D) <19, 2014, Vancouver, BC, Canada>

A problem that still remains with today's technologies for 3D asset transmission is the lack of progressive streaming of all relevant mesh and texture data with a minimal number of HTTP requests. Existing solutions, like glTF or X3DOM's geometry formats, either send all data within a single batch or introduce an unnecessarily large number of requests. Furthermore, there is still no established format for a joint, interleaved transmission of geometry and texture data. In this paper, we propose a new container file format, entitled Shape Resource Container (SRC). Our format is optimized for progressive, Web-based transmission of 3D mesh data with a minimum number of HTTP requests. It is highly configurable, and more powerful and flexible than previous formats, as it enables a truly progressive transmission of geometry data, partial sharing of geometry between meshes, direct GPU uploads, and an interleaved transmission of geometry and texture data. We also demonstrate how our new mesh format, as well as a wide range of other mesh formats, can be conveniently embedded in X3D scenes using a new, minimalistic X3D ExternalGeometry node.

Show publication details

Ullrich, Torsten; Fellner, Dieter W.

Statistical Analysis on Global Optimization

2014

MCSI 2014 : 2014 International Conference on Mathematics and Computers in Sciences and in Industry. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2014, pp. 99-106

International Conference on Mathematics and Computers in Sciences and in Industry (MCSI) <2014, Varna, Bulgaria>

The global optimization of a mathematical model determines the best parameters such that a target or cost function is minimized. Optimization problems arise in almost all scientific disciplines (operations research, life sciences, etc.). Only in a few exceptional cases can these problems be solved analytically and exactly, so in practice numerical routines based on approximations have to be used. Such a routine returns a result: a so-called candidate for a global minimum. Unfortunately, the question of whether the candidate represents the optimal solution often remains unanswered. This article presents a simple-to-use statistical analysis that determines and assesses the quality of such a result. This information is valuable and important, especially for practical applications.
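
One simple way to make such an assessment concrete is the multi-start idea: if many independent randomized local searches converge to the candidate's value, confidence in it grows. The sketch below illustrates this general idea only (it is not necessarily the article's method; the test function, descent routine, and thresholds are invented for illustration):

```python
import random

# Multi-start sketch (illustrative only, not the article's method):
# repeatedly run a local search from random starting points and count
# how often the candidate's objective value is reproduced.

def f(x):
    return (x * x - 1.0) ** 2 + 0.1 * x   # two local minima; the global one is near x = -1

def local_descent(x, step=0.1, iters=60):
    """Crude derivative-free descent with a shrinking step size."""
    for _ in range(iters):
        for s in (step, -step):
            while f(x + s) < f(x):
                x += s
        step *= 0.7
    return x

def assess(candidate_value, restarts=50, tol=1e-3):
    """Fraction of random restarts reaching the candidate's value,
    plus the best value seen overall."""
    random.seed(0)                          # reproducible illustration
    hits, best = 0, float("inf")
    for _ in range(restarts):
        v = f(local_descent(random.uniform(-3.0, 3.0)))
        best = min(best, v)
        if abs(v - candidate_value) < tol:
            hits += 1
    return hits / restarts, best

candidate = f(local_descent(-1.0))          # objective value of our candidate
rate, best = assess(candidate)              # a large hit rate with best close to
                                            # candidate supports (but cannot
                                            # prove) global optimality
```

A low hit rate, or a restart finding a value below the candidate, is direct statistical evidence that the candidate is not the global minimum.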

Show publication details

Santos, Pedro; Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

The Potential of 3D Internet in the Cultural Heritage Domain

2014

Ioannides, Marinos (Ed.) et al.: 3D Research Challenges in Cultural Heritage : A Roadmap in Digital Heritage Preservation. Berlin, Heidelberg, New York: Springer, 2014. (Lecture Notes in Computer Science (LNCS) 8355), pp. 1-17

Europe is rich in cultural heritage, but unfortunately many of the tens of millions of artifacts remain in archives. Many of these resources have been collected to preserve our history and to understand their historical context. Nevertheless, CH institutions are able neither to document all the collected resources nor to exhibit them. Additionally, many of these CH resources are unique and will be on public display only occasionally. Hence, access to and engagement with this kind of cultural resource is important for European culture and the legacy of future generations. However, the technology needed to economically mass-digitize and annotate 3D artifacts, in analogy to the digitization and annotation of books and paintings, has yet to be developed. Likewise, approaches to semantic enrichment and storage of 3D models along with metadata are just emerging. This paper presents challenges and trends to overcome these issues and demonstrates the latest developments for annotation of 3D artifacts and their subsequent export to Europeana, the European digital library, for integrated, interactive 3D visualization within regular web browsers, taking advantage of technologies such as WebGL and X3D.

Show publication details

Sturm, Werner; Berndt, Rene; Halm, Andreas; Ullrich, Torsten; Eggeling, Eva; Fellner, Dieter W.

Time-based Visualization of Large Data-Sets. An Example in the Context of Automotive Engineering

2014

International Journal on Advances in Software, Vol.7 (2014), 1-2, pp. 139-149

Automotive systems can be very complex when using multiple forms of energy. To achieve better energy efficiency, engineers require specialized tools to cope with that complexity and to comprehend how energy is spread and consumed. This is especially essential for developing hybrid systems, which generate electricity from various available forms of energy. Therefore, highly specialized visualizations of multiple measured energies are needed. This paper examines several three-dimensional glyph-based visualization techniques for spatial multivariate data. Besides animated glyphs, two-dimensional visualization techniques for temporal data that allow detailed trend analysis are considered as well. Investigations revealed that Scaled Data-Driven Spheres are best suited for a detailed 3D exploration of measured data. To gain a better overview of the spatial data, Cumulative Glyphs are introduced. For trend analysis, Theme River and Stacked Area Graphs are used. All these visualization techniques are implemented as a web-based prototype, without the need for additional browser plugins, using X3DOM and Data-Driven Documents.

Show publication details

Bender, Jan; Kuijper, Arjan; Landesberger, Tatiana von; Theisel, Holger; Urban, Philipp; Fellner, Dieter W.; Goesele, Michael; Roth, Stefan

VMV 2014: Vision, Modeling, and Visualization

2014

Goslar : Eurographics Association, 2014

Workshop on Vision, Modeling, and Visualization (VMV) <19, 2014, Darmstadt, Germany>

VMV is a unique event that brings together scientists and practitioners interested in the interdisciplinary fields of computer vision and computer graphics, with special emphasis on the link between the disciplines. It offers researchers the opportunity to discuss a wide range of different topics within an open, international and interdisciplinary environment, and has done so successfully for many years.

Show publication details

Doulamis, Anastasios; Ioannides, Marinos; Doulamis, Nikolaos; Hadjiprocopis, Andreas; Fritsch, Dieter; Balet, Olivier; Julien, Martine; Protopapadakis, Eftychios; Makantasis, Kostas; Weinlinger, Guenther; Johnsons, Paul S.; Klein, Michael; Fellner, Dieter W.; Stork, André; Santos, Pedro

4D Reconstruction of the Past

2013

Hadjimitsis, Diofantos G. (Ed.) et al.: First International Conference on Remote Sensing and Geoinformation of the Environment : RSCy 2013. Bellingham: SPIE Press, 2013. (Proceedings of SPIE 8795), pp. 87950J-1 - 87950J-11

International Conference on Remote Sensing and Geoinformation of the Environment (RSCy) <1, 2013, Paphos, Cyprus>

One of the main characteristics of the Internet era we live in is the free, online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web 3.0 standard. Search engines can search text for keywords using algorithms of varied intelligence and with limited success. Searching images is a much more complex and computationally intensive task, but some initial steps have already been made in this direction, mainly in face recognition. This paper describes our proposed pipeline for integrating data available in Internet repositories and social media, such as photographs, animation and text, to produce 3D models of archaeological monuments, as well as enriching multimedia of cultural/archaeological interest with metadata and harvesting the end products into EUROPEANA. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from the thousands of images floating around the web.

Show publication details

Wientapper, Folker; Wuest, Harald; Rojtberg, Pavel; Fellner, Dieter W.

A Camera-Based Calibration for Automotive Augmented Reality Head-Up-Displays

2013

IEEE Computer Society Visualization and Graphics Technical Committee (VGTC): 12th IEEE International Symposium on Mixed and Augmented Reality 2013. : ISMAR 2013. Los Alamitos, Calif.: IEEE Computer Society, 2013, pp. 189-197

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <12, 2013, Adelaide, SA, Australia>

Using head-up displays (HUDs) for Augmented Reality requires an accurate internal model of the image generation process, so that 3D content can be visualized perspectively correct from the viewpoint of the user. We present a generic and cost-effective camera-based calibration for an automotive HUD which uses the windshield as a combiner. Our proposed calibration model encompasses the view-independent spatial geometry, i.e., the exact location, orientation and scaling of the virtual plane, and a view-dependent image-warping transformation for correcting the distortions caused by the optics and the irregularly curved windshield. View dependency is achieved by extending the classical polynomial distortion model for cameras and projectors to a generic five-variate mapping with the head position of the viewer as additional input. The calibration involves capturing an image sequence from varying viewpoints while displaying a known target pattern on the HUD. The accurate registration of the camera path is retrieved with state-of-the-art vision-based tracking. As all necessary data is acquired directly from the images, no external tracking equipment needs to be installed. After calibration, the HUD can be used together with a head tracker to form a head-coupled display which ensures a perspectively correct rendering of any 3D object in vehicle coordinates from a large range of possible viewpoints. We evaluate the accuracy of our model quantitatively and qualitatively.

Show publication details

Thaller, Wolfgang; Krispel, Ulrich; Zmugg, René; Havemann, Sven; Fellner, Dieter W.

A Graph-Based Language for Direct Manipulation of Procedural Models

2013

International Journal on Advances in Software, Vol.6 (2013), 3-4, pp. 225-236

Creating 3D content requires a lot of expert knowledge and is often a very time consuming task. Procedural modeling can simplify this process for several application domains. However, creating procedural descriptions is still a complicated task. Graph based visual programming languages can ease the creation workflow; however, direct manipulation of procedural 3D content rather than of a visual program is desirable as it resembles established techniques in 3D modeling. In this paper, we present a dataflow language that features novel contributions towards direct interactive manipulation of procedural 3D models: We eliminate the need to manually program loops (via implicit handling of nested repetitions), we introduce partial reevaluation strategies for efficient execution, and we show the integration of stateful external libraries (scene graphs) into the dataflow model of the proposed language.

Show publication details

Zmugg, René; Krispel, Ulrich; Thaller, Wolfgang; Havemann, Sven; Pszeida, Martin; Fellner, Dieter W.

A New Approach for Interactive Procedural Modelling in Cultural Heritage

2013

Earl, Graeme (Ed.) et al.: Archaeology in the Digital Era Volume II : e-Papers from the 40th Conference on Computer Applications and Quantitative Methods in Archaeology. [cited 02 June 2015] Available from: http://dare.uva.nl/aup/en/record/500958: Amsterdam University Press, 2013, pp. 190-204

Conference on Computer Applications and Quantitative Methods in Archaeology (CAA) <40, 2012, Southampton, England>

We present a novel approach for the efficient interactive creation of procedural 3D models for creating synthetic 3D reconstructions in cultural heritage. The benefit for the CH community is a 3D modelling tool that scales better than conventional forward modellers (like SketchUp) but does not require actual coding (like CityEngine). Our tool uses the split grammar approach, but it allows for code-free, direct model manipulation in 3D. To build up the procedural model we propose a modelling by example method that combines the intuitiveness of direct 3D interaction with almost the flexibility of programming. The user can interactively browse through the refinement hierarchy, apply rules and change parameters visually. The tool provides only a small set of well-defined modelling operations. The question is: Are they sufficient for the domain of classical architecture? For our case study we have chosen a prime example of classical architecture, the Louvre.

Show publication details

Pan, Xueming; Schröttner, Martin; Havemann, Sven; Schiffer, Thomas; Berndt, Rene; Hecher, Martin; Fellner, Dieter W.

A Repository Infrastructure for Working with 3D Assets in Cultural Heritage

2013

International Journal of Heritage in the Digital Era, Vol.2 (2013), 1, pp. 144-166

The development of a European market for digital cultural heritage assets is impeded by the lack of a suitable digital marketplace, i.e., a commonly accepted exchange platform for digital assets. We have developed the technology for such a platform over the last two years: the 3D-COFORM Repository Infrastructure (RI) is a secure content-management infrastructure for the distributed processing of large-volume datasets. Three key features of this system are that (1) owners have complete control over their data, (2) binary data must have attached metadata, and (3) processing histories are documented. Our system can support the complete production pipeline for digital assets, from data acquisition (photo, 3D scan) through processing (cleaning, hole filling) to interactive presentation and content delivery over the Internet. In this paper we present the components of the system and their interplay. One particular focus of the software development was to make it as easy as possible to connect client-side applications to the RI. Therefore we present the RI API in some detail and describe several RI-enabled client-side applications that use it.

Show publication details

Nazemi, Kawa; Retz, Reimond; Bernard, Jürgen; Kohlhammer, Jörn; Fellner, Dieter W.

Adaptive Semantic Visualization for Bibliographic Entries

2013

Bebis, George (Ed.) et al.: Advances in Visual Computing. 9th International Symposium, ISVC 2013 : Proceedings, Part II. Berlin, Heidelberg, New York: Springer, 2013. (Lecture Notes in Computer Science (LNCS) 8034), pp. 13-24

International Symposium on Visual Computing (ISVC) <9, 2013, Rethymnon, Crete, Greece>

Adaptive visualizations aim to reduce the complexity of visual representations and convey information through interactive visualizations. Although research on adaptive visualizations has grown in recent years, existing approaches do not make use of the full variety of adaptable visual variables. Further, existing approaches often presuppose experts who have to model the initial visualization design. In addition, current approaches incorporate either user behavior or data types; to our knowledge, a combination of both has not been proposed. This paper introduces the instantiation of our previously proposed model that combines both: it involves different influencing factors and adapts various levels of visual peculiarities, on visual layout and visual presentation, in a multiple-visualization environment. Based on data type and user behavior, our system adapts a set of applicable visualization types. Moreover, the retinal variables of each visualization type are adapted to meet individual or canonical requirements on both data types and user behavior. Our system does not require initial expert modeling.

Show publication details

Rahman, Sami ur; Fellner, Dieter W. (Betreuer); Völker, Wolfram (Betreuer)

An Image Processing Based Patient-Specific Optimal Catheter Selection

2013

Darmstadt, TU, Diss., 2012

Coronary angiography is performed to investigate coronary diseases of the human heart. For better visualization of the arteries, a catheter is used to inject a contrast dye into the coronary arteries. Due to the anatomical variation of the aorta and the coronary arteries between humans, one common catheter cannot be used for all patients. Cardiologists test different catheters for a patient and select the best one according to the patient's anatomy. To overcome these problems, we propose a computer-aided catheter selection procedure. The basic idea of this approach is to obtain MR/CT images before starting angiography. From these images, the patient's arteries are segmented and geometric parameters are computed from the segmented images; at the same time, geometric parameters are computed for the available catheters. A model based on these parameters from the patients' image data and from the catheters is developed, which reduces the number of catheter choices. In the next step, the reduced set of catheters is simulated and the best-suited catheter is determined. A series of validation tests was conducted for segmentation, geometric parameter estimation, parameter-based catheter selection, and the simulation model. In our experiments, we compared catheters selected in the clinic with the catheters suggested by the image-processing-based model. For these experiments, the ground-truth data were obtained from the clinical partner. In the clinic, angiography was performed in twenty-four cases. An experienced cardiologist selected catheters based on his experience and knowledge in the field. In the next step, CT/MR image data acquired prior to the angiography was used with the image-based catheter selection model to find optimal catheters. For every patient, the model suggested its three best-suited catheters, ranked first, second, and third.
Catheters suggested by the model were compared with the catheters selected by the cardiologist. In 41% of cases, the model's top-ranked suggestion was the same catheter that was used in the clinic. In 25% of cases, the catheter used in the clinic was the model's second-ranked catheter, and in 21% of cases its third-ranked catheter. In 13% of cases, the catheter used in the clinic was not in the list of suggestions. In further experiments, the clinicians graded catheters based on performance and placement in the arteries: optimally placed catheters were assigned good grades, less optimal catheters bad grades. The model suggested catheters similar to the clinically well-graded catheters but different from the badly graded ones. All these experiments showed that image-processing-based catheter selection is clinically applicable; the only requirement is to have the patient's image data before starting the angiography. This tool can thus be of great help for experienced as well as inexperienced cardiologists in obtaining a catheter suggestion before starting the angiography.

Show publication details

Bockholt, Ulrich; Wientapper, Folker; Wuest, Harald; Fellner, Dieter W.

Augmented-Reality-basierte Interaktion mit Smartphone-Systemen zur Unterstützung von Servicetechnikern

2013

at - Automatisierungstechnik, Vol.61 (2013), 11, pp. 793-799

Smartphone systems require new interaction paradigms that exploit the integrated sensors (GPS, inertial sensors, compass) but, in particular, build on the smartphone camera with which the surroundings are recorded. In this context, Augmented Reality methods offer great potential, especially for industrial applications in maintenance and repair work.

Show publication details

Kahn, Svenja; Keil, Jens; Müller, Benedikt; Bockholt, Ulrich; Fellner, Dieter W.

Capturing of Contemporary Dance for Preservation and Presentation of Choreographies in Online Scores

2013

2013 Digital Heritage International Congress. Volume 1 : DigitalHeritage. New York: The Institute of Electrical and Electronics Engineers (IEEE), 2013, pp. 273-280

Digital Heritage International Congress (DigitalHeritage) <2013, Marseille, France>

In this paper, we present a generic and affordable approach for automated, markerless capturing of movements in dance, developed in the Motion Bank / The Forsythe Company project (www.motionbank.org). Within Motion Bank, we consider the complete digitalization workflow, starting with the setup of the camera array and ending with a web-based presentation of "Online Scores" visualizing different elements of choreography. Within our project, we have used our technology in two modern dance projects: one "Large Motion Space Performance" covering a large stage in solos and trios, and one "Restricted Motion Space Performance" suited to being captured with range cameras. The project is realized in close cooperation with different choreographers and dance companies of modern ballet and with multimedia artists creating the visual representations of dance.

Show publication details

Caldera, Christian; Berndt, Rene; Fellner, Dieter W.

COMFy - A Conference Management Framework

2013

Lavesson, Niklas (Ed.) et al.: Mining the Digital Information Networks : Proceedings of the 17th International Conference on Electronic Publishing. Amsterdam; Berlin: IOS Press, 2013, pp. 45-54

International Conference on Electronic Publishing (ELPUB) <17, 2013, Karlskrona, Sweden>

Organizing the peer review process for a scientific conference can be a cumbersome task. Electronic conference management systems support chairs and reviewers in managing the huge number of submissions. These systems implement the complete workflow of a scientific conference. We present a new approach to such systems: by providing an open API framework instead of a closed system, it enables external programs to harvest and utilize the open information sources available on the Internet today.

Show publication details

Thaller, Wolfgang; Zmugg, René; Krispel, Ulrich; Posch, Martin; Havemann, Sven; Fellner, Dieter W.

Creating Procedural Window Building Blocks Using the Generative Fact Labeling Method

2013

Boehm, Jan (Ed.) et al.: 3D-ARCH 2013 : 3D Virtual Reconstruction and Visualization of Complex Architectures. [cited 07 April 2014] Available from: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5-W1/, 2013. (The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5/W1), pp. 235-242

ISPRS International Workshop 3D-ARCH <5, 2013, Trento, Italy>

The generative surface reconstruction problem can be stated like this: Given a finite collection of 3D shapes, create a small set of functions that can be combined to generate the given shapes procedurally. We propose generative fact labeling (GFL) as an attempt to organize the iterative process of shape analysis and shape synthesis in a systematic way. We present our results for the reconstruction of complex windows of neo-classical buildings in Graz, followed by a critical discussion of the limitations of the approach.

Show publication details

Havemann, Sven; Edelsbrunner, Johannes; Wagner, Philipp; Fellner, Dieter W.

Curvature-Controlled Curve Editing Using Piecewise Clothoid Curves

2013

Computers & Graphics, Vol.37 (2013), 6, pp. 764-773

International Conference on Shape Modeling and Applications (SMI) <15, 2013, Bournemouth, UK>

Two-dimensional curves are conventionally designed using splines or Bézier curves. Although formally they are C² or higher, the variation of the curvature of (piecewise) polynomial curves is difficult to control; in some cases it is practically impossible to obtain the desired curvature. As an alternative we propose piecewise clothoid curves (PCCs). We show that from the design point of view they have many advantages: control points are interpolated, curvature extrema lie in the control points, and adding control points does not change the curve. We present a fast localized clothoid interpolation algorithm that can also be used for curvature smoothing, for curve fitting, for curvature blending, and even for directly editing the curvature. We give a physical interpretation of variational curvature minimization, from which we derive our scheme. Finally, we demonstrate the achievable quality with a range of examples.
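
For reference, the defining property of a clothoid (Euler spiral) is curvature that varies linearly with arc length, which is what makes curvature directly controllable in a piecewise clothoid curve; the standard textbook form is:

```latex
\kappa(s) = \kappa_0 + c\,s, \qquad
\theta(s) = \theta_0 + \kappa_0 s + \tfrac{c}{2}s^2, \qquad
\mathbf{p}(s) = \mathbf{p}_0 + \int_0^s
  \begin{pmatrix}\cos\theta(t)\\ \sin\theta(t)\end{pmatrix}\,dt,
```

where $s$ is arc length, $\kappa$ the signed curvature, $\theta$ the tangent angle, and the position integral is a pair of Fresnel integrals. Because $\kappa$ is a piecewise linear function of $s$, its extrema occur at the segment endpoints, i.e., at the control points.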

Show publication details

Zmugg, René; Thaller, Wolfgang; Krispel, Ulrich; Edelsbrunner, Johannes; Havemann, Sven; Fellner, Dieter W.

Deformation-Aware Split Grammars for Architectural Models

2013

Mao, Xiaoyang (Ed.) et al.: 2013 International Conference on Cyberworlds : Cyberworlds 2013. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2013, pp. 4-11

International Conference on Cyberworlds (CW) <12, 2013, Yokohama, Japan>

With video games growing in scale, manual content creation may no longer be feasible in the future. Split grammars are a promising technology for large-scale procedural generation of urban structures, which are very common in video games. Buildings with curved parts, however, can currently only be approximated by static pre-modeled assets, and rules apply only to planar surface parts. We present an extension to current split grammar systems that allows the generation of curved architecture through free-form deformations that can be introduced at any level in a grammar. Further subdivision rules can then adapt to these deformations to maintain length constraints, and repetitions can adjust to more or less space.

Show publication details

Aderhold, Andreas; Jung, Yvonne; Wilkosinska, Katarzyna; Fellner, Dieter W.

Distributed 3D Model Optimization for the Web with the Common Implementation Framework for Online Virtual Museums

2013

2013 Digital Heritage International Congress. Volume 2 : DigitalHeritage. New York: The Institute of Electrical and Electronics Engineers (IEEE), 2013, pp. 719-726

Digital Heritage International Congress (DigitalHeritage) <2013, Marseille, France>

Internet services are becoming more ubiquitous, and 3D graphics is increasingly gaining a strong foothold in the Web technology domain. Recently, with WebGL, real-time 3D graphics in the browser became a reality, and most major browsers support WebGL natively today. This makes it possible to create applications like 3D catalogs of artifacts, or to interactively explore Cultural Heritage objects in a Virtual Museum on mobile devices. Frameworks like the open-source system X3DOM provide declarative access to low-level GPU routines along with seamless integration of 3D graphics into HTML5 applications through standardized Web technologies. Most 3D models also need to be optimized to address concerns like limited network bandwidth or reduced GPU power on mobile devices. Therefore, an online platform for the development of Virtual Museums, with particular attention to the presentation and visualization of Cultural Heritage assets, was recently proposed. This Common Implementation Framework (CIF) allows the user to upload large 3D models, which are subsequently converted and optimized for web display and embedded in an HTML5 application that can range from a simple interactive display of the model to an entire virtual environment such as a virtual walk-through. Generating these various types of applications is done via a templating mechanism, which is further elaborated in this paper. Moreover, efficiently converting many large models into an optimized form requires a substantial amount of computing power, which a single system cannot provide in a timely fashion. Therefore, we also describe how the CIF can utilize a dynamically allocated cloud-based or physical cluster of commodity hardware to distribute the workload of model optimization for the Web.

Show publication details

Weber, Daniel; Bender, Jan; Schnös, Markus; Stork, André; Fellner, Dieter W.

Efficient GPU Data Structures and Methods to Solve Sparse Linear Systems in Dynamics Applications

2013

Computer Graphics Forum, Vol.32 (2013), 1, pp. 16-26

We present graphics processing unit (GPU) data structures and algorithms to efficiently solve sparse linear systems that are typically required in simulations of multi-body systems and deformable bodies. Thereby, we introduce an efficient sparse matrix data structure that can handle arbitrary sparsity patterns and outperforms current state-of-the-art implementations for sparse matrix vector multiplication. Moreover, an efficient method to construct global matrices on the GPU is presented where hundreds of thousands of individual element contributions are assembled in a few milliseconds. A finite-element-based method for the simulation of deformable solids as well as an impulse-based method for rigid bodies are introduced in order to demonstrate the advantages of the novel data structures and algorithms. These applications share the characteristic that a major computational effort consists of building and solving systems of linear equations in every time step. Our solving method results in a speed-up factor of up to 13 in comparison to other GPU methods.
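
For readers unfamiliar with the underlying kernel, below is a plain CPU sketch of a compressed sparse row (CSR) matrix-vector multiply, the operation such GPU solvers accelerate. This is a generic illustration only; the paper's own data structure is a GPU-specific format designed for arbitrary sparsity patterns:

```python
# Plain CPU sketch of CSR sparse matrix-vector multiply (generic
# illustration, not the paper's GPU data structure). CSR stores the
# nonzeros row by row: row_ptr[i]..row_ptr[i+1] indexes row i's entries.

def csr_matvec(row_ptr, col_idx, vals, x):
    """y = A @ x for A stored in CSR form."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):                # on a GPU: one thread group per row
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# 3x3 example matrix:  [[4, 0, 1],
#                       [0, 3, 0],
#                       [1, 0, 2]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals = [4.0, 1.0, 3.0, 1.0, 2.0]
y = csr_matvec(row_ptr, col_idx, vals, [1.0, 1.0, 1.0])
```

Iterative solvers such as conjugate gradients spend most of their time in exactly this product, which is why its GPU layout dominates overall solver performance.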

Show publication details

Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Embodiment Discrete Processing

2013

Abramovici, Michael (Ed.) et al.: Smart Product Engineering : Proceedings of the 23rd CIRP Design Conference. Berlin, Heidelberg, New York: Springer, 2013. (Lecture Notes in Production Engineering (LNPE)), pp. 421-429

International CIRP Design Conference <23, 2013, Bochum, Germany>

The phases of the embodiment stage are conceived sequentially, and in some domains even cyclically. Nevertheless, there is no seamless integration between them, causing longer development processes, increasing time lags, loss of momentum, misunderstandings, and conflicts. Embodiment Discrete Processing enables the seamless integration of three building blocks. 1) Dynamic Discrete Representation: it is capable of concurrently handling the design and analysis phases. 2) Dynamic Discrete Design: it deals with the needed modeling operations while keeping the discrete shape consistent. 3) Dynamic Discrete Analysis: it efficiently maps the dynamic changes of the shape within the design phase, while streamlining the interpretation processes. These integrated building blocks support multidisciplinary work between designers and analysts, which was previously uncommon. They create a new understanding of integral processing, whose phases were formerly regarded as independent. Finally, this opens new opportunities toward general-purpose processing.

Show publication details

Sturm, Werner; Berndt, Rene; Halm, Andreas; Ullrich, Torsten; Eggeling, Eva; Fellner, Dieter W.

Energy Balance: A Web-based Visualization of Energy for Automotive Engineering Using X3DOM

2013

Sehring, Hans-Werner: CONTENT 2013 : The Fifth International Conference on Creative Content Technologies [online]. [cited 07 April 2014] Available from: http://www.thinkmind.org/index.php?view=instance&instance=CONTENT+2013: ThinkMind, 2013, pp. 1-6

International Conference on Creative Content Technologies (CONTENT) <5, 2013, Valencia, Spain>

Automotive systems can be very complex when using multiple forms of energy. To achieve better energy efficiency, engineers require specialized tools to cope with that complexity and to comprehend how energy is distributed and consumed. This is especially essential for developing hybrid systems, which generate electricity from various available forms of energy. Therefore, highly specialized visualizations of multiple measured energies are needed. This paper examines several three-dimensional glyph-based visualization techniques for spatial multivariate data. Besides animated glyphs, two-dimensional visualization techniques for temporal data that allow detailed trend analysis are considered as well. Our investigations revealed that Scaled Data-Driven Spheres are best suited for a detailed 3D exploration of measured data. To gain a better overview of the spatial data, Cumulative Glyphs are introduced. For trend analysis, ThemeRiver and Stacked Area Graphs are used. All these visualization techniques are implemented as a web-based prototype, without the need for additional web browser plugins, using X3DOM and Data-Driven Documents.

Show publication details

Landesberger, Tatiana von; Bremm, Sebastian; Schreck, Tobias; Fellner, Dieter W.

Feature-based Automatic Identification of Interesting Data Segments in Group Movement Data

2013

Information Visualization, Vol.13 (2013), 3, pp. 190-212

The study of movement data is an important task in a variety of domains such as transportation, biology, or finance. Often, the data objects are grouped (e.g. countries by continents). We distinguish three main categories of movement data analysis, based on the focus of the analysis: (a) movement characteristics of an individual in the context of its group, (b) the dynamics of a given group, and (c) the comparison of the behavior of multiple groups. Examination of group movement data can be effectively supported by data analysis and visualization. In this respect, approaches based on analysis of derived movement characteristics (called features in this article) can be useful. However, current approaches are limited as they do not cover a broad range of situations and typically require manual feature monitoring. We present an enhanced set of movement analysis features and add automatic analysis of the features for filtering the interesting parts in large movement data sets. Using this approach, users can easily detect new interesting characteristics such as outliers, trends, and task-dependent data patterns even in large sets of data points over long time horizons. We demonstrate the usefulness with two real-world data sets from the socioeconomic and the financial domains.
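The automatic feature analysis described above can be illustrated, in heavily simplified form, by a z-score rule on a single derived feature. The speed values below are invented for this sketch, and the paper's feature set and detectors are far richer; this only shows the basic idea of flagging interesting time steps automatically instead of monitoring features manually.

```python
# Flag "interesting" time steps in a derived movement feature (e.g. group
# mean speed) whose deviation from the series mean exceeds a z-score threshold.
def flag_interesting(series, z_thresh=2.0):
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    std = var ** 0.5 or 1.0  # guard against a constant series
    return [i for i, v in enumerate(series) if abs(v - mean) / std > z_thresh]

# Hypothetical group mean speeds over ten time steps; step 5 is an outlier.
speeds = [1.0, 1.1, 0.9, 1.0, 1.2, 5.0, 1.1, 0.95, 1.05, 1.0]
print(flag_interesting(speeds))  # [5]
```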

Show publication details

Schwenk, Karsten; Behr, Johannes; Fellner, Dieter W.

Filtering Noise in Progressive Stochastic Ray Tracing: Four Optimizations to Improve Speed and Robustness

2013

The Visual Computer, Vol.29 (2013), 5, pp. 359-368. First published online 22 June 2012 as Online First Article

We present an improved version of a state-of-the-art noise reduction technique for progressive stochastic rendering. Our additions make the method significantly faster at the cost of an acceptable loss in quality. Additionally, we improve the robustness of the method in the presence of difficult features like glossy reflection, caustics, and antialiased edges. We show with visual and numerical comparisons that our extensions improve the overall performance of the original approach and make it more broadly applicable.

Show publication details

Schwenk, Karsten; Fellner, Dieter W. (Advisor); Dachsbacher, Carsten (Advisor)

Filtering Techniques for Low-Noise Previews of Interactive Stochastic Ray Tracing

2013

Darmstadt, TU, Diss., 2013

Progressive stochastic ray tracing algorithms are increasingly used in interactive applications such as design reviews and digital content creation. This dissertation contains three contributions to further advance this development. The first contribution is a noise reduction method for stochastic ray tracing that is especially tailored to interactive progressive rendering. High-variance light paths are accumulated in a separate buffer, which is filtered by a high-quality, edge-preserving filter. Then a combination of the noisy unfiltered samples and the less noisy (but biased) filtered samples is added to the low-variance samples in order to form the final image. A novel per-pixel blending operator combines both contributions in a way that respects a user-defined threshold on perceived noise. For progressive rendering, this method is superior to similar approaches in several aspects. First, the bias due to filtering vanishes in the limit, making the method consistent. Second, the user can interactively balance noise versus bias while the image is rendering, leaving the possibility to hide filtering artifacts under a low level of dithering noise. Third, the filtering step is more robust in the presence of reflecting/refracting surfaces and high-frequency textures, making the method more broadly applicable than similar approaches for interactive rendering. The dissertation also contains some optimizations that improve runtime, recover antialiased edges, reduce blurring, and withhold spike noise from the preview images. The second contribution is the radiance filtering algorithm, another noise reduction method. Again, the basic idea is to exploit spatial coherence in the image and reuse information from neighboring pixels. However, in contrast to image filtering techniques, radiance filtering does not simply filter pixel values. Instead, it only reuses the incident illumination of neighboring pixels in a filtering step with shrinking kernels.
This approach significantly reduces the variance in radiance estimates without blurring details in geometry or texture. Radiance filtering is consistent and orthogonal to many common optimizations such as importance, adaptive, and stratified sampling. In addition to the practical evaluation, the dissertation contains a theoretical analysis with convergence rates for bias and variance. It also contains some optimizations that improve the performance of radiance filtering on reflecting/refracting surfaces and highly glossy surfaces. The last contribution of this dissertation is a system architecture for exchangeable rendering back-ends under a common application layer in distributed rendering systems. The primary goal was to find a practical and non-intrusive way to use potentially very different rendering back-ends without impairing their strengths and without burdening the back-ends or the application with details of the cluster environment. The approach is based on a mediator layer that can be plugged into the OpenSG infrastructure. This design allows the mediator to elegantly use OpenSG's multithreading and clustering capabilities. The mediator can also sync incremental changes very efficiently. The approach is evaluated with two case studies, including an interactive ray tracer.

Show publication details

Ullrich, Torsten; Silva, Nelson; Eggeling, Eva; Fellner, Dieter W.

Generative Modeling and Numerical Optimization for Energy Efficient Buildings

2013

IEEE Industrial Electronics Society: IECON 2013 - 39th Annual Conference of the IEEE Industrial Electronics Society. Proceedings. New York: IEEE Press, 2013, pp. 4756-4761

Annual Conference of the IEEE Industrial Electronics Society (IECON) <39, 2013, Vienna, Austria>

A procedural model is a script which generates a geometric object. The script's input parameters offer a simple way to specify and modify the script's output. Due to its algorithmic character, a procedural model is perfectly suited to describe geometric shapes with well-organized structures and repetitive forms. In this paper, we interpret a generative script as a function that is nested inside an objective function; thus, the script's parameters can be optimized according to an objective. We demonstrate this approach using architectural examples: each generative script creates a building with several free parameters. The objective function is an energy-efficiency simulation that approximates a building's annual energy consumption. Consequently, the nested objective function reads a set of building parameters and returns the energy needs of the corresponding building. This nested function is passed to a minimization and optimization process. The outcome is the building with the best energy efficiency within the family of buildings described by the script. Our contribution is a new way of modeling. The generative approach separates design and engineering: the complete design is encoded in a script, and the script ensures that all parameter combinations (within a fixed range) generate a valid design. The design can then be optimized numerically.
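Nesting a generative script inside an objective function can be sketched as follows. The "building" geometry and all energy coefficients here are invented for illustration (nothing resembles the paper's simulation), and a plain grid search stands in for the numerical optimizer.

```python
# Toy illustration of a generative script nested inside an objective function:
# parameters -> building geometry -> (made-up) annual energy figure.

def generate_building(width, depth, window_ratio):
    """Generative 'script': derive geometry from the free parameters."""
    floor_area = width * depth
    envelope = 2.0 * (width + depth) * 3.0  # one storey, 3 m wall height
    window_area = envelope * window_ratio
    return {"floor": floor_area, "envelope": envelope, "windows": window_area}

def energy_objective(params):
    """Nested objective: invented heat-loss/solar-gain balance, lower is better."""
    b = generate_building(*params)
    heat_loss = 0.8 * (b["envelope"] - b["windows"]) + 2.5 * b["windows"]
    solar_gain = 1.2 * b["windows"]
    return heat_loss - solar_gain

def grid_search(candidates):
    """Stand-in for the minimization process: exhaustively test candidates."""
    return min(candidates, key=energy_objective)

widths = [8.0, 10.0, 12.0]
depths = [8.0, 10.0, 12.0]
ratios = [0.2, 0.4, 0.6]
best = grid_search([(w, d, r) for w in widths for d in depths for r in ratios])
print(best, energy_objective(best))
```

Because every parameter combination the script accepts yields a valid building, the optimizer can explore the whole family of designs freely, which is the separation of design and engineering the paper argues for.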

Show publication details

Scherer, Maximilian; Fellner, Dieter W. (Advisor); Schreck, Tobias (Advisor)

Information Retrieval for Multivariate Research Data Repositories

2013

Darmstadt, TU, Diss., 2013

In this dissertation, I tackle the challenge of information retrieval for multivariate research data by providing novel means of content-based access. Large amounts of multivariate data are produced and collected in different areas of scientific research and industrial applications, including the human and natural sciences, the social and economic sciences, and applications like quality control, security and machine monitoring. Archival and re-use of this kind of data has been identified as an important factor in the supply of information to support research and industrial production. Due to increasing efforts in the digital library community, such multivariate data are collected, archived and often made publicly available by specialized research data repositories. A multivariate research data document consists of tabular data with m columns (measurement parameters, e.g., temperature, pressure, humidity, etc.) and n rows (observations). To render such data sets accessible, they are annotated with meta-data according to well-defined meta-data standards when being archived. These annotations include time, location, parameters, title, author (and potentially many more) of the document under concern. In particular for multivariate data, each column is annotated with the parameter name and unit of its data (e.g., water depth [m]). The task of retrieving and ranking the documents an information seeker is looking for is an important and difficult challenge. To date, access to this data is primarily provided by means of annotated, textual meta-data as described above. An information seeker can search for documents of interest by querying for the annotated meta-data. For example, an information seeker can retrieve all documents that were obtained in a specific region or within a certain period of time. Similarly, she can search for data sets that contain a particular measurement via its parameter name, or search for data sets that were produced by a specific scientist.
However, retrieval via textual annotations is limited and does not allow for content-based search, e.g., retrieving data which contains a particular measurement pattern like a linear relationship between water depth and water pressure, or which is similar to example data the information seeker provides. In this thesis, I deal with this challenge and develop novel indexing and retrieval schemes to extend the established, meta-data based access to multivariate research data. By analyzing and indexing the data patterns occurring in multivariate data, one can support new techniques for content-based retrieval and exploration, well beyond meta-data based query methods. This allows information seekers to query for multivariate data sets that exhibit patterns similar to an example data set they provide. Furthermore, information seekers can specify one or more particular patterns they are looking for, to retrieve multivariate data sets that contain similar patterns. To this end, I also develop visual-interactive techniques to support information seekers in formulating such queries, which inherently are more complex than textual search strings. These techniques include providing an overview of potentially interesting patterns to search for, which interactively adapts to the user's query as it is being entered. Furthermore, based on the pattern description of each multivariate data document, I introduce a similarity measure for multivariate data. This allows scientists to quickly discover similar (or contradictory) data to their own measurements.

Show publication details

Fellner, Dieter W.; Baier, Konrad; Dürre, Steffen; Bornemann, Heidrun; Fraunhoffer, Katrin

Jahresbericht 2012: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2013

Darmstadt, 2013

The researchers at the Fraunhofer Institute for Computer Graphics Research IGD turn information into images and images into information. This image- and model-based computer science is called "visual computing". It comprises computer graphics, computer vision, and virtual and augmented reality. With visual computing, images, models, and graphics are captured, processed, and used for all conceivable computer-based applications. The researchers correlate graphical application data with non-graphical data, which means they computationally enrich images, videos, and 3D models with text, sound, and speech. This in turn yields new insights that can be turned into innovative products and services, and correspondingly advanced dialog techniques are designed for this purpose. Through its numerous innovations, Fraunhofer IGD raises the interaction between humans and machines to a new level.

Show publication details

Eggeling, Eva; Fellner, Dieter W.; Halm, Andreas; Ullrich, Torsten

Optimization of an Autostereoscopic Display for a Driving Simulator

2013

Coquillart, Sabine (Ed.) et al.: GRAPP 2013 - IVAPP 2013 : Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications. SciTePress, 2013, pp. 318-326

International Conference on Computer Graphics Theory and Applications (GRAPP) <8, 2013, Barcelona, Spain>

In this paper, we present an algorithm to optimize a 3D stereoscopic display based on parallax barriers for a driving simulator. The main purpose of the simulator is to enable user studies under reproducible laboratory conditions to test and evaluate driving-assistance systems. The main idea of our optimization approach is to determine, by numerical analysis, the pattern for an autostereoscopic display that yields the best image separation for each eye, integrated into a virtual reality environment. Our implementation uses a differential evolution algorithm, a direct search method based on evolution strategies, because it converges quickly and is inherently parallel; this allows execution on a network of computers. The resulting algorithm optimizes the display and its corresponding pattern such that a single user in the simulator environment sees a stereoscopic image without special eyewear.
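For readers unfamiliar with differential evolution, a minimal DE/rand/1/bin optimizer looks like the sketch below. The objective here is a stand-in sphere function, not the paper's image-separation measure, and the hyperparameters are generic textbook values.

```python
# Minimal differential evolution (DE/rand/1/bin): mutate with a scaled
# difference of two random population members, crossover, greedy selection.
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the feasible box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:  # greedy selection keeps the better vector
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
print(x_best, f_best)
```

The inherent parallelism the abstract mentions comes from the inner loop: each trial vector can be generated and evaluated independently, so the population can be spread across a cluster.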

Show publication details

Eggeling, Eva; Fellner, Dieter W.; Ullrich, Torsten

Probability of Globality

2013

World Academy of Science, Engineering and Technology, Vol.73 (2013), pp. 483-487

The objective of global optimization is to find the globally best solution of a model. Nonlinear models are ubiquitous in many applications, and their solution often requires a global search approach. This article presents a probabilistic approach to determine the probability that a solution is a global minimum. The approach is independent of the global search method used and only requires a bounded, convex parameter domain A as well as a Lipschitz continuous function f whose Lipschitz constant need not be known.
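The article's estimator is not reproduced here. As a loose illustration of the underlying question (how much confidence to attach to "this minimum is global"), the following sketch uses a crude multistart frequency proxy over a bounded interval domain; the test function, the local search, and the proxy itself are all assumptions of this example, not the paper's method.

```python
# Multistart proxy: run many random-start local searches and report the
# fraction that reach the best value found. A high fraction suggests (but
# does not prove) that the best value is the global minimum.
import random

def local_search(f, x0, lo, hi, step=0.1, iters=500):
    """Simple 1D hill descent with a slowly shrinking step size."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        for cand in (x - step, x + step):
            if lo <= cand <= hi and f(cand) < fx:
                x, fx = cand, f(cand)
        step *= 0.99
    return x, fx

def globality_proxy(f, lo, hi, restarts=200, seed=7):
    rng = random.Random(seed)
    results = [local_search(f, rng.uniform(lo, hi), lo, hi)
               for _ in range(restarts)]
    f_best = min(fx for _, fx in results)
    hits = sum(1 for _, fx in results if fx <= f_best + 0.05)
    return f_best, hits / restarts

# Two-well test function: global minimum near x = -2, local minimum near x = 2.
f = lambda x: (x * x - 4.0) ** 2 + x
f_best, p = globality_proxy(f, -3.0, 3.0)
print(f_best, p)
```

Here roughly half the restarts fall into each basin, so the proxy correctly signals that the best value is reachable from a large share of the domain while a competing local minimum exists.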

Show publication details

Ullrich, Torsten; Schinko, Christoph; Schiffer, Thomas; Fellner, Dieter W.

Procedural Descriptions for Analyzing Digitized Artifacts

2013

Applied Geomatics, Vol.5 (2013), 3, pp. 185-192

Within the last few years, generative modeling techniques have gained attention, especially in the context of cultural heritage. As a generative model describes an idealized object rather than a real one, generative techniques are a basis for object description and classification. This procedural knowledge differs from other kinds of knowledge, such as declarative knowledge, in a significant way: it is an algorithm, which reflects the way objects are designed. Consequently, generative models are not a replacement for established geometry descriptions (based on points, triangles, etc.) but a semantic enrichment. In combination with variance analysis techniques, generative descriptions can be used to validate reconstructions. Detailed mesh comparisons can reveal even the smallest changes and damage. These analysis and documentation tasks are needed not only in the context of cultural heritage but also in engineering and manufacturing. Our contribution to this problem is a workflow which automatically combines generative/procedural descriptions with reconstructed artifacts and performs a nominal/actual value comparison. The reference surface is a procedural model whose accuracy and systematics describe the semantic properties of an object, whereas the actual object is a real-world data set (laser scan or photogrammetric reconstruction) without any additional semantic information.

Show publication details

Settgast, Volker; Fellner, Dieter W. (Advisor); Stricker, Didier (Advisor)

Processing Semantically Enriched Content for Interactive 3D Visualizations

2013

Graz, TU, Diss., 2013

Interactive 3D graphics has become an essential tool in many fields of application: in manufacturing companies, for example, new products are planned and tested digitally. The effect of new designs and the testing of ergonomic aspects can be assessed with purely virtual models. Furthermore, the training of procedures on complex machines is shifted to the virtual world. In that way, support costs for the usage of the real machine are reduced, and effective forms of training evaluation are possible. Virtual reality helps to preserve and study cultural heritage: artifacts can be digitized and preserved in a digital library, making them accessible to a larger group of people. Various forms of analysis can be performed on the digital objects which are hardly possible to perform on the real objects or would destroy them. Using virtual reality environments like large projection walls helps to show virtual scenes in a realistic way. The level of immersion can be further increased by using stereoscopic displays and by adjusting the images to the head position of the observer. One challenge with virtual reality is the inconsistency in data. Moving 3D content from a useful state, e.g., from a repository of artifacts or from within a planning workflow, to an interactive presentation is often realized with degenerative steps of preparation. The productiveness of Powerwalls and CAVEs™ is called into question, because the creation of interactive virtual worlds is in many cases a one-way road: data has to be reduced in order to be manageable by the interactive renderer and to be displayed in real time on various target platforms. The impact of virtual reality can be improved by bringing results from the virtual environment back to a useful state, or even better: never leaving that state. With the help of semantic data throughout the whole process, it is possible to speed up the preparation steps and to keep important information within the virtual 3D scene.
The integrated support for semantic data enhances the virtual experience and opens new ways of presentation. At the same time, it becomes feasible to bring data from the presentation, for example in a CAVE™, back to the working process. Especially in the field of cultural heritage, it is essential to store semantic data with the 3D artifacts in a sustainable way. Within this thesis, new ways of handling semantic data in interactive 3D visualizations are presented. The whole process of 3D data creation is demonstrated with regard to semantic sustainability. The basic terms, definitions and available standards for semantic markup are described. Additionally, a method is given to generate semantics of higher order automatically. An important aspect is the linking of semantic information with 3D data. The thesis gives two suggestions on how to store and publish the valuable combination of 3D content and semantic markup in a sustainable way. Different environments for virtual reality are compared and their special needs are pointed out. Primarily, the DAVE in Graz is presented in detail, and novel ways of user interaction in such immersive environments are proposed. Finally, applications in the fields of cultural heritage, security and mobility are presented. The presented symbiosis of 3D content and semantic information is an important contribution to improving the usage of virtual environments in various fields of application.

Show publication details

Schiffer, Thomas; Fellner, Dieter W.

Ray Tracing: Lessons Learned and Future Challenges

2013

IEEE Potentials, Vol.32 (2013), 5, pp. 34-37

Ray tracing on massively parallel hardware allows for the computation of images with a high visual quality in an increasingly short time. However, mapping the computations to such architectures in an efficient manner is a challenging task.

Show publication details

Steger, Sebastian; Fellner, Dieter W. (Advisor); Sakas, Georgios (Advisor)

Registrierung und Segmentierung von Lymphknoten aus multimodalen Zeitreihen im Kopf-Hals-Bereich

2013

Darmstadt, TU, Diss., 2013

The most reliable independent prognostic factor for the disease progression of patients with head and neck carcinoma is the presence of lymph node metastases. Computer-aided examination and temporal tracking of lymph nodes across multiple image modalities through a multimodal, multitemporal model offers many advantages, particularly with regard to reproducibility. A basic prerequisite, however, is robust automatic registration and segmentation of lymph nodes from multimodal time series. Since existing methods do not meet these requirements, this thesis develops and evaluates novel methods. For lymph node segmentation from CT data sets, a radial-ray-based 3D method is treated in depth. Starting from a seed point, rays are cast radially, uniformly distributed over all directions, and an optimization procedure determines the best possible radius for each ray, incorporating image information and local shape knowledge, thus yielding a segmentation. For the first time, different image-based cost functions are compared, and the parameters are determined by a data-driven procedure. With an average surface distance of only 0.46 mm, the segmentation accuracy is in the range of manual expert segmentation and clearly better than existing semi-automatic methods. The inter-observer variability for volume determination is lower by a factor of 3 than for manual volume determination. Besides lymph nodes, the method is also suitable for segmenting other roundish structures, such as tumors, and for some organs, such as the prostate, it offers an alternative to model-based segmentation. The registration of individual lymph nodes requires automatic deformable registration of the entire head and neck region. To this end, this thesis presents the first fully automatic, generalizable multi-rigid method.
It is based on a novel articulated atlas which, in addition to knowledge about the shape and appearance of individual bones, also learns their relative positions (articulation) from training data. The atlas is first used for the simultaneous segmentation of the bones from the CT data set. Based on this, it is enriched with personalized knowledge and adapted to the other image modality or time-series acquisition, taking the learned articulation space into account. The rigid transformations computed from this are propagated into the surrounding soft tissue in a two-stage process, yielding a dense deformation field. Finally, the registration accuracy within the lymph nodes is improved by a locally rigid registration procedure. The advantages of the multi-rigid registration lie in its large convergence range and its low susceptibility to image artifacts, owing to its unique global regularization. Within lymph node centers, an average registration accuracy of 5.05 mm is achieved. Compared to B-spline registration, this is an improvement of 37%, and the quality of the resulting deformed images is perceived as subjectively much better. The most important contributions of this work to lymph node segmentation are a new cost function, an extensive comparison of different cost functions, and a data-driven parameter choice. The main contributions to image registration are the learning of the relative positions of the elements, an adaptive fitting procedure, and the personalization of the articulated atlas.
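To make the radial-ray idea concrete, here is a heavily simplified 2D sketch: rays are cast from a seed point at evenly spaced angles, and each ray picks the radius with the strongest intensity drop. The synthetic image and independent per-ray scoring are assumptions of this example; the thesis works in 3D and jointly optimizes the radii with image-based cost functions and local shape knowledge.

```python
# Toy 2D radial-ray segmentation of a bright disk on a dark background.
import math

def disk_image(size, cx, cy, r):
    """Synthetic image: bright disk (the 'lymph node') on dark background."""
    return [[1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.0
             for x in range(size)] for y in range(size)]

def sample(img, x, y):
    """Nearest-neighbor lookup; outside the image counts as background."""
    xi, yi = int(round(x)), int(round(y))
    if 0 <= yi < len(img) and 0 <= xi < len(img[0]):
        return img[yi][xi]
    return 0.0

def radial_segment(img, seed, n_rays=32, r_max=30.0, dr=0.5):
    sx, sy = seed
    radii = []
    for k in range(n_rays):
        ang = 2.0 * math.pi * k / n_rays
        dx, dy = math.cos(ang), math.sin(ang)
        best_r, best_drop = dr, 0.0
        r = dr
        while r < r_max:
            # Score each candidate radius by the intensity drop across it.
            drop = (sample(img, sx + (r - dr) * dx, sy + (r - dr) * dy)
                    - sample(img, sx + r * dx, sy + r * dy))
            if drop > best_drop:
                best_r, best_drop = r, drop
            r += dr
        radii.append(best_r)
    return radii

img = disk_image(64, 32, 32, 10)
radii = radial_segment(img, (32, 32))
mean_r = sum(radii) / len(radii)
print(mean_r)
```

The recovered radii trace the disk boundary of radius 10 up to pixel-rounding error; the thesis replaces the per-ray maximum with a joint optimization so that neighboring rays regularize each other.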

Show publication details

Ebinger, Peter; Fellner, Dieter W. (Advisor); Wolthusen, Stephen (Advisor)

Robust Situation Awareness in Tactical Mobile Ad Hoc Networks

2013

Berlin : Logos Verlag, 2013

Darmstadt, TU, Diss., 2013

The objective of the research presented in this dissertation is to improve situation awareness in tactical mobile ad hoc networks (MANETs). The goal is to support tactical teams, such as a team of first responders after a natural catastrophe or a terrorist attack, in successfully accomplishing their missions. In such scenarios, mobile devices connected by a MANET are a quick, flexible and efficient way to provide a backup communication infrastructure. We provide three main contributions to increase the robustness of situation awareness in tactical MANETs: cross data analysis, cooperative trust assessment and probabilistic state modeling. Cross data analysis provides a framework for exploiting all available data sources (direct and indirect sensor data, mission-specific knowledge and general information sources) in order to detect inconsistencies. The focus of cooperative trust assessment is to identify nodes with faulty or malicious behavior and exclude them from the situation-awareness process. The probabilistic state modeling and estimation concept, based on particle filters, allows incorporating multiple information sources at several stages to adjust the likelihood of specific system states.
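To illustrate the particle-filter machinery the last contribution builds on, here is a generic bootstrap particle filter for a toy 1D state. The random-walk motion model and Gaussian observation model are invented for this sketch and are not taken from the dissertation's MANET state model.

```python
# Bootstrap particle filter: predict, weight by observation likelihood,
# estimate, resample.
import math
import random

def particle_filter(observations, n=500, motion_std=0.5, obs_std=1.0, seed=3):
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # Update: weight particles by Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Estimate by the weighted mean, then resample (multinomial).
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

# Noisy observations of a hidden, stationary position at 3.0.
true_pos = 3.0
rng = random.Random(0)
obs = [true_pos + rng.gauss(0.0, 1.0) for _ in range(50)]
est = particle_filter(obs)
print(est[-1])
```

The weighting step is where the dissertation's idea of incorporating multiple information sources fits naturally: each additional source contributes another likelihood factor to the particle weights.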

Show publication details

Thaller, Wolfgang; Krispel, Ulrich; Zmugg, René; Havemann, Sven; Fellner, Dieter W.

Shape Grammars on Convex Polyhedra

2013

Computers & Graphics, Vol.37 (2013), 6, pp. 707-717

International Conference on Shape Modeling and Applications (SMI) <15, 2013, Bournemouth, UK>

Shape grammars are the method of choice for procedural modeling of architecture. State of the art shape grammar systems define a bounding box for each shape; various operations can then be applied based on this bounding box. Most notably, the box can be split into smaller boxes along any of its three axes. We argue that a greater variety can be obtained by using convex polyhedra as bounding volumes instead. Split operations on convex polyhedra are no longer limited to the three principal axes but can use arbitrary planes. Such splits permit a volumetric decomposition into convex elements; as convex polyhedra can represent many shapes more faithfully than boxes, shape grammar rules can adapt to a much wider array of different contexts. We generalize established shape operations and introduce new operations that now become possible.
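As an illustration of why arbitrary split planes add expressiveness, here is a 2D analogue of the paper's generalized split (my sketch, not the paper's implementation): clipping a convex polygon with an arbitrary line, which a box split along the principal axes cannot express.

```python
# Split a convex polygon by an arbitrary line -- the 2D counterpart of
# splitting a convex polyhedron by an arbitrary plane.

def split_convex_polygon(poly, a, b, c):
    """Split a convex polygon (CCW vertex list) by the line a*x + b*y + c = 0.
    Returns (negative_side, positive_side) vertex lists; both are convex."""
    side = lambda p: a * p[0] + b * p[1] + c
    neg, pos = [], []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        sp, sq = side(p), side(q)
        if sp <= 0:
            neg.append(p)
        if sp >= 0:
            pos.append(p)
        if sp * sq < 0:  # edge crosses the line: add the intersection to both
            t = sp / (sp - sq)
            ix = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
            neg.append(ix)
            pos.append(ix)
    return neg, pos

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
# Diagonal split x - y = 0: impossible with axis-aligned box splits.
left, right = split_convex_polygon(square, 1.0, -1.0, 0.0)
print(left)
print(right)
```

Both halves remain convex, so the same rule can be applied recursively, which is the property that lets the paper's grammar decompose volumes into convex elements.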

Show publication details

Kirschner, Matthias; Fellner, Dieter W. (Advisor); Meinzer, Hans-Peter (Advisor)

The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation

2013

Darmstadt, TU, Diss., 2013

Automatic processing of three-dimensional image data acquired with computed tomography or magnetic resonance imaging plays an increasingly important role in medicine. For example, the automatic segmentation of anatomical structures in tomographic images makes it possible to generate three-dimensional visualizations of a patient's anatomy and thereby supports surgeons during the planning of various kinds of surgeries. Because organs in medical images often exhibit a low contrast to adjacent structures, and because the image quality may be hampered by noise or other image acquisition artifacts, the development of segmentation algorithms that are both robust and accurate is very challenging. In order to increase the robustness, the use of model-based algorithms is mandatory, such as algorithms that incorporate prior knowledge about an organ's shape into the segmentation process. Recent research has proven that Statistical Shape Models are especially appropriate for robust medical image segmentation. In these models, the typical shape of an organ is learned from a set of training examples. However, Statistical Shape Models have two major disadvantages: the construction of the models is relatively difficult, and the models are often used too restrictively, such that the resulting segmentation does not delineate the organ exactly. This thesis addresses both problems: the first part of the thesis introduces new methods for establishing correspondence between training shapes, which is a necessary prerequisite for shape model learning. The developed methods include consistent parameterization algorithms for organs with spherical and genus-1 topology, as well as a nonrigid mesh registration algorithm for shapes with arbitrary topology. The second part of the thesis presents a new shape model-based segmentation algorithm that allows for an accurate delineation of organs.
In contrast to existing approaches, it is possible to integrate not only linear shape models into the algorithm, but also nonlinear shape models, which allow for a more specific description of an organ's shape variation. The proposed segmentation algorithm is evaluated in three applications to medical image data: liver and vertebra segmentation in contrast-enhanced computed tomography scans, and prostate segmentation in magnetic resonance images.

Show publication details

Leeb, Robert; Lancelle, Marcel; Kaiser, Vera; Fellner, Dieter W.; Pfurtscheller, Gert

Thinking Penguin: Multimodal Brain-Computer Interface Control of a VR Game

2013

IEEE Transactions on Computational Intelligence and AI in Games, Vol.5 (2013), 2, pp. 117-128

In this paper, we describe a multimodal brain-computer interface (BCI) experiment situated in a highly immersive CAVE. A subject sitting in the virtual environment controls the main character of a virtual reality game: a penguin that slides down a snowy mountain slope. While the subject can trigger a jump action via the BCI, additional steering with a game controller as a secondary task was tested. Our experiment benefits from the game as an attractive task in which the subject is motivated to achieve a higher score through better BCI performance. A BCI based on the so-called brain switch was applied, which allows discrete asynchronous actions. Fourteen subjects participated, of whom 50% achieved the performance required to test the penguin game. Comparing the BCI performance during the training and the game showed that a transfer of skills is possible, in spite of the changes in visual complexity and task demand. Finally and most importantly, our results showed that the use of a secondary motor task, in our case the joystick control, did not degrade the BCI performance during the game. From these findings, we conclude that our chosen approach is a suitable multimodal or hybrid BCI implementation, in which the user can even perform other tasks in parallel.

Show publication details

Kim, Hyosun; Schinko, Christoph; Havemann, Sven; Fellner, Dieter W.

Tiled Projection onto Bent Screens Using Multi-Projectors

2013

Xiao, Yingcai (Ed.): Proceedings of the IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing : CGVCVIP 2013. IADIS Press, 2013, pp. 67-74

IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP) <2013, Prague, Czech Republic>

We provide a quick and efficient method to project a coherent image that is seamless and perspectively corrected from one particular viewpoint, using an arbitrary number of projectors. The rationale is that wide-angle high-resolution cameras have become much more affordable than short-throw projectors, and only one such camera is sufficient for calibration. Our method is suitable for ad-hoc installations since no 3D reconstruction is required. We provide our method as an open-source solution, including a demonstrative client program for the Processing framework.

Show publication details

Schiffer, Thomas; Fellner, Dieter W.

Towards Multi-Kernel Ray Tracing for GPUs

2013

Bronstein, Michael (Ed.) et al.: VMV 2013 : Vision, Modeling, and Visualization. Goslar: Eurographics Association, 2013, pp. 227-228

Workshop on Vision, Modeling, and Visualization (VMV) <18, 2013, Lugano, Switzerland>

Ray tracing is a widely used algorithm to compute images with high visual quality. Mapping ray tracing computations to massively parallel hardware architectures in an efficient manner is a difficult task. Based on an analysis of current ray tracing algorithms on GPUs, a new ray traversal scheme called batch tracing is proposed. It decomposes the task into multiple kernels, each of which is designed for efficient execution. Our algorithm achieves comparable performance to state-of-the-art approaches and represents a promising avenue for future research.

Show publication details

Kahn, Svenja; Bockholt, Ulrich; Kuijper, Arjan; Fellner, Dieter W.

Towards Precise Real-Time 3D Difference Detection for Industrial Applications

2013

Computers in Industry, Vol.64 (2013), 9, pp. 1115-1128

3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. We present an approach for 3D difference detection with a hand-held depth camera. In contrast to previous approaches, the presented approach detects geometric differences in real time and from arbitrary viewpoints. The 3D difference detection accuracy is improved in two ways: First, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a high-precision industrial measurement arm. Second, the influence of the depth measurement noise is reduced by integrating a 3D surface reconstruction algorithm. The effects of both enhancements are quantified by a ground-truth based quantitative evaluation, both for a time-of-flight depth camera (SwissRanger 4000) and a structured-light depth camera (Kinect). With the proposed enhancements, differences of a few millimeters can be detected from a measurement distance of 1 m.

Show publication details

Keil, Matthias; Sakas, Georgios (Betreuer); Fellner, Dieter W. (Betreuer); Mönch, Christian (Betreuer)

Ultraschallbasierte Navigation für die minimalinvasive onkologische Nieren- und Leberchirurgie

2013

Darmstadt, TU, Diss., 2013

In minimally invasive oncological kidney and liver surgery, which offers many advantages for the patient, the surgeon is frequently confronted with orientation problems. The main causes are the indirect view of the patient's anatomy, the limited field of view, and the intraoperative deformation of the organs. Navigation systems, which are often based on intraoperative ultrasound, can provide a remedy. The real-time imaging makes it possible to determine the deformation of the organ. Since many tumors are not visible in the ultrasound image, a robust automatic and deformable registration with the preoperative CT is required. Furthermore, permanent visualization is necessary, even during manipulation of the organ. For the kidney, the suitability of ultrasound elastography images for image-based registration using mutual information was evaluated. Due to the poor image quality and the small extent of the image data, however, this had only moderate success. For the liver, the branching points of the blood vessels are used as natural landmarks for registration. To this end, vessel segmentation algorithms were developed for the two most common types of ultrasound imaging, B-mode and power Doppler. The proposed combination of both modalities increased the number of vessel bifurcations by 35% on average. For the rigid registration of the vessels from ultrasound and CT, an average of 9 bijective point correspondences are defined using an existing graph matching method [OLD11b]. The mean registration accuracy is 3.45 mm. This number of point correspondences is not sufficient for a deformable registration. The developed landmark refinement method inserts additional landmarks along the vessel centerlines between matched points and searches for further corresponding vessel segments, increasing the number of point correspondences to 70 on average.
This allows the organ deformation to be determined from the differing vessel courses. Based on these point correspondences, a deformation field for the entire organ can be computed using thin-plate splines. In this way, the registration accuracy is improved by 44% on average. The most important prerequisite for a successful deformable registration is a segmentation of the vessels from the ultrasound that is as comprehensive as possible. In this thesis, the concept of regmentation was extended for the first time to vessel segmentation and vessel-based registration. This combination of both methods increased the extracted vessel length by 32% on average, resulting in an increase in the number of corresponding landmarks to 98. This allows the deformation of the organ, and thus the displacement of the tumor, to be determined more accurately and with greater confidence. With knowledge of the tumor's position within the organ, and by using a marker wire, the displacement of the tumor during surgical manipulation can be monitored with an electromagnetic tracking system. This tumor tracking enables permanent visualization via a video overlay in the laparoscopic video image. The main contributions of this work to vessel-based registration are the vessel segmentation from ultrasound image data, the landmark refinement for obtaining a large number of bijective point correspondences, and the introduction of regmentation to improve vessel segmentation and deformable registration. The tumor tracking for navigation enables permanent visualization of the tumor throughout the entire intervention.

Show publication details

Bremm, Sebastian; Fellner, Dieter W. (Betreuer); Schreck, Tobias (Betreuer)

Visual Analytics Approaches for Descriptor Space Comparison and the Exploration of Time Dependent Data

2013

Darmstadt, TU, Diss., 2013

Modern technologies allow us to collect and store increasing amounts of data. However, their analysis is often difficult. For that reason, Visual Analytics combines data mining and visualization techniques to explore and analyze large amounts of complex data. Visual Analytics approaches exist for various problems and applications, but all share the idea of a tight combination of visualization and automatic analysis. Their respective implementations are highly specialized for the given data and the analytical task. In this thesis I present new approaches for two specific topics: visual descriptor space comparison and the analysis of time series. Visual descriptor space comparison enables the user to analyze different representations of complex datasets, e.g., phylogenetic trees or chemical compounds. I propose approaches for data sets with hierarchic or unknown structure, each combining an automatic analysis with interactive visualization. For hierarchically organized data, I suggest a novel similarity score embedded in an interactive analysis framework linking different views, each specialized on a particular analytical task. This analysis framework is evaluated in cooperation with biologists in the area of phylogenetic research. To extend the scalability of my approach, I introduce CloudTrees, a new visualization technique for the comparison of large trees with thousands of leaves. It reduces overplotting problems by ensuring the visibility of small but important details like high-scoring subtrees. For the comparison of data with unknown structure, I assess several state-of-the-art projection quality measures to analyze their capability for descriptor comparison. For the creation of appropriate ground-truth test data, I suggest an interactive tool called PCDC for the controlled creation of high-dimensional data with different properties like data distribution or the number and size of contained clusters.
For the visual comparison of unknown structured data, I introduce a technique based on the comparison of two-dimensional projections of the descriptors using a two-dimensional colormap. I present the approach for scatterplots and extend it to Self-Organizing Maps (SOMs), including reliability encoding. I embed the automatic and visual comparison in an interactive analysis pipeline, which automatically calculates a set of representative descriptors out of a larger collection of descriptors. For a deeper analysis of the proposed result and the underlying characteristics of the input data, the analyst can follow each step of the pipeline. The approach is applied to a large set of chemical data in a high-throughput screening analysis scenario. For the analysis of time-dependent categorical data I propose a new approach called Time Parallel Sets (TIPS). It focuses on the analysis of group changes of objects in large datasets. Different automatic algorithms identify and select potentially interesting points in time for a detailed analysis. The user can interactively track groups or single objects, add or remove selected points in time, or change parameters of the detection algorithms according to the analytical goal. The approach is applied to two scenarios: emergency evacuation of buildings and tracking of mobile phone calls over long time periods. Large time series can be compressed by transforming them into sequences of symbols, where each symbol represents a set of similar subsequences in time. For these time sequences, I propose new visual-analytical tools, starting with an interactive, semi-automatic definition of symbol similarity. Based on this, the sequences are visualized using different linked views, each specialized on a different analytical problem. As an example use case, a financial dataset containing the risk estimations and return values of 60 companies over 500 days is analyzed.

Show publication details

Kuijper, Arjan; Sourin, Alexei; Fellner, Dieter W.

2012 International Conference on Cyberworlds. Proceedings: Cyberworlds 2012

2012

Los Alamitos, Calif. : IEEE Computer Society Conference Publishing Services (CPS), 2012

International Conference on Cyberworlds (CW) <11, 2012, Darmstadt, Germany>

Created intentionally or spontaneously, cyberworlds are information spaces and communities that immensely augment the way we interact, participate in business and receive information throughout the world. Cyberworlds seriously impact our lives and the evolution of the world economy by taking such forms as social networking services, 3D shared virtual communities and massively multiplayer online role-playing games. Cyberworlds 2012 was held 25-27 September 2012 and was organized by Fraunhofer IGD and TU Darmstadt, Germany, in cooperation with EUROGRAPHICS Association and supported by the IFIP Workgroup Computer Graphics and Virtual Worlds.

Show publication details

Stork, André; Fellner, Dieter W.

3D-COFORM - Tools and Expertise for 3D Collection Formation

2012

Bienert, Andreas (Ed.) et al.: EVA 2012 Berlin. Proceedings : Elektronische Medien & Kunst, Kultur, Historie. Berlin: Gesellschaft zur Förderung angewandter Informatik e.V., 2012, pp. 35-49

Electronic Imaging & the Visual Arts (EVA) <19, 2012, Berlin, Germany>

The overall aim of 3D-COFORM is to make 3D documentation the standard approach in cultural heritage institutions for collection formation and management. 3D-COFORM addresses the whole life cycle of digital 3D objects (also called 3D documents), spanning the whole chain from acquisition to processing, and from semantic enrichment to modeling and high-quality presentation, all on the basis of an integrated repository infrastructure. The paper gives an overview of 3D-COFORM and presents its current results and contributions.

Show publication details

Franke, Tobias; Olbrich, Manuel; Fellner, Dieter W.

A Flexible Approach to Gesture Recognition and Interaction in X3D

2012

Mouton, Christophe (General Chair) et al.: Proceedings Web3D 2012 : 17th International Conference on 3D Web Technology. New York: ACM Press, 2012, pp. 171-174

International Conference on 3D Web Technology (WEB3D) <17, 2012, Los Angeles, CA, USA>

With the appearance of natural interaction devices such as the Microsoft Kinect or Asus Xtion PRO cameras, a whole new range of interaction modes has been opened up to developers. Tracking frameworks can make use of the additional depth image or skeleton-tracking capabilities to recognize gestures. A popular example of one such implementation is the NITE framework from PrimeSense, which enables fine-grained gesture recognition. However, recognized gestures come with additional information such as velocity, angle or accuracy, which is not encapsulated in a standardized format and therefore cannot be integrated into X3D in a meaningful way. In this paper, we propose a flexible way to inject gesture-based metadata into X3D applications to enable fine-grained interaction. We also discuss how to recognize these gestures if the underlying framework provides no mechanism to do so.

Show publication details

Schröttner, Martin; Havemann, Sven; Theodoridou, Maria; Doerr, Martin; Fellner, Dieter W.

A Generic Approach for Generating Cultural Heritage Metadata

2012

Ioannides, Marinos (Ed.) et al.: Progress in Cultural Heritage Preservation : 4th International Conference, EuroMed 2012. Berlin, Heidelberg, New York: Springer, 2012. (Lecture Notes in Computer Science (LNCS) 7616), pp. 231-240

International Euro-Mediterranean Conference (EuroMed) <4, 2012, Limassol, Cyprus>

Rich metadata is crucial for the documentation and retrieval of 3D datasets in cultural heritage. Generating metadata is expensive, as it is a very time-consuming, semi-manual process. The exponential increase of digital assets requires novel approaches for the mass generation of metadata. We present an approach that is generic, minimizes user assistance, and is customizable for different metadata schemes and storage formats, as it is based on generic forms. It scales well and was tested with a large database of digital CH objects.

Show publication details

Franke, Tobias; Fellner, Dieter W.

A Scalable Framework for Image-based Material Representations

2012

Mouton, Christophe (General Chair) et al.: Proceedings Web3D 2012 : 17th International Conference on 3D Web Technology. New York: ACM Press, 2012, pp. 83-91

International Conference on 3D Web Technology (WEB3D) <17, 2012, Los Angeles, CA, USA>

Complex material-light interaction is modeled mathematically in its most basic form through the 4D BRDF or the 6D spatially varying BRDF. To alleviate the overhead of calculating correct shading with a complex BRDF consisting of many parameters, many methods resort to textures as containers for BRDF information. The most common among them is the Bidirectional Texture Function (BTF), where a set of base textures of the material under different illumination and viewing conditions is stored and used as a lookup table at runtime. A wide variety of compression algorithms have been proposed, which usually differ only in their basis notation. Several other schemes aside from the BTF also exist that make use of multiple textures as containers for surface appearance data, and which compress either the surface transfer function or the change in luminance response with a suitable basis function. We propose a common container for image-based material descriptors, the ImageMaterial node for X3D, with a common interface to unify these different implementations and make them accessible to the X3D developer. We also introduce a new texturing node, the PolynomialTextureMap, which can display Polynomial Texture Map binary containers as regular static textures or work in conjunction with an ImageMaterial appearance to unfold its full potential.

Show publication details

Pan, Xueming; Schiffer, Thomas; Schröttner, Martin; Havemann, Sven; Hecher, Martin; Berndt, Rene; Fellner, Dieter W.

A Scalable Repository Infrastructure for CH Digital Object Management

2012

International Society on Virtual Systems and MultiMedia: Proceedings of the VSMM 2012 : Virtual Systems in the Information Society. Los Alamitos, Calif.: IEEE Computer Society, 2012, pp. 219-226

International Conference on Virtual Systems and MultiMedia (VSMM) <18, 2012, Milan, Italy>

In recent decades, researchers in archaeological 3D digitization have found that collecting and archiving intermediate processing data are extremely tiresome tasks. They require large amounts of manpower and material resources, and even then mistakes can occur and break the whole working chain. The traditional documentation of the digitization process is also a pending challenge: although the ISO standard CIDOC-CRM (ISO 21127:2006) has been available to archaeologists and museum professionals for years, obvious gaps remain between practice and theory: (1) How can dispersed archaeologists, museums, CH research institutions, and the public be connected? (2) How can the integrity of the whole digitization process be ensured and the process simplified? (3) How can the usability of public digital objects in the CH community be maximized? (4) How can the huge amount of data be preserved in the long term? (5) How can digital objects be presented and disseminated to the public? This paper presents an operational infrastructure that realizes not only a distributed storage system but also a content management system. This infrastructure works as the backbone of the whole digitization process and provides a complete solution suite for archaeologists, museum professionals, museum visitors, and IT technicians.

Show publication details

Pan, Xueming; Schiffer, Thomas; Schröttner, Martin; Berndt, Rene; Hecher, Martin; Havemann, Sven; Fellner, Dieter W.

An Enhanced Distributed Repository for Working with 3D Assets in Cultural Heritage

2012

Ioannides, Marinos (Ed.) et al.: Progress in Cultural Heritage Preservation : 4th International Conference, EuroMed 2012. Berlin, Heidelberg, New York: Springer, 2012. (Lecture Notes in Computer Science (LNCS) 7616), pp. 349-358

International Euro-Mediterranean Conference on Cultural Heritage and Digital Libraries (EuroMed) <4, 2012, Limassol, Cyprus>

The development of a European market for digital cultural heritage assets is impeded by the lack of a suitable marketplace, i.e., a commonly accepted distributed exchange platform for digital assets. We have developed such a platform over the last two years: a centralized content management system with distributed storage capability and semantic query functionality. It supports the complete pipeline from data acquisition (photo, 3D scan) via processing (cleaning, hole filling) to interactive presentation, and allows a complete process description (paradata) to be collected alongside. In this paper we present the components of the system and explain their interplay. Furthermore, we explain which functional components, from transactions to permission management, are needed to operate the system. Finally, we demonstrate the suitability of the API and present a few software applications that use it.

Show publication details

Zmugg, René; Thaller, Wolfgang; Hecher, Martin; Schiffer, Thomas; Havemann, Sven; Fellner, Dieter W.

Authoring Animated Interactive 3D Museum Exhibits using a Digital Repository

2012

Arnold, David (Ed.) et al.: VAST 2012 : Eurographics Symposium Proceedings. Goslar: Eurographics Association, 2012, pp. 73-80

International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <13, 2012, Brighton, UK>

We present the prototype of a software system to streamline the serial production of simple interactive 3D animations for display in museum exhibitions. We propose dividing the authoring process into two phases, a designer phase and a curator phase. The designer creates a set of configurable 3D scene templates that fit the look of the physical exhibition, while the curator inserts 3D models and configures the scene templates; the finished scenes are uploaded to 3D kiosks in the museum. Distinguishing features of our system are the tight integration with an asset repository and the simplified scene graph authoring. We demonstrate its usefulness with a few examples.

Show publication details

Bein, Matthias; Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Completing Digital Cultural Heritage Objects by Sketching Subdivision Surfaces toward Restoration Planning

2012

Ioannides, Marinos (Ed.) et al.: Progress in Cultural Heritage Preservation : 4th International Conference, EuroMed 2012. Berlin, Heidelberg, New York: Springer, 2012. (Lecture Notes in Computer Science (LNCS) 7616), pp. 301-309

International Euro-Mediterranean Conference on Cultural Heritage and Digital Libraries (EuroMed) <4, 2012, Limassol, Cyprus>

In the restoration planning process a curator evaluates the condition of a Cultural Heritage (CH) object and accordingly develops a set of hypotheses for improving it. This iterative process is complex, time-consuming and requires many manual interventions. In this context, we propose interactive modeling techniques, based on subdivision surfaces, which can support the completion of CH objects toward restoration planning. The proposed technique starts with a scanned and incomplete object, represented by a triangle mesh, from which a subdivision surface can be generated. Based on this mixed representation, sketching techniques and modeling operations can be combined to extend and refine the subdivision surface according to the curator's hypothesis. Thus, curators without rigorous modeling experience can directly create and manipulate surfaces in much the same way as they would on a piece of paper. We present the capabilities of the proposed technique on two interesting CH objects.

Show publication details

Halm, Andreas; Eggeling, Eva; Fellner, Dieter W.

Embedding Biological Information in a Scene Graph System

2012

Linsen, Lars (Ed.) et al.: Visualization in Medicine and Life Sciences II : Progress and New Challenges. Berlin, Heidelberg, New York: Springer, 2012. (Mathematics and Visualization), pp. 249-264

We present the Bio Scene Graph (BioSG) for the visualization of biomolecular structures, based on the scene graph system OpenSG. The hierarchical model of primary, secondary and tertiary structures of molecules used in organic chemistry is mapped to a graph of nodes when loading molecular files. We show that, using BioSG, the display of molecules can be integrated into other applications, for example medical applications. Additionally, existing algorithms and programs can easily be adapted to display their results with BioSG.

Show publication details

Drechsler, Klaus; Sakas, Georgios (Betreuer); Fellner, Dieter W. (Betreuer); Mönch, Christian (Betreuer)

Extraction of Hepatic Veins in Contrast Enhanced CT with Application to Interventional Planning

2012

Darmstadt, TU, Diss., 2012

The liver performs several important tasks that are essential for survival. However, liver cancer, the third most common type of cancer, affects these functions significantly. Different treatment options are available, but a surgical resection, if possible, offers the best prognosis for the patient. Thus, the decision whether a surgical resection is feasible is important and must be taken with care in a pre-interventional planning stage. Modern volumetric imaging techniques such as CT or magnetic resonance imaging (MRI) are utilized to decide which treatment is best for the patient and to plan the intervention. However, the amount of anatomical detail visible in the acquired volumes is steadily increasing, and with it the amount of data per patient. Manual examination is time-consuming and prone to errors. Consequently, several software systems have been proposed to support the surgeon during the planning phase. The extraction of blood vessels plays an important role in these applications. The segmentation of vessels is a challenging problem that has to deal with acquisition-dependent issues such as noise, contrast, spatial resolution, and artifacts. Furthermore, blood vessel-specific characteristics like the high variability of size and curvature create additional difficulties for segmentation algorithms. The liver, in particular, poses another challenge to vessel segmentation algorithms: its supply and drain vessel systems are densely distributed within the liver, and because of partial volume effects and motion artifacts, they seem to be connected at some points. The focus of the present thesis is the robust extraction of hepatic veins in multiphase CT volumes. To this end, an image processing pipeline is presented that covers vessel enhancement, vessel segmentation, graph creation and tree reconstruction. The pipeline was used to develop an application for interventional planning.
It allows for the simulation of intraoperative hepatic vein clamping for (sub-)segment oriented liver resections and the execution of risk analysis to judge surgical risk during an atypical resection. Furthermore, results of the present thesis were also successfully used in an application for intraoperative navigation to extract liver vessels in 3D ultrasound data and matching of anatomical vessel trees and graphs of the liver for registration of 3D volumes.

Show publication details

Lancelle, Marcel; Voss, Gerrit; Fellner, Dieter W.

Fast Motion Rendering for Single-Chip Stereo DLP Projectors

2012

Boulic, Ronan (Ed.) et al.: Virtual Environments 2012 : Joint Virtual Reality Conference of EGVE - ICAT - EuroVR. Goslar: Eurographics Association, 2012, pp. 29-36

Joint Virtual Reality Conference (JVRC) <4, 2012, Madrid, Spain>

Single-chip color DLP projectors show the red, green and blue components one after another. When the gaze moves relative to the displayed pixels, color fringes are perceived. In order to reduce these artefacts, many devices show the same input image twice at double rate, i.e. a 60 Hz source image is displayed at 120 Hz. Consumer stereo projectors usually work with time-interlaced stereo, making it possible to address each of these two images individually. We use this so-called 3D mode for mono image display of fast-moving objects. Additionally, we generate a separate image for each individual color, taking the display time offset of each color component into account. With these 360 images per second we can strongly reduce ghosting, color fringes and jitter artefacts on fast-moving objects tracked by the eye, resulting in sharp objects with smooth motion. Real-time image generation at such a high frame rate can only be achieved for simple scenes, or may only be possible by severely reducing quality. We show how to modify a motion blur post-processing shader to render only 60 frames per second and efficiently generate good approximations of the missing frames.

Show publication details

Settgast, Volker; Lancelle, Marcel; Bauer, Dietmar; Fellner, Dieter W.

Hands-Free Navigation in Immersive Environments for the Evaluation of the Effectiveness of Indoor Navigation Systems

2012

Geiger, Christian (Ed.) et al.: Virtuelle und Erweiterte Realität : 9. Workshop der GI-Fachgruppe VR/AR. Aachen: Shaker, 2012. (Berichte aus der Informatik), pp. 107-118

Workshop der GI-Fachgruppe VR/AR: Virtuelle und Erweiterte Realität <9, 2012, Aachen, Germany>

While navigation systems for cars are in widespread use, indoor navigation systems based on smartphone apps have only recently become technically feasible. Hence, tools to plan and evaluate particular designs of information provision are needed. Since tests in real infrastructures are costly and environmental conditions cannot be held constant, one must resort to virtual infrastructures. In this paper we present hands-free navigation in such virtual worlds using the Microsoft Kinect in our four-sided Definitely Affordable Virtual Environment (DAVE). We designed and implemented navigation controls using the user's gestures and postures as the input to the controls. The installation of expensive and bulky hardware like treadmills is avoided while still giving the user a good impression of the distance she has travelled in virtual space. An advantage over approaches using head-mounted augmented reality is that the DAVE allows the users to interact with their smartphone. Thus the effects of different indoor navigation systems can be evaluated with the resulting system already in the planning phase.

Show publication details

Thaller, Wolfgang; Krispel, Ulrich; Havemann, Sven; Fellner, Dieter W.

Implicit Nested Repetition in Dataflow for Procedural Modeling

2012

Ullrich, Torsten (Ed.) et al.: Computation Tools 2012 : The Third International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking. ThinkMind, 2012, pp. 45-50

International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking (Computation Tools) <3, 2012, Nice, France>

Creating 3D content requires a lot of expert knowledge and is often a very time-consuming task. Procedural modeling can simplify this process for several application domains. However, creating procedural descriptions is still a complicated task. Graph-based visual programming languages can ease the creation workflow; however, direct manipulation of procedural 3D content rather than of a visual program is desirable, as it resembles established techniques in 3D modeling. In this paper, we present a dataflow language that features a novel approach to handling loops in the context of direct interactive manipulation of procedural 3D models, and we show compilation techniques to translate it to traditional languages used in procedural modeling.

Show publication details

Fellner, Dieter W.

Informatik und Open Access - von der idealistischen Sicht zum umsetzbaren "Goldenen Weg"

2012

Informatik Spektrum, Vol.35 (2012), 4, pp. 250-252

Die Vorstandsperspektive

The extended executive board of the GI is regularly responsible for a column in Informatik-Spektrum in which current topics in computer science are put up for discussion. The texts open up perspectives on current questions affecting computer scientists. In the present issue, Prof. Fellner, a member of the GI's extended executive board, examines Open Access in the context of scientific publishing.

Show publication details

Riemenschneider, Hayko; Krispel, Ulrich; Thaller, Wolfgang; Donoser, Michael; Havemann, Sven; Fellner, Dieter W.; Bischof, Horst

Irregular Lattices for Complex Shape Grammar Facade Parsing

2012

IEEE Computer Society: IEEE Conference on Computer Vision and Pattern Recognition : CVPR 2012. New York: IEEE, 2012, pp. 1640-1647

Conference on Computer Vision and Pattern Recognition (CVPR) <30, 2012, Providence, RI, USA>

High-quality urban reconstruction requires more than multi-view reconstruction and local optimization. The structure of facades depends on the general layout, which has to be optimized globally. Shape grammars are an established method to express hierarchical spatial relationships and are therefore suited to representing constraints for semantic facade interpretation. Usually, inference uses numerical approximations or hard-coded grammar schemes. Existing methods inspired by classical grammar parsing are not applicable to real-world images due to their prohibitively high complexity. This work provides feasible generic facade reconstruction by combining low-level classifiers with mid-level object detectors to infer an irregular lattice. The irregular lattice preserves the logical structure of the facade while reducing the search space to a manageable size. We introduce a novel method for handling symmetry and repetition within the generic grammar. We show competitive results on two datasets, namely Paris2010 and Graz50. The former includes only Haussmannian buildings, while the latter includes Classicism, Biedermeier, Historicism, Art Nouveau and post-modern architectural styles.

Show publication details

Fellner, Dieter W.; Baier, Konrad; Dürre, Steffen; Bornemann, Heidrun; Mentel, Katrin

Jahresbericht 2011: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2012

Darmstadt, 2012

The researchers at the Fraunhofer Institute for Computer Graphics Research IGD turn information into images and images into information. This image- and model-based computer science is called "Visual Computing". It comprises computer graphics, computer vision, and virtual and augmented reality. With Visual Computing, images, models, and graphics are captured, processed, and used for every conceivable computer-based application. The researchers correlate graphical application data with non-graphical data, which means they computationally enrich images, videos, and 3D models with text, sound, and speech. This in turn yields new insights that can be turned into innovative products and services, for which correspondingly advanced dialog techniques are designed. Through its numerous innovations, Fraunhofer IGD raises the interaction between human and machine to a new level.

Show publication details

Schinko, Christoph; Ullrich, Torsten; Fellner, Dieter W.

Minimally Invasive Interpreter Construction: How to Reuse a Compiler to Build an Interpreter

2012

Ullrich, Torsten (Ed.) et al.: Computation Tools 2012 : The Third International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking. ThinkMind, 2012, pp. 38-44

International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking (Computation Tools) <3, 2012, Nice, France>

Scripting languages are easy to use and very popular in various contexts. Their simplicity lowers a user's inhibition threshold to start programming - especially if the user is not a computer science expert. As a consequence, our generative modeling framework Euclides for non-expert users is based on a JavaScript dialect. It consists of a JavaScript compiler including a front-end (lexer, parser, etc.) and back-ends for several platforms. In order to reduce our users' development times and provide fast feedback, we integrated an interactive interpreter based on the already existing compiler. Instead of writing large proportions of new code, whose behavior would have to be consistent with the existing compiler, we used a minimally invasive solution, which allows us to reuse most parts of the compiler's front- and back-end.
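The general idea of reusing an existing compiler to obtain an interactive interpreter can be sketched in a few lines (a toy illustration using Python's built-in `compile`, not the Euclides implementation): each input chunk is run through the existing compilation machinery and executed against a persistent environment, so no separate interpreter code has to be kept consistent with the compiler.

```python
# Persistent environment shared across all interpreted chunks.
env = {}

def interpret(source, env):
    """Reuse the existing compiler (here Python's built-in compile) and
    execute the resulting code object in the shared environment."""
    code = compile(source, "<repl>", "exec")
    exec(code, env)

interpret("x = 2 + 3", env)
interpret("y = x * 4", env)   # sees x from the previous chunk
```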

Show publication details

May, Thorsten; Fellner, Dieter W. (Betreuer); Hauser, Helwig (Betreuer)

Modelle und Methoden für die Kopplung automatischer und visuell-interaktiver Verfahren für die Datenanalyse

2012

Darmstadt, TU, Diss., 2011

This thesis presents new ways of coupling visualization techniques and data mining methods for data analysis. The analysis steps considered are the search for patterns, their transformation into formal models, and the validation of these models. In existing technologies, the search for patterns and their modeling are not separated. In this concept the tasks are redistributed: the human searches for and identifies patterns with the aid of interactive visualization techniques, while data mining methods take over the transformation of the discovered patterns. This division makes better use of the specific strengths of human and machine. The human contributes a flexible and robust perceptual system to this task and is at the same time relieved of the cognitively more demanding task of transforming the patterns; the machine, in particular, can transform even complex patterns into formal models more easily. When humans must perform the interpretation without aids or training, patterns of more than two or three dimensions can rarely be interpreted. The thesis shows that patterns of ten- and higher-dimensional relationships can not only be displayed and perceived but also used. The division of labor also compensates for a weakness of automatic methods, namely that they only find the kinds of patterns they were designed to search for. During model construction, the interactive specification eliminates ambiguities in the patterns. To validate the constructed model, a pattern is generated from it again and visually compared with the patterns of the original data. This yields qualitative information about the nature of the modeling error and the fundamental suitability of the automatic method that quantitative quality measures alone cannot provide.
Taken together, the concept describes two connections between visual-interactive and automatic techniques in opposite, complementary directions. Pattern recognition, interaction, and automatic modeling describe the path from pattern to model; simulation, feedback, and visual comparison describe the path from the model back to the pattern. The model of the data is constructed, continuously checked, and refined in a cyclic, iterative process. To distinguish these two coupling variants from other existing approaches, the general model of the visual analytics process was refined. Independently of specific techniques, eight different coupling variants were identified, into which both known approaches and approaches that have so far hardly been realized in the literature are classified.

Show publication details

Erdt, Marius; Sakas, Georgios (Betreuer); Fellner, Dieter W. (Betreuer); Vogl, Thomas J. (Betreuer)

Non-Uniform Deformable Volumetric Objects for Medical Organ Segmentation and Registration

2012

Darmstadt, TU, Diss., 2012

In medical imaging, large amounts of data are created during each patient examination, especially using 3-dimensional image acquisition techniques such as Computed Tomography. This data becomes more and more difficult to handle by humans without the aid of automated or semi-automated image processing and analysis. In particular, the manual segmentation of target structures in 3D image data is one of the most time-consuming tasks for the physician in the context of using computerized medical applications. In addition, 3D image data increases the difficulty of mentally comparing two different images of the same structure. Robust automated organ segmentation and registration methods are therefore needed in order to fully utilize the potential of modern medical imaging. This thesis addresses the described issues by introducing a new model-based method for automated segmentation and registration of organs in 3D Computed Tomography images. In order to robustly segment organs in low-contrast images, a volumetric model-based approach is proposed that incorporates texture information from the model's interior during adaptation. It is generalizable and extendable such that it can be combined with statistical shape modeling methods and standard boundary detection approaches. In order to increase the robustness of the segmentation in cases where the shape of the target organ significantly deviates from the model, local elasticity constraints are proposed. They limit the flexibility of the model in areas where shape deviation is unlikely. This allows for a better segmentation of untrained shapes and improves the segmentation of organs with complex shape variation like the liver. The model-based methods are evaluated on the liver in the portal venous and arterial contrast phases, the bladder, the pancreas, and the kidneys.
An average surface distance error between 0.5 mm and 2.0 mm is obtained for the tested structures which is in most cases close to the interobserver variability between different humans segmenting the same structure. In the case of the pancreas, for the first time, an automatic segmentation from single phase contrast enhanced CT becomes feasible. In the context of organ registration, the developed methods are applied to deformable registration of multi-phase contrast enhanced liver CT data. The method is integrated into a clinical demonstrator and is currently in use for testing in two clinics. The presented method for automatic deformable multi-phase registration has been quantitatively and qualitatively evaluated in the clinic. In nearly all tested cases, the registration quality is sufficient for clinical needs. The result of this thesis is a new approach for automatic organ segmentation and registration that can be applied to various clinical problems. In many cases, it can be used to significantly reduce or even remove the amount of manual contour drawing. In the context of registration, the approach can be used to improve clinical diagnosis by overlaying different images of the same anatomical structure with higher quality than existing methods. The combination of proposed segmentation and registration therefore saves valuable clinician time in dealing with today's 3D medical imaging data.

Show publication details

Bremm, Sebastian; Heß, Martin; Landesberger, Tatiana von; Fellner, Dieter W.

PCDC - On the Highway to Data. A Tool for the Fast Generation of Large Synthetic Data Sets

2012

Matkovic, Kresimir (Ed.) et al.: EuroVA 2012 : International Workshop on Visual Analytics. Goslar: Eurographics Association, 2012, pp. 7-11

International Workshop on Visual Analytics (EuroVA) <3, 2012, Vienna, Austria>

In this paper, we present Parallel Coordinates for Data Creation (PCDC), a new visual-interactive method for the fast generation of labeled multidimensional data sets. Multivariate data need to be analyzed in various domains such as finance, biology or medicine using complex data mining techniques. For the evaluation or presentation of these techniques, e.g., for assessing their sensitivity to specific data properties, test data need to be generated. PCDC allows for a fast and intuitive creation of multivariate data with several classes. It is based on the interactive definition of data regions and data distributions in a parallel coordinates view. It offers a quick definition of data regions over several dimensions in one interface. Moreover, users can directly see the outcome of their settings in the same view without the need to switch between data generation and output visualization. Our tool also enables an easy adjustment of the data generation parameters for creating additional similar datasets.
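A minimal sketch of the underlying data-generation step (function names and the uniform sampling are illustrative assumptions; the PCDC tool itself is visual-interactive): each class is defined by one value interval per dimension, analogous to brushed regions on parallel-coordinates axes, and labeled points are sampled inside those regions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(class_regions, n_per_class):
    """class_regions: {label: [(lo, hi), ...]}, one interval per dimension.
    Samples points uniformly inside each class's region and labels them."""
    X, y = [], []
    for label, intervals in class_regions.items():
        lows = np.array([lo for lo, _ in intervals])
        highs = np.array([hi for _, hi in intervals])
        X.append(rng.uniform(lows, highs, size=(n_per_class, len(intervals))))
        y.extend([label] * n_per_class)
    return np.vstack(X), np.array(y)

# two classes in a 3-dimensional space, 100 points each
regions = {
    "A": [(0.0, 0.3), (0.5, 1.0), (0.0, 0.5)],
    "B": [(0.6, 1.0), (0.0, 0.4), (0.4, 0.9)],
}
X, y = generate(regions, 100)
```

Swapping the uniform sampler for, e.g., a per-dimension Gaussian would correspond to choosing a different distribution on an axis.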

Show publication details

Schwenk, Karsten; Kuijper, Arjan; Behr, Johannes; Fellner, Dieter W.

Practical Noise Reduction for Progressive Stochastic Ray Tracing with Perceptual Control

2012

IEEE Computer Graphics and Applications, Vol.32 (2012), 6, pp. 46-55

A proposed method reduces noise in stochastic ray tracing for interactive progressive rendering. The method accumulates high-variance light paths in a separate buffer, which is filtered by a high-quality edge-preserving filter. Then, this method adds a combination of the noisy unfiltered samples and the less noisy (but biased) filtered samples to the low-variance samples to form the final image. A novel per-pixel blending operator combines both contributions in a way that respects a user-defined threshold on perceived noise. This method can provide fast, reliable previews, even in the presence of complex features such as specular surfaces and high-frequency textures. At the same time, it's consistent in that the bias due to filtering vanishes in the limit.
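The per-pixel blending idea can be sketched as follows (the buffer names and the simple weight formula are illustrative assumptions, not the paper's exact operator): where the estimated variance exceeds the user's noise threshold, the result leans on the filtered buffer; elsewhere the unbiased samples are kept, so the bias vanishes as the variance decreases.

```python
import numpy as np

def blend(unfiltered, filtered, variance, noise_threshold):
    """Per-pixel blend: the weight w rises from 0 to 1 as the estimated
    variance exceeds the user threshold; w = 0 keeps the unbiased samples,
    so the result stays consistent as the variance vanishes."""
    w = np.clip(variance / noise_threshold - 1.0, 0.0, 1.0)
    return (1.0 - w[..., None]) * unfiltered + w[..., None] * filtered

# usage: low-variance pixels keep the unbiased buffer, high-variance ones
# take the filtered (biased but less noisy) buffer
noisy = np.full((2, 2, 3), 1.0)
smooth = np.full((2, 2, 3), 0.5)
image = blend(noisy, smooth, np.zeros((2, 2)), noise_threshold=1.0)
```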

Show publication details

Zhou, Xuebing; Fellner, Dieter W. (Betreuer); Veldhuis, Raymond N. J. (Betreuer)

Privacy and Security Assessment of Biometric Template Protection

2012

Darmstadt, TU, Diss., 2011

Biometrics enables convenient authentication based on a person's physical or behavioral characteristics. In comparison with knowledge- or token-based methods, it links an identity directly to its owner. Furthermore, it cannot be forgotten or handed over easily. As biometric techniques have become more and more efficient and accurate, they are widely used in numerous areas. Among the most common application areas are physical and logical access control, border control, authentication in banking applications and biometric identification in forensics. In this growing field of biometric applications, concerns about privacy and security cannot be neglected. The advantages of biometrics can easily turn into the opposite: the potential misuse of biometric information is not limited to the endangerment of user privacy, since biometric data potentially contain sensitive information like gender, race, state of health, etc. Different applications can be linked through unique biometric data. Additionally, identity theft is a severe threat to identity management if revocation and reissuing of biometric references are practically impossible. Therefore, template protection techniques have been developed to overcome these drawbacks and limitations of biometrics. Their advantage is the creation of multiple secure references from biometric data. These secure references are supposed to be unlinkable and non-invertible in order to achieve the desired level of security and to fulfill privacy requirements. The existing algorithms can be categorized into transformation-based approaches and biometric cryptosystems. The transformation-based approaches deploy different transformation or randomization functions, while the biometric cryptosystems construct secrets from biometric data. Their integration in biometric systems is commonly accepted in research, and their feasibility with respect to recognition performance has been proved.
Despite the success of biometric template protection techniques, their security and privacy properties have been investigated only to a limited extent. This deficiency is addressed in this thesis, and a systematic evaluation framework for biometric template protection techniques is proposed and demonstrated. Firstly, three main protection goals are identified based on a review of the requirements on template protection techniques. The identified goals can be summarized as security, privacy protection ability and unlinkability. Furthermore, definitions of privacy and security are given, which allow quantifying the computational complexity of estimating a pre-image of a secure template and measuring the hardness of retrieving biometric data, respectively. Secondly, three threat models are identified as important prerequisites for the assessment. Threat models define the information about biometric data, system parameters and functions that can be accessed during the evaluation or an attack. The first threat model, the so-called naive model, assumes that an adversary has very limited information about a system. In the second threat model, the advanced model, we apply Kerckhoffs' principle and assume that essential details of the algorithms as well as the properties of the biometric data are known. The last threat model, the collision threat model, assumes that an adversary possesses a large amount of biometric data, which allows him to exploit the inaccuracy of biometric systems. Finally, a systematic framework for privacy and security assessment is proposed. Before an evaluation process, protection goals and threat models need to be clarified. Based on these, metrics measuring the different protection goals as well as an evaluation process determining the metrics are developed. Both theoretical evaluation with metrics such as entropy and mutual information and practical evaluation based on individual attacks can be used.
The framework for privacy and security assessment is applied to biometric cryptosystems: fuzzy commitment schemes for 3D face and iris recognition are assessed. I develop my own 3D face recognition algorithm based on the depth distribution of facial sub-surfaces and integrate it in the fuzzy commitment scheme. The iris recognition is based on an open-source algorithm using Gabor filters. It is implemented in the fuzzy commitment scheme with the two-layer coding method proposed by Hao et al. Both the 3D face features and the iris features represent local characteristics of the modalities; thus, strong dependency within these features is observed. A second-order dependency tree is applied to describe the distribution of the 3D face features, and a Markov model is applied to characterize the statistical properties of the iris features. Thus, the security and privacy of these algorithms can be measured with theoretical metrics. Due to the strong feature dependency, the achieved security is much smaller than the secret size, which is the assumed security in a perfectly secure case with uniformly and identically distributed features. Moreover, the unlinkability is analyzed. The analysis shows that these protected systems are less vulnerable to leakage amplification. However, the secure templates contain a great deal of personally identifiable information. We demonstrate attacks which can identify a subject by linking the auxiliary data stored in his secure templates. Cross matching is assessed via the performance of these attacks. Additionally, the characteristics of the iris features are exploited to perform an attack retrieving features from secure templates. The efficiency of this practical attack confirms the result of the theoretical privacy assessment with conditional entropy. The coding process plays a very important role for the security and privacy properties of the fuzzy commitment scheme. Designing a coding method should not only focus on improving the code rate.
As shown in this thesis, security and privacy properties can be enhanced significantly by changing the dependency pattern in iris features and 3D face features. Therefore, the coding process should be adapted to properties of the underlying biometric features to increase the security and privacy performance. The security and privacy assessment within this thesis is completed by a comparison of two fuzzy commitment algorithms with the fuzzy vault algorithm for fingerprint recognition. Here, different threat models as well as the corresponding protection goals are considered. The fuzzy vault system has the best performance regarding security and irreversibility of biometric features. However, all of these systems are vulnerable to cross matching. The comparison results show that the proposed evaluation framework provides the fundamental basis for benchmarking different template protection algorithms. The proposed framework is also validated with the existing security analysis on transformation-based approaches. Unlike the analysis on biometric cryptosystems, the security is dependent on the hardness of transformation functions or randomization processes. Therefore, the presented analysis is based on efficiency of different kinds of attacks, which measure different protection goals in the appropriate threat models. The security of these approaches depends on the transformation parameters. The knowledge of these parameters allows generating a pre-image, while it is still hard to estimate the original biometric features practically. However, privacy leakage amplifications are still possible. This thesis defines a systematic evaluation framework, which adheres to essential criteria and requirements of biometric template protection techniques. Its applicability is demonstrated with the analysis of template protection algorithms for different biometric modalities. The assessment presented in this thesis is fundamental for a thorough analysis. 
Furthermore, it provides provable evidence on security and privacy performance. Therefore, it is the fundamental tool for technical innovation and improvement and helps system designers in selecting a suitable template protection algorithm for their applications and needs. It creates a basis for certification and benchmarking of biometric template protection.
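The fuzzy commitment scheme discussed above can be illustrated with a toy sketch (a 3x repetition code and tiny bit strings stand in for the real two-layer coding; all names are illustrative): a codeword derived from a secret key is XORed with the biometric bits to form the helper data, and a slightly noisy probe of the same biometric still recovers the key.

```python
import hashlib

def repeat_encode(bits, r=3):
    """Encode each key bit as r identical code bits (toy ECC)."""
    return [b for b in bits for _ in range(r)]

def repeat_decode(bits, r=3):
    """Majority-vote each group of r code bits back to one key bit."""
    return [int(sum(bits[i * r:(i + 1) * r]) > r // 2)
            for i in range(len(bits) // r)]

def commit(biometric_bits, key_bits):
    """Helper data = codeword XOR biometric; a hash of the key is stored
    so a later opening attempt can be verified."""
    codeword = repeat_encode(key_bits)
    helper = [c ^ b for c, b in zip(codeword, biometric_bits)]
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return helper, digest

def open_commitment(biometric_bits, helper, digest):
    """XOR the (noisy) probe back onto the helper data, decode, verify."""
    codeword = [h ^ b for h, b in zip(helper, biometric_bits)]
    key = repeat_decode(codeword)
    return key if hashlib.sha256(bytes(key)).hexdigest() == digest else None

key = [1, 0, 1, 1]
bio = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
helper, digest = commit(bio, key)
noisy = bio[:]
noisy[2] ^= 1                       # one flipped bit, within ECC capacity
recovered = open_commitment(noisy, helper, digest)
```

The thesis's point about feature dependency applies here too: if the biometric bits are correlated, the helper data leaks more than the ideal uniform-feature analysis suggests.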

Show publication details

Berndt, Rene; Blümel, Ina; Sens, Irina; Clausen, Michael; Damm, David; Klein, Reinhard; Thomas, Verena; Wessel, Raoul; Diet, Jürgen; Fellner, Dieter W.; Scherer, Maximilian

PROBADO - A Digital Library System for Heterogeneous Non-textual Documents

2012

Eleed Journal, (2012), 8, 5 p.

The goal of Probado was to develop and implement a service providing content-based access to non-textual documents, and to put this service into production use in scientific libraries. In particular, the focus was on a semi-automatic indexing process to deal with the ever-increasing amount of available documents. Within the scope of Probado we reached these goals. In particular, we deployed such services for digital 3D architectural models with our collaboration partner, the Technische Informationsbibliothek in Hannover. For digitized classical music, content-based query services were developed with the Bayerische Staatsbibliothek in Munich.

Show publication details

Weber, Daniel; Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Rapid CFD für die frühe konzeptionelle Design Phase

2012

NAFEMS Online Magazin, Vol.21 (2012), 1, pp. 70-79

An important part of the product development cycle is optimizing the fluid-mechanical or structural-mechanical properties of a component, which normally takes place in an iterative and very laborious process. Besides modifying, simplifying, and meshing the part geometry, the simulation itself can take hours to days. In early conceptual design phases, different material parameters and geometries must be tried out and compared in order to arrive at an optimal design for the later product. This time-consuming process clearly limits the number of alternatives that can be analyzed. This paper presents the "Rapid CFD" framework, which makes fast flow simulations usable in the early conceptual design phase. To reach this speed, the computation and visualization of two-dimensional flows are combined in real time. This enables interactive modification of parameters and boundary conditions, and thus a fast analysis and evaluation of different geometries and an early optimization of a component. The framework performs all computations on the graphics processing unit (GPU) and thereby avoids the costly copying between CPU and GPU memory. The computations run on a standard desktop PC, so the simulation results remain in GPU memory and can be used directly for visualization. B-splines are used for modeling the geometry so that users can locally modify the shape via individual control points. The discretization is also performed on the GPU. A single time step, even for millions of unknowns, is computed in fractions of a second.
The intuitive geometric manipulation combined with the immediate visualization of simulation quantities such as pressure and velocity enables direct analysis of the influence of geometry and parameter changes. Although this novel simulation technique does not yet reach the high precision of conventional simulations, it makes it possible to observe trends and tendencies.

Show publication details

Berndt, Rene; Settgast, Volker; Eggeling, Eva; Schinko, Christoph; Krispel, Ulrich; Havemann, Sven; Fellner, Dieter W.

Ring's Anatomy - Parametric Design of Wedding Rings

2012

Sehring, Hans-Werner et al.: CONTENT 2012 : The Fourth International Conference on Creative Content Technologies. ThinkMind, 2012, pp. 72-78

International Conference on Creative Content Technologies (CONTENT) <4, 2012, Nice, France>

We present a use case that demonstrates the effectiveness of procedural shape modeling for mass customization of consumer products. We show a metadesign that is composed of a few well-defined procedural shape building blocks. It can generate a large variety of shapes and covers most of a design space defined by a collection of exemplars, in our case wedding rings. We describe the process of model abstraction for the shape space spanned by these shapes, arguing that the same is possible for other shape design spaces as well.
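The notion of a meta-design spanned by a few meta-parameters can be illustrated with a toy sketch (parameter names and the simple circular sweep are illustrative assumptions; the paper's procedural building blocks are far richer): varying `radius`, `width`, and `thickness` spans a small family of ring shapes.

```python
import math

def ring_profile(radius=10.0, width=3.0, thickness=1.5, n=64):
    """Sweep the outer rim of a ring cross-section around a circle and
    return it as a polyline of (x, y, z) points."""
    outer = radius + thickness
    return [(outer * math.cos(2.0 * math.pi * i / n),
             outer * math.sin(2.0 * math.pi * i / n),
             width / 2.0)
            for i in range(n)]

# varying the meta-parameters yields different instances of the ring family
slim = ring_profile(radius=9.0, width=2.0, thickness=1.0)
bold = ring_profile(radius=10.0, width=5.0, thickness=2.5)
```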

Show publication details

Weber, Daniel; Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Schnelle Strömungsberechnungen mit GPU: Rapid CFD für die frühe konzeptionelle Designphase

2012

Digital Engineering Magazin, Vol.15 (2012), 5, pp. 44-47

A new wing is designed on the computer. Is its lift actually better than that of conventional ones? A computer simulation can provide the answer. Conventional simulations usually deliver the desired results only after several hours or days; only then can the geometry be modified to improve its properties. A new method now delivers the first simulation results in real time. It uses the processors of graphics cards (graphics processing unit, GPU) for the necessary computations.

Show publication details

Hecher, Martin; Möstl, Robert; Eggeling, Eva; Derler, Christian; Fellner, Dieter W.

Tangible Culture - Designing Virtual Exhibitions on Multi-Touch Devices

2012

Baptista, Ana Alice (Ed.) et al.: Social Shaping of Digital Publishing: Exploring the Interplay Between Culture and Technology : Proceedings of the 16th International Conference on Electronic Publishing. Amsterdam; Berlin; Tokyo; Washington, DC: IOS Press, 2012, pp. 104-113

International Conference on Electronic Publishing (ELPUB) <16, 2012, Guimarães, Portugal>

Cultural heritage institutions such as galleries, museums and libraries increasingly use digital media to present artifacts to their audience and enable them to immerse themselves in a cultural virtual world. With the application eXhibition: editor3D, museum curators and editors have a software tool at hand to interactively plan and visualize exhibitions. In this paper we present an extension to the application that enhances the workflow when designing exhibitions. By introducing multi-touch technology to the graphical user interface, the design phase of an exhibition is greatly simplified, especially for non-technical users. Furthermore, multi-touch technology offers a novel way of integrating collaborative work into a decision-making process. A flexible export system allows storing created exhibitions in various formats to display them on websites, mobile devices or custom viewers. For example, the widespread 3D scene standard Extensible 3D (X3D) is one of the export formats, and we use it to directly incorporate a real-time preview of the exhibition in the authoring process. The combination of the tangible user interfaces with the real-time preview gives curators and exhibition planners a capable tool for efficiently presenting cultural heritage in electronic media.

Show publication details

Havemann, Sven; Ullrich, Torsten; Fellner, Dieter W.

The Meaning of 3D Shape and some Techniques to Extract it

2012

Maybury, Mark T. (Ed.): Multimedia Information Extraction : Advances in Video, Audio, and Imagery Analysis for Search, Data Mining, Surveillance and Authoring. New York et al.: John Wiley & Sons, 2012, pp. 81-97

In the context of information extraction, the question to begin with is: which semantic information can a 3D model be expected to contain? The truth is that 3D data sets are used for conveying very different sorts of information. A 3D scanning process typically produces a number of textured triangle meshes, or maybe just a large set of colored points. So a single 3D scan is conceptually very much like a photograph; it is the result of an optical measuring process, only with additional depth information. One 3D scan may contain many objects at the same time, or a set of 3D scans may contain different views of the same object. The notion of an object is highly problematic in this context, of course, and must be used with care. For the time being, we define a 3D object pragmatically as a distinguishable unit according to a given interpretation or a given query. So the notion of what is regarded as an object may change as a function of interpretation and query context.

Show publication details

Settgast, Volker; Eggeling, Eva; Fellner, Dieter W.

The Preparation of 3D-Content for Interactive Visualization

2012

Schenk, Michael (Ed.): 15. IFF-Wissenschaftstage 2012. Tagungsband : Digitales Engineering zum Planen, Testen und Betreiben technischer Systeme. Stuttgart: Fraunhofer Verlag, 2012, pp. 187-192

IFF-Wissenschaftstage <15, 2012, Magdeburg, Germany>

The presentation of 3D content is an essential part of many industrial and scientific projects. Interactive visualizations are much more useful than images and pre-rendered videos. But the creation process can be an important cost factor. Furthermore, the outcome of such visualizations has to compete with state-of-the-art computer games. It is not sufficient for interactive presentations to have the 3D content and the rendering software. The content has to be modified. In the best case the data only has to be converted to be understood by the presentation application. This task can be automated by conversion software. But in most cases the content has to be modified beyond that. Scenes optimized for interactive rendering can hardly be created automatically, and the modification is a time- and cost-intensive procedure. Suitable measures to reduce the time and cost effort are described in this article.

Show publication details

Landesberger, Tatiana von; Schreck, Tobias; Fellner, Dieter W.; Kohlhammer, Jörn

Visual Search and Analysis in Complex Information Spaces - Approaches and Research Challenges

2012

Dill, John (Ed.) et al.: Expanding the Frontiers of Visual Analytics and Visualization. Berlin, Heidelberg, New York: Springer, 2012, pp. 45-67

One of the central motivations for visual analytics research is the so-called information overload - implying the challenge for human users in understanding and making decisions in the presence of too much information [37]. Visual-interactive systems, integrated with automatic data analysis techniques, can help in making use of such large data sets [35]. Visual Analytics solutions not only need to cope with data volumes that are large on the nominal scale, but also with data that show high complexity. An important characteristic of complex data is that the data items are difficult to compare in a meaningful way based on the raw data. Also, the data items may be composed of different base data types, giving rise to multiple analytical perspectives. Example data types include research data composed of several base data types, multimedia data composed of different media modalities, etc. In this paper, we discuss the role of data complexity for visual analysis and search, and identify implications for designing respective visual analytics applications. We first introduce a data complexity model, and present current example visual analysis approaches based on it, for a selected number of complex data types. We also outline research challenges for visual search and analysis that we deem important.

Show publication details

Bender, Jan; Kuijper, Arjan; Fellner, Dieter W.; Guérin, Eric

VRIPHYS 12: 9th Workshop in Virtual Reality Interactions and Physical Simulations

2012

Goslar : Eurographics Association, 2012

International Workshop in Virtual Reality Interaction and Physical Simulations (VRIPhys) <9, 2012, Lyon, France>

The workshop on Virtual Reality Interactions and Physical Simulations (VRIPHYS) is one of the well established international conferences in the field of computer animation and virtual reality. Since 2004, this annual workshop has provided an opportunity for researchers in computer animation and virtual reality to present and discuss their latest results, and to share ideas for potential directions of future research.

Show publication details

Brix, Torsten; Fellner, Dieter W.; Krämer, Bernd J.; Schrader, Thomas

Workshop: Centers of Excellence for Research Information - Digital Text and Data Centers for Science and Open Research

2012

Eleed Journal, (2012), 8

Status reports on four DFG projects: CampusContent/edu-sharing, DMG-Lib, OpEN.SC, and PROBADO.

Show publication details

Hecher, Martin; Möstl, Robert; Eggeling, Eva; Derler, Christian; Fellner, Dieter W.

"Tangible Culture" - Designing Virtual Exhibitions on Multi-touch Devices

2011

Information Services & Use, Vol.31 (2011), 3-4, pp. 199-208

Cultural heritage institutions such as galleries, museums and libraries increasingly use digital media to present artifacts to their audience and enable them to immerse themselves in a cultural virtual world. With the application eXhibition: editor3D museum curators and editors have a software tool at hand to interactively plan and visualize exhibitions. In this paper we present an extension to the application that enhances the workflow when designing exhibitions. By introducing multi-touch technologies to the graphical user interfaces, the design phase of an exhibition is simplified considerably, especially for non-technical users. Furthermore, multi-touch technologies offer a novel way of integrating collaborative work into a decision-making process. The widespread 3D scene standard Extensible 3D (X3D) is used as the export format. By using X3D the resulting exhibitions are highly versatile. We show this versatility by directly using X3D's External Authoring Interface to integrate a real-time preview of the exhibition in the authoring phase. The combination of the tangible user interfaces with the real-time preview gives curators and editors a capable tool for efficiently presenting cultural heritage to a wide audience.

Show publication details

Huff, Rafael; Gierlinger, Thomas; Kuijper, Arjan; Stork, André; Fellner, Dieter W.

A Comparison of xPU Platforms Exemplified with Ray Tracing Algorithms

2011

IEEE Computer Society: XIII Symposium on Virtual Reality : SVR 2011. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2011, pp. 1-8

Symposium on Virtual Reality (SVR) <13, 2011, Uberlandia, Brazil>

Over the years, faster hardware - with higher clock rates - has been the usual way to improve computing times in computer graphics. Aside from highly costly parallel solutions only affordable by big industries - like the movie industry - there was no alternative available to desktop users. Nevertheless, this scenario is changing dramatically with the introduction of more and more parallelism in current desktop PCs. Multi-core CPUs are a common basis in current PCs, and the power of modern GPUs - which have been multi-core for a long time now - is being unveiled to developers. NVIDIA's CUDA is a powerful tool to exploit GPU parallelism. Yet its specific target - NVIDIA graphics cards only - offers no solution for other parallel hardware present. OpenCL is a new royalty-free cross-platform standard intended to be portable across different hardware manufacturers or even different platforms. In this paper we focus on a comparison of the advantages and disadvantages of xPU platforms with OpenCL and CUDA in terms of time efficiency. As an example application we use ray tracing algorithms. Three kinds of ray tracers had to be developed in order to conduct a fair comparison: one is CPU based, while the other two are GPU based - using CUDA and OpenCL, respectively. In the end, a comparison is made between them and the results are presented and analyzed, showing that the CUDA implementation has the best frame rate, but is very closely followed by the OpenCL implementation. Visually, the results are identical, showing the high potential of OpenCL as an alternative to CUDA with identical performance.

Show publication details

Bernard, Jürgen; Brase, Jan; Fellner, Dieter W.; Koepler, Oliver; Kohlhammer, Jörn; Ruppert, Tobias; Schreck, Tobias; Sens, Irina

A Visual Digital Library Approach for Time-Oriented Scientific Primary Data

2011

International Journal on Digital Libraries, Vol.11 (2011), 2, pp. 111-123

European Conference on Research and Advanced Technology for Digital Libraries (ECDL) <14, 2010, Glasgow, UK>

Digital Library support for textual and certain types of non-textual documents has significantly advanced over the last years. While Digital Library support implies many aspects along the whole library workflow model, interactive and visual retrieval allowing effective query formulation and result presentation are important functions. Recently, new kinds of non-textual documents which merit Digital Library support, but cannot yet be fully accommodated by existing Digital Library technology, have come into focus. Scientific data, as produced, for example, by scientific experimentation, simulation or observation, is such a document type. In this article we report on a concept and first implementation of Digital Library functionality for supporting visual retrieval and exploration in a specific important class of scientific primary data, namely, time-oriented research data. The approach is developed in an interdisciplinary effort by experts from the library, natural sciences, and visual analytics communities. In addition to presenting the concept and discussing relevant challenges, we present results from a first implementation of our approach as applied to a real-world scientific primary data set. We also report on initial user feedback obtained during discussions with domain experts from the earth observation sciences, indicating the usefulness of our approach.

Show publication details

Binotto, Alecio; Pereira, Carlos Eduardo; Kuijper, Arjan; Stork, André; Fellner, Dieter W.

An Effective Dynamic Scheduling Runtime and Tuning System for Heterogeneous Multi and Many-Core Desktop Platforms

2011

Thulasiraman, Parimala (Ed.) et al.: Proceedings 2011 IEEE International Conference on High Performance Computing and Communications : HPCC 2011. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2011, pp. 78-85

IEEE International Conference on High Performance Computing and Communications (HPCC) <13, 2011, Banff, Alberta, Canada>

A personal computer can be considered as a one-node heterogeneous cluster that simultaneously processes several application tasks. It can be composed of, for example, asymmetric CPUs and GPUs. This way, a high-performance heterogeneous platform is built on a desktop for data-intensive engineering calculations. In our perspective, the workload distribution over the Processing Units (PUs) plays a key role in such systems. This issue presents challenges since the cost of a task on a PU is non-deterministic and can be affected by parameters not known a priori. This paper presents a context-aware runtime and tuning system based on a compromise between reducing the execution time of engineering applications - due to appropriate dynamic scheduling - and the cost of computing such scheduling on a platform composed of CPUs and GPUs. Results obtained in experimental case studies are encouraging and a performance gain of 21.77% was achieved in comparison to the static assignment of all tasks to the GPU.

Show publication details

Schwenk, Karsten; Behr, Johannes; Fellner, Dieter W.

An Error Bound for Decoupled Visibility with Application to Relighting

2011

Avis, Nick (Ed.) et al.: Eurographics 2011. Short Papers. Eurographics Association, 2011, pp. 25-28

Eurographics <32, 2011, Llandudno, UK>

Monte Carlo estimation of direct lighting is often dominated by visibility queries. If an error is tolerable, the calculations can be sped up by using a simple scalar occlusion factor per light source to attenuate radiance, thus decoupling the expensive estimation of visibility from the comparatively cheap sampling of unshadowed radiance and BRDF. In this paper we analyze the error associated with this approximation and derive an upper bound. We demonstrate in a simple relighting application how our result can be used to reduce noise by introducing a controlled error if a reliable estimate of the visibility is already available.
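The decoupling described in the abstract can be illustrated with a minimal sketch (our own toy example, not the paper's estimator; the error bound itself is derived in the paper). Per-sample radiance values and binary visibility flags are assumed inputs:

```python
def shaded_estimate(radiance, visibility):
    """Joint Monte Carlo estimator: each radiance sample is multiplied
    by its own (expensive) binary visibility query."""
    n = len(radiance)
    return sum(L * V for L, V in zip(radiance, visibility)) / n

def decoupled_estimate(radiance, visibility):
    """Decoupled estimator: unshadowed radiance attenuated by a single
    scalar occlusion factor (the mean visibility) per light source."""
    v_bar = sum(visibility) / len(visibility)   # scalar occlusion factor
    unshadowed = sum(radiance) / len(radiance)  # cheap to sample
    return v_bar * unshadowed

# The two estimators agree whenever radiance and visibility are
# uncorrelated over the samples (e.g. constant visibility); in general
# they differ, and the paper bounds that difference.
```

In practice the occlusion factor can be estimated from far fewer (or cached) visibility queries than the radiance term, which is where the speed-up comes from.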

Show publication details

Fellner, Dieter W.; Baier, Konrad; Klingelmeyer, Melanie; Bornemann, Heidrun; Mentel, Katrin

Annual Report 2010: Fraunhofer Institute for Computer Graphics Research IGD

2011

Darmstadt, 2011

Show publication details

Jung, Yvonne; Kuijper, Arjan; Fellner, Dieter W.; Kipp, Michael; Miksatko, Jan; Gratch, Jonathan; Thalmann, Daniel

Believable Virtual Characters in Human-Computer Dialogs

2011

John, Nigel (Co-Chair) et al.: Eurographics 2011. State of the Art Reports (STARs). Eurographics Association, 2011, pp. 75-100

Eurographics <32, 2011, Llandudno, UK>

For many application areas, where a task is most naturally represented by talking or where standard input devices are difficult to use or not available at all, virtual characters can be well suited as an intuitive man-machine interface due to their inherent ability to simulate verbal as well as nonverbal communicative behavior. This type of interface is made possible with the help of multimodal dialog systems, which extend common speech dialog systems with additional modalities, just like in human-human interaction. Multimodal dialog systems consist at least of an auditive and a graphical component, and communication is based on speech and nonverbal communication alike. However, employing virtual characters as personal and believable dialog partners in multimodal dialogs entails several challenges, because this requires reliable and consistent behavior not only in motion and dialog but also regarding nonverbal communication and affective components. Besides modeling the "mind" and creating intelligent communication behavior on the encoding side, which is an active field of research in artificial intelligence, the visual representation of a character including its perceivable behavior from a decoding perspective, such as facial expressions and gestures, belongs to the domain of computer graphics and likewise implicates many open issues concerning natural communication. Therefore, in this report we give a comprehensive overview of how to go from communication models to actual animation and rendering.

Show publication details

Schwenk, Karsten; Behr, Johannes; Fellner, Dieter W.

CommonVolumeShader: Simple and Portable Specification of Volumetric Light Transport in X3D

2011

ACM SIGGRAPH: Proceedings Web3D 2011 : 16th International Conference on 3D Web Technology. New York: ACM Press, 2011, pp. 39-44

International Conference on 3D Web Technology (WEB3D) <16, 2011, Paris, France>

Rendering volumetric phenomena with believable appearance can add tremendous realism to virtual scenes. We introduce the CommonVolumeShader node, an extension of the X3D standard which has been specifically designed for physically-based rendering of participating media. CommonVolumeShader allows content authors to specify optical properties in a concise and purely declarative way and can accurately capture the appearance of many volumetric phenomena. We demonstrate results with implementations for an interactive ray tracer and a rasterization-based pipeline.

Show publication details

Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Considerations toward a Dynamic Mesh Data Structure

2011

Larsson, Thomas (Ed.) et al.: SIGRAD 2011 : Evaluations of Graphics and Visualization - Efficiency, Usefullness, Accesibility, Usability. Linköping: Linköping University Electronic Press, 2011. (Linköping Electronic Conference Proceedings 65), pp. 83-90

SIGRAD Conference <10, 2011, Stockholm, Sweden>

The use of 3D shapes in different domains such as engineering, entertainment, cultural heritage or medicine is essential for representing 3D physical reality. Regardless of whether the 3D shapes represent physically or digitally born objects, meshes are a versatile and common representation of 3D reality. Nonetheless, the mesh generation process does not always produce quality results; thus incomplete, non-orientable or non-manifold meshes are frequently the input for the domain application. The domain application itself also imposes special requirements: e.g. an engineering simulation requires a volumetric mesh, either tetrahedral or hexahedral, while a cultural heritage color enhancement uses a triangular or quadrangular mesh, or in both cases even hybrid meshes. Moreover, the processes applied to the meshes (e.g. modeling, simulation, visualization) need to support operations such as querying neighboring information or enabling dynamic changes of geometry and topology. These operations need to be robust, so that the neighboring information can be consistently updated during the dynamic changes. Dealing with this mesh diversity usually requires dedicated data structures for performing in the given domain application. This paper compiles the considerations toward designing a data structure for dynamic meshes in a generic and robust manner, regardless of the type and the quality of the input mesh. These aspects enable a flexible representation of 3D shapes toward general-purpose geometry processing for dynamic meshes in 2D and 3D.

Show publication details

Thaller, Wolfgang; Krispel, Ulrich; Havemann, Sven; Redi, Ivan; Redi, Andrea; Fellner, Dieter W.

Developing Parametric Building Models - The Gandis Use Case

2011

Remondino, Fabio (Ed.) et al.: Proceedings of the 4th ISPRS International Workshop 3D-ARCH 2011 : 3D Virtual Reconstruction and Visualization of Complex Architectures [CD-ROM]. (The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVIII-5/W16), 8 pp.

ISPRS International Workshop 3D-ARCH <4, 2011, Trento, Italy>

In the course of a project related to green building design, we have created a group of eight parametric building models that can be manipulated interactively with respect to dimensions, number of floors, and a few other parameters. We report on the commonalities and differences between the models and the abstractions that we were able to identify.

Show publication details

Jung, Yvonne; Fellner, Dieter W. (Betreuer); Stricker, Didier (Betreuer)

Dynamic Aspects of Character Rendering in the Context of Multimodal Dialog Systems

2011

Darmstadt, TU, Diss., 2011

Virtual characters offer great potential as an intuitive man-machine interface, because they also allow simulating non-verbal communicative behavior, which requires the coordinated use of various modalities (e.g., speech and gesture). In this sense, multimodal dialogue systems extend current voice response systems, as known, for instance, from automated support hotlines, to other modalities. While multimodal dialogue systems have been an active research area in artificial intelligence (AI) for over twenty years, further research is still needed in computer graphics (CG). Hence, in this work two basic problems have been identified. On the one hand, there is a gap between the components of AI and CG, which makes it difficult to provide responsive characters in a manageable manner. On the other hand, embedding virtual agents in full 3D applications, particularly in the context of Mixed Reality, still remains problematic. Therefore, in this work a concept for the presentation component of multimodal dialogue systems has been presented, which can be easily integrated into current frameworks for virtual agents. Basically, it consists of a declarative control layer and a declarative execution layer. While the control layer is mainly used for communication with the AI modules, it also provides a declarative language for describing and flexibly controlling communicative behavior. Core technology components are provided in the execution layer. These include a flexible animation system that is integrated into the X3D standard, components for hair simulation and for representing psycho-physiologically caused skin tone changes such as blushing and the simulation of tears, furthermore methods for the declarative control of the virtual camera, and techniques for the realistic visualization of virtual objects in Mixed Reality scenarios. In addition to simplifying the integration into complex 3D applications, the whole environment can thus also be used by the system as another means of communication.

Show publication details

Ullrich, Torsten; Fellner, Dieter W.

Generative Object Definition and Semantic Recognition

2011

Laga, Hamid (Ed.) et al.: Eurographics 2011 Workshop on 3D Object Retrieval : EG 3DOR 2011. Goslar: Eurographics Association, 2011. (Eurographics Workshop and Symposia Proceedings Series), pp. 1-8; 125 (Color plate)

Eurographics Workshop on 3D Object Retrieval (EG 3DOR) <4, 2011, Llandudno, UK>

"What is the difference between a cup and a door?" These kinds of questions have to be answered in the context of digital libraries. This semantic information, which describes an object on a high, abstract level, is needed in order to provide digital library services such as indexing, mark-up and retrieval. In this paper we present a new approach to encode and to extract such semantic information. We use generative modelling techniques to describe a class of objects: each class is represented by one algorithm, and each object is one set of high-level parameters, which reproduces the object if passed to the algorithm. Furthermore, the algorithm is annotated with semantic information, i.e. a human-readable description of the object class it represents. We use such an object description to recognize objects in real-world data, e.g. laser scans. Using an algorithmic object description, we are able to identify 3D subparts which can be described and generated by the algorithm. Furthermore, we can determine the needed input parameters. In this way, we can classify objects, recognize them semantically, and determine their parameters (a cup's height, radius, etc.).

Show publication details

Bein, Matthias; Fellner, Dieter W.; Stork, André

Genetic B-Spline Approximation on Combined B-Reps

2011

The Visual Computer, Vol.27 (2011), 6-8, pp. 485-494

Computer Graphics International (CGI) <29, 2011, Ottawa, Canada>

We present a genetic algorithm for approximating densely sampled curves with uniform cubic B-Splines suitable for Combined B-reps. A feature of this representation is the ability to alter the continuity property of the B-Spline at any knot, allowing freeform curves and polygonal parts to be combined within one representation. Naturally there is a trade-off between different approximation properties like accuracy and the number of control points needed. Our algorithm creates very accurate B-Splines with few control points, as shown in Fig. 1. Since the approximation problem is highly nonlinear, we approach it with genetic methods, leading to better results compared to classical gradient-based methods. Parallelization and adapted evolution strategies are used to create results very fast.

Show publication details

Nazemi, Kawa; Breyer, Matthias; Stab, Christian; Burkhardt, Dirk; Fellner, Dieter W.

Intelligent Exploration System - an Approach for User-centered Exploratory Learning

2011

Tzikopoulos, Argiris (Ed.) et al.: RURALeNTER : Lifelong Learning in Rural and Remote Areas. Pallini: Ellinogermaniki Agogi, 2011, pp. 71-83

Workshop of the EDEN Open Classroom Conference <2011, Pallini - Athens, Greece>

The following paper describes the conceptual design of an Intelligent Exploration System (IES) that offers a user-adapted graphical environment for web-based knowledge repositories, to support and optimize explorative learning. The paper starts with a short definition of learning by exploring and introduces Intelligent Tutoring Systems and Semantic Technologies for developing such an Intelligent Exploration System. The IES itself is described with a short overview of existing learner and user analysis methods, visualization techniques for exploring knowledge with semantic technology, and an explanation of the characteristics of adaptation to offer a more efficient learning environment.

Show publication details

Weber, Daniel; Kalbe, Thomas; Stork, André; Fellner, Dieter W.; Goesele, Michael

Interactive Deformable Models with Quadratic Bases in Bernstein-Bézier-Form

2011

The Visual Computer, Vol.27 (2011), 6-8, pp. 473-483

Computer Graphics International (CGI) <29, 2011, Ottawa, Canada>

We present a physically based interactive simulation technique for deformable objects. Our method models the geometry as well as the displacements using quadratic basis functions in Bernstein-Bézier form on a tetrahedral finite element mesh. The Bernstein-Bézier formulation yields significant advantages compared to approaches using the monomial form. The implementation is simplified, as spatial derivatives and integrals of the displacement field are obtained analytically, avoiding the need for numerical evaluations of the elements' stiffness matrices. We introduce a novel traversal accounting for adjacency in order to accelerate the reconstruction of the global matrices. We show that our proposed method can compensate for the additional effort introduced by the co-rotational formulation to a large extent. We validate our approach on several models and demonstrate new levels of accuracy and performance in comparison to the current state of the art.

Show publication details

Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Interactive Exploration of Design Variations

2011

International Association for the Engineering Analysis Community (NAFEMS): A World of Engineering Simulation: Industrial Needs, Best Practice, Visions for the Future : NWC 2011. NAFEMS World Congress [Book of Abstracts & CD-ROM]. Glasgow: NAFEMS, 2011, 18 p.

International Congress on Simulation Technology for the Engineering Analysis Community (NWC) <13, 2011, Boston, USA>

The digital exploration of design variations is a key procedure in the embodiment phase of engineering design, in order to efficiently develop optimal solutions. This procedure requires the combination of modeling and simulation capabilities, enabling the engineer to assess the physical and functional behaviors of the proposed solution. Nowadays, this procedure is performed by iterating between designers and analysts with their corresponding tools, demanding reciprocal understanding between them. This is nonetheless a very time-consuming activity with the currently available tools and technology. Even advanced Computer Aided Design (CAD) systems, which can cope with almost any modeling requirement and which presently provide direct connection (i.e. meshing) to analysis modules for models with limited complexity, cannot deal with the interactive exploration of design variations. Moreover, the promising isogeometric analysis, which aims to simulate 3D NURBS representations, also requires special transformations (i.e. meshing), which do not allow for interactive exploration of design variations. On the other hand, Computer Aided Engineering (CAE) systems offering morphing support are only able to explore restricted variations, since large variations or deformations of the model involve expensive remeshing processes.
In order to overcome the above-mentioned issues and to enable a fully interactive exploration of design variations within an analysis environment, we enhance the simulation model with a high-level representation for interacting with semantic features rather than with single elements, we combine morphing techniques with local mesh modification for preserving the stability of the numerical model during large variations, and we decouple the storage of the linear system entries and the sequential matrix-vector multiplication for obtaining the solution, in order to permit the update of local matrix entries representing the local mesh modifications without rebuilding the entire system. Our methodology allows engineers to independently and interactively explore conceptual design variations without restrictions. Hence, the influence of different design features can be investigated and evaluated easily and quickly, and optimal solutions for the design requirements can be developed.

Show publication details

Fellner, Dieter W.; Baier, Konrad; Klingelmeyer, Melanie; Bornemann, Heidrun; Mentel, Katrin

Jahresbericht 2010: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2011

Darmstadt, 2011

Show publication details

Ullrich, Torsten; Fellner, Dieter W.

Linear Algorithms in Sublinear Time - a Tutorial on Statistical Estimation

2011

IEEE Computer Graphics and Applications, Vol.31 (2011), 2, pp. 58-66

This tutorial presents probability theory techniques for boosting linear algorithms. The approach is based on statistics and uses educated guesses instead of comprehensive calculations. Because estimates can be calculated in sublinear time, many algorithms can benefit from statistical estimation. Several examples show how to significantly boost linear algorithms without negative effects on their results. These examples involve a Ransac algorithm, an image-processing algorithm, and a geometrical reconstruction. The approach exploits that, in many cases, the amount of information in a dataset increases asymptotically sublinearly if its size or sampling density increases. Conversely, an algorithm with expected sublinear running time can extract the most information.
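The principle - trading a full pass over the data for a random sample whose accuracy is governed by the sample size rather than the dataset size - can be sketched in a few lines (our own illustration, not the tutorial's code; function names and sizes are assumptions):

```python
import random

def estimate_mean(data, sample_size=1000, seed=0):
    """Estimate the mean of a large dataset from a random sample.

    Runs in O(sample_size) rather than O(len(data)): an 'educated
    guess' whose error shrinks like 1/sqrt(sample_size), regardless
    of how large the dataset grows.
    """
    rng = random.Random(seed)
    sample = [data[rng.randrange(len(data))] for _ in range(sample_size)]
    return sum(sample) / len(sample)

# One million uniformly spaced values whose exact mean is 0.5.
data = [i / 999_999 for i in range(1_000_000)]
approx = estimate_mean(data)   # inspects only 1000 of the 10^6 items
```

The same pattern underlies the article's examples: any algorithm whose result depends on an aggregate over the data (a vote count in RANSAC, a pixel statistic, a fitting error) can substitute such a sampled estimate for the exhaustive computation.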

Show publication details

Breuel, Frank; Berndt, Rene; Ullrich, Torsten; Eggeling, Eva; Fellner, Dieter W.

Mate in 3D - Publishing Interactive Content in PDF3D

2011

Tonta, Yasar (Ed.) et al.: Digital Publishing and Mobile Technologies : 15th International Conference on Electronic Publishing [online]. Ankara: Hacettepe University Department of Information Management, 2011, pp. 110-119

International Conference on Electronic Publishing (ELPUB) <15, 2011, Istanbul, Turkey>

In this paper we describe a pipeline for publishing interactive multimedia content. The Portable Document Format (PDF) offers the possibility to include 3D visualizations, textual representation and interactivity (via scripting technology) in one multimedia container, which will be the upcoming standard for multimedia long-term archiving. Our system demonstrates this potential for value-added eBooks. Using the example of chess, we developed a publishing pipeline to create interactive books. Usually, chess games and positions are recorded using the algebraic chess notation, which is mainly an annotated list of moves. In combination with a time-dependent 3D visualization, each move corresponds to a specific game position. This correspondence is encoded in hyperlinks from the textual representation to the 3D visualization. This linkage improves the readability and usability of chess notations significantly. Furthermore, since we use an established file format, our eBooks can be opened by any compliant PDF viewer.

Show publication details

Webel, Sabine; Fellner, Dieter W. (Betreuer); Hirzinger, G. (Betreuer)

Multimodal Training of Maintenance and Assembly Skills Based on Augmented Reality

2011

Darmstadt, TU, Diss., 2011

The training of technicians in the acquisition of new maintenance and assembly tasks is an important factor in industry. As the complexity of these tasks can be enormous, the training of technicians to acquire the necessary skills to perform them efficiently is a challenging point. However, traditional training programs are usually highly theoretical and it is difficult for the trainees to transfer the acquired theoretical knowledge about the task to the real task conditions, or rather, to the physical performance of the task. In addition, traditional training programs are often expensive in terms of effort and cost. Previous research has shown that Augmented Reality is a powerful technology to support training in the particular context of industrial service procedures, since instructions on how to perform the service tasks can be directly linked to the machine parts to be processed. Various approaches exist, in which the trainee is guided step-by-step through the maintenance task, but these systems act more as guiding systems than as training systems and focus only on the trainees' sensorimotor capabilities. Due to the increasing complexity of maintenance tasks, it is not sufficient to train the technicians' execution of these tasks, but rather to train the underlying skills - sensorimotor and cognitive - that are necessary for an efficient acquisition and performance of new maintenance operations. All these facts lead to the need for efficient training systems for the training of maintenance and assembly skills which accelerate the technicians' learning and acquisition of new maintenance procedures. Furthermore, these systems should improve the adjustment of the training process to new training scenarios and enable the reuse of existing training material that has proven its worth. In this thesis a novel concept and platform for multimodal Augmented Reality-based training of maintenance and assembly skills is presented. 
This concept includes the identification of necessary sub-skills, the training of the involved skills, and the design of a training program for the training of maintenance and assembly skills. Since procedural skills are considered the most important skills for maintenance and assembly operations, they are discussed in detail, as well as appropriate methods for improving them. We further show that the application of Augmented Reality technologies and the provision of multimodal feedback - and vibrotactile feedback in particular - have great potential to enhance skill training in general. As a result, training strategies and specific accelerators for the training of maintenance and assembly skills in general and procedural skills in particular are elaborated. Here, accelerators are concrete methods used to implement the pursued training strategies. Furthermore, a novel concept for displaying location-dependent information in Augmented Reality environments is introduced, which can compensate for tracking imprecisions. In this concept, the pointer-content metaphor of annotating documents is transferred to Augmented Reality environments. As a result, Adaptive Visual Aids are defined, which consist of a tracking-dependent pointer object and a tracking-independent content object, both providing an adaptable level and type of information. Thus, the guidance level of Augmented Reality overlays in AR-based training applications can be easily controlled. Adaptive Visual Aids can be used to substitute traditional Augmented Reality overlays (i.e. overlays in the form of 3D animations), which suffer greatly from tracking inaccuracies. The design of the multimodal AR-based training platform proposed in this thesis is not specific to the training of maintenance and assembly skills, but is a general design approach for multimodal training platforms. 
We further present an implementation of this platform based on the X3D ISO standard, which provides features that are useful for the development of Augmented Reality environments. This standards-based implementation increases the sustainability and portability of the platform. The implemented multimodal Augmented Reality-based platform for the training of maintenance and assembly skills has been evaluated in industry and compared to traditional training methods. The results show that the developed training platform and the pursued training strategies are very well suited for the training of maintenance and assembly skills and enhance traditional training. With the presented framework we have overcome the problems sketched above: our approach is inexpensive in terms of the effort and cost of training maintenance and assembly skills, and it improves training efficiency compared with traditional methods.

Show publication details

Nazemi, Kawa; Burkhardt, Dirk; Stab, Christian; Breyer, Matthias; Wichert, Reiner; Fellner, Dieter W.

Natural Gesture Interaction with Accelerometer-based Devices in Ambient Assisted Environments

2011

Wichert, Reiner (Ed.) et al.: Ambient Assisted Living : 4. AAL-Kongress 2011. Springer Science+Business Media, 2011. (Advanced Technologies and Societal Change), pp. 75-90

Ambient Assisted Living (AAL) <4, 2011, Berlin, Germany>

Modern interaction methods and devices provide a more natural and intuitive interaction. Currently, mobile phones and game consoles that support such gesture-based interaction sell particularly well, which indicates that these devices are no longer bought only by technically experienced consumers. Interaction with them has become so easy that older people also play or work with them. Older people in particular often have handicaps: it is hard for them to read small text, such as the labels on the buttons of television remote controls, and larger technical systems quickly overstrain them and are therefore of little help. If it is possible to interact with gestures, these problems can be avoided. To allow intuitive and easy gesture interaction, however, gestures that are easy to understand have to be supported. For this reason, in this paper we attempt to identify intuitive gestures for common interaction scenarios on computer-based systems for use in ambient assisted environments. In this evaluation, the users contribute their preferred intuitive gestures for different presented scenarios/tasks. Based on these results, intuitively usable systems can be developed, so that users are able to communicate with technical systems on a more intuitive level with accelerometer-based devices.

Show publication details

Burkhardt, Dirk; Nazemi, Kawa; Stab, Christian; Breyer, Matthias; Wichert, Reiner; Fellner, Dieter W.

Natürliche Gesteninteraktion mit Beschleunigungssensorbasierten Eingabegeräten in unterstützenden Umgebungen

2011

Verband der Elektrotechnik Elektronik Informationstechnik (VDE): Ambient Assisted Living : 4. Deutscher AAL-Kongress mit Ausstellung. Demographischer Wandel-Assistenzsysteme aus der Forschung in den Markt [CD-ROM]. Berlin u.a.: VDE-Verl., 2011, 10 pp. ; Paper 5.3

Ambient Assisted Living (AAL) <4, 2011, Berlin, Germany>

The use of modern interaction methods and devices enables a more natural and intuitive interaction. Currently, only those smartphones and game consoles that support gesture-based interaction sell in large numbers, which suggests that such devices are no longer bought only by technically experienced consumers. Interaction with these devices is so simple that older people often play or work with them as well. Older people in particular frequently have handicaps; for example, they often have trouble reading small text such as the labels printed on remote controls. They also tend to be quickly overwhelmed, so that larger technical systems in particular are of no help to them. If devices can be controlled with gestures, these problems can often be avoided. To enable intuitive and easy gesture interaction, however, correspondingly understandable and comprehensible gestures must be supported. For this reason, in this paper we attempt to identify intuitive gestures for common interaction scenarios on computer-based systems for use in assistive environments. As part of the evaluation, the participants contribute their preferred gestures for the various interaction scenarios. Based on the results, an intuitively usable system employing an accelerometer-based device can later be developed, with which users can communicate in an intuitive way.

Show publication details

Kalbe, Thomas; Fellner, Dieter W. (Betreuer); Theisel, Holger (Betreuer)

New Models for High-quality Surface Reconstruction and Rendering

2011

Darmstadt, TU, Diss., 2011

The efficient reconstruction and artifact-free visualization of surfaces from measured real-world data is an important issue in various applications, such as medical and scientific visualization, quality control, and the media-related industry. The main contribution of this thesis is the development of the first efficient GPU-based reconstruction and visualization methods using trivariate splines, i.e., splines defined on tetrahedral partitions. Our methods show that these models are very well suited for real-time reconstruction and high-quality visualizations of surfaces from volume data. We create a new quasi-interpolating operator which, for the first time, solves the problem of finding a globally C¹-smooth quadratic spline approximating the data without any tetrahedra needing to be further subdivided. In addition, we devise a new projection method for point sets arising from a sufficiently dense sampling of objects. Compared with existing approaches, high-quality surface triangulations can be generated with guaranteed numerical stability.

Show publication details

Fellner, Dieter W.; Havemann, Sven; Beckmann, Philipp; Pan, Xueming

Practical 3D Reconstruction of Cultural Heritage Artefacts from Photographs - Potentials and Issues

2011

VAR. Virtual Archaeology Review [online], Vol.2 (2011), 4, pp. 95-103. [cited 30 November 2011] Available from: http://www.varjournal.es/doc/varj02_004_09.pdf

International Meeting on Graphic Archeology and Informatics, Cultural Heritage and Innovation (Arqueológica 2.0) <2, 2010, Sevilla, Spain>

A new technology is on the rise that allows the 3D-reconstruction of Cultural Heritage objects from image sequences taken by ordinary digital cameras. We describe the first experiments we made as early adopters in a community-funded research project whose goal is to develop it into a standard CH technology. The paper describes in detail a step-by-step procedure that can be reproduced using free tools by any CH professional. We also give a critical assessment of the workflow and describe several ideas for developing it further into an automatic procedure for 3D reconstruction from images.

Show publication details

Weber, Daniel; Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Rapid CFD for the Early Conceptual Design Phase

2011

International Association for the Engineering Analysis Community (NAFEMS): The Integration of CFD into the Product Development Process : Seminar. Glasgow: NAFEMS, 2011, 9 p.

Seminar the Integration of CFD into the Product Development Process <2011, Wiesbaden, Germany>

An important step in product development is the optimization of a component's physical behavior, which is usually done in a costly iterative process. Besides the modification, simplification, and (re-)meshing of the component's geometry, simulating its behavior can take hours or even days. In the early conceptual design phase, different material properties and shapes need to be tested and compared in order to optimally design the component. Nonetheless, time-consuming simulations limit the realm of possibilities. We have developed a framework enabling rapid Computational Fluid Dynamics (CFD) for the early conceptual design phase. To achieve this, we combine the computation and visualization of 2D fluid flow in real time with the modification of fluid parameters, boundary conditions, and geometry. This allows for the rapid assessment and analysis of different shapes and therefore the optimization of the component. Our framework is completely based on graphics processing units (GPUs), i.e., all computations are performed on the GPU, avoiding costly memory transfers between graphics hardware and CPU memory. The computations are performed on a single desktop PC, so the simulation results can reside in GPU memory and can be visualized directly. B-Spline curves are used for modelling the geometry, and the user can interactively modify it by inserting and moving control points or applying local smooth deformations, with the corresponding rapid update of the discretization on the GPU. Computing a single time step takes fractions of a second, even if the fluid flow is modelled with about one million degrees of freedom. The fast geometric manipulation combined with the direct visualization of quantities like the velocity or pressure field allows for immediate feedback on shape or parameter changes. 
Although fast simulations do not yet achieve the high precision of conventional simulations, their results are suitable for analyzing trends.
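The paper's GPU framework is not reproduced here. As a rough CPU sketch of one typical building block of such interactive fluid solvers, the following implicit diffusion step solved with Jacobi iterations (grid size, viscosity, and iteration count are illustrative, not taken from the paper) shows why a single time step can be computed quickly:

```python
import numpy as np

def diffuse(field, nu, dt, iters=40):
    """One implicit diffusion step solved with Jacobi iterations, a common
    building block of interactive fluid solvers (CPU sketch; an interactive
    framework would run such per-cell updates in parallel on the GPU)."""
    a = nu * dt
    x = field.copy()
    for _ in range(iters):
        # Each interior cell is updated from its four neighbours; boundary
        # cells keep their original values.
        x[1:-1, 1:-1] = (field[1:-1, 1:-1] + a * (x[:-2, 1:-1] + x[2:, 1:-1]
                         + x[1:-1, :-2] + x[1:-1, 2:])) / (1.0 + 4.0 * a)
    return x

# A point of concentration spreads out to its neighbours after one step.
u = np.zeros((64, 64))
u[32, 32] = 1.0
u2 = diffuse(u, nu=0.1, dt=1.0)
```

The implicit formulation stays stable for large time steps, which is what makes interactive rates feasible in the first place.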

Show publication details

Schiffer, Thomas; Schinko, Christoph; Ullrich, Torsten; Fellner, Dieter W.

Real-World Geometry and Generative Knowledge

2011

ERCIM News, (2011), 86, pp. 15-16

The current methods of describing the shape of three-dimensional objects can be classified into two groups: composition of primitives and procedural description. As a 3D acquisition device simply returns an agglomeration of elementary objects (e.g., a laser scanner returns points), a real-world data set is always a - more or less noisy - composition of primitives. A generative model, on the other hand, describes an ideal object rather than a real one. Owing to this abstract view of an object, generative techniques are often used to describe objects semantically. Consequently, generative models, rather than being a replacement for established geometry descriptions (based on points, triangles, etc.), offer a sensible semantic enrichment.

Show publication details

Ullrich, Torsten; Fellner, Dieter W. (Betreuer); Klein, Reinhard (Betreuer)

Reconstructive Geometry

2011

Graz, TU, Diss., 2011

This thesis, "Reconstructive Geometry", presents a new collision detection algorithm, a novel approach to generative modeling, and an innovative shape recognition technique. All these contributions are centred around the questions "how to combine acquisition data with generative model descriptions" and "how to perform this combination efficiently". Acquisition data - such as point clouds and triangle meshes - are created, e.g., by a 3D scanner or a photogrammetric process. They can describe a shape's geometry very well, but do not contain any semantic information. With generative descriptions it is the other way round: a procedure describes a rather ideal object and its construction process. This thesis builds a bridge between both types of geometry descriptions and combines them into a semantic unit. An innovative shape recognition technique, presented in this thesis, determines whether a digitized real-world object might have been created by a given generative description, and if so, it identifies the high-level parameters that have been passed to the generative script. Such a generative script is a simple JavaScript function. Using the generative modeling compiler "Euclides", the function can be understood in a mathematical sense; i.e., it can be differentiated with respect to its input parameters, it can be embedded into an objective function, and it can be optimized using standard numerical analysis. This approach offers a wide range of applications for generative modeling techniques; parameters do not have to be set manually - they can be set automatically according to a reasonable objective function. In the case of shape recognition, the objective function is distance-based and measures the similarity of two objects. The techniques that are used to perform this task efficiently (space partitioning, hierarchical structures, etc.) are the same as in collision detection, where the question whether two objects have distance zero is answered. 
To sum up, distance functions and distance calculations are a main part of this thesis, along with their application in geometric object descriptions, semantic enrichment, numerical analysis, and other areas.
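The thesis' actual generative scripts are JavaScript functions processed by Euclides; as an illustrative stand-in, the following Python sketch shows the underlying idea of fitting a generative parameter with a distance-based objective and standard numerical analysis. The circle "script", the objective, and all parameters are hypothetical and chosen only to make the mechanism concrete:

```python
import numpy as np

def generative_circle(radius, samples=200):
    # Toy "generative script": one high-level parameter produces a shape.
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return np.column_stack((radius * np.cos(t), radius * np.sin(t)))

def distance_objective(radius, scan):
    # Distance-based similarity: mean distance from scan points to the model.
    model = generative_circle(radius)
    d = np.linalg.norm(scan[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1).mean()

def fit_parameter(scan, lo=0.1, hi=10.0, iters=60):
    # Golden-section search: standard derivative-free numerical optimization.
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if distance_objective(c, scan) < distance_objective(d, scan):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

# "Scan" of a circle of radius 3 with a little noise; recover the parameter.
rng = np.random.default_rng(0)
scan = generative_circle(3.0) + rng.normal(0.0, 0.01, (200, 2))
radius = fit_parameter(scan)
```

In the thesis the objective can additionally be differentiated with respect to the script's parameters, which opens the door to gradient-based optimizers rather than the derivative-free search sketched here.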

Show publication details

Schinko, Christoph; Strobl, Martin; Ullrich, Torsten; Fellner, Dieter W.

Scripting Technology for Generative Modeling

2011

International Journal on Advances in Software, Vol.4 (2011), 3-4, pp. 308-326

In the context of computer graphics, a generative model is the description of a three-dimensional shape: each class of objects is represented by one algorithm M. Furthermore, each described object is a set of high-level parameters x, which reproduces the object if an interpreter evaluates M(x). This procedural knowledge differs from other kinds of knowledge, such as declarative knowledge, in a significant way: generative models are designed by programming. In order to make generative modeling accessible to non-computer scientists, we created a generative modeling framework based on the easy-to-use scripting language JavaScript (JS). Furthermore, we did not implement yet another interpreter, but a JS translator and compiler. As a consequence, our framework can translate generative models from JavaScript to various platforms. In this paper we present an overview of Euclides and quintessential examples of supported platforms: Java, Differential Java, and GML. Java is a target language because all frontend and framework components are written in Java, making it easier to embed them in an integrated development environment. The Differential Java backend can compute derivatives of functions, which is a necessary task in many applications of scientific computing, e.g., validating reconstruction and fitting results of laser-scanned surfaces. The postfix notation of GML is very similar to that of Adobe's PostScript. It allows the creation of high-level shape operators from low-level shape operators. The GML serves as a platform for a number of applications because it is extensible and comes with an integrated visualization engine. This innovative meta-modeler concept allows a user to export generative models to other platforms without losing their main feature - the procedural paradigm. In contrast to other modelers, the source code does not need to be interpreted or unfolded; it is translated. Therefore, it can still be a very compact representation of a complex model.

Show publication details

Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2010

2011

Darmstadt : Fraunhofer IGD, 2011

Selected Readings in Computer Graphics 21

The Fraunhofer Institute for Computer Graphics Research IGD, with offices in Darmstadt as well as in Rostock, Singapore, and Graz, cooperates closely in projects and in research and development in the field of Computer Graphics with the partner institutes at the respective universities: the Interactive Graphics Systems Group of Technische Universität Darmstadt, the Computer Graphics and Communication Group of the Institute of Computer Science at Rostock University, Nanyang Technological University (NTU), Singapore, and the Visual Computing Cluster of Excellence of Graz University of Technology. The "Selected Readings in Computer Graphics 2010" consist of 45 articles selected from a total of 186 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences, and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2010. They are published by Professor Dieter W. Fellner, director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, who is also professor at the Department of Computer Science at Technische Universität Darmstadt and professor at the Faculty of Computer Science at Graz University of Technology.

Show publication details

Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2010. CD-ROM

2011

Darmstadt : Fraunhofer IGD, 2011

Selected Readings in Computer Graphics 21

The Fraunhofer Institute for Computer Graphics Research IGD, with offices in Darmstadt as well as in Rostock, Singapore, and Graz, cooperates closely in projects and in research and development in the field of Computer Graphics with the partner institutes at the respective universities: the Interactive Graphics Systems Group of Technische Universität Darmstadt, the Computer Graphics and Communication Group of the Institute of Computer Science at Rostock University, Nanyang Technological University (NTU), Singapore, and the Visual Computing Cluster of Excellence of Graz University of Technology. The "Selected Readings in Computer Graphics 2010" consist of 45 articles selected from a total of 186 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences, and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2010. They are published by Professor Dieter W. Fellner, director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, who is also professor at the Department of Computer Science at Technische Universität Darmstadt and professor at the Faculty of Computer Science at Graz University of Technology.

Show publication details

Schinko, Christoph; Ullrich, Torsten; Fellner, Dieter W.

Simple and Efficient Normal Encoding with Error Bounds

2011

Grimstead, Ian (Ed.) et al.: Theory and Practice of Computer Graphics 2011 : Eurographics UK Chapter Proceedings. Goslar: Eurographics Association, 2011, pp. 63-65

Theory and Practice of Computer Graphics (TPCG) <9, 2011, Warwick, UK>

Normal maps and bump maps are commonly used techniques to make 3D scenes more realistic. Consequently, the efficient storage of normal vectors is an important task in computer graphics. This work presents a fast, lossy compression/decompression algorithm for arbitrary resolutions. The complete source code is listed in the appendix and is ready to use.
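The paper's specific encoding is not reproduced here. As a hedged illustration of the general idea (lossy compression of unit normals with a resolution-dependent error bound), the following sketch quantizes the two spherical angles of a normal at an arbitrary bit resolution; the bit count and test vector are made up:

```python
import numpy as np

def encode_normal(n, bits=16):
    # Map a unit normal to two quantized spherical angles.
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(n[1], n[0])                  # azimuth in (-pi, pi]
    q = (1 << bits) - 1
    return (round(theta / np.pi * q), round((phi + np.pi) / (2.0 * np.pi) * q))

def decode_normal(code, bits=16):
    # Reconstruct a unit vector from the quantized angles.
    q = (1 << bits) - 1
    theta = code[0] / q * np.pi
    phi = code[1] / q * 2.0 * np.pi - np.pi
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# The worst-case angular error is bounded by the angular quantization step,
# so the resolution (bits) directly controls the error bound.
n = np.array([0.267, 0.535, 0.802])
n /= np.linalg.norm(n)
n2 = decode_normal(encode_normal(n))
err = np.degrees(np.arccos(np.clip(np.dot(n, n2), -1.0, 1.0)))
```

Each normal compresses to 2 × 16 bits instead of three floats, and decoding always yields an exactly unit-length vector.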

Show publication details

Lancelle, Marcel; Fellner, Dieter W.

Smooth Transitions for Large Scale Changes in Multi-Resolution Images

2011

Eisert, Peter (Ed.) et al.: VMV 2011 : Vision, Modeling, and Visualization. Goslar: Eurographics Association, 2011, pp. 81-87

Vision, Modeling, and Visualization Workshop (VMV) <16, 2011, Berlin, Germany>

Today's super zoom cameras offer a large optical zoom range of over 30x. It is easy to take a wide-angle photograph of a scene together with a few zoomed-in high-resolution crops. However, little work has been done on appropriately displaying the high-resolution photo as an inset. Usually, alpha blending is used to hide the resolution transition, but visible transition boundaries or ghosting artefacts may result. In this paper we introduce a different, novel approach to overcome these problems: across the transition, we gradually attenuate the maximum image frequency. We achieve this with a Gaussian blur with an exponentially increasing standard deviation.
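As a 1D illustration of the stated idea (the paper operates on 2D multi-resolution images; the schedule endpoints and signal below are made up), a Gaussian blur whose standard deviation grows exponentially across the transition gradually attenuates the maximum signal frequency:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def transition_blur(signal, sigma0=0.3, sigma1=8.0):
    """Blur a 1D signal with a spatially varying Gaussian whose standard
    deviation grows exponentially from sigma0 to sigma1 across the signal
    (hypothetical endpoints), so detail fades out gradually."""
    n = len(signal)
    out = np.empty(n)
    growth = np.log(sigma1 / sigma0)
    for i in range(n):
        sigma = sigma0 * np.exp(growth * i / (n - 1))  # exponential schedule
        radius = max(1, int(3 * sigma))
        k = gaussian_kernel(sigma, radius)
        # Clamp the window at the borders by repeating edge samples.
        idx = np.clip(np.arange(i - radius, i + radius + 1), 0, n - 1)
        out[i] = signal[idx] @ k
    return out

# High-frequency detail survives at the sharp end and vanishes at the blurred end.
x = np.linspace(0.0, 1.0, 256)
detail = np.sin(80.0 * np.pi * x)
smooth = transition_blur(detail)
```

Because the cut-off frequency of a Gaussian falls with 1/sigma, an exponential sigma schedule ramps the resolution down smoothly instead of switching it at a visible boundary.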

Show publication details

Lancelle, Marcel; Fellner, Dieter W.

Soft Edge and Soft Corner Blending

2011

Bohn, Christian-Arved (Ed.) et al.: Virtuelle und Erweiterte Realität : 8. Workshop der GI-Fachgruppe VR/AR. Aachen: Shaker, 2011. (Berichte aus der Informatik), 9 p.

Workshop der GI-Fachgruppe VR/AR: Virtuelle und Erweiterte Realität <8, 2011, Aachen, Germany>

We address artifacts at corners in soft edge blend masks for tiled projector arrays. We compare existing and novel modifications of the commonly used weighting function and analyze the first order discontinuities of the resulting blend masks. In practice, e.g. when the projector lamps are not equally bright or with rear projection screens, these discontinuities may lead to visible artifacts. By using first order continuous weighting functions, we achieve significantly smoother results compared to commonly used blend masks.
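The paper's concrete weighting functions are not reproduced here; a common example of a first-order continuous blend weight is the Hermite "smoothstep" polynomial, sketched below next to the usual linear ramp for comparison:

```python
def linear_weight(t):
    # Commonly used blend weight: C0 continuous, but its slope jumps
    # at t = 0 and t = 1 (first-order discontinuities).
    return max(0.0, min(1.0, t))

def smooth_weight(t):
    # Hermite "smoothstep" 3t^2 - 2t^3: C1 continuous, its derivative
    # vanishes at both ends, so blend masks join without slope kinks.
    # Complementary weights still sum to one across the overlap.
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def derivative(f, t, h=1e-6):
    # Central-difference estimate of the slope of a weighting function.
    return (f(t + h) - f(t - h)) / (2.0 * h)

# Near the edge of the blend region the linear mask's slope drops from 1 to 0,
# while the smoothstep mask fades in and out with zero slope.
kink = derivative(linear_weight, 1.0 - 1e-3) - derivative(linear_weight, 1.0 + 1e-3)
```

Exactly such slope discontinuities are what becomes visible when projector lamps differ in brightness, which is why first-order continuous weighting functions give smoother results.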

Show publication details

Hecher, Martin; Möstl, Robert; Eggeling, Eva; Derler, Christian; Fellner, Dieter W.

Tangible Culture - Designing virtual Exhibitions on Multi-Touch Devices

2011

ERCIM News, (2011), 86, pp. 21-22

Cultural heritage institutions such as galleries, museums, and libraries increasingly use digital media to present artifacts to their audience and enable them to immerse themselves in a cultural virtual world. With the application eXhibition:editor3D, museum curators and editors have a software tool at hand to interactively plan and visualize exhibitions. The software runs on standard PCs as well as on multi-touch devices, which allow a user to position exhibition objects with intuitive gestures. Furthermore, multi-touch technology supports the integration of collaborative work into a decision-making process.

Show publication details

Havemann, Sven; Fellner, Dieter W.

Towards a New Shape Description Paradigm Using the Generative Modeling Language

2011

Calude, Cristian S. (Ed.) et al.: Rainbow of Computer Science : Dedicated to Hermann Maurer on the Occasion of His 70th Birthday. Berlin, Heidelberg, New York: Springer, 2011, pp. 200-214

A procedural description of a three-dimensional shape has undeniable advantages over conventional descriptions that are all based on the exhaustive enumeration paradigm. Although it is a true generalization, a procedural description of a given shape class is not always easy to obtain. The main problem is that procedural descriptions are typically Turing-complete, which makes 3D shape design formally (and practically) a programming task. We describe an approach that circumvents this problem, is efficient, extensible, and conceptually simple. We demonstrate the broad applicability with a number of examples from different domains and sketch possible future applications. But we also discuss some practical and theoretical limitations of the generative paradigm.

Show publication details

Ullrich, Torsten; Schiffer, Thomas; Schinko, Christoph; Fellner, Dieter W.

Variance Analysis and Comparison in Computer-Aided Design

2011

Remondino, Fabio (Ed.) et al.: Proceedings of the 4th ISPRS International Workshop 3D-ARCH 2011 : 3D Virtual Reconstruction and Visualization of Complex Architectures [CD-ROM]. (The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVIII-5/W16), 5 p.

ISPRS International Workshop 3D-ARCH <4, 2011, Trento, Italy>

The need to analyze and visualize differences between very similar objects arises in many research areas: mesh compression, scan alignment, nominal/actual value comparison, quality management, and surface reconstruction, to name a few. In computer graphics, for example, differences of surfaces are used for analyzing mesh processing algorithms such as mesh compression. They are also used to validate reconstruction and fitting results of laser-scanned surfaces. As laser scanning has become very important for the acquisition and preservation of artifacts, scanned representations are used for documentation as well as analysis of ancient objects. Detailed mesh comparisons can reveal the smallest changes and damages. These analysis and documentation tasks are needed not only in the context of cultural heritage but also in engineering and manufacturing, where differences of surfaces are analyzed to check production quality. Our contribution to this problem is a workflow which compares a reference (nominal) surface with an actual, laser-scanned data set. The reference surface is a procedural model whose accuracy and systematics describe the semantic properties of an object, whereas the laser-scanned object is a real-world data set without any additional semantic information.

Show publication details

Landesberger, Tatiana von; Kuijper, Arjan; Schreck, Tobias; Kohlhammer, Jörn; van Wijk, Jarke; Fekete, Jean-Daniel; Fellner, Dieter W.

Visual Analysis of Large Graphs: State-of-the-Art and Future Research Challenges

2011

Computer Graphics Forum, Vol.30 (2011), 6, pp. 1719-1749

The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand, and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques cover techniques that had been introduced until 2000 or concentrate only on graph layouts published until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever-growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process. We also present the main open research challenges in this field.

Show publication details

Lancelle, Marcel; Fellner, Dieter W. (Betreuer); Havemann, Sven (Betreuer)

Visual Computing in Virtual Environments

2011

Graz, TU, Diss., 2011

This thesis covers research on new and alternative ways of interacting with computers. Virtual Reality and multi-touch setups are discussed with a focus on three-dimensional rendering and photographic applications in the field of Computer Graphics. Virtual Reality (VR) and Virtual Environments (VE) were once thought to be the future interface to computers. However, many problems prevent everyday use. This work shows solutions to some of these problems and discusses remaining issues. Hardware for Virtual Reality is diverse, and many new devices are still being developed. An overview of historic and current devices and VE setups is given, and our setups are described. The DAVE, an immersive projection room, and the HEyeWall Graz, a large high-resolution display with multi-touch input, are presented. Available processing power and, in some areas, rapidly decreasing prices lead to a continuous change in the best choice of hardware, a choice strongly influenced by the application. VR and multi-touch setups often require sensing or tracking the user, with optical tracking being a common choice. The hardware and software of an optical 3D marker tracking system and an optical multi-touch system are explained. The Davelib, a software framework for rendering 3D models in Virtual Environments, is presented. It makes it easy to port existing 3D applications to immersive setups with stereoscopic rendering and head tracking. Display calibration and rendering issues that are specific to VR setups are explained. User interfaces for navigation and manipulation are described, focusing on interaction techniques for the DAVE and for multi-touch screens. Intuitive methods are shown that are easy to learn and use, even for computer illiterates. Exemplary applications demonstrate the potential of immersive and non-immersive setups, showing which applications can benefit most from Virtual Environments. 
In addition, some image-processing applications in the area of computational photography are explained that help to better depict the captured scene.

Show publication details

Bhatti, Nadeem; Fellner, Dieter W. (Betreuer); Schreck, Tobias (Betreuer)

Visual Semantic Analysis to Support Semi Automatic Modeling of Service Descriptions

2011

Darmstadt, TU, Diss., 2011

A new trend Web service ecosystems for Service-Oriented Architectures (SOAs) and Web services is emerging. Services can be offered and traded like products in these ecosystems. The explicit formalization of services' non-functional parameters, e.g. price plans and legal aspects, as Service Descriptions (SD) is one of the main challenges to establish such Web service ecosystems. The manual modeling of Service Descriptions (SDs) is a tedious and cumbersome task. In this thesis, we introduce the innovative approach Visual Semantic Analysis (VSA) to support semi-automatic modeling of service descriptions in Web service ecosystems. This approach combines the semantic analysis and interactive visualization techniques to support the analysis, modeling, and reanalysis of services in an iterative loop. For example, service providers can analyze first the price plans of the already existing services and extract semantic information from them (e.g. cheapest offers and functionalities). Then they can reuse the extracted semantics to model the price plans of their new services. Afterwards, they can reanalyze the new modeled price plans with the already existing services to check their market competitiveness in Web service ecosystems. The experts from different domains, e.g. service engineers, SD modeling experts, and price plan experts, were interviewed in a study to identify the requirements for the VSA approach. These requirements cover aspects related to the analysis of already exiting services and reuse of the analysis results to model new services. Based on the user requirements, we establish a generic process model for the Visual Semantic Analysis. It defines sub processes and transitions between them. Additionally, the technologies used and the data processed in these sub processes are also described. We present also the formal specification of this generic process model that serves as a basis for the conceptual framework of the VSA. 
The conceptual framework of the VSA elucidates the structure and behavior of the Visual Semantic Analysis system. It also specifies the components of the VSA system and the interaction between them. Additionally, we present the external interface of the VSA system for communication with Web service ecosystems. Finally, we present the results of a user study conducted by means of the VSA system, which was developed on the basis of the VSA conceptual framework. The results of this user study show that the VSA system leads to a highly significant improvement in time efficiency and offers better support for the analysis, modeling, and reanalysis of service descriptions.

Show publication details

Bhatti, Nadeem; Fellner, Dieter W.

Visual Semantic Analysis to Support Semi-Automatic Modeling of Semantic Service Descriptions

2011

Dogru, Ali H. (Ed.) et al.: Modern Software Engineering Concepts and Practices : Advanced Approaches. Hershey: IGI Global, 2011, pp. 151-195

The service-oriented architecture has become one of the most popular approaches for distributed business applications. A new trend of service ecosystems is emerging, in which service providers can augment their core services with available functionalities related to business service delivery, such as distribution and delivery. The semantic description of services for business service delivery will become a bottleneck in such service ecosystems. In this chapter, the Visual Semantic Analysis approach is presented to support semi-automatic modeling of semantic service descriptions by combining machine learning and interactive visualization techniques. Furthermore, two application scenarios from the project THESEUS-TEXO (funded by the German Federal Ministry of Economics and Technology) are presented as an evaluation of the Visual Semantic Analysis approach.

Show publication details

Kahn, Svenja; Wuest, Harald; Stricker, Didier; Fellner, Dieter W.

3D Discrepancy Check via Augmented Reality

2010

Höllerer, Tobias (Ed.) et al.: 9th IEEE International Symposium on Mixed and Augmented Reality 2010 : ISMAR. Science & Technology Proceedings. Los Alamitos, Calif.: IEEE Computer Society, 2010, pp. 241-242

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <9, 2010, Seoul, South Korea>

For many tasks, like markerless model-based camera tracking, it is essential that the 3D model of a scene accurately represents the real geometry of the scene. It is therefore very important to detect deviations between a 3D model and a scene. We present an innovative approach based on the insight that camera tracking can be used not only for Augmented Reality visualization but also to solve the correspondence problem between 3D measurements of a real scene and their corresponding positions in the 3D model. We combine a time-of-flight camera (which acquires depth images in real time) with a custom 2D camera (used for the camera tracking) and developed an analysis-by-synthesis approach to detect deviations between a scene and a 3D model of the scene.
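
The analysis-by-synthesis idea (compare a depth image rendered from the 3D model against the measured time-of-flight depth image, once both are registered via the tracked camera pose) can be illustrated with a minimal sketch; the function name and the fixed 5 cm threshold are illustrative choices, not taken from the paper:

```python
import numpy as np

def depth_discrepancy(model_depth, measured_depth, threshold=0.05):
    """Compare a depth image rendered from the 3D model with a measured
    time-of-flight depth image (both in metres, pixel-aligned via the
    tracked camera pose) and flag pixels whose deviation exceeds a threshold."""
    deviation = measured_depth - model_depth   # signed deviation per pixel
    mask = np.abs(deviation) > threshold       # True where scene and model disagree
    return deviation, mask

# toy example: a flat model surface at 2.0 m, with a 10 cm bump in the scene
model = np.full((4, 4), 2.0)
scene = model.copy()
scene[1:3, 1:3] -= 0.10                        # scene geometry protrudes towards camera
dev, mask = depth_discrepancy(model, scene)
```

In the real system the model depth image would come from rendering the tracked camera view, so the comparison happens in the same image space as the ToF measurement.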

Show publication details

Wendt, Lars Henning; Stork, André; Kuijper, Arjan; Fellner, Dieter W.

3D Reconstruction from Line Drawings

2010

Institute for Systems and Technologies of Information, Control and Communication (INSTICC): VISIGRAPP 2010. Proceedings : International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. INSTICC Press, 2010, pp. 65-71

International Conference on Computer Graphics Theory and Applications (GRAPP) <5, 2010, Angers, France>

In this work we introduce an approach for reconstructing digital 3D models from multiple perspective line drawings. One major goal is to keep the required user interaction simple and at a minimum, while imposing no constraints on the object's shape. Such a system provides a useful extension for the digitization of paper-based styling concepts, which today is still a time-consuming process. In the presented method the line drawings are first decomposed into curves, assembling a network of curves. In a second step, the positions of the curves' endpoints are determined in 3D, using multiple sketches and a virtual camera model given by the user. Then the shapes of the 3D curves between the reconstructed 3D endpoints are inferred. This leads to a network of 3D curves, which can be used for first visual evaluations in 3D. During the whole process only little user interaction is needed, and it takes place only in the pre- and post-processing phases. The approach has been applied to multiple sketches, and it is shown that it creates plausible results within reasonable time.
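
Determining curve endpoint positions in 3D from multiple sketches is essentially a triangulation problem: each sketch, together with its virtual camera, constrains the point along one viewing ray. As a hedged illustration, here is standard linear (DLT) triangulation from two views, not necessarily the exact method used in the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: the corresponding
    2D image points. Returns the 3D point in Euclidean coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],     # each observed coordinate contributes
        x1[1] * P1[2] - P1[1],     # one linear constraint on the point
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)    # null space of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]            # dehomogenize
```

With endpoints fixed this way, the curve shapes in between can then be inferred, as the abstract describes.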

Show publication details

Pan, Xueming; Beckmann, Philipp; Havemann, Sven; Tzompanaki, Katerina; Doerr, Martin; Fellner, Dieter W.

A Distributed Object Repository for Cultural Heritage

2010

Artusi, Alessandro (Ed.) et al.: VAST 2010 : Eurographics Symposium Proceedings. Goslar: Eurographics Association, 2010, pp. 105-114; 182 (Color plate)

International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <11, 2010, Paris, France>

This paper describes the design and the implementation of a distributed object repository that offers cultural heritage experts and practitioners a working platform to access, use, share and modify digital content. The principle of collecting paradata to document each step in a potentially long sequence of processing steps implies a number of design decisions for the data repository, which are described and explained. Furthermore, we provide a description of the concise API of our implementation. Our intention is to provide an easy-to-understand recipe that may also be valuable for other data repository implementations that incorporate and operationalize the more theoretical concepts of intellectual transparency, collecting paradata, and compatibility with semantic networks.

Show publication details

Huff, Rafael; Neves, Tiago; Gierlinger, Thomas; Kuijper, Arjan; Stork, André; Fellner, Dieter W.

A General Two-Level Acceleration Structure for Interactive Ray Tracing on the GPU

2010

Computer Graphics Society (CGS): Computer Graphics International 2010. Short Papers : CGI [online]. [cited 01 February 2011] Available from: http://cgi2010.miralab.unige.ch/CGI_ShortPapersD4.html, 2010, 4 p.

Computer Graphics International (CGI) <28, 2010, Singapore>

Despite the superior image quality generated by ray tracing, programmers of time-critical applications have historically avoided it because of its computational costs. Nowadays, the hardware of modern desktops allows the execution of real-time ray tracers, but requires a specialized implementation based on specific characteristics of each application, such as scene complexity, kinds of motion, ray distribution, model structure, and hardware. The evaluation and development of these requirements are complex and time-consuming, especially for developers with no familiarity with rendering algorithms and graphics hardware programming. The aim of our work is to provide a general and practical method to efficiently execute interactive ray tracing on most systems. We considered the most common aspects of current computer graphics applications, like the use of a scene graph and support for static and dynamic objects. In addition, we also took into account common desktop hardware. This led us to the development of a special acceleration structure and its implementation on the GPU. In this paper, we present the development of our work, showing the combination of different techniques and our results.
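
A two-level acceleration structure can be pictured as a top level over per-object world-space bounding boxes and a bottom level over each object's primitives, so whole objects are rejected with one box test. This toy CPU sketch uses spheres as primitives and flat lists instead of the paper's GPU structure; all names are illustrative:

```python
import numpy as np

def hit_aabb(orig, direction, lo, hi):
    """Slab test: does the ray intersect the axis-aligned box [lo, hi]?"""
    inv = 1.0 / direction                     # inf components are handled by min/max
    t1, t2 = (lo - orig) * inv, (hi - orig) * inv
    tmin = np.minimum(t1, t2).max()
    tmax = np.maximum(t1, t2).min()
    return tmax >= max(tmin, 0.0)

def trace(orig, direction, objects):
    """Two-level traversal: test each object's world-space AABB first (top
    level); only on a hit descend into its primitive list (bottom level)."""
    best = None
    for obj in objects:
        if not hit_aabb(orig, direction, obj["lo"], obj["hi"]):
            continue                          # whole object skipped in one test
        for centre, radius in obj["spheres"]:
            oc = orig - centre                # ray-sphere intersection
            b = np.dot(oc, direction)
            disc = b * b - np.dot(oc, oc) + radius * radius
            if disc >= 0:
                t = -b - np.sqrt(disc)
                if t > 0 and (best is None or t < best):
                    best = t
    return best
```

Rebuilding only the small top level per frame is what makes such structures attractive for dynamic scenes.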

Show publication details

Hohmann, Bernhard; Havemann, Sven; Krispel, Ulrich; Fellner, Dieter W.

A GML Shape Grammar for Semantically Enriched 3D Building Models

2010

Computers & Graphics, Vol.34 (2010), 4, pp. 322-334; First published as article in Press 16 June 2010, DOI: 10.1016/j.cag.2010.05.007

The creation of building and facility models is a tedious and complicated task. Existing CAD models are typically not well suited since they contain too much or not enough detail; the manual modeling approach does not scale; different views on the same model are needed, as well as different levels of detail and abstraction; and finally, conventional modeling tools are inappropriate for models with many internal parameter dependencies. As a solution to this problem we propose a combination of a procedural approach with shape grammars. The model is created in a top-down manner; high-level changeability and re-usability are much less of a problem; and it can be interactively evaluated to provide different views at runtime. We present some insights on the relation between imperative and declarative grammar descriptions, and show a detailed case study with facility surveillance as a practical application.

Show publication details

Schwenk, Karsten; Jung, Yvonne; Behr, Johannes; Fellner, Dieter W.

A Modern Declarative Surface Shader for X3D

2010

ACM SIGGRAPH: Proceedings Web3D 2010 : 15th International Conference on 3D Web Technology. New York: ACM Press, 2010, pp. 7-15

International Conference on 3D Web Technology (WEB3D) <15, 2010, Los Angeles, USA>

This paper introduces a modern, declarative surface shader for the X3D standard that allows for a compact, expressive, and implementation-independent specification of surface appearance. X3D's Material node is portable, but its feature set has become inadequate over the last years. Explicit shader programs, on the other hand, offer the expressive power to specify advanced shading techniques, but are highly implementation-dependent. The motivation for our proposal is to bridge the gap between these two worlds - to provide X3D with renderer-independent support for modern materials and to increase interoperability with DCC tools. At the core of our proposal is the CommonSurfaceShader node. This node provides no explicit shader code, only a slim declarative interface consisting of a set of parameters with clearly defined semantics. Implementation details are completely hidden and portability is maximized. It supports diffuse and glossy surface reflection, bump mapping, and perfect specular reflection and refraction. This feature set can capture the appearance of many common materials accurately and is easily mappable to the material descriptions of other software packages and file formats. To verify our claims, we have implemented and analyzed the proposed node in three different rendering pipelines: a renderer based on hardware accelerated rasterization, an interactive ray tracer, and a path tracer.

Show publication details

Berndt, Rene; Buchgraber, Gerald; Havemann, Sven; Settgast, Volker; Fellner, Dieter W.

A Publishing Workflow for Cultural Heritage Artifacts from 3D-Reconstruction to Internet Presentation

2010

Ioannides, Marinos (Ed.) et al.: Digital Heritage. Third International Conference, EuroMed 2010. Berlin, Heidelberg, New York: Springer, 2010. (Lecture Notes in Computer Science (LNCS) 6436), pp. 166-178

International Euro-Mediterranean Conference (EuroMed) <3, 2010, Lemessos, Cyprus>

Publishing cultural heritage as 3D models with embedded annotations and additional information on the web is still a major challenge. This includes the acquisition of the digital 3D model, the authoring and editing of the additional information to be attached to the digital model as well as publishing it in a suitable format. These steps usually require very expensive hardware and software tools. Especially small museums cannot afford an expensive scanning campaign in order to generate the 3D models from the real artefacts. In this paper we propose an affordable publishing workflow from acquisition of the data to authoring and enriching it with the related metadata and information to finally publish it in a way suitable for access by means of a web browser over the internet. All parts of the workflow are based on open source solutions and free services.

Show publication details

Behr, Johannes; Jung, Yvonne; Keil, Jens; Drevensek, Timm; Zöllner, Michael; Eschler, Peter; Fellner, Dieter W.

A Scalable Architecture for the HTML5 / X3D Integration Model X3DOM

2010

ACM SIGGRAPH: Proceedings Web3D 2010 : 15th International Conference on 3D Web Technology. New York: ACM Press, 2010, pp. 185-193

International Conference on 3D Web Technology (WEB3D) <15, 2010, Los Angeles, USA>

We present a scalable architecture which implements and further evolves the HTML/X3D integration model X3DOM introduced in [Behr et al. 2009]. The goal of this model is to integrate and update declarative X3D content directly in the HTML DOM tree. The model was previously presented in a very abstract and generic way, only suggesting implementation strategies. The available open-source x3dom.js architecture provides concrete solutions to the previously open points and extends the generic model where necessary. The outstanding feature of the architecture is that it provides a single declarative interface to application developers while supporting various backends through a powerful fallback model. This fallback model does not prescribe a single implementation strategy for the runtime and rendering module but supports different methods transparently. These include native browser implementations and X3D plugins as well as a WebGL-based scene graph, which allows running the content without installing additional plugins on all browsers that support WebGL. The paper furthermore discusses generic aspects of the architecture, like encoding and introspection, but also provides details concerning two backends. It shows how the system interfaces with X3D plugins and WebGL, and also discusses implementation-specific features and limitations.
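
The fallback model can be pictured as walking an ordered preference list and taking the first backend the browser actually supports; this tiny sketch is a hypothetical illustration of that selection logic, not code from x3dom.js:

```python
def pick_backend(available, preference=("native", "plugin", "webgl")):
    """Fallback-model sketch: walk an ordered preference list of rendering
    backends and return the first one the current browser supports.
    'available' is the set of backends detected at runtime."""
    for backend in preference:
        if backend in available:
            return backend
    return None          # no backend available: content cannot be rendered
```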

Show publication details

Bernard, Jürgen; Brase, Jan; Fellner, Dieter W.; Koepler, Oliver; Kohlhammer, Jörn; Ruppert, Tobias; Schreck, Tobias; Sens, Irina

A Visual Digital Library Approach for Time-Oriented Scientific Primary Data

2010

Lalmas, Mounia (Ed.) et al.: Research and Advanced Technology for Digital Libraries : 14th European Conference ECDL. Proceedings. Berlin, Heidelberg, New York: Springer, 2010. (Lecture Notes in Computer Science (LNCS) 6273), pp. 352-363

European Conference on Research and Advanced Technology for Digital Libraries (ECDL) <14, 2010, Glasgow, UK>

Digital Library support for textual and certain types of non-textual documents has significantly advanced over the last years. While Digital Library support implies many aspects along the whole library workflow model, interactive and visual retrieval allowing effective query formulation and result presentation are important functions. Recently, new kinds of non-textual documents which merit Digital Library support, but cannot yet be accommodated by existing Digital Library technology, have come into focus. Scientific primary data, as produced, for example, by scientific experimentation, earth observation, or simulation, is such a data type. We report on a concept and first implementation of Digital Library functionality, supporting visual retrieval and exploration in a specific important class of scientific primary data, namely, time-oriented data. The approach is developed in an interdisciplinary effort by experts from the library, natural sciences, and visual analytics communities. In addition to presenting the concept and discussing relevant challenges, we present results from a first implementation of our approach as applied on a real-world scientific primary data set.

Show publication details

Schwenk, Karsten; Franke, Tobias; Drevensek, Timm; Kuijper, Arjan; Bockholt, Ulrich; Fellner, Dieter W.

Adapting Precomputed Radiance Transfer to Real-time Spectral Rendering

2010

Lensch, Hendrik P. A. (Ed.) et al.: Eurographics 2010. Short Papers, pp. 49-52

Eurographics <31, 2010, Norrköping, Sweden>

Spectral rendering takes the full visible spectrum into account when calculating light-surface interaction and can overcome the well-known deficiencies of rendering with tristimulus color models. We present a variant of the precomputed radiance transfer algorithm that is tailored towards real-time spectral rendering on modern graphics hardware. Our method renders diffuse, self-shadowing objects with spatially varying spectral reflectance properties under distant, dynamic, full-spectral illumination. To achieve real-time frame rates and practical memory requirements we split the light transfer function into an achromatic part that varies per vertex and a wavelength-dependent part that represents a spectral albedo texture map. As an additional optimization, we project reflectance and illuminant spectra into an orthonormal basis. One area of application for our research is virtual design applications that require relighting objects with high color fidelity at interactive frame rates.
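
The projection of reflectance and illuminant spectra into an orthonormal basis can be sketched with an SVD-derived basis: a full spectrum is replaced by a few coefficients, which is what makes the representation compact enough for real-time use. The random spectra and the basis size k=3 below are illustrative stand-ins, not the paper's data:

```python
import numpy as np

def spectral_basis(spectra, k=3):
    """Build a k-dimensional orthonormal basis from sample spectra (rows)
    via SVD; projecting onto it replaces a full spectrum by k coefficients."""
    _, _, Vt = np.linalg.svd(spectra, full_matrices=False)
    return Vt[:k]                    # rows are orthonormal basis spectra

# toy reflectance spectra over 8 wavelength bins
rng = np.random.default_rng(0)
spectra = rng.random((5, 8))
B = spectral_basis(spectra, k=3)
coeffs = spectra @ B.T               # compact per-texel representation
approx = coeffs @ B                  # reconstruction at shading time
```

Because the basis is orthonormal, projection and reconstruction are plain matrix products, which maps well to graphics hardware.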

Show publication details

Fellner, Dieter W.; Baier, Konrad; Fey, Thekla; Bornemann, Heidrun; Wehner, Detlef; Mentel, Katrin

Annual Report 2009: Fraunhofer Institute for Computer Graphics Research IGD

2010

Darmstadt, 2010

Show publication details

Burkhardt, Dirk; Stab, Christian; Nazemi, Kawa; Breyer, Matthias; Fellner, Dieter W.

Approaches for 3D-Visualizations and Knowledge Worlds for Exploratory Learning

2010

Gómez Chova, Luis (Ed.) et al.: International Conference on Education and New Learning Technologies. Proceedings [CD-ROM] : EDULEARN10. Valencia: IATED, 2010, pp. 006427-006437

International Conference on Education and New Learning Technologies (EDULEARN) <2, 2010, Barcelona, Spain>

Graphical knowledge representations open promising perspectives for supporting explorative learning on the web. 2D visualizations have recently been evaluated as gainful knowledge exploration systems, whereas 3D visualization systems have not yet found their way into web-based explorative learning. 3D visualizations and "3D Knowledge Worlds", as virtual environments in the context of e-learning, offer a high degree of authenticity, because the metaphors used are known to users from the real world. But challenges such as using a 3D Knowledge World without losing the learning context and the focused learning goals are rarely investigated and considered. New technologies provide the opportunity to introduce 3D visualizations and environments on the web to support web-based explorative learning. It is therefore necessary to investigate the prospects of 3D visualization for transferring and adopting knowledge on the web. The following paper describes different approaches to using 3D visualizations and Knowledge Worlds for conveying knowledge in web-based systems using web-based contents. The approaches for 3D visualizations are classified by different layout algorithms, and the knowledge worlds are classified by interaction character.

Show publication details

Berndt, Rene; Blümel, Ina; Clausen, Michael; Damm, David; Diet, Jürgen; Fellner, Dieter W.; Fremerey, Christian; Klein, Reinhard; Scherer, Maximilian; Schreck, Tobias; Sens, Irina; Thomas, Verena; Wessel, Raoul

Aufbau einer verteilten digitalen Bibliothek für nichttextuelle Dokumente - Ansatz und Erfahrungen des PROBADO Projekts

2010

Mittermaier, Bernhard (Ed.): eLibrary - den Wandel gestalten. Jülich: Forschungszentrum Jülich, 2010. (Schriften des Forschungszentrums Jülich, Reihe Bibliothek / Library 20), pp. 219-233

Konferenz der Zentralbibliothek im Forschungszentrum Jülich (Wisskomm) <5, 2010, Jülich>

The PROBADO project is a DFG-funded performance centre for research information, whose goal is the prototypical construction and operation of a distributed digital library for heterogeneous, non-textual documents. The project addresses all steps of the library value chain, from collection building through semi-automatic content indexing to visual-interactive search and presentation, as well as operational aspects. This contribution describes the chosen approach and systematizes and classifies the practical and conceptual experiences gained in the project so far.

Show publication details

Hofmann, Cristian Erik; Boettcher, Uwe; Fellner, Dieter W.

Change Awareness for Collaborative Video Annotation

2010

Lewkowicz, Myriam (Ed.) et al.: Proceedings of COOP 2010 : Proceedings of the 9th International Conference on the Design of Cooperative Systems. London: Springer, 2010, pp. 101-117

International Conference on the Design of Cooperative Systems (COOP) <9, 2010, Aix-en-Provence, France>

Collaborative video annotation is a broad field of research and is widely used in productive environments. While it is easy to follow changes in small systems with few users, keeping track of all changes in large environments can easily get overwhelming. The easiest way, and a first approach, to prevent users from getting lost is to show them all changes in an appropriate way. However, this list of changes can itself become very large when many contributors add new information to shared data resources. To prevent users from getting lost even with a list of changes, this paper introduces a way to subscribe to parts of the system so that only the relevant changes are shown. To achieve this goal, the framework provides an approach to check the relevance of changes, which is not trivial in three-dimensional spaces, and to accumulate them for later reference by the subscribing user. The benefit for users is that they need less time to stay up-to-date and have more time for applying their own changes.
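
The subscription idea can be sketched as a feed that accumulates only the changes matching a user's subscriptions. The trivial region-equality relevance check below stands in for the paper's non-trivial relevance test in three-dimensional spaces; all names are illustrative:

```python
from collections import defaultdict

class ChangeFeed:
    """Subscription sketch: users register interest in parts of the shared
    data; only matching changes are accumulated for later reference."""
    def __init__(self):
        self.subs = defaultdict(set)       # user -> subscribed regions
        self.pending = defaultdict(list)   # user -> relevant changes

    def subscribe(self, user, region):
        self.subs[user].add(region)

    def publish(self, region, change):
        for user, regions in self.subs.items():
            if region in regions:          # relevance check (trivial here)
                self.pending[user].append(change)

    def fetch(self, user):
        """Deliver the accumulated relevant changes and clear the queue."""
        changes, self.pending[user] = self.pending[user], []
        return changes
```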

Show publication details

Ioannides, Marinos; Fellner, Dieter W.; Georgopoulos, Andreas; Hadjimitsis, Diofantos G.

Digital Heritage. Third International Conference, EuroMed 2010

2010

Berlin, Heidelberg, New York : Springer, 2010

International Euro-Mediterranean Conference (EuroMed) <3, 2010, Lemessos, Cyprus>

Lecture Notes in Computer Science (LNCS) 6436

The focal point of this conference was digital heritage, which all of us involved in the documentation of cultural heritage continually strive to implement. The excellent selection of papers published in the proceedings reflects in the best possible way the benefits of exploiting modern technological advances for the restoration, preservation and e-documentation of any kind of cultural heritage. The topics covered included experiences in the use of innovative recording technologies and methods, and how to take best advantage of the results obtained to build up new instruments and improved methodologies for documenting in multimedia formats, archiving in digital libraries and managing a cultural heritage.

Show publication details

Binotto, Alecio; Pedras, Bernardo; Götz, Marcelo; Kuijper, Arjan; Pereira, Carlos Eduardo; Stork, André; Fellner, Dieter W.

Effective Dynamic Scheduling on Heterogeneous Multi/Manycore Desktop Platforms

2010

Bentes, Cristiana (Ed.) et al.: SBAC-PADW 2010 : 1st Workshop on Applications for Multi and Many Core Architectures (WAMMCA). Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2010, pp. 37-42

Workshop on Applications for Multi and Many Core Architectures (WAMMCA) <1, 2010, Petrópolis, Brazil>

GPUs (Graphics Processing Units) have become one of the main co-processors bringing high-performance computing to the desktop. Together with multicore CPUs and other co-processors, a powerful heterogeneous execution platform is built on a desktop for data-intensive calculations. In our perspective, we see the modern desktop as a heterogeneous cluster that can deal with several applications' tasks at the same time. To improve application performance and exploit this heterogeneity, the distribution of workload over the asymmetric PUs (Processing Units) plays an important role in the system. However, this problem is challenging since the cost of a task on a PU is non-deterministic and can be influenced by several parameters not known a priori, like the size of the problem domain. We present a context-aware architecture that maximizes application performance on such platforms. This approach combines a model for a first scheduling, based on an offline performance benchmark, with a runtime model that keeps track of tasks' real performance. We carried out a demonstration using a CPU-GPU platform for computing iterative SLE (Systems of Linear Equations) solvers, using the number of unknowns as the main parameter for the assignment decision. We achieved a gain of 38.3% in comparison to the static assignment of all tasks to the GPU (which is done by current programming models, such as OpenCL and CUDA for Nvidia).
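
The combination of an offline benchmark with a runtime performance model can be sketched as follows: assign each task to the PU with the lowest estimated cost for its problem size, then refine that estimate with the measured runtime. The cost figures and the exponential-moving-average update are illustrative assumptions, not the paper's actual model:

```python
class Scheduler:
    """Sketch of context-aware assignment: pick the processing unit (PU)
    with the lowest estimated cost for a task size, then refine the
    estimate from the measured runtime."""
    def __init__(self, benchmark, alpha=0.3):
        # benchmark: {pu: cost per unknown}, seeded by an offline first run
        self.cost = dict(benchmark)
        self.alpha = alpha

    def assign(self, unknowns):
        """Choose the PU with the lowest estimated total cost."""
        return min(self.cost, key=lambda pu: self.cost[pu] * unknowns)

    def report(self, pu, unknowns, runtime):
        """Blend the observed cost per unknown into the estimate (EMA)."""
        measured = runtime / unknowns
        self.cost[pu] = (1 - self.alpha) * self.cost[pu] + self.alpha * measured

# hypothetical benchmark: the GPU is cheaper per unknown at first
sched = Scheduler({"cpu": 2.0e-6, "gpu": 0.5e-6})
```

If measured GPU runtimes turn out worse than the benchmark suggested, later tasks migrate to the CPU, which is the point of tracking real performance at runtime.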

Show publication details

Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Embodiment Mesh Processing

2010

Fischer, Xavier (Ed.) et al.: Research in Interactive Design. Volume 3 : Virtual, Interactive and Integrated Product Design and Manufacturing for Industrial Innovation. Paris, Berlin, Heidelberg: Springer, 2010, 6 p.

International Conference on Integrated, Interactive and Virtual Product Engineering (IDMME - Virtual Concept) <2010, Bordeaux, France>

During the last two decades, several approaches have been proposed to deal with integration in the embodiment phase of engineering design. This phase deals with virtual product development and is supported by Computer-Aided Design (CAD) and Computer-Aided Engineering (CAE). Nonetheless, this integration has not really been achieved. There is well-established communication from design to analysis, but there is a lack of design operations and functionalities within an analysis environment. This lack of integration will persist as long as different representation schemes are used for design and analysis. Hence, Embodiment Mesh Processing (EMP) is based on a common mesh representation and aims to provide mesh-based modeling functionalities within an analysis environment. We present our reasoning behind EMP and the building blocks needed for enabling a fully integrated design-analysis interaction loop and the exploration of design variations.

Show publication details

Schiffer, Thomas; Schiefer, Andreas; Berndt, Rene; Ullrich, Torsten; Settgast, Volker; Fellner, Dieter W.

Enlightened by the Web: A Service-oriented Architecture for Real-time Photorealistic Rendering

2010

Kalkbrenner, Stefan (Ed.): 5. Multimediakongress Wismar 2010 : Netzwerk - Forschung - Innovation [CD-ROM], 8 p.

Kongress Multimediatechnik <5, 2010, Wismar, Germany>

Integrating a web service into an application based on the OpenSG scene graph system makes it possible to access and modify the contents of the scene graph during runtime without recompilation of the application. These features offer many new possibilities in interacting with the OpenSG application and the scene graph. The main features of our web service are querying the contents of the scene graph, adding and deleting of nodes and changing properties of nodes.

Show publication details

Strobl, Martin; Schinko, Christoph; Ullrich, Torsten; Fellner, Dieter W.

Euclides - A JavaScript to PostScript Translator

2010

International Academy, Research, and Industry Association (IARIA): Computation Tools 2010 : The First International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking, pp. 14-21

International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking (Computation Tools) <1, 2010, Lisbon, Portugal>

Offering easy access to programming languages that are difficult to approach directly dramatically reduces the inhibition threshold. The Generative Modeling Language (GML) is such a language and can be described as being similar to Adobe's PostScript. A major drawback of all PostScript dialects is their unintuitive reverse Polish notation, which makes both reading and writing a cumbersome task. A language should offer a structured and intuitive syntax in order to increase efficiency and avoid frustration during the creation of code. To overcome this issue, we present a new approach to translate JavaScript code to GML automatically. While this translation is basically a simple infix-to-postfix notation rewrite for mathematical expressions, the correct translation of control flow structures is a non-trivial task, since there is no concept of "goto" in the PostScript language and its dialects. The main contribution of this work is the complete translation of JavaScript into a PostScript dialect, including all control flow statements. To the best of our knowledge, this is the first complete translator.
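
The expression part of such a translation is the classic infix-to-postfix rewrite. A minimal shunting-yard sketch is shown below; it is illustrative only, covers binary arithmetic operators and parentheses, and says nothing about the control-flow translation that is the paper's main contribution:

```python
def infix_to_postfix(tokens):
    """Shunting-yard: rewrite an infix token list into postfix (reverse
    Polish) order, the notation used by PostScript dialects such as GML."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    out, ops = [], []
    for tok in tokens:
        if tok in prec:
            # pop operators of equal or higher precedence first
            while ops and ops[-1] != "(" and prec[ops[-1]] >= prec[tok]:
                out.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                out.append(ops.pop())
            ops.pop()                 # discard the "("
        else:
            out.append(tok)           # operand goes straight to the output
    out.extend(reversed(ops))
    return out
```

For example, `a + b * c` becomes `a b c * +`, which maps directly onto a stack-based evaluator.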

Show publication details

Krispel, Ulrich; Havemann, Sven; Fellner, Dieter W.

FaMoS - A Visual Editor for Hierarchical Volumetric Modeling

2010

Kalkbrenner, Stefan (Ed.): 5. Multimediakongress Wismar 2010 : Netzwerk - Forschung - Innovation [CD-ROM], 6 p.

Kongress Multimediatechnik <5, 2010, Wismar, Germany>

Shape grammar systems are a suitable method for modeling hierarchically structured 3D objects. A great variety of similar objects can be modeled using only a small set of rules. We present a prototypical graphical user interface for hierarchical volumetric modeling using the split grammar approach. Our focus is on creating split rules interactively rather than through scripting. Furthermore, we extend the concept of subdividing boxes to a more general representation and evaluate it in the context of generating 3D facades of complex buildings.
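
A single split rule of such a grammar can be sketched as subdividing an axis-aligned box into parts with given relative sizes; applying rules recursively to the parts yields the hierarchy. The facade example and all names below are illustrative, not taken from FaMoS:

```python
def split(box, axis, fractions):
    """One split-grammar rule: subdivide an axis-aligned box along one axis
    into parts with the given relative sizes (e.g. a facade into floors).
    A box is a pair (lo, hi) of coordinate tuples."""
    lo, hi = box
    total = sum(fractions)
    parts, start = [], lo[axis]
    for f in fractions:
        end = start + (hi[axis] - lo[axis]) * f / total
        child_lo, child_hi = list(lo), list(hi)
        child_lo[axis], child_hi[axis] = start, end
        parts.append((tuple(child_lo), tuple(child_hi)))
        start = end
    return parts

# a 9 m high facade box split vertically into a ground floor and two upper floors
facade = ((0.0, 0.0, 0.0), (10.0, 9.0, 0.3))
floors = split(facade, axis=1, fractions=[4, 2.5, 2.5])
```

Each resulting floor box can then be split again horizontally into window and wall segments, which is how a small rule set generates a large variety of facades.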

Show publication details

Buchgraber, Gerald; Berndt, Rene; Havemann, Sven; Fellner, Dieter W.

FO3D - Formatting Objects for PDF3D

2010

ACM SIGGRAPH: Proceedings Web3D 2010 : 15th International Conference on 3D Web Technology. New York: ACM Press, 2010, pp. 63-71

International Conference on 3D Web Technology (WEB3D) <15, 2010, Los Angeles, USA>

3D is useful in many real-world applications beyond computer games. The efficiency of communication is greatly enhanced by combining interlinked verbal descriptions with 3D content. However, there is a wide gap between the great demand for 3D content and the inconvenience and cost of delivering it. We propose using PDF, which is extremely well supported by standard content production workflows. Producing PDF with embedded 3D is currently not an easy task. As a solution to the problem we offer a freely available tool that makes embedding 3D in PDF documents an easy to use technology. Our solution is very flexible, extensible, and can be easily integrated with existing document workflow technology.

Show publication details

Nazemi, Kawa; Breyer, Matthias; Stab, Christian; Burkhardt, Dirk; Fellner, Dieter W.

Intelligent Exploration System - an Approach for User-Centered Exploratory Learning

2010

Gómez Chova, Luis (Ed.) et al.: International Conference on Education and New Learning Technologies. Proceedings [CD-ROM] : EDULEARN10. Valencia: IATED, 2010, pp. 006476-006484

International Conference on Education and New Learning Technologies (EDULEARN) <2, 2010, Barcelona, Spain>

The following paper describes the conceptual design of an Intelligent Exploration System (IES) that offers a user-adapted graphical environment for web-based knowledge repositories, to support and optimize explorative learning. The paper starts with a short definition of learning by exploring and introduces Intelligent Tutoring Systems and semantic technologies for developing such an Intelligent Exploration System. The IES itself is described with a short overview of existing learner and user analysis methods, visualization techniques for exploring knowledge with semantic technology, and an explanation of the characteristics of adaptation that offer a more efficient learning environment.

Show publication details

Nazemi, Kawa; Stab, Christian; Fellner, Dieter W.

Interaction Analysis for Adaptive User Interfaces

2010

Huang, De-Shuang (Ed.) et al.: Advanced Intelligent Computing Theories and Applications : 6th International Conference on Intelligent Computing. Berlin, Heidelberg, New York: Springer, 2010. (Lecture Notes in Computer Science (LNCS) 6215), pp. 362-371

International Conference on Intelligent Computing (ICIC) <6, 2010, Changsha, China>

Adaptive User Interfaces are able to facilitate the handling of computer systems through automatic adaptation to users' needs and preferences. For the realization of these systems, information about the individual user is needed. This user information can be extracted from user events by applying analytical methods, without requiring active input by the user. In this paper we introduce a reusable interaction analysis system based on probabilistic methods that predicts user interactions, recognizes user activities, and detects user preferences on different levels of abstraction. The evaluation reveals that the prediction quality of the developed algorithm outperforms that of other established prediction methods.

Show publication details

Nazemi, Kawa; Stab, Christian; Fellner, Dieter W.

Interaction Analysis: An Algorithm for Interaction Prediction and Activity Recognition in Adaptive Systems

2010

Chen, Wen (Ed.) et al.: IEEE International Conference on Intelligent Computing and Intelligent Systems. Proceedings : ICIS 2010. New York: IEEE Press, 2010, pp. 607-612

IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS) <2, 2010, Xiamen, China>

Predictive statistical models are used in the area of adaptive user interfaces to model user behavior and to infer user information from interaction events in an implicit and non-intrusive way. This information constitutes the basis for tailoring the user interface to the needs of the individual user. Consequently, the user analysis process should model the user with information that can be used in various systems to recognize user activities, intentions, and roles, in order to accomplish an adequate adaptation to the given user and his current task. In this paper we present the improved prediction algorithm KO*/19, which, besides predicting interactions, is able to recognize behavioral patterns for identifying user activities. By means of this extension, the evaluation shows that the KO*/19 algorithm improves the Mean Prediction Rank by more than 19% compared to other well-established prediction algorithms.
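The KO*/19 algorithm itself is not reproduced in the abstract; as a rough illustration of the general idea of interaction prediction, the following sketch ranks likely next interactions with a simple first-order transition model. All event names and the class design are hypothetical, not taken from the paper.

```python
from collections import defaultdict

class InteractionPredictor:
    """First-order Markov model over user interaction events.

    Counts transitions between consecutive events and ranks candidate
    next events by observed frequency.
    """

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, events):
        # Update transition counts from a sequence of interaction events.
        for current, nxt in zip(events, events[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        # Candidate next events, most frequently observed successor first.
        counts = self.transitions[current]
        return sorted(counts, key=counts.get, reverse=True)

predictor = InteractionPredictor()
predictor.observe(["open", "zoom", "pan", "zoom", "pan", "close"])
print(predictor.predict("zoom"))
```

A real adaptive system would additionally weight recency and abstraction levels, but the ranking step above conveys how a Mean Prediction Rank can be measured against logged interactions.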

Show publication details

Jung, Yvonne; Webel, Sabine; Olbrich, Manuel; Drevensek, Timm; Franke, Tobias; Roth, Marcus; Fellner, Dieter W.

Interactive Textures as Spatial User Interfaces in X3D

2010

ACM SIGGRAPH: Proceedings Web3D 2010 : 15th International Conference on 3D Web Technology. New York: ACM Press, 2010, pp. 147-150

International Conference on 3D Web Technology (WEB3D) <15, 2010, Los Angeles, USA>

3D applications, e.g. in the context of visualization or interactive design review, can require complex user interaction to manipulate certain elements, a typical task which requires standard user interface elements. However, there are still no generalized methods for selecting and manipulating objects in 3D scenes, and 3D GUI elements often fail to gather support for reasons of simplicity, leaving developers to replicate interactive elements themselves. Therefore, we present a set of nodes that introduce different kinds of 2D user interfaces to X3D. We define a base type for these user interfaces called "InteractiveTexture", a 2D texture node implementing slots for input forwarding. From this node we derive several user interface representations to enable complex user interaction suitable for both desktop and immersive settings.

Show publication details

Burkhardt, Dirk; Hofmann, Cristian Erik; Nazemi, Kawa; Stab, Christian; Breyer, Matthias; Fellner, Dieter W.

Intuitive Semantic-Editing for Regarding Needs of Domain-Experts

2010

Herrington, Jan (Ed.) et al.: ED-Media 2010 : World Conference on Educational Multimedia, Hypermedia & Telecommunications [online]. Chesapeake: AACE, 2010, pp. 860-869

World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-Media) <2010, Toronto, Canada>

Ontologies are used to represent knowledge and its semantic information across different topics, allowing users to explore knowledge and find information faster thanks to the structured data. To obtain a well-populated knowledge base, editors are needed for entering new information and editing existing information. However, most existing ontology editors are designed for experienced ontology experts. Experts from other fields, e.g. physicians, are often novices in ontology creation; they need adequate tools that hide the complexity of ontology structures. In the area of e-learning, such domain experts are often teachers as well. In this paper we present a method for taking the needs of domain experts into account, so that an editor can be designed that allows users to edit and add information without prior experience in creating ontologies. With such an editor, domain experts are able to commit their expert knowledge to the ontology.

Show publication details

Binotto, Alecio; Daniel, Christian G.; Weber, Daniel; Kuijper, Arjan; Stork, André; Pereira, Carlos Eduardo; Fellner, Dieter W.

Iterative SLE Solvers over a CPU-GPU Platform

2010

IEEE Computer Society: Proceedings 2010 12th IEEE International Conference on High Performance Computing and Communications : HPCC 2010. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2010, pp. 305-313

IEEE International Conference on High Performance Computing and Communications (HPCC) <12, 2010, Melbourne, Australia>

GPUs (Graphics Processing Units) have become one of the main co-processors contributing to bringing high-performance computing to the desktop. Together with multi-core CPUs, they form a powerful heterogeneous execution platform for massive calculations. To improve application performance and exploit this heterogeneity, distributing the workload in a balanced way over the PUs (Processing Units) plays an important role. However, this problem is challenging since the cost of a task on a PU is non-deterministic and can be influenced by several parameters not known a priori, like the size of the problem domain. We present a comparison of iterative SLE (Systems of Linear Equations) solvers, used in many scientific and engineering applications, on a heterogeneous CPU-GPU platform and characterize scenarios in which the solvers obtain better performance. A new technique to improve memory access in the matrix-vector multiplication used by SLE solvers on GPUs is described and compared to standard implementations for CPUs and GPUs. The timing profiles are analyzed, and break-even points based on problem size are identified, indicating when it is faster to use the GPU instead of the CPU. Preliminary results show the importance of this study applied to a real-time CFD (Computational Fluid Dynamics) application with geometry modification.
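As a minimal illustration of the class of solvers compared in the paper, not the authors' GPU implementation, the following sketch runs Jacobi iteration on a sparse matrix stored in CSR form; the `csr_matvec` loop is exactly the memory-access hot spot that GPU-side SpMV techniques optimize.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Sparse matrix-vector product y = A x for a CSR-stored matrix A."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

def jacobi(data, indices, indptr, diag, b, iters=200):
    """Jacobi iteration x_{k+1} = x_k + D^{-1} (b - A x_k).

    Converges for diagonally dominant systems; `diag` holds the
    diagonal entries of A.
    """
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - csr_matvec(data, indices, indptr, x)) / diag
    return x

# Diagonally dominant 3x3 example: A = [[4,1,0],[1,4,1],[0,1,4]]
data = np.array([4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr = np.array([0, 2, 5, 7])
diag = np.array([4.0, 4.0, 4.0])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(data, indices, indptr, diag, b)
```

On a GPU, each row of the SpMV loop becomes an independent work item; coalescing the reads of `data` and `indices` is what the paper's memory-access technique targets.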

Show publication details

Fellner, Dieter W.; Baier, Konrad; Fey, Thekla; Bornemann, Heidrun; Wehner, Detlef; Mentel, Katrin

Jahresbericht 2009: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2010

Darmstadt, 2010

Show publication details

Yoon, Sang Min; Encarnação, José L. (Betreuer); Fellner, Dieter W. (Betreuer)

Markerless Motion Analysis in Diffusion Tensor Fields and Its Applications

2010

Darmstadt, TU, Diss., 2010

The analysis of deformable objects with a high degree of freedom has long been pursued by numerous researchers because it can be applied to such diverse areas as medical engineering, video surveillance and monitoring, Human Computer Interaction, browsing of video databases, interactive gaming, and other growing applications. Within computerized environments, these systems are largely divided into marker-based motion capture and markerless motion capture. In particular, markerless motion capture and analysis have been heavily studied using local features, color, shape, texture, and depth maps from stereo vision, but they remain a challenging issue in computer vision and computer graphics due to partial occlusion, clutter, dependency on camera viewpoints, high-dimensional state spaces, and pose ambiguity within the target object. In this thesis, we address the issue of efficient markerless motion capture and representation using skeletal features, for the purpose of analyzing and recognizing motion patterns in video sequences. To localize the motion of the target object in a 2D image or a 3D volume, we extract skeletal features by analyzing its Normalized Gradient Vector Flow in the space of diffusion tensor fields, since skeletal features are more robust and efficient than other features in recognizing and analyzing deformable objects. The skeletal features within the target object are automatically merged and split by measuring the dissimilarity of tensorial characteristics between neighboring pixels and voxels. The split skeletal features are used in human action recognition to understand human motion, and in target object detection and retrieval for content-based image retrieval.
This thesis provides the following contributions to the fields of computer vision and computer graphics: (i) it introduces the notion of features in the space of diffusion tensor fields and evaluates an analysis method for such features for motion interpretation; (ii) it presents a theory and an evaluation of methods for automatic skeleton splitting and merging with respect to a similarity measure between neighboring pixels in two dimensions or voxels in three dimensions; and (iii) it presents and demonstrates the proposed methodologies in diverse applications such as human action recognition and sketch-based image retrieval. With our system we can robustly handle several computer vision tasks to recognize and understand the motion of the target object without any prior information. In particular, human action recognition using 3D reconstruction from multiple images together with the skeleton splitting procedure is first proposed in this thesis and shown to be a useful and stable methodology. Furthermore, users can easily express their intention by sketching the characteristics of a target object and retrieve related objects from a database using the proposed method.

Show publication details

Schinko, Christoph; Strobl, Martin; Ullrich, Torsten; Fellner, Dieter W.

Modeling Procedural Knowledge: A Generative Modeler for Cultural Heritage

2010

Ioannides, Marinos (Ed.) et al.: Digital Heritage. Third International Conference, EuroMed 2010. Berlin, Heidelberg, New York: Springer, 2010. (Lecture Notes in Computer Science (LNCS) 6436), pp. 153-165

International Euro-Mediterranean Conference (EuroMed) <3, 2010, Lemessos, Cyprus>

Within the last few years generative modeling techniques have gained attention, especially in the context of cultural heritage. As a generative model describes an ideal object rather than a real one, generative techniques are a basis for object description and classification. This procedural knowledge differs from other kinds of knowledge, such as declarative knowledge, in a significant way: it can be applied to a task. This similarity to algorithms is reflected in the way generative models are designed: they are programmed. In order to make generative modeling accessible to cultural heritage experts, we created a generative modeling framework which accounts for their special needs. The result is a generative modeler (http://www.cgv.tugraz.at/euclides) based on an easy-to-use scripting language (JavaScript). The generative model meets the demands of documentation standards and fulfils sustainability conditions. Its integrated meta-modeler approach makes it independent of hardware, software, and platforms.

Show publication details

Ullrich, Torsten; Schiefer, Andreas; Fellner, Dieter W.

Modeling with Subdivision Surfaces

2010

Skala, Vaclav (Ed.): WSCG 2010. Full Papers Proceedings. Plzen: University of West Bohemia, 2010, pp. 1-8

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <18, 2010, Plzen, Czech Republic>

Subdivision surfaces are an established modeling tool in computer graphics and computer-aided design. While the theoretical foundations of subdivision surfaces are well studied, the correlation between a control mesh and its subdivided limit surface still has some open-ended questions: Which topology should a control mesh have? Where should control vertices be placed? A modeler - human or software - is confronted with these questions and has to answer them. In this paper we analyze four characteristic situations. Each one consists of an analytical reference surface S and several variants of control meshes Ci. In order to concentrate on the topology of the control meshes, the geometrical positions of their control vertices have been determined and optimized automatically. As a result we identified the best topology of all Ci to represent the given surface S. Based on these results we derived heuristics to model with subdivision surfaces. These heuristics are beneficial for all modelers.

Show publication details

Huff, Rafael; Neves, Tiago; Gierlinger, Thomas; Kuijper, Arjan; Stork, André; Fellner, Dieter W.

OpenCL vs. CUDA for Ray Tracing

2010

Brazilian Computer Society (SBC): XII Symposium on Virtual and Augmented Reality : SVR 2010. Brazil: Everton Cavalcante, 2010, 4 p.

Symposium on Virtual and Augmented Reality (SVR) <12, 2010, Natal, Brazil>

For many years the Graphics Processing Unit (GPU) of common desktops was used only to accelerate certain parts of the graphics pipeline. Once developers gained access to the native instruction set and memory of the massively parallel computational elements of GPUs, a lot changed: GPUs became powerful and programmable. Nowadays two SDKs are most widely used for GPU programming: CUDA and OpenCL. CUDA is the most adopted general-purpose parallel computing architecture for GPUs but is restricted to Nvidia graphics cards. In contrast, OpenCL is a new royalty-free framework for parallel programming intended to be portable across different hardware manufacturers or even different platforms. In this paper, we evaluate both solutions on a typical parallel algorithm: ray tracing. We report our performance results and our experiences in developing both implementations, which could easily be adapted to solve other problems.
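As a hint of why ray tracing parallelizes so well, here is the ray-sphere intersection test at the core of most ray tracers; each ray evaluates it independently, which maps directly to CUDA threads or OpenCL work items. This is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def ray_sphere_t(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None on a miss.

    Solves |o + t*d - c|^2 = r^2, the quadratic at the heart of a
    ray tracer's intersection kernel. In a GPU port, one thread or
    work item runs this per ray (i.e. per pixel), with no
    dependencies between rays.
    """
    oc = origin - center
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None          # ray misses the sphere
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None
```

Because every ray is independent, the only shared state is the (read-only) scene, which is why both CUDA and OpenCL versions of such a kernel look nearly identical apart from the host-side API.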

Show publication details

Ullrich, Torsten; Schinko, Christoph; Fellner, Dieter W.

Procedural Modeling in Theory and Practice

2010

Skala, Vaclav (Ed.): WSCG 2010. Poster Proceedings. Plzen: University of West Bohemia, 2010, pp. 5-8

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <18, 2010, Plzen, Czech Republic>

Procedural modeling is a technique to describe 3D objects by a constructive, generative description. In order to tap the full potential of this technique the content creator needs to be familiar with two worlds: procedural modeling techniques and computer graphics on the one hand, and domain-specific expertise and specialized knowledge on the other. This article presents a JavaScript-based approach to combining both worlds. It describes a modeling tool for generative modeling whose target audience consists of beginners and intermediate learners of procedural modeling techniques. Our approach is beneficial in various contexts: JavaScript is a widespread, easy-to-use language, and with our tool procedural models can be translated from JavaScript to various generative modeling and rendering systems.

Show publication details

Hofmann, Cristian Erik; Fellner, Dieter W. (Betreuer); Wulf, Volker (Betreuer)

Process-based Design of Multimedia Annotation Systems

2010

Darmstadt, TU, Diss., 2010

Annotation of digital multimedia comprises a range of different application scenarios, supported media and annotation formats, and involved techniques. Accordingly, recent annotation environments provide numerous functions and editing options. This results in complexly designed user interfaces, so that human operators become disoriented with respect to task procedures and the selection of appropriate tools. In this thesis we contribute to the operability of multimedia annotation systems in several novel ways. We introduce concepts to support annotation processes, transferring principles of Workflow Management. By particularly focusing on the behavior of graphical user interface components, we achieve a significant decrease in user disorientation and processing times. - In three initial studies, we investigate multimedia annotation from two different perspectives. A Feature-oriented Analysis of Annotation Systems describes applied techniques and forms of processed data. Moreover, an Empirical Study and Literature Survey elucidate different practices of annotation, considering case examples and proposed workflow models. - Based on the results of the preliminary studies, we establish a Generic Process Model of Multimedia Annotation, summarizing identified sub-processes and tasks, their sequential procedures, applied services, as well as involved data formats. - By a transfer into a Formal Process Specification we define information entities and their interrelations, constituting a basis for workflow modeling and declaring the types of data which need to be managed and processed by the technical system. - We propose a Reference Architecture Model, which elucidates the structure and behavior of a process-based annotation system, also specifying interactions and interfaces between the different integrated components. - As the central contribution of this thesis, we introduce a concept for Process-driven User Assistance.
This implies visual and interactive access to a given workflow, representation of the workflow progress, and status-dependent invocation of tools. We present results from a user study conducted by means of the SemAnnot framework, which we implemented based on the considerations mentioned above. In this study we show that the application of our proposed concept for process-driven user assistance leads to highly significant improvements in the operability of multimedia annotation systems. These improvements concern efficiency, learnability, usability, process overview, and user satisfaction.

Show publication details

Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2009

2010

Stuttgart : Fraunhofer Verlag, 2010

Selected Readings in Computer Graphics 20

The Fraunhofer Institute for Computer Graphics Research IGD, with offices in Darmstadt, Rostock, Singapore, and Graz, cooperates closely with the partner institutes at the respective universities - the Interactive Graphics Systems Group of Technische Universität Darmstadt, the Computer Graphics and Communication Group of the Institute of Computer Science at Rostock University, Nanyang Technological University (NTU), Singapore, and the Visual Computing Cluster of Excellence of Graz University of Technology - in projects and in research and development in the field of Computer Graphics. The "Selected Readings in Computer Graphics 2009" consist of 38 articles selected from a total of 183 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences, and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2009. They are published by Professor Dieter W. Fellner, the director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, who is also a professor in the Department of Computer Science at Technische Universität Darmstadt and a professor in the Faculty of Computer Science at Graz University of Technology.

Show publication details

Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2009. CD-ROM

2010

Stuttgart : Fraunhofer Verlag, 2010

Selected Readings in Computer Graphics 20

The Fraunhofer Institute for Computer Graphics Research IGD, with offices in Darmstadt, Rostock, Singapore, and Graz, cooperates closely with the partner institutes at the respective universities - the Interactive Graphics Systems Group of Technische Universität Darmstadt, the Computer Graphics and Communication Group of the Institute of Computer Science at Rostock University, Nanyang Technological University (NTU), Singapore, and the Visual Computing Cluster of Excellence of Graz University of Technology - in projects and in research and development in the field of Computer Graphics. The "Selected Readings in Computer Graphics 2009" consist of 38 articles selected from a total of 183 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences, and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2009. They are published by Professor Dieter W. Fellner, the director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, who is also a professor in the Department of Computer Science at Technische Universität Darmstadt and a professor in the Faculty of Computer Science at Graz University of Technology.

Show publication details

Nazemi, Kawa; Burkhardt, Dirk; Breyer, Matthias; Stab, Christian; Fellner, Dieter W.

Semantic Visualization Cockpit: Adaptable Composition of Semantics-Visualization Techniques for Knowledge-Exploration

2010

Auer, Michael (Ed.): ICL 2010 Proceedings [CD-ROM] : International Conference Interactive Computer Aided Learning. Kassel: University Press, 2010, pp. 163-173

International Conference Interactive Computer Aided Learning (ICL) <13, 2010, Hasselt, Belgium>

Semantic Web and ontology-based information processing systems are established technologies and techniques, not only in research areas and institutions. Worldwide projects and enterprises have already identified the added value of semantic technologies and work on different sub-topics for gathering and conveying knowledge. While the process of gathering and structuring semantic information plays a key role in most developed applications, the process of transferring knowledge to humans, and its adoption by them, is neglected, although the complex structure of knowledge design opens many research questions. The following paper describes a new approach for visualizing semantic information as a composition of different adaptable ontology-visualization techniques. We start with a categorized description of existing ontology visualization techniques and show potential gaps. After that, the new approach and its added value over existing systems are described. A case study within the largest German program for semantic information processing shows the usage of the system in real scenarios.

Show publication details

Stab, Christian; Breyer, Matthias; Nazemi, Kawa; Burkhardt, Dirk; Hofmann, Cristian Erik; Fellner, Dieter W.

SemaSun: Visualization of Semantic Knowledge Based on an Improved Sunburst Visualization Metaphor

2010

Herrington, Jan (Ed.) et al.: ED-Media 2010 : World Conference on Educational Multimedia, Hypermedia & Telecommunications [online]. Chesapeake: AACE, 2010, pp. 911-919

World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-Media) <2010, Toronto, Canada>

Ontologies have become an established data model for conceptualizing knowledge entities and describing semantic relationships between them. They are used to model the concepts of specific domains and are widespread in the areas of the semantic web, digital libraries, and multimedia database management. To gain the most possible benefit from this data model, it is important to offer adequate visualizations, so that users can easily acquire the knowledge. Most ontology visualization techniques are based on hierarchical or graph-based visualization metaphors. This may result in information loss, visual clutter, cognitive overload, or context loss. In this paper we describe a new ontology visualization technique called SemaSun that is based on the sunburst visualization metaphor. We extended this metaphor, which is naturally designed for displaying hierarchical data, to the tasks of displaying multiple inheritance and semantic relations. The approach also offers incremental ontology exploration to reduce the cognitive load without losing the informational context.
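A sunburst layout of the kind SemaSun builds on assigns each hierarchy node an angular span nested inside its parent's span. A minimal sketch, using a toy nested-dict hierarchy rather than SemaSun's actual layout code, could look like this:

```python
def count_leaves(node):
    """Number of leaves under a node; a leaf is an empty dict."""
    return 1 if not node else sum(count_leaves(c) for c in node.values())

def sunburst_angles(name, node, start=0.0, sweep=360.0, spans=None):
    """Assign each node an angular span proportional to its leaf count.

    `node` is a nested dict {child_name: child_dict}; returns a map
    {name: (start_angle, end_angle)}, the geometry a sunburst ring
    segment would be drawn from.
    """
    if spans is None:
        spans = {}
    spans[name] = (start, start + sweep)
    total = count_leaves(node)
    cursor = start
    for child_name, child in node.items():
        child_sweep = sweep * count_leaves(child) / total
        sunburst_angles(child_name, child, cursor, child_sweep, spans)
        cursor += child_sweep
    return spans

spans = sunburst_angles("root", {"A": {}, "B": {"C": {}, "D": {}}})
```

Displaying multiple inheritance, as SemaSun does, requires going beyond this strict nesting, e.g. by duplicating or linking segments, which is where the paper's extension of the metaphor comes in.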

Show publication details

Stab, Christian; Nazemi, Kawa; Fellner, Dieter W.

SemaTime - Timeline Visualization of Time-Dependent Relations and Semantics

2010

Bebis, George (Ed.) et al.: Advances in Visual Computing. 6th International Symposium, ISVC 2010 : Proceedings, Part III. Berlin, Heidelberg, New York: Springer, 2010. (Lecture Notes in Computer Science (LNCS) 6455), pp. 514-523

International Symposium on Visual Computing (ISVC) <6, 2010, Las Vegas, NV, USA>

Timeline-based visualizations arrange time-dependent entities along a time axis and are used in many different domains, like digital libraries, criminal investigation, and medical information systems, to support users in understanding chronological structures. By the use of semantic technologies, the information is categorized in a domain-specific, hierarchical schema and specified by semantic relations. Commonly, semantic relations in timeline visualizations are depicted by interconnecting entities with a directed edge. However, it is possible that semantic relations change over the course of time. In this paper we introduce a new timeline visualization for time-dependent semantics called SemaTime that offers a hierarchical categorization of time-dependent entities, including navigation and filtering features. We also present a novel concept for visualizing time-dependent relations that allows the illustration of time-varying semantic relations and affords an easily understandable visualization of complex, time-dependent interrelations.
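The core of time-dependent relations can be sketched as filtering relation tuples by a validity interval; the tuple format and the sample data below are illustrative assumptions, not SemaTime's actual data model:

```python
def relations_at(relations, t):
    """Filter time-dependent semantic relations to those valid at time t.

    Each relation is a tuple (subject, predicate, object, valid_from,
    valid_to); a timeline view would draw only the edges returned here
    for the currently selected point on the time axis.
    """
    return [(s, p, o) for (s, p, o, t0, t1) in relations if t0 <= t <= t1]

# Hypothetical example data (years):
relations = [
    ("Rome", "capital_of", "Roman Empire", -27, 476),
    ("Constantinople", "capital_of", "Roman Empire", 330, 1453),
]
active = relations_at(relations, 400)
```

Scrubbing along the time axis then amounts to re-running this filter, which is why a changing relation appears, moves, or disappears as the selected time changes.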

Show publication details

Schiefer, Andreas; Berndt, Rene; Ullrich, Torsten; Settgast, Volker; Fellner, Dieter W.

Service-Oriented Scene Graph Manipulation

2010

ACM SIGGRAPH: Proceedings Web3D 2010 : 15th International Conference on 3D Web Technology. New York: ACM Press, 2010, pp. 55-61

International Conference on 3D Web Technology (WEB3D) <15, 2010, Los Angeles, USA>

In this paper we present a software architecture for the integration of a RESTful web service interface in OpenSG applications. The proposed architecture can be integrated into any OpenSG application with minimal changes to the sources. Extending a scene graph application with a web service interface offers many new possibilities. Without much effort it is possible to review and control the scene and its components using a web browser. New ways of (browser based) user interactions can be added on all kinds of web enabled devices. As an example we present the integration of "SweetHome3D" into an existing virtual reality setup.
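How a RESTful interface can expose scene-graph nodes as addressable resources is illustrated by the following sketch, which maps URL paths onto a nested scene dictionary; the dictionary layout and node attributes are hypothetical stand-ins, not OpenSG's actual scene-graph API:

```python
def resolve(scene, path):
    """Walk a REST-style URL path down a nested scene-graph dict.

    e.g. resolve(scene, "/root/table/lamp") returns the lamp node,
    mirroring how a GET on that URL would address a scene-graph node.
    """
    node = {"children": scene}
    for part in filter(None, path.split("/")):
        node = node["children"][part]
    return node

# Hypothetical scene: a lamp standing on a table.
scene = {
    "root": {
        "children": {
            "table": {
                "children": {
                    "lamp": {"children": {}, "material": "brass"},
                },
            },
        },
    },
}
lamp = resolve(scene, "/root/table/lamp")
```

A PUT on the same URL would then update fields of the resolved node, which is all a browser-based controller needs to review and manipulate the scene remotely.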

Show publication details

Echizen, Isao; Pan, Jeng-Shyang; Fellner, Dieter W.; Nouak, Alexander; Kuijper, Arjan; Jain, Lakhmi C.

Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing. Proceedings: IIH-MSP 2010

2010

Los Alamitos, Calif. : IEEE Computer Society, 2010

International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP) <6, 2010, Darmstadt, Germany>

It is our pleasure to present to you the proceedings of the Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2010). Multimedia technologies with an increasing level of intelligence are emerging to enable the process of creating a global information infrastructure that will interconnect heterogeneous computer networks and various forms of information technologies around the world. IIH-MSP 2010, the sixth edition of the IIH-MSP series of conferences, is intended as an international forum for researchers and professionals in all areas of information hiding and multimedia signal processing. The major areas covered at the conference and presented in the proceedings include: information hiding and security; multimedia signal processing and networking; and bio-inspired multimedia technologies and systems.

Show publication details

Jung, Yvonne; Wagner, Sebastian; Jung, Christoph; Behr, Johannes; Fellner, Dieter W.

Storyboarding and Pre-Visualization with X3D

2010

ACM SIGGRAPH: Proceedings Web3D 2010 : 15th International Conference on 3D Web Technology. New York: ACM Press, 2010, pp. 73-81

International Conference on 3D Web Technology (WEB3D) <15, 2010, Los Angeles, USA>

This paper presents methods based on the open standard X3D to rapidly describe life-like characters and other scene elements in the context of storyboarding and pre-visualization. Current frameworks that employ virtual agents often rely on non-standardized pipelines and lack functionality to describe lighting, camera staging, or character behavior in a descriptive and simple manner. Even though demand for such a system is high, ranging from edutainment to pre-visualization in the movie industry, few such systems exist. To this end, we present the ANSWER framework, which provides a set of interconnected components that aid a film director in the process of film production from the planning stage to post-production. Rich and intuitive user interfaces are used for scene authoring, and the underlying knowledge model is populated using semantic web technologies, over which reasoning is applied. This transforms the user input into animated pre-visualizations that enable a director to experience and understand certain film-making decisions before production begins. In this context we also propose some extensions to the current X3D standard for describing cinematic content.

Show publication details

Hofmann, Cristian Erik; Fellner, Dieter W.

Supporting Collaborative Workflows of Digital Multimedia Annotation

2010

Lewkowicz, Myriam (Ed.) et al.: Proceedings of COOP 2010 : Proceedings of the 9th International Conference on the Design of Cooperative Systems. London: Springer, 2010, pp. 79-99

International Conference on the Design of Cooperative Systems (COOP) <9, 2010, Aix-en-Provence, France>

Collaborative annotation techniques for digital multimedia content have found their way into a vast number of areas of daily use as well as professional fields. Attendant research has produced a large number of projects that can be assigned to different specific subareas of annotation. These projects focus on one or only a few aspects of digital annotation. However, the whole annotation process as an operative unit has not been sufficiently taken into consideration, especially in collaborative settings. In order to address that lack of research, we present a framework that supports multiple collaborative workflows related to digital multimedia annotation. In that context, we introduce a process-based architecture model, a formalized specification of collaborative annotation processes, and a concept for personalized workflow visualization and user assistance.

Show publication details

Peña Serna, Sebastian; Stork, André; Fellner, Dieter W.

Tetrahedral Mesh-Based Embodiment Design

2010

The American Society of Mechanical Engineers (ASME): ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference : DETC 2010. New York: The American Society of Mechanical Engineers, 2010, 10 p.

ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (DETC) <2010, Montreal, Quebec, Canada>

Engineering design is a systematic approach implemented in the product development process, which is composed of several phases and supported by different tools. Computer-Aided Design (CAD) and Computer-Aided Engineering (CAE) tools are particularly dedicated to the embodiment phase and enable engineers to design and analyze a potential solution. Nonetheless, the lack of integration between CAD and CAE restricts the exploration of design variations. Hence, we aim at incorporating functionalities of a CAD system within a CAE environment by building a high-level representation of the mesh and allowing the engineer to handle and manipulate semantic features, avoiding the direct manipulation of single elements. Thus, the engineer is able to perform extruding, rounding, or dragging operations regardless of geometrical and topological limitations. In this paper, we present the intelligence that a simulation mesh needs to support in order to enable such operations.

Show publication details

Berndt, Rene; Blümel, Ina; Clausen, Michael; Damm, David; Diet, Jürgen; Fellner, Dieter W.; Fremerey, Christian; Klein, Reinhard; Krahl, Frank; Scherer, Maximilian; Schreck, Tobias; Sens, Irina; Thomas, Verena; Wessel, Raoul

The PROBADO Project - Approach and Lessons Learned in Building a Digital Library System for Heterogeneous Non-textual Documents

2010

Lalmas, Mounia (Ed.) et al.: Research and Advanced Technology for Digital Libraries : 14th European Conference ECDL. Proceedings. Berlin, Heidelberg, New York: Springer, 2010. (Lecture Notes in Computer Science (LNCS) 6273), pp. 376-383

European Conference on Research and Advanced Technology for Digital Libraries (ECDL) <14, 2010, Glasgow, UK>

The PROBADO project is a research effort to develop and operate advanced Digital Library support for non-textual documents. The main goal is to contribute to all parts of the Digital Library workflow, from content acquisition through indexing to search and presentation. While not limited in terms of supported document types, reference support is developed for classical digital music and 3D architectural models. In this paper, we review the overall goals, approaches taken, and lessons learned so far in a highly integrated effort of university researchers and library experts. We address the problem of technology transfer, aspects of repository compilation, and the problem of inter-domain retrieval. The experiences are relevant for other project efforts in the non-textual Digital Library domain.

Show publication details

Kahn, Svenja; Wuest, Harald; Fellner, Dieter W.

Time-of-Flight Based Scene Reconstruction with a Mesh Processing Tool for Model Based Camera Tracking

2010

Institute for Systems and Technologies of Information, Control and Communication (INSTICC): VISIGRAPP 2010. Proceedings : International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. INSTICC Press, 2010, pp. 302-309

International Conference on Computer Vision Theory and Applications (VISAPP) <5, 2010, Angers, France>

The most challenging algorithmic task for markerless Augmented Reality applications is the robust estimation of the camera pose. With a given 3D model of a scene, the camera pose can be estimated via model-based camera tracking without the need to manipulate the scene with fiducial markers. Up to now, the bottleneck of model-based camera tracking has been the availability of such a 3D model. Recently, time-of-flight cameras have been developed that acquire depth images in real time. With a sensor fusion approach combining the color data of a 2D color camera and the 3D measurements of a time-of-flight camera, we acquire a textured 3D model of a scene. We propose a semi-manual reconstruction step in which the alignment of several submeshes with a mesh processing tool is supervised by the user to ensure a correct alignment. The evaluation of our approach shows its applicability for reconstructing a 3D model which is suitable for model-based camera tracking, even for objects which are difficult to measure reliably with a time-of-flight camera due to their demanding surface characteristics.

Show publication details

Zmugg, René; Havemann, Sven; Fellner, Dieter W.

Towards a Voting Scheme for Calculating Light Source Positions from a Given Target Illumination

2010

Puppo, Enrico (Ed.) et al.: Eighth Eurographics Italian Chapter Conference : Eurographics Italian Chapter Proceedings. Goslar: Eurographics Association, 2010, pp. 41-48

Eurographics Italian Chapter Conference <8, 2010, Genova, Italy>

Lighting conditions can make the difference between success and failure of an architectural space. The vision of space-light co-design is that architects can control the impression of an illuminated space already at an early design stage, instead of first designing spaces and then searching for a good lighting setup. As a first step towards this vision, we propose a novel method to calculate potential light source positions from a given user-defined target illumination. The method is independent of the tessellation of the scene and assumes a homogeneous diffuse Lambertian material. This allows using a voting system that determines potential positions for standard light sources with chosen size and brightness. Votes are cast from an illuminated surface point to all potential positions of a light source that would yield this illumination. Vote clusters consequently indicate a more probable light source position. With a slight extension, the method can also identify mid-air light source positions.

Show publication details

Hofmann, Cristian Erik; Burkhardt, Dirk; Breyer, Matthias; Nazemi, Kawa; Stab, Christian; Fellner, Dieter W.

Towards a Workflow-Based Design of Multimedia Annotation Systems

2010

Herrington, Jan (Ed.) et al.: ED-Media 2010 : World Conference on Educational Multimedia, Hypermedia & Telecommunications [online]. Chesapeake: AACE, 2010, pp. 1224-1233

World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-Media) <2010, Toronto, Canada>

Annotation techniques for multimedia content have found their way into multiple areas of daily use as well as professional fields. A large number of research projects can be assigned to different specific subareas of digital annotation. Nevertheless, the annotation process, which gives rise to multiple workflows depending on different application scenarios, has not been sufficiently taken into consideration. A consideration of the respective processes and workflows requires detailed knowledge about practices of digital multimedia annotation. In order to establish fundamental groundwork for workflow-related research, this paper presents a comprehensive process model of multimedia annotation which results from a conducted empirical study. Furthermore, we provide a survey of the tasks that have to be accomplished by users and computing devices, the tools and algorithms that are used to handle specific tasks, and the types of data that are transferred between workflow steps. These aspects are assigned to the identified sub-processes of the model.

Show publication details

Binotto, Alecio; Pereira, Carlos Eduardo; Fellner, Dieter W.

Towards Dynamic Reconfigurable Load-balancing for Hybrid Desktop Platforms

2010

IEEE Computer Society: 2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum : IPDPSW. New York: IEEE Computer Society Press, 2010, 4 p.

IEEE International Parallel and Distributed Processing Symposium (IPDPS) <24, 2010, Atlanta, GA, USA>

High-performance platforms are required by applications that perform massive calculations. Today, desktop accelerators (like GPUs) form a powerful heterogeneous platform in conjunction with multi-core CPUs. To improve application performance on these hybrid platforms, load-balancing plays an important role in distributing the workload. However, such a scheduling problem is challenging, since the cost of a task on a Processing Unit (PU) is non-deterministic and depends on parameters that cannot be known a priori, such as input data, online creation of tasks, or scenario changes. Therefore, self-adaptive computing is a promising paradigm, as it can provide the flexibility to explore computational resources and improve performance in different execution scenarios. This paper presents an ongoing PhD research focused on a dynamic and reconfigurable scheduling strategy based on timing profiling for desktop accelerators. Preliminary results analyze the performance of solvers for SLEs (Systems of Linear Equations) on a hybrid CPU and multi-GPU platform applied to a CFD (Computational Fluid Dynamics) application. The choice of the best solver as well as its scheduling must be made dynamically, considering online parameters, in order to achieve better application performance.

Show publication details

Landesberger, Tatiana von; Kuijper, Arjan; Schreck, Tobias; Kohlhammer, Jörn; van Wijk, Jarke; Fekete, Jean-Daniel; Fellner, Dieter W.

Visual Analysis of Large Graphs

2010

Hauser, Helwig (Ed.) et al.: Eurographics 2010. State of the Art Reports (STARs), pp. 113-136

Eurographics <31, 2010, Norrköping, Sweden>

The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand, and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques were presented by Herman et al. [HMM00] and Diaz [DPS02]. The first work surveyed the main techniques for visualization of hierarchies and graphs in general that had been introduced up to 2000. The second work concentrated on graph layouts introduced up to 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever-growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process.

Show publication details

Landesberger, Tatiana von; Fellner, Dieter W. (Betreuer); van Wijk, Jarke (Betreuer)

Visual Analytics of Large Weighted Directed Graphs and Two-Dimensional Time-Dependent Data

2010

Darmstadt, TU, Diss., 2010

Analysts need to effectively assess large amounts of data. Often, their focus is on two types of data: weighted directed graphs and two-dimensional time-dependent data. These types of data are commonly examined in various application areas such as transportation, finance, or biology. The key elements in supporting the analysis are systems that seamlessly integrate interactive visualization techniques and data processing. The systems also need to offer the analyst the possibility to flexibly steer the analytical process. In this thesis, we present new techniques providing such flexible integrated combinations with tight user involvement in the analytical process for the two selected data types. We first develop new techniques for visual analysis of weighted directed graphs. - We enhance the analysis of entity relationships by integrating algorithmic analysis of connections into interactive visualization. - We improve the analysis of graph structure by several ways of motif-based analysis. - We introduce interactive visual clustering of graph connected components for gaining an overview of the data space. Second, we develop new methods for visual analysis of two-dimensional time-dependent data. We thereby combine animation and trajectory-based interactive visualizations with user-driven feature-based data analysis. - We extend guidelines for the use of animation by conducting a perception study of motion direction change. - We introduce interactive monitoring of a new set of data features in order to analyze the data dynamics. - We present visual clustering of trajectories of individual entities using self-organizing maps (SOM) with user control of the clustering process. As a basis for the development of the new approaches, we discuss the methodology of Visual Analytics and its related fields. We thereby extend the classification of Information Visualization and Interaction techniques used in Visual Analytics systems.
The developed techniques can be used in various application domains such as finance and economics, geography, social science, biology, transportation, or meteorology. In the financial domain, the techniques support analysts in making investment decisions, in assessment of company value, or in analysis of economy structure. We demonstrate our new methods on two real world data sets: shareholder networks and time-varying risk-return data.

Show publication details

Nazemi, Kawa; Breyer, Matthias; Burkhardt, Dirk; Fellner, Dieter W.

Visualization Cockpit: Orchestration of Multiple Visualizations for Knowledge-Exploration

2010

International Journal of Advanced Corporate Learning [online], Vol.3 (2010), 4, pp. 26-34

Semantic Web technologies and ontology-based information processing systems are established techniques, well beyond research areas and institutions. Various worldwide projects and enterprises have already identified the added value of semantic technologies and work on different sub-topics for gathering and conveying knowledge. While the process of gathering and structuring semantic information plays a key role in most developed applications, the process of transferring knowledge to humans, and its adoption by them, is neglected, although the complex structure of knowledge design opens many research questions. The customization of the presentation itself and the interaction techniques with these presentation artifacts are key questions for gainful and effective work with semantic information. The following paper describes a new approach for visualizing semantic information as a composition of different adaptable ontology-visualization techniques. We start with a categorized description of existing ontology visualization techniques and show potential gaps.

Show publication details

Havemann, Sven; Fellner, Dieter W.

3D Modeling in a Web Browser to Formulate Content-Based 3D Queries

2009

Fellner, Dieter W. (General Co-Chair) et al.: Proceedings Web3D 2009 : 14th International Conference on 3D Web Technology. New York: ACM Press, 2009, pp. 111-118

International Conference on 3D Web Technology (WEB3D) <14, 2009, Darmstadt, Germany>

We present a framework for formulating domain-dependent 3D search queries suitable for content-based 3D search over the web. Users are typically not willing to spend much time creating a 3D query object. They expect to quickly see a result set in which they can navigate by further differentiating the query object. Our system innovates by using a streamlined parametric 3D modeling engine on both the client and server side. Parametric tools have greater expressiveness: they allow shape manipulation through a few high-level parameters, as well as incremental assembly of query objects. Short command strings are sent from client to server to keep the query objects on both sides in sync. This reduces turnaround times and allows asynchronous updates of live result sets.

Show publication details

Hofmann, Cristian Erik; Hollender, Nina; Fellner, Dieter W.

A Workflow Model for Collaborative Video Annotation: Supporting the Workflow of Collaborative Video Annotation and Analysis Performed in Educational Settings

2009

Cordeiro, José (Ed.) et al.: CSEDU 2009 : Proceedings of the First International Conference on Computer Supported Education. Volume 2. Setúbal: INSTICC Press, 2009, pp. 199-204

International Conference on Computer Supported Education (CSEDU) <1, 2009, Lisboa, Portugal>

There is a growing number of application scenarios for computer-supported video annotation and analysis in educational settings. Related research work spans a large number of different research fields and approaches. Nevertheless, support for the annotation workflow has received little attention. As a first step towards developing a framework that assists users during the annotation process, the individual work steps, tasks, and sequences of the workflow had to be identified. In this paper, a model of the underlying annotation workflow is illustrated, considering its individual phases, tasks, and the iterative loops that can be especially associated with the collaborative processes taking place.

Show publication details

Ullrich, Torsten; Settgast, Volker; Ofenböck, Christian; Fellner, Dieter W.

Desktop Integration in Graphics Environments

2009

Hirose, Michitaka (Ed.) et al.: Virtual Environments 2009 : Joint Virtual Reality Conference of EGVE - ICAT - EuroVR. Aire-la-Ville: Eurographics Association, 2009, pp. 109-112

Joint Virtual Reality Conference (JVRC) <1, 2009, Lyon, France>

In this paper, we present the usage of the Remote Desktop Protocol to integrate arbitrary, legacy applications in various environments. This approach accesses a desktop on a real computer or within a virtual machine. The result is not one image of the whole desktop, but a sequence of images of all desktop components (windows, dialogs, etc.). These components are rendered into textures and fed into a rendering framework (OpenSG). There the functional hierarchy is represented by a scene graph. In this way the desktop components can be rearranged freely and painted according to circumstances of the graphical environment supporting a wide range of display settings - from immersive environments via high-resolution tiled displays to mobile devices.

Show publication details

Fellner, Dieter W.; Behr, Johannes; Bockholt, Ulrich

instantreality - A Framework for Industrial Augmented and Virtual Reality Applications

2009

The 2nd Sino-German Workshop "Virtual Reality & Augmented Reality in Industry" : Invited Paper Proceedings. Participants Edition. Shanghai: Shanghai Jiao Tong University, 2009, pp. 78-83

Sino-German Workshop "Virtual Reality & Augmented Reality in Industry" <2, 2009, Shanghai, China>

Rapid development in processing power, graphics cards, and mobile computers opens up a wide domain for Mixed Reality applications. The Mixed Reality continuum covers the complete spectrum from Virtual Reality using immersive projection technology to Augmented Reality using mobile systems like smartphones and UMPCs. At the Fraunhofer Institute for Computer Graphics (IGD), the Mixed Reality framework instantreality (www.instantreality.org) has been developed as a single and consistent interface for AR/VR developers. This framework provides a comprehensive set of features to support classic Virtual Reality (VR) as well as mobile Augmented Reality (AR). The goal is to provide a very simple application interface which includes the latest research results in the fields of highly realistic rendering, 3D user interaction, and fully immersive display technology. The system design is based on various industry standards to facilitate application development and deployment.

Show publication details

Fellner, Dieter W.; Baier, Konrad; Wehner, Detlef; Toll, Andrea

Jahresbericht 2008: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2009

Darmstadt, 2009

Show publication details

Godehardt, Eicke; Fellner, Dieter W. (Betreuer); Jähnichen, Stefan (Betreuer)

Kontextualisierte Visualisierung am wissensintensiven Arbeitsplatz

2009

Darmstadt, TU, Diss., 2009

Knowledge workers today face high demands. They must constantly open up new fields of work and quickly adapt to new circumstances. This dissertation presents a new approach to supporting workers at knowledge-intensive workplaces with contextualized visualizations. This visual support is aligned with the knowledge worker's current work situation (the context). This makes it possible to address specific problems and challenges directly at the workplace and to offer automated assistance during work. Suitably adapted visualizations can present the data at the knowledge-intensive workplace intuitively, allowing, for example, easier identification of important information, documents, and persons. Context information is a well-suited source for adapting visualizations to the current work situation (contextualization). To exploit these possibilities, a framework for contextualized visualization was developed. In the course of this work, the framework was deployed in several scenarios, instantiated, and evaluated both quantitatively and qualitatively. The evaluation with realistic data and representative test subjects showed a significant increase in productivity, reflected above all in the measured time and work quality. In addition, the evaluation yielded positive feedback on the individual perception of the framework. The scientific and technical result of this work is the framework with all its interfaces, definitions, and instantiations. In particular, the framework allows different visualization forms, data sources, and context sources to be integrated easily.

Show publication details

Fellner, Dieter W.; Sourin, Alexei; Behr, Johannes; Walczak, Krzysztof; Spencer, Stephen N.

Proceedings Web3D 2009: 14th International Conference on 3D Web Technology

2009

New York : ACM Press, 2009

International Conference on 3D Web Technology (WEB3D) <14, 2009, Darmstadt, Germany>

Show publication details

Hofmann, Cristian Erik; Hollender, Nina; Fellner, Dieter W.

Prozesse und Abläufe beim kollaborativen Wissenserwerb mittels computergestützter Videoannotation

2009

Schwill, Andreas (Ed.) et al.: DeLFI 2009 : Die 7. E-Learning Fachtagung Informatik der Gesellschaft für Informatik e.V.. Bonn: Köllen, 2009. (GI-Edition - Lecture Notes in Informatics (LNI) P-153), pp. 115-126

Fachtagung e-Learning der Gesellschaft für Informatik (DeLFI) <7, 2009, Berlin, Germany>

Computer-supported annotation and analysis of video content are increasingly used in different teaching and learning scenarios. A number of projects have addressed the research area of video annotation with different research emphases, but they have always focused on one or only a few parts of the overall annotation process. So far, the individual tasks, processes, and sequences underlying (collaborative) video annotation have not received sufficient attention. In this contribution, with particular regard to application in collaborative teaching and learning situations, we present a model that describes the phases, the tasks to be completed, and the concrete sequences within video annotation processes.

Show publication details

Strobl, Martin; Berndt, Rene; Havemann, Sven; Fellner, Dieter W.

Publishing 3D Content as PDF in Cultural Heritage

2009

Debattista, Kurt (Ed.) et al.: VAST 2009. Proceedings. Aire-la-Ville: Eurographics Association, 2009, pp. 117-124

International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <10, 2009, St. Julians, Malta>

Sharing 3D models with embedded annotations and additional information in a generally accessible way is still a major challenge. Using 3D technologies must become much easier, in particular in areas such as Cultural Heritage, where archeologists, art historians, and museum curators rely on robust, easy-to-use solutions. Sustainable exchange standards are vital since, unlike in industry, no sophisticated PLM or PDM solutions are common in CH. To solve this problem, we have examined the PDF file format and developed concepts and software for the exchange of annotated 3D models in a way that is not just comfortable but also sustainable. We show typical use cases for authoring and using PDF documents containing annotated 3D geometry. The resulting workflow is efficient and suitable for experienced users as well as for users working only with standard word processing tools and e-mail clients (plus, currently, Acrobat Pro Extended).

Show publication details

Encarnação, José L.; Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2008

2009

Stuttgart : Fraunhofer Verlag, 2009

Selected Readings in Computer Graphics 19

The Fraunhofer Institute for Computer Graphics Research IGD with offices in Darmstadt as well as in Rostock and Graz, the partner institutes at the respective universities, the Interactive Graphics System Group of Technische Universität Darmstadt, the Computergraphics and Communication Group of the Institute of Computer Science at Rostock University, and the Visual Computing Cluster of Excellence of Graz University of Technology, together with the Centre for Advanced Media Technology CAMTech, a co-foundation of Fraunhofer IGD with Nanyang Technological University (NTU), Singapore, and the Spanish Visual Communication and Interaction Technologies Centre Vicomtech, cooperate closely within projects and research and development in the field of Computer Graphics. The "Selected Readings in Computer Graphics 2008" consists of 41 articles selected from a total of 190 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences, and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2008. They are published by Professor José Luís Encarnação and Professor Dieter W. Fellner, the director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt. Both are professors at the Department of Computer Science at Technische Universität Darmstadt.

Show publication details

Encarnação, José L.; Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2008. CD-ROM

2009

Stuttgart : Fraunhofer Verlag, 2009

Selected Readings in Computer Graphics 19

The Fraunhofer Institute for Computer Graphics Research IGD with offices in Darmstadt as well as in Rostock and Graz, the partner institutes at the respective universities, the Interactive Graphics System Group of Technische Universität Darmstadt, the Computergraphics and Communication Group of the Institute of Computer Science at Rostock University, and the Visual Computing Cluster of Excellence of Graz University of Technology, together with the Centre for Advanced Media Technology CAMTech, a co-foundation of Fraunhofer IGD with Nanyang Technological University (NTU), Singapore, and the Spanish Visual Communication and Interaction Technologies Centre Vicomtech, cooperate closely within projects and research and development in the field of Computer Graphics. The "Selected Readings in Computer Graphics 2008" consists of 41 articles selected from a total of 190 scientific publications contributed by all these institutions. All articles previously appeared in various scientific books, journals, conferences, and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in Computer Graphics in the year 2008. They are published by Professor José Luís Encarnação and Professor Dieter W. Fellner, the director of the Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt. Both are professors at the Department of Computer Science at Technische Universität Darmstadt.

Show publication details

Bein, Matthias; Havemann, Sven; Stork, André; Fellner, Dieter W.

Sketching Subdivision Surfaces

2009

Grimm, Cindy (Ed.) et al.: ACM SIGGRAPH / Eurographics Symposium on Sketch-Based Interfaces and Modeling 2009. Proceedings. New York: ACM, 2009, pp. 61-68

ACM SIGGRAPH / Eurographics Symposium on Sketch-Based Interfaces and Modeling (SBIM) <6, 2009, New Orleans, LA, USA>

We describe a 3D modeling system that combines subdivision surfaces with sketch-based modeling in order to meet two conflicting goals: ease of use and fine-grained shape control. For the excellent control, low-poly modeling is still the method of choice for creating high-quality 3D models, e.g., in the games industry. However, direct mesh editing can be very tedious and time consuming. Our idea is to include also stroke-based techniques for rapidly modeling regular surface parts. We propose a simple and efficient algorithm for converting a 2D stroke to a control polygon suitable for Catmull/Clark subdivision surfaces. We have realized a small but reasonably rich set of interactive modeling tools to assess the expressiveness of stroke-based mesh design with a number of examples.

Show publication details

Hofmann, Cristian Erik; Hollender, Nina; Fellner, Dieter W.

Task- and Process-related Design of Video Annotation Systems

2009

Wandke, Hartmut (Ed.) et al.: Mensch & Computer 2009 : 9. Fachübergreifende Konferenz für interaktive und kooperative Medien. München: Oldenbourg, 2009, pp. 173-182

Mensch & Computer <9, 2009, Berlin, Germany>

Various research projects have already addressed the design of video annotation applications. Nevertheless, collaborative application scenarios as well as the needs of users regarding the annotation workflow have received little attention. This paper discusses requirements for the design of video annotation systems. As our main contribution, we consider aspects that can be associated with collaborative use scenarios as well as requirements for supporting the annotation workflow, considering not only the tasks but also the processes and sequences within it. Our goals are to provide the reader with an understanding of the specific characteristics and requirements of video annotation, to establish a framework for evaluation, and to guide the design of video annotation tools.

Show publication details

Fünfzig, Christoph; Ullrich, Torsten; Fellner, Dieter W.; Bachelder, Edward N.

Terrain and Model Queries Using Scalar Representations with Wavelet Compression

2009

IEEE Transactions on Instrumentation and Measurement, Vol.58 (2009), 9, pp. 3086-3093

In this paper, we present efficient height/distance field data structures for line-of-sight (LOS) queries on terrains and collision queries on arbitrary 3-D models. The data structure uses a pyramid of quad-shaped regions with the original height/distance field at the highest level and an overall minimum/maximum value at the lower levels. The pyramid can be stored compactly in a wavelet-like decomposition, but using max and plus operations. Additionally, we show how to obtain minimum/maximum values for regions in a wavelet decomposition using real algebra. For LOS calculations, we compare with a kd-tree representation containing the maximum height values. Furthermore, we show that the LOS calculation is a special case of a collision detection query. Using our wavelet-like approach, even general and arbitrary collision detection queries can be answered efficiently.

Show publication details

Havemann, Sven; Settgast, Volker; Eide, Oyvind; Fellner, Dieter W.

The Arrigo Showcase Reloaded - Towards a Sustainable Link Between 3D and Semantics

2009

ACM Journal on Computing and Cultural Heritage, Vol.2 (2009), 1, pp. 4:1-4:13

It is still a big technical problem to establish a relation between a shape and its meaning in a sustainable way. We present a solution with a markup method that allows for labeling parts of a 3D object very much like labeling parts of a hypertext. A 3D markup can serve both as hyperlink and as link anchor, which is the key to bidirectional linking between 3D objects and Web documents. Our focus is on a sustainable 3D software infrastructure for application scenarios ranging from email and Internet over authoring and browsing semantic networks to interactive museum presentations. We demonstrate the workflow and the effectiveness of our tools by redoing the Arrigo 3D Showcase. We are working towards a best practice example for information modeling in cultural heritage.

Show publication details

2009

Bullinger, Hans-Jörg (Ed.): Technology Guide : Principles, Applications, Trends. Berlin, Heidelberg, New York: Springer, 2009, pp. 250-255

The rapid development of microprocessors and graphics processing units (GPUs) has had an impact on information and communication technologies (ICT) over recent years. "Shaders" offer real-time visualisation of complex, computer-generated 3D models with photorealistic quality. Shader technology comprises hardware and software modules which colour virtual 3D objects and model reflective properties. These developments have laid the foundations for mixed reality systems which enable both immersion into and real-time interaction with the environment. These environments are based on Milgram's mixed reality continuum, in which reality is a graduated spectrum ranging from real to virtual spaces.

Show publication details

Hofmann, Cristian Erik; Hollender, Nina; Fellner, Dieter W.

Workflow-Based Architecture for Collaborative Video Annotation

2009

HCI International 2009. Proceedings and Posters [DVD-ROM] : With 10 further Associated Conferences. Berlin, Heidelberg, New York: Springer, 2009. (Lecture Notes in Computer Science (LNCS)), LNCS 5621, pp. 33-42

International Conference on Online Communities and Social Computing (OCSC) <3, 2009, San Diego, CA, USA>

In video annotation research, support for the video annotation workflow has received little attention, especially concerning collaborative use cases. Previous research projects each focus on a different essential part of the whole annotation process. We present a reference architecture model based on identified phases of the video annotation workflow. In a first step, the underlying annotation workflow is described with respect to its single phases, tasks, and loops. Secondly, the system architecture is described with respect to its elements, their internal procedures, and the interaction between these elements. The goals of this paper are to provide the reader with a basic understanding of the specific characteristics and requirements of collaborative video annotation processes, and to define a reference framework for the design of video annotation systems that include a workflow management system.

Show publication details

Ullrich, Torsten; Settgast, Volker; Fellner, Dieter W.

Abstand: Distance Visualization for Geometric Analysis

2008

Ioannides, Marinos (Ed.) et al.: Digital Heritage. Proceedings of the 14th International Conference on Virtual Systems and Multimedia. Project Papers : VSMM 2008, pp. 334-340

International Conference on Virtual Systems and MultiMedia (VSMM) <14, 2008, Limassol, Cyprus>

The need to analyze and visualize differences between very similar objects arises in many research areas: mesh compression, scan alignment, nominal/actual value comparison, quality management, and surface reconstruction, to name a few. Although the problem of visualizing distances may sound simple, the creation of a good scene setup including the geometry, materials, colors, and the representation of distances is challenging. Our contribution to this problem is an application which optimizes the workflow for visualizing distances. We propose a new classification scheme to group typical scenarios. For each scenario we provide reasonable defaults for color tables, material settings, etc. Complemented with predefined file exporters, which are harmonized with commonly used rendering and viewing applications, the presented application is a valuable tool. Based on web technologies, it works out of the box and does not need any configuration or installation. All users who have to analyze and document 3D geometry stand to benefit from our new application.
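
One building block such a tool needs is the mapping from distances to colors via a color table with sensible defaults. A minimal sketch, with the stop positions and the blue-white-red ramp chosen by us (the paper's actual defaults are scenario-specific):

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def color_table(d, d_max,
                stops=((0.0, (0, 0, 255)),      # small distance: blue
                       (0.5, (255, 255, 255)),  # medium distance: white
                       (1.0, (255, 0, 0)))):    # large distance: red
    """Map a distance |d| in [0, d_max] onto a piecewise-linear color ramp."""
    t = max(0.0, min(1.0, abs(d) / d_max))      # normalize and clamp
    for (t0, c0), (t1, c1) in zip(stops, stops[1:]):
        if t <= t1:
            return lerp(c0, c1, (t - t0) / (t1 - t0))
    return stops[-1][1]
```

Per-vertex distances mapped this way can be written out as vertex colors by the file exporters the abstract mentions.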

Show publication details

Fellner, Dieter W.; Schaub, Jutta

Achievements and Results - Annual Report 2007: Fraunhofer Institute for Computer Graphics IGD [CD-ROM]

2008

Darmstadt, 2008

Show publication details

Offen, Lars; Fellner, Dieter W.

BioBrowser - Visualization of and Access to Macro-Molecular Structures

2008

Linsen, Lars (Ed.) et al.: Visualization in Medicine and Life Sciences. Berlin; Heidelberg; New York: Springer, 2008. (Mathematics and Visualization), pp. 257-274

Based on the results of an interdisciplinary research project, the paper addresses the embedding of knowledge about the function of different parts/structures of a macro-molecule (protein, DNA, RNA) directly into the 3D model of this molecule. Thereby the 3D visualization becomes an important user interface component when accessing domain-specific knowledge - similar to a web browser enabling its users to access various kinds of information. In the prototype implementation - named BioBrowser - various information related to bio-research is managed by a database using fine-grained access control. This also supports restricting access to parts of the material based on user privileges. The database is exposed via a SOAP web service, so that it is possible (after identifying oneself through a login procedure, of course) to query, change, or add information remotely by using the 3D model of the molecule. All these actions are performed on substructures of the molecules, which can be selected either through a simple query language or by picking them in the 3D model with the mouse.

Show publication details

Ullrich, Torsten; Krispel, Ulrich; Fellner, Dieter W.

Compilation of Procedural Models

2008

Spencer, Stephen N. (Ed.): Proceedings WEB3D 2008 : 13th International Symposium on 3D Web Technology. New York: ACM Press, 2008, pp. 75-81

International Conference on 3D Web Technology (WEB3D) <13, 2008, Los Angeles, CA, USA>

Scripting techniques are used in various contexts. The field of application ranges from layout description languages (PostScript), user interface description languages (XUL) and classical scripting languages (JavaScript) to action nodes in scene graphs (VRMLScript) and web-based desktop applications (AJAX). All these applications have in common a growing share of scripted components - especially in computer graphics. As the interpretation of a geometric script is computationally more intensive than the handling of static geometry, optimization techniques such as just-in-time compilation are of great interest. Unfortunately, scripting languages tend to support features such as higher-order functions and self-modification, and these language characteristics are difficult to compile into machine or byte code. Therefore, we present a hybrid approach: an interpreter with an integrated compiler. In this way we speed up script evaluation without having to remove any language features, e.g. the possibility of self-modification. We demonstrate its usage with XGML - a dialect of the generative modeling language GML, which is characterized by its dynamic behavior.
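
The hybrid strategy can be sketched in a few lines (a toy model of our own, not the GML/XGML implementation): expressions run interpreted until they become hot, are then compiled and cached, and any redefinition drops the cache entry so self-modification keeps working.

```python
class HybridEvaluator:
    """Scripts start out interpreted; once a function gets 'hot' its code
    object is compiled and cached. Redefining a function (self-modification)
    drops the cached version so dynamic language features keep working."""

    def __init__(self, hot_threshold=2):
        self.source = {}     # name -> expression source text
        self.counts = {}     # name -> number of calls so far
        self.compiled = {}   # name -> cached compiled code object
        self.hot_threshold = hot_threshold

    def define(self, name, src):
        self.source[name] = src
        self.counts[name] = 0
        self.compiled.pop(name, None)   # invalidate cache on self-modification

    def call(self, name, **env):
        self.counts[name] += 1
        if name not in self.compiled and self.counts[name] >= self.hot_threshold:
            # hot: compile once, reuse the code object from now on
            self.compiled[name] = compile(self.source[name], name, 'eval')
        code = self.compiled.get(name, self.source[name])
        return eval(code, {}, env)      # string -> interpreted, code object -> compiled
```

After `define('area', 'w * h')`, the first `call('area', w=3, h=4)` interprets the source, the second compiles and caches it, and a later `define` invalidates the cache.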

Show publication details

Lancelle, Marcel; Settgast, Volker; Fellner, Dieter W.

Definitely Affordable Virtual Environment. DAVE: Demo (Video + Description)

2008

Institute of Electrical and Electronics Engineers (IEEE): IEEE Virtual Reality 2008. Proceedings [DVD-ROM] : VR '08. Los Alamitos, Calif.: IEEE Computer Society, 2008, Video 35 MB; Poster 1 p.

IEEE Virtual Reality Conference (VR) <15, 2008, Reno, Nevada, USA>

The DAVE is an immersive projection environment, a four-sided CAVE. DAVE stands for "definitely affordable virtual environment". "Affordable" means that by mostly using standard hardware components we can greatly reduce costs compared to other commercial systems. We show the hardware setup and some applications in the accompanying video. In 2005 we built a new version of our DAVE at the University of Technology in Graz, Austria. Room restrictions motivated a new compact design to make optimal use of the available space. The back-projection material, cut to a custom shape, is stretched onto the wooden frame to provide a flat surface without ripples.

Show publication details

Wuest, Harald; Fellner, Dieter W. (Betreuer); Stricker, Didier (Betreuer)

Efficient Line and Patch Feature Characterization and Management for Real-time Camera Tracking

2008

Darmstadt, TU, Diss., 2008

One of the key problems of augmented reality is tracking the camera position and viewing direction in real time. Current vision-based systems mostly rely on the detection and tracking of fiducial markers. Some markerless approaches exist, which are based on 3D line models or calibrated reference images. These methods require a significant amount of manual preprocessing, which is not suitable for the efficient development and design of industrial AR applications. This preprocessing overload is addressed by the development of vision-based tracking algorithms which require minimal effort for the preparation of reference data. A novel method for the automatic view-dependent generation of line models in real time is presented. The tracking system only needs a polygonal model of a reference object, which is often available from the industrial construction process. Analysis-by-synthesis techniques are used, with the support of graphics hardware, to create a connection between the virtual and the real model. Point-based methods which rely on optical-flow-based template tracking are developed for camera pose estimation in partially known scenarios. With the support of robust reconstruction algorithms, a real-time tracking system for augmented reality applications is developed which is able to run with only very limited prior knowledge about the scene. Robustness and real-time capability are improved with a statistical approach to feature management based on machine learning techniques.

Show publication details

Mendez, Erick; Schall, Gerhard; Havemann, Sven; Junghanns, Sebastian; Fellner, Dieter W.; Schmalstieg, Dieter

Generating Semantic 3D Models of Underground Infrastructure

2008

IEEE Computer Graphics and Applications, Vol.28 (2008), 3, pp. 48-57

Procedural Methods for Urban Modelling

By combining two previously unrelated techniques - semantic markup in a scene-graph and generative modeling - a new framework retains semantic information until late in the rendering pipeline. This is a crucial prerequisite for achieving enhanced visualization effects and interactive behavior that doesn't compromise interactive frame rates. The proposed system creates interactive 3D visualizations from 2D geospatial databases in the domain of utility companies' underground infrastructure, creating urban models based on the companies' real-world data. The system encodes the 3D models in a scene-graph that mixes visual models with semantic markup that interactively filters and styles the models. The actual graphics primitives are generated on the fly by scripts that are attached to the scene-graph nodes.

Show publication details

Nazari Shirehjini, Ali A.; Fellner, Dieter W. (Betreuer); Ferscha, Alois (Betreuer)

Interaktion in Ambient Intelligence: Konzeption eines intuitiven Assistenten zur ganzheitlichen und konfliktfreien Interaktion in adaptiven Umgebungen

2008

Darmstadt, TU, Diss., 2008

Ambient Intelligence denotes, in particular, a new paradigm for the interaction between humans and their everyday environment. A closer look at AmI environments shows that the number of electronic devices and their functional complexity are growing constantly, while users are overwhelmed by operating and controlling the technology. Due to miniaturization, many devices offer only cumbersome and limited means of operation. Alongside the growing complexity, this poses a further problem for human-environment interaction. The acceptance of the technology, and with it the success of the AmI paradigm, depends essentially on intuitive interaction. Intuitive operating concepts are needed so that users do not become helplessly lost in the abundance of technology. This thesis therefore focuses on the research area of assistance for holistic and conflict-free interaction in adaptive environments. The questions it addresses are, first, how users will interact with such a large number of complex systems in these environments, and second, how users interact in an unfamiliar environment with a technology that, like "electricity, water, and telephone", is ubiquitous while at the same time "disappearing invisibly into the background". The central question of this work, however, is how users can manually select devices without having technical information about the infrastructure. In addressing the constraints outlined above, this dissertation makes substantial contributions to the fields of domain analysis and ontologies, context awareness, and human-environment interaction.

Show publication details

Steiner, Marc; Reiter, Philipp; Ofenböck, Christian; Settgast, Volker; Ullrich, Torsten; Lancelle, Marcel; Fellner, Dieter W.

Intuitive Navigation in Virtual Environments

2008

Mohler, Betty J. (Ed.) et al.: Virtual Environments 2008. Posters. Aire-la-Ville: Eurographics Association, 2008, pp. 5-8

Eurographics Symposium on Virtual Environments (EGVE) <14, 2008, Eindhoven, the Netherlands>

We present several novel ways of interaction and navigation in virtual worlds. Using the optical tracking system of our four-sided Definitely Affordable Virtual Environment (DAVE), we designed and implemented navigation and movement controls based on the user's gestures and postures. Our techniques are more natural and intuitive than a standard 3D joystick-based approach, which would compromise the impact of the immersion.

Show publication details

Fellner, Dieter W.; Schaub, Jutta

Leistungen und Ergebnisse - Jahresbericht 2007: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2008

Darmstadt, 2008

Show publication details

Havemann, Sven; Fellner, Dieter W.

Progressive Combined B-reps - Multi-Resolution Meshes for Interactive Real-time Shape Design

2008

Skala, Vaclav (Ed.): Journal of WSCG Vol. 16 No. 1-3, 2008. Proceedings. Plzen: University of West Bohemia, 2008, pp. 121-133

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <16, 2008, Plzen, Czech Republic>

We present the Combined B-rep (cB-rep) as a multiresolution data structure suitable for interactive modeling and visualization of models composed of both free-form and polygonal parts. It is based on a half-edge data structure combined with Catmull/Clark subdivision surfaces. In addition to displaying the curved parts of the surface at an adaptive level of detail, the control mesh itself can be changed interactively at runtime using Euler operators. The tessellation of changed parts of the mesh is incrementally updated in real time. All changes in the mesh are logged, so that a complete undo/redo mechanism can be provided. We introduce Euler macros as a grouping mechanism for Euler operator sequences. The macro dependency graph, a directed acyclic graph, can be used for creating progressively increasing resolutions of the control mesh, and to guide the view-dependent refinement (pcB-rep). We consider Progressive Combined B-reps to be of use for data visualization and interactive 3D modeling, as well as a compact representation of synthetic 3D models.

Show publication details

Dold, Christian; Fellner, Dieter W. (Betreuer); Bischof, Horst (Betreuer)

Retrospective and Prospective Motion Correction for Magnetic Resonance Imaging of the Head

2008

Graz, TU, Diss., 2010

The compensation of motion artifacts during Magnetic Resonance Imaging (MRI) of the head is essential to obtain excellent images. The detection of motion is sometimes done by the MRI system itself to reduce the artifacts (so-called navigators or navigator echoes). This prolongs the scan time and needs additional radio frequency (RF) pulses, not utilizing the whole available magnetization for imaging and interrupting the steady-state image readout. Moreover, the motion information is delayed by at least one repetition time (TR) because of data processing. Furthermore, eddy currents are produced, so that the detection of motion is not entirely compatible with the imaging acquisition sequences. Also, the planning time for the image acquisition is longer, because the localization of the navigator differs among patients. Therefore these techniques are not established and are rarely used in clinical routine. In this work a new method was developed which is able to detect and compensate head motion simultaneously with the image acquisition. To achieve this, the gradients and the RF are updated depending on motion data detected by an optical tracking system in order to compensate the motion. The compatibility of the stereoscopic tracking system and the MRI was analyzed first. Markers, developed to be visible in both modalities, are used in the calibration procedure for correlating the coordinate systems. In a first version, a time-synchronized acquisition scheme of both systems was used to compensate translational motion retrospectively. In a second version, the motion data is passed directly to the MRI system to compensate motion prospectively. For the prospective trials, two optical systems with accuracies of 200 µm RMS and 60 µm RMS were evaluated. It has been shown that the accuracy of the motion detection is decisive for the success of the correction. A "volume to volume", followed by a "slice to slice", and finally a "line to line" correction was performed. For that, the MR sequences for echo planar imaging (EPI), spin echo (SE), gradient echo (GRE) and turbo spin echo (TSE) were adapted to work with optical prospective motion correction (OPROMOC) on phantoms and humans. The calibration phase and the construction were re-engineered several times in order to eventually reach an isotropic accuracy of 60 µm with a latency of less than 32 ms. In addition, the technique was compared to established motion compensation techniques. In spectroscopic MR data acquisition, which is used to display metabolic information, the OPROMOC technique was the first to correct motion artifacts. Initial applications of the optical tracking system in functional imaging (fMRI), high-resolution imaging and spectroscopy have shown that the technique will be convenient for brain imaging, registration and the acquisition of metabolic information.

Show publication details

Ullrich, Torsten; Fellner, Dieter W.

Robust Shape Fitting and Semantic Enrichment

2008

Georgopoulos, Andreas: XXI CIPA Symposium 2007. Proceedings : AntiCIPAting the Future of the Cultural Past [online]. [cited 27 March 2014] Available from: http://cipa.icomos.org/index.php?id=63, 2008, 6 p.

International Workshop on e-Documentation and Standardisation in Cultural Heritage (CIPA) <21, 2007, Athens, Greece>

A robust fitting and reconstruction algorithm has to cope with two major problems. First of all, it has to be able to deal with noisy input data and outliers. Furthermore, it should be capable of handling mixtures of multiple data sets. The decreasing exponential approach is robust towards outliers and mixtures of multiple data sets. It is able to fit a parametric model to a given point cloud. As parametric models use a description which may contain not only a generative shape but also information about the inner structure of an object, the presented approach can enrich measured data with an ideal description. This technique offers a wide range of applications.
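
The decreasing exponential idea can be made concrete with a small instance (our own 2D example; the paper fits full parametric shape templates, not lines): an iteratively reweighted least-squares fit in which each residual r is weighted by exp(-(r/sigma)^2), so gross outliers are effectively switched off after the first pass.

```python
import math

def robust_line_fit(points, sigma=5.0, iterations=10):
    """Fit y = a*x + b to (x, y) points; residuals are down-weighted by
    exp(-(r/sigma)^2), so outliers contribute almost nothing."""
    w = [1.0] * len(points)          # pass 1 is an ordinary least-squares fit
    a = b = 0.0
    for _ in range(iterations):
        # weighted least squares for the line parameters
        sw  = sum(w)
        sx  = sum(wi * x for wi, (x, y) in zip(w, points))
        sy  = sum(wi * y for wi, (x, y) in zip(w, points))
        sxx = sum(wi * x * x for wi, (x, y) in zip(w, points))
        sxy = sum(wi * x * y for wi, (x, y) in zip(w, points))
        denom = sw * sxx - sx * sx
        if abs(denom) < 1e-12:
            break                    # degenerate weighting, keep last estimate
        a = (sw * sxy - sx * sy) / denom
        b = (sy - a * sx) / sw
        # decreasing exponential reweighting for the next pass
        w = [math.exp(-((y - (a * x + b)) / sigma) ** 2) for x, y in points]
    return a, b
```

With points on y = 2x + 1 plus a couple of gross outliers, the first (unweighted) pass is badly biased, but the exponential weights drive the outlier influence to essentially zero in the following passes.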

Show publication details

Encarnação, José L.; Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2007. CD-ROM: Veröffentlichungen aus dem INI-GraphicsNet

2008

Darmstadt : INI-GraphicsNet Stiftung, 2008

Selected Readings in Computer Graphics. CD-ROM 18

The International Network of Institutions for advanced education, training and R&D in Computer Graphics technology, systems and applications (INI-GraphicsNet) is the largest research network worldwide entirely dedicated to the field of Computer Graphics. The "Selected Readings in Computer Graphics 2007" consist of 34 articles selected from a total of 177 scientific publications contributed by all institutions of the INI-GraphicsNet. All articles previously appeared in various scientific books, journals, conferences and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in the INI-GraphicsNet in the year 2007. They are published by Professor José Luis Encarnação, the director of the board of the INI-GraphicsNet Stiftung and Professor Dieter W. Fellner, the director of Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, the largest member of the INI-GraphicsNet.

Show publication details

Encarnação, José L.; Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2007: Veröffentlichungen aus dem INI-GraphicsNet

2008

Stuttgart : Fraunhofer IRB Verlag, 2008

Selected Readings in Computer Graphics 18

The International Network of Institutions for advanced education, training and R&D in Computer Graphics technology, systems and applications (INI-GraphicsNet) is the largest research network worldwide entirely dedicated to the field of Computer Graphics. The "Selected Readings in Computer Graphics 2007" consist of 34 articles selected from a total of 177 scientific publications contributed by all institutions of the INI-GraphicsNet. All articles previously appeared in various scientific books, journals, conferences and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in the INI-GraphicsNet in the year 2007. They are published by Professor José Luis Encarnação, the director of the board of the INI-GraphicsNet Stiftung and Professor Dieter W. Fellner, the director of Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, the largest member of the INI-GraphicsNet.

Show publication details

Ullrich, Torsten; Settgast, Volker; Fellner, Dieter W.

Semantic Fitting and Reconstruction

2008

ACM Journal on Computing and Cultural Heritage, Vol.1 (2008), 2, 20 pp.

The current methods to describe the shape of three-dimensional objects can be classified into two groups: methods following the composition-of-primitives approach, and descriptions based on procedural shape representations. As a 3D acquisition device returns an agglomeration of elementary objects (e.g. a laser scanner returns points), the model acquisition pipeline always starts with a composition of primitives. Due to the semantic information carried with a generative description, a procedural model provides valuable metadata that form the basis for digital library services: retrieval, indexing, and searching. An important challenge in computer graphics in the field of cultural heritage is to build a bridge between the generative and the explicit geometry description, combining both worlds: the accuracy and systematics of generative models with the realism and irregularity of real-world data. A first step towards a semantically enriched data description is a reconstruction algorithm based on decreasing exponential fitting. This approach is robust towards outliers and mixtures of multiple data sets. It does not need a preceding segmentation and is able to fit a generative shape template to a point cloud, identifying the parameters of a shape.

Show publication details

Berndt, Rene; Havemann, Sven; Settgast, Volker; Fellner, Dieter W.

Sustainable Markup and Annotation of 3D Geometry

2008

Ioannides, Marinos (Ed.) et al.: Digital Heritage. Proceedings of the 14th International Conference on Virtual Systems and Multimedia. Full Papers : VSMM 2008, pp. 187-193

International Conference on Virtual Systems and MultiMedia (VSMM) <14, 2008, Limassol, Cyprus>

We propose a novel general method to enrich ordinary 3D models with semantic information. Based on the Collada format, this approach fits perfectly into the XML world: it allows bi-directional linking, from a web resource to (a part of) a 3D model, and in the reverse direction as well. We also describe our software framework prototype for 3D annotation by non-3D-specialists, in our case cultural heritage professionals.
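
The bi-directional linking can be sketched on a plain XML fragment (the element and attribute names below are our invention; the actual approach builds on COLLADA's extension mechanism): an annotation carries both an anchor into the model and an href to a web resource, so the link can be resolved in either direction.

```python
import xml.etree.ElementTree as ET

# Hypothetical scene fragment: a node of the 3D model carrying an annotation.
SCENE = """<node id="arrigo_head">
  <extra>
    <annotation anchor="arrigo_head" href="https://example.org/wiki/Arrigo"/>
  </extra>
</node>"""

root = ET.fromstring(SCENE)

def links_from_part(tree, part_id):
    """3D part -> web documents: follow annotations anchored at this id."""
    if tree.get('id') != part_id:
        return []
    return [a.get('href') for a in tree.iter('annotation')
            if a.get('anchor') == part_id]

def parts_from_link(tree, href):
    """Web document -> 3D parts: find anchors that reference this URL."""
    return [a.get('anchor') for a in tree.iter('annotation')
            if a.get('href') == href]
```

Because the markup lives inside the model file itself, both directions survive copying or archiving the model, which is the sustainability argument of the abstract.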

Show publication details

Havemann, Sven; Settgast, Volker; Berndt, Rene; Eide, Oyvind; Fellner, Dieter W.

The Arrigo Showcase Reloaded - Towards a Sustainable Link between 3D and Semantics

2008

Ashley, Michael (Ed.) et al.: VAST 2008. Proceedings. Aire-la-Ville: Eurographics Association, 2008, pp. 125-132

International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <9, 2008, Braga, Portugal>

It is still a big technical problem to establish a relation between a shape and its meaning in a sustainable way. We present a solution with a markup method that allows labeling parts of a 3D object in a similar way to labeling parts of a hypertext. A 3D markup can serve both as hyperlink and as link anchor, which is the key to bi-directional linking between 3D objects and web documents. Our focus is on a sustainable 3D software infrastructure for application scenarios ranging from e-mail and internet over authoring and browsing semantic networks to interactive museum presentations. We demonstrate the workflow and the effectiveness of our tools by re-doing the Arrigo 3D showcase. We are working towards a "best practice" example for information modeling in cultural heritage.

Show publication details

Schreck, Tobias; Fellner, Dieter W.; Keim, Daniel A.

Towards Automatic Feature Vector Optimization for Multimedia Applications

2008

Association for Computing Machinery (ACM): ACM Symposium on Applied Computing 2008. Proceedings : SAC 2008. New York: ACM Press, 2008, pp. 1197-1201

Annual ACM Symposium on Applied Computing (SAC) <23, 2008, Fortaleza, Ceará, Brazil>

We systematically evaluate a recently proposed method for unsupervised discrimination power analysis for feature selection and optimization in multimedia applications. A series of experiments using real and synthetic benchmark data is conducted, the results of which indicate the suitability of the method for unsupervised feature selection and optimization. We present an approach for generating synthetic feature spaces of varying discrimination power, modelling the main characteristics of real-world feature vector extractors. A simple yet powerful visualization is used to communicate the results of the automatic analysis to the user.
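
As a rough illustration of unsupervised feature selection (the scoring rule here, variance after min-max normalization, is a generic proxy of our choosing, not the discrimination-power measure evaluated in the paper): score each feature dimension without labels and keep the highest-scoring ones.

```python
def select_features(vectors, k):
    """Keep the k dimensions with the highest variance after min-max
    normalization; constant dimensions score zero (no discrimination)."""
    dims = len(vectors[0])
    scores = []
    for d in range(dims):
        col = [v[d] for v in vectors]
        lo, hi = min(col), max(col)
        if hi == lo:
            scores.append(0.0)          # constant dimension: useless for retrieval
            continue
        norm = [(x - lo) / (hi - lo) for x in col]
        mean = sum(norm) / len(norm)
        scores.append(sum((x - mean) ** 2 for x in norm) / len(norm))
    ranked = sorted(range(dims), key=lambda d: scores[d], reverse=True)
    return sorted(ranked[:k])           # indices of the selected dimensions
```

In a retrieval setting, the selected dimensions would then form the reduced feature vector used for similarity search.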

Show publication details

Fellner, Dieter W.; Kamps, Thomas; Kohlhammer, Jörn; Stricker, Anna

Vorsprung durch Wissen

2008

ZWF Zeitschrift für wirtschaftlichen Fabrikbetrieb, Vol.103 (2008), 4, pp. 205-208

For more than 20 years, the scientists of the Fraunhofer Institute for Computer Graphics Research IGD have not left knowledge management to chance. The researchers develop intelligent search solutions and information visualization technologies, and with their innovations enable many companies and organizations to react quickly to the demands of today's dynamic information society. The ConWeaver software developed at Fraunhofer IGD offers companies semantic, integrated search across database boundaries. The search system automatically extracts corporate knowledge from heterogeneous data sources and represents it in the form of multilingual semantic knowledge networks. The Visual Analytics group at Fraunhofer IGD works on data visualization and information analysis, developing real-time solutions for the simulation and interactive visualization of large multidimensional volumes of data and information.

Show publication details

Ullrich, Torsten; Techmann, Torsten; Fellner, Dieter W.

Web-based Algorithm Tutorials in Different Learning Scenarios

2008

Luca, Joseph (Ed.) et al.: Proceedings of ED-Media 2008 : World Conference on Educational Multimedia, Hypermedia & Telecommunications [CD-ROM]. Chesapeake, 2008, pp. 5467-5472

World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-Media) <2008, Vienna, Austria>

The combination of scripting languages with web technologies offers many possibilities in teaching. This paper presents a scripting framework that consists of a Java and JavaScript engine and an integrated editor. It allows editing scripts and source code online, writing new applications, modifying existing ones, and starting them from within the editor with a simple mouse click. This framework is a good basis for online tutorials. The included ready-to-run scripts can replace simple Java applets without drawbacks but with many more possibilities. Furthermore, these scripts are well suited to different teaching scenarios: demo applications can be started via web browser and modified just in time, either during a lecture or within a drill-and-practice session. Examples in the context of computer graphics illustrate the usefulness of our framework in lectures.

Show publication details

Havemann, Sven; Settgast, Volker; Lancelle, Marcel; Fellner, Dieter W.

3D-Powerpoint - Towards a Design Tool for Digital Exhibitions of Cultural Artifacts

2007

Arnold, David (Ed.) et al.: VAST 2007. Proceedings : The 8th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage. Aire-la-Ville: Eurographics Association, 2007, pp. 39-46; 145 [Color Plate]

International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <8, 2007, Brighton, United Kingdom>

We describe first steps towards a suite of tools for CH professionals to set up and run digital exhibitions of cultural 3D artifacts in museums. Both the authoring and the presentation views shall ultimately be as easy to use as, e.g., Microsoft PowerPoint. But instead of separate slides our tool uses pre-defined 3D scenes, called "layouts", containing geometric objects acting as placeholders, called "drop targets". They can be replaced quite easily, in a drag-and-drop fashion, by digitized 3D models, as well as by text and images, to customize and adapt a digital exhibition to the style of the real museum. Furthermore, the tool set contains easy-to-use tools for the rapid 3D modeling of simple geometry and for the alignment of given models to a common coordinate system. The technical innovation is that the tool set is not a monolithic application. Instead, it is completely based on scripted designs, using the OpenSG scene graph engine and the GML scripting language. This makes it extremely flexible: anybody capable of drag-and-drop can design 3D exhibitions, and anybody capable of GML scripting can create new designs. Finally, we claim that the presentation setup of our designs is "grandparent-compliant", meaning that it permits the public audience to inspect beautiful cultural 3D objects in detail without getting lost or feeling uncomfortable.

Show publication details

Hopp, Armin; Havemann, Sven; Fellner, Dieter W.

A Single Chip DLP Projector for Stereoscopic Images of High Color Quality and Resolution

2007

Fröhlich, Bernd (Ed.) et al.: Virtual Environments 2007. IPT-EGVE 2007 : Short Papers and Posters. Aire-la-Ville: Eurographics Association, 2007, pp. 21-26

Eurographics Symposium on Virtual Environments (EGVE) <13, 2007, Weimar, Germany>

We present a novel stereoscopic projection system. It combines all the advantages of modern single-chip DLP technology - attractive price, great brightness, high contrast, superior resolution and color quality - with those of active stereoscopy: invariance to the orientation of the user and an image separation of nearly 100%. With a refresh rate of 60 Hz per eye (120 Hz in total) our system is flicker-free even for sensitive users. The system permits external projector synchronisation, which makes it possible to build affordable stereoscopic multi-projector systems, e.g., for immersive visualisation.

Show publication details

Wesarg, Stefan; Fellner, Dieter W. (Betreuer); Giannitsis, Evangelos (Betreuer)

Automatisierte Analyse und Visualisierung der Koronararterien und großen Kavitäten des Herzens für die klinische Anwendung

2007

Darmstadt, TU, Diss., 2007

Various methods for the analysis of image data of the cardiovascular system are presented. They improve both diagnosis and the planning of potentially necessary interventions. The described methods are characterized by a high degree of automation, reproducible analysis results, and clinical applicability. The focus lies on the coronary arteries on the surface of the heart and on the cavities forming the main part of the heart, the left and right ventricles. Several new segmentation and analysis methods developed in this thesis are presented and discussed. For the coronary arteries, these are tracking-based segmentation approaches that form the basis for analyzing a vessel with respect to the detection and measurement of stenoses, the presence of hard calcifications, and the composition of the surrounding tissue. Furthermore, a method is presented that allows the results obtained in this way to be compared with coronary angiography, the gold standard. For a suitable presentation of the analysis results, specially developed methods for the optimal visualization of both the results and the image data themselves are introduced. Regarding the latter, an automatic method is introduced for masking out structures, such as the rib cage, that obstruct the direct view of the heart. For the analysis of the left (LV) and right ventricle (RV), automated segmentation methods are presented from whose results the physical parameters describing the dynamics of the ventricles can be derived. For the LV, a comprehensive, automatic and detailed analysis of wall motion, wall thickening, and volume change is presented. Asynchrony is introduced as a new descriptor of the dynamics.
The analysis methods developed for the LV are transferred to the RV, enabling an entirely new quality of its analysis. The computed parameters are presented in a standardized manner following the recommendations of the American Heart Association. As an extension of this representation, the direct visualization of these quantities together with a 3D rendering of the LV is introduced. This feeds into a combined display of dynamic parameters and infarcted regions of the heart; the latter are also quantified automatically. The main contributions of this thesis are: 1. the development of two new tracking-based algorithms for the segmentation of coronary arteries in contrast-enhanced CT data, 2. the introduction of new visualization methods for presenting the results of the coronary analysis, 3. the creation of direct means of comparison between CT angiography and conventional angiography, 4. the combination of existing segmentation techniques with anatomical knowledge for an automated extraction of the left and right ventricle, 5. the establishment of comprehensive analysis methods for the dynamics of the left ventricle, 6. the first application of these approaches to the dynamics of the right ventricle, 7. the introduction into the LV analysis of a new parameter describing the asynchronous behavior of ventricular regions, and 8. the extension of infarct diagnostics with automatic scar quantification and new visualization methods.

Show publication details

Ullrich, Torsten; Fellner, Dieter W.

Client-Side Scripting in Blended Learning Environments

2007

Ercim News, (2007), 71, pp.43-44

The computer graphics tutorial CGTutorial was developed by the Institute of Computer Graphics and Knowledge Visualization at Graz University of Technology in Austria. It combines a scripting engine and a development environment with Java-based Web technology. The result is a flexible framework which allows algorithms to be developed and studied without the need to install libraries or set up compiler configurations. Together with already written example scripts, the framework is ready to use. Each example script is a small runnable demonstration application that can be started directly within a browser. Using a scripting engine that interprets Java and JavaScript on a client, the demos can be modified and analysed by the user and then restarted. This combination of scripting engines and Web technology is thus a perfect environment for blended learning scenarios.

Show publication details

Hopp, Armin; Fellner, Dieter W.; Havemann, Sven

Cube3D² - Ein Single Chip DLP Stereo Projektor

2007

Schenk, Michael (Ed.): 10. IFF-Wissenschaftstage 2007. Tagungsband [CD-ROM] : 15 Jahre Fraunhofer IFF. Magdeburg: Fraunhofer IFF, 2007, VR-Teil, pp. 77-86

IFF-Wissenschaftstage <10, 2007, Magdeburg, Germany>

This article describes the successful development of a stereoscopy-capable digital projector designed specifically for VR/AR applications. Rather than assembling a VR/AR system from existing technologies, explicit design targets for such a system were formulated in order to minimize the drawbacks of known technologies. This led to the development of a completely new 3D projector.

Show publication details

Ullrich, Torsten; Settgast, Volker; Krispel, Ulrich; Fünfzig, Christoph; Fellner, Dieter W.

Distance Calculation between a Point and a Subdivision Surface

2007

Lensch, Hendrik P. A. (Ed.) et al.: Vision, Modeling, and Visualization 2007. Proceedings : VMV 2007. Saarbrücken: Max Planck Institut für Informatik, 2007, pp. 161-169

Vision, Modeling, and Visualization (VMV) <12, 2007, Saarbrücken, Germany>

This article focuses on algorithms for fast computation of the Euclidean distance between a query point and a subdivision surface. The analyzed algorithms include uniform tessellation approaches, an adaptive evaluation technique, and an algorithm using Bézier conversions. These methods are combined with a grid hashing structure for space partitioning to speed up their runtime. The results show that a pretessellated surface is sufficient for small models. Considering runtime, accuracy, and memory usage, an adaptive on-the-fly evaluation of the surface turns out to be the best choice.
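
The uniform-tessellation variant of this idea can be sketched as follows: surface sample points are hashed into a uniform grid, and a query searches cells in growing rings around the query point until no closer sample can exist. All names and the ring-search details are illustrative assumptions, not the implementation evaluated in the paper.

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    """Hash pre-tessellated surface sample points into a uniform grid."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        grid[key].append(p)
    return grid

def distance_to_surface(q, grid, cell, max_rings=10):
    """Approximate point-to-surface distance via ring search in the grid."""
    qk = tuple(int(math.floor(c / cell)) for c in q)
    best = float("inf")
    for r in range(max_rings + 1):
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for dz in range(-r, r + 1):
                    if max(abs(dx), abs(dy), abs(dz)) != r:
                        continue  # visit only the shell of ring r
                    key = (qk[0] + dx, qk[1] + dy, qk[2] + dz)
                    for p in grid.get(key, ()):
                        best = min(best, math.dist(q, p))
        # Any point in ring r+1 or beyond is at least r * cell away,
        # so we can stop once the best distance is within that bound.
        if best <= r * cell:
            return best
    return best
```

The accuracy is bounded by the tessellation density, which is why the paper's adaptive on-the-fly evaluation wins for larger models.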

Show publication details

Fünfzig, Christoph; Ullrich, Torsten; Fellner, Dieter W.; Bachelder, Edward N.

Empirical Comparison of Data Structures for Line-Of-Sight Computation

2007

IEEE Instrumentation and Measurement Society: IEEE International Symposium on Intelligent Signal Processing : WISP 2007. Piscataway, NJ: IEEE Service Center, 2007, 6 p.

IEEE International Symposium on Intelligent Signal Processing (WISP) <2007, Alcala de Henares, Spain>

Line-of-sight (LOS) computation is important for the interrogation of heightfield grids in the context of geo-information and for many simulation tasks such as electromagnetic wave propagation and flight surveillance. Compared to searching the regular grid directly, more advanced data structures like a 2.5D kd-tree offer better performance. We describe the construction of a 2.5D kd-tree from the digital elevation model and its use for LOS computation on a point-reconstructed or bilinearly reconstructed terrain surface. For compact storage, we use a wavelet-like storage scheme which saves half of the storage space without considerably compromising runtime performance. We give an empirical comparison of both approaches on practical data sets, which shows the method of choice for CPU computation of LOS.
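
As a minimal illustration of the underlying test (not the paper's accelerated 2.5D kd-tree method), a LOS query on a bilinearly reconstructed heightfield can be written as a uniform sampler along the sight line; the function names and the fixed step count are assumptions.

```python
def bilinear(h, x, y):
    """Bilinearly reconstructed terrain height at (x, y) from grid h[row][col]."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(h[0]) - 1)
    y1 = min(y0 + 1, len(h) - 1)
    fx, fy = x - x0, y - y0
    top = h[y0][x0] * (1 - fx) + h[y0][x1] * fx
    bot = h[y1][x0] * (1 - fx) + h[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def line_of_sight(h, a, b, steps=256):
    """True if the segment from a = (x, y, z) to b stays above the terrain."""
    for i in range(steps + 1):
        t = i / steps
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        z = a[2] + t * (b[2] - a[2])
        if z < bilinear(h, x, y):
            return False  # sight line dips below the reconstructed surface
    return True
```

A kd-tree over terrain height bounds prunes most of these per-sample lookups, which is exactly the gain the paper measures.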

Show publication details

Bustos, Benjamin; Fellner, Dieter W.; Havemann, Sven; Keim, Daniel A.; Saupe, Dietmar; Schreck, Tobias

Foundations of 3D Digital Libraries: Current Approaches and Urgent Research Challenges

2007

Castelli, Donatella (Ed.) et al.: Foundations of Digital Libraries : Pre-Proceedings of the First International Workshop on "Digital Libraries Foundations", pp. 7-12

International Workshop on Digital Libraries Foundations (DLF) <1, 2007, Vancouver, BC, Canada>

3D documents are an indispensable data type in many important application domains such as Computer Aided Design, Simulation and Visualization, and Cultural Heritage, to name a few. The 3D document type can represent arbitrarily complex information by composing geometrical, topological, structural, or material properties, among others. It is often integrated with metadata and annotations by the various application systems that produce, process, or consume 3D documents. We argue that due to the inherent complexity of the 3D data type, in conjunction with its imminent pervasive usage and the explosion of available content, there is a pressing need to address key problems of the 3D data type. These problems need to be tackled before the 3D data type can be fully supported by Digital Library technology in the sense of a generalized document, unlocking its full potential. If the problems are addressed appropriately, the expected benefits are manifold and may lead to radically improved production, processing, and consumption of 3D content.

Show publication details

Settgast, Volker; Ullrich, Torsten; Fellner, Dieter W.

Information Technology for Cultural Heritage

2007

IEEE Potentials, Vol.26 (2007), 4, pp. 38-43

Information technology applications in the field of cultural heritage include various disciplines of computer science. The workflow from archaeological discovery to scientific preparation demands multidisciplinary cooperation and interaction at various levels. This article describes the information technology pipeline from the computer science point of view. The description starts with model acquisition: computer vision algorithms can generate a raw three-dimensional (3-D) model from input data such as photos and scans. In the next step, computer graphics methods create an accurate, high-level model description. Besides geometric information, each model needs semantic metadata to perform digital library tasks such as storage, markup, indexing, and retrieval. A structured repository of virtual artifacts completes the pipeline - at least from the computer science point of view.

Show publication details

Encarnação, José L.; Fellner, Dieter W.; Schaub, Jutta

Leistungen und Ergebnisse - Jahresbericht 2006: Fraunhofer-Institut für Graphische Datenverarbeitung IGD

2007

Darmstadt, 2007

Show publication details

Krottmaier, Harald; Kurth, Frank; Steenweg, Thorsten; Appelrath, Hans-Jürgen; Fellner, Dieter W.

PROBADO - A Generic Repository Integration Framework

2007

Kovács, László (Ed.) et al.: Research and Advanced Technology for Digital Libraries. Proceedings : 11th European Conference, ECDL 2007. Berlin, Heidelberg, New York: Springer, 2007. (Lecture Notes in Computer Science (LNCS) 4675), pp. 518-521

European Conference on Research and Advanced Technology for Digital Libraries (ECDL) <11, 2007, Budapest, Hungary>

The number of newly generated multimedia documents (e.g. music, e-learning material, or 3D graphics) increases year by year. Today, the workflow in digital libraries focuses on textual documents only. Hence, with regard to content-based retrieval tasks, multimedia documents are not analyzed and indexed sufficiently. To facilitate content-based retrieval and browsing, it is necessary to introduce recent techniques for multimedia document processing into the workflow of today's digital libraries. In this short paper, we introduce the PROBADO framework, which will (a) integrate different types of content repositories - each one specialized for a specific multimedia domain - into one seamless system, and (b) add features available in text-based digital libraries (such as automatic annotation, full-text retrieval, or recommender services) to non-textual documents. Existing libraries will benefit from the framework since it extends existing technology for handling textual documents with features for dealing with the non-textual domain.

Show publication details

Encarnação, José L.; Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2006. CD-ROM: Veröffentlichungen aus dem INI-GraphicsNet

2007

Darmstadt : INI-GraphicsNet Stiftung, 2007

Selected Readings in Computer Graphics. CD-ROM 17

The International Network of Institutions for advanced education, training and R&D in Computer Graphics technology, systems and applications (INI-GraphicsNet) is the largest research network worldwide entirely dedicated to the field of Computer Graphics. The "Selected Readings in Computer Graphics 2006" consist of 29 articles selected from a total of 168 scientific publications contributed by all institutions of the INI-GraphicsNet. All articles previously appeared in various scientific books, journals, conferences and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in the INI-GraphicsNet in the year 2006. They are published by Professor José Luis Encarnação, the director of the board of the INI-GraphicsNet Stiftung and Professor Dieter W. Fellner, the director of Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, the largest member of the INI-GraphicsNet.

Show publication details

Encarnação, José L.; Fellner, Dieter W.; Schaub, Jutta

Selected Readings in Computer Graphics 2006: Veröffentlichungen aus dem INI-GraphicsNet

2007

Stuttgart : Fraunhofer IRB Verlag, 2007

Selected Readings in Computer Graphics 17

The International Network of Institutions for advanced education, training and R&D in Computer Graphics technology, systems and applications (INI-GraphicsNet) is the largest research network worldwide entirely dedicated to the field of Computer Graphics. The "Selected Readings in Computer Graphics 2006" consist of 29 articles selected from a total of 168 scientific publications contributed by all institutions of the INI-GraphicsNet. All articles previously appeared in various scientific books, journals, conferences and workshops, and are reprinted with permission of the respective copyright holders. The publications had to undergo a thorough review process by internationally leading experts and established technical societies. Therefore, the Selected Readings should give a fairly good and detailed overview of the scientific developments in the INI-GraphicsNet in the year 2006. They are published by Professor José Luis Encarnação, the director of the board of the INI-GraphicsNet Stiftung and Professor Dieter W. Fellner, the director of Fraunhofer Institute for Computer Graphics Research IGD in Darmstadt, the largest member of the INI-GraphicsNet.

Show publication details

Leeb, Robert; Settgast, Volker; Fellner, Dieter W.; Pfurtscheller, Gert

Self-paced Exploration of the Austrian National Library Through Thought

2007

International Journal of Bioelectromagnetism, Vol.9 (2007), 4, pp. 237-244

The results of a self-paced Brain-Computer Interface (BCI) based on the detection of sensorimotor electroencephalogram rhythms during motor imagery are presented. The participants were given the task of moving through a virtual model of the Austrian National Library by performing motor imagery. This work shows that five participants who were trained in a synchronous BCI could successfully perform the asynchronous experiment.

Show publication details

Havemann, Sven; Fellner, Dieter W.

Seven Research Challenges of Generalized 3D Documents

2007

IEEE Computer Graphics and Applications, Vol.27 (2007), 3, pp. 70-76

Graphically Speaking

The rapid evolution of information and communication technology has always been a source for challenging new research questions in computer science. The vision of the emerging research field of semantic 3D is to establish the notion of generalized 3D documents that are full members of the family of generalized documents. This means that access would be content-based rather than based on metadata. The purpose of this article is to highlight the research issues that impede the realization of this vision today. The seven research challenges include: (1) '3D data set' can have many meanings, (2) a sustainable 3D file format, (3) representation-independent stable 3D markup, (4) representation-independent 3D query operations, (5) documenting provenance and processing history, (6) consistency between shape and meaning, and (7) closing the semantic gap.

Show publication details

Schreck, Tobias; Tekusová, Tatiana; Kohlhammer, Jörn; Fellner, Dieter W.

Trajectory-Based Visual Analysis of Large Financial Time Series Data

2007

ACM SIGKDD Explorations Newsletter, Vol.9 (2007), 2, pp. 30-37

Visual Analytics seeks to combine automatic data analysis with visualization and human-computer interaction facilities to solve analysis problems in applications characterized by the occurrence of large amounts of complex data. The financial data analysis domain is a promising field for research and application of Visual Analytics technology, as it prototypically involves the analysis of large data volumes in solving complex analysis tasks. We introduce a Visual Analytics system for supporting the analysis of large amounts of financial time-varying indicator data. The system is driven by the idea of extending standard technical chart analysis from one- to two-dimensional indicator space. It relies on an unsupervised clustering algorithm combined with an appropriately designed movement data visualization technique. Several analytical views on the full market and specific assets are offered for the user to navigate, explore, and analyze. The system includes automatic screening of the potentially large visualization space, preselecting possibly interesting candidate data views for presentation to the user. The system is applied to a large data set of time-varying 2-D stock market data, demonstrating its effectiveness for visual analysis of financial data. We expect the proposed techniques to be beneficial in other application areas as well.
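
The unsupervised clustering step can be illustrated by a plain k-means over two-dimensional points, such as per-asset movement vectors in indicator space. The abstract does not name the specific clustering algorithm, so this is a generic sketch; all names and parameters are assumptions.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2D points (e.g. per-asset movement vectors)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from the data itself
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                            (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        # Move each center to the mean of its group (keep it if empty).
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

Each resulting cluster can then be rendered as one aggregated movement glyph instead of hundreds of overlapping trajectories.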

Show publication details

Ullrich, Torsten; Fünfzig, Christoph; Fellner, Dieter W.

Two Different Views on Collision Detection

2007

IEEE Potentials, Vol.26 (2007), 1, pp. 26-30

In this article, we present two algorithms for precise collision detection between two potentially colliding objects. The first uses axis-aligned bounding boxes (AABBs) and is a typical representative of a computational geometry algorithm. The second uses spherical distance fields, which originate in image processing. Both approaches address common requirements of collision detection algorithms, such as just-in-time results and low resource consumption. Both are scalable in the information they provide: collision determination and analysis are carried out up to a fixed refinement level, the collision time depends on the granularity of the bounding volumes, and tight time bounds for the collision test can be estimated.
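
The AABB-based approach rests on the elementary interval-overlap test. A minimal sketch of that test, plus a broad-phase pairing step over the bounding volumes of one refinement level, might look as follows (names are illustrative, not the article's implementation):

```python
class AABB:
    """Axis-aligned bounding box given by its min and max corners."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # each a (x, y, z) tuple, lo <= hi per axis

    def overlaps(self, other):
        # Two boxes intersect iff their extents overlap on every axis.
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

def colliding_pairs(boxes_a, boxes_b):
    """Report which box pairs of two objects overlap at one refinement level."""
    return [(i, j) for i, a in enumerate(boxes_a)
                   for j, b in enumerate(boxes_b) if a.overlaps(b)]
```

Only the surviving pairs need to be refined further, which is what makes the cost of the test depend on the granularity of the bounding volumes.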

Show publication details

Fellner, Dieter W.; Hansen, Charles

Eurographics 2006. Short Papers

2006

Aire-la-Ville : Eurographics Association, 2006

Eurographics <27, 2006, Vienna, Austria>

Show publication details

Hildenbrand, Dietmar; Alexa, Marc (Betreuer); Fellner, Dieter W. (Betreuer); Straßer, Wolfgang (Betreuer)

Geometric Computing in Computer Graphics and Robotics Using Conformal Geometric Algebra

2006

Darmstadt, TU, Diss., 2006

In computer graphics and robotics, many different mathematical systems such as vector algebra, homogeneous coordinates, quaternions, and dual quaternions are used for different applications. A change of paradigm now seems to lie ahead, based on Conformal Geometric Algebra, which unifies all of these approaches in one mathematical system. Conformal Geometric Algebra is a very powerful mathematical framework. Due to its geometric intuitiveness, compactness and simplicity it is easy to develop new algorithms. Algorithms based on Conformal Geometric Algebra lead to enhanced quality, reduced development time, and better understandable and maintainable solutions. Often a clear structure and greater elegance come at the cost of runtime performance. However, it is shown that algorithms based on Conformal Geometric Algebra can even be faster than conventional algorithms. The main contribution of this thesis is a geometrically intuitive and nevertheless efficient algorithm for a computer animation application, namely an inverse kinematics algorithm for a virtual character. This algorithm is based on an embedding of quaternions in Conformal Geometric Algebra. For performance reasons two optimization approaches are applied, making the application three times faster than the conventional solution. With these results we are convinced that Geometric Computing using Conformal Geometric Algebra will become more and more fruitful in a great variety of applications in computer graphics and robotics.

Show publication details

Volmer, Stephan; Encarnação, José L. (Betreuer); Fellner, Dieter W. (Betreuer)

Inhaltsbasierte Bildsuche mittels visueller Merkmale

2006

Darmstadt, TU, Diss., 2006

The ever-growing amount of available digital data calls for new methods that enable targeted access to relevant information. The automatic indexing of pictorial information in digital image data plays a central role in this context. The classical approach, manually annotating image content with alphanumeric text, has proved too error-prone and too costly. In this thesis, an alternative approach is developed that makes it possible to index large amounts of digital image data using feature-based methods. This is done under the assumption that the underlying image material is not restricted in any way, neither in its appearance nor in its meaning. First, a general model for feature-based retrieval of visual content in digital images is defined. This model provides the formal framework for developing and combining new algorithms for feature extraction and indexing. It enables content-based image retrieval on the basis of a system with a uniform architecture and standardized interfaces; such a system can be extended for a particular problem by developing individual application-specific building blocks. The starting point for developing a feature extraction algorithm is a meaningful interpretation of the colors of individual discrete pixels. In contrast to the technical representation, humans distinguish only a handful of different colors. This thesis therefore introduces a novel color representation that makes the color information of an image accessible on the basis of color names that are meaningful to humans. The underlying mathematical framework allows colors to be compared simply and quickly.
Such an interpretation of color information can be put to use in almost any task in digital image processing. Building on this color representation, several universal extraction algorithms are presented that compactly describe certain visual aspects of a digital image. Large amounts of data entail correspondingly long processing times when searching for information. The last part of the thesis therefore presents an indexing method that breaks the proportional relationship between data volume and processing time. The indexing method is based on localizing the search in the immediate neighborhood of the query in feature space. By narrowing the search space, a significant speed-up of the search can be achieved. Since such a method is inherently subject to a certain inaccuracy, experimental results are presented that document its usefulness in practice.

Show publication details

Havemann, Sven; Fellner, Dieter W. (Betreuer); Müller, Heinrich (Betreuer)

Generative Mesh Modeling

2005

Braunschweig, TU, Diss., 2005

Show publication details

Roth, Marcus; Encarnação, José L. (Betreuer); Fellner, Dieter W. (Betreuer)

Parallele Bildberechnung in einem Netzwerk von Workstations

2005

Darmstadt, TU, Diss., 2005

This thesis describes the use of PC clusters for the interactive visualization of three-dimensional polygonal models. The goal was to improve rendering quality by pooling the power of many computers, and furthermore to simplify the development of new cluster-based graphics applications. It is shown how a scene graph system must be extended to enable distributed rendering on a cluster. To this end, various synchronization mechanisms were investigated and new network protocols developed. On the basis of the distributed scene graph, different parallelization strategies were explored, and optimized methods for the parallel processing of image space and of the scene are presented. With the methods developed, arbitrary compound projection systems such as tiled displays, workbenches, or CAVEs can be operated very efficiently using a PC cluster. It is also possible to display very large scenes at high frame rates.

Show publication details

Fellner, Dieter W.; Scopigno, Roberto

Eurographics 2002. State of the Art Reports: Bridges between real and virtual worlds

2002

Aire-la-Ville : Eurographics Association, 2002

Eurographics <23, 2002, Saarbrücken, Germany>

Show publication details

Fellner, Dieter W.

Seminal Contributions from Computers & Graphics: In Honor of the 60th Birthday of J.L. Encarnação

2001

Amsterdam : Pergamon Press, Elsevier Science Ltd., 2001

Show publication details

Fellner, Dieter W.

Digitale Bibliotheken: Informatik-Lösungen für globale Wissensmärkte

2000

Heidelberg : dpunkt, 2000