• Publications

Tausch, Reimar; Domajnko, Matevz; Ritz, Martin; Knuth, Martin; Santos, Pedro; Fellner, Dieter W.

Towards 3D Digitization in the GLAM (Galleries, Libraries, Archives, and Museums) Sector – Lessons Learned and Future Outlook


The IPSI BgD Transactions on Internet Research

The European Cultural Heritage Strategy for the 21st Century and the Digital Agenda, one of the flagship initiatives of the Europe 2020 Strategy, have led to an increased demand for fast, efficient and faithful 3D digitization technologies for cultural heritage artefacts. 3D digitization has proven to be a promising approach to enable precise reconstructions of objects. Yet, unlike the digital acquisition of cultural goods in 2D, which is widely used and automated today, 3D digitization often still requires significant manual intervention, time and money. To enable heritage institutions to make use of large-scale, economic, and automated 3D digitization technologies, the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD has developed CultLab3D, the world’s first fully automatic 3D mass digitization technology for collections of three-dimensional objects. 3D scanning robots such as the CultArm3D-P are specifically designed to automate the entire 3D digitization process, making it possible to capture and archive objects on a large scale and to produce highly accurate, photo-realistic representations. The unique setup shortens the time needed for digitization from several hours to several minutes per artefact.


Domajnko, Matevz; Tanksale, Tejas Madan; Tausch, Reimar; Ritz, Martin; Knuth, Martin; Santos, Pedro; Fellner, Dieter W.

End-to-end Color 3D Reproduction of Cultural Heritage Artifacts: Roseninsel Replicas


GCH 2019

Eurographics Workshop on Graphics and Cultural Heritage (GCH) <17, 2019, Sarajevo, Bosnia and Herzegovina>

Planning exhibitions of cultural artifacts is always challenging. Artifacts can be very sensitive to the environment, and therefore their display can be risky. One way to circumvent this is to build replicas of these artifacts. Here, 3D digitization and reproduction, either physical via 3D printing or virtual using computer graphics, can be the method of choice. For this use case we present a workflow from photogrammetric acquisition in challenging environments to representation of the acquired 3D models in different ways, such as online visualization and color 3D printed replicas. This work can also be seen as a first step towards establishing a workflow for full-color end-to-end reproduction of artifacts. Our workflow was applied to cultural artifacts found around the “Roseninsel” (Rose Island), an island in Lake Starnberg (Bavaria), in collaboration with the Bavarian State Archaeological Collection in Munich. We demonstrate the results of the end-to-end reproduction workflow leading to virtual replicas (online 3D visualization, virtual and augmented reality) and physical replicas (3D printed objects). In addition, we discuss potential optimizations and briefly present an improved state-of-the-art 3D digitization system for fully autonomous acquisition of geometry and colors of cultural heritage objects.


Ritz, Martin; Knuth, Martin; Santos, Pedro; Fellner, Dieter W.

CultArc3D_mini: Fully Automatic Zero-Button 3D Replicator


GCH 2018

Eurographics Workshop on Graphics and Cultural Heritage (GCH) <16, 2018, Vienna, Austria>

3D scanning and 3D printing are two rapidly evolving domains, both generating results with a huge and growing spectrum of applications. Especially in Cultural Heritage, a massive and increasing amount of objects awaits digitization for various purposes, one of them being replication. Yet, current approaches to optical 3D digitization are semi-automatic at best and require great user effort whenever high quality is desired. With our solution we provide the missing link between both domains, and present a fully automatic 3D object replicator which does not require user interaction. The system consists of our photogrammetric 3D scanner CultArc3D_mini that captures an optimal image set for 3D geometry and texture reconstruction and even optical material properties of objects in only minutes, a conveyor system for automatic object feed-in and -out, a 3D printer, and our sensor-based process flow software that handles every single process step of the complex sequence from image acquisition, sensor-based object transportation, 3D reconstruction involving different kinds of calibrations, to 3D printing of the resulting virtual replica immediately after 3D reconstruction. Typically, one-button machines require the user to start the process by interacting over a user interface. Since positioning and pickup of objects is automatically registered, the only thing left for the user to do is placing an object at the entry and retrieving it from the exit after scanning. Shortly after, the 3D replica can be picked up from the 3D printer. Technically, we created a zero-button 3D replicator that provides high-throughput digitization in 3D, requiring only minutes per object, and it is publicly showcased in action at 3IT Berlin.


Santos, Pedro; Ritz, Martin; Fuhrmann, Constanze; Monroy Rodriguez, Rafael; Schmedt, Hendrik; Tausch, Reimar; Domajnko, Matevz; Knuth, Martin; Fellner, Dieter W.

Acceleration of 3D Mass Digitization Processes: Recent Advances and Challenges


Mixed Reality and Gamification for Cultural Heritage

In the heritage field, the demand for fast and efficient 3D digitization technologies for historic remains is increasing. Moreover, 3D has proven to be a promising approach to enable precise reconstructions of cultural heritage objects. Even though 3D technologies and post-processing tools are widespread and approaches to semantic enrichment and storage of 3D models are just emerging, only few approaches enable mass capture and computation of 3D virtual models from zoological and archeological findings. To illustrate how future 3D mass digitization systems may look, we introduce CultLab3D, a recent approach to 3D mass digitization, annotation, and archival storage by the Competence Center for Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research IGD. CultLab3D can be regarded as one of the first feasible approaches worldwide to enable fast, efficient, and cost-effective 3D digitization. It is specifically designed to automate the entire process and thus allows large amounts of heritage objects to be scanned and archived for documentation and preservation in the best possible quality, taking advantage of integrated 3D visualization and annotation within regular Web browsers using technologies such as WebGL and X3D.


Knuth, Martin; Bender, Jan; Goesele, Michael; Kuijper, Arjan

Deferred Warping


IEEE Computer Graphics and Applications

We introduce deferred warping, a novel approach for real-time deformation of 3D objects attached to an animated or manipulated surface. Our target application is virtual prototyping of garments, where 2D pattern modeling is combined with 3D garment simulation, allowing an immediate validation of the design. The technique works in two steps: First, the surface deformation of the target object is determined and the resulting transformation field is stored as a matrix texture. Then the matrix texture is used as a look-up table to transform a given geometry onto the deformed surface. Splitting the process into two steps yields large flexibility, since different attachment types can be realized by simply defining specific mapping functions. Our technique can directly handle complex topology changes within the surface. We demonstrate a fast implementation in the vertex shading stage, allowing the use of highly decorated surfaces with millions of triangles in real time.
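The two-step idea above — bake the surface deformation into a matrix texture, then transform attached geometry by texture lookup — can be sketched on the CPU. The following NumPy fragment is a minimal illustrative stand-in (the function and parameter names are assumptions, and a nearest-neighbour array lookup replaces the vertex-shader texture fetch):

```python
import numpy as np

def deferred_warp(matrix_texture, uvs, vertices):
    """Step 2 of deferred warping: transform each attached vertex by the
    4x4 matrix stored at its surface location in the matrix texture.

    matrix_texture: (H, W, 4, 4) transformation field from step 1
    uvs:            (N, 2) surface coordinates in [0, 1] attaching the vertices
    vertices:       (N, 3) positions of the attached geometry
    """
    h, w = matrix_texture.shape[:2]
    # Nearest-neighbour lookup; the GPU version samples the texture instead.
    tx = np.clip(np.rint(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    ty = np.clip(np.rint(uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)
    warped = np.einsum('nij,nj->ni', matrix_texture[ty, tx], homo)
    return warped[:, :3]
```

Different attachment types then amount to different mapping functions from the geometry to `uvs`.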


Knuth, Martin; Fellner, Dieter W. [Referee]; Bender, Jan [Referee]

Realistic Visualization of Accessories within Interactive Simulation Systems for Garment Prototyping


Darmstadt, TU, Diss., 2017

In virtual garment prototyping, designers create a garment design using Computer-Aided Design (CAD). In contrast to traditional CAD, the word "aided" here refers to the computer replicating the real-world behavior of garments, which allows designers to interact naturally with their design. Designers have a wide range of expressive means in their work: they define details on a garment that are not limited to the type of cloth used. The way cloth patterns are sewn together, and the style and use of surface details such as appliqués, have a strong impact on the visual appearance of a garment. Therefore, virtual and real garments usually carry many such surface details. Interactive virtual garment prototyping itself is an interdisciplinary field. Several problems have to be solved to create an efficiently usable real-time virtual prototyping system for garment manufacturers. Such a system can be roughly separated into three sub-components. The first component deals with the acquisition of material and other data needed to let a simulation mimic plausible real-world garment behavior. The second component is the garment simulation process itself. The third component is centered on the visualization of the simulation results. The overall process thus spans several scientific areas which have to take each other's needs into account in order to yield an interactive overall system. In my work I especially target the third component, the visualization. On the scientific side, developments in recent years have shown great improvements in both speed and reliability of simulation and rendering approaches suitable for the virtual prototyping of garments. However, with the currently existing approaches many problems remain to be solved, especially when interactive simulation and visualization need to work together and many object and surface details come into play.
This is the case when using virtual prototyping in a production environment. Currently available approaches try to handle most of the surface details as part of the simulation. This generates a lot of data early in the pipeline which needs to be transferred and processed, requiring a lot of processing time and easily stalling the pipeline defined by the simulation and visualization system. Additionally, real-world garment examples are already complicated in their cloth arrangement alone, which requires additional computational power. As a result, the interactive garment simulation tends to lose its capability to allow interactive handling of the garment. In my work I present a solution which solves this problem by moving the handling of design details from the simulation stage entirely to a completely GPU-based rendering stage. This way, the behavior of the garment and its visual appearance are separated: the simulation can fully concentrate on the fabric behavior, while the visualization handles the placement of surface details, lighting, materials, and self-shadowing. Thus, a much higher degree of surface complexity can be achieved within an interactive virtual prototyping system than with currently existing approaches.


Ritz, Martin; Knuth, Martin; Domajnko, Matevz; Posniak, Oliver; Santos, Pedro; Fellner, Dieter W.

c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources


GCH 2016

Eurographics Workshop on Graphics and Cultural Heritage (GCH) <14, 2016, Genova, Italy>

We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, available to everyone. Our novel technique solves the problem of fusing all sources asynchronously captured from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set, the frames are sorted along a common time axis, and finally the ordered frame set is discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene evolution in 3D at a specific point in time. Just like a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured and dynamically changing 3D geometry of an event over time, thus enabling the user to interact with it in the very same way as with a static 3D model. We perform image analysis to automatically maximize the quality of results in the presence of challenging, heterogeneous and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module, available to mobile end-users.
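The fusion step described above — sorting all frames along a common time axis, then discretizing them into per-snapshot subsets — can be sketched as follows. This is an illustrative Python fragment with assumed names, not c-Space code:

```python
from collections import defaultdict

def discretize_frames(frames, interval):
    """Sort asynchronously captured frames from all devices along a common
    time axis and discretize them into time bins; each bin is the frame
    subset handed to one photogrammetric 3D reconstruction (one 4D snapshot).

    frames:   iterable of (timestamp, frame_id) pairs from all devices
    interval: width of one time bin, in the timestamps' unit
    """
    bins = defaultdict(list)
    for timestamp, frame_id in sorted(frames):   # common time axis
        bins[int(timestamp // interval)].append(frame_id)
    return [bins[k] for k in sorted(bins)]
```

Each returned subset would then be reconstructed independently, yielding one 3D model per time step.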


Puhl, Julian; Knuth, Martin; Kuijper, Arjan

Image-Based Post-processing for Realistic Real-Time Rendering of Scenes in the Presence of Fluid Simulations and Image-Based Lighting


Advances in Visual Computing. 12th International Symposium, ISVC 2016, Proceedings, Part I

International Symposium on Visual Computing (ISVC) <12, 2016, Las Vegas, NV, USA>

Lecture Notes in Computer Science (LNCS), 10072

For real-time fluid simulation, two methods are currently available: grid-based simulation and particle-based simulation. Both approximate the simulation of a fluid and have in common that they do not directly generate a visually pleasant surface. Due to time constraints, the subsequent generation of the fluid surface must not consume much time. What is usually generated is an approximate surface which consists of many individual mesh elements and has none of the optical properties of a fluid. The visualization of a fluid in image space may contain different detail densities depending on the distance between observer and fluid. Therefore, filters need to be applied in order to smooth these details into a consistent surface. Many approaches use strong filters in this step, which results in an overly smooth surface; noise is then added to this surface in order to give it a rough appearance. To avoid this ad-hoc approach, we present a post-processing approach that directly visualizes the simulation data via image processing, using both smoothing filters and an image pyramid. The image pyramid provides access to various levels of detail, which are used as a controllable low-pass filter. Thus, different amounts of smoothing can be selected depending on the distance to the viewer, granting a better surface reconstruction.
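The pyramid-as-low-pass idea can be sketched in a few lines of NumPy. The code below is illustrative only (a box-filter pyramid and a distance-driven level choice, with assumed names), not the paper's implementation:

```python
import numpy as np

def build_pyramid(img, levels):
    """Image pyramid by repeated 2x2 box-filter downsampling; each level
    is a more strongly low-pass filtered version of the rendered fluid data."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # crop to even size
        a = a[:h, :w]
        pyramid.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                               a[0::2, 1::2] + a[1::2, 1::2]))
    return pyramid

def smoothed_value(pyramid, y, x, distance, max_distance):
    """Select the amount of smoothing per pixel from the viewer distance:
    distant fragments read coarser (more smoothed) pyramid levels."""
    level = min(int(distance / max_distance * len(pyramid)), len(pyramid) - 1)
    return pyramid[level][y >> level, x >> level]
```

The pyramid thus acts as a controllable low-pass filter: nearby fluid keeps fine detail, distant fluid is smoothed.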



Arnold, Fabio; Knuth, Martin [1st Reviewer]; Kuijper, Arjan [2nd Reviewer]

Approximation von Reflexionsmodellen für das interaktive Kleidungsdesign unter natürlicher Beleuchtung


Darmstadt, TU, Bachelor Thesis, 2015

In the field of virtual prototyping, real-time simulation has become an indispensable tool. Besides fast and interactive simulation, photorealistic rendering of materials is an important component if the system is to be used for design and simulation purposes. The goal is to shorten product development time. Rendering already existing physical materials has proven problematic, since they must first be captured at considerable effort. After acquisition, the measured data often has to be converted into a format suitable for real-time rendering. Many details are usually lost in this step, since a real-time renderer uses a simplified material model for performance reasons. The goal of this thesis is to investigate, for arbitrary measured BRDF data containing the detailed reflectance properties of many physical materials, an approximation that allows rendering with the illumination method described in "Efficient Self-Shadowing Using Image-Based Lighting on Glossy Surfaces". With this method it is already possible to render surfaces continuously from diffuse over glossy to fully specular. In this bachelor thesis, a suitable BRDF database is first selected. Based on related work, an approximation method is developed that determines suitable parameters for the rendering method from measured BRDFs. In parallel, a ground-truth method is developed for comparison, which approximates the BRDF as accurately as possible, independent of speed and memory consumption. Both qualitative and quantitative comparisons are carried out with these two methods. The implementation was done in a framework developed specifically for this purpose, which focuses on fast prototyping. However, the developed methods should be easy to transfer to other frameworks.


Knuth, Martin; Altenhofen, Christian; Kuijper, Arjan; Bender, Jan

Efficient Self-Shadowing Using Image-Based Lighting on Glossy Surfaces


VMV 2014

Workshop on Vision, Modeling, and Visualization (VMV) <19, 2014, Darmstadt, Germany>

In this paper we present a novel natural illumination approach for real-time rasterization-based rendering with environment-map-based high dynamic range lighting. Our approach supports all kinds of glossiness values for surfaces, ranging continuously from completely diffuse up to mirror-like glossiness. This is achieved by combining cosine-based diffuse, glossy and mirror reflection models in one single lighting model. We approximate this model by filter functions, which are applied to the environment map. This results in a fast, image-based lookup for the different glossiness values, which gives our technique the high performance that is necessary for real-time rendering. In contrast to existing real-time rasterization-based natural illumination techniques, our method has the capability of handling high-gloss surfaces with directional self-occlusion. While previous works exchange the environment map for virtual point light sources in the whole lighting and shadow computation, we keep the full image information of the environment map in the lighting process and only use virtual point light sources for the shadow computation. Our technique was developed for use in real-time virtual prototyping systems for garments, since here typically a small scene is lit by a large environment, which fulfills the requirements for image-based lighting. In this application area, high-performance rendering techniques for dynamic scenes are essential since a physical simulation is usually running in parallel on the same machine. However, other applications can benefit from our approach as well.
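The core idea — one filter function per glossiness value, applied to the environment map — can be illustrated with a toy cosine-power filter. The sketch below evaluates the filter directly over a small set of directional samples instead of baking it into prefiltered map levels as a real-time implementation would; all names are assumptions:

```python
import numpy as np

def prefilter_environment(env, exponent):
    """Return a lookup that filters a tiny directional environment with a
    cosine-power lobe. A large exponent approaches a mirror lookup, a small
    one a diffuse lookup, spanning the glossiness range continuously.

    env: dict mapping unit direction tuples to radiance values
    """
    dirs = np.array(list(env.keys()), dtype=float)
    radiance = np.array(list(env.values()), dtype=float)

    def lookup(reflection_dir):
        # Cosine-power weights around the reflection direction.
        weights = np.clip(dirs @ np.asarray(reflection_dir, dtype=float),
                          0.0, None) ** exponent
        return float((weights * radiance).sum() / max(weights.sum(), 1e-12))
    return lookup
```

In the paper's setting the filtered results would be stored image-based (one level per glossiness), so the per-fragment cost reduces to a texture lookup.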


Puhl, Julian; Kuijper, Arjan [1st Reviewer]; Knuth, Martin [2nd Reviewer]

Materialsysteme für das realistische Echtzeit-Rendering von Szenen in Anwesenheit von Flüssigkeitssimulationen und Image-Based Lighting


Darmstadt, TU, Bachelor Thesis, 2014

For real-time fluid simulation, current methods either discretize the model and thus work grid-based, or aggregate the individual atoms into larger particles and simulate the behavior that way. Both approaches have in common that the result is not a smooth surface but an approximated one that is uneven, consists of many individual elements, and lacks the optical properties of a fluid. For time reasons, the subsequent surface generation must not be very time-consuming. This thesis therefore investigates post-processing of the data via image processing, applying smoothing filters and an image pyramid. The pyramid provides access to different levels of detail, allowing different amounts of smoothing to be selected depending on the distance to the viewer. Many methods filter very strongly here and subsequently re-insert noise to simulate a not entirely smooth surface. In scenes where the simulated fluid is viewed from the side, both near and more distant particles can appear close to each other in the image. Here the method plays to its strength of being able to access differently filtered values simultaneously.


Hutter, Marco; Knuth, Martin; Kuijper, Arjan

Mesh Partitioning for Parallel Garment Simulation


WSCG 2014. Communication Papers Proceedings

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <22, 2014, Plzen, Czech Republic>

We present a method for partitioning meshes that allows a simple and efficient parallel implementation of different simulation methods. It is based on a generalization of the concept of independent sets from graph theory to sets of simulation elements. The general description makes it versatile and flexibly applicable in existing simulation systems. Every simulation method that formerly worked by sequentially processing a set of simulation elements can now be parallelized by partitioning the underlying set, without affecting the behavior of the simulated model.
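The generalization of independent sets mentioned above can be illustrated with a greedy partitioning sketch (illustrative Python with assumed names, not the paper's code): simulation elements that share state "conflict", and each resulting set can then be processed in parallel without changing the behavior of the simulated model.

```python
def partition_independent_sets(elements, conflicts):
    """Greedily color simulation elements so that no two elements in the
    same set conflict (e.g. springs sharing a particle), then group by color.

    elements:  list of element ids, in the original processing order
    conflicts: dict mapping an element to the elements it shares state with
    """
    color = {}
    for element in elements:
        used = {color[n] for n in conflicts.get(element, ()) if n in color}
        c = 0
        while c in used:          # smallest color not used by any neighbour
            c += 1
        color[element] = c
    groups = {}
    for element, c in color.items():
        groups.setdefault(c, []).append(element)
    return [groups[c] for c in sorted(groups)]
```

A solver then iterates over the sets sequentially while processing the elements inside each set in parallel.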


Schmitt, Nikolas; Knuth, Martin [Reviewer]; Kuijper, Arjan [Reviewer]

Multilevel Cloth Simulation Through GPU Surface Sampling


Darmstadt, TU, Master Thesis, 2013

Nowadays, cloth simulation is an increasingly common technique in the garment industry. Most available simulation systems use triangular mesh models due to their flexibility in mesh generation. However, using regular grids opens the door for many optimizations: connectivity is implicit, warp and weft directions of the cloth are aligned with grid edges, and distances between particles are equal, which gives valuable speed advantages in the computations. This work focuses on combining both model types in one hybrid simulation system. The presented system performs CPU computations on a low-resolution triangle mesh; additionally, a GPU-based method operates efficiently on a high-resolution grid representation, improving the fine details of the garment. Coupling is performed by a texture-based approach for fast up- and down-projections within the hybrid grid simulation system. The flexible system allows individual computation types to be performed on different architectures, data representations and detail levels. The results show the ability to handle simulations with more than 250k particles in real time, featuring the new hierarchical solver in conjunction with GPU collision handling.


Schmitt, Nikolas; Knuth, Martin; Bender, Jan; Kuijper, Arjan

Multilevel Cloth Simulation using GPU Surface Sampling


VRIPHYS 13: 10th Workshop in Virtual Reality Interactions and Physical Simulations

International Workshop in Virtual Reality Interaction and Physical Simulations (VRIPhys) <10, 2013, Lille, France>

Today most cloth simulation systems use triangular mesh models. However, regular grids allow many optimizations as connectivity is implicit, warp and weft directions of the cloth are aligned to grid edges and distances between particles are equal. In this paper we introduce a cloth simulation that combines both model types. All operations that are performed on the CPU use a low-resolution triangle mesh while GPU-based methods are performed efficiently on a high-resolution grid representation. Both models are coupled by a sampling operation which renders triangle vertex data into a texture and by a corresponding projection of texel data onto a mesh. The presented scheme is very flexible and allows individual components to be performed on different architectures, data representations and detail levels. The results are combined using shader programs which causes a negligible overhead. We have implemented CPU-based collision handling and a GPU-based hierarchical constraint solver to simulate systems with more than 230k particles in real-time.
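The coupling between the low-resolution CPU mesh and the high-resolution GPU grid can be illustrated with the up-projection half of the scheme. The NumPy fragment below is a sketch under assumptions (illustrative names; bilinear interpolation on a regular grid stands in for the render-to-texture sampling):

```python
import numpy as np

def upsample_positions(lowres, factor):
    """Bilinearly up-project low-resolution particle attributes onto a
    high-resolution regular grid, the direction in which the coarse
    simulation drives the fine GPU representation.

    lowres: (H, W, C) particle attributes on a regular grid
    factor: integer refinement factor
    """
    h, w, _ = lowres.shape
    ys = np.linspace(0.0, h - 1, (h - 1) * factor + 1)
    xs = np.linspace(0.0, w - 1, (w - 1) * factor + 1)
    y0 = np.minimum(ys.astype(int), h - 2)
    x0 = np.minimum(xs.astype(int), w - 2)
    fy = (ys - y0)[:, None, None]
    fx = (xs - x0)[None, :, None]
    a = lowres[y0][:, x0]          # top-left neighbours
    b = lowres[y0][:, x0 + 1]      # top-right
    c = lowres[y0 + 1][:, x0]      # bottom-left
    d = lowres[y0 + 1][:, x0 + 1]  # bottom-right
    return (1 - fy) * ((1 - fx) * a + fx * b) + fy * ((1 - fx) * c + fx * d)
```

On the GPU the same sampling is performed by rendering the vertex data into a texture and reading it with hardware filtering.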


Bauer, Fabian; Knuth, Martin [Reviewer]; Kuijper, Arjan [Reviewer]

Realistic Realtime Rendering of Garment with Transparency and Ambient Occlusion


Darmstadt, TU, Master Thesis, 2013

To simulate the behaviour and appearance of clothing articles, the garment industry relies on CAD programs for 3D garment design such as Clo3D [21]. These simulation programs allow the designer to test out different parameters and applications of the garment instead of producing several real-world prototypes. From modeling human characters to shading cloth materials with BRDFs and BSSRDFs, these programs require state-of-the-art rendering and animation systems to closely reflect cloth material attributes. The physics simulation is responsible for the movement of the cloth under different environmental conditions. The lighting solution of the rendering pipeline must be able to simulate different material effects such as physically based reflection, refraction, sub-surface scattering, and transparency. To give the rendered meshes a plausible, environment-induced look, global illumination techniques can be applied as well, in order to test the materials' lighting behaviour in different scenes. Many of these required techniques are common in other fields such as movie rendering and video games and can be adopted to achieve the desired effect.


Bauer, Fabian; Knuth, Martin; Kuijper, Arjan; Bender, Jan

Screen-Space Ambient Occlusion Using A-Buffer Techniques


13th International Conference on Computer-Aided Design and Computer Graphics. Proceedings

International Conference on Computer-Aided Design and Computer Graphics <13, 2013, Hong Kong, China>

Computing ambient occlusion in screen-space (SSAO) is a common technique in real-time rendering applications which use rasterization to process 3D triangle data. However, one of the most critical problems emerging in screen-space is the lack of information regarding occluded geometry which does not pass the depth test and is therefore not resident in the G-buffer. These occluded fragments may have an impact on the proximity-based shadowing outcome of the ambient occlusion pass. This not only decreases image quality but also prevents the application of SSAO on multiple layers of transparent surfaces where the shadow contribution depends on opacity. We propose a novel approach to the SSAO concept by taking advantage of per-pixel fragment lists to store multiple geometric layers of the scene in the G-buffer, thus allowing order independent transparency (OIT) in combination with high quality, opacity-based ambient occlusion (OITAO). This A-buffer concept is also used to enhance overall ambient occlusion quality by providing stable results for low-frequency details in dynamic scenes. Furthermore, a flexible compression-based optimization strategy is introduced to improve performance while maintaining high quality results.
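The per-pixel fragment lists at the heart of the approach can be illustrated by the transparency resolve for a single pixel (illustrative Python, assumed names): fragments are collected unordered in the A-buffer, then sorted and composited, and because occluded layers are retained they remain available for the opacity-based ambient occlusion pass.

```python
def composite_pixel(fragments):
    """Resolve one pixel's A-buffer fragment list for order-independent
    transparency: sort front to back, then alpha-composite, attenuating
    each layer by the accumulated transmittance of the layers in front.

    fragments: list of (depth, color, alpha) tuples, in arbitrary order
    """
    color = 0.0
    transmittance = 1.0
    for depth, c, alpha in sorted(fragments):  # front to back
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color
```

An occlusion pass over the same lists would weight each occluder's shadow contribution by its opacity instead of discarding everything behind the front fragment.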


Altenhofen, Christian; Kuijper, Arjan [Reviewer]; Knuth, Martin [Reviewer]

Effiziente Selbstschattierung in Szenen mit bildbasierter Beleuchtungsinformation und glänzenden Materialien


Darmstadt, TU, Master Thesis, 2012

In product design and virtual prototyping, interactive computer simulations are widely used today. To reproduce the designer's intentions as realistically as possible, these simulators must provide high-quality, plausible illumination, together with great flexibility regarding surface design and material choice. In most cases, the simulators additionally allow the user to directly manipulate the geometry or arrangement of the objects. Methods already exist that can illuminate such dynamic scenes with image-based lighting information in real time and compute cast shadows: the so-called "Image Based Directional Occlusion" (IBDO) methods. However, they are very limited with respect to the materials of the illuminated objects: they support only diffuse and slightly glossy materials and are unsuitable for highly glossy and mirror-like surfaces. This thesis presents a new illumination system that, similar to the IBDO algorithms, generates many light sources from an environment map, but uses them only to gather occlusion information rather than to light the scene directly. For the actual illumination, as in environment mapping, color values are read from the environment map (EM) via a surface-dependent sample vector. The occlusion information gathered via the light sources, in the form of Variance Shadow Maps (VSMs), is used to dynamically mask out invisible parts of the EM. The required flexibility in material choice is provided by different mipmap levels of the EM. The illumination system thus combines the free material choice of environment mapping with the image-based shadow computation of the IBDO methods.
The VSMs provide soft shadows at comparatively low computational cost, but lose important self-shadowing details at low resolutions. Controlling object reflectivity via mipmaps allows a smooth transition from 100% specular to 100% diffuse. In addition, algorithms for manipulating surface geometry, such as bump, normal and displacement mapping as well as tessellation, can easily be applied beforehand; the illumination system is compatible with them. The decisive factor for the performance of the method is the number of light sources used. For reasonable amounts (100 to 200), it delivers interactive frame rates on current mid-range hardware (e.g. AMD Radeon HD6850 or NVIDIA GeForce GTX 280). However, the large number of required textures and the associated amount of graphics memory, as well as the forced splitting of the rendering into multiple render passes, are a disadvantage compared to the IBDO methods mentioned above.


Knuth, Martin; Kohlhammer, Jörn; Kuijper, Arjan

A Geometry-Shader-Based Adaptive Mesh Refinement Scheme Using Semiuniform Quad/Triangle Patches and Warping


VRIPHYS 10: 7th Workshop in Virtual Reality Interactions and Physical Simulations

International Workshop in Virtual Reality Interaction and Physical Simulations (VRIPhys) <7, 2010, Copenhagen, Denmark>

In the field of garment simulation the resolution of the simulation mesh has a direct impact on visual quality. Unfortunately, an increase in mesh resolution introduces a much higher computational cost and potentially causes instability inside the simulation. In addition, it increases the amount of data sent to the renderer for visualisation. Therefore, a GPU-based refinement of the simulated mesh has several advantages, since all additional data is generated immediately before rendering. This allows an increase in visual quality without adding to the computational costs of the simulation process or the bandwidth necessary for rendering. In this paper we present a view-dependent, adaptive tessellation method designed for the geometry processing stage of modern GPUs. It uses uniform meshes internally, removing the necessity to store external patches. Since we deal with a local refinement scheme, sudden changes in mesh resolution between adjacent patches may occasionally occur. To reduce this effect as far as possible, we control the triangle density distribution of the refinement process inside a refined triangle patch.

Show publication details

Führer, Jan Benedikt; Knuth, Martin [Supervisor]

Approximation von Shadow-Maps mit Hilfe von Parallax-Mapping


Darmstadt, TU, Bachelor Thesis, 2010

This thesis presents a method for approximating shadow maps and analyzes the resulting approximation error as a function of various parameters. The implementation is based on the image-based relief-mapping algorithm, an extension of classic parallax mapping. The method can be used in any lighting system based on closely spaced directional light sources. It is shown that under this assumption the approximation error remains sufficiently small. Because it operates in image space, the method is independent of scene complexity; the largest speed gain can therefore be expected for complex scenes.
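The relief-mapping core such a method builds on can be sketched in one dimension: march a ray from a surface point toward the light across a heightfield and report shadowing as soon as the ray dips below the surface. The step count and the heightfield below are illustrative, not from the thesis.

```python
def in_shadow(heights, x, light_slope, steps=32):
    """heights: height samples on [0, len-1]. A shadow ray starts at surface
    point x and rises by light_slope per unit step toward +x.
    Returns True if some height sample blocks the ray."""
    h = heights[x]
    for i in range(1, steps):
        px = x + i
        if px >= len(heights):
            return False  # ray left the heightfield unblocked
        ray_h = h + light_slope * i
        if heights[px] > ray_h:
            return True   # surface rises above the ray: occluded
    return False

# A tall ridge to the right shadows a point behind it under a grazing light:
field = [0, 0, 0, 5, 0, 0]
print(in_shadow(field, 0, 0.5), in_shadow(field, 4, 0.5))  # True False
```

The thesis's error analysis corresponds to how this discrete march diverges from a true shadow-map lookup as the light directions spread apart.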

Show publication details

Knuth, Martin; Kohlhammer, Jörn; Kuijper, Arjan

Embedding Hierarchical Deformation within a Realtime Scene Graph


VISIGRAPP 2010. Proceedings

International Conference on Computer Graphics Theory and Applications (GRAPP) <5, 2010, Angers, France>

Scene graphs are widely used as a description of spatial relations between objects in a scene. Current scene graphs use linear transformations for this purpose. This limits the relation of two objects in the hierarchy to simple transformations like shear, translation, rotation and scaling. In contrast to this, we want to represent and control deformations that result from propagating the dynamics of objects to deformable attached objects. Our solution is to replace the linear 4x4 matrix-based transformation of a scene graph by a more generic trilinear transformation. The linear transformation allows the composition of the transformation hierarchy into one transformation; our approach additionally allows the handling of deformations at the same level. Building on this concept we present a system capable of real-time rendering. The applied deformations of the scene graph are computed in real-time on the GPU. We allow the approximation of arbitrary nonlinear transformations and deformations by utilising grids of trilinear transformations in our system. As an application we show geometric attachments on deformable objects and their deformation at the scene graph level.
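A minimal sketch of the trilinear transformation that replaces the 4x4 matrix: a point's local cell coordinates (u, v, w) in [0,1]³ blend the eight deformed corner positions of its grid cell. Names and data layout here are illustrative; the paper's actual representation may differ.

```python
def trilinear_transform(corners, u, v, w):
    """corners: 8 (x, y, z) tuples, indexed by bits (w, v, u);
    returns the trilinearly interpolated position."""
    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))
    # interpolate along u on four edges, then along v, then along w
    c00 = lerp(corners[0], corners[1], u)
    c10 = lerp(corners[2], corners[3], u)
    c01 = lerp(corners[4], corners[5], u)
    c11 = lerp(corners[6], corners[7], u)
    c0 = lerp(c00, c10, v)
    c1 = lerp(c01, c11, v)
    return lerp(c0, c1, w)

# With undeformed unit-cube corners this reduces to the identity map:
unit = [(x, y, z) for z in (0, 1) for y in (0, 1) for x in (0, 1)]
print(trilinear_transform(unit, 0.25, 0.5, 0.75))  # (0.25, 0.5, 0.75)
```

Displacing any corner deforms everything attached inside the cell, which is what lets attachments follow a deforming parent at the scene-graph level.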

Show publication details

Knuth, Martin; Kohlhammer, Jörn

A Hybrid Ambient Occlusion Technique for Dynamic Scenes


WSCG 2009. Communication Papers Proceedings

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <17, 2009, Plzen, Czech Republic>

In this paper we present a hybrid technique for illuminating a dynamic scene with self-shadowing. Our goal is to perform the necessary calculations at real-time or interactive frame rates. The main idea is to split the self-shadowing process into a global and a local part. This separation allows us to choose combinations of completely independent algorithmic approaches. The global part calculates the self-shadowing information of the entire scene. Since it has to process the whole scene, it has a high computational cost; however, a coarse approximation can be created in a short time. In contrast, the local part deals with fine details that have only local impact on the scene, and processes only a relevant subset of the scene. Finally we present results based on an implementation that combines a GPU-based self-shadowing approximation for the global part with screen-space ambient occlusion for the local part.
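The split described above can be combined per pixel in a straightforward way, for example by modulating a coarse whole-scene accessibility term with the fine local term. The multiplication operator below is an illustrative choice, not one prescribed by the paper.

```python
def hybrid_ao(global_ao, local_ao):
    """global_ao, local_ao: accessibility in [0, 1] (1 = fully unoccluded).
    The coarse scene-wide term is modulated by the fine screen-space term."""
    return global_ao * local_ao

# A pixel open at scene scale but sitting in a local crevice stays dark:
print(hybrid_ao(0.9, 0.2))  # ~0.18
```

Because the two terms come from independent algorithms, either side can be swapped (e.g., a different global approximation) without touching the other, which is the point of the separation.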

Show publication details

Landesberger, Tatiana von; Knuth, Martin; Schreck, Tobias; Kohlhammer, Jörn

Data Quality Visualization for Multivariate Hierarchic Data


VisWeek 08. Conference Proceedings [DVD-ROM]

IEEE Information Visualization Conference (INFOVIS) <14, 2008, Columbus, OH, USA>

In many business applications, decision makers often have to base their decisions on large amounts of data from various sources. Often, the quality of this data varies substantially, affecting the degree of certainty the analyst can put into the analyzed data values. The data quality measures may be of qualitative or quantitative nature, and consist of one or many dimensions. In this poster paper, we first present a brief survey of currently available uncertainty visualization techniques. We then present experimental results we obtained with several techniques for the visualization of multidimensional data quality information, applied to multivariate hierarchic data used in an economic data analysis scenario.

Show publication details

Schoger, Carsten; Knuth, Martin [Supervisor]

Globale Beleuchtung interaktiver Szenen


Darmstadt, TU, Diploma Thesis, 2007

This diploma thesis presents an algorithm for global illumination. It describes how diffuse light reflection can be computed interactively and how the algorithm can be integrated into existing real-time systems. Light propagation in a scene is simulated on the basis of a radiosity approach. Furthermore, various approaches to occlusion computation are examined and real-time rendering methods are integrated into the technique. It is explained to what extent the simulation can be parameterized on the basis of human perception to increase computation speed without visibly affecting the quality of the result. The method can be integrated as a module into existing real-time 3D systems without additional adaptation. The classic estimation of global illumination by adding an ambient term is replaced by a full simulation of diffuse light propagation.
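The diffuse light-propagation core of a radiosity approach can be sketched as a Jacobi-style iteration B = E + ρ·F·B over patch form factors. The form factors and reflectances below are made up for illustration and are not taken from the thesis.

```python
def radiosity_iterate(emission, reflectance, form_factors, iterations=50):
    """All inputs are per-patch lists; form_factors[i][j] is F_ij, the
    fraction of light leaving patch j that arrives at patch i."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two facing patches, one emitting: the dark patch picks up bounced light,
# replacing what a constant ambient term would otherwise have to fake.
b = radiosity_iterate(emission=[1.0, 0.0],
                      reflectance=[0.5, 0.5],
                      form_factors=[[0.0, 0.8], [0.8, 0.0]])
print(b)  # second patch receives indirect light (> 0)
```

Interactivity then hinges on how cheaply the form factors (the occlusion computation the thesis examines) can be evaluated or approximated per frame.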

Show publication details

Tinz, Jens; Fuhrmann, Arnulph [Supervisor]; Knuth, Martin [Supervisor]

GPU-basierte Bekleidungssimulation


Darmstadt, TU, Diploma Thesis, 2007

Despite intensive research, the interactive simulation of clothing remains computationally expensive. Harnessing the computing power of modern graphics cards can help simulate even complex, multi-layered patterns in real time. This diploma thesis implements and evaluates the use of the GPU for the physical simulation of garments in prototyping applications. The speed gain is examined for regular and irregular discretizations of the simulated material, and an improvement of the simulation through the use of constraints is tested. Furthermore, collision detection and resolution using distance fields was implemented on the graphics card with very good results.
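The distance-field collision handling evaluated in the thesis can be sketched for a single particle against an analytic sphere: sample the signed distance, and if the particle penetrates, push it out along the field's gradient (here the sphere normal). The scenario and values are illustrative.

```python
import math

def resolve_sphere_collision(p, center, radius):
    """Project point p out of a sphere if it penetrates; returns the
    corrected point, or p unchanged when there is no penetration."""
    d = [p[i] - center[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d))
    signed = dist - radius          # signed distance to the sphere surface
    if signed >= 0.0 or dist == 0.0:
        return p                    # outside (or degenerate center hit)
    n = [c / dist for c in d]       # gradient of the distance field
    return tuple(p[i] - signed * n[i] for i in range(3))

# A cloth particle inside the unit sphere is pushed back onto its surface:
print(resolve_sphere_collision((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))  # (1.0, 0.0, 0.0)
```

On the GPU the signed distance would come from a precomputed 3D texture rather than an analytic formula, but the per-particle projection step is the same.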

Show publication details

Benz, Michael; Knuth, Martin [Supervisor]; Fuhrmann, Arnulph [Supervisor]

Cluster basiertes Rendern mit verteilten Frame Buffern


Darmstadt, TU, Diploma Thesis, 2005

Over the last decades, the demands on computer graphics have risen rapidly. Objects to be displayed are becoming ever more detailed and complex in the pursuit of increasingly accurate on-screen simulations of reality. Laser scanners have already produced models with more than 300 million triangles, corresponding to roughly 3.7 gigabytes of data. Visualizing such a data volume on a single machine is difficult or impossible, so it makes sense to use a cluster system for visualization instead. Such systems have a good cost/benefit ratio and are an inexpensive alternative to comparable mainframes. Parallel image generation can pursue various goals; our priority was to achieve high frame rates with low precomputation times, while still allowing the displayed objects to be animated and deformed. This thesis presents and analyzes several approaches to parallel image generation. It further develops a computational model for sequential sort-last systems that yields an upper bound on the achievable frame rates. In addition, I present two extensions to the binary swap algorithm: a further way of parallelizing the algorithm, and adaptive load balancing. Based on these observations, a framework was developed for comparing and evaluating the individual algorithms, with particular emphasis on the reusability and extensibility of the system. Various graphics card extensions were also used to accelerate the system further.
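The binary swap algorithm the thesis extends pairs each node with a partner whose rank differs in one bit per round; each round the pair exchanges half of the remaining image region. This sketch derives only the exchange schedule (partners and region halving), not the pixel compositing itself.

```python
def binary_swap_schedule(rank, num_nodes):
    """Yield (round, partner, region_fraction_kept) for one node in a
    power-of-two cluster running binary swap compositing."""
    rounds = num_nodes.bit_length() - 1  # log2(num_nodes)
    fraction = 1.0
    for r in range(rounds):
        partner = rank ^ (1 << r)   # flip bit r of the rank
        fraction /= 2.0             # each exchange halves the kept region
        yield r, partner, fraction

# Node 5 in an 8-node cluster exchanges with nodes 4, 7, then 1,
# ending up responsible for compositing 1/8 of the final image:
print(list(binary_swap_schedule(5, 8)))
```

Because every node keeps exactly 1/n of the image after log2(n) rounds, the compositing work is evenly distributed, which is the baseline the thesis's adaptive load balancing improves on for uneven pixel workloads.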

Show publication details

Knuth, Martin; Fuhrmann, Arnulph

Self-Shadowing of Dynamic Scenes with Environment Maps Using the GPU


WSCG 2005. Full Papers

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <13, 2005, Plzen, Czech Republic>

In this paper we present a method for illuminating a dynamic scene with a high dynamic range environment map at real-time or interactive frame rates, taking self-shadowing into account. Current techniques require static geometry (pre-computed light transport), are limited to few and small area lights, or are limited in the frequency of the shadows. We combine importance sampling of the environment map with GPU-based shadow calculation in an efficient way. The shadows are calculated per pixel, so, in contrast to other techniques, no highly tessellated models are necessary. Our method provides a novel and highly efficient way of using shadow maps as the data structure for visibility computations done entirely on the GPU. We achieve real-time frame rates for moderately sized models on current graphics hardware. Since we evaluate the light transport of the scene per frame, complex dynamically animated models can be rendered efficiently.
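The importance-sampling step of such an approach can be sketched as follows: treat the environment map's per-texel luminance as a discrete distribution, build its CDF once, and map uniform random numbers to texel indices so that bright texels are chosen proportionally more often. The shadow-map visibility pass is omitted here, and the luminance values are illustrative.

```python
import bisect
import random

def build_cdf(luminances):
    """Normalized cumulative distribution over per-texel luminance."""
    total, cdf = 0.0, []
    for lum in luminances:
        total += lum
        cdf.append(total)
    return [c / total for c in cdf]

def sample_texel(cdf, u):
    """Map a uniform u in [0, 1) to a texel index, favoring bright texels."""
    return bisect.bisect_right(cdf, u)

lums = [0.1, 0.1, 8.0, 0.1]       # one very bright texel (e.g. the sun)
cdf = build_cdf(lums)
hits = sum(sample_texel(cdf, random.random()) == 2 for _ in range(10000))
print(hits)  # the vast majority of the 10000 samples hit the bright texel
```

Each sampled texel then acts as a directional light whose visibility is resolved with one shadow map, which is why a modest sample count suffices for high-frequency shadows.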

Show publication details

Fuhrmann, Arnulph; Groß, Clemens; Knuth, Martin; Kohlhammer, Jörn

Virtual Prototyping of Garments


ProSTEP iVip Science Days 2005

ProSTEP iVip Science Days <2, 2005, Darmstadt, Germany>

This paper presents a system for virtual prototyping of garments. The garments are constructed using standard 2D tools for pattern design. We present data structures for representing virtual garments, which are used for creating an interface between the 2D pattern construction and 3D garment simulation. The clothing is visualized three-dimensionally on virtual humans. Our system allows interactive draping and adjustment of the virtual garments. The system creates cost savings by reducing the number of real prototypes and decreasing the time-to-market.

Show publication details

Knuth, Martin; Fuhrmann, Arnulph [Supervisor]; Luckas, Volker [Supervisor]

Interaktives Rendering von Bekleidung


Darmstadt, TU, Diploma Thesis, 2004

Increasingly capable and fast cloth simulators have recently become available and are used in scenarios such as virtual try-on. It is striking that the rendering systems used there lag behind the performance of these simulators. Speed and realism of the visualization are the key aspects, since a customer does not want to wait, and low latency also makes interactive modification of the simulation data by the customer possible. The existing algorithms examined for this problem were unable to meet all the requirements of the scenario; they demanded, for example, static scenes or overly complex approximations. This thesis therefore develops its own solution, which (using programmable graphics hardware for acceleration) allows the lighting situation to be evaluated at interactive frame rates without any precomputation. For the design of the lighting models it was assumed that the fabrics used are tightly woven, and optimizations were made on that basis. Shadow buffers were chosen as the shadowing method for their flexibility and speed. To simulate fabrics and diffuse materials, a Lambertian diffuse reflection model was complemented by a diffuse reflection model with surface scattering inspired by Lommel-Seeliger. For structured materials, reflection models for leather and for corduroy were designed. The rendering system was built flexibly on vertex and fragment shaders to probe the capabilities of the GPU. HDR environment captures were chosen as the illumination for the 3D model; from these, a light-source setup is computed that provides the lighting, taking self-shadowing into account. A new filtering method was developed to smooth the individual shadow casts.
The implementation of the algorithms showed that the approach presented in this thesis achieves interactive frame rates with realistic-looking shadowing and clothing-typical reflection behavior of the 3D scene. Moreover, the light-source setup derived from the HDR environment capture ensures realistic illumination of the scene.
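The two diffuse laws the thesis pairs can be sketched side by side; the thesis's actual cloth-specific variant may add further terms, so treat this as a minimal reference formulation of the classical laws only.

```python
import math

def lambert(albedo, cos_in):
    """Lambertian reflected radiance factor (cos_in = N.L, clamped)."""
    return albedo / math.pi * max(cos_in, 0.0)

def lommel_seeliger(albedo, cos_in, cos_out):
    """Lommel-Seeliger law: radiance ~ mu0 / (mu0 + mu), where mu0 and mu
    are the cosines of the incidence and viewing angles."""
    mu0, mu = max(cos_in, 0.0), max(cos_out, 0.0)
    if mu0 + mu == 0.0:
        return 0.0
    return albedo / (4.0 * math.pi) * mu0 / (mu0 + mu)

# Lommel-Seeliger brightens toward grazing view angles, unlike the
# view-independent Lambert term; this mimics surface scattering in cloth:
print(lommel_seeliger(1.0, 0.5, 0.05), lommel_seeliger(1.0, 0.5, 0.5))
```

The view-angle dependence is what distinguishes the surface-scattering look of tightly woven fabric from the flat appearance a pure Lambert term produces.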