• Publications

Guthe, Stefan; Thürck, Daniel

Algorithm 1015: A Fast Scalable Solver for the Dense Linear (Sum) Assignment Problem

2021

ACM Transactions on Mathematical Software

We present a new algorithm for solving the dense linear (sum) assignment problem and an efficient, parallel implementation that is based on the successive shortest path algorithm. More specifically, we introduce the well-known epsilon scaling approach used in the Auction algorithm to approximate the dual variables of the successive shortest path algorithm prior to solving the assignment problem, in order to limit the complexity of the path search. This improves the runtime by several orders of magnitude for hard-to-solve real-world problems, making the runtime virtually independent of how hard the assignment is to find. In addition, our approach allows for using accelerators and/or external compute resources to calculate individual rows of the cost matrix. This enables us to solve problems that are larger than what has been reported in the past, including the ability to efficiently solve problems whose cost matrix exceeds the available system memory. To our knowledge, this is the first implementation that is able to solve problems with more than one trillion arcs in less than 100 hours on a single machine.
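
As a point of reference, the problem class this solver targets can be stated in a few lines. The sketch below uses SciPy's built-in dense LSAP solver to make the input/output contract concrete; it is not the paper's epsilon-scaled successive-shortest-path implementation, which targets far larger, possibly out-of-core cost matrices.

```python
# Minimal illustration of the dense linear (sum) assignment problem:
# find a one-to-one row/column assignment minimizing the total cost.
# SciPy's solver stands in here; it is NOT the paper's algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.random((5, 5))                  # dense cost matrix C[i, j]

rows, cols = linear_sum_assignment(cost)   # minimizes cost[rows, cols].sum()
print("assignment:", dict(zip(rows.tolist(), cols.tolist())))
print("total cost:", cost[rows, cols].sum())
```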


Rak, Arne-Tobias; Guthe, Stefan [1st Reviewer]; Bülow, Maximilian von [2nd Reviewer]

Registration of Two Broken Specimen Parts from Orthographic Multi-View Images

2021

Darmstadt, TU, Master Thesis, 2021

Measuring the properties of tensile specimens is a fundamental task in the field of materials science. These are metal bars subjected to heat strain that eventually break into two pieces. The reassembly of both specimen parts allows for interesting further analysis for material engineers, but is not easily accomplished manually. While the reassembly of broken objects with the aid of computer vision methods has been thoroughly researched within fields like archaeology and medicine, no computer-driven approach for reassembling broken tensile specimens has been proposed. The geometry of the break point is particularly complex, so state-of-the-art approaches are unable to extract useful features from specimen image data. In this work we propose a novel method for automatically detecting and registering the break point edges of broken tensile specimens from multi-view orthographic images. Leveraging the cylindrical shape of the specimen and the properties of its parallel projection, multiple panoramic views of its break point are generated. Comparing these allows for a distinction between the break-point and specimen surfaces, where the boundary represents the break point edge. Two break point edges are then matched by optimizing correlation and error metrics. In our experiments, the implemented system was able to successfully register the edges of multiple real-world datasets as well as a synthetic dataset, where a rotational error of less than 1 degree and a translational error of 1 pixel or less were achieved for the latter.
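
For illustration, the final matching step described above (optimizing a correlation metric between two break-point edges) can be sketched as a circular cross-correlation between two 1D edge profiles. The profiles and their extraction from the panoramic views are stand-ins here, not the thesis' actual pipeline.

```python
# Hypothetical sketch: recover the relative rotation of two break-point
# edge profiles (sampled over 360 degrees) by maximizing their circular
# cross-correlation. Profile extraction from panoramas is not shown.
import numpy as np

def align_rotation(edge_a, edge_b):
    """Return the circular shift (in samples) that best maps edge_a onto edge_b."""
    a = edge_a - edge_a.mean()
    b = edge_b - edge_b.mean()
    corr = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real
    return int(np.argmax(corr))

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
edge_a = np.sin(3 * theta) + 0.05 * np.random.default_rng(1).normal(size=360)
edge_b = np.roll(edge_a, 40)               # simulate a 40-degree rotation
print(align_rotation(edge_a, edge_b))      # -> 40
```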


Mertz, Tobias; Guthe, Stefan [1st Reviewer]; Kuijper, Arjan [2nd Reviewer]

Automatic View Planning for 3D Reconstruction of Objects with Thin Features

2020

Darmstadt, TU, Master Thesis, 2020

View planning describes the process of planning the viewpoints from which to record an object or environment for digitization. This thesis examines the applicability of view planning to the 3D reconstruction of insect specimens from extended depth-of-field images and depth maps generated with a focus stacking method. Insect specimens contain very thin features, such as legs and antennae, while the depth maps generated during focus stacking contain large levels of uncertainty. Since focus stacking is usually not used for 3D reconstruction, there are no state-of-the-art view planning systems that deal with the unique challenges of this data. Within this thesis, a view planning system with two components is designed to deal with the uncertainty explicitly. The first component utilizes volumetric view planning methods from well-established research, along with a novel sensor model to represent the synthetic camera generated from the focus stack. The second component is a novel 2D feature tracking module designed to capture small details that cannot be recorded within a volumetric representation. The evaluation of the system shows that the application of view planning can still significantly reduce the time required for scene exploration and provide similar amounts of detail as an unplanned approach. Some future improvements are suggested, which may enable the system to capture even more detail.
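
The volumetric component follows the common greedy next-best-view pattern; the toy sketch below shows that pattern only, with random visibility masks standing in for the thesis' ray-cast visibility and uncertainty-aware sensor model.

```python
# Hypothetical, self-contained sketch of greedy volumetric view planning:
# choose the candidate view that observes the most still-unknown voxels.
# Visibility masks are random stand-ins for actual ray casting.
import numpy as np

rng = np.random.default_rng(0)
occupancy = rng.choice([0.0, 0.5, 1.0], size=(16, 16, 16))  # 0.5 = unknown

def gain(visible):
    return np.count_nonzero(visible & (occupancy == 0.5))

candidates = [rng.random(occupancy.shape) < 0.3 for _ in range(8)]
best = max(range(len(candidates)), key=lambda i: gain(candidates[i]))
print("next best view:", best, "gain:", gain(candidates[best]))
```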


Wang, Yu; Yu, Weidong; Liu, Xiuqing; Wang, Chunle; Kuijper, Arjan; Guthe, Stefan

Demonstration and Analysis of an Extended Adaptive General Four-Component Decomposition

2020

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

The overestimation of volume scattering is an essential shortcoming of the model-based polarimetric synthetic aperture radar (PolSAR) target decomposition method. It is likely to affect the measurement accuracy and result in mixed ambiguity of the scattering mechanism. In this paper, an extended adaptive four-component decomposition method (ExAG4UThs) is proposed. First, the orientation angle compensation (OAC) is applied to the coherency matrix, and artificial areas are extracted as the basis for selecting the decomposition method. Second, for the decomposition of artificial areas, one of the two complex unitary transformation matrices of the coherency matrix is selected according to the wave anisotropy (Aw). In addition, the branch condition used as a criterion for the hierarchical implementation of the decomposition is the ratio of the correlation coefficients (Rcc). Finally, the selected unitary transformation matrix and a discriminative threshold are used to determine the structure of the selected volume scattering models, which adapt more effectively to various scattering mechanisms. In this paper, the performance of the proposed method is evaluated on GaoFen-3 full PolSAR data sets for various time periods and regions. The experimental results demonstrate that the proposed method can effectively represent the scattering characteristics of the ambiguous regions, and the oriented building areas can be well discriminated as dihedral or odd-bounce structures.
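
For orientation, model-based four-component methods of this family expand the measured coherency matrix into surface, double-bounce, volume, and helix contributions; the notation below follows the standard Yamaguchi-style formulation rather than the paper's extended variant.

```latex
% Standard four-component model-based PolSAR decomposition:
% surface (s), double-bounce (d), volume (v), and helix (c) terms,
% with scattering powers f_s, f_d, f_v, f_c.
\langle [T] \rangle = f_s [T]_s + f_d [T]_d + f_v [T]_v + f_c [T]_c
```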


Knauthe, Volker; Ballweg, Kathrin; Wunderlich, Marcel; Landesberger, Tatiana von; Guthe, Stefan

Influence of Container Resolutions on the Layout Stability of Squarified and Slice-And-Dice Treemaps

2020

EuroVis 2020. Eurographics / IEEE VGTC Conference on Visualization 2020. Short Papers

Eurographics / IEEE VGTC Conference on Visualization (EuroVis) <22, 2020, online>

In this paper, we analyze the layout stability of the squarify and slice-and-dice treemap layout algorithms when changing the visualization container's resolution. We also explore how rescaling a finished layout to another resolution compares to a recalculated layout, i.e., fixed layout versus changing layout. For our evaluation, we examine a real-world use case and use a total of 240,000 random data treemap visualizations. Rescaling slice-and-dice or squarify layouts affects the aspect ratios. Recalculating slice-and-dice layouts is equivalent to rescaling, since the layout is not affected by changing the container resolution. Recalculating squarify layouts, on the other hand, yields stable aspect ratios but results in potentially huge layout changes. Finally, we provide guidelines for using rescaling, recalculation, and the choice of algorithm.
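
Why recalculating a slice-and-dice layout is equivalent to rescaling it can be seen directly from the algorithm: each level splits the parent rectangle proportionally along one axis, so the layout is linear in the container size. A minimal sketch (not the paper's code):

```python
# Minimal slice-and-dice treemap layout. Splits are proportional along
# alternating axes, so recomputing at a new container resolution equals
# rescaling the old layout.
def slice_and_dice(weights, x, y, w, h, horizontal=True):
    """Lay out `weights` as rectangles inside (x, y, w, h)."""
    rects, total, offset = [], sum(weights), 0.0
    for wt in weights:
        frac = wt / total
        if horizontal:
            rects.append((x + offset * w, y, frac * w, h))
        else:
            rects.append((x, y + offset * h, w, frac * h))
        offset += frac
    return rects

print(slice_and_dice([3, 1, 2], 0, 0, 600, 400))
print(slice_and_dice([3, 1, 2], 0, 0, 300, 200))  # the first layout, halved
```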


Bülow, Maximilian von; Tausch, Reimar; Knauthe, Volker; Wirth, Tristan; Guthe, Stefan; Santos, Pedro; Fellner, Dieter W.

Segmentation-Based Near-Lossless Compression of Multi-View Cultural Heritage Image Data

2020

GCH 2020

Eurographics Workshop on Graphics and Cultural Heritage (GCH) <18, 2020, online>

Cultural heritage preservation using photometric approaches has gained increasing significance in recent years. These datasets are usually captured with high-end cameras at maximum image resolution, enabling high-quality reconstruction results but leading to immense storage consumption. In order to maintain archives of these datasets, compression is mandatory for storing them at reasonable cost. In this paper, we make use of the mostly static background of the capturing environment, which does not directly contribute information to 3D reconstruction algorithms and therefore may be approximated using lossy techniques. We use a superpixel and figure-ground segmentation based near-lossless image compression algorithm that transparently decides whether regions are relevant for later photometric reconstructions. This ensures that the actual artifact and structured background parts are compressed with lossless techniques. Our algorithm achieves compression rates, compared to the PNG image compression standard, ranging from 1:2 to 1:4 depending on the artifact size.
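
The figure-ground decision can be pictured as a per-superpixel relevance test; the sketch below is a loose stand-in using SLIC superpixels from scikit-image and a simple variance heuristic, not the paper's segmentation or codec.

```python
# Hypothetical sketch: mark superpixels that must be kept lossless.
# The variance threshold is a placeholder for the paper's transparent
# relevance decision; scikit-image provides the SLIC segmentation.
import numpy as np
from skimage.segmentation import slic

def lossless_mask(image, n_segments=200, bg_threshold=12.0):
    """Return True where pixels should be compressed losslessly."""
    labels = slic(image, n_segments=n_segments, start_label=0)
    keep = np.zeros(labels.shape, dtype=bool)
    for lab in np.unique(labels):
        region = image[labels == lab]
        if region.std() > bg_threshold:   # assumption: static background is flat
            keep[labels == lab] = True
    return keep
```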


Knauthe, Volker; Landesberger, Tatiana von [1st Reviewer]; Guthe, Stefan [2nd Reviewer]

Influence of Bounding Box Sizes on the Treemap Visualizations Created by the Squarify Layout Algorithm

2019

Darmstadt, TU, Master Thesis, 2019

Squarify layout algorithms visualize tree data with node weights as space-efficient treemap visualizations. Layout changes for changing tree data visualized by the same algorithm have been studied extensively. However, the influence of changing bounding box aspect ratios and resolutions, given the same algorithm and tree data, has not been examined before. This work presents the adaptation of existing change metrics, as well as two new change metrics, to measure the influence of bounding box resolutions and aspect ratios. 240,000 treemap visualizations are used to evaluate changes for three squarify layout algorithm variations and the slice-and-dice layout algorithm. Furthermore, a real-world example is evaluated in depth. The visualizations produced by the squarify algorithm variations change significantly for changing bounding box aspect ratios, while all visualizations produced by the slice-and-dice algorithm remain unchanged. Additionally, the trade-off between rescaling and recalculating treemap images for new bounding box aspect ratios is examined for the real-world example. Rescaling can worsen the average aspect ratios of leaf rectangles for all squarify variations, while recalculation keeps aspect ratios stable. Guidelines for choosing the right algorithm and for deciding whether rescaling or recalculation is applicable are proposed.


Bülow, Maximilian von; Guthe, Stefan; Ritz, Martin; Santos, Pedro; Fellner, Dieter W.

Lossless Compression of Multi-View Cultural Heritage Image Data

2019

GCH 2019

Eurographics Workshop on Graphics and Cultural Heritage (GCH) <17, 2019, Sarajevo, Bosnia and Herzegovina>

Photometric multi-view 3D geometry reconstruction and material capture are important techniques for cultural heritage digitization. Capturing images of artifacts at high resolution and high dynamic range, and being able to store them losslessly, enables future-proof use of this data. As the images tend to consume immense amounts of storage, compression is essential for long-term archiving. In this paper, we present a lossless image compression approach for multi-view and material reconstruction datasets with a strong focus on data created from cultural heritage digitization. Our approach achieves compression rates of 2:1 compared against an uncompressed representation and 1.24:1 when compared against Gzip.


Bülow, Maximilian von; Guthe, Stefan [Supervisor]

Lossless Compression of Structured and Unstructured Multi-View Image Data

2019

Darmstadt, TU, Master Thesis, 2019

Photometric multi-view 3D geometry reconstruction and material capture are important techniques for cultural heritage digitization. Capturing images of these datasets at high resolution and high dynamic range, and storing them in the camera's proprietary raw image format, enables future-proof use of this data. As these images tend to consume immense amounts of storage, compression is essential for long-term archiving. In this thesis, I present multiple approaches for compressing multi-view and material reconstruction datasets with a strong focus on data created from cultural heritage digitization. These approaches address different types of redundancies occurring in these datasets and are able to compress datasets with arbitrary resolutions, bit depths, and color encodings. The individual approaches are further evaluated against each other and against state-of-the-art image and file compression algorithms. The approach with the highest compression efficiency achieves rates from 1.77:1 to 2.09:1 compared to an uncompressed representation for multi-view datasets and 2.75:1 for a material capture dataset. Compared to the PNG algorithm, it achieves compression rates of 1.33:1 on average on both dataset types.


Czappa, Fabian Alexander; Guthe, Stefan [1st Reviewer]; Goesele, Michael [2nd Reviewer]

Hardware zur Texturkompression

2018

Darmstadt, TU, Bachelor Thesis, 2018

Modern computer games are becoming ever more demanding. The displayed objects can be divided into models and textures, with the textures consuming the larger share of memory. Dedicated graphics memory, however, is usually fixed on the graphics card and cannot be upgraded, which confronts development studios with the problem that they cannot ship textures at the resolution that would otherwise be possible. In this bachelor thesis, I present extensions to a lossless compression algorithm so that all currently relevant texture formats can be compressed. In addition, this approach allows future texture formats to be compressed via simple extensions. The algorithm is tested on real data, and this data is used to analyze real graphics cards in order to evaluate the efficiency under realistic conditions. For compressed textures, the number of required data transfers between the level 2 cache and video memory decreases by approximately 75%.


Günther, Robert; Guthe, Stefan; Guthe, Michael

A Visual Model for Quality Driven Refinement of Global Illumination

2017

Proceedings of the ACM Symposium on Applied Perception

ACM Symposium on Applied Perception (SAP) <2017, Cottbus, Germany>

When rendering complex scenes using path-tracing methods, long processing times are required to calculate a sufficient number of samples for high-quality results. In this paper, we propose a new method for priority sampling in path-tracing that exploits restrictions of the human visual system by recognizing whether an error is perceivable or not. We use the stationary wavelet transformation to efficiently calculate noise contrasts in the image based on the standard error of the mean. We then use the Contrast Sensitivity Function and Contrast Masking of the Human Visual System to detect whether an error is perceivable for any given pixel in the output image. Errors that cannot be detected by a human observer are then ignored in further sampling steps, reducing the number of samples calculated while producing the same perceived quality. This approach leads to a drastic reduction in the total number of samples required and therefore in total rendering time.
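
The stopping criterion can be reduced to a per-pixel test on the standard error of the mean (SEM) of the accumulated samples. The sketch below collapses the paper's wavelet-based contrast analysis, CSF, and contrast masking into a single constant threshold, so it illustrates only the structure of the test.

```python
# Illustrative per-pixel convergence test for adaptive path tracing:
# stop sampling where the SEM-based noise contrast is assumed invisible.
# The constant threshold stands in for the paper's CSF/masking model.
import numpy as np

def converged_mask(sample_sum, sample_sq_sum, n, threshold=0.002):
    """True where a pixel's residual noise is treated as imperceptible."""
    mean = sample_sum / n
    var = np.maximum(sample_sq_sum / n - mean**2, 0.0)
    sem = np.sqrt(var / n)                 # standard error of the mean
    contrast = sem / np.maximum(mean, 1e-4)
    return contrast < threshold
```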


Bülow, Maximilian von; Guthe, Stefan; Goesele, Michael

Compression of Non-Manifold Polygonal Meshes Revisited

2017

VMV 2017

International Symposium on Vision, Modeling and Visualization (VMV) <22, 2017, Bonn, Germany>

Polygonal meshes are used in various fields ranging from CAD to gaming and web based applications. Reducing the size required for storing and transmitting these meshes by taking advantage of redundancies is an important aspect in all of these cases. In this paper, we present a connectivity based compression approach that predicts attributes and stores differences to the predictions together with minimal connectivity information. It is an extension to the Cut-Border Machine and applicable to arbitrary manifold and non-manifold polygonal meshes containing multiple attributes of different types. It compresses both the connectivity and attributes without loss outside of re-ordering vertices and polygons. In addition, an optional quantization step can be used to further reduce the data if a certain loss of accuracy is acceptable. Our method outperforms state-of-the-art compression techniques, including specialized triangle mesh compression approaches when applicable. Typical compression rates for our approach range from 2:1 to 6:1 for lossless compression and up to 25:1 when quantizing to 14 bit accuracy.
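
The prediction-plus-residual idea at the heart of such coders is easy to state; the sketch below shows the classic parallelogram rule for vertex positions, which is a standard predictor in this family but not necessarily the paper's exact choice.

```python
# Parallelogram prediction: estimate a new vertex from the adjacent,
# already-decoded triangle and entropy-code only the small residual.
import numpy as np

def parallelogram_predict(v1, v2, v_opposite):
    """Predicted vertex completing the parallelogram across edge (v1, v2)."""
    return v1 + v2 - v_opposite

v1 = np.array([0.0, 0.0, 0.0])
v2 = np.array([1.0, 0.0, 0.0])
v_opp = np.array([0.5, 1.0, 0.0])          # third vertex of the known triangle
actual = np.array([0.52, -0.97, 0.01])     # vertex to encode
residual = actual - parallelogram_predict(v1, v2, v_opp)
print(residual)                            # small residuals compress well
```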


Bülow, Maximilian von; Guthe, Stefan [Supervisor]; Goesele, Michael [Supervisor]

Connectivity and Attribute Compression of Triangle Meshes

2017

Darmstadt, TU, Bachelor Thesis, 2017

Triangle meshes are used in various fields of application and can consume a voluminous amount of space due to their sheer size and the redundancies caused by common formats. Compressing the connectivity and attributes of these triangle meshes decreases storage consumption, thus making transmissions more efficient. In this thesis, I present a compression approach using arithmetic coding that predicts attributes and only stores differences to the predictions, together with minimal connectivity information. It is applicable to arbitrary triangle meshes and compresses both their connectivity and attributes with no loss of information outside of re-ordering the triangles. My approach achieves a compression rate of approximately 3.50:1 compared to the original representations and, in the majority of cases, compresses at rates between 1.20:1 and 1.80:1 compared to GZIP.


Mustafa, Maryam; Guthe, Stefan; Tauscher, Jan-Philipp; Goesele, Michael; Magnor, Marcus A.

How Human Am I? EEG-based Evaluation of Animated Virtual Characters

2017

CHI '17. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems

Conference on Human Factors in Computing Systems (CHI) <35, 2017, Denver, CO, USA>

There is a continuous effort by animation experts to create increasingly realistic and more human-like digital characters. However, as virtual characters become more human they risk evoking a sense of unease in their audience. This sensation, called the Uncanny Valley effect, is widely acknowledged both in the popular media and scientific research, but empirical evidence for the hypothesis has remained inconsistent. In this paper, we investigate the neural responses to computer-generated faces in a cognitive neuroscience study. We record brain activity from participants (N = 40) using electroencephalography (EEG) while they watch videos of real humans and computer-generated virtual characters. Our results show distinct differences in neural responses for highly realistic computer-generated faces such as Digital Emily compared with real humans. These differences are unique only to agents that are highly photorealistic, i.e. the 'uncanny' response. Based on these specific neural correlates we train a support vector machine (SVM) to measure the probability of an uncanny response for any given computer-generated character from EEG data. This allows the ordering of animated characters based on their level of 'uncanniness'.
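
The final step, estimating an uncanniness probability from EEG features with an SVM, has a straightforward shape; the sketch below uses scikit-learn with random toy features and labels, which are stand-ins for the study's EEG data.

```python
# Hypothetical sketch of the classification step: fit an SVM on EEG
# feature vectors and read out P(uncanny). Data here is random toy data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16))     # 80 trials x 16 EEG features (toy)
y = rng.integers(0, 2, size=80)   # 1 = uncanny response (toy labels)

clf = SVC(kernel="rbf", probability=True).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])   # per-trial uncanniness estimates
```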


Rudolph, Andreas; Goesele, Michael [Supervisor]; Guthe, Stefan [Supervisor]; Heß, Martin [Supervisor]

Immersive Object Replacement in Augmented Reality

2017

Darmstadt, TU, Bachelor Thesis, 2017

The usage of virtual reality (VR) and augmented reality (AR) for visualizing information in the form of overlays on real-world objects is on its way to the consumer market. This is made possible by new technological advances and VR/AR devices becoming more compact. One possible scenario is the virtual object replacement of furniture in everyday life. In this thesis, we analyze the viability of the Microsoft HoloLens for that purpose. A user study is performed that evaluates the impression and accuracy of furniture overlaid with its virtual 3D reconstruction. The main focus of this study is to assess how successfully real objects can be covered by their 3D models and how immersed the participants feel during the experiments. We show that the HoloLens is capable within certain limits. Furthermore, we propose possible solutions to address some of these limitations, such as the cumbersome manual object positioning and the limited user immersion.


Aroudj, Samir; Seemann, Patrick; Langguth, Fabian; Guthe, Stefan; Goesele, Michael

Visibility-Consistent Thin Surface Reconstruction Using Multi-Scale Kernels

2017

ACM Transactions on Graphics

Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH ASIA) <10, 2017, Bangkok, Thailand>

One of the key properties of many surface reconstruction techniques is that they represent the volume in front of and behind the surface, e.g., using a variant of signed distance functions. This creates significant problems when reconstructing thin areas of an object since the backside interferes with the reconstruction of the front. We present a two-step technique that avoids this interference and thus imposes no constraints on object thickness. Our method first extracts an approximate surface crust and then iteratively refines the crust to yield the final surface mesh. To extract the crust, we use a novel observation-dependent kernel density estimation to robustly estimate the approximate surface location from the samples. Free space is similarly estimated from the samples' visibility information. In the following refinement, we determine the remaining error using a surface-based kernel interpolation that limits the samples' influence to nearby surface regions with similar orientation and iteratively move the surface towards its true location. We demonstrate our results on synthetic as well as real datasets reconstructed using multi-view stereo techniques or consumer depth sensors.


Widmer, Sven; Wodniok, Dominik; Thul, Daniel; Guthe, Stefan; Goesele, Michael

Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects

2016

Computer Graphics Forum

Pacific Conference on Computer Graphics and Applications (PG) <24, 2016, Okinawa>

We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.


Guthe, Stefan; Schardt, Pascal; Goesele, Michael; Cunningham, Douglas W.

Ghosting and Popping Detection for Image-Based Rendering

2016

2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video

International Conference on 3DTV (3DTV-CON) <10, 2016, Hamburg, Germany>

Film sequences generated using image-based rendering techniques are commonly used in broadcasting, especially for sporting events. In many cases, however, image-based rendering sequences contain artifacts, and these must be manually located. Here, we propose an algorithm to automatically detect not only the presence of the two most disturbing classes of artifact (popping and ghosting), but also the strength of each instance of an artifact. A simple perceptual evaluation of the technique shows that it performs well.


Guthe, Stefan; Goesele, Michael

GPU-Based Lossless Volume Data Compression

2016

2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video

International Conference on 3DTV (3DTV-CON) <10, 2016, Hamburg, Germany>

In rendering, textures usually consume more graphics memory than the geometry. This is especially true when rendering regularly sampled volume data, as the geometry is a single box. In addition, volume rendering suffers from the curse of dimensionality: every time the resolution doubles, the number of projected pixels is multiplied by four but the amount of data is multiplied by eight. Data compression is thus mandatory, even with the increasing amount of memory available on today's GPUs. Existing compression schemes are either lossy or do not allow on-the-fly random access to the volume data while rendering. Both of these properties are, however, important for high-quality direct volume rendering. In this paper, we propose a lossless compression and caching strategy that allows random access and decompression on the GPU using a compressed volume object.


Weber, Nicolas; Wächter, Michael; Amend, Sandra C.; Guthe, Stefan; Goesele, Michael

Rapid, Detail-Preserving Image Downscaling

2016

ACM Transactions on Graphics

Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH ASIA) <9, 2016, Macao>

Image downscaling is arguably the most frequently used image processing tool. We present an algorithm based on convolutional filters where input pixels contribute more to the output image the more their color deviates from their local neighborhood, which preserves visually important details. In a user study, we verify that users prefer our results over related work. Our efficient GPU implementation works in real-time when downscaling images from 24M to 70k pixels. Further, we demonstrate empirically that our method can be successfully applied to videos.
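
The weighting idea can be sketched in a few lines: when a patch of input pixels collapses to one output pixel, pixels that deviate from the patch mean get larger weights, so rare detail survives. This is an illustrative reduction of the method, not the paper's optimized GPU kernel (which derives its guidance differently).

```python
# Illustrative detail-preserving downscale: weight each input pixel by
# its deviation from the local patch mean before averaging.
import numpy as np

def detail_preserving_downscale(img, factor, lam=1.0):
    """img: (H, W, 3) float array; factor: integer downscale factor."""
    out = np.zeros((img.shape[0] // factor, img.shape[1] // factor, 3))
    for oy in range(out.shape[0]):
        for ox in range(out.shape[1]):
            patch = img[oy*factor:(oy+1)*factor, ox*factor:(ox+1)*factor]
            mean = patch.mean(axis=(0, 1))
            dist = np.linalg.norm(patch - mean, axis=-1)  # deviation map
            wgt = (dist + 1e-6) ** lam                    # deviants dominate
            out[oy, ox] = (patch * wgt[..., None]).sum((0, 1)) / wgt.sum()
    return out
```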


Guthe, Stefan; Goesele, Michael

Variable Length Coding for GPU-Based Direct Volume Rendering

2016

VMV 2016

International Symposium on Vision, Modeling and Visualization (VMV) <21, 2016, Bayreuth, Germany>

The sheer size of volume data sampled on a regular grid requires efficient lossless and lossy compression algorithms that allow for on-the-fly decompression during rendering. While all hardware-assisted approaches are based on fixed bit rate block truncation coding, they suffer from degradation in regions of high variation while wasting space in homogeneous areas. On the other hand, vector quantization approaches using texture hardware achieve an even distribution of error over the entire volume at the cost of storing overlapping blocks or bricks. However, these approaches suffer from severe blocking artifacts that need to be smoothed over during rendering. In contrast to existing approaches, we propose to build a lossy compression scheme on top of a state-of-the-art lossless compression approach built on non-overlapping bricks, combining it with straightforward vector quantization. Due to efficient caching and load balancing, the rendering performance of our approach improves with the compression rate and can achieve interactive to real-time frame rates even at full HD resolution.
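
The vector quantization half of the scheme can be pictured as codebook learning over non-overlapping bricks; the sketch below trains a codebook with k-means, standing in for the paper's quantizer, and omits the lossless base coder and GPU cache entirely.

```python
# Hedged sketch: VQ-encode a volume as one codebook index per 4^3 brick.
# k-means (SciPy) stands in for the paper's quantizer design.
import numpy as np
from scipy.cluster.vq import kmeans2

def quantize_bricks(volume, brick=4, codebook_size=64):
    """Split a cubic volume into brick^3 blocks and VQ-encode them."""
    n = volume.shape[0] // brick
    blocks = (volume.reshape(n, brick, n, brick, n, brick)
                    .transpose(0, 2, 4, 1, 3, 5)
                    .reshape(-1, brick**3).astype(np.float64))
    codebook, indices = kmeans2(blocks, codebook_size, minit='++', seed=0)
    return codebook, indices        # indices: one small int per brick

vol = np.random.default_rng(0).random((32, 32, 32))
cb, idx = quantize_bricks(vol)
print(cb.shape, idx.shape)          # (64, 64) codebook, 512 brick indices
```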


Schardt, Pascal; Goesele, Michael [Reviewer]; Guthe, Stefan [Reviewer]

Ghosting- und Poppingdetektor für Image Based Rendering-Sequenzen

2015

Darmstadt, TU, Master Thesis, 2015

Image-based rendering video sequences are used in a growing number of areas, such as virtual tours, virtual exploration, and TV sports broadcasts. When creating such video sequences, image artifacts can arise from various causes. So far, these artifacts can hardly be detected automatically, nor can their disturbance to human observers be quantified; they therefore have to be laboriously found and rated by hand. This thesis addresses the automatic detection of the most frequent and most disturbing artifacts, "popping" and "ghosting", and the quantification of the quality of detected artifacts for a human observer. Taking previous related work into account, detection algorithms for both artifact types are examined and extended. Since these detectors do not deliver results of the desired quality, and since no detection method has been published for one of the artifact types, new approaches are pursued to implement detection algorithms with more satisfactory results. To determine how strongly the detected artifacts are noticed by a human observer, quality metrics oriented on human perception are defined for both detectors. To verify the quality of these metrics, a user study is conducted to validate their agreement with the human visual system. The result of this work is a detection method for image artifacts in image-based rendering video sequences that allows such sequences to be processed automatically. This makes it possible, for example, to determine whether parts of a video sequence are perceived as very disturbing and whether one should try to improve the video by re-rendering it with more accurate depth maps, which can be obtained from more input images.