PAVED: Pareto Front Visualization for Engineering Design
EuroVis 2020. Eurographics / IEEE VGTC Conference on Visualization (EuroVis) <22, 2020, Norrköping, Sweden>
Design problems in engineering typically involve a large solution space and several potentially conflicting criteria. Selecting a compromise solution is often supported by optimization algorithms that compute hundreds of Pareto-optimal solutions, thus informing a decision by the engineer. However, the complexity of evaluating and comparing alternatives increases with the number of criteria that need to be considered at the same time. We present a design study on Pareto front visualization to support engineers in applying their expertise and subjective preferences to select the most preferred solution. We provide a characterization of data and tasks from the parametric design of electric motors. The identified requirements were the basis for our development of PAVED, an interactive parallel coordinates visualization for the exploration of multi-criteria alternatives. We reflect on our user-centered design process, which included iterative refinement with real data in close collaboration with a domain expert as well as a summative evaluation in the field. The results suggest high usability of our visualization as part of a real-world engineering design workflow. Our lessons learned can serve as guidance for future visualization developers targeting multi-criteria optimization problems in engineering design or other domains.
A Visual Analytics Approach to Sensor Analysis for End-of-Line Testing
TU Darmstadt, Master's Thesis, 2019
End-of-Line testing is the final step of modern production lines; it assures the quality of produced units before they are shipped to customers. The main objectives are to automatically decide between functional and defective units and to classify the type of defect. In this thesis, a dataset of three-phase internal rotor engine simulations is used to outline opportunities and challenges of Visual Analytics for End-of-Line testing. First, the simulation data is analyzed visually to understand the influence of the simulation input parameters. Afterwards, features are extracted from the signals using the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT) to represent the different simulations. Principal component analysis (PCA) is applied to further reduce the dimensionality of the data; K-Means is then used to cluster the simulations, and a support vector machine (SVM) performs the classification. Finally, it is discussed which methods are beneficial for the End-of-Line testing domain and how they can be integrated to improve the overall testing process.
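The feature-extraction and classification pipeline described above can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the signals are synthetic stand-ins for the simulated sensor data, the DWT step is omitted (it would require a wavelet library such as PyWavelets), and all parameter choices are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in for the simulated sensor signals:
# 60 runs, 256 samples each, two synthetic classes.
signals = rng.normal(size=(60, 256))
signals[30:] += np.sin(np.linspace(0, 8 * np.pi, 256))
labels = np.array([0] * 30 + [1] * 30)

# Feature extraction: magnitudes of the first DFT coefficients.
features = np.abs(np.fft.rfft(signals))[:, :32]

# Dimensionality reduction with PCA.
reduced = PCA(n_components=5).fit_transform(features)

# Unsupervised grouping with K-Means ...
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)

# ... and supervised classification with an SVM.
clf = SVC().fit(reduced, labels)
print(clf.score(reduced, labels))
```

In the End-of-Line setting, the clusters would be inspected visually to relate them to defect types, while the SVM provides the automatic functional/defective decision.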
Visualizing Time Series Consistency for Feature Selection
Journal of WSCG, Vol. 27, No. 1-2, 2019. Proceedings
International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) <27, 2019, Plzen, Czech Republic>
Feature selection is an effective technique to reduce dimensionality, for example when the condition of a system is to be understood from multivariate observations. The selection of variables often involves a priori assumptions about underlying phenomena. To avoid the associated uncertainty, we aim at a selection criterion that only considers the observations. For nominal data, consistency criteria meet this requirement: a variable subset is consistent if no two observations with equal values on the subset have different output values. Such a model-agnostic criterion is also desirable for forecasting. However, consistency has not yet been applied to multivariate time series. In this work, we propose a visual consistency-based technique for analyzing a time series subset's discriminating ability with respect to characteristics of an output variable. An overview visualization conveys the consistency of output progressions associated with comparable observations. Interaction concepts and detail visualizations provide a steering mechanism towards inconsistencies. We demonstrate the technique's applicability in two real-world scenarios. The results indicate that the technique is open to any forecasting task that involves multivariate time series, because analysts could assess the combined discriminating ability without any knowledge about underlying phenomena.
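The consistency criterion for nominal data mentioned above can be sketched in a few lines. The data and variable names here are hypothetical examples, not from the paper:

```python
def is_consistent(observations, outputs, subset):
    """A subset of variables is consistent if no two observations that
    agree on all subset variables have different output values."""
    seen = {}
    for obs, out in zip(observations, outputs):
        key = tuple(obs[v] for v in subset)
        if key in seen and seen[key] != out:
            return False
        seen.setdefault(key, out)
    return True

# Hypothetical nominal observations with a binary output.
data = [
    {"outlook": "sunny", "wind": "weak"},
    {"outlook": "sunny", "wind": "strong"},
    {"outlook": "rain",  "wind": "weak"},
]
labels = ["no", "yes", "yes"]

print(is_consistent(data, labels, ["outlook"]))          # False: two 'sunny' rows disagree
print(is_consistent(data, labels, ["outlook", "wind"]))  # True
```

The paper's contribution is extending this idea from nominal outputs to output *progressions* of multivariate time series, which this sketch does not cover.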
Towards Visual Feature Selection for Multivariate Time Series Data
University of Magdeburg, Master's Thesis, 2017
Time series analysis and modeling are essential tools for the transfer of knowledge across time, also called forecasting. This often involves the task of identifying the smallest number of features that are most useful for building a model that accurately forecasts a target without suffering from dimensionality issues. This is challenging, because time series involve many different characteristics that need to be captured by a model. Traditional wrapper approaches are bound to the actual learning algorithm that builds the model, which requires computational effort and limits their range of application. Filter methods are independent of the future model, but mostly take the form of a black-box algorithm, which does not allow analysts to monitor and interactively guide the feature selection. In this thesis, the filter concept for multivariate time series is advanced by making use of human perception and interpretation abilities for the independent evaluation of a feature subset's quality. To ensure independence, we derive a quality criterion from a general assumption about the relationship between input and output in a valid model. An overview visualization enables analysts to visually assess its validity and to steer the analysis towards regions of interest, where the feature subset's quality is not sufficient. Critical regions can be analyzed in detail using the surrounding system of linked views. Findings contribute to an interactive refinement of the feature subset, which can also incorporate the analyst's expertise. We evaluate the proposed method by applying it to real-world sensor data and an artificial time-oriented dataset. The analyst was able to quickly distinguish well-explained regions from critical parts of the feature space, for which the identification of an additional explanatory feature could be tackled straightaway.
Due to visualization constraints, the approach can handle only two-dimensional feature subsets, which are taken as input to perform one feature selection iteration. Still, it may be an inspiring step toward universal dimensionality reduction that leverages human strengths.
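The idea of scoring a low-dimensional feature subset by how uniquely its states determine the output can be sketched numerically. This is an illustrative assumption, not the thesis's actual criterion or visualization: continuous inputs and output are discretized by quantiles, and the score is the fraction of input states mapping to a single output state.

```python
import numpy as np

def subset_consistency(inputs, output, n_bins=4):
    """Fraction of discretized input states that map to a unique
    discretized output value (1.0 = fully consistent subset)."""
    def discretize(x):
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(x, edges)
    keys = list(zip(*(discretize(c) for c in inputs)))
    out = discretize(output)
    mapping = {}
    for k, o in zip(keys, out):
        mapping.setdefault(k, set()).add(o)
    consistent = sum(1 for v in mapping.values() if len(v) == 1)
    return consistent / len(mapping)

# Synthetic example: a phase-shifted sine is well explained by
# (sin, cos) of the same base signal, but not by pure noise.
t = np.linspace(0, 4 * np.pi, 400)
f1, f2 = np.sin(t), np.cos(t)
target = np.sin(t + 0.1)
noise = np.random.default_rng(1).normal(size=400)

print(subset_consistency([f1, f2], target))
print(subset_consistency([noise], target))
```

In the thesis, such inconsistent regions are not reduced to a single score but shown in an overview visualization, so the analyst can decide where an additional explanatory feature is needed.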