Investigating Large Curved Interaction Devices
Personal and Ubiquitous Computing
Large interactive surfaces enable novel forms of interaction for their users, particularly in terms of collaborative interaction. During longer interactions, the ergonomic factors of interaction systems have to be taken into consideration. Using the full interaction space may require considerable motion of the arms and upper body over a prolonged period of time, potentially causing fatigue. In this work, we present Curved, a large-surface interaction device, whose shape is designed based on the natural movement of an outstretched arm. It is able to track one or two hands above or on its surface by using 32 capacitive proximity sensors. Supporting both touch and mid-air interaction can enable more versatile modes of use. We use image processing methods for tracking the user's hands and classify gestures based on their motion. Virtual reality is a potential use case for such interaction systems and was chosen for our demonstration application. We conducted a study with ten users to test the gesture tracking performance, as well as user experience and user preference for the adjustable system parameters.
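The abstract above describes tracking hands over an array of 32 capacitive proximity sensors. As an illustrative sketch only (the paper's actual pipeline uses image-processing methods; the 4×8 grid layout, threshold, and all names below are assumptions), a hand position can be estimated as the activation-weighted centroid of the sensor positions:

```python
import numpy as np

def localize_hand(readings, positions, baseline, threshold=0.1):
    """Estimate a hand position above a sensor array as the
    activation-weighted centroid of the sensor positions.

    readings  : (N,) current sensor values
    positions : (N, 2) x/y coordinates of each sensor
    baseline  : (N,) sensor values with no hand present
    """
    activation = np.clip(np.asarray(readings) - baseline, 0.0, None)
    if activation.max() < threshold:
        return None  # no hand above the surface
    weights = activation / activation.sum()
    return weights @ positions  # (2,) estimated hand position
```

With a one-hot activation directly above one sensor, the estimate collapses to that sensor's coordinates; overlapping activations between sensors interpolate smoothly, which is what makes coarse sensor grids usable for continuous tracking.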
SurfaceVox - Exploring Sound Control for Gesture-Tracking Interactive Surfaces
14th International Conference on Signal-Image Technology & Internet-Based Systems
International Conference on Signal Image Technology & Internet-Based Systems (SITIS) <14, 2018, Las Palmas de Gran Canaria, Spain>
Almost 100 years ago, the thereminvox was the first electronic musical instrument that could be controlled without contact. By precisely positioning both hands, the player controls the pitch and volume of a sine tone by changing the distance to two antennas. We present SurfaceVox, which combines the technology behind the thereminvox with an additional acoustic sensor to create a musical instrument that unites mid-air and touch gesture control. We explore various scenarios of sound synthesis and combine the system with an augmented reality application. SurfaceVox has been evaluated in a study with thirteen users for input precision, perceived workload, as well as the pragmatic and hedonic qualities of the application.
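The theremin-style control described above (hand distances to two antennas controlling pitch and volume) can be sketched as follows. The distance ranges, the four-octave exponential pitch mapping, and all function names are illustrative assumptions, not the mapping actually used in SurfaceVox:

```python
import numpy as np

def distance_to_pitch(d, d_min=0.05, d_max=0.6, f_low=110.0, f_high=1760.0):
    """Map hand-to-antenna distance (metres) to frequency (Hz).
    Closer hand -> higher pitch; exponential mapping over four octaves,
    which sounds perceptually linear to the player."""
    t = np.clip((d - d_min) / (d_max - d_min), 0.0, 1.0)
    return f_high * (f_low / f_high) ** t  # t=0 -> f_high, t=1 -> f_low

def distance_to_volume(d, d_min=0.05, d_max=0.4):
    """Map distance to amplitude in [0, 1]; closer hand -> quieter,
    as on the classical theremin volume antenna."""
    return float(np.clip((d - d_min) / (d_max - d_min), 0.0, 1.0))

def synthesize(freq, amp, duration=0.1, rate=44100):
    """Render one short block of the controlled sine tone."""
    t = np.arange(int(duration * rate)) / rate
    return amp * np.sin(2 * np.pi * freq * t)
```

In a live system, short blocks like these would be rendered continuously as the tracked hand distances change.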
3D-printed Electrodes for Electric Field Sensing Technologies
Darmstadt, TU, Master Thesis, 2017
Electric field sensing and capacitive sensing have been intensively explored research topics for over a century. Combined with the rising popularity of rapid prototyping technologies, such as affordable all-in-one micro-controller boards and especially fused filament fabrication 3D printing, new possibilities emerge. 3D printing drives the ambition of custom-designed objects with fully integrated and unobtrusive electronics. Conductive 3D-printing materials (filaments) can be used to create electrodes for electric field sensing, and these electrodes can be 3D-printed as an integral part of the overall object. However, none of the previous work examines the properties of these conductive materials, the chosen 3D-printing configurations, and patterns regarding their sensing performance and costs. This thesis provides a first insight into the interdependency between the chosen 3D-printing parameters and the overall sensing performance. For this, 30 3D-printed electrodes were created from graphene filament and evaluated against one copper electrode and a placebo electrode. The evaluation was performed with a custom-made measuring toolkit, the CapLiper, which was itself evaluated for proper sensing behavior. The results show that 3D-printed electrodes can compete with the sensing performance of copper electrodes, with some even exceeding it. Using these results, as well as lessons learned from creating two different prototypes, the thesis establishes best practices and gives an outlook on potential future work in this domain.
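One way to compare the sensing performance of electrodes, as the thesis does with its 30 printed electrodes against copper and placebo references, is a deflection-to-noise metric. The sketch below is a hypothetical illustration; the metric, the dB scaling, and all names are assumptions, not the CapLiper's actual measurement procedure:

```python
import numpy as np

def electrode_snr_db(baseline_samples, approach_samples):
    """Hypothetical electrode-comparison metric: the reading deflection
    caused by an approaching hand, relative to the idle electrode's
    baseline noise, expressed in decibels."""
    baseline_samples = np.asarray(baseline_samples, dtype=float)
    approach_samples = np.asarray(approach_samples, dtype=float)
    deflection = abs(approach_samples.mean() - baseline_samples.mean())
    noise = baseline_samples.std()
    if noise == 0.0:
        return float("inf")  # noiseless baseline
    return 20.0 * np.log10(deflection / noise)
```

A metric like this makes electrodes with different absolute capacitances comparable, since only the relative deflection against each electrode's own noise floor enters the score.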
Curved - Free-Form Interaction Using Capacitive Proximity Sensors
Procedia Computer Science [online]
International Conference on Ambient Systems, Networks and Technologies (ANT) <8, 2017, Madeira, Portugal>
Large interactive surfaces have found increased popularity in recent years. However, with increased surface size, ergonomics become more important, as interacting for extended periods may cause fatigue. Curved is a large-surface interaction device designed to follow the natural movement of an outstretched arm performing gestures. It tracks one or two hands above the surface using an array of capacitive proximity sensors and supports both touch and mid-air gestures. This requires specific object tracking methods and synchronized measurements from 32 sensors. We have created an example application for seated users wearing a virtual reality headset, who may benefit from haptic feedback and ergonomically shaped surfaces. A prototype with adaptive curvature has been created that allows us to evaluate gesture recognition performance and different surface inclinations.
Invisible Human Sensing in Smart Living Environments Using Capacitive Sensors
Ambient Assisted Living
Ambient Assisted Living (AAL) <9, 2016, Frankfurt, Germany>
Smart living environments aim to support their inhabitants in daily tasks by detecting their needs and dynamically reacting accordingly. This generally requires several sensor devices whose acquired data are combined to assess the current situation. Capturing the full range of situations necessitates many sensors. Often cameras and motion detectors are used, which are rather large and difficult to hide in the environment. Capacitive sensors measure changes in the electric field and can operate through any non-conductive material. They have gained popularity in research in the last few years, with some systems becoming available on the market. In this work we introduce how these sensors can be used to sense humans in smart living environments, presenting applications in situation recognition and human-computer interaction. We discuss opportunities and challenges of capacitive sensing and give an outlook on future scenarios.
Investigating Low-Cost Wireless Occupancy Sensors for Beds
Distributed, Ambient, and Pervasive Interactions
International Conference on Distributed, Ambient and Pervasive Interactions (DAPI) <4, 2016, Toronto, Canada>
Occupancy sensors are used in care applications to measure the presence of patients on beds or chairs. Sometimes it is necessary to swiftly alert helpers when patients try to get up, in order to prevent falls. Most systems on the market are based on pressure mats that register changes in compression, which restricts their use to placement below soft materials. In this work we investigate two categories of occupancy sensors, with the requirements of wireless communication and low system cost: capacitive proximity sensors and accelerometers placed below the furniture. We outline two prototype systems and methods for detecting occupancy from the sensor data. Using object detection and activity recognition algorithms, we are able to distinguish the required states and communicate them to a remote system. The systems were evaluated in a study with ten users and two different beds, reaching a classification accuracy between 79% and 96%.
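As a toy illustration of how occupancy states might be distinguished from below-furniture sensor data: a capacitive channel indicates presence, an accelerometer channel indicates motion. The state set, thresholds, and names below are assumptions, not the paper's actual object detection and activity recognition algorithms:

```python
import numpy as np

EMPTY, OCCUPIED, GETTING_UP = "empty", "occupied", "getting-up"

def classify_occupancy(cap_window, acc_window, cap_baseline,
                       presence_thresh=5.0, motion_thresh=0.2):
    """Toy state classifier for a bed-mounted sensor pair.

    cap_window   : recent capacitive readings (higher -> body closer)
    acc_window   : recent accelerometer magnitudes from the bed frame
    cap_baseline : capacitive level of the empty bed
    """
    presence = np.mean(cap_window) - cap_baseline  # body nearby?
    motion = np.std(acc_window)                    # frame vibrating?
    if presence < presence_thresh:
        return EMPTY
    if motion > motion_thresh:
        return GETTING_UP  # occupied and moving strongly
    return OCCUPIED
```

A real deployment would calibrate the baseline per bed and smooth the state over time; the point here is only that the "getting up" alert needs both channels, since presence alone cannot distinguish lying still from sitting up.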
Unsichtbare Erkennung menschlicher Aktivitäten in Smart Living Umgebungen mit Kapazitiven Sensoren
Zukunft Lebensräume 2016
Zukunft Lebensräume <2016, Frankfurt/Main, Germany>
Smart living environments aim to support their inhabitants in coping with everyday tasks. Needs and requirements are detected dynamically and an appropriate reaction is generated. This requires multiple sensors whose data are intelligently combined in order to recognize a wide variety of situations. Cameras and motion detectors are frequently used for this purpose, but they are difficult to integrate invisibly into the environment. Capacitive sensors measure changes in electric fields and can take measurements through non-conductive materials. Their popularity in research and on the market has risen in recent years; the finger-controlled touchscreen is a particularly popular example. In this work we introduce this type of sensing and present how it can be used to measure human activities in smart living environments. We present various applications in the areas of activity recognition and human-computer interaction, discuss opportunities and challenges of capacitive sensing, and outline future research directions.
CapSeat - Capacitive Proximity Sensing for Automotive Activity Recognition
The 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. Proceedings
International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI) <7, 2015, Nottingham, United Kingdom>
Inattentiveness is one of the major causes of traffic accidents. Advanced car safety systems try to mitigate this by detecting potential signs of distraction or tiredness and providing alerts to the driver. In this paper we present CapSeat, a car seat equipped with integrated capacitive proximity sensors that measure a wide range of physiological parameters of the driver. This can support safety systems by detecting inattentiveness and increase passive safety by facilitating suitable seat adjustments and posture detection. We present a sensor electrode layout suitable for detecting the necessary parameters, as well as processing methods that acquire multiple physiological parameters from the sensor data using a variety of algorithms. A prototype of the system is presented that was evaluated for all detectable parameters in a proof-of-concept study, achieving a classification precision between 95% and 100%.
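As one hedged example of deriving a physiological parameter from seat sensor data (the paper does not specify this method; the band limits, sampling rate, and names are assumptions), a respiration rate could be read off as the dominant spectral peak of a slowly varying capacitive signal:

```python
import numpy as np

def respiration_rate(signal, rate=20.0, f_min=0.1, f_max=0.5):
    """Estimate breaths per minute as the dominant spectral peak in
    the physiologically plausible band (here 6-30 breaths/min).

    signal : capacitive samples from a backrest electrode
    rate   : sampling rate in Hz
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove the static posture offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
    band = (freqs >= f_min) & (freqs <= f_max)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                     # Hz -> breaths per minute
```

Restricting the search to a plausible band is what keeps engine vibration and posture drift, which fall outside it, from being mistaken for breathing.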
Gesture-Based Configuration of Location Information in Smart Environments with Visual Feedback
Distributed, Ambient, and Pervasive Interactions
International Conference on Distributed, Ambient and Pervasive Interactions (DAPI) <3, 2015, Los Angeles, CA, USA>
The location of objects and devices in a smart environment is an important piece of information for enabling advanced and sophisticated use cases in interaction and for supporting the user in daily activities and emergency situations. To acquire this information, we propose a semi-automatic approach to configure the location, size, and orientation of objects in the environment together with their semantic meaning. This configuration is typically done with graphical user interfaces showing either a list of objects or a representation of objects in the form of 2D or 3D virtual models. However, there is a gap between the real physical world and the abstract virtual representation that needs to be bridged by the users themselves. Therefore, we propose visual feedback directly in the physical world using a robotic laser pointing system.
An Optical Guiding System for Gesture Based Interactions in Smart Environments
Distributed, Ambient, and Pervasive Interactions
International Conference on Distributed, Ambient and Pervasive Interactions (DAPI) <2, 2014, Heraklion, Crete, Greece>
Using gestures to control Ambient Intelligence environments can result in mismatches between the user's intention and the system's perception of the gesture. One way to cope with this problem is to provide the user with instant feedback on what the system has perceived. In this work, we present an approach for providing visual feedback to users of Ambient Intelligence systems that rely on gestures to control individual devices within their environments. This paper extends our previous work on this topic and introduces several enhancements to the system.
inDAgo - ein Mobilitätsunterstützungssystem für Senioren auf dem Weg in die Praxis
Wohnen - Pflege - Teilhabe. Besser leben durch Technik
Ambient Assisted Living (AAL) <7, 2014, Berlin, Germany>
Inclusion and social participation are central topics in Ambient Assisted Living (AAL) research, and mobility is a prerequisite for social participation. Within the initiative "Mobil bis ins hohe Alter" ("Mobile into Old Age"), the German Federal Ministry of Education and Research (BMBF) funds several national research projects that aim to develop mobility support systems for senior citizens. One of these projects is the inDAgo project, which, as of autumn 2013, is about to present its results. In this contribution we present the concept of inDAgo and the current state of development of the system.
Context-Based Bounding Volume Morphing in Pointing Gesture Application
Human-Computer Interaction: Part IV
International Conference on Human-Computer Interaction (HCII) <15, 2013, Las Vegas, NV, USA>
In the last few years, the number of intelligent systems has grown rapidly, and classical interaction devices such as mouse and keyboard are being replaced in some use cases. Novel, goal-based interaction systems, e.g. based on gesture and speech, allow natural control of various devices. However, these are prone to misinterpretation of the user's intention. In this work we present a method for supporting goal-based interaction using multimodal interaction systems. By combining speech and gesture, we are able to compensate for the uncertainties of both interaction methods, thus improving intention recognition. Using a prototypical system, we have demonstrated the usability of this approach in a qualitative evaluation.
Providing Visual Support for Selecting Reactive Elements in Intelligent Environments
Transactions on Computational Science XVIII
International Conference on Cyberworlds (CW) <11, 2012, Darmstadt, Germany>
When realizing gestural interaction in a typical living environment, there is often an offset between the user-perceived and machine-perceived direction of pointing, which can hinder reliable selection of elements in the surroundings. This work presents a support system that provides visual feedback to a freely gesturing user, enabling reliable selection of and interaction with reactive elements in intelligent environments. We have created a prototype that showcases this feedback method, based on gesture recognition using the Microsoft Kinect and visual support provided by a custom-built laser robot. Finally, an evaluation was performed to assess the efficiency of such a system, acquire usability feedback, and determine potential learning effects for gesture-based interaction.
Graphical User Interface for an Elderly Person with Dementia
Constructing Ambient Intelligence
International Joint Conference on Ambient Intelligence (AmI) <2, 2011, Amsterdam, The Netherlands>
Developing graphical user interfaces for elderly people with dementia requires special care for the needs of the target group. This paper addresses the requirements and the development of a graphical user interface for elderly people with dementia, with a focus on a calendar-like application that supports the elderly person in everyday life. Furthermore, it describes the design of an interface for caregivers to enter data into the system.
Visual Support System for Selecting Reactive Elements in Intelligent Environments
2012 International Conference on Cyberworlds. Proceedings
International Conference on Cyberworlds (CW) <11, 2012, Darmstadt, Germany>
Concerning gestural interaction in realistic environments, there is often an offset between the perceived and actual direction of pointing that makes it difficult to reliably select elements in the environment. This work presents a visual support system that provides feedback to a user gesturing freely in an environment, thus enabling reliable selection of and interaction with reactive elements in intelligent environments. A prototype has been created that showcases this feedback method, based on gesture recognition using the Microsoft Kinect and feedback provided by a custom-built laser robot. Finally, an evaluation was performed to assess the efficiency of such a system, acquire usability feedback, and determine potential learning effects for gesture-based interaction.
Visual-aided Selection of Reactive Elements in Intelligent Environments
Darmstadt, TU, Bachelor Thesis, 2012
Since the vision of the vanishing, ubiquitous computer was formulated in the 1990s, Intelligent Environments have become the focus of many research efforts. Interaction with Intelligent Environments preferably follows the multi-modal interaction paradigm, as in notable research on natural interaction that allows communication through facial expressions, voice commands, and gestures. Gestural interaction, in terms of pointing for selection, is the main focus of this thesis. Although regarded as intuitive for the user, it suffers from a significant offset between the user's intention and the system's interpretation. This offset makes interaction with reactive elements in Intelligent Environments unintuitive and hardly predictable if no guidance is provided to the user. This thesis describes the challenges of the pointing-for-selection process, including the drawbacks of current guiding systems, and presents a concept for solving these challenges with a ubiquitous visual guiding system. The system supports marker-free, full-body gestural interaction in Intelligent Environments by providing a visual cue at the location the user is currently pointing at. We expect this system to place users in a situation where they are able to correct their pointing themselves, without extensive training of user or machine, resulting in a more accurate and intuitive selection of reactive elements in Intelligent Environments. A prototype system, the E.A.G.L.E., was built to realize this concept using a robotic laser pointing system. A comparative evaluation with a group of 20 subjects was performed to confirm our expectations regarding the intention-to-interpretation offset and the effects of the self-correction process caused by the visual cue, resulting in a significant gain in accuracy.
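The intention-to-interpretation offset can be made concrete with a small geometric sketch: intersect the pointing ray with the wall plane to obtain the spot where a laser cue would appear, then measure its distance from the intended target. All geometry, coordinate frames, and names below are illustrative assumptions, not the E.A.G.L.E. implementation:

```python
import numpy as np

def pointing_spot(origin, direction, wall_z):
    """Intersect a pointing ray (e.g. shoulder -> hand) with a wall
    plane at z = wall_z; this is where a laser cue would be shown."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if direction[2] == 0.0:
        return None  # ray parallel to the wall plane
    t = (wall_z - origin[2]) / direction[2]
    if t <= 0.0:
        return None  # wall is behind the user
    return origin + t * direction

def offset_on_wall(spot, target):
    """Euclidean offset (metres) between the displayed cue and the
    intended target -- the error the visual cue lets users correct."""
    return float(np.linalg.norm(np.asarray(spot) - np.asarray(target)))
```

Showing the cue at `pointing_spot` closes the loop: the user sees the offset directly on the wall and corrects the arm pose, so neither the user nor the tracking system needs explicit calibration.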