Automated Fall Risk Assessment of Elderly Using Wearable Devices
Journal of Rehabilitation and Assistive Technologies Engineering (RATE)
Introduction: Falls cause major expenses in the healthcare sector. We investigate whether fall risk assessment can be supported by algorithms that automatically evaluate standardized fall-risk-related tests via wearable devices. Methods: In a study, 13 participants repeatedly performed the standardized 6-Minute Walk Test, the Timed-Up-and-Go Test, the 30-Second Sit-to-Stand Test, and the 4-Stage Balance Test, producing 226 tests in total. Automated assessments computed from wearable-device data, as well as a visual analysis of the recorded data streams, were compared against the observational assessments made by physiotherapists. Results: There was high congruence between the automated assessments and the ground truth for all four test types (ranging from 78.15% to 96.55%), with all deviations well within one standard deviation of the ground truth. Fall risk (assessed by questionnaire) correlated with the individual tests. Conclusions: The automated fall risk assessment using wearable devices and algorithms matches the validity of the ground truth, thus providing a resource-efficient alternative to the labor-intensive observational assessment, while minimizing the risk of human error. No single test can predict overall fall risk; instead, a much more complex model with additional input parameters (e.g., fall history, medication, etc.) is needed.
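The abstract does not specify the timing algorithms, but the core idea of automating a test such as the Timed-Up-and-Go from a wearable accelerometer can be sketched as follows: since the subject is at rest before and after the test, the test duration is the span during which the dynamic acceleration exceeds a movement threshold. Function name, sampling rate, and threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def tug_duration(acc, fs=50.0, g=9.81, threshold=0.5):
    """Estimate Timed-Up-and-Go duration from a tri-axial
    accelerometer stream (shape N x 3, in m/s^2).

    Duration is taken as the span between the first and last
    samples whose dynamic acceleration magnitude exceeds a
    threshold (here 0.5 m/s^2 above gravity)."""
    mag = np.linalg.norm(acc, axis=1)          # total acceleration
    dynamic = np.abs(mag - g)                  # remove gravity offset
    active = np.where(dynamic > threshold)[0]  # samples with movement
    if active.size == 0:
        return 0.0
    return (active[-1] - active[0]) / fs       # seconds

# synthetic check: 2 s rest, 8 s of movement, 2 s rest at 50 Hz
fs = 50
rest = np.tile([0.0, 0.0, 9.81], (2 * fs, 1))
move = np.tile([0.0, 0.0, 9.81], (8 * fs, 1)) + np.random.uniform(1, 2, (8 * fs, 3))
signal = np.vstack([rest, move, rest])
print(round(tug_duration(signal, fs=fs)))  # ≈ 8 seconds
```

A production algorithm would additionally need filtering, orientation handling, and phase segmentation (sit-to-stand, walk, turn), which this sketch omits.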
Transforming Seismocardiograms Into Electrocardiograms by Applying Convolutional Autoencoders
2020 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings
45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), online
Electrocardiograms constitute the key diagnostic tool for cardiologists. While their diagnostic value remains unparalleled, electrode placement is prone to errors, and adhesive electrodes pose a risk of skin irritation and may detach during long-term measurements. Heart.AI presents a fundamentally new approach, transforming motion-based seismocardiograms into electrocardiograms interpretable by cardiologists. Measurements are conducted simply by placing a sensor on the user’s chest. To generate the transformation model, we trained a convolutional autoencoder on the publicly available CEBS dataset. The transformed ECG strongly correlates with the ground truth (r=.94, p<.01), and important features (number of R-peaks, QRS-complex durations) are modeled realistically (Bland-Altman analyses, p>0.12). On a 5-point Likert scale, 15 cardiologists rated the morphological and rhythmological validity as high (4.63/5 and 4.8/5, respectively). Our electrodeless approach solves crucial problems of ECG measurements while being scalable, accessible, and inexpensive. It contributes to telemedicine, especially in low-income and rural regions worldwide.
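The paper does not publish its architecture here, but the signal-to-signal transformation it describes can be sketched as a 1-D convolutional autoencoder: an encoder compresses a seismocardiogram window and a decoder reconstructs the corresponding ECG window, trained on paired SCG/ECG segments (e.g., from CEBS) with a reconstruction loss. All layer sizes below are illustrative placeholders.

```python
import torch
import torch.nn as nn

class SCG2ECG(nn.Module):
    """Minimal 1-D convolutional autoencoder sketch that maps a
    seismocardiogram window to an ECG window of the same length.
    Layer sizes are illustrative, not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
        )

    def forward(self, scg):
        return self.decoder(self.encoder(scg))

model = SCG2ECG()
scg_window = torch.randn(8, 1, 512)   # batch of 8 one-channel SCG windows
ecg_window = model(scg_window)        # same shape as the input windows
# training would minimize e.g. nn.MSELoss()(ecg_window, ecg_target)
```

The autoencoder framing matters because input and output are time-aligned signals of equal length, so a symmetric encoder/decoder with transposed convolutions preserves the temporal resolution needed for R-peak and QRS analysis.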
Detection of surface properties using image recognition techniques using deep learning algorithms
Rostock, Univ., Master Thesis, 2019
The era of Artificial Intelligence has brought great advancements in the field of robotics. Deep convolutional neural networks, a branch of artificial intelligence, have succeeded in solving many computer vision problems. We therefore chose to use ConvNets to detect vegetation types and the state of the vegetation according to its nutrition content. To implement this, we adopted multi-task learning, where the same model first detects the type of vegetation and then the nutrition level. We designed multiple architectures and ultimately used a modified VGGNet model to classify the nutrition level and a custom architecture to classify the type of vegetation. As pioneers in implementing this task with ConvNets, we created our own dataset: two vegetation patches were planted, with nutrition withheld from one patch while the second patch received regular nutrition. Images were extracted from both patches at regular intervals and divided into classes for each consecutive week after the nutrition was restricted. The data comprises 5 classes with 2000 images each, corresponding to the state of the unnourished vegetation after each consecutive week. In this work, the possibilities for improving accuracy while taking time and resources into account are investigated and discussed, and the results obtained with different architectures and hyper-parameters are compared.
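The multi-task setup described above (one model, two classification objectives) is commonly realized as a shared convolutional trunk feeding separate heads. The sketch below illustrates that pattern with placeholder layer sizes; it is not the thesis's VGGNet-based architecture, and the number of vegetation types is an assumption (only the 5 nutrition classes are given in the text).

```python
import torch
import torch.nn as nn

class VegetationNet(nn.Module):
    """Illustrative multi-task CNN: a shared convolutional trunk
    feeds two heads, one for vegetation type and one for the
    nutrition level (5 classes, as in the dataset)."""
    def __init__(self, n_types=3, n_nutrition=5):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.type_head = nn.Linear(32, n_types)
        self.nutrition_head = nn.Linear(32, n_nutrition)

    def forward(self, x):
        features = self.trunk(x)                 # shared representation
        return self.type_head(features), self.nutrition_head(features)

model = VegetationNet()
images = torch.randn(4, 3, 64, 64)               # batch of RGB crops
type_logits, nutrition_logits = model(images)
# joint training loss: cross-entropy(type) + cross-entropy(nutrition)
```

Sharing the trunk lets both tasks learn from the same low-level texture and color features, which is the usual motivation for multi-task learning on related labels.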
Unobtrusive Vital Data Recognition by Robots to Enhance Natural Human–Robot Communication
Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction
The ongoing technical improvement of robotic assistants, such as robot vacuum cleaners, telepresence robots, or shopping assistance robots, requires a powerful but unobtrusive form of communication between humans and robots. As the capabilities of robots expand, they need to perceive as many communication channels as possible. The modalities of text- or speech-based communication therefore have to be extended with non-verbal channels such as body language and direct physiological feedback. To identify the feelings or bodily reactions of their interlocutor, we suggest that robots use unobtrusive vital data assessment to recognize the emotional state of the human. We present the concept of vital data recognition through the robot touching and scanning body parts, whereby the robot measures tiny movements of the skin, muscles, or veins caused by the pulse and heartbeat. Furthermore, we introduce a camera-based, non-contact optical heart rate recognition method that robots can use to identify humans’ reactions during human-robot communication or interaction. For heart rate and heart rate variability detection, we used standard cameras (webcams) located inside the robot’s eyes. Although camera-based vital sign identification has been discussed in previous research, we noticed that certain limitations with regard to real-world applications still exist; we identified artificial light sources as one of the main influencing factors. We therefore propose strategies that aim to improve natural communication between social robots and humans.
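The chapter does not detail its algorithm, but camera-based heart rate recognition of this kind is typically a remote-photoplethysmography pipeline: average a color channel over the face region per frame, then find the dominant frequency in the plausible pulse band. The following is a minimal sketch of that idea; the function name, band limits, and frame rate are assumptions.

```python
import numpy as np

def heart_rate_from_ppg(green_means, fps=30.0):
    """Estimate heart rate from the per-frame mean green-channel
    intensity of a face region (remote photoplethysmography).
    Simplified sketch: detrend, then pick the dominant frequency
    in a typical pulse band (0.7-4 Hz, i.e. 42-240 bpm)."""
    signal = green_means - np.mean(green_means)      # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)           # plausible pulse range
    peak = freqs[band][np.argmax(spectrum[band])]    # dominant frequency
    return peak * 60.0                               # beats per minute

# synthetic check: a 72 bpm pulse (1.2 Hz) plus noise, 10 s at 30 fps
t = np.arange(0, 10, 1 / 30.0)
fake = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
print(round(heart_rate_from_ppg(fake)))  # ≈ 72 bpm
```

The band-pass restriction is what makes the method somewhat robust against slow illumination drift, though, as the abstract notes, flickering artificial light sources remain a major real-world confounder.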