Explainability and interpretability in biometrics

Modern biometric systems rely on deep learning methods. The automatically trained models are therefore very accurate, but also highly complex, which makes them difficult to interpret and understand. Yet it is precisely explainability and transparency of decision-making that are needed to build user trust and to identify potential problems at an early stage. Since the processing of sensitive, personal data is becoming prevalent in more and more areas, biometric systems often have a direct impact on people's lives. Purely automated processing and a lack of transparency quickly raise ethical questions here.


To provide more information in a biometric verification system than a bare match/non-match decision, we developed a method that additionally conveys how reliable the model's decision is. This enables users to detect uncertain model decisions and, where appropriate, to have the result checked by a human.
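The following sketch only illustrates the general idea of such a reliability signal, not the method developed in the project: alongside the match/non-match decision, the system reports whether the decision was made with sufficient margin or should be forwarded for human review. The cosine similarity comparison, the thresholds, the margin-based reliability proxy, and the embedding size are illustrative assumptions.

```python
import numpy as np

# Hypothetical values; a real system would calibrate these on validation data.
MATCH_THRESHOLD = 0.5      # similarity above this counts as a match
UNCERTAIN_MARGIN = 0.05    # decisions this close to the threshold are flagged

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_with_reliability(probe: np.ndarray, reference: np.ndarray) -> dict:
    """Return the match decision together with a simple reliability flag.

    Reliability here is just the distance of the similarity score from the
    decision threshold -- an illustrative proxy, not the project's method.
    """
    score = cosine_similarity(probe, reference)
    decision = "match" if score >= MATCH_THRESHOLD else "non-match"
    reliable = abs(score - MATCH_THRESHOLD) >= UNCERTAIN_MARGIN
    return {
        "score": round(score, 3),
        "decision": decision,
        "reliable": reliable,
        "action": "accept automatically" if reliable else "forward to human review",
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probe = rng.normal(size=128)                          # stand-in for a face embedding
    reference = probe + rng.normal(scale=0.8, size=128)   # noisy second capture
    print(verify_with_reliability(probe, reference))
```

In practice, such a reliability estimate could come from richer sources (e.g. score distributions or model uncertainty estimates); the point is that borderline decisions are made visible instead of being silently accepted.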


To the research study


The research on explainability and interpretability in biometrics is part of the Secure Identity Management project within the ATHENE Next Generation Biometrics mission. ATHENE, the National Research Center for Applied Cybersecurity, is funded by the German Federal Ministry of Education and Research (BMBF) and the Hessen Ministry of Science and the Arts (HMWK).

Overview of our biometrics research