Fairness and bias

Biometric systems have become increasingly established in fields such as forensics, border control, and even the private sphere (for example, on smartphones). The decisions these systems inform have an ever greater impact on individuals, which makes it all the more worrying that recent studies have shown them to be biased to some extent. This means that they perform differently depending on demographic characteristics (gender, age, ethnicity) or even non-demographic ones, and can consequently discriminate against individuals. This realization is now motivating new approaches for detecting bias and improving the fairness of biometric systems.

In our research on non-demographic bias, we investigated and showed how a range of factors, such as different beard styles and hair colors, affect recognition accuracy. The result is a map that, for the first time, identifies and depicts which attributes improve and which degrade recognition performance, providing a comprehensive analysis.
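To make this kind of analysis concrete, the following Python sketch shows one way the effect of a single attribute on verification accuracy could be measured: comparing false non-match rates between genuine comparisons with and without the attribute. The attribute name, score distribution, and threshold are illustrative assumptions, not values from our study.

```python
# Minimal sketch: quantify how one non-demographic attribute affects
# verification accuracy. All data here is synthetic and illustrative.
import numpy as np

def fnmr_at_threshold(genuine_scores: np.ndarray, threshold: float) -> float:
    """False non-match rate: fraction of genuine pairs scored below threshold."""
    return float(np.mean(genuine_scores < threshold))

def attribute_effect(genuine_scores: np.ndarray,
                     has_attribute: np.ndarray,
                     threshold: float) -> float:
    """Difference in FNMR between comparison pairs with and without the
    attribute. Positive means the attribute hurts recognition accuracy;
    negative means it helps."""
    with_attr = genuine_scores[has_attribute]
    without_attr = genuine_scores[~has_attribute]
    return (fnmr_at_threshold(with_attr, threshold)
            - fnmr_at_threshold(without_attr, threshold))

# Toy example: 1000 genuine comparison scores and a boolean per-pair mask
# marking whether the subject wears a beard (a hypothetical attribute).
rng = np.random.default_rng(0)
scores = rng.normal(loc=0.7, scale=0.1, size=1000)
beard_mask = rng.random(1000) < 0.3
print(f"Effect of 'beard' on FNMR: {attribute_effect(scores, beard_mask, 0.5):+.4f}")
```

Repeating this measurement over many attributes yields the kind of map described above, with each attribute placed by the sign and size of its effect.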

Find out more in our study


To improve the fairness of a facial recognition system, we developed a group-based approach.

Instead of using a single global decision threshold for all biometric comparisons, this approach automatically forms groups on the basis of biometric characteristics. Before the system is applied, decision thresholds are computed both for the overall data and individually for each group. Whenever two facial images are compared, each is first assigned to its most similar group, and the group's precomputed reference value is used to normalize the final comparison score. This normalization results in fairer treatment of groups that the system would otherwise disadvantage.
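As an illustration, here is a minimal Python sketch of this group-based normalization, built on assumptions of our own: face embeddings compared with cosine similarity, groups formed by k-means clustering of the embeddings, reference values taken as per-group decision thresholds at a fixed false match rate, and the first image's group used for normalization. None of these specifics, including the names GroupwiseNormalizer, fit, and compare, are taken from the published method.

```python
# Sketch of group-based score normalization; clustering method, similarity
# measure, and FMR-based thresholds are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class GroupwiseNormalizer:
    def __init__(self, n_groups: int = 8):
        # Groups are found automatically by clustering the embeddings;
        # k-means is an assumption of this sketch.
        self.kmeans = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
        self.group_thresholds = None  # per-group reference values
        self.global_threshold = None  # threshold over the overall data

    def fit(self, embeddings: np.ndarray, impostor_pairs, fmr: float = 1e-3):
        """Cluster the data into groups and precompute all thresholds.

        impostor_pairs: (i, j) index pairs of different identities, used to
        set each threshold at the target false match rate (fmr).
        """
        labels = self.kmeans.fit_predict(embeddings)
        scores = np.array([cosine(embeddings[i], embeddings[j])
                           for i, j in impostor_pairs])
        self.global_threshold = np.quantile(scores, 1.0 - fmr)
        self.group_thresholds = np.empty(self.kmeans.n_clusters)
        for g in range(self.kmeans.n_clusters):
            # Impostor comparisons whose samples both fall into group g.
            mask = np.array([labels[i] == g and labels[j] == g
                             for i, j in impostor_pairs])
            self.group_thresholds[g] = (np.quantile(scores[mask], 1.0 - fmr)
                                        if mask.any() else self.global_threshold)

    def compare(self, emb_a: np.ndarray, emb_b: np.ndarray) -> bool:
        """Assign the images to their most similar group, normalize the
        comparison score with the group's reference value, then decide."""
        # Using the first image's group is an arbitrary choice of this
        # sketch; the pairing rule is not specified in the text above.
        g = int(self.kmeans.predict(np.stack([emb_a, emb_b]))[0])
        score = cosine(emb_a, emb_b)
        # Shift the score by the gap between the group-specific and the
        # global threshold, then apply the single global decision rule.
        normalized = score - (self.group_thresholds[g] - self.global_threshold)
        return normalized >= self.global_threshold
```

Shifting each score by the difference between its group's threshold and the global one is equivalent to applying the group-specific threshold directly; expressing it as a normalization keeps a single global decision rule, matching the description above.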

Overview of our biometrics research