Beyond Identity: What Information Is Stored in Biometric Face Templates?
IJCB 2020. IEEE/IAPR International Joint Conference on Biometrics (IJCB), 2020, online
Deeply-learned face representations enable the success of current face recognition systems. Although these representations are intended to encode the identity of an individual, recent works have shown that more information is stored within, such as demographics, image characteristics, and social traits. This threatens the user's privacy, since for many applications these templates are expected to be used solely for recognition purposes. Knowing which information is encoded in face templates helps in developing bias-mitigating and privacy-preserving face recognition technologies. This work aims to support the development of these two branches by analysing face templates with regard to 113 attributes. Experiments were conducted on two publicly available face embeddings. To evaluate the predictability of the attributes, we trained a massive attribute classifier that is additionally able to accurately state its prediction confidence. This allows us to make more nuanced statements about attribute predictability. The results demonstrate that up to 74 attributes can be accurately predicted from face templates. In particular, non-permanent attributes, such as age, hairstyles, hair colors, beards, and various accessories, were found to be easily predictable. Since face recognition systems aim to be robust against these variations, future research might build on this work to develop more understandable privacy-preserving solutions and to build robust and fair face templates.
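As an illustrative sketch only (the embedding size, attribute count, and classifier are assumptions for this example, not the paper's actual model or data), probing face templates for encoded attributes can be pictured as training a multi-label classifier on the embeddings, whose sigmoid outputs double as per-attribute prediction confidences:

```python
import numpy as np

# Hypothetical setup: synthetic "templates" in which binary attributes
# are linearly encoded. Dimensions are assumptions, not the paper's.
rng = np.random.default_rng(0)
EMB_DIM = 128      # assumed embedding size
N_ATTRS = 5        # small stand-in for the paper's 113 attributes

W_true = rng.normal(size=(EMB_DIM, N_ATTRS))
X = rng.normal(size=(1000, EMB_DIM))
Y = (X @ W_true > 0).astype(float)   # binary attribute labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One logistic regression per attribute, trained jointly with plain
# full-batch gradient descent on the cross-entropy loss.
W = np.zeros((EMB_DIM, N_ATTRS))
for _ in range(200):
    P = sigmoid(X @ W)
    W -= 0.1 * X.T @ (P - Y) / len(X)

# The per-attribute probability doubles as a prediction confidence;
# thresholding it at 0.5 yields the attribute prediction itself.
probs = sigmoid(X @ W)
acc = ((probs > 0.5) == Y).mean()
print(f"mean attribute accuracy: {acc:.2f}")
```

Because the synthetic attributes are linearly encoded in the embeddings, even this simple probe recovers them well above chance, which mirrors the paper's observation that templates carry far more than identity.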
Enhancing the Privacy of Face Recognition and its Representations
TU Darmstadt, Master's Thesis, 2019
For these reasons, this work aims at preventing the unauthorized deduction of private soft-biometric characteristics from image representations. Latent features should be extracted from facial images so that sparse feature representations are obtained. The feature representations should be transformed in such a way that the predictive performance of soft-biometric estimators is reduced, while biometric systems remain able to recognize an individual from the transformed representations. These objectives are achieved by the main contribution of this work, the Thomson loss. Using the Thomson loss, a neural network learns a transformation that can be applied to feature representations of facial images. After the feature representations have been transformed, even non-binary soft-biometric estimators can no longer make reliable predictions.
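As a much simplified linear analogue (this is not the Thomson loss itself, whose exact formulation is not reproduced here; the direction, dimensions, and data are assumptions for illustration), the objective can be pictured as removing the direction along which a soft-biometric attribute is encoded, so estimators reading that direction lose their signal while most of the representation survives:

```python
import numpy as np

# Hypothetical setup: synthetic embeddings in which one unit direction
# encodes a soft-biometric attribute.
rng = np.random.default_rng(1)
EMB_DIM = 64  # assumed embedding size

attr_dir = rng.normal(size=EMB_DIM)
attr_dir /= np.linalg.norm(attr_dir)
X = rng.normal(size=(500, EMB_DIM))
y = (X @ attr_dir > 0).astype(int)  # soft-biometric labels

# Transformation: project every embedding onto the orthogonal
# complement of the attribute direction.
P = np.eye(EMB_DIM) - np.outer(attr_dir, attr_dir)
X_priv = X @ P

# The attribute signal along attr_dir is annihilated ...
signal_before = np.abs(X @ attr_dir).mean()
signal_after = np.abs(X_priv @ attr_dir).mean()

# ... while almost all embedding energy (a crude proxy for the
# information recognition relies on) survives the transformation.
retained = (X_priv ** 2).sum() / (X ** 2).sum()
print(signal_before, signal_after, retained)
```

The thesis pursues the same trade-off with a learned, non-linear transformation that also defeats non-binary estimators; this linear projection only illustrates the underlying idea of suppressing attribute information while preserving the rest of the representation.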