Anomaly-based Face Search
Darmstadt, TU, Bachelor Thesis, 2020
Biometric face identification refers to the use of face images for the automatic identification of individuals. Due to the high performance achieved by current face search algorithms, these algorithms are useful tools, e.g. in criminal investigations. Based on the facial description of a witness, the number of suspects can be significantly reduced. However, modern face image retrieval approaches require either an accurate verbal description or an example image of the suspect's face, and eyewitness testimonies can seldom provide this level of detail. Moreover, while eyewitness recall is one of the most convincing pieces of evidence, it is also one of the most unreliable. Hence, exploiting the more reliable, but vague, memories about distinctive facial features directly, such as obvious tattoos, scars or birthmarks, should be considered to filter potential suspects in a first step. This might reduce the risk of wrongful convictions caused by retroactively inferred details in the witness' recall in subsequent steps. Therefore, this thesis proposes an anomaly-based face search solution that aims at enabling a reduction of the search space solely based on the locations of anomalous facial features. We developed an unsupervised image anomaly detection approach based on a cascaded image completion network that allows the rough localization of anomalous regions in face images. (1) This completion model is assumed to fill in deleted regions with probable values conditioned on all the remaining parts of the face image. (2) The reconstruction errors of this model were used as an anomaly signal to create a grid of potential anomaly locations in a given face image. (3) These grids, in the form of a thresholded matrix, were then used to search for the most relevant images. We evaluated the retrieval model on a preprocessed subset of 17,855 images of the VGGFace2 dataset.
The three main contributions of this work are (1) a cascaded face image completion approach, (2) an unsupervised inpainting-based anomaly localization approach, and (3) a query-by-anomaly face image retrieval approach. The face inpainting achieved promising results compared to other recent completion approaches, even though we did not leverage any adversarial component, which simplifies the entire training procedure. These inpaintings enabled the rough localization of anomalies in face images. The proposed retrieval model achieved a 60% hit rate at a penetration rate of about 20% over a gallery of 17,855 images. Despite the limitations of the proposed search approach, the results revealed the potential benefits of using the more reliable anomaly information to reduce the search space, instead of entirely relying on the elicitation of detailed perpetrator descriptions, either in textual or in visual form.
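The grid-based retrieval idea described above can be sketched in a few lines: pool a per-pixel reconstruction-error map into a coarse binary anomaly grid, then rank a gallery by grid overlap. The pooling rule, threshold, and overlap score below are illustrative assumptions, not the thesis' exact parameters.

```python
import numpy as np

def anomaly_grid(err_map, grid=8, thresh=0.5):
    """Pool a per-pixel reconstruction-error map into a grid x grid
    binary matrix of potential anomaly locations (illustrative pooling)."""
    h, w = err_map.shape
    cropped = err_map[:h - h % grid, :w - w % grid]
    cells = cropped.reshape(grid, cropped.shape[0] // grid,
                            grid, cropped.shape[1] // grid).mean(axis=(1, 3))
    return (cells > thresh).astype(np.uint8)

def rank_gallery(query_grid, gallery_grids):
    """Rank gallery images by the overlap between their anomaly grids and
    the query grid (best match first)."""
    scores = np.array([int((query_grid & g).sum()) for g in gallery_grids])
    return np.argsort(scores)[::-1]
```

A witness-indicated anomaly location can thus be matched against precomputed gallery grids without any identity information.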
Comparison-Level Mitigation of Ethnic Bias in Face Recognition
IWBF 2020. Proceedings
International Workshop on Biometrics and Forensics (IWBF) <8, 2020, online>
Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Previous works introduced approaches that aim at learning less biased representations. However, applying these approaches in real applications requires a complete replacement of the templates in the database. This replacement procedure further requires that a face image of each enrolled individual is stored as well. In this work, we propose the first bias-mitigating solution that works on the comparison-level of a biometric system. We propose a fairness-driven neural network classifier for the comparison of two biometric templates to replace the system's similarity function. This fair classifier is trained with a novel penalization term in the loss function to introduce the criteria of group and individual fairness into the decision process. This penalization term forces the score distributions of different ethnicities to be similar, leading to a reduction of the intra-ethnic performance differences. Experiments were conducted on two publicly available datasets and evaluated the performance of four different ethnicities. The results showed that for both fairness criteria, our proposed approach is able to significantly reduce the ethnic bias, while preserving a high recognition ability. Our model, built on individual fairness, achieves a bias reduction rate between 15.35% and 52.67%. In contrast to previous work, our solution is easy to integrate into existing systems by simply replacing the system's similarity function with our fair template comparison approach.
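The penalization idea can be illustrated with a toy loss: binary cross-entropy plus a term that shrinks the spread of per-ethnicity mean scores. The variance-of-group-means penalty below is a simplified stand-in for the paper's actual fairness terms, not its exact formulation.

```python
import numpy as np

def fair_comparison_loss(scores, labels, groups, lam=1.0):
    """Binary cross-entropy plus an illustrative group-fairness penalty:
    the variance of the per-group mean scores. Driving this variance to
    zero pushes the score distributions of the ethnicities together."""
    eps = 1e-7
    bce = -np.mean(labels * np.log(scores + eps)
                   + (1 - labels) * np.log(1 - scores + eps))
    group_means = [scores[groups == g].mean() for g in np.unique(groups)]
    return bce + lam * np.var(group_means)
```

With `lam = 0` this reduces to the ordinary classification loss; increasing `lam` trades recognition accuracy for more similar group score distributions.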
Learning Privacy-Enhancing Face Representations through Feature Disentanglement
15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020). Proceedings
International Conference on Automatic Face and Gesture Recognition (FG) <15, 2020, Buenos Aires, Argentina>
Convolutional Neural Networks (CNNs) are today the de-facto standard for extracting compact and discriminative face representations (templates) from images in automatic face recognition systems. Due to the characteristics of CNN models, the generated representations typically encode a multitude of information ranging from identity to soft-biometric attributes, such as age, gender or ethnicity. However, since these representations were computed for the purpose of identity recognition only, the soft-biometric information contained in the templates represents a serious privacy risk. To mitigate this problem, we present in this paper a privacy-enhancing approach capable of suppressing potentially sensitive soft-biometric information in face representations without significantly compromising identity information. Specifically, we introduce a Privacy-Enhancing Face-Representation learning Network (PFRNet) that disentangles identity from attribute information in face representations and consequently enables efficient suppression of soft-biometrics in face templates. We demonstrate the feasibility of PFRNet on the problem of gender suppression and show through rigorous experiments on the CelebA, Labeled Faces in the Wild (LFW) and Adience datasets that the proposed disentanglement-based approach is highly effective and improves significantly on the existing state-of-the-art.
PE-MIU: A Training-Free Privacy-Enhancing Face Recognition Approach Based on Minimum Information Units
Research on soft-biometrics showed that privacy-sensitive information can be deduced from biometric data. Utilizing biometric templates only, information about a person's gender, age, ethnicity, sexual orientation, and health state can be deduced. For many applications, these templates are expected to be used for recognition purposes only. Thus, extracting this information raises major privacy issues. Previous work proposed two kinds of learning-based solutions for this problem. The first kind provides strong privacy-enhancements, but is limited to pre-defined attributes. The second kind achieves more comprehensive but weaker privacy-improvements. In this work, we propose a Privacy-Enhancing face recognition approach based on Minimum Information Units (PE-MIU). PE-MIU, as we demonstrate in this work, is a privacy-enhancement approach for face recognition templates that achieves strong privacy-improvements and is not limited to pre-defined attributes. We exploit the structural differences between face recognition and facial attribute estimation by creating templates in a mixed representation of minimal information units. These representations contain patterns of privacy-sensitive attributes in a highly randomized form. Therefore, the estimation of these attributes becomes hard for function creep attacks. During verification, the units of a probe template are assigned to the units of a reference template by solving an optimal best-matching problem. This allows our approach to maintain a high recognition ability. The experiments are conducted on three publicly available datasets and with five state-of-the-art approaches. Moreover, we conduct the experiments simulating an attacker that knows and adapts to the system's privacy mechanism. The experiments demonstrate that PE-MIU is able to suppress privacy-sensitive information to a significantly higher degree than previous work in all investigated scenarios.
At the same time, our solution is able to achieve a verification performance close to that of the unmodified recognition system. Unlike previous works, our approach offers a strong and comprehensive privacy-enhancement without the need for training.
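The minimum-information-unit idea can be sketched as follows: split a template into blocks, randomize their order, and verify by searching for the best block assignment. Brute-force permutation search stands in here for the paper's optimal best-matching solver, and the block count and cosine scoring are illustrative assumptions.

```python
import itertools
import numpy as np

def to_miu_blocks(template, k=4):
    """Split a face template into k minimum information units and shuffle
    their order (the randomization that hinders attribute estimation)."""
    blocks = np.array_split(template, k)
    order = np.random.permutation(k)
    return [blocks[i] for i in order]

def verify(probe_blocks, ref_blocks):
    """Assign probe units to reference units by solving the best-matching
    problem (brute force over permutations for small k) and return the
    mean cosine similarity of the best assignment."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    k = len(probe_blocks)
    best = -np.inf
    for perm in itertools.permutations(range(k)):
        s = np.mean([cos(probe_blocks[i], ref_blocks[j])
                     for i, j in enumerate(perm)])
        best = max(best, s)
    return best
```

Because verification maximizes over all assignments, the block shuffling does not hurt genuine comparisons, while an attribute estimator sees the units in random order.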
SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) <2020, virtual>
Face image quality is an important factor for high-performance face recognition systems. Face quality assessment aims at estimating the suitability of a face image for the purpose of recognition. Previous work proposed supervised solutions that require artificially or human labelled quality values. However, both labelling mechanisms are error-prone, as they do not rely on a clear definition of quality and may not know the best characteristics for the utilized face recognition system. Avoiding the use of inaccurate quality labels, we propose a novel concept to measure face quality based on an arbitrary face recognition model. By determining the embedding variations generated from random subnetworks of a face model, the robustness of a sample representation, and thus its quality, is estimated. The experiments are conducted in a cross-database evaluation setting on three publicly available databases. We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry. The results show that our unsupervised solution outperforms all other approaches in the majority of the investigated scenarios. In contrast to previous works, the proposed solution shows a stable performance over all scenarios. Utilizing the deployed face recognition model for our face quality assessment methodology avoids the training phase completely and further outperforms all baseline approaches by a large margin. Our solution can be easily integrated into current face recognition systems and can be adapted to other tasks beyond face recognition.
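The core measurement can be sketched with a toy one-layer embedding model: run the sample through several random dropout subnetworks and turn the mean pairwise distance of the resulting embeddings into a quality score via a sigmoid. The single-layer model, dropout rate, and exact score mapping below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quality_from_stochastic_embeddings(x, weights, m=20, p=0.5, seed=0):
    """Embed x with m random dropout subnetworks of a toy linear embedding
    model (weights: inputs x embedding dims) and map the mean pairwise
    distance of the normalized embeddings to a quality in (0, 1]."""
    rng = np.random.default_rng(seed)
    embeddings = []
    for _ in range(m):
        keep = rng.random(weights.shape[0]) > p      # random subnetwork
        e = (weights * keep[:, None]).T @ x
        embeddings.append(e / (np.linalg.norm(e) + 1e-9))
    dists = [np.linalg.norm(embeddings[i] - embeddings[j])
             for i in range(m) for j in range(i + 1, m)]
    return float(2.0 / (1.0 + np.exp(np.mean(dists))))  # sigmoid of -mean
```

A sample whose embedding is stable under random subnetworks scores close to 1; a sample whose embedding varies strongly scores lower.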
Detecting Face Morphing Attacks by Analyzing the Directed Distances of Facial Landmarks Shifts
German Conference on Pattern Recognition (GCPR) <40, 2018, Stuttgart, Germany>
Lecture Notes in Computer Science (LNCS), 11269
Face morphing attacks create face images that are verifiable against multiple identities. Associating such images with identity documents leads to faulty identity links, enabling attacks on operations like border crossing. Most previously proposed morphing attack detection approaches directly classified features extracted from the investigated image. We discuss the operational opportunity of having a live face probe to support the morphing detection decision and propose a detection approach that takes advantage of it. Our proposed solution considers the facial landmark shifting patterns between reference and probe images. These are represented by directed distances to avoid confusion with shifts caused by other variations. We validated our approach using a publicly available database built on 549 identities. Our proposed detection concept was tested with three landmark detectors and proved to outperform the baseline concept based on handcrafted and transferable CNN features.
Enhancing the Privacy of Face Recognition and its Representations
Darmstadt, TU, Master Thesis, 2019
For these reasons, this work aims at preventing unauthorized deduction of private soft-biometric characteristics from image representations. Latent features should be extracted from facial images, so that sparse feature representations are obtained. The feature representations should be transformed in a way that the predictive performance of soft-biometric estimators is reduced. Biometric systems should still be able to recognize an individual using the transformed representations. These objectives are achieved by the main contribution, the Thomson loss, that is presented in this work. By using the Thomson loss, a neural network learns a transformation that can be applied to feature representations of facial images. After the feature representations have been transformed, even non-binary soft-biometric estimators cannot make reliable predictions anymore.
Exploring the Channels of Multiple Color Spaces for Age and Gender Estimation from Face Images
International Conference on Information Fusion (FUSION) <22, 2019, Ottawa, Canada>
Soft biometrics identify certain traits of individuals based on their sampled biometric characteristics. The automatic identification of traits like age and gender provides valuable information in applications ranging from forensics to service personalization. Color images are stored within a color space containing different channels. Each channel represents a different portion of the information contained in the image, including that of soft-biometric patterns. The age and gender information in the different channels and color spaces has not previously been studied. This work discusses the soft-biometric performance of these channels and analyzes the sample error overlap between all possible channels to successfully prove that different information is considered in the decision making of each channel. We also present multi-channel selection protocols and a fusion solution for the selected channels. Besides the analyses of color spaces and their channels, our proposed multi-channel fusion solution exceeds state-of-the-art performance in age estimation on the widely used Adience dataset.
How Do Demographic Soft-Biometric Attributes Affect Kinship Verification?
Darmstadt, TU, Bachelor Thesis, 2019
In recent years, facial kinship verification has received considerable attention due to the easy acquisition of facial images and a large potential application area. Facial kinship verification is defined as the process of determining whether two identities are kin or not by automatically comparing their facial images. It may have a wide range of potential uses, including aiding in the fight against human trafficking, handling conflicts resulting from the refugee crisis, family album organization, and social media analysis. Other potential applications lie in the academic field, such as genealogical studies, and in the identification of the kin of victims or suspects by law enforcement. In Germany, from March 1951 to April 2019, a total of 1995 cases of missing children remain unresolved, as reported by the Bundeskriminalamt. Due to the significant change in the look of children at adult age, the high similarity of a child's appearance to their parents, and the much easier acquisition of photos than DNA, facial kinship verification could help resolve these and similar cases. Unfortunately, the performance of such kinship verification systems is still too underdeveloped to be used for real-world applications. One issue consists of the non-generalizability of currently available data sets to the real-world data distribution. Lopez et al. achieved an acceptable accuracy on two data sets by only comparing the chrominance. Inspired by this, Dawson et al. built a "From Same Photo" classifier to compete in the kinship verification task by only assigning those pictures as kin which originated from the same photo. As another trait, Guo et al. included gender and age information in the kinship verification process by only considering this information to determine whether a person of the potential kin pair is older or approximately the same age.
Although age and ethnicity have not yet been explicitly implemented into the kinship verification process, the present thesis analyzes the impact of gender and these attributes on the kinship verification process. Accordingly, two widely used data sets were labeled manually with gender, age, and ethnicity. The impact of adding these traits to the baseline model was then analyzed. These additional traits could improve the accuracy of the kinship verification process slightly. Additionally, a classifier based solely on the soft-biometric attributes gender, age, and ethnicity was built. A significant fraction of the kinship verification process may be explained solely by these attributes because of an inappropriate data set composition. Moreover, an incorrect construction of the two analyzed data sets can be found, arising mainly from the same pictures and the same identities appearing in different folds. Understanding the shortcomings of previously conducted research can help future researchers improve the kinship verification process.
Minutiae-Based Gender Estimation for Full and Partial Fingerprints of Arbitrary Size and Shape
Computer Vision - ACCV 2018
Asian Conference on Computer Vision (ACCV) <14, 2018, Perth, Australia>
Lecture Notes in Computer Science (LNCS), 11361
Since fingerprints are one of the most widely deployed biometrics, accurate fingerprint gender estimation can positively affect several applications. For example, in criminal investigations, gender classification may significantly minimize the list of potential subjects. Previous work mainly offered solutions for the task of gender classification based on complete fingerprints. However, partial fingerprint captures occur frequently in many applications, including forensics and the fast-growing field of consumer electronics. Due to its huge variability in size and shape, gender estimation on partial fingerprints is a challenging problem. Therefore, in this work we propose a flexible gender estimation scheme by building a gender classifier based on an ensemble of minutiae. The outputs of the single-minutia gender predictions are combined by a novel adjusted score fusion approach to obtain an enhanced gender decision. Unlike classical solutions, this allows dealing with unconstrained fingerprint parts of arbitrary size and shape. We performed investigations on a publicly available database, and our proposed solution proved to significantly outperform state-of-the-art approaches on both full and partial fingerprints. The experiments indicate a reduction in the gender estimation error by 19.34% on full fingerprints and 28.33% on partial captures in comparison to previous work.
Mitigating Ethnic Bias in Face Recognition Models through Fair Template Comparison
Darmstadt, TU, Master Thesis, 2019
Face recognition systems find many uses in daily life. For example, they can be used to unlock your phone or automatically tag a person in a photo, but they are also used in other application fields such as security environments or surveillance. However, there is a significant problem with these systems: they are often biased. These systems make many more mistakes on women and darker-skinned people than on men and light-skinned people. This bias stems from training data that is heavily skewed towards light-skinned men; the systems learn from this data and reflect its bias. As face recognition systems become more prevalent, solving this problem gains importance, especially when these mistakes can have a large impact, such as when the systems are used for identifying criminals but entire groups of persons are discriminated against. The important question is: how can the bias be reduced as much as possible, so that the systems get fairer while maintaining sufficient recognition performance? There are several ways to tackle bias. Previous approaches tried to introduce balanced datasets or remove features which may lead to a bias. However, they often have to deal with the challenge of providing enough data for a balanced dataset or with performance drops. This is especially true for minority groups, as it is intrinsically hard to collect more data for them. Therefore, there exists an even stronger bias against minority groups. In this thesis, the focus is on reducing the ethnic bias of facial recognition systems through a fair template comparison method: we propose applying two different fairness concepts during the training of template comparison models by adding them as penalization terms to the loss function. The first concept, group fairness, aims at equalizing groups, while the second concept, individual fairness, aims at equal treatment for similar individuals. Our approach is evaluated on two different datasets.
The template comparison is realized as logistic regression and neural network models. The experiments show not only the influence of the fairness terms but also that it is possible to achieve a fairer system without a significant face recognition performance drop.
Multi-algorithmic Fusion for Reliable Age and Gender Estimation from Face Images
International Conference on Information Fusion (FUSION) <22, 2019, Ottawa, Canada>
Automated estimation of demographic attributes, such as gender and age, became of great importance for many potential applications ranging from forensics to social media. Although previous works reported performances that closely match human level, these solutions lack the human intuition that allows human beings to state the confidence of their predictions. While human intuition subconsciously considers surrounding conditions or the lack of experience in a certain task, current algorithmic solutions tend to mispredict with high confidence scores. In this work, we propose a multi-algorithmic fusion approach for age and gender estimation that is able to accurately state the model's prediction reliability. Our solution is based on stochastic forward passes through a dropout-reduced neural network ensemble. By utilizing multiple stochastic forward passes combined from the neural network ensemble, the centrality and dispersion of these predictions are used to derive a confidence statement about the prediction. Our experiments were conducted on the Adience benchmark. We showed that the proposed solution reached and exceeded state-of-the-art performance for the age and gender estimation tasks. Further, we demonstrated that the reliability statements of the predictions of our proposed solution capture challenging conditions and underrepresented training samples.
Reliable Age and Gender Estimation from Face Images: Stating the Confidence of Model Predictions
IEEE 10th International Conference on Biometrics: Theory, Applications and Systems
IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS) <10, 2019, Tampa, Florida, USA>
Automated age and gender estimation became of great importance for many potential applications ranging from forensics to social media. Although previous works reported strongly increased performances, these solutions tend to mispredict under challenging conditions or when the trained model faces a sample that was underrepresented in the training data. In this work, we propose an age and gender estimation model, as well as a novel reliability measure to quantify the confidence of the model's prediction. Our solution is based on stochastic forward passes through dropout-reduced neural networks, which were theoretically proven to approximate Gaussian processes. By utilizing multiple stochastic forward passes, the centrality and dispersion of these predictions are used to derive a confidence statement about the prediction. Experiments were conducted on the Adience benchmark. We showed that the proposed solution reached and exceeded state-of-the-art performance. Further, we demonstrated that the proposed reliability measure correlates with the prediction performance and thus is highly successful in quantifying the prediction reliability.
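The stochastic-forward-pass idea can be sketched with Monte-Carlo dropout on a small, hypothetical two-layer network: the mean of the stochastic predictions gives the estimate (centrality) and their standard deviation gives the reliability statement (dispersion).

```python
import numpy as np

def predict_with_confidence(x, w1, w2, m=200, p=0.25, seed=0):
    """Monte-Carlo dropout: m stochastic forward passes through a toy
    two-layer regression network; the mean prediction is the estimate
    (centrality) and the standard deviation the reliability (dispersion)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(m):
        h = np.maximum(w1 @ x, 0.0)                    # ReLU hidden layer
        h = h * (rng.random(h.shape) > p) / (1.0 - p)  # inverted dropout
        preds.append(float(w2 @ h))
    preds = np.asarray(preds)
    return preds.mean(), preds.std()
```

A small dispersion indicates a prediction the model is confident about; a large one flags challenging or underrepresented inputs.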
Suppressing Gender and Age in Face Templates Using Incremental Variable Elimination
The 12th IAPR International Conference On Biometrics
IAPR International Conference on Biometrics (ICB) <12, 2019, Crete, Greece>
Recent research on soft-biometrics showed that more information than just the person's identity can be deduced from biometric data. Using face templates only, information about gender, age, ethnicity, the health state of the person, and even the sexual orientation can be automatically obtained. Since for most applications these templates are expected to be used for recognition purposes only, this raises major privacy issues. Previous work addressed this problem purely on the image level, assuming function creep attackers without knowledge about the system's privacy mechanism. In this work, we propose a soft-biometric privacy enhancing approach based on incremental variable elimination (IVE) that reduces a given biometric template by eliminating its most important variables for predicting soft-biometric attributes. Training a decision tree ensemble allows deriving a variable importance measure that is used to incrementally eliminate variables that allow predicting sensitive attributes. Unlike previous work, we consider a scenario of function creep attackers with explicit knowledge about the privacy mechanism and evaluated our approach on a publicly available database. The experiments were conducted against eight baseline solutions. The results showed that in many cases IVE is able to suppress gender and age to a high degree with a negligible loss of the templates' recognition ability. Contrary to previous work, which is limited to the suppression of binary (gender) attributes, IVE is able, by design, to suppress binary, categorical, and continuous attributes.
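The elimination loop can be sketched as follows; note that absolute correlation with the sensitive attribute stands in for the decision-tree-ensemble importance measure used in the paper, and the round count and per-round elimination size are illustrative.

```python
import numpy as np

def incremental_variable_elimination(templates, attribute, n_rounds=2, k=2):
    """IVE sketch: per round, estimate how well each remaining template
    variable predicts the sensitive attribute and zero out the k most
    predictive ones. Absolute Pearson correlation stands in for the
    decision-tree-ensemble importance of the paper."""
    t = templates.astype(float).copy()
    active = np.ones(t.shape[1], dtype=bool)
    a = attribute - attribute.mean()
    for _ in range(n_rounds):
        centered = t - t.mean(axis=0)
        denom = centered.std(axis=0) * a.std() + 1e-9
        importance = np.abs((centered * a[:, None]).mean(axis=0)) / denom
        importance[~active] = -1.0              # never re-pick eliminated ones
        worst = np.argsort(importance)[-k:]     # most attribute-predictive
        t[:, worst] = 0.0
        active[worst] = False
    return t
```

Because the importance is recomputed each round, variables that only become predictive after others are removed are also caught.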
To Detect or not to Detect: The Right Faces to Morph
The 12th IAPR International Conference On Biometrics
IAPR International Conference on Biometrics (ICB) <12, 2019, Crete, Greece>
Recent works have studied the generalization of face morphing attack detection performance over variations in morphing approaches, image re-digitization, and image sources. However, these works assumed a constant approach for selecting the images to be morphed (pairing) across their training and testing data. A realistic variation in the pairing protocol in the training data can result in challenges and opportunities for a stable attack detector. This work extensively studies this issue by building a novel database with three different pairing protocols and two different morphing approaches. We study the detection generalization over these variations for single-image and differential attack detection, along with handcrafted and CNN-based features. Our observations included that training an attack detection solution on attacks created from dissimilar face images, contrary to common practice, can result in an overall more generalized detection performance. Moreover, we found that differential attack detection is very sensitive to variations in morphing and pairing protocols.
Unsupervised Privacy-enhancement of Face Representations Using Similarity-sensitive Noise Transformations
Face images processed by a biometric system are expected to be used for recognition purposes only. However, recent work presented possibilities for automatically deducing additional information about an individual from their face data. By using soft-biometric estimators, information about gender, age, ethnicity, sexual orientation or the health state of a person can be obtained. This raises a major privacy issue. Previous works presented supervised solutions that require a large amount of private data in order to suppress a single attribute. In this work, we propose a privacy-preserving solution that does not require this sensitive information and thus works in an unsupervised manner. Further, our approach offers privacy protection that is not limited to a single known binary attribute or classifier. We do this by proposing similarity-sensitive noise transformations and investigate their effect, as well as the effect of dimensionality reduction methods, on the task of privacy preservation. Experiments are conducted on a publicly available database and contain analyses of the recognition performance, as well as investigations of the estimation performance of the binary attribute of gender and the continuous attribute of age. We further investigated the estimation performance of these attributes when prior knowledge about the used privacy mechanism is explicitly utilized. The results show that using this information leads to a significant enhancement of the estimation quality. Finally, we propose a metric to evaluate the trade-off between the privacy gain and the recognition loss of privacy-preservation techniques. Our experiments showed that the proposed cosine-sensitive noise transformation was successful in reducing the possibility of estimating the soft private information in the data, while having a significantly smaller effect on the intended recognition performance.
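As a hedged illustration of a similarity-sensitive noise transformation (not the paper's exact construction), one can add noise orthogonal to the template whose magnitude bounds the cosine similarity to the original at a chosen angle:

```python
import numpy as np

def cosine_bounded_noise(template, max_angle_deg=15.0, seed=0):
    """Perturb a template with random noise orthogonal to it, scaled so the
    cosine similarity to the original equals cos(max_angle_deg). Identity
    comparison (cosine-based) is affected by a fixed, small amount, while
    attribute estimators see a randomized representation."""
    rng = np.random.default_rng(seed)
    t = template / (np.linalg.norm(template) + 1e-9)
    noise = rng.normal(size=t.shape)
    noise -= (noise @ t) * t                 # keep only the orthogonal part
    noise /= np.linalg.norm(noise) + 1e-9
    out = t + np.tan(np.radians(max_angle_deg)) * noise
    return out / np.linalg.norm(out)
```

Geometrically, the output lies on the unit sphere at exactly `max_angle_deg` from the normalized input, so the recognition-side similarity loss is controlled by a single parameter.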
CrazyFaces: Unassisted Circumvention of Watchlist Face Identification
IEEE 9th International Conference on Biometrics: Theory, Applications and Systems
IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS) <9, 2018, Redondo Beach, CA, USA>
Once upon a time, there was a blacklisted criminal who usually avoided appearing in public. He was surfing the Web when he noticed what had to be a targeted advertisement announcing a concert of his favorite band. The concert was in a nearby town, and the only way to get there was by train. He was worried, because he had heard in the news about the new face identification system installed at the train station. From his last stay with the police, he remembered that they took these special face images with the white background. He thought about what he could do to avoid being identified, and an idea popped into his mind: "What if I make a crazy-face, as the kids call it, to make my face look different? What exactly do I have to do? And will it work?". He called his childhood geeky friend and asked him if he could build him a face recognition application he could tinker with. The geeky friend was always interested in such small projects where he could use open-source resources and, as usual, didn't really care about the goal. The criminal tested the application and played around, trying to figure out how he could make a crazy-face that wouldn't be identified as himself. On the day of the concert, he took off to the train station with some doubt in his mind and fear in his soul. To know what happened next, you should read the rest of this paper.
Deep and Multi-algorithmic Gender Classification of Single Fingerprint Minutiae
International Conference on Information Fusion (FUSION) <21, 2018, Cambridge, UK>
Accurate fingerprint gender estimation can positively affect several applications, since fingerprints are one of the most widely deployed biometrics. For example, gender classification in criminal investigations may significantly minimize the list of potential subjects. Previous work mainly offered solutions for the task of gender classification based on complete fingerprints. However, partial fingerprint captures occur frequently in many applications, including forensics and the fast-growing field of consumer electronics. Moreover, partial fingerprints are not well-defined. Therefore, this work improves the gender decision performance on a well-defined partition of the fingerprint: it enhances gender estimation on the level of a single minutia. Working on this level, we propose three main contributions that were evaluated on a publicly available database. First, a convolutional neural network model is offered that outperformed baseline solutions based on handcrafted features. Second, several multi-algorithmic fusion approaches were tested that combine the outputs of different gender estimators and help further increase the classification accuracy. Third, we propose including the minutia detection reliability in the fusion process, which leads to an enhanced total gender decision performance. The achieved gender classification performance of a single minutia is comparable to the accuracy that previous work reported on a quarter of an aligned fingerprint including more than 25 minutiae.
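The third contribution can be illustrated with a minimal fusion rule (a hypothetical stand-in, not the paper's exact formulation): weight each single-minutia gender probability by its detection reliability before averaging.

```python
def fuse_minutia_genders(probs, reliabilities):
    """Fuse single-minutia gender probabilities (P(male) per minutia) into
    one decision, weighting each minutia by its detection reliability so
    unreliable minutiae contribute less to the final estimate."""
    num = sum(p * r for p, r in zip(probs, reliabilities))
    den = sum(reliabilities) + 1e-9
    fused = num / den
    return fused, "male" if fused >= 0.5 else "female"
```

With equal reliabilities this reduces to plain score averaging; down-weighting dubious minutiae is what improves the total decision.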
Fingerprint and Iris Multi-biometric Data Indexing and Retrieval
International Conference on Information Fusion (FUSION) <21, 2018, Cambridge, UK>
Indexing of multi-biometric data is required to facilitate fast search in large-scale biometric systems. Previous works addressing this issue in multi-biometric databases focused on multi-instance indexing, mainly of iris data. Few works addressed the indexing in multi-modal databases, with basic candidate list fusion solutions limited to joining face and fingerprint data. Iris and fingerprint are widely used in large-scale biometric systems where fast retrieval is a significant issue. This work proposes a joint multi-biometric retrieval solution based on fingerprint and iris data. This solution is evaluated under eight different candidate list fusion approaches with variable complexity on a database of 10,000 reference and probe records of irises and fingerprints. Our proposed multi-biometric retrieval of fingerprint and iris data resulted in a reduction of the miss rate (1 - hit rate) at 0.1% penetration rate by 93% compared to fingerprint indexing and 88% compared to iris indexing.
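The reported relative miss-rate reduction can be made concrete with a short sketch. The numbers in the example are illustrative, not the paper's actual operating points:

```python
def relative_miss_rate_reduction(hit_rate_baseline, hit_rate_fused):
    """Relative reduction of the miss rate (1 - hit rate) achieved by
    multi-biometric fusion over a single-modality baseline."""
    miss_baseline = 1.0 - hit_rate_baseline
    miss_fused = 1.0 - hit_rate_fused
    return 1.0 - miss_fused / miss_baseline

# Illustrative: moving from a 90.0% to a 99.3% hit rate at a fixed
# penetration rate corresponds to a 93% miss-rate reduction.
r = relative_miss_rate_reduction(0.90, 0.993)
```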
P-score: Performance Aligned Normalization and an Evaluation in Score-level Multi-biometric Fusion
2018 Proceedings of the 26th European Signal Processing Conference (EUSIPCO)
European Signal Processing Conference (EUSIPCO) <26, 2018, Rome, Italy>
Normalization is an important step in many fusion, classification, and decision making applications. Previous normalization approaches focused on bringing values from different sources into a common range or common distribution characteristics. In this work, we propose a new normalization approach that transfers values into a normalized space in which their relative performance in binary decision making is aligned across their whole range. Multi-biometric verification is a typical problem where information from different sources is normalized and fused to make a binary decision, and therefore a good platform to evaluate the proposed normalization. We conducted an evaluation on two publicly available databases and showed that the proposed normalization solution consistently outperformed state-of-the-art and best-practice approaches, e.g. by reducing the false rejection rate at 0.01% false acceptance rate by 60-75% compared to the widely used z-score normalization under sum-rule fusion.
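The baseline the abstract compares against, z-score normalization under sum-rule fusion, can be sketched as follows. The proposed P-score itself is not detailed in the abstract, so only the baseline is shown; the function names are illustrative:

```python
def z_score(scores, mean, std):
    # Baseline z-score normalization with mean/std estimated on a
    # development set of comparison scores from the same source.
    return [(s - mean) / std for s in scores]

def sum_rule(per_source_scores):
    # Sum-rule fusion: add the normalized scores across sources,
    # comparison by comparison.
    return [sum(col) for col in zip(*per_source_scores)]

fused = sum_rule([
    z_score([1.0, 2.0], 1.5, 0.5),    # source A, e.g. face scores
    z_score([10.0, 20.0], 15.0, 5.0),  # source B, e.g. iris scores
])
```

The fused score is then thresholded to make the binary verification decision.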
What Can a Single Minutia Tell about Gender?
2018 International Workshop on Biometrics and Forensics (IWBF)
International Workshop on Biometrics and Forensics (IWBF) <2018, Sassari, Italy>
Since fingerprints are one of the most widely deployed biometrics, several applications can benefit from accurate fingerprint gender estimation. Previous work mainly tackled gender estimation based on complete fingerprints. However, partial fingerprint captures occur frequently in many applications, including forensics and consumer electronics, and the captured portion of the fingerprint varies. Therefore, this work investigates gender estimation on a small, detectable, and well-defined partition of a fingerprint: the level of a single minutia. Working on this level, we propose a feature extraction process that is able to deal with the rotation and translation invariance problems of fingerprints. The approach is evaluated on a publicly available database with five different binary classifiers. As a result, the information of a single minutia achieves an accuracy on the gender classification task comparable to previous work using quarters of aligned fingerprints with an average of more than 25 minutiae.
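The rotation/translation invariance mentioned above is commonly obtained by expressing local measurements in a minutia-centered, orientation-aligned coordinate frame. The sketch below illustrates that general idea; the paper's exact feature extraction may differ:

```python
import math

def align_point(point, minutia_xy, minutia_angle):
    """Express a neighboring point in a minutia-centered frame.

    Translating by the minutia position and rotating by minus its
    orientation yields coordinates that do not change when the whole
    fingerprint is rotated or translated. Illustrative sketch only.
    """
    dx = point[0] - minutia_xy[0]
    dy = point[1] - minutia_xy[1]
    c, s = math.cos(minutia_angle), math.sin(minutia_angle)
    # rotate by -minutia_angle
    return (c * dx + s * dy, -s * dx + c * dy)

# The same local configuration, before and after rotating the whole
# scene by 90 degrees, yields identical aligned coordinates:
a = align_point((1.0, 0.0), (0.0, 0.0), 0.0)
b = align_point((0.0, 1.0), (0.0, 0.0), math.pi / 2)
```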
Efficient, Accurate, and Rotation-Invariant Iris Code
IEEE Signal Processing Letters
The large scale of recently demanded biometric systems has put pressure on creating more efficient, accurate, and private biometric solutions. Iris biometrics is one of the most distinctive and widely used biometric characteristics. High-performing iris representations suffer from rotation inconsistency. This is usually addressed by assuming a range of rotational errors and performing a number of comparisons over this range, which results in high computational effort and limits indexing and template protection. This work presents a generic and parameter-free transformation of binary iris representations into a rotation-invariant space. The goal is to perform accurate and efficient comparisons and to enable further indexing and template protection deployment. The proposed approach was tested on a database of 10,000 subjects of the ISYN1 iris database generated by CASIA. Besides providing a compact and rotation-invariant representation, the proposed approach reduced the equal error rate by more than 55% and the computational time by a factor of up to 44 compared to the original representation.
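To illustrate what a parameter-free rotation-invariant mapping of a cyclic binary code can look like, one textbook option is the canonical (lexicographically smallest) rotation. This is only an illustration of the concept, not necessarily the transformation proposed in this work:

```python
def canonical_rotation(bits):
    """Map a cyclic bit string to its lexicographically smallest rotation.

    Any two rotations of the same code map to the same canonical form,
    so a single comparison replaces a sweep over rotation offsets.
    Illustrative only -- not necessarily the paper's transformation.
    """
    s = ''.join('1' if b else '0' for b in bits)
    n = len(s)
    doubled = s + s
    return min(doubled[i:i + n] for i in range(n))

# Two rotations of the same code share one canonical form:
c1 = canonical_rotation([1, 0, 1, 1, 0])
c2 = canonical_rotation([0, 1, 1, 0, 1])
```

Such a canonical form removes the need to repeat comparisons over a range of assumed rotational offsets, which is exactly what enables indexing on top of the representation.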
General Borda Count for Multi-biometric Retrieval
2017 International Joint Conference on Biometrics
IEEE International Joint Conference on Biometrics (IJCB) <2017, Denver, CO, USA>
Indexing of multi-biometric data is required to facilitate fast search in large-scale biometric systems. Previous works addressing this issue were challenged by including biometric sources of different nature, by utilizing the knowledge about the biometric sources, and by optimizing and tuning the retrieval performance. This work presents a generalized multi-biometric retrieval approach that adapts the Borda count algorithm within an optimizable structure. The approach was tested on a database of 10k reference and probe instances of the left and right irises. The experiments and comparisons to five baseline solutions demonstrated advances in terms of general indexing performance, tunability to certain operating points, and response to missing data. A clear advantage of the proposed solution was noticed when faced with candidate lists of low quality.
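The classic Borda count that this work generalizes fuses ranked candidate lists by summing rank-based points across sources. A minimal sketch (the paper's optimizable weighting structure is omitted):

```python
from collections import defaultdict

def borda_count(candidate_lists):
    """Plain Borda count over ranked candidate identity lists.

    Each list is ordered best-first; a candidate at 0-based rank r in a
    list of length n receives n - r points, and points are summed
    across sources. Returns candidates sorted by total points.
    """
    points = defaultdict(int)
    for ranking in candidate_lists:
        n = len(ranking)
        for rank, candidate in enumerate(ranking):
            points[candidate] += n - rank
    return sorted(points, key=lambda c: points[c], reverse=True)

# Three sources ranking candidates A, B, C:
fused = borda_count([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]])
```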
Indexing of Multi-biometric Databases: Fast and Accurate Biometric Search
Darmstadt, TU, Master Thesis, 2017
Biometrics is a rapidly developing field of research, and biometric-based identification systems are experiencing massive growth all around the world, driven by growing industrial, government, and citizen acceptance. The US-VISIT program uses biometric systems to enforce homeland and border security, whereas in the United Arab Emirates (UAE), biometric systems play a major role in the border control process. Similarly, in India, biometrics have gained a great deal of attention, as the Unique Identification Authority of India (UIDAI) has already registered over one billion Indian citizens in the last 7 years (uidai.gov.in). Despite the rapid propagation of large-scale databases, the majority of researchers are still focusing on the matching accuracy of small databases, while neglecting scalability and speed issues. Identity association is usually determined by comparing input data against every entry in the database, which causes computational problems when it comes to large-scale databases. Biometric indexing aims to reduce the number of candidate identities to be considered by an identification system when searching for a match in large biometric databases. However, this is a challenging task, since biometric data is fuzzy and does not exhibit any natural sorting order. Current indexing methods are mainly based on tree traversal (using kd-trees, B-trees, R-trees), which suffers from the curse of dimensionality, while other indexing methods are based on hashing, which suffers from poor key generation. The goal of this thesis is to develop an indexing scheme based on multiple biometric modalities. It presents the main results of research focusing on iris and fingerprint indexing. Fingerprints are undisputedly the most studied biometric modality and are extensively used in civil and forensic recognition systems.
Together with the potential rise of iris recognition accuracy and its enhanced robustness, indexing of these modalities becomes a promising field of research. Different unimodal and multimodal identification approaches have already been proposed in past years. However, most of them trade accuracy for fast identification rates, while the remaining ones make use of complex indexing structures, which require a complete restructuring if insertions or deletions are necessary. This work offers a framework for fast and accurate iris indexing as well as effective indexing schemes to combine multiple modalities. To achieve this, three main contributions are made. First, a new rotation-invariant iris representation was developed, reducing the equal error rate by more than 55% and the computation time by a factor of up to 44 compared to the original representation. Second, this representation was used to construct an indexing scheme that reaches a hit rate of 99.7% at a 0.1% penetration rate, outperforming state-of-the-art algorithms. Third, a general rank-level indexing fusion scheme was developed to effectively combine multiple sources, achieving over 99.98% hit rate at the same penetration rate of 0.1%.
Indexing of Single and Multi-instance Iris Data Based on LSH-Forest and Rotation Invariant Representation
Computer Analysis of Images and Patterns
International Conference on Computer Analysis of Images and Patterns (CAIP) <17, 2017, Ystad, Sweden>
Indexing of iris data is required to facilitate fast search in large-scale biometric systems. Previous works addressing this issue were challenged by the trade-offs between accuracy, computational efficiency, storage costs, and maintainability. This work presents an iris indexing approach based on a rotation-invariant iris representation and an LSH-Forest to produce an accurate and easily maintainable indexing structure. The complexity of an insertion or deletion in the proposed method is limited to the same logarithmic complexity as a query, and the required storage grows linearly with the database size. The proposed approach was extended into a multi-instance iris indexing scheme, resulting in a clear performance improvement. Single-iris indexing scored a hit rate of 99.7% at a 0.1% penetration rate, while multi-instance indexing scored a 99.98% hit rate at the same penetration rate. The evaluation of the proposed approach was conducted on a large database of 50k references and 50k probes of the left and right irises. The advantage of the proposed solution was put into perspective by comparing the achieved performance to the results reported in previous works.
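The hit-rate-at-penetration-rate figures used throughout these abstracts measure how often the true identity appears within the fraction of the gallery an operator actually inspects. A minimal sketch of the metric (illustrative numbers, not the papers' data):

```python
def hit_rate_at_penetration(candidate_ranks, gallery_size, penetration):
    """Fraction of probes whose true mate is within the top
    `penetration` fraction of the gallery.

    candidate_ranks: 1-based rank of the true identity in each
                     probe's candidate list.
    """
    depth = max(1, int(gallery_size * penetration))
    hits = sum(1 for r in candidate_ranks if r <= depth)
    return hits / len(candidate_ranks)

# With a gallery of 50,000, a 0.1% penetration rate inspects only the
# top 50 candidates per probe; here 3 of 4 probes hit within depth 50.
hr = hit_rate_at_penetration([1, 10, 60, 2], 50000, 0.001)
```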