
Fang, Meiling; Damer, Naser; Boutros, Fadi; Kirchbuchner, Florian; Kuijper, Arjan

The Overlapping Effect and Fusion Protocols of Data Augmentation Techniques in Iris PAD

2022

Machine Vision and Applications

Iris Presentation Attack Detection (PAD) algorithms address the vulnerability of iris recognition systems to presentation attacks. With the great success of deep learning methods in various computer vision fields, neural network-based iris PAD algorithms emerged. However, most PAD networks suffer from overfitting due to insufficient iris data variability. Therefore, we explore the impact of various data augmentation techniques on the performance and generalizability of iris PAD. We apply several data augmentation methods, such as shift, rotation, and brightness, to generate variability, and we provide in-depth analyses of the overlapping effect of these methods on performance. In addition to these widely used augmentation techniques, we also propose an augmentation selection protocol based on the assumption that different augmentation techniques contribute differently to PAD performance. Moreover, two fusion methods are performed for further comparison: the strategy-level and the score-level combination. We demonstrate experiments on two fine-tuned models and one network trained from scratch, and evaluate on the datasets of the LivDet-Iris 2017 competition, which are designed for generalizability evaluation. Our experimental results show that augmentation methods improve iris PAD performance in many cases. Our least-overlap-based augmentation selection protocol achieves lower error rates for two networks. Moreover, the shift augmentation strategy also outperforms state-of-the-art (SoTA) algorithms on the Clarkson and IIITD-WVU datasets.
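
To make the score-level combination mentioned above concrete, here is a minimal sketch, assuming each augmentation strategy (e.g. shift, rotation, brightness) yields its own trained PAD model that outputs an attack-probability score per sample; the mean rule and all names below are illustrative assumptions, not the paper's exact fusion protocol.

import numpy as np

def score_level_fusion(scores_per_strategy, weights=None):
    """Fuse PAD scores from models trained with different augmentation strategies.

    scores_per_strategy: list of 1-D arrays, one per strategy, each holding
    attack-probability scores for the same test samples.
    """
    scores = np.stack(scores_per_strategy, axis=0)  # (n_strategies, n_samples)
    if weights is None:
        return scores.mean(axis=0)                  # simple mean rule
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * scores).sum(axis=0) / w.sum()

# Hypothetical usage: three augmentation strategies, four test samples
shift = np.array([0.10, 0.80, 0.30, 0.95])
rotation = np.array([0.20, 0.70, 0.40, 0.90])
brightness = np.array([0.15, 0.85, 0.25, 0.99])
fused = score_level_fusion([shift, rotation, brightness])
is_attack = fused >= 0.5  # decision threshold is illustrative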


Fang, Meiling; Damer, Naser; Boutros, Fadi; Kirchbuchner, Florian; Kuijper, Arjan

Cross-database and Cross-attack Iris Presentation Attack Detection Using Micro Stripes Analyses

2021

Image and Vision Computing

With the widespread use of mobile devices, iris recognition systems encounter more challenges, such as the vulnerability to presentation attacks, which Presentation Attack Detection (PAD) aims to address. Recent works have pointed out contact lens attacks, especially in images captured under uncontrolled environments, as a hard task for iris PAD. In this paper, we propose a novel framework for detecting iris presentation attacks, particularly contact lenses, based on multiple micro-stripes of the normalized iris texture. The classification decision is made by a majority vote over those micro-stripes. An in-depth experimental evaluation of this framework reveals a superior performance on three databases compared with state-of-the-art (SoTA) algorithms and baselines. Moreover, our solution minimizes the confusion between textured (attack) and transparent (bona fide) presentations in comparison to SoTA methods. We support the rationalization of our proposed method by studying the significance of different pupil-centered eye areas in iris PAD decisions under different experimental settings. In addition, extensive cross-database and cross-attack (unknown attack) detection evaluation experiments explore the generalizability of our proposed method, a texture-based method, and neural-network-based methods on three different databases. The results indicate that our Micro Stripes Analyses (MSA) method has, in most experiments, better generalizability compared to the other baselines.
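
As a compact illustration of the majority-vote step described above, the sketch below assumes a per-stripe classifier that returns an attack probability for each normalized micro-stripe; the function name and the 0.5 threshold are illustrative assumptions, not the published implementation.

from typing import Sequence

def majority_vote(stripe_scores: Sequence[float], threshold: float = 0.5) -> bool:
    """Decide attack vs. bona fide from per-micro-stripe attack probabilities.

    The sample is labeled an attack if more than half of its micro-stripes
    are individually classified as attack.
    """
    votes = [score >= threshold for score in stripe_scores]
    return sum(votes) > len(votes) / 2

# Hypothetical usage: attack probabilities for five micro-stripes of one sample
print(majority_vote([0.9, 0.7, 0.4, 0.8, 0.6]))  # True -> classified as attack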


Purnapatra, Sandip; Smalt, Nic; Bahmani, Keivan; Das, Priyanka; Yambay, David; Mohammadi, Amir; George, Anjith; Bourlai, Thirimachos; Marcel, Sébastien; Schuckers, Stephanie; Fang, Meiling; Damer, Naser; Boutros, Fadi; Kuijper, Arjan; Kantarci, Alperen; Demir, Başar; Yildiz, Zafer; Ghafoory, Zabi; Dertli, Hasan; Ekenel, Hazım Kemal; Vu, Son; Christophides, Vassilis; Dashuang, Liang; Guanghao, Zhang; Zhanlong, Hao; Junfu, Liu; Yufeng, Jin; Liu, Samo; Huang, Samuel; Kuei, Salieri; Singh, Jag Mohan; Ramachandra, Raghavendra

Face Liveness Detection Competition (LivDet-Face) - 2021

2021

IJCB 2021. IEEE/IAPR International Joint Conference on Biometrics

IEEE International Joint Conference on Biometrics (IJCB) <2021, online>

Liveness Detection (LivDet)-Face is an international competition series open to academia and industry. The competition's objective is to assess and report the state of the art in liveness / Presentation Attack Detection (PAD) for face recognition. Impersonation and presentation of false samples to the sensors can be classified as presentation attacks, and the ability of the sensors to detect such attempts is known as PAD. LivDet-Face 2021 is the first edition of the face liveness competition. This competition serves as an important benchmark in face presentation attack detection, offering (a) an independent assessment of the current state of the art in face PAD, and (b) a common evaluation protocol, with Presentation Attack Instruments (PAI) and a live face image dataset available through the Biometric Evaluation and Testing (BEAT) platform. The competition can easily be followed by researchers after it is closed, on a platform where participants can compare their solutions against the LivDet-Face winners.


Fang, Meiling; Damer, Naser; Boutros, Fadi; Kirchbuchner, Florian; Kuijper, Arjan

Iris Presentation Attack Detection by Attention-based and Deep Pixel-wise Binary Supervision Network

2021

IJCB 2021. IEEE/IAPR International Joint Conference on Biometrics

IEEE International Joint Conference on Biometrics (IJCB) <2021, online>

Iris presentation attack detection (PAD) plays a vital role in iris recognition systems. Most existing CNN-based iris PAD solutions 1) perform only binary label supervision during the training of CNNs, serving global information learning but weakening the capture of local discriminative features, 2) prefer stacked deeper convolutions or expert-designed networks, raising the risk of overfitting, and 3) fuse multiple PAD systems or various types of features, increasing the difficulty of deployment on mobile devices. Hence, we propose a novel attention-based deep pixel-wise binary supervision (A-PBS) method. Pixel-wise supervision first captures the fine-grained pixel/patch-level cues. Then, the attention mechanism guides the network to automatically find regions that contribute most to an accurate PAD decision. Extensive experiments are performed on LivDet-Iris 2017 and three other publicly available databases to show the effectiveness and robustness of the proposed A-PBS method. For instance, the A-PBS model achieves an HTER of 6.50% on the IIITD-WVU database, outperforming state-of-the-art methods.
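
To illustrate the combination of pixel-wise and binary supervision, here is a rough PyTorch sketch, assuming the network emits an intermediate map scored pixel-wise plus a single global logit; the loss weighting, tensor shapes, and function name are illustrative assumptions and omit the attention modules of the actual A-PBS architecture.

import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def pixel_binary_loss(pixel_logits, binary_logit, label, alpha=0.5):
    """Combine pixel-wise and binary supervision for a batch.

    pixel_logits: (N, 1, H, W) map, supervised with the sample label
                  repeated at every spatial position (attack = 1).
    binary_logit: (N, 1) global prediction.
    label:        (N,) float tensor of 0/1 ground-truth labels.
    """
    pixel_target = label.view(-1, 1, 1, 1).expand_as(pixel_logits)
    binary_target = label.view(-1, 1)
    return alpha * bce(pixel_logits, pixel_target) + (1 - alpha) * bce(binary_logit, binary_target)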


Boutros, Fadi; Damer, Naser; Fang, Meiling; Kirchbuchner, Florian; Kuijper, Arjan

MixFaceNets: Extremely Efficient Face Recognition Networks

2021

IJCB 2021. IEEE/IAPR International Joint Conference on Biometrics

IEEE International Joint Conference on Biometrics (IJCB) <2021, online>

In this paper, we present a set of extremely efficient and high-throughput models for accurate face verification, MixFaceNets, which are inspired by mixed depthwise convolutional kernels. Extensive experimental evaluations on the Labeled Faces in the Wild (LFW), AgeDB, MegaFace, and IARPA Janus Benchmarks IJB-B and IJB-C datasets have shown the effectiveness of our MixFaceNets for applications requiring extremely low computational complexity. Under the same level of computational complexity (≤ 500M FLOPs), our MixFaceNets outperform MobileFaceNets on all the evaluated datasets, achieving 99.60% accuracy on LFW, 97.05% accuracy on AgeDB-30, 93.60 TAR (at FAR 1e-6) on MegaFace, 90.94 TAR (at FAR 1e-4) on IJB-B, and 93.08 TAR (at FAR 1e-4) on IJB-C. With computational complexity between 500M and 1G FLOPs, our MixFaceNets achieve results comparable to the top-ranked models, while using significantly fewer FLOPs and less computational overhead, which proves the practical value of our proposed MixFaceNets. All training code, pre-trained models, and training logs have been made available at https://github.com/fdbtrs/mixfacenets.
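
As a small sketch of the mixed depthwise convolution idea that inspired these networks, the PyTorch module below splits the input channels into groups and applies a depthwise convolution with a different kernel size to each group; the channel split and kernel sizes are illustrative assumptions, while the actual MixFaceNets blocks are defined in the linked repository.

import torch
import torch.nn as nn

class MixDepthwiseConv(nn.Module):
    """Depthwise convolutions with mixed kernel sizes over channel groups."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)  # absorb any remainder channels
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c)  # groups == channels -> depthwise
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)

# Hypothetical usage on a 64-channel feature map
y = MixDepthwiseConv(64)(torch.randn(1, 64, 56, 56))  # -> shape (1, 64, 56, 56)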


Fang, Meiling; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan

Real Masks and Spoof Faces: On the Masked Face Presentation Attack Detection

2021

Pattern Recognition

Face masks have become one of the main methods for reducing the transmission of COVID-19. This makes face recognition (FR) a challenging task because masks hide several discriminative features of faces. Moreover, face presentation attack detection (PAD) is crucial to ensure the security of FR systems. In contrast to the growing number of masked FR studies, the impact of masked attacks on face PAD has not been explored. Therefore, we present novel attacks with real face masks placed on presentations and attacks with subjects wearing masks to reflect the current real-world situation. Furthermore, this study investigates the effect of masked attacks on PAD performance by using seven state-of-the-art PAD algorithms under different experimental settings. We also evaluate the vulnerability of FR systems to masked attacks. The experiments show that real masked attacks pose a serious threat to the operation and security of FR systems.


Boutros, Fadi; Damer, Naser; Fang, Meiling; Raja, Kiran; Kirchbuchner, Florian; Kuijper, Arjan

Compact Models for Periocular Verification Through Knowledge Distillation

2020

BIOSIG 2020

Conference on Biometrics and Electronic Signatures (BIOSIG) <19, 2020, Online>

GI-Edition - Lecture Notes in Informatics (LNI), P-306

Despite the wide use of deep neural networks for periocular verification, achieving smaller deep learning models with high performance that can be deployed on devices with low computational power remains a challenge. In terms of computation cost, we present in this paper a lightweight deep learning model, DenseNet-20, with only 1.1M trainable parameters, based on the DenseNet architecture. Further, we present an approach to enhance the verification performance of DenseNet-20 via knowledge distillation. With experiments on the VISPI dataset, captured with two different smartphones (iPhone and Nokia), we show that introducing knowledge distillation into the DenseNet-20 training phase outperforms the same model trained without knowledge distillation: the Equal Error Rate (EER) is reduced from 8.36% to 4.56% on iPhone data, from 5.33% to 4.64% on Nokia data, and from 20.98% to 15.54% on cross-smartphone data.
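
A minimal sketch of a knowledge-distillation objective of the kind used in such a setup: a compact student (here DenseNet-20) is trained to match the softened outputs of a larger teacher in addition to the ground-truth labels. The temperature and weighting below are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target (teacher) and hard-target (label) losses."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients stay comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard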


Fang, Meiling; Damer, Naser; Boutros, Fadi; Kirchbuchner, Florian; Kuijper, Arjan

Deep Learning Multi-layer Fusion for an Accurate Iris Presentation Attack Detection

2020

FUSION 2020

International Conference on Information Fusion (FUSION) <23, 2020, Online>

Iris presentation attack detection (PAD) algorithms are developed to address the vulnerability of iris recognition systems to presentation attacks. Taking into account that deep features have successfully improved computer vision performance in various fields, including iris recognition, it is natural to use features extracted from deep neural networks for iris PAD. Each layer in a deep network carries features of a different level of abstraction: from the first layers to the higher layers, the extracted features become more complex and more abstract. This might point to complementary information in these features that can collaborate towards an accurate PAD decision. Therefore, we propose an iris PAD solution based on multi-layer fusion. The information extracted from the last several convolutional layers is fused on two levels, feature-level and score-level. We demonstrate experiments on both an off-the-shelf pre-trained network and a network trained from scratch. An extensive experiment also explores the complementarity between different layer combinations of deep features. Our experimental results show that the feature-level multi-layer fusion method performs better than the best single-layer feature extractor in most cases. In addition, our fusion results achieve similar or better results than state-of-the-art algorithms on the Notre Dame and IIITD-WVU databases of the Iris Liveness Detection Competition 2017 (LivDet-Iris 2017).
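
To make the feature-level variant concrete, the sketch below assumes the activations of the last few convolutional layers are each globally pooled and concatenated into one vector that feeds the PAD classifier; the layer choice and pooling are illustrative assumptions, not the exact published pipeline.

import torch
import torch.nn.functional as F

def fuse_layer_features(feature_maps):
    """Feature-level fusion of several convolutional layer outputs.

    feature_maps: list of tensors shaped (N, C_i, H_i, W_i), e.g. activations
    of the last few convolutional layers. Each map is global-average-pooled
    and the resulting vectors are concatenated.
    """
    pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in feature_maps]
    return torch.cat(pooled, dim=1)  # (N, sum_i C_i), fed to a classifier

# Hypothetical usage with three layers of different depth
maps = [torch.randn(2, 256, 28, 28), torch.randn(2, 512, 14, 14), torch.randn(2, 512, 7, 7)]
fused = fuse_layer_features(maps)  # shape (2, 1280)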


Fang, Meiling; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan

Demographic Bias in Presentation Attack Detection of Iris Recognition Systems

2020

28th European Signal Processing Conference (EUSIPCO 2020). Proceedings

European Signal Processing Conference (EUSIPCO) <28, 2020, online>

With the widespread use of biometric systems, the demographic bias problem attracts more attention. Although many studies have addressed bias issues in biometric verification, there are no works that analyze bias in presentation attack detection (PAD) decisions. Hence, we investigate and analyze the demographic bias in iris PAD algorithms in this paper. To enable a clear discussion, we adapt the notions of differential performance and differential outcome to the PAD problem. We study the bias in iris PAD using three baselines (hand-crafted, transfer-learning, and training from scratch) on the NDCLD-2013 [18] database. The experimental results point out that female users will be significantly less protected by the PAD in comparison to males.
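
A small sketch of how such a differential outcome can be made visible, assuming ISO/IEC 30107-3-style APCER and BPCER are computed separately per demographic group; the grouping, threshold, and function names are illustrative assumptions, not the paper's evaluation code.

import numpy as np

def pad_error_rates(scores, labels, threshold=0.5):
    """APCER: attacks accepted as bona fide; BPCER: bona fide rejected as attack.

    scores: predicted attack probabilities; labels: 1 = attack, 0 = bona fide.
    """
    scores, labels = np.asarray(scores), np.asarray(labels)
    pred_attack = scores >= threshold
    apcer = np.mean(~pred_attack[labels == 1])
    bpcer = np.mean(pred_attack[labels == 0])
    return apcer, bpcer

def per_group_error_rates(scores, labels, groups):
    """Compute APCER/BPCER separately for each demographic group (e.g. 'female', 'male')."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    return {g: pad_error_rates(scores[groups == g], labels[groups == g])
            for g in np.unique(groups)}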


Das, Priyanka; McGrath, Joseph; Fang, Zhaoyuan; Boyd, Aidan; Jang, Ganghee; Mohammadi, Amir; Purnapatra, Sandip; Yambay, David; Marcel, Sébastien; Trokielewicz, Mateusz; Maciejewicz, Piotr; Bowyer, Kevin W.; Czajka, Adam; Schuckers, Stephanie; Tapia, Juan; Gonzalez, Sebastian; Fang, Meiling; Damer, Naser; Boutros, Fadi; Kuijper, Arjan; Sharma, Renu; Chen, Cunjian; Ross, Arun A.

Iris Liveness Detection Competition (LivDet-Iris) – The 2020 Edition

2020

IJCB 2020. IEEE/IAPR International Joint Conference on Biometrics

IEEE/IAPR International Joint Conference on Biometrics (IJCB) <2020, online>

Launched in 2013, LivDet-Iris is an international competition series open to academia and industry with the aim of assessing and reporting advances in iris Presentation Attack Detection (PAD). This paper presents results from the fourth competition of the series: LivDet-Iris 2020. This year's competition introduced several novel elements: (a) it incorporated new types of attacks (samples displayed on a screen, cadaver eyes, and prosthetic eyes), (b) it initiated LivDet-Iris as an ongoing effort, with a testing protocol now available to everyone via the Biometrics Evaluation and Testing (BEAT) open-source platform to facilitate continuous reproducibility and benchmarking of new algorithms, and (c) it compared the performance of the submitted entries with three baseline methods (offered by the University of Notre Dame and Michigan State University) and three open-source iris PAD methods available in the public domain. The best-performing entry to the competition reported a weighted average APCER of 59.10% and a BPCER of 0.46% over all five attack types. This paper serves as the latest evaluation of iris PAD on a large spectrum of presentation attack instruments.


Fang, Meiling; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan

Micro Stripes Analyses for Iris Presentation Attack Detection

2020

IJCB 2020. IEEE/IAPR International Joint Conference on Biometrics

IEEE/IAPR International Joint Conference on Biometrics (IJCB) <2020, online>

Iris recognition systems are vulnerable to presentation attacks, such as textured contact lenses or printed images. In this paper, we propose a lightweight framework to detect iris presentation attacks by extracting multiple micro-stripes of expanded normalized iris textures. In this procedure, a standard iris segmentation is modified. For our Presentation Attack Detection (PAD) network to better model the classification problem, the segmented area is processed to provide lower-dimensional input segments and a higher number of learning samples. Our proposed Micro Stripes Analyses (MSA) solution samples the segmented areas as individual stripes. Then, a majority vote over those micro-stripes makes the final classification decision. Experiments are demonstrated on five databases, where two databases (IIITD-WVU and Notre Dame) are from the LivDet-Iris 2017 competition. An in-depth experimental evaluation of this framework reveals a superior performance compared with state-of-the-art (SoTA) algorithms. Moreover, our solution minimizes the confusion between textured (attack) and soft (bona fide) contact lens presentations.
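
As a brief illustration of the stripe sampling, the sketch below assumes the normalized (unrolled) iris texture is available as a 2-D array and is cut into overlapping horizontal micro-stripes, each of which becomes one learning sample; stripe height and step size are illustrative assumptions.

import numpy as np

def extract_micro_stripes(normalized_iris, stripe_height=8, step=4):
    """Cut a normalized iris texture (H x W) into horizontal micro-stripes.

    Returns an array of shape (n_stripes, stripe_height, W); overlapping
    stripes increase the number of learning samples per iris image.
    """
    h, _ = normalized_iris.shape
    stripes = [normalized_iris[y:y + stripe_height]
               for y in range(0, h - stripe_height + 1, step)]
    return np.stack(stripes)

# Hypothetical usage on a 64 x 512 normalized iris texture
stripes = extract_micro_stripes(np.zeros((64, 512)))  # -> shape (15, 8, 512)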