Face morphing - Interview with a biometrics expert

(The article first appeared on the website of the Fraunhofer ICT Group: www.innovisions.de)

Face morphing refers to blending two faces into a single photo so that the resulting face strongly resembles both people. What initially sounds like a harmless Photoshop trick has serious consequences: wherever biometric systems are used, e.g. at passport control, face morphing poses a major security risk. Dr. Naser Damer from Fraunhofer IGD explains what face morphing is and what solutions are being developed to address the problem.

 

Dr. Damer, face morphing is not just a little gimmick, but is increasingly being used by criminals. Why is that? What exactly can happen when two faces are morphed in a photo?

 

Face morphing creates an image that effectively contains several identities. Neither the human eye nor a machine can reliably recognize the fraud. If such a photo ends up on a passport or identity document, criminals who are banned from entering a certain country, for example, can still cross borders with it. They use a passport with a false name, false personal details and a morphed photo. That photo is what the police and border control rely on for verification: a passport is supposed to prove that the identity really belongs to the person presenting it. With the help of face morphing, people with a criminal background can simply create an alternative identity and use it to identify themselves.

 

At passport controls in particular, identity checks are not carried out by the human eye alone. The passport is also read by a machine, so it should be possible to verify the authenticity of the document, and especially of the photo. Why is the system so error-prone that it fails to recognize a morphed image?

 

This is due to machine learning. These biometric systems have been trained to judge faces more or less the way we humans do. As a human being, you accept changes in the faces of the people around you every day. You recognize a family member when their facial expression changes, when they smile or cry. You recognize them when their hair has been cut or dyed. You would even recognize them in a photo that is five years old. And that is exactly what the machine does: it has been trained to accept a certain amount of change in a face. If it didn't, there would be problems. Your passport is valid for ten years, for example, and during that time both border officials and the machine are expected to be able to identify you with this passport and the corresponding picture.

The machine therefore has a difficult task to master: it has to tolerate a certain amount of variation within one person and at the same time be able to distinguish between different people. And as you said, there is actually a double check. So it is not just the automated system that can be outsmarted here; a police officer who checks the document manually would also let a person with a morphed photo pass.
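
To make this trade-off concrete, here is a minimal sketch of how such a verification decision is typically made, assuming the face images have already been mapped to embedding vectors by some face-recognition model (not shown); the similarity threshold of 0.6 is purely illustrative. The tolerance expressed by that threshold is exactly what a morphed photo exploits, because a morph can be similar enough to both contributing identities to pass the check for either of them.

```python
# Minimal sketch of a face-verification decision based on embedding
# similarity. The embeddings are assumed to come from some face-recognition
# model; the threshold value is illustrative, not a real system parameter.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(passport_embedding: np.ndarray,
           live_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Accept if the live face is 'close enough' to the passport photo.

    This tolerance is what lets the system cope with aging, hairstyles
    and expressions - and it is the same tolerance a morph exploits.
    """
    return cosine_similarity(passport_embedding, live_embedding) >= threshold
```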

 

This is where the research of the Fraunhofer Institute for Computer Graphics Research IGD comes into play. The institute is intensively involved in the field of biometrics and works in various research areas. For example, there is a project that deals with assessing image quality: passport photos of insufficient quality are identified from the outset and not approved. Is that correct?

 

That's right, we are also working on checking the quality of facial images. Part of this work also deals with the question of which images should be accepted on a passport.

Currently, the images on a passport follow certain standards that define what should be captured in the image. These standards state, among other things, that the picture must be taken against a white background, that the face must be turned toward the camera, that no hair may cover the face, and much more. But these are ultimately only descriptive rules. An official checks the image against these criteria, and of course that is not enough. An image taken with a smartphone may comply with all of these rules and still be of poor quality - or at least worse than it should be.

Image quality can of course also be related to the issue of face morphing attacks. However, a morphed image is not necessarily of poor quality. So you could say the two research topics are correlated, but they are not the same thing.
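
As a toy illustration of the kind of technical property that descriptive rules do not capture, the sketch below checks only the resolution and sharpness of an image; the actual quality metrics used in this research are not shown here, and the function name and thresholds are made up for the example.

```python
# Toy image-quality check: resolution and sharpness only. Real face-image
# quality assessment uses far richer models; thresholds here are illustrative.
import cv2

def basic_quality_check(image_path: str,
                        min_side: int = 600,
                        min_sharpness: float = 100.0) -> dict:
    img = cv2.imread(image_path)
    if img is None:
        raise ValueError(f"Could not read image: {image_path}")

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a common, crude sharpness estimate:
    # low variance means few edges, i.e. a blurry image.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    height, width = gray.shape

    return {
        "resolution_ok": min(height, width) >= min_side,
        "sharpness_ok": sharpness >= min_sharpness,
        "sharpness": float(sharpness),
    }
```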

 

Can you tell me more about the face morphing project?

 

Our main activities in the field of face morphing take place within ATHENE. ATHENE is a research institution of the Fraunhofer-Gesellschaft, more precisely the "National Research Center for Applied Cybersecurity" in Darmstadt; Fraunhofer IGD is part of it, as is the Fraunhofer Institute for Secure Information Technology SIT. Within ATHENE, we deal with biometrics on various levels. Among other things, there is a research project on identity management, and that is where we research face morphing. We see the detection and identification of face morphing attacks in particular as one of our main tasks.

Like all other types of (cyber) attacks, face morphing attacks are on the rise. Criminals keep finding new ways to set up such an attack, and it is correspondingly difficult to detect an attack method you have never seen before. We therefore have two ways of countering this problem, and we use both of them. Firstly, we work on detection algorithms that are designed to expect unknown attacks. In this way, we can be armed against attacks generated by previously unknown methods; we are creating a generalized face morphing detection, so to speak. Secondly, we also simply try to be faster than the criminals themselves and develop possible attack scenarios and strategies ourselves. This ensures that our detection algorithms already know about new, not-yet-used possibilities for face morphing attacks and can detect them when it matters. You could say: we create the attack ourselves.
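
The idea of generalized detection can be illustrated with a simplified sketch: train a morph-versus-bona-fide classifier on attacks created with one morphing method and measure how well it separates bona fide images from attacks created with a method it never saw during training. The features below are random placeholders standing in for real image descriptors or deep embeddings; nothing in this sketch reflects the institute's actual algorithms.

```python
# Sketch of evaluating a morph detector on an unseen attack type.
# Features are synthetic placeholders for real image descriptors/embeddings.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
dim = 128

# Placeholder feature vectors (labels: 0 = bona fide, 1 = morph).
bona_fide_train = rng.normal(0.0, 1.0, (500, dim))
landmark_morphs = rng.normal(0.5, 1.0, (500, dim))   # "known" attack type
bona_fide_test = rng.normal(0.0, 1.0, (200, dim))
gan_morphs = rng.normal(0.4, 1.2, (200, dim))        # "unseen" attack type

X_train = np.vstack([bona_fide_train, landmark_morphs])
y_train = np.array([0] * 500 + [1] * 500)
clf = SVC(probability=True).fit(X_train, y_train)

X_test = np.vstack([bona_fide_test, gan_morphs])
y_test = np.array([0] * 200 + [1] * 200)
scores = clf.predict_proba(X_test)[:, 1]

# The interesting number: how well the detector separates bona fide images
# from an attack type it never saw during training.
print("AUC on unseen attack type:", roc_auc_score(y_test, scores))
```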

 

How exactly does that work?

 

There are certain points of interest on the human face, for example the corners of the mouth or the corners of the eyes; their exact number can vary, but we generally speak of 68 well-defined landmark points. In the traditional way of morphing faces, two faces are used that are more or less similar in many of these landmarks. Their positions are identified either manually or automatically and then averaged, for example in image processing software, to create a new face. Averaging the landmark positions is the central step in this kind of face morphing. Some post-processing is then necessary to hide certain features and the traces of the editing process. This type of face morph is sometimes difficult and sometimes easy to recognize.

But you have to bear in mind that attackers are often technically adept and develop other methods. That is why we have also looked into a newer method that attackers could use: we generate new faces with generative adversarial networks. Deep learning and artificial intelligence play the key role here. We no longer average values to create a new face. Instead, the network learns from the images, uses their features and characteristics, and generates a new face completely automatically based on these feature structures - a face that of course carries characteristics of both original faces. A face generated in this way can be verified as either of the two original identities.
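
Below is a minimal sketch of the landmark-averaging step described above, using dlib's publicly available 68-point landmark model and OpenCV. A realistic morphing pipeline would add Delaunay triangulation, per-triangle warping and careful post-processing; here a single similarity transform and a simple pixel blend stand in for those steps, and the file names and blend weight are illustrative.

```python
# Minimal sketch of landmark-based morphing: detect the 68 facial landmarks
# in two photos, average their positions, warp both faces toward the averaged
# geometry and blend the pixel values. A realistic morph would use
# per-triangle warping and post-processing instead of one similarity transform.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point model distributed with dlib (path is illustrative).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(image: np.ndarray) -> np.ndarray:
    """Return the 68 (x, y) landmark coordinates of the first detected face."""
    faces = detector(image, 1)
    shape = predictor(image, faces[0])
    return np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)],
                    dtype=np.float32)

def simple_morph(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    pts_a, pts_b = landmarks(img_a), landmarks(img_b)
    # Central step: average the corresponding landmark positions.
    pts_avg = (1 - alpha) * pts_a + alpha * pts_b

    h, w = img_a.shape[:2]
    # Warp each face toward the averaged geometry (similarity transform only).
    m_a, _ = cv2.estimateAffinePartial2D(pts_a, pts_avg)
    m_b, _ = cv2.estimateAffinePartial2D(pts_b, pts_avg)
    warped_a = cv2.warpAffine(img_a, m_a, (w, h))
    warped_b = cv2.warpAffine(img_b, m_b, (w, h))

    # Blend the warped pixel values.
    return cv2.addWeighted(warped_a, 1 - alpha, warped_b, alpha, 0)

morph = simple_morph(cv2.imread("subject_a.jpg"), cv2.imread("subject_b.jpg"))
cv2.imwrite("morph.jpg", morph)
```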

 

In other words, you morph images at the institute yourselves in order to expose how error-prone biometric systems are?

 

That is correct. We have developed a method that shows how vulnerable such systems still are.

 

This will probably serve as a basis for further research. Is there already a project working on a solution?

 

We are currently working on solutions to the problem, but we cannot claim that these solutions are perfect. The reason is simple: there will always be new types of attacks, so no solution can be perfect. What we can do is work toward a near-perfect solution. In principle, we need to stay one step ahead of the criminals and research new attack possibilities ourselves so that we can train and prepare our systems for them.

 

Can you give an example of how protection against forgery of photo ID documents could be improved in general?

 

That is difficult to say, because it doesn't just depend on Germany, i.e. on German research and German guidelines. We could say that the passport issuing process needs to change, and that photos would already have to be checked at that point to see whether they are genuine or morphed. Then we could trust German passports more, but what about passports from other countries? Face morphing attacks will continue to be a major problem. It is not enough to make the German passport, or any single passport, secure for the time being; we have to keep track of the criminals and their methods. For this reason, I think the only realistic solution is to develop methods that detect this type of attack. And that is exactly what we are working on within the ATHENE research center.

 

Is Fraunhofer IGD involved in any other projects on biometrics or face morphing within ATHENE?

 

There are three biometrics projects within ATHENE that Fraunhofer IGD is involved in. These projects are quite large, however, which is why they are subdivided into different project activities.

We have already talked about two of the three projects. The first deals with the quality control of facial images: we measure how usable a facial image is within face recognition systems. This is not only important for passport checks; it is also very relevant for forensic purposes, for example. After all, we need to know how far we can rely on a facial image when deciding whether someone really is the person in question.

In the identity management project area, we focus on the variance and variability of biometric systems. This includes the aforementioned morphing attacks, but the field itself is much larger. So-called "presentation attacks" also fall into this area: roughly speaking, photos or videos of a person's face or fingerprint that are presented to a system in order to deceive it. We also deal with attacks on privacy via such systems. For example, when you log in to your bank via facial recognition, you want your bank to use the information from your face only as proof of identity. But the fact is that this information can also be used to infer your gender, ethnicity, health problems and more. We want to make sure that facial recognition data is protected and not misused.

The third project deals with biometrics on embedded systems, i.e. systems that offer less flexibility but have their own sensing properties. In this area, we are working a lot on machine learning solutions. The aim is to apply biometrics on cell phones or augmented reality cameras, for example, in such a way that they actually create greater security. Among other things, we are working on augmented reality cameras that can also be used for border controls. To achieve this, we need to develop machine learning solutions that get by with low computational resources.

 

Finally, can you give us a brief research outlook? What will your future research work in the field of biometrics and face morphing look like?

 

As part of the identity management project, we are of course continuing to work on new solutions. We have identified and defined the main problem. Now we need to use and develop new machine learning methods to create highly generalizable solutions and detect unknown attacks in time.

Fraunhofer IGD is an institute for applied research, so we are not aiming to develop a theory that merely looks interesting in a scientific paper. We are interested in creating solutions that generalize well and can be used in practice to combat the problem.