In May, a video appeared on the internet in which Donald Trump appeared to advise the people of Belgium on climate change. Looking straight at the camera, he declared that since he had had the courage to pull out of the Paris climate agreement, so should they. The video had been produced by a Belgian political party, Socialistische Partij Anders (Sp.a).
The video was posted on Sp.a’s Facebook and Twitter accounts. Many viewers appeared shocked and concerned that the American president would comment on Belgium’s climate policy, and many left comments about his speech. It was later confirmed that the video was fake, a hi-tech forgery.
According to Sp.a, the party had commissioned a production studio to use machine learning to create a ‘deepfake’: a computer-generated replication of an individual, in this case Trump, doing or saying things he had never done or said.
Sp.a’s aim was to use the fake video to catch the public’s attention and then direct it to an online petition calling on the Belgian government to take urgent action on climate change. The video’s makers later said they had assumed the low quality of the fake would be enough to alert their followers to its inauthenticity; according to them, Trump’s lip movements made it obvious that the speech was not genuine. As soon as Sp.a realized that their practical joke was being taken seriously, their social media team went into damage control.
Sp.a’s communication team had clearly underestimated the effect their forgery might have on its audience. Inadvertently, the small political party produced a profoundly disturbing demonstration of how a fake video can harm both the person being targeted and the audience watching it. On a small scale, it showed how this technology could be used to threaten our already vulnerable information ecosystem.
How can Deepfakes be dangerous?
Deepfakes pose a distinct and serious threat because they are a malicious application of facial recognition. Traditional facial recognition is used for many things today, such as finding snapshots of your friends in Google Photos; it can also scan your face at a concert or an airport without your knowledge. But where traditional facial recognition aims to transform your features into a code a computer can match, deepfakes aim to duplicate an identity so well that it becomes difficult to tell the fake from the real. Not only can this ruin an individual’s life, it is even more dangerous when you consider that the technology can manipulate the public’s perception of political leaders, heads of state, and other powerful figures.
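The idea of turning a face into “a specific code for computers” can be illustrated with a toy sketch. In real systems a model maps each face to an embedding vector (often 128 or more dimensions) and two faces are judged the same when their vectors lie close together; the 4-dimensional vectors and the 0.6 threshold below are illustrative assumptions, not real encodings.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face-encoding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_person(known_encoding, candidate_encoding, threshold=0.6):
    """Treat two encodings as the same face when their distance
    falls below the threshold."""
    return euclidean_distance(known_encoding, candidate_encoding) < threshold

# Toy 4-dimensional encodings (real systems typically use 128+ dimensions).
alice = [0.12, 0.80, 0.33, 0.45]
alice_again = [0.14, 0.79, 0.31, 0.47]
bob = [0.90, 0.10, 0.75, 0.05]

print(is_same_person(alice, alice_again))  # small distance -> True
print(is_same_person(alice, bob))          # large distance -> False
```

The point of the sketch is the contrast drawn above: recognition only needs encodings close enough to match an identity, while a deepfake has to reproduce the face itself convincingly.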
This is why researchers at the Pentagon and forensics experts are looking for ways to detect deepfakes. According to researchers, it is now easier to create a convincing deepfake than to detect one. The technology is designed to learn how the different parts of a human face move on camera and uses this to create a speaking, moving human, like a photorealistic digital puppet. The driving force behind the rapid development of deepfakes is artificial intelligence, and all the technology needs to produce a video is facial images. The truth remains that nobody is safe from this malicious technology; staying out of the public eye and off social media won’t protect people from deepfakes either, because these days almost everyone is exposed in one way or another.
Is face swap related to Deepfakes as well?
Deepfakes come in different forms, each with its own threats. Of the three main types, the most common is the face swap. For some, face swapping is nothing more than harmless fun: people make memes and swap the faces of actors onto shows and movies they have never acted in. But on a larger scale this technology can be insidious. Famous actresses such as Gal Gadot and Scarlett Johansson have been used in deepfake videos, and the same has been done to people who are not famous at all.
All that is basically needed to create a deepfake of you is a collection of images of your face. Because the software relies on machine learning, it needs data sets of your face and of another face in order to swap them convincingly in a video. This is one of the main reasons public figures and celebrities are the most common victims of deepfakes: the internet is filled with videos and source photos of them, a veritable image stockpile.
How to detect deepfakes?
Though deepfakes can seem authentic, there are still a few ways to detect one, because the algorithms have flaws. One way is to focus on how the simulated faces blink. A normal human blinks every 3 to 10 seconds, so that is what we expect to see in a video of a person talking. This is not the case with deepfakes.
When a deepfake of a person is created, it draws on the pictures of that individual available on the internet. Even for people who are photographed often, very few pictures exist online in which their eyes are closed; such images are hard to find and are rarely published by photographers. Without images of the person’s closed eyes, it is difficult for deepfake algorithms to create faces that blink at a normal rate. So if you want to tell whether a video is genuine or a deepfake, focus on whether the individual in the video is blinking normally: measure the rate at which the person in the video blinks and compare it with the natural rate at which a real person blinks.
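The comparison described above can be sketched in a few lines. This is a minimal illustration, assuming some upstream landmark detector has already produced a per-frame eye-openness score (the eye-aspect-ratio, or EAR, is one common heuristic for this); the 0.2 threshold, the frame data, and the 6–20 blinks-per-minute band (the article’s one blink every 3–10 seconds, converted) are all illustrative assumptions.

```python
def count_blinks(ear_series, threshold=0.2):
    """Count blinks in a per-frame eye-aspect-ratio (EAR) series.
    One blink = one contiguous run of frames with EAR below the
    threshold (eyes closed)."""
    blinks = 0
    eyes_closed = False
    for ear in ear_series:
        if ear < threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= threshold:
            eyes_closed = False
    return blinks

def blink_rate_per_minute(ear_series, fps):
    """Convert a blink count over the clip into blinks per minute."""
    seconds = len(ear_series) / fps
    return count_blinks(ear_series) / seconds * 60

def looks_natural(rate, low=6.0, high=20.0):
    """One blink every 3-10 seconds is roughly 6-20 blinks/minute."""
    return low <= rate <= high

# Synthetic 10-second clip at 5 fps: eyes open (EAR ~0.3) with two dips.
clip = [0.3] * 10 + [0.1] * 2 + [0.3] * 20 + [0.1] * 2 + [0.3] * 16
rate = blink_rate_per_minute(clip, fps=5)
print(count_blinks(clip), rate, looks_natural(rate))  # 2 12.0 True
```

A clip whose subject never blinks, or blinks far less often than this band, would be a candidate for closer scrutiny, exactly the signal the detection method above relies on.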