How deepfakes endanger cyber security
Fakes are widespread on the Internet: manipulated images, fabricated messages, false claims. It is becoming increasingly difficult to tell what is fake and what is real. And now there is a new variant of forgery: deepfakes.
This kind of distorted reality first appeared on the Reddit platform in December 2017, when a user posted a video in which celebrity faces had been convincingly inserted into explicit scenes. To do this, the anonymous poster built an algorithm on freely available machine-learning libraries such as Keras and Google’s TensorFlow.
Such fake videos are now banned on many platforms such as Twitter and Discord. But deepfakes have already spread too far to be stopped.
Counterfeiting without specialist knowledge
In the past, video fakes in which faces were swapped required a great deal of work and specialist knowledge. Deepfakes do not, because the manipulation is largely performed automatically by the computer.
Deepfakes owe their name to the fact that they are based on deep learning, a type of machine learning used in artificial intelligence. The algorithms behind deepfakes are fed with large amounts of image and video material: the more data available for a person, the better and more accurate the result. Videos are the ideal source material because they can be split into hundreds of individual frames showing the person from a wide variety of perspectives. A few hundred pictures of the target person are enough to create a plausible deepfake.
The actual work is done by a neural network. Feed it pictures of kittens and, over time, it learns to distinguish the animals from the background, until it can eventually generate kitten images itself. The same applies to heads and faces: in the end, the neural network knows exactly which features the target person’s face has, and can generate it independently and insert it into existing videos. One algorithm creates the fake; a second searches for errors and reports them back, an arrangement known as a generative adversarial network (GAN). The quality of the fake increases with the number of repetitions. Voices can be reproduced in the same way, an approach known as deepvoice.
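The generator-plus-critic loop described above can be illustrated with a tiny, self-contained sketch. The example below is a deliberately simplified stand-in, not real deepfake code: it uses plain NumPy, replaces images with one-dimensional numbers, and reduces the generator to a single parameter, so the adversarial back-and-forth runs in seconds on any machine. All names, learning rates and the target value are illustrative assumptions.

```python
import numpy as np

# Toy version of the two-network setup: "real" data are samples
# around 4.0; the generator starts at 0.0 and learns to produce
# samples the discriminator can no longer tell apart from real ones.
rng = np.random.default_rng(0)
REAL_MEAN, NOISE = 4.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.0, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
theta = 0.0          # generator's only parameter (its output mean)
lr_d, lr_g, batch = 0.2, 0.02, 128

for step in range(1500):
    real = rng.normal(REAL_MEAN, NOISE, batch)
    fake = theta + rng.normal(0.0, NOISE, batch)

    # Discriminator update: push D(real) toward 1, D(fake) toward 0
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: the "error report" — shift theta so that
    # the discriminator mistakes fakes for real samples
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)

# started at 0.0; should now sit near the real mean of 4.0
print(f"generator parameter after training: {theta:.2f}")
```

The same feedback cycle, scaled up from one number to millions of pixels and network weights, is what makes deepfake quality improve with every repetition.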
The basic difference from conventional fakes, in which the head from one image is pasted into another and retouched a little in Photoshop, is this: deepfakes do not merely rearrange existing data, they generate new material.
The code for deepfakes is open source and therefore available to everyone. Several projects on GitHub develop such algorithms; one example is FakeApp.
Deepfakes look authentic at first glance, but they are not perfect. If too little data is available for certain viewing angles, the algorithm has to omit details, which leads to slightly blurry image areas. The hairline and ears can also look fuzzy or fake on close inspection. Small details often give the fake away: two mismatched earrings, too many incisors, asymmetrical glasses. The neural network knows nothing of the real world, so it does not know what physically correct depth of field or shadows look like. People usually notice quickly when something seems “off”.
With the help of special programs, blink patterns, skin color or even the pulsing of blood under the skin can be analyzed. A silver bullet against deepfakes does not yet exist.
How to create deepfakes
To create deepfakes with a program like FakeApp, all you need is a powerful Nvidia graphics card to provide the necessary computing power. It also works on an ordinary CPU, just much more slowly. Deepfakes are therefore available not only to YouTube trolls, but also to cybercriminals.
“Deepfakes could be used to blackmail companies or individuals by placing the target in a criminal setting. A faked instruction from a manager to transfer a certain amount can lead to financial losses. And the video identification process common in the private sector is no longer counterfeit-proof,” explains Jelle Wieringa, Security Awareness Advocate at KnowBe4.
Social changes are to be feared as it becomes increasingly difficult to determine the authenticity of a video. Video evidence (in court, not in football) may also lose its probative value. Ultimately, deepfakes and deepvoices can be used for opinion-making, propaganda and political influence. In the age of social media, videos spread quickly, even when they are obviously deepfakes; people see what they want to see. And deepfakes will only become more credible in the future.
Legal measures against Deepfakes
China has issued a new government policy designed to curb AI-generated fake news and misleading videos. The regulation requires that content created with AI or VR technology be clearly labeled as such when it is published on the Internet; otherwise, criminal proceedings can be opened. The regulation comes into force on January 1, 2020, and is enforced by the Cyberspace Administration of China.
With this, the Chinese government follows similar laws already in place in the United States to combat cyber crime. Last month, California became the first state to criminalize the use of deepfakes in political campaign advertising. The law, called AB 730 and signed by Governor Gavin Newsom, makes it a crime to publish audio, image or video material that gives a false or damaging impression of a politician’s words or actions. The rule applies to candidates within 60 days of an election and is due to expire in 2023 unless explicitly extended.
Germany has no dedicated law regulating deepfakes; instead, two existing legal texts apply. On the one hand, the use of someone’s face violates the right to one’s own image, which is regulated in the Art Copyright Act: pictures of people may not be used without their consent. On the other hand, the Basic Law protects general personality rights, since public defamation can damage the reputation of those affected.
Training and awareness raising
“Employees are the greatest risk, and the security-conscious behavior of every individual is of crucial importance. Companies should create awareness that deepfake attacks are possible at any time. Anyone who knows that a voice on the phone does not necessarily belong to the expected person can fend off possible attacks more easily. Implementing security technologies such as firewalls, IDS or endpoint protection alone is no longer sufficient; ‘human firewalls’ are needed as well,” explains Wieringa.
One way to raise awareness is company-specific training that covers possible social engineering and deepfake attacks on the respective company. Generic awareness training, by contrast, is often seen by employees as a tedious compulsory exercise and achieves moderate success at best. Many companies still rely on the fear factor, but to achieve lasting behavior change it is more promising to emphasize the positive effects of correct behavior. Wieringa explains: “Security awareness training should not just impart knowledge, but change behavior. This works best with emotions. Employees must enjoy learning security-conscious behavior.”
The courses on the KnowBe4 training platform therefore work with comics and Netflix-style video series; cliffhangers make customers want to come back for more on their own. Gamification elements such as badges to be won, or competitions between teams within a company, raise awareness in a playful way. Training should be fun, interactive, and tailored to current threats.
The technology behind deepfakes will soon be good enough to generate a voice in real time. An attacker could then interact directly with the victim while sounding like a superior or the company’s CEO. It is therefore high time to raise awareness among employees and private individuals alike. On the other hand, deepvoice could also be put to constructive use, for example to deliver training in the voice of the boss or a popular actor.