
Practical Steps to Avoid Deepfakes


In the modern era of technology, the methods of cybercriminals and other bad actors continue to evolve. Deepfake attacks are becoming increasingly common in the digital landscape. An attacker impersonating someone we know using artificial intelligence is no longer a far-fetched idea out of science fiction.

Individuals and corporations face the very real threat of deepfake attacks at an increasing rate. In fact, in 2021 the FBI issued a warning about the rising threat of synthetic content, which includes deepfakes. The bureau anticipated that the technology would be “increasingly used by foreign and criminal cyber actors for spear-phishing and social engineering in an evolution of cyber operational tradecraft.”

How can we effectively protect ourselves from bad actors using synthetic media in their attacks?

How Bad Actors Leverage Deepfake Technology

The creation of deepfakes relies on deep learning, a branch of AI that processes data and makes predictions in a way loosely inspired by the human brain. Advances in AI have made the outputs of this technology increasingly realistic. Bad actors can now alter pre-recorded footage, and they can even superimpose the face of a completely different individual over their own in real time.

A noteworthy example is the AI-assisted social engineering attack on a multinational firm in Hong Kong. Bad actors tricked a finance worker into paying out HK$200 million (about US$25.6 million). What started as a phishing message requesting the financial transaction then led to a live video conference call, which the finance worker assumed would be a secure way of validating the suspicious request. Believing they were communicating with the CFO and other colleagues, the worker felt comfortable enough to comply. Unknown to the employee at the time, every other person on the video conference call was completely fake.

Along with social engineering attacks like the Hong Kong incident, bad actors leverage deepfake technology in many other malicious ways, including identity fraud, spreading fake news, and, sadly, even blackmailing unsuspecting victims with fabricated pornography of the victims themselves. Synthetic media has opened up a surge of nefarious attack vectors against individuals and corporations alike.

Protective Measures

The criminal use of synthetic content leads many to wonder what steps they can take to protect themselves. Corporations especially remain on high alert and are actively looking for ways to defend themselves against deepfake attacks like the Hong Kong incident.

Indicators You Can Spot:

  • The FBI urges the public to look for key visual indicators that a piece of media may contain synthetic content. These include distortions, warping, inconsistencies, or asymmetries, typically found around the pupils and earlobes in still images. Common asymmetries in synthetic images include earrings that differ on each side, or one eye that is slightly bigger than the other. In full-body images, many deepfake image models struggle to generate hands, so it is worth checking for the correct number of fingers as well as the size and realism of each hand.
  • In video, key indicators include audio that is poorly synced to lip movements, unnatural blinking or flickering around the eyes, odd lighting or shadows, and facial expressions that don’t match the emotional tone of the speech. In videos made with “face swap” methods, you can sometimes see where another person’s face has been blended onto the original forehead, revealing differences in skin texture or color along with inconsistencies in the hairline.
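The indicators above are checks for the human eye, but simple automated heuristics can help triage a suspicious still image. One classic (and imperfect) image-forensics technique is Error Level Analysis: resave a JPEG and diff it against the original, since edited or composited regions often recompress differently from the rest of the picture. This sketch is not part of the FBI guidance quoted above; it is only an illustrative example using the Pillow imaging library, and a bright region in its output is a prompt for closer inspection, not proof of manipulation.

```python
# Error Level Analysis (ELA), a coarse screening aid for edited images.
# Illustrative only: it flags compression inconsistencies, not deepfakes per se.
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(image: Image.Image, quality: int = 90) -> Image.Image:
    """Return the per-pixel difference between `image` and a recompressed copy.

    Regions that were pasted in or heavily edited often show a different
    error level (brighter areas in the result) than untouched regions.
    """
    original = image.convert("RGB")
    buf = BytesIO()
    original.save(buf, format="JPEG", quality=quality)  # recompress in memory
    buf.seek(0)
    recompressed = Image.open(buf)
    return ImageChops.difference(original, recompressed)
```

In practice one would amplify the difference image before viewing it; uniform error levels across the frame are expected, while sharply different patches deserve a second look.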

Training and Best Practices that Work

Of course, even knowing the visual indicators we can spot today is not foolproof. With the ever-advancing capabilities of artificial intelligence, these inconsistencies and visual tells may soon be almost impossible to spot. Because of this, adopting good cyber hygiene as individuals, and deploying effective security awareness training as a corporation, is imperative.

This would include:

  • Adopt a “zero trust philosophy” and verify what you see; do not assume an online persona or individual is legitimate simply because video, photographs, or audio of them exist.
  • Seek multiple, independent sources of information to verify an individual, especially if they make strange requests for your information.
  • Be aware of bad actors’ potential attack vectors, and train users to identify and report social engineering attempts that could compromise personal or corporate accounts.
  • For personal content and media, add digital fingerprints or watermarks that make it harder for someone to create synthetic content from them. Some applications even add noise to your photos to reduce the risk of their being used for AI-generated content.
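To make the last point concrete, here is a toy sketch of the "add noise" idea: perturb an image slightly so it is a less clean training sample for generative models. Real protective tools use carefully targeted adversarial perturbations; the uniform random noise below is a simplified illustration of the concept, not one of those tools, and the function name is our own.

```python
# Toy illustration of adding small perturbations to an 8-bit RGB image
# (as a NumPy array) before sharing it. Real "cloaking" tools compute
# targeted adversarial noise; this uniform noise is only a demonstration.
import numpy as np

def add_protective_noise(image: np.ndarray, strength: int = 4, seed: int = 0) -> np.ndarray:
    """Return a copy of `image` with random per-pixel noise in [-strength, strength]."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(-strength, strength + 1, size=image.shape)
    noisy = image.astype(np.int16) + noise        # widen dtype to avoid wrap-around
    return np.clip(noisy, 0, 255).astype(np.uint8)  # back to valid 8-bit range
```

With a small `strength`, the change is barely perceptible to a viewer, which is the point: the photo stays shareable while becoming slightly noisier data for a model.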

Test. Educate. Protect.

By increasing our awareness of the new attacks that bad actors use, and by employing best practices, we can effectively protect ourselves from deepfake attacks. Many corporations opt for advanced security awareness training to educate their staff. At Social-Engineer LLC, our managed service programs, Vishing, Phishing, SMiShing, and Security Assessments, will Test, Educate, and Protect your company’s first line of defense: your employees.

Along with these live and tailored awareness programs comes our newest offering, the premier AI-Based Social Engineering Audit. With this cutting-edge service, we help your staff navigate the future of cybersecurity using advanced deepfake technologies. These hyper-realistic scenarios are designed to test and enhance your team’s vigilance, preparing them for real-world challenges. If you would like more information about this or any of our other security awareness programs, head to Social-Engineer.com.

Written by Josten Peña
Human Risk Analyst for Social-Engineer, LLC
