Deepfake Terrorism: The Terrifying Threat of Manipulated Videos
Imagine a world where terrorists can manipulate videos to make it seem as though world leaders are declaring war or inciting violence. This nightmare scenario is becoming increasingly plausible with the rise of deepfake technology. From spreading false information to destabilizing nations, the implications of deepfake terrorism are deeply unsettling. The technology is already being abused for crime: in one recent, elaborate scheme, perpetrators employed deepfakes to impersonate senior executives during a video conference, leading to substantial financial loss and widespread media attention.
BEC fraud using deepfakes in Hong Kong
The emergence of deepfake technology has ushered in a new era of sophisticated cybercrime, as evidenced by a recent incident in Hong Kong in which a multinational company fell victim to a US$25 million Business Email Compromise (BEC) scam. A finance clerk joined a video conference with what appeared to be senior executives of the company to discuss an urgent, confidential transaction; in reality, the senior employees on the call were deepfakes. The incident made national and international headlines.
Figure 1. Hong Kong police officer Baron Chan holding a press conference on the incident
Details of the incident
The individual targeted in the incident was a finance officer in the Hong Kong office of the multinational company. In mid-January, the officer received an email from the Chief Financial Officer (CFO) at the UK headquarters about the need for a secret business transaction. The officer initially suspected a phishing email, but was subsequently invited to a video conference attended by the CFO and several colleagues. During the meeting everything appeared normal, and the officer was ultimately persuaded to transfer HK$200 million (roughly US$25 million) in 15 installments to five locations. A few days later, when the Hong Kong branch confirmed the situation with headquarters, it discovered that the entire transaction had been fraudulent. The culprit is believed to have used YouTube videos and other materials published by the company to create the deepfakes (see below).
Deepfakes and the rise of abuse
Deepfakes are an emerging threat that falls under the umbrella of synthetic media: they leverage a form of Artificial Intelligence (AI) to create believable, realistic video, image or audio files of events that never occurred or that have been manipulated. The term ‘deepfake’ was coined in 2017 by a Reddit user who created and posted a video combining the face of a famous actress with a pornographic video, using open-source face-swapping technology. Although advanced digital-alteration technology had existed for some time, access to it had typically been limited by cost and skill. Only a short time later, with the development of related technologies, tools and smartphone apps, virtually anyone can easily create deepfakes.
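To illustrate how low the barrier to entry has become, the sketch below shows a crude, classical face swap of the kind early open-source tools were built on: detect a face in two images, paste the source face over the target face, and blend the seam. This is only an illustrative sketch using the widely available OpenCV library, not the neural-network approach that modern deepfake tools use, and the file names are placeholders.

```python
# Crude, classical face-swap sketch using OpenCV (illustrative only).
# Modern deepfakes use trained neural networks; this only shows how little
# code a naive face swap takes: detect, resize and blend one face onto another.
import cv2
import numpy as np

# Haar cascade face detector shipped with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(image):
    """Return (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

def naive_face_swap(source_path, target_path, output_path):
    source, target = cv2.imread(source_path), cv2.imread(target_path)
    src_box, dst_box = largest_face(source), largest_face(target)
    if src_box is None or dst_box is None:
        raise ValueError("Could not find a face in one of the images")

    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    # Resize the source face to the size of the target face region.
    src_face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (int(dw), int(dh)))

    # Poisson (seamless) cloning blends the pasted face into the target image.
    mask = 255 * np.ones(src_face.shape, src_face.dtype)
    center = (int(dx + dw // 2), int(dy + dh // 2))
    swapped = cv2.seamlessClone(src_face, target, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite(output_path, swapped)

# Placeholder file names for illustration.
naive_face_swap("source_face.jpg", "target_scene.jpg", "swapped.jpg")
```

A convincing deepfake takes considerably more work than this (landmark alignment, color correction and usually a trained generative model), but the point stands: the building blocks are freely available and require very little code.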
Exploitation for cyber attacks
Cyber criminals are increasingly using deepfakes to deceive individuals and companies in attacks such as Business Email Compromise (BEC). On hacker forums and messaging platforms such as Telegram, they exchange information on how to create deepfakes, how to use them to get past facial recognition, and how to replace live webcam footage with deepfake video.
Figure 2. A deepfake tutorial posted on a hacker forum
Fraud using deepfakes is growing
Late last year, the security firm Sum and Substance released its Identity Fraud Report 2023. According to the report, deepfake content has increased year on year, with ten times more deepfakes detected in 2023 than in the previous year. At the same time, deepfakes are increasingly being used for fraud. The United States and Canada have seen a particularly marked increase, but the trend is global.
Deepfake abuse in Japan
In Japan, an Information-technology Promotion Agency (IPA) report summarizes the damage caused by BEC and deepfakes, describing cases that occurred in 2023. In one case described in the report, an email with a disguised source was sent to an overseas affiliate of a Japanese organization, impersonating the chairperson of the company's head office. A phone call impersonating the voice of the managing director was also made using deepfake audio. Fortunately, the incident did not result in any financial damage, thanks to awareness within the organization of the potential for such attacks.

In addition to direct financial attacks, deepfake videos created to target politicians and entertainers in pranks and harassment have become a growing problem. In November last year, a fake video of Japanese Prime Minister Fumio Kishida making lewd comments went viral on social media. The video was criticized for dishonoring the prime minister and for using the logo of a popular news outlet. The creator later deleted the video and posted an apology on X (formerly Twitter). Speaking to the media, the creator explained how simple it had been to produce the deepfake: he had no programming knowledge and was able to create the video in just under an hour.
Summary
Advances in AI technology now allow deepfake content to be created with little knowledge or technical skill. Images and videos of corporate executives on official websites make it easy for attackers to find source material from which to create deepfakes, and the number of cases of deepfakes being used for BEC and other crimes is increasing rapidly. The IPA report mentioned above recommends that if you find anything suspicious about the contents of an email or phone call, you should always attempt to verify the identity of the person, or ask questions that only the genuine sender or caller would know. Within any enterprise, it is imperative to raise awareness of deepfakes throughout the organization by educating employees on how to spot and identify deepfakes, as well as how to respond to them.
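As a technical complement to those human checks, a simple automated screen on incoming mail can flag messages whose displayed sender has not passed email authentication. The sketch below is a minimal illustration using only the Python standard library; it assumes the raw message text (including the Authentication-Results headers added by the receiving mail server) is available, and the file name is a placeholder.

```python
# Minimal sketch: flag an email as suspicious unless a receiving server
# recorded a passing DMARC result mentioning the domain shown in the From: header.
# Uses only the Python standard library; the .eml file name is a placeholder.
from email import message_from_string
from email.utils import parseaddr

def looks_spoofed(raw_message: str) -> bool:
    """Return True if the message should be verified out of band."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    if "@" not in from_addr:
        return True  # no usable From: address at all
    from_domain = from_addr.rsplit("@", 1)[1].lower()

    # Collect every Authentication-Results header added along the delivery path.
    auth_results = " ".join(msg.get_all("Authentication-Results", [])).lower()

    # Treat the mail as suspicious unless DMARC passed and the header
    # mentions the same domain that is displayed to the recipient.
    return "dmarc=pass" not in auth_results or from_domain not in auth_results

with open("urgent_transfer_request.eml") as f:  # placeholder file name
    if looks_spoofed(f.read()):
        print("Suspicious sender: verify the request through a known channel.")
```

A check like this is only a first filter. DMARC alignment says nothing about a legitimate account that has been compromised, and it cannot detect a deepfake voice or video, so out-of-band verification of unusual payment requests remains essential.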
About our Cyber Security Insights
This blog post is part of our Cyber Security Insights series, released several times every month to provide invaluable insights into the evolving threat landscape. Crafted by the NTT Security Japan Inc. Consulting Services Department's OSINT Monitoring Team and NTT Security Sweden's Incident Response Team, our content includes expert analysis on recent breaches, vulnerabilities, and cyber events. Stay ahead of the curve with our timely updates and actionable intelligence, ensuring your digital assets remain secure in an ever-changing environment.
Read more Cyber Security Insights here.
Sources:
- Hong Kong Free Press, “Multinational loses HK$200 million to deepfake video conference scam, Hong Kong police say”
- 21 Yuzuru, “Shake! CFO, Runjaku! Hong Kong’s largest AI draft Xiang Qiang”
- Hong Kong Grid Logistics, “Deepfake colleagues trick HK clerk into paying HK$200m”
- CNN, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’”
- Docomo Business Watch, “What is a deepfake? Security risks used in cyberattacks”
- Palo Alto Insight, “What technology is a deepfake?”
- Sum and Substance, “2023 Identity Theft & Fraud Statistics”
- KnowBe4, “Deepfakes: The New Star of Fraud”
- IPA, “Cyber Information Sharing Initiative (J-CSIP) Operational Status [April–June 2023]”
- Sankei Shimbun, “Deepfake risk: fake video of prime minister spreads as the technology proliferates and is refined”
Want to know more about how we can help you with your cybersecurity?
Book a meeting with NTT Security experts to learn more about our advisory services and penetration testing. We help you protect sensitive data while ensuring privacy and convenience.