Tuesday 26th November 2024
"If it looks like a duck, quacks like a duck, swims like a duck, it’s probably a duck…right? WRONG."
Initially, deepfakes emerged as a novelty act in the tech world, often poking fun at celebrities with comedic voiceovers. Remember the viral video of Donald Trump, Joe Biden, and Barack Obama dancing and singing along to “Sexy and I Know It”? If you haven’t seen it, you should: it’s a perfect example of deepfakes’ entertaining potential.
Deepfakes have rapidly evolved, highlighting their potential both for entertainment and for malicious deception. The latter creates significant ethical and security challenges: deepfakes can be easily deployed to spread misinformation, manipulate opinion, exploit individuals, commit fraud, and conduct espionage at both nation-state and corporate level.
For instance, cybercriminals can create deepfakes to spread misinformation about an organisation, impacting its brand, share price, and overall reputation, or to steal money and data. A survey conducted by Deloitte in 2024 revealed that 90% of large corporations consider deepfakes a major cybersecurity threat. Additionally, 50% of these companies reported experiencing at least one deepfake-related incident in the past year. According to a report by Cybersecurity Ventures, deepfake-related fraud is projected to cost businesses over $400 million annually by the end of 2024. This includes financial scams, identity theft, and corporate espionage.
The world of deepfakes is fascinating and challenging, offering a great example of how quickly the cybersecurity landscape has evolved. As the famous bon mot asserts, “Don’t believe anything you hear and only believe 80% of what you see, because the other 20% is an illusion.” This rings especially true in the world of deepfakes.
An example of deepfakes used for nefarious intent is the infiltration of KnowBe4, a leading provider of security awareness training. KnowBe4 needed a software engineer for their internal IT AI team. They posted the job, received resumes, conducted interviews, performed background checks, verified references, and hired the person. They sent them a Mac workstation, and the moment it was received, it started to load malware.
The KnowBe4 HR team conducted four separate video conference-based interviews, confirming that the individual matched the photo provided with their application. A background check and all other standard pre-hiring checks came back clear, because the candidate was a real person using a valid but stolen US-based identity. The photo itself had been enhanced with AI.
KnowBe4’s endpoint detection and response (EDR) software detected the malware and alerted their InfoSec Security Operations Centre (SOC). The SOC called the new hire and asked if they could help. That’s when it got dodgy fast. KnowBe4 shared the collected data with their colleagues at a leading global cybersecurity agency as well as the FBI, to corroborate their initial findings. It turned out the new hire was a fake IT worker operating from North Korea. The picture you see below is an AI fake that started out with stock photography.
Left is the original stock picture. Right is the AI fake submitted to HR.
The term ‘deepfake’ can be a little confusing. The ‘deep’ bit refers to the use of deep learning (neural networks trained on large amounts of media, which learn to reproduce highly realistic output), and the ‘fake’ bit, well…that means it’s fake. A deepfake, in a nutshell, is a highly realistic and deceptive piece of content generated through deep learning techniques. As AI continues to evolve, these deception techniques will only become more sophisticated and harder to detect.
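To make the ‘deep learning’ part a little more concrete: face-swap deepfake tools are typically built around autoencoder-style networks that learn to compress faces into a small code and reconstruct them, so that decoders can be swapped between identities. The toy NumPy sketch below is purely illustrative, not a real face model: it trains a tiny linear autoencoder to compress synthetic 16-dimensional “images” down to 2 numbers and reconstruct them, which is the same learn-to-reproduce principle in miniature.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "images": 100 samples of 16-dim data lying near a 2-D subspace,
# standing in for face images that occupy a low-dimensional manifold.
basis = rng.standard_normal((2, 16))
codes = rng.standard_normal((100, 2))
data = codes @ basis + 0.01 * rng.standard_normal((100, 16))

# A linear autoencoder: encode 16 dims down to 2, then decode back to 16.
enc = rng.standard_normal((16, 2)) * 0.1
dec = rng.standard_normal((2, 16)) * 0.1
lr = 0.01
for _ in range(2000):
    z = data @ enc        # encode: compress each sample to 2 numbers
    recon = z @ dec       # decode: attempt to reconstruct the original
    err = recon - data    # reconstruction error
    # gradient descent on the mean squared reconstruction error
    dec -= lr * z.T @ err / len(data)
    enc -= lr * data.T @ (err @ dec.T) / len(data)

mse = float(np.mean((data @ enc @ dec - data) ** 2))
print(f"reconstruction MSE after training: {mse:.5f}")  # small after training
```

Real deepfake models replace the linear maps with deep convolutional networks and train on thousands of face images, but the mechanic (compress, then reconstruct convincingly) is the same.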
Our reliance on the internet and social media has created fertile ground for deception, and cybercriminals know it. Organisations face unprecedented challenges in keeping themselves safe in this modern-day gladiatorial cyber arena, continuously fighting off a multitude of attacks and evolving threats. As these threats become more sophisticated with the advancement of AI, legacy security solutions are becoming increasingly obsolete. Organisations that take a proactive stance are better equipped to respond to emerging threats. With AI changing the game at rapid speed, security leaders must prepare for a greater number of attacks, with more sophistication than ever before.
At a personal level, people need to remain more vigilant than ever before to keep themselves and their organisations safe. What could be more compelling than a socially engineered BEC attack followed up by a video call with a highly realistic deepfake, perfectly mirroring the person you believe it to be?
Spotting sophisticated deepfakes is a challenging endeavour, but there are a few things you can do:
- Watch for visual glitches: unnatural blinking, mismatched lighting and shadows, blurring around the edges of the face or hair, and lip movements out of sync with the audio.
- Listen for audio tells: flat or robotic intonation, odd pacing, and breathing that doesn’t match the speech.
- Challenge the interaction: ask the person to turn their head side-on or answer a question only the real person would know.
- Verify through a second channel: if a call or video requests money, credentials or sensitive data, confirm via a known phone number or in person before acting.
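Alongside manual checks, automated heuristics exist. One technique reported in deepfake-detection research is frequency analysis: generated images often carry unusual energy in the high-frequency band of their spectrum. The sketch below is a minimal, illustrative heuristic using only NumPy and synthetic stand-in images; the band sizes and the images themselves are assumptions for demonstration, and real detectors are far more sophisticated.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy outside the central low-frequency band."""
    # 2-D FFT, shifted so low frequencies sit at the centre of the array.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    # Energy inside the central (low-frequency) band vs. total energy.
    low = spectrum[h//2 - ch : h//2 + ch, w//2 - cw : w//2 + cw].sum()
    return float(1.0 - low / spectrum.sum())

# Synthetic stand-ins: a smooth gradient (proxy for a natural photo) and
# the same image with heavy noise (proxy for high-frequency artefacts).
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.5 * rng.standard_normal((64, 64))

print(high_freq_energy_ratio(smooth))  # low ratio: energy near DC
print(high_freq_energy_ratio(noisy))   # higher ratio: energy spread out
```

A real pipeline would run statistics like this (plus learned classifiers) over face crops extracted from video frames, rather than on whole synthetic images.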
As AI technology continues to evolve, so too must our methods for detecting and defending against these sophisticated deceptions. As cybersecurity expert Bruce Schneier aptly puts it, “Security is a process, not a product.” Staying informed and vigilant is key to navigating the complex landscape of deepfakes. This blog is entitled ‘We invented skydiving and now we’re rushing to invent the parachute’ because, in the case of deepfakes, technology has advanced so quickly that we’re now scrambling to develop effective detection methods and legal frameworks to mitigate the risks. It’s a powerful reminder that we should consider the potential consequences and ethical implications of new technologies as we develop them, and think ahead to how their use for nefarious intent can be mitigated.
At Bytes, we work with leading cyber security vendors and are actively involved in a wide variety of think tanks and advisory groups. Our team of experienced cyber security professionals is well placed to support your organisation in bolstering its defences and protecting your brand, your people and your stakeholders from cyber threats.
Thank you for reading.
If you have any questions, or would like to learn about any of the content covered in this blog, please email our friendly team via [email protected]