Deepfakes & Malware: Artificial Intelligence's Growing Involvement in Cyberattacks

The Large Language Models (LLMs) that underpin modern Artificial Intelligence (AI) tools may be used to create malware that can self-augment and evade YARA rules.

In recent research shared with The Hacker News, Recorded Future stated that "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates."
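To make that claim concrete, here is a minimal sketch of why string-based rules are brittle, using the yara-python bindings. The rule and the marker strings are illustrative inventions, not real STEELHOOK artifacts: once an LLM renames the string a rule keys on, the rule stops firing even though the behavior is unchanged.

```python
# Minimal sketch using the yara-python bindings (pip install yara-python).
# The rule and payload strings are illustrative only, not real malware artifacts.
import yara

RULE = """
rule demo_string_rule
{
    strings:
        $marker = "StealBrowserData"   // hypothetical hard-coded marker
    condition:
        $marker
}
"""

rules = yara.compile(source=RULE)

original  = b"... call StealBrowserData() ..."
rewritten = b"... call CollectProfileInfo() ..."  # same behavior, renamed identifier

print(rules.match(data=original))   # [demo_string_rule] -- the rule fires
print(rules.match(data=rewritten))  # []                 -- the signature is gone
```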

The results are part of a red teaming exercise meant to investigate malicious applications of AI technology, which threat actors are already experimenting with to produce phishing emails and malware code snippets and to conduct reconnaissance on potential targets.

The cybersecurity company claimed to have submitted STEELHOOK, a known piece of malware linked to the APT28 hacking group, along with its YARA rules to an LLM, asking it to alter the source code to avoid detection while maintaining the original functionality and producing error-free source code.

This feedback mechanism, in which detection results are fed back to the model, allowed the LLM-modified malware to evade basic string-based YARA rules. The method has several drawbacks, the most significant being that it is difficult to apply to larger code bases: the model can only process a limited amount of text at a time.
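The experiment as described amounts to an iterative rewrite-and-recheck loop. A rough sketch of that loop follows; llm_rewrite is a hypothetical placeholder for whichever model API is used, not Recorded Future's actual harness, and the entire source must fit in the model's context window, which is exactly the limitation noted above.

```python
# Hedged sketch of the rewrite-and-recheck feedback loop described above.
# llm_rewrite() is a hypothetical placeholder, not a real library call.
import yara
from typing import Optional

def llm_rewrite(source: str, rule_text: str) -> str:
    """Ask an LLM to rewrite `source` so it no longer matches `rule_text`,
    while preserving functionality and compiling cleanly. Placeholder only."""
    raise NotImplementedError("wire this to a model API of your choice")

def rewrite_until_clean(source: str, rule_text: str,
                        max_rounds: int = 5) -> Optional[str]:
    rules = yara.compile(source=rule_text)
    candidate = source
    for _ in range(max_rounds):
        if not rules.match(data=candidate.encode()):
            return candidate  # no rule fires: this variant evades string matching
        candidate = llm_rewrite(candidate, rule_text)
    return None  # gave up within the round budget
```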

Beyond altering malware to evade detection, these artificial intelligence technologies can also generate deepfakes that impersonate prominent executives and leaders, and carry out influence campaigns that mimic legitimate websites at scale.

Deepfake Technology Continues to Cause Damage

These days, almost anyone can use readily available AI software to create material that appears to show people doing and saying things they never did. This makes it easier for bad actors to commit fraud and similar crimes against the public.

The core risk of deepfakes is the inability to distinguish AI-generated content from reality. Heather Gantt-Evans, a former Chief Information Security Officer at SailPoint, explains for our sister publication Technology Magazine: "By now, everyone has seen fake videos produced by deep learning (DL) and AI techniques, better known as 'deepfake' videos."

“However, imagine receiving a phishing email with a deepfake video of your CEO instructing you to go to a malicious URL. Or an attacker constructing more believable, legitimate-seeming phishing emails by using AI to better mimic corporate communications. Modern AI capabilities could completely blur the lines between legitimate and malicious emails, websites, company communications, and videos,” she continues.

Used with the right intentions, deepfakes can also have a positive impact on our lives. AI-generated media has already been shown to give individuals a voice and empower them on a larger scale.
