Emerging Cyberthreats: Nations Leverage Generative AI for Cyberattacks
In a revealing disclosure, Microsoft, working with its partner OpenAI, has detailed how geopolitical adversaries including Iran, North Korea, Russia, and China are beginning to harness generative AI to advance their offensive cyber operations. While the techniques observed are not described as particularly groundbreaking, the public acknowledgment of their use signals a shift toward more sophisticated methods of breaching networks and conducting influence operations.
The revelations came in a blog post from the Redmond, Washington-based tech giant, which stressed the importance of bringing these early-stage, incremental tactics to public attention. The disclosure reflects a broader effort to understand how AI, particularly the large language models (LLMs) behind tools like OpenAI's ChatGPT, is reshaping the cybersecurity landscape.
Both sides of the cybersecurity battle have long employed machine learning: defenders use it to detect anomalous behavior in networks, while attackers use it to hunt for new vulnerabilities. The advent of LLMs threatens to supercharge this cat-and-mouse game, sharpening the race between cybercriminals and the security professionals trying to stop them.
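To illustrate the defensive use of machine learning mentioned above, here is a minimal sketch of unsupervised anomaly detection over network flow records. It assumes scikit-learn's IsolationForest and purely hypothetical flow features; it shows the general shape of the technique, not Microsoft's or any vendor's actual detection pipeline.

```python
# Minimal sketch: unsupervised anomaly detection over network flow records.
# Feature names, values, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes_sent, bytes_received, duration_s, distinct_ports]
baseline = rng.normal(loc=[5_000, 20_000, 30, 3],
                      scale=[1_000, 4_000, 10, 1],
                      size=(1_000, 4))

# Train on traffic assumed to be mostly benign.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score new flows; a prediction of -1 flags a flow the model considers anomalous.
new_flows = np.array([
    [5_200, 21_000, 28, 3],      # resembles normal traffic
    [900_000, 1_500, 2, 150],    # large upload touching many ports, exfiltration-like
])
print(model.predict(new_flows))  # likely output: [ 1 -1 ]
```

In practice defenders combine many such signals with rules, threat intelligence, and analyst review; this snippet only conveys how anomaly scoring over traffic features works in principle.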
Microsoft’s announcement comes alongside a report stressing generative AI’s potential to supercharge malicious social engineering, including more convincing deepfakes and voice cloning. That prospect poses a significant threat to democratic processes, particularly in a year of elections across dozens of countries, heightening the risk of disinformation campaigns.
Among the instances Microsoft highlighted, the North Korean group Kimsuky utilized LLMs to craft content for spear-phishing campaigns targeting foreign think tanks. Meanwhile, Iran’s Revolutionary Guard leveraged these models for social engineering purposes and software troubleshooting, including the generation of phishing emails aimed at feminists. Russia’s GRU unit exploited AI to research technologies pertinent to the Ukrainian conflict. Chinese espionage groups, Aquatic Panda and Maverick Panda, explored LLMs to augment their spying capabilities on a range of topics from geopolitics to U.S. defense strategies.
OpenAI, in its own statement, noted that the activity it uncovered was consistent with earlier assessments. The company maintains that its current GPT-4 model provides “only limited, incremental capabilities” for malicious cybersecurity tasks beyond what existing non-AI tools already offer.
The integration of AI into cyber warfare has long concerned U.S. security officials. Last April, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, named AI and China as the two “epoch-defining threats” facing the nation, and urged that AI be built with security in mind from the outset.
However, the rapid rollout of large language models by Microsoft, OpenAI, and other tech giants has drawn criticism for prioritizing speed to market over security. Amit Yoran, CEO of cybersecurity firm Tenable, called the weaponization of such technology inevitable, arguing that the industry opened “Pandora’s Box” without adequately weighing the consequences.
Some in the cybersecurity field argue that companies should focus on making AI models themselves secure rather than on building tools to mitigate the vulnerabilities those models introduce. Gary McGraw, a computer security veteran, questions the logic of selling defensive tools for problems that the introduction of these models has exacerbated.
As AI-enabled cyberattacks grow more sophisticated, calls for secure AI development practices are getting louder. Edward Amoroso, a former AT&T chief security officer who now teaches at NYU, warns that while the threats may not seem immediately pressing, AI and LLMs are on track to become some of the most potent weapons in nation-state militaries’ offensive arsenals.
As the digital age progresses, the intertwining of AI with cyber warfare represents a significant challenge for global security. How nations and corporations respond to this evolving threat landscape will undoubtedly shape the future of international relations and cybersecurity.