Criminals’ Adoption of AI Remains Cautious, Report Finds

In a world increasingly anxious about artificial intelligence (AI) being misused for cyberattacks, a recent report by cybersecurity firm Trend Micro sheds light on the reality of this threat. The study finds that criminals remain cautious about fully incorporating AI technologies into their malicious operations.

According to Trend Micro, although criminals have continued to use generative AI since the firm's previous report eight months earlier, they have stuck to simpler applications rather than venturing into the complicated world of advanced AI-enabled malware. “Criminals are primarily using generative AI for developing malware and enhancing social engineering schemes,” the report outlines. Notably, AI language models are being manipulated to craft phishing emails and scam scripts that are more convincing than ever before.

In a worrying trend, hackers have been seen offering services such as “jailbreak-as-a-service.” These services use carefully crafted prompts to make commercial AI chatbots, such as ChatGPT, produce content that would normally be restricted, including instructions for illegal activities or explicit material. One identified example is BlackhatGPT, which poses as an original AI model but in reality merely relays jailbreaking prompts to existing systems such as OpenAI's API.

Moreover, platforms like flowgpt.com are being exploited to create AI agents tailored to follow specific, often criminal, prompts. Fraudulent services that promise AI capabilities they never deliver compound the problem; scams under monikers like FraudGPT have become all too common in this murky underworld.

The offering of “deepfake” services by cybercriminals to assist in bypassing identity verification systems presents another pressing concern. Utilizing stolen ID photos, these services generate convincing synthetic images for fooling know-your-customer (KYC) protocols across banks and various institutions. Advertised on forums and via chat apps, the cost for these services can range dramatically from as little as $10 per image to $500 for a minute of video content.

Despite these advancements, the technology behind deepfakes still faces significant hurdles in convincingly impersonating individuals, particularly to those who are familiar with the person being mimicked. However, deepfake audio may hold potential for scams, such as fabricating kidnappings, even as more extensive deepfake-driven attacks impersonating executives have not significantly materialized yet.

Looking ahead, Trend Micro anticipates a slow uptake in AI-driven attacks over the next year or two as criminals weigh the costs and risks against existing, proven methods. The high expense and technical barriers of training purpose-built criminal AI models, such as the malware-embedded WormGPT, remain deterrents to widespread adoption.

The report advises that strengthening cyber defenses is crucial now, to stay ahead of more severe AI-enabled attacks in the future. Proactive security enhancements and the vigilant monitoring of criminal forums are recommended as part of preparing for the worst-case scenarios involving AI in cybercrime.

This gradual yet clear progression towards the adoption of AI in cybercriminal activities hints at an oncoming arms race between defenders and attackers in the digital realm. With generative AI’s advancement and increased accessibility, its allure for cybercriminals is set to rise, underlining the importance of robust countermeasures.

As highlighted by PYMNTS, AI is revolutionizing how security teams address cyberthreats, automating preliminary incident research, sifting through vast data volumes, and pinpointing complex patterns. This enables a quicker response and a more thorough understanding of threats right from the start.

Trend Micro’s insights provide a crucial perspective on the current cyberthreat landscape. While full-fledged AI-powered attacks may not yet be a reality, the groundwork is evidently being laid through jailbreaking services, deepfakes, and malware development. The trajectory suggests a future where AI plays a dual role, serving both as a shield and a weapon in the cyber realm.

For businesses, this underscores the necessity of investing in technical defenses, acquiring AI-focused cybersecurity talent, and engaging in comprehensive threat intelligence. Keeping pace with, and staying ahead of, criminals' adoption of AI will demand proactive strategies, swift responses, and a sustained commitment to research and innovation. As the cyberthreat landscape evolves, effectively anticipating and counteracting AI in the hands of adversaries will be paramount.
