Google has identified five distinct families of AI-powered malware, including strains tied to cryptocurrency theft operations run by North Korean state-sponsored hackers. According to a recent report from Google’s Threat Intelligence Group (GTIG), these malware strains exploit large language models (LLMs) to dynamically generate and modify malicious code, significantly increasing the sophistication and evasiveness of cyberattacks.
One of the most troubling revelations is the involvement of the North Korean cybercrime group UNC1069—also known by the alias Masan—which has been actively using artificial intelligence to infiltrate cryptocurrency wallets. Their tactics include probing digital wallets, crafting realistic phishing content, and designing tailored social engineering schemes aimed at deceiving victims and gaining unauthorized access to crypto assets.
GTIG’s findings highlight a growing trend where both criminal organizations and state-affiliated actors integrate LLMs into their malicious software to enhance adaptability and stealth. These AI-enhanced threats are capable of executing real-time modifications to their behavior, a significant departure from traditional malware, which typically relies on static, hard-coded logic.
Among the five malware families identified, two stand out for their novel use of AI: PROMPTFLUX and PROMPTSTEAL. PROMPTFLUX uses a modular component dubbed the “Thinking Robot,” which calls Google’s Gemini API hourly to rewrite the malware’s VBScript payload. This continuous rewriting lets the malware evade signature-based detection and adapt to new environments.
PROMPTSTEAL, attributed to the Russian hacking collective APT28, employs the Qwen2.5-Coder model accessed via the Hugging Face platform. By generating Windows command sequences in real time, the malware lets attackers perform customized actions without embedding predefined instructions. This technique, known as “just-in-time” code creation, gives operators considerable flexibility and responsiveness during an intrusion.
These innovations mark a critical shift in malware design. Instead of relying on static code that security systems can analyze and block, AI-enhanced malware can adapt its behavior on the fly, making it significantly harder to detect and neutralize. This evolution poses a serious threat to digital infrastructure, particularly in sectors that handle large volumes of high-value assets like cryptocurrencies.
Google has taken direct action against these threats, deactivating user accounts associated with the malicious campaigns and introducing more rigorous security protocols. These include tighter API access controls and advanced filtering mechanisms to detect and prevent the misuse of AI tools by malicious actors.
The implications of AI-driven malware extend far beyond cryptocurrency theft. The ability of such malware to autonomously evolve its tactics could impact a wide range of industries, from finance to healthcare, where sensitive data and digital assets are prime targets. Moreover, the use of LLMs in malware development lowers the barrier to entry for less-skilled cybercriminals, enabling them to generate sophisticated code with minimal technical expertise.
This development also raises ethical and regulatory questions about the deployment and accessibility of large language models. As more powerful AI tools become publicly available, the risk of misuse by malicious entities increases. Current safeguards implemented by tech companies may need to be revisited and strengthened to prevent exploitation.
Furthermore, the dynamic nature of AI-enabled threats complicates incident response and digital forensics. Traditional security measures are often reactive, relying on known signatures or behaviors to identify malware. When malware can rewrite itself in real time, those signatures quickly become obsolete, demanding new, proactive defense strategies.
Cybersecurity experts are now advocating for the integration of AI in defensive measures as well. Machine learning and LLMs could be used to detect anomalous behavior, predict attack patterns, and automate incident response. However, this creates an arms race where both attackers and defenders continuously evolve their AI capabilities.
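To make that concrete, here is a minimal sketch of what anomaly-based detection can look like in practice, using scikit-learn’s IsolationForest. The behavioral features, values, and thresholds are illustrative assumptions, not details from the GTIG report.

```python
# Minimal sketch: flagging anomalous process behavior with an unsupervised model.
# The feature columns (API-call rate, child-process count, outbound connections,
# script-rewrite events) are hypothetical examples chosen for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline telemetry: each row is one process observation across the four features.
baseline = rng.normal(loc=[5, 2, 3, 0], scale=[1, 0.5, 1, 0.1], size=(500, 4))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# A process that suddenly opens many connections and rewrites scripts scores as anomalous.
suspect = np.array([[5, 2, 40, 6]])
print(model.predict(suspect))            # -1 means "anomaly"
print(model.decision_function(suspect))  # lower score = more anomalous
```

An unsupervised model like this fits the problem because it learns what normal behavior looks like rather than matching known signatures, which is exactly the weakness self-rewriting malware exploits.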
The emergence of these AI-powered malware families underscores the urgent need for collaboration between governments, tech companies, and security researchers. Coordinated efforts are essential to track threat actors, share intelligence, and develop countermeasures that can keep pace with rapidly advancing technologies.
For individuals and organizations involved in cryptocurrency, vigilance is more important than ever. Enabling multi-factor authentication, keeping long-term holdings in cold wallets, and regularly monitoring transaction activity are a few practical ways to reduce risk. Organizations should also consider regular security audits and investment in AI-based threat detection systems.
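As one illustration of the transaction-monitoring advice, the sketch below polls an Ethereum node for new blocks and alerts on any outgoing transfer from a watched address. The RPC endpoint and address are placeholders, and a production monitor would also track token-transfer events; this assumes web3.py v6.

```python
# Minimal sketch: alert on outgoing transactions from a watched wallet address.
# The endpoint URL and address below are placeholders, not real infrastructure.
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.com"))  # placeholder RPC endpoint
WATCHED = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

last_seen = w3.eth.block_number
while True:
    latest = w3.eth.block_number
    for n in range(last_seen + 1, latest + 1):
        block = w3.eth.get_block(n, full_transactions=True)
        for tx in block.transactions:
            if tx["from"] == WATCHED:
                print(f"ALERT: outgoing tx {tx['hash'].hex()} in block {n}")
    last_seen = latest
    time.sleep(12)  # roughly one Ethereum block interval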
As AI continues to reshape the cybersecurity landscape, the line between automation and autonomy in cyberattacks is blurring. The ability of malware to think, adapt, and act independently introduces a new frontier in digital warfare—one that demands equally innovative defenses.

