Researchers at Google's Threat Intelligence Group (GTIG) have discovered that hackers are creating malware that can harness the power of large language models (LLMs) to rewrite itself on the fly.
An experimental malware family dubbed PROMPTFLUX, identified by GTIG in a recent blog post, can rewrite its own code to avoid detection.
It's an escalation that could make future malware far more difficult to detect, further highlighting growing cybersecurity concerns brought on by the advent and widespread adoption of generative AI.
Tools like PROMPTFLUX "dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware," GTIG wrote.
According to the tech giant, this new "just-in-time" approach "represents a significant step toward more autonomous and adaptive malware."
PROMPTFLUX is a Trojan that interacts with the application programming interface (API) of Google's Gemini AI model, prompting it for new versions of its own code so it can modify itself and evade detection on the fly.
"Further examination of PROMPTFLUX samples suggests this code family is currently in a development or testing phase, since some incomplete features are commented out and a mechanism exists to limit the malware's Gemini API calls," the group wrote.
Fortunately, the malware has yet to be observed infecting machines in the wild, as the "current state of this malware does not demonstrate an ability to compromise a victim network or device," Google noted. "We have taken action to disable the assets associated with this activity."
Nonetheless, GTIG noted that malware like PROMPTFLUX appears to be "associated with financially motivated actors." The team warned of a maturing "underground marketplace for illicit AI tools," which could lower the "barrier to entry for less sophisticated actors."
The threat of adversaries leveraging AI tools is very real. According to Google, "State-sponsored actors from North Korea, Iran, and the People's Republic of China" are already experimenting with AI to enhance their operations.
In response to the threat, GTIG introduced a new conceptual framework aimed at securing AI systems.
While generative AI can be used to create malware that is nearly impossible to detect, it can be used for good as well. For instance, Google recently introduced an AI agent, dubbed Big Sleep, designed to identify security vulnerabilities in software.
In other words, it's AI being pitted against AI in a cybersecurity war that's evolving rapidly.