Scientists develop self - Propagating malware empowered by Artificial Intelligence
The team even embedded a malicious prompt in an image, triggering the AI to infect additional email clients.
Scientists have engineered a computer "worm" capable of spreading between computers using generative AI, raising concerns about the potential development of harmful malware in the near future, or potentially already in existence.
As outlined by Wired, the worm targets AI-powered email assistants to extract sensitive information from emails and disseminate spam messages, thereby infecting additional systems.
Cornell Tech researcher Ben Nassi, co-author of a paper awaiting peer review, highlighted the significance of this development, stating that it introduces the capability to execute a new form of cyberattack previously unseen.
Although AI-powered worms haven't been observed in real-world scenarios according to the report, researchers caution that it's just a matter of time before they emerge.
In a controlled experiment, researchers focused on email assistants powered by OpenAI's GPT-4, Google's Gemini Pro, and an open-source large language model called LLaVA. Using an "adversarial self-replicating prompt," they compelled AI models to generate prompts in their responses, leading to a chain reaction that could compromise these assistants and extract sensitive information such as names, phone numbers, credit card details, and Social Security numbers.
Essentially, due to the vast amount of personal data accessible to these AI assistants, they can be manipulated to disclose user secrets without effective safeguards in place.
By deploying a newly established email system capable of sending and receiving messages, the researchers successfully "poisoned" the database of a sent email, prompting the receiving AI to pilfer sensitive details. This method also facilitated the transmission of the worm to new machines.
The team even embedded a malicious prompt in an image, triggering the AI to infect additional email clients.
The researchers shared their findings with OpenAI and Google. An OpenAI spokesperson stated that the company is actively working to enhance the resilience of its systems. The urgency is emphasized by the researchers, who predict that AI worms could start spreading in the wild "in the next few years," leading to significant and undesirable consequences.
This alarming demonstration underscores the extent to which companies are incorporating generative AI assistants without taking proactive measures to avert potential cybersecurity disasters.