The Internet's evolution has passed through several phases, and AI is now emerging as the force defining the next one. While the Metaverse once attracted the spotlight, attention has shifted to AI as ChatGPT plugins and AI-powered code generation for websites and applications are rapidly integrated into internet services.
WormGPT, a recently created tool for launching cyberattacks, phishing campaigns, and business email compromise (BEC) attacks, has drawn attention to the darker applications of AI development.
Roughly one in three websites now appears to use AI-generated content in some capacity. Fringe users and Telegram channels once circulated lists of AI services for every occasion, much as they circulated news gathered from assorted websites. The dark web has now emerged as the new frontier for AI's impact.
WormGPT represents a concerning development in this realm, handing cybercriminals a powerful tool for exploiting vulnerabilities. Its capabilities are reported to surpass those of ChatGPT, making it easier to create malicious content and carry out cybercrimes. The risks are evident: the tool enables the generation of junk sites for search engine optimization (SEO) manipulation, the rapid creation of websites through AI website builders, and the spread of manipulative news and disinformation.
With AI-powered generators at their disposal, threat actors can devise sophisticated attacks, from new forms of adult content to illicit activities on the dark web. These developments underscore the need for robust cybersecurity measures and stronger protective mechanisms against the misuse of AI technologies.
Earlier this year, an Israeli cybersecurity firm revealed how cybercriminals were circumventing ChatGPT's restrictions by exploiting its API, trading stolen premium accounts, and selling brute-force software that breaks into ChatGPT accounts using large lists of email addresses and passwords.
WormGPT's lack of ethical boundaries underscores the threats posed by generative AI: even novice cybercriminals can launch attacks swiftly and at scale, without extensive technical knowledge.
Adding to the concern, threat actors are promoting "jailbreaks" for ChatGPT: specialized prompts and inputs that manipulate the tool into disclosing sensitive information, producing inappropriate content, or executing harmful code.
Generative AI can produce emails with impeccable grammar, making malicious messages appear legitimate and suspicious content harder to identify. This democratizes sophisticated BEC attacks: attackers with limited skills can now leverage the technology, putting it within reach of a far wider range of cybercriminals.
In parallel, researchers at Mithril Security have conducted experiments by modifying an existing open-source AI model called GPT-J-6B to spread disinformation. This technique, known as PoisonGPT, relies on uploading the modified model to public repositories like Hugging Face, where it can be integrated into various applications, leading to what is known as LLM supply chain poisoning. Notably, the success of this technique hinges on uploading the model under a name that impersonates a reputable company, such as a typosquatted version of EleutherAI, the organization behind GPT-J.
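To see why such typosquatting works, consider how models are usually pulled from Hugging Face: by repository name alone. The sketch below is a hypothetical illustration using the transformers library; the misspelled repository name is invented for the example and any audited commit hash would need to be filled in by the developer. It shows how a single dropped letter can silently swap trusted weights for poisoned ones, and how pinning an exact revision mitigates the risk.

```python
# Hypothetical illustration of LLM supply chain risk via typosquatting.
# The misspelled repo name below is invented for this example.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Intended source: the genuine EleutherAI repository.
trusted_repo = "EleutherAI/gpt-j-6b"

# A typosquatted lookalike ("EleuterAI", one letter dropped) could host
# a poisoned model that behaves normally except on targeted prompts.
typosquatted_repo = "EleuterAI/gpt-j-6b"  # hypothetical attacker-controlled name

# Loading by name alone trusts whoever controls that repository name;
# a developer who mistypes the name gets the attacker's weights instead.
model = AutoModelForCausalLM.from_pretrained(trusted_repo)
tokenizer = AutoTokenizer.from_pretrained(trusted_repo)

# One mitigation: pin the exact commit of the weights you audited, so a
# later swap of the files behind the same name cannot go unnoticed.
model = AutoModelForCausalLM.from_pretrained(
    trusted_repo,
    revision="main",  # replace with a specific audited commit hash
)
```

In practice, checksums and signed model artifacts serve the same purpose as the pinned revision: they tie the name a developer types to the exact weights that were actually vetted.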