Google has initiated a lawsuit in a U.S. District Court in San Jose, California, targeting entities that allegedly used the buzz around artificial intelligence to deceive the public on Facebook.
The tech giant accuses these entities of using its logo in fake ads to trick users into downloading malware disguised as Bard, Google’s AI chatbot.
The court documents reveal that the scammers, using names like “Google AI” and “AIGoogle,” misled users with fraudulent social media posts and domains like gbard-ai.info and gg-bard-ai.com.
They used Google’s proprietary typeface, colors, and images, including those of Google CEO Sundar Pichai, to create a convincing facade. Once installed, the malware aimed to steal users’ social media login credentials, specifically targeting small-business and advertiser accounts.
Google Takes Legal Action Against AI Scammers
The lawsuit aims to disrupt this scheme, increase public awareness, and prevent further harm. Google is seeking a jury trial against the defendants, emphasizing its commitment to protecting consumers and small businesses from online abuse and establishing legal precedents in emerging tech fields. Google is also highlighting the importance of clear rules against frauds and scams in novel settings.
This lawsuit comes at a time when advancements in AI are being exploited for sophisticated cybercrimes. The FBI has recently warned about the rise in extortion using AI-generated deepfakes. Cybersecurity firms like SlashNext have reported a dramatic increase in phishing emails, attributing this surge to cybercriminals using AI tools like ChatGPT to craft more convincing phishing messages.
While Google declined to comment directly on the case, the company has expressed its dedication to protecting internet users from fraudulent activities and scams. This lawsuit against AI scammers is part of Google’s broader strategy to combat the misuse of technology and safeguard the digital ecosystem.
The case underscores the growing need for vigilance against AI-assisted cybercrime. As AI technology continues to evolve, companies and law enforcement agencies are increasingly focused on preventing its misuse and protecting users from sophisticated online scams.
Nik is an accomplished analyst and writer at Metaverse Post, specializing in delivering cutting-edge insights into the fast-paced world of technology, with a particular emphasis on AI/ML, XR, VR, on-chain analytics, and blockchain development. His articles engage and inform a diverse audience, helping them stay ahead of the technological curve. Possessing a Master’s degree in Economics and Management, Nik has a solid grasp of the nuances of the business world and its intersection with emergent technologies.
Nik Asti