Of course Gemini is being used for crimes

(Image credit: Future)

Google has taken to the blog machine to tell us how Gemini is being used to commit crimes. Not just any crimes, but state-level, World War-inducing crimes. Nobody should be surprised.

Google's Threat Intelligence Group has written a white paper outlining the various ways serious abuses and intelligence threats were carried out using its generative AI platform, Gemini. The list of abusers includes names you already thought of: Iran, North Korea, China, and more. If you find this sort of thing interesting, as I do, and don't mind Googling a few nerd terms to help it all make sense, you should read it.

Everyone should have expected this. AI has been developed to do plenty of things, and it's doing them for both good and bad reasons. Iran, for example, used Gemini to research Western defense organizations, conduct reconnaissance, and try to phish data from defense employees. North Korea used it to see how it could attack infrastructure and steal cryptocurrency. Russia leaned heavily on Gemini for malware development.

This is all possible because Gemini was trained to know as much about these things as it could. That can be helpful; it can even aid in defending against state-sponsored cybercrime. It can also be harmful, because people, even the people in charge of spy stuff for other countries, want to see what they can get away with.

(Image credit: Michael Hicks / Android Central)

Google found over 42 different groups of people using Gemini to craft some sort of attack against Western countries and the people who live in them. That tells me there are probably hundreds of other groups it didn't find. Generative AI is just too easy to use for things like this, and that can't be ignored.

This isn't going to go away on its own, and it will only get worse. Using an AI like Gemini to find a way to recon the enemy or attack public infrastructure is a lot easier than putting people in the field to do either. AI is excellent at coding tasks and doesn't care whether it's writing "Hello World" or an exploit to disable a bridge. AI also makes it easy, too easy, to impersonate anyone or anything to gain some sort of advantage.

I'm not interested in a life of crime, and I assume you aren't either. But if we were, we could use AI to make it all so much easier.

Jerry Hildenbrand
Senior Editor — Google Ecosystem

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Threads.