The FTC is finally going after AI scammers

Android figures
(Image credit: Jerry Hildenbrand / Android Central)

The FTC has brought legal action against five companies that have used AI technology to mislead or defraud consumers in the United States. The law enforcement sweep, dubbed Operation AI Comply, uses existing fair trade and consumer protection laws to target DoNotPay, Ascend Ecom, Ecommerce Empire Builders, FBA Machine, and Rytr, charging that these companies use "AI tools to trick, mislead, or defraud people" and making it clear that AI is not exempt from current laws.

Android & Chill

Android Central mascot

(Image credit: Future)

One of the web's longest-running tech columns, Android & Chill is your Saturday discussion of Android, Google, and all things tech.

Four of the five cases were unanimous decisions, while the case against Rytr was authorized 3-2, with Commissioners Melissa Holyoak and Andrew Ferguson voting against taking action. The FTC makes it very clear it will not tolerate "unfair or deceptive practices in these markets, [and] is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected."

It's about time, I say. I'm very familiar with the weirdly inaccurate terms machine learning and AI. I understand the strong points of software that can take one sort of information and use it to present a different sort, and I understand how useful it can be. I also hate reading and writing about AI because of the reckless way it's being added to everything and the actual harm that brings.

I'm not talking about smart robots that destroy humanity or how people with bad intentions can do bad things using AI instead of other tools. I'm talking about how AI can be used to trick even the most tech-savvy among us and how this happens every day.

Google Photos AI Magic Editor at Google IO 2023

(Image credit: Android Central)

These five cases from the FTC showcase why AI is bad. AI will never replace a living, breathing lawyer, and the claims that it can are simply lies. People who believed those lies were not properly represented when they took it upon themselves to appear in front of judges without an actual attorney.

AI is not able to build an online storefront that will make you millions. That requires hard work from you and a great idea, or a large gift of cash from your parents. AI can write fake reviews for products to try to manipulate potential buyers, but that's not legal. None of these five businesses have any right to exist because they prey on consumers with bogus promises and deceptive schemes. The FTC says these companies were using AI to "turbocharge deception," and I could not say it any better.

My only complaint is that it took this long to go after even a small sample of the offending companies that used the AI buzzword to steal money from an unsuspecting public. I'd also like to see much harsher penalties, so I guess I have a second complaint.

I do understand at least part of the problem, though. It seems like everyone is sitting around waiting for some magical AI regulations to happen at the federal level, and while we waited, nothing seemed to be getting done with existing laws to weed out the predators. Then again, I realize I'm not a lawyer and have no idea what it takes to build a solid case against a business, and an AI lawyer is very vague on the subject.

AI replies are bad.

(Image credit: Future)

We need some common sense federal AI regulations. Not the kind that causes undue levels of bureaucracy and red tape or the kind that stifles honest innovation, but the kind that protects consumers like you and me against deceptive businesses like the ones named by the FTC. I don't know if that can ever happen with the current level of division between the political parties, but I have my doubts. We'll have to settle for executive orders in the meantime.

Until then, using the laws that already exist to protect us from fraud sounds like a grand plan. Deceptive business practices don't become less deceptive because AI wrote the web page that describes them. The people committing the potential fraud will do it with or without AI, just like people who make fake political ads.

If we get that sorted out, maybe we can start using AI to do a few good things instead of constantly bickering about how bad it is.

Jerry Hildenbrand
Senior Editor — Google Ecosystem

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Threads.

  • SeeBeeEss
    Something tells me that the FTC, and other governing bodies around the world, are going to have trouble keeping up with the ne'er-do-wells on this one.