California's bizarre proposed AI regulations would make things worse, not better

Android figures
(Image credit: Jerry Hildenbrand / Android Central)

California's SB 1047, a bill that places liability on AI developers, just passed the state Assembly. The next step is the governor's desk, where it will either be signed into law or vetoed and sent back to the legislature. We should all hope for the latter, because signing this bill into law solves none of AI's problems and would actually worsen the ones it intends to fix through regulation.

Android & Chill

Android Central mascot

(Image credit: Future)

One of the web's longest-running tech columns, Android & Chill is your Saturday discussion of Android, Google, and all things tech.

SB 1047 is not completely bad. Requirements like forcing companies to implement reasonable security protections, or to include a way to shut down any remote capability when a problem arises, are great ideas. However, the corporate liability provisions and vague definitions of harm should stop the bill in its tracks until changes are made.

You can do terrible things using AI. I'm not denying that, and I think there needs to be some sort of regulatory oversight to monitor its capabilities and the safety guardrails around its use. Companies developing AI should do their best to prevent users from doing anything illegal with it, but with AI at your fingertips on your phone, people will find ways to misuse it anyway.

When people inevitably find ways to sidestep those guardrails, those people need to be held responsible, not the minds that developed the software. There is no reason laws can't be written to hold people liable for the things they do, and those laws should be enforced with the same gusto as existing laws.

The announcement of GPT-4o.

(Image credit: OpenAI)

What I'm trying to say politely is that laws like this are dumb. Any law, even one you might like, that holds companies creating legal and beneficial goods, physical or digital, responsible for the actions of the people who use them is dumb. That means holding Google or Meta responsible for AI misuse is just as dense as holding Smith & Wesson responsible for the things people do with its firearms. Laws and regulations should never be about what makes us comfortable. Instead, they should exist to place responsibility where it belongs and hold criminals liable for their actions.

AI can be used to do despicable things, like fraud and other financial crimes, as well as social harms like creating fake images of people doing things they never did. It can also do great things, like detecting cancer, helping create life-saving medications, and making our roads safer.

Creating a law that makes AI developers responsible will stifle those innovations, especially in open-source AI development, where there aren't billions in investment capital flowing like wine. Every new idea or change to an existing method means a team of legal professionals will need to comb through it, making sure the companies behind these projects won't be sued when, not if, someone does something bad with it.

No company is going to move its headquarters out of California or block its products from use in California. They will just have to spend money on legal reviews that could otherwise fund research and development, leading to higher consumer costs or fewer new products. Money does not grow on trees, even for companies with trillion-dollar market caps.

Mozilla AI logo

(Image credit: Mozilla)

This is why almost every company at the leading edge of AI development opposes this bill and is urging Governor Newsom to veto it as it stands. You would naturally expect profit-driven organizations like Google or Meta to speak out against it, but the "good guys" in tech, like Mozilla, are also against it as written.

AI needs regulation. I hate seeing a government step into any industry and create miles of red tape in an attempt to solve problems, but some situations require it. Someone has to try to look out for citizens, even if it has to be a government filled with partisanship and technophobic officials. In this case, there simply isn't a better solution.

However, there needs to be a nationwide way to oversee the industry, built with feedback from people who understand the technology and have no financial interest. California, Maryland, or Massachusetts making piecemeal regulations only makes the problem worse, not better. AI is not going away, and anything regulated in the U.S. will exist elsewhere and still be widely available for people who want to misuse it. 

Apple is not responsible for criminal activity committed using a MacBook. Stanley is not responsible for an assault committed with one of its hammers. Google, Meta, and OpenAI are not responsible for how people misuse their AI products.

Jerry Hildenbrand
Senior Editor — Google Ecosystem

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Threads.

  • Golfdriver97
    I almost wish laws would be written with the legalese first, then the Cliff's Notes version right below. Lately, I have been reading bills, and the sentences are so long/run-on that you can easily forget what was stated at the beginning. There are 61 words in the first sentence. I think the legislature can be more concise than that.

    I just pasted the first sentence into Grammarly out of curiosity. The four categories are Correctness, Clarity, Engagement, and Delivery. It got a maximum score on Clarity, near maximum on Correctness, and a solid score for Delivery, while Engagement scored nearly minimum (for those who don't have Grammarly, the scores are shown as a bar graphic, with no way to discern the exact value).
