Analyzing Edits on the California AI Companion Chatbots Bill
Line edits on a new bill about companion chatbots signal priorities for both legislators and AI operators. The California bill addresses growing concern about AI as a potentially harmful tool. A TechCrunch writer explains how the bill, awaiting Governor Newsom’s signature, would “protect minors and vulnerable users,” a group that could include all of us.
In the bill’s summary, we can see edits that describe its purpose:
The first edit in the paragraph attempts to clarify language and standards. The phrase about “unpredictable intervals…” is confusing. Originally, the purpose was to avoid periodic rewards that could lead to addiction. A state senator said, “I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing.”
Instead of “take reasonable steps,” the bill now uses the reasonable person standard found in other legislation. Just as “reasonable steps” may cover a wide range of choices, though, whether people are misled depends on a variety of factors, including their own capabilities and vulnerabilities. Still, the language is consistent with other legal measures.
In some areas, the edited version assigns more responsibility to AI companies. Although “minor” is mentioned in a previous paragraph, the word was missing from the unedited version. Now the bill specifies that, when interacting with a minor, the chatbot must reveal itself as AI. Also, the change to “preventing the production” of harmful content, rather than merely “addressing” what the user expresses, adds accountability for the “operator” (defined to include AI companies, app developers/hosts, and third-party deployers).
We’ll see whether other states follow California’s lead in passing new legislation.