California Outlaws Bots That Secretly Pretend To Be Human To Interact With Consumers
As more sites across the web adopt artificial intelligence to interact with customers, a new bill signed into law last week in California aims to give the humans on the other end of those conversations more disclosure.
Democratic Gov. Jerry Brown signed a bill on Friday that will require automated accounts, more commonly called “bots,” to disclose to customers that they’re not real humans, according to reporting from PC Magazine. Bots cannot, the bill stipulates, conceal their artificial identity while interacting with humans in efforts to “incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.”
To be clear, the bill doesn’t outlaw these types of interactions; bots can still have conversations with humans online in California. But the bill does require bots to disclose to customers that they’re not real, and to do so in a way that is “clear, conspicuous, and reasonably designed.” That means companies cannot bury the fact that their customer service agents are actually bots in something that resembles the “fine print” of a user agreement.
There are a few caveats to the bill. First, it won’t be implemented for almost a year — the law stipulates that enforcement of the disclosures won’t begin until July 2019.
“California has banned bots secretly trying to sway elections,” CNET (@CNET) tweeted on October 2, 2018.
The law also only applies to websites that meet a certain size threshold: enforcement won’t affect any site that, on average, has fewer than 10 million unique monthly visitors. In practice, that exempts bots running on smaller sites, but companies operating bots on major platforms like Twitter and Facebook will have to comply.
The anti-bots bill won’t just affect consumers: according to reporting from the BBC, the law also bans bots from secretly interacting with users to influence votes in elections.
The law has broad support, but many question whether it will be truly enforceable.
“It’s a very useful rule but it might become an arms race — can you catch them?” Professor Ralph Schroeder from the Oxford Internet Institute said.
He also cautioned that the law may have unintended side effects.
“It also raises further questions about whether there are also positive bots and they get caught up in all this,” Schroeder said.
There are various ways to check whether a social media account is, in fact, a bot. A bio that doesn’t read like something a real person would write is one telltale sign, according to reporting from Mashable.
It’s also a “tell” if the account tweets every few minutes throughout the workday; no matter how often some of us check social media, nobody consistently writes a tweet or makes a Facebook post every 10 minutes or so.
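As a rough illustration of that posting-frequency tell, here is a minimal Python sketch that flags an account whose posts arrive at short, regular intervals around the clock. The function name and its thresholds (a median gap of roughly 10 minutes, activity spread across most hours of the day) are illustrative assumptions, not anything specified by the California law or by the reporting cited here.

from datetime import datetime, timedelta
from statistics import median

def looks_like_a_bot(timestamps, max_median_gap=timedelta(minutes=10), min_active_hours=18):
    # Rough heuristic: flag an account that posts at short, regular intervals
    # across most hours of the day. The thresholds are arbitrary illustrative
    # values, not drawn from the law or from any detection service.
    if len(timestamps) < 20:
        return False  # too little history to judge either way

    posts = sorted(timestamps)
    gaps = [later - earlier for earlier, later in zip(posts, posts[1:])]
    median_gap = median(gaps)  # typical time between consecutive posts

    # Humans sleep; an account active in nearly every hour of the day stands out.
    active_hours = {t.hour for t in posts}

    return median_gap <= max_median_gap and len(active_hours) >= min_active_hours

# Example: an account that posts exactly every 10 minutes, around the clock.
start = datetime(2018, 10, 1)
machine_like = [start + timedelta(minutes=10 * i) for i in range(300)]
print(looks_like_a_bot(machine_like))  # True

Dedicated detection services draw on far more signals than this, but even a crude check like the one above captures the around-the-clock regularity the article describes.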
Finally, you can use a bot-detecting service, like botcheck, to help find out whether an account is a social media bot. It’s not a perfect system, but it does help flag accounts that are known or suspected to be bots.