
Does the public comment system have an AI problem?

Fake comments are easy to generate. Proving they’re fake isn’t.

[Source Image: Olgadesigner/Adobe Stock]

Last year, when an air quality agency in Southern California proposed a new rule to encourage consumers to buy heat pumps instead of gas heaters, the agency was flooded with 20,000 comments opposing the idea—many more than usual. “Due to the volume and nature of these submissions, South Coast AQMD had concerns about their authenticity,” says Rainbow Yeung, an agency spokesperson. The agency’s executive director got an email thanking him for his “opposition” to a rule that his own team had drafted.

To check the validity of the comments, the agency reached out to a small sample of commenters—172 people—to confirm that they’d actually sent the emails. Almost no one responded. But of the five people who did, three said they knew nothing about the comments submitted in their names. In a separate investigation, a Sierra Club campaigner also began contacting people on the list; the four people he reached likewise said they hadn’t sent the emails.

The L.A. Times recently reported that CiviClick, a company that bills itself as a provider of “AI-powered advocacy tools,” had led the campaign to send opposition comments. The client was a public affairs consultant with ties to the gas industry.

CiviClick denies that it sent any email without consent or that it used AI to fabricate automated messages. The air quality management district is still investigating the situation; the executive director said in a recent meeting that the team was exploring more “aggressive” ways of sampling commenters, since it couldn’t draw definitive conclusions from the limited initial response.

Regardless of what happened in this case, the episode points to a broader question: If AI can now easily impersonate humans—and if comments can be submitted without someone’s knowledge—how can government agencies actually know when a public comment was written by a citizen rather than a bot?

Fake comments aren’t new. In 2017, the FCC received 22 million comments during the debate on net neutrality rules—and around 18 million of them were later found to be fake. Millions came from a single college student; half a million came from Russian email addresses. After an investigation, New York Attorney General Letitia James fined “lead generator” companies that had collectively impersonated millions of real people when they submitted comments.

AI, in theory, could make it easier to write and submit fake comments that sound real. CiviClick says that it simply uses AI to help real people personalize their comments. The platform asks users questions related to the issue—for example, how an increase in taxes would affect their budget—and then tailors an email. (The company also uses AI to predict how likely someone would be to respond to a campaign.)

© Fast Company