Disappearing metrics is basically the opposite of what a lot of experts on manipulated information recommend. Transparency and disclosure are “key” components of reform efforts like the Digital Services Act in the EU, said Yacine Jernite, machine learning and society lead for Hugging Face, an open-source data science and machine learning platform.
“We've seen that people who use [generative AI] services for information about elections may get misleading outputs,” Jernite added, “so it's particularly important to accurately represent and avoid over-hyping the reliability of those services.”
It’s generally better for an information ecosystem when people know more about what they’re using and how it works. And while some of that falls under media literacy and information hygiene efforts, a portion has to come from the platforms and their boosters. Hyping up an AI chatbot as a next-generation search tool sets expectations that aren’t fulfilled by the service itself.
Platforms don’t have much incentive to care
Platforms aren’t just amplifying bad information; they’re making money off it. From TikTok Shop purchases to ad sales, if these companies take meaningful, systemic steps to change how disinformation circulates on their platforms, they might work against their own business interests.
Social media platforms are designed to show you things you want to engage with and share. AI chatbots are designed to give the illusion of knowledge and research. But neither of these models is great for evaluating veracity, and doing so often requires reining in a platform that is working exactly as intended. Slowing or narrowing how a platform like this works means less engagement, which means no growth, which means less money.
“I personally can't imagine that they would ever be as aggressively interested in addressing this as the rest of us are,” said Evan Thornburg, a bioethicist who posts on TikTok as @gaygtownbae. “The thing that they're able to monetize is our attention, our interest, and our buying power. And why would they whittle that down to a narrow scope?”
Many platforms begrudgingly began efforts to take on misinformation after the 2016 US elections, and again at the beginning of the Covid pandemic. But since then, there’s been kind of a pullback. Meta laid off employees from teams involved with content moderation in 2023, and rolled back its Covid-era rules. Maybe they’re sick of being held responsible for this stuff at this point. Or, as technology changes, they see an opportunity to move on from it.
So do they care?
Again, it’s hard to quantify the efforts by major platforms to curb misinformation, which leaves me leaning once again on informed vibes. For me, it feels like major platforms are backing away from prioritizing the fight against misinformation and disinformation, and that there’s a general kind of fatigue out there on the topic more broadly. That doesn’t mean that nobody is doing anything.
Prebunking, which involves preemptively fact-checking rumors and lies before they gain traction, is super promising, especially when applied to election misinformation. Crowdsourced fact-checking is also an interesting approach. And to the credit of platforms themselves, they do continue to update their rules as new problems emerge.
There’s a way in which I have some sympathy for the platforms here. This is an exhausting topic, and it’s tough to be told, again and again, that you’re not doing enough. But pulling back and moving on doesn’t stop bad information from finding audiences over and over. While these companies assess how much they care about moderating and addressing their platforms’ capacity to spread lies, the people targeted by those lies are getting hurt.