Facebook and Instagram have had to remove over 18 million pieces of COVID-19-related misinformation since the start of the pandemic, according to a new report the company released on Wednesday.
In addition, more than 167 million pieces of content were rated “false” by the company’s fact-checkers, meaning they contained COVID-19 misinformation that Facebook representatives said “would not contribute or cause any kind of safety risk.”
The figures were released as part of Facebook’s Community Standards Enforcement Report, which covered the enforcement of the policies from January through March.
“From the start of the pandemic to April 2021, we removed more than 18 million pieces of content from Facebook and Instagram globally for violating our policies on COVID-19-related misinformation and harm,” read the report.
“We’re also working to increase vaccine acceptance and combat vaccine misinformation.”
Representatives from Facebook said these efforts include the creation of social media stickers that let users show friends and family when they get vaccinated, as well as “directing people” to accurate information about vaccines.
“This is all part of our goal to help people get one step closer to vaccination. We think we have, because of our reach, the opportunity to really make a difference here,” said a Facebook representative during a press call.
COVID-19 misinformation can be even more harmful than the garden-variety misinformation that traditionally spreads over social media platforms, one expert said.
“With COVID-19, there’s a particular danger, because you’re dealing with medical misinformation where … there’s a more clear pathway to cause harm or even death,” said Mary Blankenship, a University of Nevada researcher who looks at how misinformation spreads through Twitter.
The 18 million posts Facebook and Instagram removed likely only scratched the surface, according to Blankenship.
“There’s in general so much online traffic that is about COVID-19,” she said.
Because of that, the 18 million removed posts might be “low,” she said, “(compared to) the amount of COVID misinformation that occurs … there’s billions of tweets at this point, and so the amount of possible misinformation could be very high.”
When Global News pressed Facebook for contextualizing information about how frequently users post on Facebook and how often COVID-19 misinformation is reported, a representative of the company said, “I don’t have any numbers to give you on that.”
The company also wouldn’t share a country-specific breakdown of where the misinformation comes from.
“We will continue to look at ways to add transparency in the future,” said a Facebook representative.
Wednesday’s announcement also included the revelation that Facebook has hired EY, a third-party professional services firm, to conduct an “independent audit” of the company in order to “validate that our metrics are measured and reported correctly” when it comes to content that violates the platforms’ policies.
In addition to the COVID-19 misinformation posts that were taken down, Facebook said it removed just shy of 10 million pieces of organized hate content in the first quarter of 2021, up from 6.4 million at the end of last year.
Social media companies have been under fire in recent months for their role in the spread of misinformation. Many of these concerns came to a head on Jan. 6. Conspiracy theories that had taken root in the not-so-hidden corners of social media burst into the open that day with the violent storming of the U.S. Capitol in Washington.
“We have these massive global communication platforms in which anyone can say anything to the whole world for free, and that allows the amplification and dissemination of baseless and extreme views,” said Russell Muirhead, a professor at Dartmouth College who co-authored the book A Lot of People Are Saying, which explores the impact of conspiracy theories on democracy. He made the comments in an interview with Global News in January.
“Repetition has substituted for validation. Nobody’s asking whether something’s true anymore. They’re just saying, do a lot of people think it, do a lot of people say it? And if enough people say it, that’s true enough to say it one more time. So that repetition function is what’s substituting for the truth function in democracy.”
Following the storming of Capitol Hill, social media platforms took some unprecedented steps in an effort to tackle hate speech and misinformation online: they blocked then-U.S. president Donald Trump’s social media accounts.
Speaking Wednesday, Facebook representatives said they plan to continue striving for the right balance between protecting free speech and halting misinformation.
“We support an approach where the rules of the internet are updated with reforms to federal law that protects freedom of expression while still allowing platforms like ours to remove content that threatens people’s safety,” they said.
The company is also establishing a new “Transparency Center” that will be dedicated to explaining how hateful content is removed.
That center will also explain how Facebook employees “reduce the spread of problematic content that doesn’t violate our policies and give people additional context so they can decide what to click, read or share,” they added.
Blankenship said these efforts are all a step in the right direction, but there’s still much more to be done, particularly given the business model that underlies these social media companies.
Social media platforms make most of their money from ad revenue, Blankenship explained, which means their goal is to keep users online so they can keep looking at advertisements.
“With this kind of business model, what’s important to them is to grab your attention and hold it the longest. And so what holds it the longest is the things that typically get an emotional response,” she said.
Misinformation often spreads when it provokes “anger or fear,” according to Blankenship.