Residents of Coulsdon say their town's name is being censored by Facebook.

Facebook’s Algorithms Think a Small English Community Is Up to No Good

Spare a thought for the people of Coulsdon, England, who say they’re being categorically oppressed by the heavy hand of algorithmic censorship for no reason other than the seemingly innocuous spelling of their town’s name.

According to the local news blog Inside Croydon, business owners and neighborhood associations in the town have had content removed from their Facebook pages because the platform’s content moderation algorithms are picking up the “LSD” in Coulsdon as a reference to the psychedelic drug.
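Meta hasn't described how its filter actually works, but the failure mode described by residents is consistent with naive substring matching against a keyword blocklist. Here is a minimal illustrative sketch (the blocklist entry and function names are hypothetical) showing how a substring check flags "Coulsdon" while a word-boundary check does not:

```python
import re

# Hypothetical blocklist entry, purely for illustration.
BLOCKED_TERMS = ["lsd"]

def naive_filter(text: str) -> bool:
    """Flags text if a blocked term appears anywhere, even inside another word."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def word_boundary_filter(text: str) -> bool:
    """Flags text only when a blocked term appears as a standalone word."""
    return any(re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)
               for term in BLOCKED_TERMS)

print(naive_filter("Coulsdon residents' association meeting"))          # True  (false positive)
print(word_boundary_filter("Coulsdon residents' association meeting"))  # False
print(word_boundary_filter("where to buy LSD"))                         # True
```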

The blog, quoting local sources who declined to be named, said that pages for local theaters, hardware stores, history groups, and residents’ associations had all been affected by the censorship and that Facebook has not fixed the issue despite multiple complaints.

“As long as it has ‘Coulsdon’ in the title, you get the drug reference that there’s no way around,” one anonymous source told Inside Croydon.

In a brief statement, Dave Arnold, a spokesperson for Facebook’s parent company, Meta, said “this was an error that has now been fixed.”

It isn't the first time Facebook's filters have blocked posts containing harmless, or even potentially life-saving, information.

In 2021, Facebook apologized to some English users for censoring and banning people who posted about the Plymouth Hoe, a landmark in the coastal city of Plymouth.

The Washington Post reported earlier this year that as wildfires raged across the West Coast, the company’s algorithms censored posts about the blazes in local emergency management and fire safety groups. In dozens of examples documented by the newspaper, Facebook flagged the posts as “misleading” spam.

Facebook group administrators have also previously noticed patterns of posts in their communities that contained the word “men” being flagged as hate speech, according to Vice. The phenomenon led to the creation of facebookjailed.com, where users documented bizarre moderation decisions, like a picture of a chicken being labeled “nudity or sexual activity.”

Facebook’s own data shows that its heavy reliance on algorithms to police content on the platform leads to millions of mistakes each month.

According to its most recent moderation data, Facebook took 1.7 million enforcement actions on drug-related content between April and June of this year. About 98 percent of that content was detected by the company, compared to just 2 percent reported by users. People appealed the sanctions in 182,000 cases and Facebook ended up restoring more than 40,000 pieces of content—11,700 without any need for an appeal and 28,500 after an appeal.

The algorithms targeting other types of banned content, like spam, result in even more mistakes. The platform restored nearly 35 million posts it erroneously labeled as spam during the most recent three-month period, more than 10 percent of the allegedly spammy content it previously removed.
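For a rough sense of scale, here is a back-of-the-envelope calculation using only the figures reported above (the "10 percent" bound is approximate, as the article gives it only as "more than 10 percent"):

```python
# Figures reported by Facebook for April-June, as cited above.
drug_actions  = 1_700_000          # enforcement actions on drug-related content
drug_restored = 11_700 + 28_500    # restored without appeal + restored after appeal
drug_appeals  = 182_000            # sanctions appealed by users

spam_restored = 35_000_000         # posts restored after being wrongly labeled spam
# "More than 10 percent" of removed spam implies somewhat under ~350 million removals.
spam_removed_upper_bound = spam_restored / 0.10

print(f"Drug-content restore rate: {drug_restored / drug_actions:.1%}")      # ~2.4%
print(f"Appeal success rate:       {28_500 / drug_appeals:.1%}")             # ~15.7%
print(f"Implied spam removals:     under {spam_removed_upper_bound:,.0f}")   # 350,000,000
```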
