Facebook trying to stop its own algorithms from doing their job
Menlo Park, California
FOR years, Facebook has built its algorithms to maximise engagement and clicks - a strategy that has helped the company garner 2.7 billion users across its family of apps, including Instagram and Messenger. But increasingly, it is willing to work against its own software's design in order to combat the spread of harmful content.
On Wednesday, the company announced a slew of new features and incremental product updates that counter the core engineering of its own systems by tweaking them to do more to reduce the spread of misinformation and sensational news - borderline content that the company won't remove entirely but is taking a more active role in policing.
For example, it will update its scrolling news feed algorithm by reviewing little-known websites whose articles get sudden surges of traffic on Facebook - a pattern that internal tests showed was a red flag for misinformation and clickbait. The new metric does not mean the problematic articles will be taken down, but their traffic will be reduced in news feed, the primary screen Facebook users see when they open the app.
The question is whether these changes are tweaks on the margins or more fundamental fixes. The news feed algorithm alone takes in hundreds of thousands of behavioural signals when it evaluates which posts get promotion - and it's tough to assess the impact any single fix might have on such a complex system.
The company will also expand fact-checking features for images, add privacy features to Messenger, and do more to take action on posts, images, hashtags and other content or behaviour that it calls "borderline" - material or actions that don't technically violate the company's rules but can lead to harmful outcomes.
"As content gets closer and closer to the line of community standards, we actually see that it gets more and more engagement," said Facebook operations specialist Henry Silverman.
Facebook may be making some progress in reducing the viral spread of false information in the US. Outside researchers have found there were significant drops in the proportion of Americans who visited fake news websites during the 2018 congressional elections compared to the presidential election in 2016.
In Messenger, one of Facebook's chat apps, the company is adding a verified badge, the blue check mark intended to help people distinguish between real accounts and impersonators.
On Instagram, the company says it will remove more sexually suggestive and other borderline content from its Explore and Discovery tabs, where people find new accounts and posts that they didn't explicitly seek out. Users who search for terms like #porn, #cocaine, and #opioids find a blank page on the Explore tab, and the company will be increasing the number of blocked terms.
Facebook is also enabling people to get more information about the groups that they join. People will be able to see the history of a group's name changes, because many junk political news sites frequently change their names. WP