Karen Hao. How Facebook got addicted to spreading misinformation. MIT Technology Review, March 2021. URL: https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/ (visited on 2023-12-08).
This article explains how Facebook's AI systems helped the platform grow but also made harmful content, such as misinformation and hate speech, harder to control. The article focuses on how Facebook's algorithms were designed to maximize engagement by showing users content they were likely to click and share. Because extreme content tends to attract more views and clicks, it earns more engagement, so the system ends up rewarding harmful posts. The article also argues that Facebook's Responsible AI team concentrated on algorithmic bias rather than fixing the recommendation systems that spread misinformation, because changing those systems could hurt the company's growth.