- Oct 2022
-
www.wired.com
-
Where does clicktivism end and algorithmic gaming begin?
This linked article was super interesting to me. I had never heard the term "clicktivism" before, but it made a lot of sense: many people online seem to feel that helping things trend is a notable part of activism, and that behavior can in turn look suspicious to platforms. It's true that this has its share of impact, since causes and people in need can reach more people than was ever possible thanks to the internet, but it has also led to some odd cultural quirks online when it comes to giving such issues a platform. Occasionally the goal of "boosting awareness" or publicly declaring yourself aligned with the "right" or "correct" cause can overpower the actual execution of concrete, meaningful action. The term "performative activism" also covers this phenomenon, in which people publicly post a lot about a tag or trend to assure their followers that they too are on the right side of affairs. Accounts that are clearly mining activist circles for engagement (even if they are helping to spread awareness of issues) can definitely behave a lot like bots or spammers.
-
This switch to pure algorithmic curation was an unmitigated disaster. Bullshit trended immediately, and regularly: “Megyn Kelly Fired From Fox News!” was a top Trending headline two days after the call was made. Yet Facebook kept Trending Topics for two more years—providing fodder for many more stories about loony trends—before killing the feature in June 2018. Facebook’s announcement downplayed the feature’s controversial history. “We’re removing Trending soon to make way for future news experiences on Facebook,” the company wrote.
Because I have prior experience with machine learning from my academic work, this has always been a point of interest/contention for me. Algorithms are unique in that they do exactly what we tell them to do, but they execute their task without the nuance that goes into human decision-making. If algorithmic curation is told to prioritize based on engagement, it will pay little attention to what kind of content is getting that engagement. This is what led to YouTube, Facebook, and others' history of platforming questionable or even dangerous content in their algorithm-powered user recommendations.
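To make that concrete, here's a toy sketch of pure engagement-based ranking (the numbers and the scoring function are made up for illustration, not any platform's real code). Nothing in the ranker ever looks at whether a story is true, so the fabricated headline trends simply because it drew the most clicks:

```python
# Toy sketch of engagement-only curation (hypothetical data and weights;
# not Facebook's actual ranking logic). The ranker sees only engagement
# counts, so accuracy plays no role in what trends.

stories = [
    {"headline": "Megyn Kelly Fired From Fox News!", "accurate": False,
     "clicks": 90_000, "shares": 40_000},
    {"headline": "City council passes annual budget", "accurate": True,
     "clicks": 3_000, "shares": 200},
]

def engagement_score(story):
    # Nothing here inspects accuracy or quality -- only raw engagement.
    return story["clicks"] + 2 * story["shares"]

trending = sorted(stories, key=engagement_score, reverse=True)
for s in trending:
    print(s["headline"], "| accurate:", s["accurate"])
# The false story ranks first precisely because it got more engagement.
```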
(The assumption that removing humans from the equation completely eliminates bias is honestly incorrect, since automated/machine-learning systems are still capable of passively soaking up biases from the training datasets they learn from.)
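A minimal sketch of how that absorption happens (with made-up data and a deliberately simple frequency "model"): if the training labels are skewed against one group, the model reproduces that skew with no human in the loop.

```python
# Toy sketch of a model passively absorbing dataset bias (hypothetical
# data). The training set mostly labels posts from community_a as "spam",
# so a simple frequency-based model learns that association.

from collections import Counter

train = [
    ("community_a", "spam"), ("community_a", "spam"),
    ("community_a", "spam"), ("community_a", "ok"),
    ("community_b", "ok"),   ("community_b", "ok"),
]

# "Training": count how often each label appears for each group.
counts = {}
for group, label in train:
    counts.setdefault(group, Counter())[label] += 1

def predict(group):
    # Predict the most common label seen for this group in training.
    return counts[group].most_common(1)[0][0]

print(predict("community_a"))  # "spam" -- the skew in the data, not the content
print(predict("community_b"))  # "ok"
```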
-