Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence [h10]. Social media data can also be used to infer information about larger social trends like the spread of misinformation [h11]. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):
It’s really shocking to realize how much personal information can be inferred from simple online behavior. The idea that AI or data mining can guess someone’s sexual orientation or tendencies toward addiction just from their friend list or social activity feels invasive and unethical. I personally think it crosses the line between public and private life.
At the same time, I understand why companies want to use data to “predict” users; it’s part of how social media algorithms work. But when this data is used to judge people’s character or personality from their facial features, reviving pseudo-scientific and racist methods, it becomes a form of digital discrimination. It makes me wonder whether we are gradually losing control of our identities online.
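Returning to the Yankee Candle example above: once you have the two counts lined up side by side, checking whether they rise and fall together is a small computation. Here is a minimal sketch in Python; the weekly case counts and review counts are made up purely to illustrate the calculation, not the actual data behind the original observation.

```python
# Minimal sketch (hypothetical data) of checking a correlation like the
# Yankee Candle observation: compare weekly COVID-19 case counts against
# the weekly number of "no scent" reviews.
from statistics import correlation  # Pearson's r, available in Python 3.10+

# Hypothetical weekly counts -- NOT real data, just to illustrate the idea.
weekly_covid_cases      = [1200, 1800, 2500, 4000, 6500, 9000]
weekly_no_scent_reviews = [   3,    5,    9,   14,   22,   31]

r = correlation(weekly_covid_cases, weekly_no_scent_reviews)
print(f"Pearson correlation: r = {r:.2f}")  # close to 1.0 means the counts rise together
```

A strong correlation alone would not prove anything causal, but in this case the note above (COVID-19 can cause a loss of the ability to smell) supplies a plausible mechanism connecting the two counts.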