- Dec 2023
-
www.youtube.com
-
The AI Bias Before Christmas by Casey Fiesler https://www.youtube.com/watch?v=7xRXYJ355Tg
-
- Oct 2023
-
-
Three AI Chatbots, Two Books, and One Weird Annotation Experiment by Remi Kalir on September 29, 2023 https://remikalir.com/blog/three-ai-chatbots-two-books-and-one-weird-annotation-experiment/
-
- Sep 2023
- May 2023
-
ourworldindata.org (Books)
-
A book is defined as a published title with more than 49 pages.
[24] AI - Bias in Training Materials
-
-
www.technologyreview.com
-
An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.
[21] AI Nuances
-
- Mar 2023
-
www.nytimes.com
-
Whose values do we put through the A.G.I.? Who decides what it will do and not do? These will be some of the highest-stakes decisions that we’ve had to make collectively as a society.
A similar set of questions might be asked of our political system. At present, the oligopolistic nature of our electoral system heavily biases our direction as a country.
We're heavily underrepresented along a huge number of axes.
How would we change our voting and representation systems to better represent us?
-
- Feb 2023
-
wordcraft-writers-workshop.appspot.com
-
Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.
Examples of artificial intelligence pushing writers toward the pre-existing biases of its training data.
-
- Sep 2022
-
www.scientificamerican.com
-
A good overview article of some of the psychology research behind misinformation in social media spaces, including bots, AI, and the effects of cognitive bias.
Probably worth mining the story for the journal articles and collecting/reading them.
-