In the article "These new tools let you see for yourself how biased AI image models are," Melissa Heikkilä argues that AI image-generating systems reproduce harmful biases and stereotypes. Despite companies' attempts to fix these issues, the biases persist, and researchers at Hugging Face and Leipzig University have created a tool that brings them to light. The tool highlights biased and stereotypical images generated along lines of gender and ethnicity. Heikkilä attributes these biases to the training data, which largely reflects American culture and values. Overall, the article clarifies these challenges with AI and emphasizes the need to make these systems less biased.