2 Matching Annotations
  1. Apr 2025
  2. social-media-ethics-automation.github.io
    1. Lauren Leffer. CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections. Gizmodo, January 2023. URL: https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151 (visited on 2023-12-05).

      This article shows that CNET quietly began publishing AI-generated financial articles without clear disclosure, raising serious questions about transparency, editorial integrity, and the role of human oversight in journalism. One thing that stood out was that over 70 AI-written stories were published before any public acknowledgment, and many contained factual errors. This highlights a major ethical concern about relying on AI tools for critical information, especially given the stakes for public trust in media.

    1. By looking at enough data in enough different ways, you can find evidence for pretty much any conclusion you want. This is because sometimes different pieces of data line up coincidentally (coincidences happen), and if you try enough combinations, you can find the coincidence that lines up with your conclusion.

      This paragraph made me reflect on how easy it is to unintentionally manipulate data, especially when you're trying to prove a point. I've seen this happen in group projects where someone cherry-picked stats to support our thesis, but when we looked deeper, the broader dataset told a different story. It reminds me how important it is to approach data with skepticism and to consider context rather than just patterns. Is there a reliable method to distinguish meaningful correlations from coincidences in data?
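      The effect the quoted passage describes is easy to demonstrate with a small simulation. The sketch below (a hypothetical illustration, not from the reading) generates one "outcome" series and many candidate "predictor" series, all of which are pure random noise, then searches for correlations. Because so many combinations are tried, a few series will correlate with the outcome fairly strongly purely by coincidence:

      ```python
      import random
      import statistics

      random.seed(0)

      def pearson(xs, ys):
          """Pearson correlation coefficient between two equal-length series."""
          mx, my = statistics.mean(xs), statistics.mean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sum((x - mx) ** 2 for x in xs) ** 0.5
          sy = sum((y - my) ** 2 for y in ys) ** 0.5
          return cov / (sx * sy)

      # One outcome series and 200 unrelated candidate predictors,
      # all pure noise -- any correlation among them is coincidental.
      n_points = 30
      outcome = [random.gauss(0, 1) for _ in range(n_points)]
      predictors = [[random.gauss(0, 1) for _ in range(n_points)]
                    for _ in range(200)]

      # Search enough combinations and some |r| will look "strong" by chance.
      strong = [i for i, p in enumerate(predictors)
                if abs(pearson(outcome, p)) > 0.4]
      print(f"{len(strong)} of 200 random series correlate with |r| > 0.4")
      ```

      As a partial answer to the question above: one standard safeguard against this is to adjust the significance threshold for the number of comparisons made (e.g., a Bonferroni correction), and another is to check whether a pattern found in one dataset still holds in new, held-out data rather than only in the data used to find it.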