4 Matching Annotations
  1. Last 7 days
    1. The technology behind generative AI tools isn’t designed to differentiate between what’s true and what’s not true

      This is a major point! It's all data to an AI or a computer; true or false has no meaning to it. That would require a higher level of thought/sentience that it doesn't and can't have.

    2. Limitations of Generative Models: Generative AI models function like advanced autocomplete tools: They’re designed to predict the next word or sequence based on observed patterns. Their goal is to generate plausible content, not to verify its truth. That means any accuracy in their outputs is often coincidental. As a result, they might produce content that sounds reasonable but is inaccurate (O’Brien, 2023).

      I feel that this is misleading: they are generating answers based on a given data set, which is explained higher up in the article. It's flawed data that causes the false information; the model extrapolates and publishes what makes sense from its "education" (see the toy sketch after this note).

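      To make this concrete, below is a minimal, purely illustrative sketch (in Python, using a made-up two-sentence "training set" that is not from the article) of what "predicting the next word based on observed patterns" means. The model only counts which word followed which in its data, so when the data is skewed or wrong, the most "plausible" continuation it generates is wrong too.

      ```python
      # Toy next-word predictor: learns bigram frequencies from a tiny training set
      # and always emits the most common continuation. It has no notion of truth --
      # it only reproduces whatever patterns its data happened to contain.
      from collections import defaultdict, Counter

      corpus = (
          "the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese ."   # flawed data: the false claim appears more often
      ).split()

      # Count which word follows which.
      bigrams = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          bigrams[prev][nxt] += 1

      def predict_next(word):
          """Return the most frequent continuation seen in training -- plausible, not verified."""
          return bigrams[word].most_common(1)[0][0]

      # Generate a sentence one predicted word at a time.
      sentence = ["the"]
      while sentence[-1] != ".":
          sentence.append(predict_next(sentence[-1]))
      print(" ".join(sentence))   # -> "the moon is made of cheese ." (fluent, confident, false)
      ```

      The output is grammatical and confident, and it is false only because the wrong claim was more frequent in the training data, which is exactly the "plausible, not verified" behavior the passage describes.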
    1. Disagreements about morality, measurement, and mechanisms should not stop us from accepting that our thoughts, feelings, and actions can be influenced by social cues implicitly

      This is an important statement, as it guides us to let go of our own feelings about our biases. This allows us to look at them objectively without shame or guilt.

    2. the state of California has introduced legislation to combat implicit bias.

      This seems like a wasted legislative effort. How can you legislate against an unknown, unconscious force?