2 Matching Annotations
  1. Oct 2024
    1. Looking a topic up on the internet, getting ideas, and writing about it is very different from copying sections of an article without attribution. Similarly, directly copying content produced by generative AI tools requires disclosure, but using that content for ideation does not. Consider also the need for accountability. Blindly trusting generative AI output is unwise and often unethical. AI output cannot be cited because it is not referenceable. Validating AI output against reliable sources to arrive at a sound conclusion is surely reasonable.

      Sounds like we should be teaching people to skip the middleman (middle bot?) and just go to the “reliable sources,” then, huh?

    2. We have also seen a dramatic reduction in GPT's confabulation errors with each new release, thanks in part to reinforcement learning from human feedback (RLHF), in which human raters rank responses for accuracy.