- Nov 2024
-
-
“AI models collapse when trained on recursively generated data” by Ilia Shumailov et al.
ᔥ[[Mathew Lowry]] in AI4Communities post - MyHub Experiments Wiki (accessed:: 2024-11-06 09:43:23)
-
- Mar 2024
-
research.ibm.com
-
https://research.ibm.com/blog/retrieval-augmented-generation-RAG
PK indicates that folks using footnotes in AI tools are using RAG (retrieval-augmented generation) methods.
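For concreteness, the footnote pattern might be sketched like this: retrieve the passages most relevant to the query, prepend them to the prompt as numbered sources, and ask the model to cite them. Everything below (the corpus, the overlap scoring, the prompt format) is an illustrative assumption, not IBM's implementation; real RAG systems use vector search rather than token overlap.
```python
# Minimal RAG-with-footnotes sketch (illustrative assumptions throughout).
# Retrieval here is naive token overlap; production systems use embeddings.

CORPUS = {  # hypothetical source documents
    "doc1": "RAG grounds model output in retrieved documents.",
    "doc2": "Footnotes let readers trace claims back to sources.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by how many tokens they share with the query."""
    q = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages as numbered footnotes the model can cite."""
    hits = retrieve(query)
    notes = "\n".join(f"[{i + 1}] {text}" for i, (_, text) in enumerate(hits))
    return f"Sources:\n{notes}\n\nAnswer the question, citing [n]: {query}"

print(build_prompt("Why do AI tools show footnotes?"))
```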
-
- Apr 2023
-
-
It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.
This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like GPT.
-
- Mar 2023
-
dl.acm.org
-
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.
Would the argument here for stochastic parrots also potentially apply to, or be abstracted to, Markov monkeys?
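A "Markov monkey" here could mean a first-order word chain that samples the next word purely from observed bigram counts, with no learned representation at all. A toy sketch (the training string is just illustrative):
```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict[str, list[str]]:
    """Count word-to-next-word transitions (a first-order Markov chain)."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict[str, list[str]], start: str, n: int = 10) -> str:
    """Random-walk the chain: the 'Markov monkey' at its typewriter."""
    out = [start]
    for _ in range(n):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

chain = train_bigrams("the parrot repeats what the parrot hears and the monkey types")
print(babble(chain, "the"))
```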
-
- Feb 2023
-
wordcraft-writers-workshop.appspot.com
-
The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.
Is LaMDA really able to "learn higher-level concepts", or is it just a large but straightforward information-theoretic prediction engine?
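LaMDA itself isn't public, but the "predict the most likely next word" loop the quote describes can be sketched with GPT-2 as a stand-in (assumes the Hugging Face transformers and torch packages; greedy decoding only, no sampling):
```python
# Greedy next-word prediction with GPT-2 standing in for LaMDA,
# which is not publicly available. Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The writers' workshop begins with"
ids = tok(prompt, return_tensors="pt").input_ids
for _ in range(10):  # repeatedly append the single most likely next token
    with torch.no_grad():
        logits = model(ids).logits
    next_id = logits[0, -1].argmax()  # greedy: argmax over the vocabulary
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
```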
-