AI models are grown rather than built
for - AI - training - grown, not built
For example, AI models are trained on vast amounts of literature that include many science-fiction stories involving AIs rebelling against humanity.
for - AI - progress trap - training - movies like Terminator - This is a case of reality imitating movies - Another example - humans' mismanagement of the biosphere and elite abuse and intransigence
a civilization’s worth of texts
I pause at the idea that LLMs are trained on a full "civilization's worth" of texts, especially with a Gramscian view. What texts represent a whole civilization? I expect both Zuckerman and Gramsci would argue that it is more than just the dominant hegemonic texts that make up most LLM training sets.
for - search prompt 2 - can an adult who has learned language experience pre-linguistic reality like an infant who hasn't learned language yet? - https://www.google.com/search?q=can+an+adult+who+has+learned+language+experience+pre-linguistic+reality+like+an+infant+who+hasn%27t+learned+language+yet%3F&sca_esv=869baca48da28adf&biw=1920&bih=911&sxsrf=AE3TifNnrlFbCZIFEvi7kVbRcf_q1qVnNw%3A1762660496627&ei=kBAQafKGJry_hbIP753R4QE&ved=0ahUKEwjyjouGluSQAxW8X0EAHe9ONBwQ4dUDCBA&uact=5&oq=can+an+adult+who+has+learned+language+experience+pre-linguistic+reality+like+an+infant+who+hasn%27t+learned+language+yet%3F&gs_lp=Egxnd3Mtd2l6LXNlcnAid2NhbiBhbiBhZHVsdCB3aG8gaGFzIGxlYXJuZWQgbGFuZ3VhZ2UgZXhwZXJpZW5jZSBwcmUtbGluZ3Vpc3RpYyByZWFsaXR5IGxpa2UgYW4gaW5mYW50IHdobyBoYXNuJ3QgbGVhcm5lZCBsYW5ndWFnZSB5ZXQ_SKL1AlAAWIziAnAPeAGQAQCYAaEEoAHyoAKqAQwyLTE0LjczLjE0LjO4AQPIAQD4AQGYAlSgApnFAcICBBAjGCfCAgsQABiABBiRAhiKBcICDRAAGIAEGLEDGEMYigXCAgsQLhiABBixAxiDAcICDhAuGIAEGLEDGNEDGMcBwgIEEAAYA8ICBRAuGIAEwgIKECMYgAQYJxiKBcICChAAGIAEGEMYigXCAg4QLhiABBixAxiDARiKBcICExAuGIAEGLEDGNEDGEMYxwEYigXCAggQABiABBixA8ICCBAuGIAEGLEDwgIFEAAYgATCAgsQLhiABBixAxiKBcICCxAAGIAEGLEDGIoFwgIGEAAYFhgewgILEAAYgAQYsQMYgwHCAgsQABiABBiGAxiKBcICCBAAGKIEGIkFwgIIEAAYgAQYogTCAgUQABjvBcICBhAAGA0YHsICBRAhGKABwgIHECEYoAEYCsICBRAhGJ8FwgIEECEYFcICBBAhGAqYAwCSBwwxMy4wLjguNTIuMTGgB-K1A7IHCTItOC41Mi4xMbgHgcUBwgcHMzUuNDcuMsgHcQ&sclient=gws-wiz-serp - from - search prompt 1 - can we unlearn language? - https://hyp.is/Ywp_fr0cEfCqhMeAP0vCVw/www.google.com/search?sca_esv=869baca48da28adf&sxsrf=AE3TifMGTNfpTekWWBdYUA96_PTLS9T00A:1762658867809&q=can+we+unlearn+language?&source=lnms&fbs=AIIjpHxU7SXXniUZfeShr2fp4giZ1Y6MJ25_tmWITc7uy4KIegmO5mMVANqcM7XWkBOa06dn2D9OWgTLQfUrJnETgD74qUQptjqPDfDBCgB_1tdfH756Z_Nlqlxc3Q5-U62E4zbEgz3Bv4TeLBDlGAR4oTnCgPSGyUcrDpa-WGo5oBqtSD7gSHPGUp_5zEroXiCGNNDET4dcNOyctuaGGv2d44kI9rmR9w&sa=X&ved=2ahUKEwj4_LP9j-SQAxVYXUEAHVT8FfMQ0pQJegQIDhAB&biw=1920&bih=911&dpr=1 - to - search prompt 2 (AI) - can an adult who has learned language re-experience pre-linguistic phenomena like an infant with no language training? - https://hyp.is/m0c7ZL0jEfC8EH_WK3prmA/www.google.com/search?q=can+an+adult+who+has+learned+language+re-experience+pre-linguistic+phenomena+like+an+infant+with+no+language+training?&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRiPAjIHCAIQIRiPAtIBCTQzNzg4ajBqN6gCALACAA&sourceid=chrome&ie=UTF-8&udm=50&ved=2ahUKEwjfrLqDm-SQAxWDZEEAHcxqJgkQ0NsOegQIAxAB&aep=10&ntc=1&mstk=AUtExfAG148GJu71_mSaBylQit3n4ElPnveGZNA48Lew3Cb_ksFUHUNmWfpC0RPR_YUGIdx34kaOmxS2Q-TjbflWDCi_AIdYJwXVWHn-PA6PZM5edEC6hmXJ8IVcMBAdBdsEGfwVMpoV_3y0aeW0rSNjOVKjxopBqXs3P1wI9-H6NXpFXGRfJ_QIY1qWOMeZy4apWuAzAUVusGq7ao0TctjiYF3gyxqZzhsG5ZtmTsXLxKjo0qoPwqb4D-0K-uW-xjkyJj0Bi45UPFKl-Iyabi3lHKg4udEo-3N4doJozVNoXSrymPSQbr2tdWcxw93FzdAhMU9QZPnl89Ty1w&csuir=1&mtid=WBYQaYfuHYKphbIPzYmKiAs
when this technology meets us, that our interiors are not completely taken over, because this technology is so potent. It would be very easy to lose our souls, right, to be so conditioned so quickly by the dopamine, by whatever is going to happen when this stuff rolls out
Very important. This is why we are meeting AI as it evolves. We are training it in our language and with our QUALIA
just going back to the AI: to the extent that the fourth turning meets the people who are actually doing the AI, and informs the AI that actually the wheel goes this way, don't listen to those guys, it goes this way
for - AI - the necessity of training AI with human development - John Churchill
for example, our standard English language model is trained with something like maybe 100 gigabytes or so of text. That gives it a strength as if you had thrown the Google corpus at it. The other thing is, of course, a small corpus like that is computed in two or three hours on a laptop. And by the way, I didn't mention: our fingerprints are actually Boolean, so when we train, as I said, we are not using floating points.
for - comparison - cortical io vs normal AI - training dataset size and time
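A minimal sketch of the idea behind Boolean fingerprints, assuming an invented hashing scheme and illustrative sizes (this is not Cortical.io's actual semantic-folding algorithm): words become sparse binary vectors, and similarity is just bit overlap, with no floating-point arithmetic involved.

```python
# Illustrative sketch: sparse Boolean "fingerprints" compared by bit overlap.
# Sizes, sparsity, and the hashing trick are assumptions for demonstration only;
# they are not Cortical.io's actual semantic-folding algorithm.
import hashlib

FINGERPRINT_BITS = 16_384   # length of the binary vector
ACTIVE_BITS = 328           # ~2% sparsity, typical for sparse distributed codes

def fingerprint(contexts: list[str]) -> set[int]:
    """Map the contexts a word appears in to a sparse set of active bit positions."""
    bits = set()
    for ctx in contexts:
        h = int(hashlib.sha256(ctx.encode()).hexdigest(), 16)
        bits.add(h % FINGERPRINT_BITS)
        if len(bits) >= ACTIVE_BITS:
            break
    return bits

def overlap(fp_a: set[int], fp_b: set[int]) -> int:
    """Similarity is simply the count of shared active bits -- no floats needed."""
    return len(fp_a & fp_b)

if __name__ == "__main__":
    jaguar = fingerprint(["big cat", "rainforest predator", "spotted fur"])
    tiger = fingerprint(["big cat", "striped fur", "asian predator"])
    print("shared bits:", overlap(jaguar, tiger))
```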
Suppose that GPT-4 training took 3 months. In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute.
for - stat - AI evolution - prediction 2027 - training time - ~5 OOM decrease
stat - AI evolution - prediction 2027 - training time - ~5 OOM decrease - today it takes 3 months to train GPT-4 - in 2027, it will take 1 minute - That is, roughly 131,400 minutes vs 1 minute, or - about 5 OOM
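A quick arithmetic check of that ratio, approximating 3 months as a quarter of a 365-day year:

```python
# Rough check of the training-time ratio quoted above.
import math

minutes_per_3_months = (365 / 4) * 24 * 60   # ~131,400 minutes
ratio = minutes_per_3_months / 1             # vs. a 1-minute training run
print(f"{minutes_per_3_months:,.0f} minutes -> {math.log10(ratio):.1f} orders of magnitude")
# prints: 131,400 minutes -> 5.1 orders of magnitude
```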
I feel violated, cheated upon, betrayed, and exploited.
What could possibly go wrong? Dear Stack Overflow denizens, thanks for helping train OpenAI's billion-dollar LLMs. Seems that many have been drinking the AI koolaid or mixing psychedelics into their happy tea. So much for being part of a "community", seems that was just happy talk for "being exploited to generate LLM training data..." The corrupting influence of the profit-motive is never far away.
If you ask ChatGPT to cite, it will provide random citations. That's different from actually training a model to cite (e.g. using supervised finetuning on citations, with human raters checking whether sources match, which would also allow you to verify how accurately a model cites). This is something OpenAI could do; it just doesn't.
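A hedged sketch of what such a pipeline could look like: build supervised-finetuning examples whose targets embed citations, and keep only those a human rater verified against the source. The record fields (prompt, answer, citations, rater_verified) are hypothetical, not OpenAI's actual format.

```python
# Hypothetical sketch of curating supervised-finetuning data for citation behaviour.
# Field names are illustrative assumptions, not any lab's real schema.
import json
from dataclasses import dataclass

@dataclass
class CitationExample:
    prompt: str
    answer: str            # answer text containing citation markers like [1]
    citations: list[str]   # URLs or document IDs the markers refer to
    rater_verified: bool   # did a human rater confirm each source supports the claim?

def build_sft_records(examples: list[CitationExample]) -> list[dict]:
    """Keep only rater-verified examples and format them as (input, target) pairs."""
    records = []
    for ex in examples:
        if not ex.rater_verified:
            continue  # drop examples where the cited sources don't back the answer
        target = ex.answer + "\n\nSources:\n" + "\n".join(ex.citations)
        records.append({"input": ex.prompt, "target": target})
    return records

if __name__ == "__main__":
    demo = [CitationExample(
        prompt="When was the transistor invented?",
        answer="The transistor was invented in 1947 at Bell Labs [1].",
        citations=["https://example.org/transistor-history"],
        rater_verified=True,
    )]
    print(json.dumps(build_sft_records(demo), indent=2))
```

The same held-out rater judgments could then be reused to measure how accurately the finetuned model cites.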
We train our models using:
For a socially and economically sustainable growth path, the labor displacement in the sectors of application must be counterbalanced by job creation within the same and other sectors.
It's 2023 and I don't see anyone planning for this massive job displacement. I think the Hollywood strikes are a sign of things to come.
A book is defined as a published title with more than 49 pages.
[24] AI - Bias in Training Materials
Nice site sponsored by IBM providing a lot of training materials for AI, machine learning, and programming.
Collaborative Filtering sample with Apache Spark
This framework can be used for recommender systems.
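For reference, a minimal collaborative-filtering sketch along those lines using Spark MLlib's ALS; the ratings data and hyperparameters here are illustrative placeholders, not taken from the IBM sample itself.

```python
# Minimal collaborative-filtering example with Spark MLlib's ALS.
# Ratings data and hyperparameters are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("cf-demo").getOrCreate()

# (userId, itemId, rating) triples; a real recommender would load these from storage
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 2.0), (2, 11, 3.0)],
    ["userId", "itemId", "rating"],
)

als = ALS(
    rank=8, maxIter=5, regParam=0.1,
    userCol="userId", itemCol="itemId", ratingCol="rating",
    coldStartStrategy="drop",   # avoid NaN predictions for unseen users/items
)
model = als.fit(ratings)

# Top-2 item recommendations for every user
model.recommendForAllUsers(2).show(truncate=False)

spark.stop()
```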