- Sep 2024
-
pivot-to-ai.com
-
Academic publishers are pushing authors to deliver manuscripts and articles faster (including suggesting peer review be done in 15 days) to meet the quotas they promised the AI companies they sold their souls to: Taylor & Francis/Routledge for 75M USD/yr, Wiley for 44M USD. No opt-outs, etc. What if you asked those #algogens whether this is a good idea?
-
-
github.com
-
I don't think anyone has reliable information about post-2021 language usage by humans. The open Web (via OSCAR) was one of wordfreq's data sources. Now the Web at large is full of slop generated by large language models, written by no one to communicate nothing. Including this slop in the data skews the word frequencies. Sure, there was spam in the wordfreq data sources, but it was manageable and often identifiable. Large language models generate text that masquerades as real language with intention behind it, even though there is none, and their output crops up everywhere.
Robyn Speer will no longer update wordfreq. n:: there is no reliable post-2021 language usage data! Wordfreq was using open web sources, but those are getting polluted by #algogens output.
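For context, a minimal sketch of how wordfreq is queried, assuming its standard public API (`word_frequency` / `zipf_frequency`); the point is that the data behind these calls is now frozen at roughly 2021:

```python
# Minimal sketch using wordfreq's public API; per Speer's note, the
# bundled frequency data stops around 2021 and will not be refreshed.
from wordfreq import word_frequency, zipf_frequency

print(word_frequency("slop", "en"))   # frequency as a proportion of all words
print(zipf_frequency("slop", "en"))   # Zipf scale: log10 of frequency per billion words
```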
-
- Jul 2024
-
www.hyperorg.com
-
https://web.archive.org/web/20240712174702/https://www.hyperorg.com/blogger/2024/07/11/limiting-ais-imagination/ When I played with the temperature setting 18 months ago (I don't remember how or what exactly, but it was an actual setting in the model, probably something from Hugging Face), what stood out for me was that at 0 it was immediately obvious the output was automated, and it yielded the same answer to the same prompt repeatedly as it stuck to the likeliest outcome for each next token. At higher temperatures it would get wilder, and it struck me as easier to project a human having written it. Since then I almost regard the temperature setting as the fakery/projection-likelihood level. Although it doesn't take much to trigger projection, as per Eliza. n:: the temperature of models makes projection possible
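A minimal sketch of what temperature does mechanically, assuming raw next-token logits from some model; the function here is my own illustration, but the parameter corresponds to the usual `temperature` knob (e.g. in Hugging Face's `generate()`):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float) -> int:
    """Pick a next-token id from raw logits at a given temperature."""
    if temperature == 0:
        # Greedy decoding: always the single most likely token, which is
        # why temperature 0 yields the same answer to the same prompt.
        return int(np.argmax(logits))
    scaled = logits / temperature             # <1 sharpens, >1 flattens the distribution
    probs = np.exp(scaled - scaled.max())     # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.default_rng().choice(len(probs), p=probs))
```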
-
- Jun 2024
-
debot.lodder.dev DeBot
-
A project of the Open State Foundation.
-
- May 2024
-
media.dltj.org
-
why training artificial intelligence in a research context is and should continue to be a fair use
Examination of AI training relative to the four factors of fair use
-
-
media.dltj.org
-
And on the side here, you see we have this new chat box where the user can engage with the content. For this very first action, the user doesn't have to do anything: they land on the page, and as long as they ran a search, we immediately process a prompt that asks, in effect, how does this document relate to the query you put in?
Initial LLM chat prompt: why did this document come up
Using the patron's keyword search phrase, the first chat shown is the LLM analyzing why this document matched the patron's criteria. Then there are preset prompts for summarizing what the text is about, recommended topics to search, and a prompt to "talk to the document".
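A hypothetical sketch of how such an initial prompt could be assembled; the function name and template are my assumptions, not JSTOR's actual implementation:

```python
# Hypothetical: illustrates the "why did this document come up" first prompt.
def initial_chat_prompt(search_query: str, document_text: str) -> str:
    return (
        f'A patron searched for: "{search_query}"\n\n'
        "Explain briefly, in plain language, why the following document "
        "matched that search.\n\n"
        f"Document excerpt:\n{document_text[:2000]}"
    )
```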
-
Navigating Generative Artificial Intelligence: Early Findings and Implications for Research, Teaching, and Learning
Spring 2024 Member Meeting: CNI website • YouTube
Beth LaPensee, Senior Product Manager, ITHAKA
Kevin Guthrie, President, ITHAKA
Starting in mid-2023, ITHAKA began investing in and engaging directly with generative artificial intelligence (AI) in two broad areas: a generative AI research tool on the JSTOR platform and a collaborative research project led by Ithaka S+R. These technologies are so crucial to our futures that working directly with them to learn about their impact, both positive and negative, is extremely important.
This presentation will share early findings that illustrate the impact and potential of generative AI-powered research based on what JSTOR users are expecting from the tool, how their behavior is changing, and implications for changes in the nature of their work. The findings will be contextualized with the cross-institutional learning and landscape-level research being conducted by Ithaka S+R. By pairing data on user behavior with insights from faculty and campus leaders, the session will share early signals about how this technology-enabled evolution is beginning to take shape.
-
-
www.arl.org
-
The ARL/CNI 2035 Scenarios: AI-Influenced Futures in the Research Environment. Washington, DC, and West Chester, PA: Association of Research Libraries, Coalition for Networked Information, and Stratus Inc., May 2024. https://doi.org/10.29242/report.aiscenarios2024
-
- Jan 2024
-
www.eff.org
-
Images of women are more likely to be coded as sexual in nature than images of men in similar states of dress and activity, because of widespread cultural objectification of women in both images and their accompanying text. An AI art generator can "learn" to embody injustice and the biases of the era and culture of the training data on which it is trained.
Objectification of women as an example of AI bias
-
- Nov 2023
-
media.dltj.org
-
One of the ways that ChatGPT is very powerful is that if you're sufficiently educated about computers and you want to make a computer program, and you can instruct ChatGPT in what you want with enough specificity, it can write the code for you. It doesn't mean that every coder is going to be replaced by ChatGPT, but it means that a competent coder with an imagination can accomplish a lot more than she used to be able to; maybe she could do the work of five coders. So there's a dynamic where people who can master the technology can get a lot more done.
ChatGPT augments, not replaces
You have to know what you want to do before you can provide the prompt for the code generation.
-
- Sep 2023
-
fortune.com
-
analyticsindiamag.com
-
considering that Llama-2 has open weights, it is highly likely that it will improve significantly over time.
I believe the author is referring to the open weights of the Llama-2 model, which allow quick, targeted fine-tuning of the original large model.
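A rough sketch of what that quick, targeted fine-tuning can look like in practice, assuming the Hugging Face transformers and peft libraries; the model id and LoRA hyperparameters are illustrative, not from the article:

```python
# Illustrative LoRA setup on open Llama-2 weights; only small adapter
# matrices are trained, which is what makes fine-tuning cheap and fast.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically a fraction of a percent of the full model
```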
-
- Jul 2023
-
arxiv.org
-
AI-generated content may also feed future generative models, creating a self-referential aesthetic flywheel that could perpetuate AI-driven cultural norms. This flywheel may in turn reinforce generative AI's aesthetics, as well as the biases these models exhibit.
AI bias becomes self-reinforcing
Does this point to a need for more diversity in AI companies? Different aesthetic/training choices lead to opportunities for more diverse output. To say nothing of identifying AI-generated output and keeping it out of the training data of subsequent models. A toy illustration of the flywheel follows below.
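A toy numeric illustration of the flywheel (my own construction, not from the paper): repeatedly fit a distribution to samples of its own output, and the fitted model tends to drift toward its most typical productions:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                            # generation 0: the "real" distribution
for generation in range(1, 11):
    samples = rng.normal(mu, sigma, size=200)   # model output becomes the next training set
    mu, sigma = samples.mean(), samples.std()   # refit on own output
    print(f"gen {generation}: sigma = {sigma:.3f}")  # spread tends to shrink over generations
```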
-
- May 2023
-
maggieappleton.com
-
Some of these people will become even more mediocre. They will try to outsource too much cognitive work to the language model and end up replacing their critical thinking and insights with boring, predictable work. Because that’s exactly the kind of writing language models are trained to do, by definition.
If you use LLMs to improve your mediocre writing, they will help. If you use them to outsource too much of your own cognitive work, you will get the bland SEO texts the LLMs were trained on, and the result will be more mediocre. Greedy reductionism gets punished.
-
- Dec 2022
-
garymarcus.substack.com
-
every country is going to need to reconsider its policies on misinformation. It’s one thing for the occasional lie to slip through; it’s another for us all to swim in a veritable ocean of lies. In time, though it would not be a popular decision, we may have to begin to treat misinformation as we do libel, making it actionable if it is created with sufficient malice and sufficient volume.
What to do, then, when our government reps are already happy to perpetuate "culture wars" and empty talking points?
-
anyone skilled in the art can now replicate their recipe.
Well, anyone skilled enough who has $500k for the GPU bill, plus access to the corpus and the means to store it... So corporations, I guess... Yay!
-