- May 2023
-
maggieappleton.com
-
But some people will realise they shouldn’t be letting language models literally write words for them. Instead, they'll strategically use them as part of their process to become even better writers. They'll integrate them by using them as sounding boards while developing ideas, research helpers, organisers, debate partners, and Socratic questioners.
This hints towards prompt-engineering, and the role of prompts in human interaction itself [[Prompting skill in conversation and AI chat 20230301120740]]
High-quality use of generative AI will be about where in a creative / work process you employ it, and to what purpose. Not in accepting the current face presented to us in e.g. ChatGPT: give me an input and I'll give you an output. This in turn requires an understanding of one's own creative work processes, and where tools can help reduce friction (and where the friction is the actual cognitive work and must not be taken out).
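To make that 'sounding board, not ghostwriter' role concrete for myself, a minimal sketch below, assuming the 2023-era openai chat API (pre-1.0 client); the Socratic system-prompt wording is my own, not Appleton's.

```python
# Minimal sketch: the model as Socratic questioner over a draft, never
# as ghostwriter. Uses the 2023-era openai chat API (pre-1.0 client);
# the role prompt wording is my own assumption.
import openai

openai.api_key = "sk-..."  # your key here

SOCRATIC_ROLE = (
    "You are a Socratic questioner. Do NOT rewrite or extend the text. "
    "Ask three probing questions that expose gaps, hidden assumptions, "
    "or counterarguments in the draft below."
)

def question_draft(draft: str) -> str:
    """Return probing questions about a draft, never replacement prose."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SOCRATIC_ROLE},
            {"role": "user", "content": draft},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(question_draft("Generative AI will make all writing mediocre, because..."))
```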
-
Some of these people will become even more mediocre. They will try to outsource too much cognitive work to the language model and end up replacing their critical thinking and insights with boring, predictable work. Because that’s exactly the kind of writing language models are trained to do, by definition.
If you use LLMs to improve your mediocre writing, they will help. If you use them to outsource too much of your own cognitive work, you will get the bland SEO texts the LLMs were trained on, and the result will be more mediocre. Greedy reductionism will get punished.
-
This raises both the floor and the ceiling for the quality of writing.
I wonder about reading, after this entire section about writing. Why would I ever bother reading generated texts (apart from 'anonymous' texts like manuals)? It does not negate the need to be able to identify a human author, on the contrary, but it would also make even the cheapest way of generating too costly if no one will ever read it or act upon it. Current troll farming has effect because we read it and still assume it's human-written and genuine. As soon as that assumption is fully eroded, whatever gets generated will have no impact, because there's no reader left to be impacted. The current transitional asymmetry in judging output vs generating it is costly to humans; people will learn to avoid that cost. Another angle is humans pretending to be the actual author of generated texts.
-
On the new web, we’re the ones under scrutiny. Everyone is assumed to be a model until they can prove they're human.
On a web with many generative agents, all actors are going to be assumed to be models until it is clear they're really human.
Maggie Appleton calls this 'passing the reverse Turing test'. She suggests that using languages other than English, insider jargon, etc. may delay this effect by a few months at most (and she's right; I've had conversations with LLMs in several languages now, and there's no real difference with English anymore, as there was last fall).
-
This means they primarily represent the generalised views of a majority English-speaking, western population who have written a lot on Reddit and lived between about 1900 and 2023. Which in the grand scheme of history and geography, is an incredibly narrow slice of humanity.
Appleton points to the inherently and severely limited training set, and hence perspective, that is embedded in LLMs. Most of current human society, of history and future is excluded. This goes back to my take on data and blind faith in using it: [[Data geeft klein deel werkelijkheid slecht weer 20201219122618]] and [[Check data against reality 20201219145507]]
-
Recently, people have taken this idea further and developed what are being called “generative agents”. Just over two weeks ago, this paper "Generative Agents: Interactive Simulacra of Human Behavior" came out outlining an experiment where they made a sim-like game (as in, The Sims) filled with little people, each controlled by a language-model agent.
Generative agents are a sort of indefinite prompt chaining: an NPC or interactive thing can be LLM-controlled. https://www.youtube.com/watch?v=Gz6mAX41fs0 shows this for Skyrim. Appleton mentions a paper https://arxiv.org/abs/2304.03442 which does it for Sims-like settings. See Zotero copy. Cf. [[Stealing Worlds by Karl Schroeder]], where NPCs were a mix of such agents and real people taking on an NPC role.
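That 'indefinite prompt chaining' can be pictured as a loop: observe, recall, prompt, act, remember. A minimal sketch, with llm() stubbed out for any model call and a naive recency-only memory where the paper uses recency/importance/relevance scoring.

```python
# Sketch of a generative agent as indefinite prompt chaining:
# observe -> recall -> prompt -> act -> remember, looped indefinitely.
# llm() is a stub for any language model; the memory handling here is
# a crude stand-in for the paper's recency/importance/relevance scores.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stub: replace with a real model call."""
    return "wave at the passer-by"

@dataclass
class Agent:
    name: str
    memories: list[str] = field(default_factory=list)

    def step(self, observation: str) -> str:
        self.memories.append(f"saw: {observation}")
        recent = "\n".join(self.memories[-5:])  # naive recency-only recall
        action = llm(
            f"You are {self.name}, an NPC in a small town.\n"
            f"Recent memories:\n{recent}\n"
            f"Current observation: {observation}\n"
            "What do you do next? Answer with one short action."
        )
        self.memories.append(f"did: {action}")
        return action

npc = Agent("Isabella")
for event in ["a passer-by approaches", "the shop opens"]:
    print(npc.step(event))
```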
-
Most of the tools and examples I’ve shown so far have a fairly simple architecture. They’re made by feeding a single input, or prompt, into the big black mystery box of a language model. (We call them black boxes because we don't know that much about how they reason or produce answers. It's a mystery to everyone, including their creators.) And we get a single output – an image, some text, or an article.
Generative AI currently follows the pattern of one input and one output. There's no reason to expect it will stay that way. Outputs can scale: if you can generate one text supporting your viewpoint, you can generate 1,000 and spread them all as original content. Using those outputs will get more clever.
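The scaling point is almost trivially easy to show: the same single-input/single-output call, wrapped in a loop. A sketch with a stubbed generate() call; in practice the variation between pieces would come from sampling, not from the seed label used here.

```python
# Sketch of why outputs scale: one prompt, looped, yields N "original"
# texts pushing the same viewpoint. generate() stubs any
# one-input-one-output model call; variation would come from sampling.
def generate(prompt: str, seed: int) -> str:
    return f"[variant {seed}] essay arguing that {prompt}"

viewpoint = "product X is the best on the market"
astroturf = [generate(viewpoint, seed=i) for i in range(1000)]
print(len(astroturf), "pieces of 'original' content, one single viewpoint")
```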
-
By now language models have been turned into lots of easy-to-use products. You don't need any understanding of models or technical skills to use them. These are some popular copywriting apps out in the world: Jasper, Copy.ai, Moonbeam
Mentioned copywriting algogens:
* Jasper
* Wordtune
* Copy.ai
* Quillbot
* Sudowrite
* Copysmith
* Moonbeam
-
These are machine-learning models that can generate content that before this point in history, only humans could make. This includes text, images, videos, and audio.
Appleton posits that the waves of generative AI output will expand the dark forest enormously in the sense of feeling all alone as a human online voice in an otherwise automated sea of content.
-
https://web.archive.org/web/20230503150426/https://maggieappleton.com/forest-talk
Maggie Appleton on the impact of generative AI on the internet, with a focus on it being a place for humans and human connection. Take out some of the concepts as shorthand; some of the examples mentioned are new to me --> add to lists, sketch out the argumentation line and arguments. The talk represents an updated version of an earlier essay, https://maggieappleton.com/ai-dark-forest, which I probably want to go through next for additional details.
-
-
maggieappleton.com
-
https://web.archive.org/web/20230503151906/https://maggieappleton.com/ai-dark-forest The essay from which the talk I also saved was derived. The talk is a good approach to the storyline; expect more details / arguments to be found in here.
-
- Apr 2023
-
www.reuters.com
-
On the temporary ban of ChatGPT in Italy on the basis of GDPR concerns.
Italian DPA temporarily bans ChatGPT until adequate answers are received from OpenAI. Issues to address:
1. Absence of an age check (older than 13) for ChatGPT users.
2. Missing justification for the presence of personal data in ChatGPT's training data.
3. OpenAI has no EU-based offices, so DPAs have no immediate counterpart to interact with.
The temporary ban is meant to ensure a conversation with OpenAI gets started.
The trigger was a 9-hour cybersecurity breach where users' financial information and the content of their prompts/generated texts leaked into other accounts.
-
-
inflecthealth.medium.com
-
This is the space where AI can thrive, tirelessly processing these countless features of every patient I’ve ever treated, and every other patient treated by every other physician, giving us deep, vast insights. AI can help do this eventually, but it will first need to ingest millions of patient data sets that include those many features, the things the patients did (like take a specific medication), and the outcome.
AI tools yes, though not ChatGPT. More contextualising and specialisation is needed. And I'd add the notion that AI might be necessary as a temporary fix, on our way to statistics. Its power is in weighing (literally) many more different factors than we could statistically figure out, also because of interdependencies between factors. Once that's done there may well be a path from blackbox tooling like ML/DL towards logistic regression: https://pubmed.ncbi.nlm.nih.gov/33208887/ [[Machine learning niet beter dan Regressie 20201209145001]]
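The cited paper's point, that ML often does no better than plain regression on clinical prediction, is easy to sketch. A minimal comparison on synthetic tabular data; scikit-learn and the data setup are my assumptions, not the paper's.

```python
# Sketch of the blackbox-vs-regression comparison: on tabular,
# clinical-style features, logistic regression often matches the more
# opaque model. Synthetic data and scikit-learn are my assumptions,
# not the cited paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=25,
                           n_informative=8, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{type(model).__name__}: AUC = {auc:.3f}")
```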
-
My fear is that countless people are already using ChatGPT to medically diagnose themselves rather than see a physician. If my patient in this case had done that, ChatGPT’s response could have killed her.
More ELIZA. The opposite of searching the internet for your symptoms and ending up self-diagnosing with 'everything' because all the outliers are there too (availability bias): doing so by prompting generative AI will never surface outliers, because it sticks to dominant scripted situations (see the vignettes quote earlier) and won't deviate from your prompts.
-
If my patient notes don’t include a question I haven’t yet asked, ChatGPT’s output will encourage me to keep missing that question. Like with my young female patient who didn’t know she was pregnant. If a possible ectopic pregnancy had not immediately occurred to me, ChatGPT would have kept enforcing that omission, only reflecting back to me the things I thought were obvious — enthusiastically validating my bias like the world’s most dangerous yes-man.
Things missing in a prompt will not result from a prompt. This may reinforce one's own blind spots / omissions, lowering the probability of an intuitive leap to other possibilities. The machine helps you search under the light you switched on with your prompt. Regardless of whether you're searching in the right place.
-
ChatGPT rapidly presents answers in a natural language format (that’s the genuinely impressive part)
I am coming to see this as a pitfall of generative AI texts. It seduces us to anthropomorphise the machine, to read intent and comprehension into the generated text. Removing the noise in generating text, meaning the machine would give the same rote answers to the same prompts, would reduce this human projection. It would make the texts much 'flatter' and blander than they currently already are. Our fascination with these machines is that they sometimes sound like us, and it makes us easily overlook the actual value of the content produced. In human conversation we would give these responses a pass as they are plausible, but we'd also not treat conversation as likely fully true.
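That 'noise' is, concretely, the sampling temperature: at temperature 0 the model gives near-deterministic rote answers to the same prompt. A minimal sketch, assuming the 2023-era openai chat API and gpt-3.5-turbo as the model.

```python
# The "noise" is the sampling temperature: at temperature=0 the model
# returns near-deterministic rote answers to identical prompts.
# Assumes the 2023-era openai chat API (pre-1.0 client).
import openai

def ask(prompt: str, temperature: float) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

prompt = "How do you feel today?"
print(ask(prompt, temperature=0.0))  # same rote answer on every run
print(ask(prompt, temperature=1.0))  # varied, more "human-sounding"
```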
-
This is likely why ChatGPT “passed” the case vignettes in the Medical Licensing Exam. Not because it’s “smart,” but because the classic cases in the exam have a deterministic answer that already exists in its database.
Machines will do well in scripted situations (in itself a form of automation / codification). This was a factor in Hzap 08 / 09 in Rotterdam, where in programming courses the problems were simplified and highly scripted so the teacher would be able to grade the results, but at the cost of removing students from the actual real-life programming challenges they might encounter. It's a form of greedy reductionism of complexity. Whereas the proof of the pudding is performing well within complexity.
-
-
Here’s what I found when I asked ChatGPT to diagnose my patients
A comparison of ChatGPT responses to actual ER case descriptions. Interesting experiment by the author, though one shouldn't expect better results than it gave.
-
- Feb 2023
-
www.lawfareblog.com
-
It means that everything AI makes would immediately enter the public domain and be available to every other creator to use, as they wish, in perpetuity and without permission.
One issue with blanket, automatic entry of AI-generated works to the public domain is privacy: A human using AI could have good reasons not to have the outputs of their use made public.
-
- Dec 2022
-
www.theverge.com
-
In September, the US Copyright Office granted a first-of-its-kind registration for a comic book generated with the help of text-to-image AI Midjourney. The comic is a complete work: an 18-page narrative with characters, dialogue, and a traditional comic book layout. And although it’s since been reported that the USCO is reviewing its decision, the comic’s copyright registration hasn’t actually been rescinded yet. It seems that one factor in the review will be the degree of human input involved in making the comic. Kristina Kashtanova, the artist who created the work, told IPWatchdog that she had been asked by the USCO “to provide details of my process to show that there was substantial human involvement in the process of creation of this graphic novel.”
If copyright status hinges on the level of human involvement, then this will quickly become one more (c) grey area.
-