Wikipedia examples of ai generated text signs
- Mar 2026
-
en.wikipedia.org
- Feb 2026
-
Local file
-
statistical knowledge is still required in order to formulate the correct prompts and to ensure that the AI does not leave out any step of the analysis.
rhetoric: the author presents a prescriptive claim that AI needs humans with competent knowledge (in this case, statistics) to create prompts and to ensure the AI does not leave out any steps of the analysis. He positions domain knowledge not as a tool for using AI for statistical analysis, but as a prerequisite for managing the AI and auditing its output.
inference: beyond policing and correcting the AI's outputs, deep domain knowledge is what allows the AI to do complex data analysis without mistakes, hallucinated results, or mathematically false outcomes. This is basically the job description of a human with "Augmented Human Wisdom". The human's value is no longer in doing the math, but in possessing the vertical expertise (flesh/wisdom) to know exactly what math needs to be done and ultimately to audit the assistant machine's work.
-
ChatGPT Data Analyst clearly produced a false result here, precisely because the application assumptions for the ANOVA were not checked.
rhetoric: Schwarz employs cause-and-effect reasoning here based on empirical testing. He links a specific technical failure (not checking assumptions) to a definitive unwanted outcome (a false result).
inference: the "Data Analyst" function of ChatGPT hallucinated a result while performing its core function! This is the best evidence so far of the 'Crisis of Truth' and the dangers of the 'Headless Automatons' in my essay. If a generalist with no deep knowledge uses AI, they are at great risk of blindly accepting mathematically false conclusions. Synthetic syntax without competent human validation is a liability.
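The failure Schwarz describes is an ANOVA run without checking its assumptions first. As a minimal stdlib-only sketch of what "checking the assumptions" means in practice (made-up numbers, a crude variance-ratio rule of thumb, and a hand-computed F statistic -- not the study's actual method or data):

```python
from statistics import mean, variance

# Made-up measurements for three hypothetical treatment groups.
groups = [
    [4.1, 5.0, 4.8, 5.2, 4.6, 5.1],
    [5.9, 6.3, 5.7, 6.1, 6.0, 5.8],
    [4.9, 5.4, 5.1, 5.6, 5.0, 5.3],
]

# Crude homogeneity-of-variance check: ratio of largest to smallest
# sample variance, compared against a common rule of thumb (< 4).
variances = [variance(g) for g in groups]
var_ratio = max(variances) / min(variances)
homogeneous = var_ratio < 4

# One-way ANOVA F statistic computed by hand.
k = len(groups)                        # number of groups
n = sum(len(g) for g in groups)        # total observations
grand = mean(x for g in groups for x in g)
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
F = (ss_between / (k - 1)) / (ss_within / (n - k))

print(f"F = {F:.2f}, variance ratio = {var_ratio:.2f}, homogeneous = {homogeneous}")
```

The point is the order of operations: a human analyst runs the variance check (and normality checks, e.g. Shapiro-Wilk in a real toolkit like scipy) before trusting F; an unguided AI assistant may skip straight to the F statistic and report a result the assumptions don't support.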
-
The results show that generative AI can facilitate data analysis for individuals with minimal knowledge of statistics, mainly by generating appropriate code, but only partly by following standard procedures.
rhetoric: author uses comparative, objective statement (logos) to establish the main boundary of the technology's capability/capacity -- it excels at technical generation (things like coding) but fails at standard procedures (methodological adherence to SOPs).
inference: this proves the 'Raising the Floor' concept. AI completely automates the entry-level syntax (the "Word"), meaning that the generalist coder is obsolete! However, because it fails at standard procedures, it requires a human architect to guide it toward outputs that are valuable in the real world.
-
-
-
Perseverations that are input into the system are essentially magnified by the system’s suggested sentences,
rhetoric: authors explain an unintended consequence of using the AI tool: it scales the errors or the emptiness of the human prompt.
inference: this is an excellent metaphor for the 'manager fallacy'. If the human user is incompetent (or provides empty or incomplete input), the AI does not magically create wisdom -- it just amplifies the user's incompetence into highly articulate synthetic thought.
-
Participant 2 stated the age of her daughters (“Name1 is 18, Name2 is 21”), Aphasia-GPT transformed it as “Name1 is 18 and 21”, which is an impossible, but related, hallucination
rhetoric: researchers use a specific, clinical observation of an error to demonstrate the model's inability to comprehend logical reality despite the human relaying a perfectly structured sentence.
inference: this shows that AI is amoral and lacks the lived experience necessary to make logical judgments that hold up in the real world. It can format a sentence beautifully, but it does not always understand that a single human cannot be two ages at once. This is why it is necessary for the "flesh" to test the output against reality.
-
-
www.bizjournals.com
-
nearly half of the workers who received sloppy AI-generated material from coworkers saw those coworkers as less creative, capable and reliable, while smaller amounts saw them as less trustworthy and intelligent.
AI slop generated by your own hands changes the way people view you at work!
-
The cost of the time that it takes to fix "workslop" could add up too, with a $186 monthly cost per employee on average, according to a survey of desk workers by BetterUp in partnership with the Stanford Social Media Lab. Forty percent of the workers surveyed said they received "workslop" in the last month and that it took an average of two hours to resolve each incident.
$186 per employee per month, i.e. $2,232 per employee per year!
Annualized: 10 employees = $22,320; 25 = $55,800; 50 = $111,600; 100 = $223,200; 250 = $558,000; 500 = $1,116,000; 1,000 = $2,232,000.
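The headcount figures above are the surveyed $186/employee/month cost annualized (×12) and scaled. A tiny sketch reproducing that arithmetic (the constant comes from the BetterUp / Stanford Social Media Lab survey quoted above; the function name is my own):

```python
# Annualized "workslop" cost, per the survey figure of $186/employee/month.
MONTHLY_COST_PER_EMPLOYEE = 186

def annual_workslop_cost(headcount: int) -> int:
    """$186 per employee per month, times 12 months, times headcount."""
    return MONTHLY_COST_PER_EMPLOYEE * 12 * headcount

for n in (10, 25, 50, 100, 250, 500, 1000):
    print(f"{n:>4} employees -> ${annual_workslop_cost(n):,}/year")
```

At 1,000 employees the model yields $2,232,000 a year, matching the list above; the survey's two-hours-per-incident figure suggests the real cost is dominated by senior staff time, not the headline dollar amount.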
-
“Younger workers aren’t necessarily more careless, but they’re often using AI more frequently and earlier in their workflows," Dennison said. "There is also a training gap. Organizations often assume younger employees intuitively understand AI, yet provide little guidance on verification, risk, or appropriate use cases. As a result, AI may be treated as an answer engine rather than a support tool."
Another great quote, which helps establish how orgs treat younger generations, and how they tend to overtrust younger workers' understanding of AI.
-
About 70% of managers say their direct reports have made AI-related mistakes in the past year
Good stat from a resume.com survey of 1,100 managers: 70% of managers say their employees have submitted AI-generated errors in their work in the last year.
-
58% said direct reports submitted work that contained factual inaccuracies generated by AI tools, while fewer reported that AI failed to account for critical contextual factors. Other issues cited include low-quality content, poor recommendations and inappropriate messaging.
Of the reporting managers, 58% said employees submitted work containing factual inaccuracies generated by AI, while fewer reported that AI failed to account for "critical contextual factors", implying the writing was generic and not directly applicable to the context it was written for. Other issues cited: low-quality content, poor recommendations, and inappropriate messaging.
-
59% of managers saying that they had to invest additional time to correct or redo work created by AI. Similarly, 53% said their direct reports had to take on extra work, while 45% said they had to bring in co-workers to help fix the mistake.
Extra time and money spent repairing errors made by AI but not caught by the human in the middle. 59% (nearly three in five) of managers had to invest additional time to correct or redo work created by AI without a human auditing it. 53% said their direct reports had to take on extra work to repair the AI mistakes, and 45% needed to bring in a (perhaps more senior) co-worker to help fix them. I can imagine a mistake hitting production code, and the thousands (or more) of downstream errors that would later need to be repaired and rolled back. Very expensive and costly.
-
they damaged client relationships.
missed deadlines are tough, but damaging client relationships is a sin!
-
While 18% of managers said they did not suffer any financial losses from the mistakes, and 20% said those losses were less than $1,000, a significant number reported bigger losses. Twelve percent said those losses were more than $25,000, while 11% said between $10,000 and $24,999. Another 27% placed the value of those losses above $1,000 but below $10,000.
great stats for the cost of using AI without human auditing.
-
34% said Gen Z followed by 26% who blamed millennials, with 18% citing Gen X and just 9% saying it was baby boomers.
While this affects all generations, it is clearly a young-people problem! And this problem is here to stay.
-
“AI is reliable when used as an assistant, not a decision-maker," Dennison said. "Without human judgment and clear processes, speed becomes a risk, and efficiency gains can turn into costly mistakes,”
great quote. directly mentions my concept of requiring human judgement, and how not having a human in the loop can make work move faster, but can also lead to very costly mistakes.
-
“Employees treat AI outputs as finished work rather than as a starting point. Current AI tools are very good at generating fluent content, but they don’t understand context, business nuance, risk, or consequences. That gap shows up in factual errors, missing constraints, poor judgment calls, and tone misalignment.”
another great quote -- ties into the abdicating human agency to a robot, and the full quote even illustrates the dangers of doing so.
-
The term “workslop” was coined in a Harvard Business Review article last year
'workslop' is creeping into businesses. This section mentions AI-generated content, slide decks, and even lengthy reports or 'random code' being passed off as polished work by employees!
-
- Jan 2026
-
www.office.com
-
https://web.archive.org/web/20260105192105/https://www.office.com/
#2026/01/05: the day that Office was renamed to the 'Microsoft 365 Copilot app'. Meanwhile a billion PC owners are holding out on even Windows 11 [[Windows 10 is out of support, but 1 billion PCs still haven’t upgraded]]. And the MS CEO asks everyone to pretty please stop talking about slop or Microslop... [[Microsoft CEO Begs Users to Stop Calling It Slop]]
-
-
www.windowscentral.com
-
Dell's COO said on the Q3 earnings call that 500 million PCs could have been upgraded to Windows 11 but weren't, and another 500 million PCs that can't be upgraded are also still in operation on Windows 10. That's 1 billion PCs so far not moving to Win11.
-
-
futurism.com
-
Microsoft CEO Satya Nadella doesn't want people to use the word slop for AI output
-
-
www.baldurbjarnason.com
-
Baldur Bjarnason notices that a number of the 1,200 (!) blogs he follows that are normally dormant have become active again, but on the topic of AI, if not with posts generated by AI rather than written by the original blogger. Blandness ensues.
-
- Nov 2025
-
-
https://web.archive.org/web/20251129105036/https://www.nature.com/articles/d41586-025-03506-6
For an international AI conference, a chunk of the papers was AI-generated, but so were 21% of the peer reviews on those papers. Human-centipede epistemology is here; cf. [[Talk The Expanding Dark Forest and Generative AI]]
-
- Oct 2025
-
hubtr.bonjour.cafeia.org
-
Workslop: the rise of filler work
-
- Mar 2025
-
www.404media.co
-
AI slop as brute-forcing algorithmic timelines. The intended audience isn't people, but the algorithms shoving stuff into timelines.
-
- Feb 2025
-
thathtml.blog
-
Blogger Jared White warns that LLMs' coding output is as bland as their prose output and seems biased toward a few specific frameworks. #slop Horrifying example of people waiting for AI to catch up with new versions of a framework. That is really bad.
-
- Sep 2024
-
www.trend-mill.com
-
Stephen Moore rants about the internet he has come to dislike, incl. the deterioration of search, adtech, growth hacking for engagement, etc. Now much of it is no longer even human-created, but generated slop from bots. He calls it an addictive dopamine machine without joy.
-
-
www.404media.co
-
Paywalled article.
Wordfreq is shutting down because LLM output on the web is polluting its data to the point of uselessness. It tracked longitudinal change in word use across a variety of languages. Cf. human-centipede epistemology in [[Talk The Expanding Dark Forest and Generative AI]] by [[Maggie Appleton]]
-