- Nov 2024
-
-
AI models collapse when trained on recursively generated data by Ilia Shumailov et al.
ᔥ[[Mathew Lowry]] in AI4Communities post - MyHub Experiments Wiki (accessed:: 2024-11-06 09:43:23)
-
-
experiments.myhub.ai
-
https://experiments.myhub.ai/ai4communities_post
Mathew Lowry experiment
-
the notes you make about it as you curate it (which tells the AI exactly what you find useful about it)
My notes may give some indication about what I find useful about a thing, but certainly not exactly or even all of what I find useful.
-
the model collapse paper now suggests that the training data created by well-managed communities could be the new currency of collective intelligence.
-
“after greed and short-sightedness floods the commons with low-grade AI content… well-managed online communities of actual human beings [may be] the only place able to provide the sort of data tomorrow’s LLMs will need”
The value spoken of here is that of slowly building up (evolving) directed knowledge over time. The community evolves links, through work and coherence, into actionable information and knowledge, whereas AIs currently have no sense of leadership or direction in which to take that knowledge; they just create more related information, which reads as "adjacent noise". Choosing a path and building on it to create direction is where the promise lies. Of course some paths may wither and die, but the community will manage that, whereas an AI would indiscriminately keep building in all directions without the value of demise within the system.
Tags
- social media
- curation
- beyond the pale
- Friends of the Link 2024-10-23
- evolution
- read
- ai4communities
- Friends of the Link 2024-10-16
- note making
- collective memory
- communities
- leadership
- content creation
- ratchets
- training data
- data ownership
- zettelkasten ratchet
- collective intelligence
- sense making
- direction
- artificial intelligence
- small web
Annotators
URL
-
-
cybercultural.com
-
Google AI Overviews is the main culprit and poses an existential threat to publishers.
-
-
notebooklm.google
-
https://notebooklm.google/
-
- Oct 2024
-
www.chathamhouse.org
-
Why AI must be decolonized to fulfill its true potential by [[Mahlet Zimeta]], Data and technology policy expert, Freelance
-
-
www.youtube.com
-
Successful Secretary Presented by Royal Office Typewriters. A Thomas Craven Film Corporation Production, 1966. https://www.youtube.com/watch?v=If5b2FiDaLk.
Script: Lee Thuna<br /> Educational Consultant: Catharine Stevens<br /> Assistant Director: Willis F. Briley<br /> Design: Francisco Reynders<br /> Director & Producer: Carl A. Carbone<br /> A Thomas Craven Film Corporation Production
"Mother the mail"
gendered subservience
"coding boobytraps"
"I think you'll like the half sheet better. It is faster." —Mr. Typewriter, timestamp
A little bit of the tone of "HAL" from 2001: A Space Odyssey (1968). This is particularly suggestive, as H.A.L. is a one-letter increment from I.B.M., and the 1966 Royal 660 was designed to compete with IBM's Selectric.
This calm voice makes suggestions to a secretary while H.A.L. does it for a male astronaut (a heroic figure of the time period). Suddenly the populace feels the computer might be a bad actor.
"We're living in an electric world, more speed and less effort."—Mr. Typewriter<br /> (techno-utopianism)
Tags
- secretaries
- efficiency
- Mr. Typewriter
- 2001: A Space Odyssey (1968)
- H.A.L.
- typewriter ads
- artificial intelligence as overlord
- power over
- Royal 660
- quotes
- typewriters
- voice over
- typewriter shortcuts
- gendered subservience
- Royal typewriters
- IBM selectric
- 1966
- effort
- techno-utopianism
Annotators
URL
-
-
-
One employee suggested that Adobe should come up with "a long-term communication and marketing plan outside of blog posts," and meet with the company's most prominent critics on YouTube and social media to "correct the misinformation head-on."

"Watching the misinformation spread on social media like wildfire is really disheartening," this person wrote in Slack. "Still, a loud 'F Adobe' and 'Cancel Adobe' rhetoric is happening within the independent creator community that needs to be addressed."

A third worker said the internal communication review process might be broken. "What are we doing meaningfully to prevent this or is this only acted on when called out?" the person wrote.
-
It's unclear how this information is accessed, and whether the creators of the data can opt out or get paid.
-
Adobe upset many artists and designers recently by implying it would use their content to train AI models. The company had to quell those concerns with a blog post denying this. But some Adobe employees are still not happy with the response, and they are calling for improved communication with customers.

According to screenshots of an internal Slack channel obtained by Business Insider, Adobe employees complained about the company's poor response to the controversy and demanded a better long-term communication plan. They pointed out that Adobe got embroiled in similar controversies in the past, adding that the internal review process needed to be fixed.

"If our goal is truly to prioritize our users' best interests (which, to be honest, I sometimes question), it's astonishing how poor our communication can be," one of the people wrote in Slack. "The general perception is: Adobe is an evil company that will do whatever it takes to F its users."

"Let's avoid becoming like IBM, which seems to be surviving primarily due to its entrenched market position and legacy systems," this Adobe employee added.
-
- Sep 2024
-
learn.microsoft.com
-
For all audiences and in most content, use intelligent or intelligence to describe or talk about the benefits of AI. In UI, use intelligent technology to describe the underlying technology that powers AI features.
I think this is a good example of a misleading marketing ploy that shouldn't exist in technical documentation.
-
-
www.theregister.com
-
criminals could easily create a package that uses a name produced by common AI services and cram it full of malware
-
-
arxiv.org
-
the average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat
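Given hallucination rates like these, one simple defense is to screen LLM-suggested dependencies against a vetted allowlist before anything gets installed, so a hallucinated name an attacker has squatted on is never installed blindly. A minimal sketch — the allowlist, function names, and the misspelled example package are all illustrative, not from the paper:

```python
# Screen LLM-suggested package names against a vetted allowlist.
# The allowlist below is a toy example; a real one would come from
# an organization-maintained mirror or lockfile.

VETTED_PACKAGES = {"requests", "numpy", "pandas", "flask"}

def screen_dependencies(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split suggested package names into approved and suspect lists."""
    approved = [name for name in suggested if name.lower() in VETTED_PACKAGES]
    suspect = [name for name in suggested if name.lower() not in VETTED_PACKAGES]
    return approved, suspect

# "reqeusts-pro" stands in for a plausible-looking hallucinated name.
ok, flagged = screen_dependencies(["requests", "reqeusts-pro", "numpy"])
```

A production version might additionally check download counts, package age, or registry metadata rather than a hardcoded set, but the principle is the same: never pipe model output straight into `pip install`.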
Tags
Annotators
URL
-
-
dl.acm.org
-
Boden’s conception of creativity as “exploration and play”
Margaret Boden, research professor at University of Sussex, has provided pivotal work in the exploration of creativity using interdisciplinary research across music, game, story, physics, and artificial intelligence to explore human creativity in arts, science, and life.
-
Phil Agre
American AI researcher and humanities professor known for critiques of technology.
-
-
www.humanwordsproject.com
-
https://web.archive.org/web/20231121081108/https://www.humanwordsproject.com/
Found via Richard Polt's blog.
Site no longer exists in 2024
-
-
www.platformer.news
-
Why note-taking apps don't make us smarter by [[Casey Newton]]
Newton takes a thin view of the eternal question of information overload and whether or not AI might improve the situation. No serious results...
-
- Aug 2024
-
storm.genie.stanford.edu
-
https://storm.genie.stanford.edu/
STORM Get a Wikipedia-like report on your topic<br /> STORM is a research prototype for automating the knowledge curation process.
-
-
en.wikipedia.org
-
If you put in all physics data up to 1904, would an AI ever be able to have come up with anything from Einstein's annus mirabilis? I suspect not, at least right now.
-
-
feministai.pubpub.org
-
Freedom of Expression and Challenges Posed by Artificial Intelligence
The first section concentrates on building a conceptual framework based on the relationship between freedom of expression and democracy, and the way in which the Internet and social media enhance the exercise of freedom of expression.
The second section examines the challenges to freedom of expression in relation to the promotion of gender equality on social media which are imposed by AI content curation and moderation.
In the final section, the two-tiered approach, i.e., policies and law, is proposed to deal with the problem.
-
-
www.midjourney.com
-
https://www.midjourney.com/home
2024-08-21: PM indicates they're doing free trials now
-
-
pinokio.computer
-
https://pinokio.computer/<br /> Pinokio is a browser that lets you install, run, and programmatically control ANY application, automatically.
-
-
-
https://flux1.ai/<br /> Flux AI Image Generator
-
-
glif.app
-
https://glif.app/glifs
-
-
-
we are slower, we are irrational, we are imperfect, we are drifting away, we are forgetting stuff, we are making mistakes, but we are learning from our failures, we get support from our friends, from our colleagues, and we are understanding instead of just analyzing the world, and this is giving us the ultimate cognitive edge
for - key insight - human vs artificial intelligence - humans will create the best ideas
key insight - human vs artificial intelligence - humans will create the best ideas - why?
- because we are
  - slower
  - imperfect
  - less rational
  - drifting away
  - forgetting
- and we learn from the mistakes we make and from different perspectives shared with us
-
Human beings don't do that. We understand that the chair is not a specifically shaped object but something you consider, and once you understood that concept, that principle, you see chairs everywhere; you can create completely new chairs.
for - comparison - human vs artificial intelligence
question - comparison - human vs artificial intelligence - Can't an AI also consider things we sit on to then generalize their classification algorithm?
-
the brain is slow, it is lousy and it is selfish, and still it is working. Yeah, look around you: working brains wherever you look. And the reason for this is that we totally think differently than any kind of digital and computer system you know of, and many engineers from the AI field haven't figured out that massive difference yet
for - comparison - brain vs machine intelligence
comparison - brain vs machine intelligence
- the brain is inferior to the machine in many ways
  - many times slower
  - much less accurate
  - the network of neurons is mostly isolated in its own local environment, not connected to a global network like the internet
- Yet, it is able to perform extraordinary things in spite of that
  - It is able to create meaning out of sensory inputs
  - Can we really say that a machine can do this?
-
is this blue ball with three stumps a chair, or this strange design object here? Because you can sit on it. And what you see here is the difference, the main difference, between the computer world and the brain world
for - comparison - brain vs machine intelligence - comparison - human intelligence vs artificial intelligence
comparison - human intelligence vs artificial intelligence
- AI depends on feeding the AI system with huge datasets that it can
  - analyze and make correlations and
  - perform big data analysis
- Humans don't operate the same way
-
for - Henning Beck - neuroscientist - video - youtube - The Brain vs Artificial Intelligence
Tags
- comparison - brain vs machine intelligence - what brains and consciousness can do but AI cannot
- question - comparison - human vs artificial intelligence - Can't an AI also consider things we sit on to then generalize their classification algorithm?
- comparison - human intelligence vs artificial intelligence
- video - youtube - The Brain vs Artificial Intelligence
- key insight - human vs artificial intelligence - humans will create the best ideas
- Henning Beck - neuroscientist
Annotators
URL
-
- Jul 2024
-
docs.google.com
-
Please sign these letters to legislators, telling them that misguided AI laws will hurt startups and small companies and discourage AI innovation and investment in California.

AI offers tremendous benefits, but many fear AI and worry about potential harm and misuse. These are valid concerns for everyone, including legislators, but laws that promote safe and equitable AI should be fact-based, straightforward, and universally applied. Legislators in Sacramento are considering two proposals, AB 2930 and SB 1047, that would impose costly and unpredictable burdens on AI developers, including anticipating and preventing future harmful uses of AI. Though well-intended, these bills will dampen and inhibit innovation, permanently embed today's AI leaders as innovation gatekeepers, and drive investment and talent to other states and countries.
-
-
github.com
-
The Computational Democracy Project
We bring data science to deliberative democracy, so that governance may better reflect the multidimensionality of the public's will.
-
-
moshi.chat
-
https://moshi.chat/
Moshi is an experimental conversational AI.
-
-
sillytavernai.com
-
SillyTavern<br /> https://sillytavernai.com/
-
-
www.nytimes.com
-
Kurutz, Steven. “Now You Can Read the Classics With A.I.-Powered Expert Guides.” The New York Times, June 13, 2024, sec. Style. https://www.nytimes.com/2024/06/13/style/now-you-can-read-the-classics-with-ai-powered-expert-guides.html.
Tags
- William James
- Great Books of the Western World
- James Joyce
- John Kaag
- read
- Margaret Atwood
- John Dubuque
- John Banville
- Rebind.ai
- suicide
- artificial intelligence for reading
- Clancy Martin
- great books idea
- The Great Books Movement
- reading practices
- chatbots
- John Muir
- Roxane Gay
- Friedrich Nietzsche
- Martin Heidegger
- Marlon James
- philosophy
- Elaine Pagels
- Laura Kipnis
Annotators
URL
-
- Jun 2024
-
disruptedjournal.postdigitalcultures.org
-
In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”
Comment by chrisaldrich: Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose stated mission is to connect people, while its actual mission is to make money by seemingly radicalizing people to the extremes of our political spectrum.
This highlights the fact that many may look at content moderation on platforms like Facebook, including the removal of voices or the deplatforming of people like Donald J. Trump or Alex Jones, as an anti-democratic move. In fact it is not. Because of Facebook's active move to accelerate extreme ideas by pushing them algorithmically, the platform is actively being un-democratic. Democratic behavior on Facebook would look like one voice, one account, and reach only commensurate with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.
-
-
www.youtube.com
-
Claude Shannon Ultimate Machine
Could this be the end result of artificial intelligence?
cross reference: - Niklas Luhmann's jokerzettel - War Games (1983) and "Joshua" (WOPR)
-
-
link.springer.com
-
illuminate.withgoogle.com
-
https://illuminate.withgoogle.com/
via
Interesting experiment from Google that creates an NPR-like discussion about any academic paper.<br><br>It definitely suggests some cool possibilities for science communication. And the voices, pauses, and breaths really scream public radio. Listen to at least the first 30 seconds. pic.twitter.com/r4ScqenF1d
— Ethan Mollick (@emollick) June 1, 2024
-
-
-
www.perplexity.ai
- May 2024
-
www.bardsandsages.com
-
Meanwhile, Amazon and other ebook retailers are pushing full-steam ahead to promote AI-generated content at the expense of real authors and artists. Publishers who actually pay authors and artists and editors now have to compete with AI-generated material churned out in bulk and sold at 99 cents. And while it is easy to shrug this off if you are outside the industry and claim, "Well, the cream rises to the top," anyone who has been around the industry long enough knows that what rises to the top is what Amazon's algorithms push there. And the AI bots are much better at manipulating the algorithms than real people.
Amazon cares about money; it doesn't care about humans.
-
- Apr 2024
-
bigthink.com
-
Have you ever had a meaningful conversation with Siri or Alexa or Cortana? Of course not.
That said, I have had some pretty amazing conversations with ChatGPT-4. I've found it to be useful, too, for brainstorming. In one recent case (which I blogged about on my personal blog), the AI helped me through figuring out a structural issue with my zettelkasten.
-
Artificial intelligence is already everywhere
-
-
www.bloodinthemachine.com
-
We’re over a year into the AI gold rush now, and corporations using top AI services report unremarkable gains, AI salesmen have been asked to rein in their promises for fear of underdelivering on them, an anti-generative AI cultural backlash is growing, the first high-profile piece of AI-centered consumer hardware crashed and burned in its big debut, and a bombshell scientific paper recently cast serious doubt on AI developers’ ability to continue to dramatically improve their models’ performance. On top of all that, the industry says that it can no longer accurately measure how good those models even are. We just have to take the companies at their word when they inform us that they’ve “improved capabilities” of their systems.
-
-
-
"The problem is that even the best AI software can only take a poor-quality image so far, and such programs tend to over sharpen certain lines, resulting in strange artifacts," Foley said. Foley suggested that Netflix should have "at the very least" clarified that images had been altered "to avoid this kind of backlash," noting that "any kind of manipulation of photos in a documentary is controversial because the whole point is to present things as they were." Hollywood's increasing use of AI has indeed been controversial, with screenwriters' unions opposing AI tools as "plagiarism machines" and artists stirring recent backlash over the "experimental" use of AI art in a horror film. Even using AI for a movie poster, as Civil War did, is enough to generate controversy, the Hollywood Reporter reported.
-
-
theamericanscholar.org
-
despite Rus’s assurances, people do fear that robots and AI will steal their jobs, or at best demote them to underlings. To counter this dystopian idea, Rus contends that technology usually creates jobs: between 1980 and 2015, for example, computers eliminated 3.5 million jobs but generated 19 million new ones. Perhaps so, but were these good jobs? Quality should count as much as quantity. And for every inspiring new application of robotics in the book, Rus includes another idea that made me cringe.

On one page, she describes a smart glove that could help an elderly stroke victim regain the use of her hands and write a birthday card to a grandchild. Wonderful. Then she talks about adapting the glove for children, to take control of their hands when learning to write and circumvent the hard work of mastering this skill. Dreadful. In spite of her warnings about the need to properly train robots and AI systems, she seemingly forgets that human muscles and nerves need training as well, and that failing and flailing are integral parts of learning. Short-circuiting that process has a cost.

Such lessons apply to adults, too. As much as I’d love to turn all yard work over to feral Roombas and never rake leaves or shovel snow again, such chores do instill discipline and pride. They’re also forms of physical activity, something most of us need more of. Indeed, it would be one thing if people offloaded tedious chores to robots and spent their free time hiking mountains or running marathons—but lying on the couch eating potato chips seems likelier. We’ve all seen the human blobs of Wall-E.
-
-
Local file
-
it follows that no purchasable article can supply our individual wants so far as a key to our stock of information is concerned. We shall always be mainly dependent in this direction upon our own efforts to meet our own situation.
I appreciate his emphasis on "always" here. Though given our current rise of artificial intelligence and ChatGPT, this is obviously a problem which people are attempting to overcome.
Sadly, AIs seem to be designed for the commercial masses in the same way that Google Search is (cross reference: https://hypothes.is/a/jx6MYvETEe6Ip2OnCJnJbg), so without a large enough model of your own interests, can an AI solve your personal problems? And if so, how much data will it really need? To solve this problem, you need your own storehouse of personally curated data to teach an AI. And even if you have such a store, will the AI proceed in the direction you would in reality, or will it represent some stochastic or random process from the point it leaves your personal data set?
How do we get around the chicken-and-egg problem here? What else might the solution space look like outside of this sketch?
-
- Mar 2024
-
research.ibm.com
-
https://research.ibm.com/blog/retrieval-augmented-generation-RAG
PK indicates that folks using footnotes in AI are using RAG methods.
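For illustration, here is a minimal sketch of the retrieval step that RAG-style footnotes rely on. The bag-of-words scoring, corpus, and all names are assumptions for demonstration only; real systems use dense embeddings for retrieval and an LLM for the generation step:

```python
# Toy retrieval step for retrieval-augmented generation (RAG):
# rank sources by word overlap with the query, return the top-k ids.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split()))

def retrieve_with_footnotes(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the top-k source ids, which the generator can cite as footnotes."""
    ranked = sorted(corpus, key=lambda doc_id: score(query, corpus[doc_id]), reverse=True)
    return ranked[:k]

corpus = {
    "[1]": "retrieval augmented generation grounds answers in sources",
    "[2]": "zettelkasten notes link ideas together",
    "[3]": "retrieval of sources lets a model cite footnotes",
}
citations = retrieve_with_footnotes("how does retrieval ground generation in sources", corpus)
```

The retrieved source ids are what the generation step can then attach to its answer as footnotes, which is how RAG output ends up looking cited.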
-
-
www.cassidyai.com
-
Build AI Automations & Assistants Trained on Your Business.
-
-
-
We can't use algorithms to filter for quality because they're not designed to. They're designed to steer you towards whatever's most profitable for their creators. That puts the onus on us, as users, to filter out the noise, and that is increasingly difficult.
-
-
www.storycenter.org
-
So AI could be a tool to help move people into expression, to move past creative blocks
To what extent are we using AI in this way in ds106? That is, using it as a starting point to build on rather than an end product?
-
-
pluralistic.net
-
"The Curse of Recursion: Training on Generated Data Makes Models Forget," a recent paper, goes beyond the ick factor of AI that is fed on botshit and delves into the mathematical consequences of AI coprophagia: https://arxiv.org/abs/2305.17493

Co-author Ross Anderson summarizes the finding neatly: "using model-generated content in training causes irreversible defects": https://www.lightbluetouchpaper.org/2023/06/06/will-gpt-models-choke-on-their-own-exhaust/

Which is all to say: even if you accept the mystical proposition that more training data "solves" the AI problems that constitute total unsuitability for high-value applications that justify the trillions in valuation analysts are touting, that training data is going to be ever more elusive.
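The mechanism behind those "irreversible defects" can be shown with a toy simulation (the parameters are mine, not the paper's): fit a Gaussian to data, sample a synthetic dataset from the fit, refit on the synthetic data, and repeat. Each generation trains only on the previous generation's output, and the fitted variance drifts toward zero:

```python
# Toy model collapse: recursively fit a Gaussian to samples drawn
# from the previous generation's fit. Estimation error compounds,
# and the distribution's spread (its tails) is lost over time.
import random
import statistics

random.seed(42)

def next_generation(mu: float, sigma: float, n: int = 20) -> tuple[float, float]:
    """Sample n points from the current model and refit mean/stdev."""
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

mu, sigma = 0.0, 1.0
initial_sigma = sigma
for _ in range(500):  # 500 generations of training on generated data
    mu, sigma = next_generation(mu, sigma)
```

This mirrors the paper's observation that rare events in the tails of the distribution disappear first: once the model stops generating them, no later generation can relearn them, which is what makes the defect irreversible.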
-
Botshit can be produced at a scale and velocity that beggars the imagination. Consider that Amazon has had to cap the number of self-published "books" an author can submit to a mere three books per day: https://www.theguardian.com/books/2023/sep/20/amazon-restricts-authors-from-self-publishing-more-than-three-books-a-day-after-ai-concerns
-
For people inflating the current AI hype bubble, this idea that making the AI "more powerful" will correct its defects is key. Whenever an AI "hallucinates" in a way that seems to disqualify it from the high-value applications that justify the torrent of investment in the field, boosters say, "Sure, the AI isn't good enough…yet. But once we shovel an order of magnitude more training data into the hopper, we'll solve that, because (as everyone knows) making the computer 'more powerful' solves the AI problem"
-
As the lawyers say, this "cites facts not in evidence." But let's stipulate that it's true for a moment. If all we need to make the AI better is more training data, is that something we can count on?

Consider the problem of "botshit," Andre Spicer and co's very useful coinage describing "inaccurate or fabricated content" shat out at scale by AIs: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4678265

"Botshit" was coined last December, but the internet is already drowning in it. Desperate people, confronted with an economy modeled on a high-speed game of musical chairs in which the opportunities for a decent livelihood grow ever scarcer, are being scammed into generating mountains of botshit in the hopes of securing the elusive "passive income": https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
-
-
writingslowly.com
-
https://writingslowly.com/2024/03/13/the-card-index.html
Richard ties together the "aliveness" of card indexes, phonographs, and artificial intelligence in an interesting way. He also links it to the living surroundings of indigenous cultures which see these things in ways that westerners don't.
-
-
theinformed.life
-
This episode's transcript was produced by an AI. If you notice any errors, please get in touch.
puts the work on the reader/user rather than the producer
-
-
www.404media.co
-
- Feb 2024
-
chat.openai.com
-
https://chat.openai.com/g/g-z5XcnT7cQ-zettel-critique-assistant
Zettel Critique Assistant<br /> By Florian Lengyel<br /> Critique Zettels following three rules: Zettels should have a single focus, WikiLinks indicate a shift in focus, Zettels should be written for your future self. The GPT will suggest how to split multi-focused notes into separate notes. Create structure note from a list of note titles and abstracts.
ᔥ[[ZettelDistraction]] in Share with us what is happening in your ZK this week. February 20, 2024
-
-
www.sciencedirect.com
-
Despite the opportunities of AI-based technologies for teaching and learning, they have also ethical issues.
Yes, I agree with this statement. Ethical issues range from academic integrity concerns to data privacy. AI technology based on algorithmic applications intentionally collects data from its users, and users do not specifically know what kinds of data, or what quantities, are being collected.
-
-
Local file
-
Joy, Bill. “Why the Future Doesn’t Need Us.” Wired, April 1, 2000. https://www.wired.com/2000/04/joy-2/.
Annotation url: urn:x-pdf:753822a812c861180bef23232a806ec0
-
The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.
Bill Joy's mention that insurmountable problems can "take on a life of [their] own" is a spectacular reason for having a solid definition of what "life" is, so that we might have better means of subverting it in specific and potentially catastrophic situations.
-
The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it’s hard to imagine pursuing them only in national laboratories. With their widespread commercial pursuit, enforcing relinquishment will require a verification regime similar to that for biological weapons, but on an unprecedented scale. This, inevitably, will raise tensions between our individual privacy and desire for proprietary information, and the need for verification to protect us all. We will undoubtedly encounter strong resistance to this loss of privacy and freedom of action.
While Joy looks at the Biological and Chemical Weapons Conventions as well as nuclear nonproliferation ideas, the entirety of what he's looking at is also embedded in the idea of gun control in the United States as well. We could choose better, but we actively choose against our better interests.
What role does toxic capitalism have in pushing us towards these antithetical goals? The gun industry and gun lobby have had tremendous interest on that front. Surely ChatGPT and other LLM and AI tools will begin pushing on the profitmaking levers shortly.
-
Now, as then, we are creators of new technologies and stars of the imagined future, driven—this time by great financial rewards and global competition—despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.
Tags
- futurism
- Bill Joy
- technology
- lobbying
- toxic capitalism
- future of work
- read
- societal interests
- knowledge-enabled mass destruction (KMD)
- personal interests
- weapons of mass destruction
- quotes
- origin of life
- surveillance capitalism
- professional ethics
- artificial intelligence control
- definitions
- genetics nanotechnology robotics (GNR)
- artificial intelligence
- George Gilder
- gun control
- Dan Allosso Book Club
- evolution
- abortion
Annotators
-
- Jan 2024
-
Local file
-
How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030.
In 2000, Bill Joy predicted that advances in computing would allow an intelligent robot to be built by 2030.
-
in his history of such ideas, Darwin Among the Machines, George Dyson warns: “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”
-
Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.
As a case in point, the self-replication of misinformation on social media networks has become a substantial physical risk in the early 21st century, causing not only swings in elections, but riots, takeovers, swings in the stock market (the GameStop short squeeze, January 2021), and mob killings. It is incredibly difficult to create risk assessments for these sorts of future harms.
In biology, we see major damage to a wide variety of species as the result of uncontrolled self-replication. We call it cancer.
We also see programmed processes in biological settings including apoptosis and necrosis as means of avoiding major harms. What might these look like with respect to artificial intelligence?
-
Moravec’s view is that the robots will eventually succeed us—that humans clearly face extinction.
Joy contends that one of Hans Moravec's views in his book Robot: Mere Machine to Transcendent Mind is that robots will push the human species into extinction in much the same way that early North American placental species eliminated the South American marsupials.
-
Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.
Just as mosquitoes can "acquire" (evolve) DDT resistance or bacteria might evolve antibiotic resistance, might not humans evolve AI resistance? How fast might we do this? On what timeline? Will the pressure be slowly built up over time, or will the onset be so quick that extinction is the only outcome?
Tags
- uncontrolled self-replication
- Bill Joy
- analogies
- robots
- misinformation
- extinction events
- life
- apoptosis
- cancer
- George Dyson
- quotes
- machines
- history of information
- necrosis
- GameStop
- short squeeze
- artificial intelligence
- self-replication
- robot overlords
- human extinction
- predictions
- Hans Moravec
- extinction
- resistance
- intellectual history
- evolution
- nature
Annotators
-
-
bavatuesdays.com bavatuesdays.com
-
I could totally see this UI with a video generated version of Niklas Luhmann answering questions using the training set of notes in his online zettelkasten at https://niklas-luhmann-archiv.de/bestand/zettelkasten/tutorial
syndication link: https://bavatuesdays.com/ai106-long-live-the-new-flesh/comment-page-1/#comment-388943
-
read [[Jim Groom]] in [AI106: Long Live the New Flesh](https://bavatuesdays.com/ai106-long-live-the-new-flesh/comment-page-1/)
-
-
oblivion.university oblivion.university
-
https://oblivion.university/
ᔥ[[Jim Groom]] in AI106: Long Live the New Flesh
-
-
glaze.cs.uchicago.edu glaze.cs.uchicago.edu
-
There are many reasons for why one should use software like Glaze. An example:
https://www.artnews.com/art-news/news/midjourney-ai-artists-database-1234691955/
Tags
Annotators
URL
-
-
www.artnews.com www.artnews.com
-
Use Glaze, a system designed to protect human artists by disrupting style mimicry, to protect what you create from being stolen under the guise of 'training AI'; the term should really be 'thievery'.
-
- Dec 2023
-
www.linkedin.com www.linkedin.com
-
Matt Gross (He/Him) • Vice President, Digital Initiatives at Archetype Media • 4d • So, here's an interesting project I launched two weeks ago: The HistoryNet Podcast, a mostly automated transformation of HistoryNet's archive of 25,000+ stories into an AI-driven daily podcast, powered by Instaread and Zapier. The voices are pretty good! The stories are better than pretty good! The implications are... maybe terrifying? Curious to hear what you think. Listen at https://lnkd.in/emUTduyC or, as they always say, "wherever you get your podcasts."
https://www.linkedin.com/feed/update/urn:li:activity:7142905086325780480/
One can now relatively easily use various tools in combination with artificial intelligence-based voices and reading to convert large corpuses of text into audiobooks, podcasts or other spoken media.
-
-
www.youtube.com www.youtube.com
-
https://www.youtube.com/watch?v=7xRXYJ355Tg The AI Bias Before Christmas by Casey Fiesler
-
-
-
there's this broader issue of being able to get inside other people's heads; as we're driving down the road, all the time we're looking at other people, and because we have very advanced theories of mind
- for: comparison - artificial vs. human intelligence, example - driving
-
in my view the biggest, most dangerous phenomenon on our planet is human stupidity, not artificial intelligence
-
for: meme - human stupidity is more dangerous than artificial intelligence
-
meme: human stupidity is more dangerous than artificial intelligence
- author:Nikola Danaylov
- date: 2021
-
-
-
ideaflow.app ideaflow.appIdeaflow1
-
https://ideaflow.app/
Audio transcription notes with AI
2023-12-12: Released on Product Hunt https://www.producthunt.com/posts/ideaflow
-
- Nov 2023
-
www.meetup.com www.meetup.com
-
https://www.meetup.com/edtechsocal/events/296723328/
Generative AI: Super Learning Skills with Data Discovery and more!
-
-
-
Research and write your next paper with Jenni AI
-
-
www.scholarcy.com www.scholarcy.com
-
The AI-powered article summarizer
Scholarcy https://www.scholarcy.com/
-
-
www.wizdom.ai www.wizdom.ai
-
Your personal research assistant
-
-
hu.ma.ne hu.ma.ne
-
The AI Pin was just listed on or about 2023-11-10.
Tags
Annotators
URL
-
-
social.coop social.coopMastodon1
-
I use expiration dates and refrigerators to make a point about #AI and over-reliance, and @dajb uses ducks. #nailingit @weareopencoop
—epilepticrabbit @epilepticrabbit@social.coop on Nov 09, 2023, 11:51 at https://mastodon.social/@epilepticrabbit@social.coop/111382329524902140
-
-
ailiteracy.fyi ailiteracy.fyi
-
https://ailiteracy.fyi/
Doug Belshaw joint
-
-
twitter.com twitter.com
-
As an ex-Viv (w/ Siri team) eng, let me help ease everyone's future trauma as well with the Fundamentals of Assisted Intelligence. Make no mistake, OpenAI is building a new kind of computer, beyond just an LLM for a middleware / frontend. Key parts they'll need to pull it off:… https://t.co/uIbMChqRF9
— Rob Phillips 🤖🦾 (@iwasrobbed) October 29, 2023
-
-
www.theguardian.com www.theguardian.com
-
-
www.nytimes.com www.nytimes.com
-
-
www.cnn.com www.cnn.com
-
www.washingtonpost.com www.washingtonpost.com
-
- Oct 2023
-
www.nature.com www.nature.com
-
Wang et al., "Scientific discovery in the age of artificial intelligence," Nature, 2023.
A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.
(NOTE: since Springer/Nature don't allow public pdfs to be linked without a paywall, we can't use hypothesis directly on the pdf of the paper, this link is to the website version of it which is what we'll use to guide discussion during the reading group.)
-
-
-
Three AI Chatbots, Two Books, and One Weird Annotation Experiment by Remi Kalir on September 29, 2023 https://remikalir.com/blog/three-ai-chatbots-two-books-and-one-weird-annotation-experiment/
-
whether or not it was appropriate to write notes in library books (OK according to Bard, nope for ChatGPT and Claude).
An interesting divergent take on writing in library books...
-
-
typeshare.co typeshare.co
-
-
typeshare.co typeshare.co
-
-
-
Envisioning the next wave of emergent AI
Are we stretching too far by saying that AI are currently emergent? Isn't this like saying that card indexes of the early 20th century were computers? In reality they were data storage, and the "computing" took place when humans did the actual data processing/thinking to come up with new results.
Emergence would seem to actually be the point which comes about when the AI takes its own output and continues processing (successfully) on it.
-
-
aboard.com aboard.com
-
A card-based collaboration tool that leverages information visualization. Pinterest for collaborative teams with expandable data.
Looks interesting and I've got a beta invite, but not sure if it fits any of my needs, particularly with an eye toward note taking.
-
- Sep 2023
-
www.theguardian.com www.theguardian.com
-
-
Amazon has become a marketplace for AI-produced tomes that are being passed off as having been written by humans, with travel books among the popular categories for fake work.
-
-
-
Envisioning the next wave of emergent AI: An experimental Future Trends Forum workshop event
-
-
www.britannica.com www.britannica.com
-
R.U.R.: Rossum’s Universal Robots, drama in three acts by Karel Čapek, published in 1920 and performed in 1921. This cautionary play, for which Čapek invented the word robot (derived from the Czech word for forced labour), involves a scientist named Rossum who discovers the secret of creating humanlike machines. He establishes a factory to produce and distribute these mechanisms worldwide. Another scientist decides to make the robots more human, which he does by gradually adding such traits as the capacity to feel pain. Years later, the robots, who were created to serve humans, have come to dominate them completely.
-
-
delong.typepad.com delong.typepad.com
-
What do you do then? You can take the book to someone else who, you think, can read better than you, and have him explain the parts that trouble you. ("He" may be a living person or another book—a commentary or textbook.)
This may be an interesting use case for artificial intelligence tools like ChatGPT which can provide the reader of complex material with simplified synopses to allow better penetration of the material (potentially by removing jargon, argot, etc.)
-
Active Reading
He then pushes a button and "plays back" the opinion whenever it seems appropriate to do so. He has performed acceptably without having had to think.
This seems to be a reasonable argument to make for those who ask, why read? why take notes? especially when we can use search and artificial intelligence to do the work for us. Can we really?
-
-
-
These articles kill me. They sit at surface level and don't delve any deeper for actual insight. And this is why these techniques are necessary. (BTW, these methods go back thousands of years... Tiago didn't invent them.)
-
-
a16z.simplecast.com a16z.simplecast.com
-
https://a16z.simplecast.com/episodes/a-true-second-brain-xrODaBD2
Recommended by Michael Grossman
-
- Aug 2023
-
-
Do not rely on Claude without doing your own independent research.
-
-
remikalir.com remikalir.com
-
Kalir, Remi H. “Playing with Claude.” Academic blog. Remi Kalir (blog), August 25, 2023. https://remikalir.com/blog/playing-with-claude/.
-
-
chat.openai.com chat.openai.comChatGPT1
-
www.youtube.com www.youtube.com
-
&list=PLdAbfZfaH_1I0vD3GsgbIdsLp6id6AOUb&index=9
-
-
Local file Local file
-
Mills, Anna, Maha Bali, and Lance Eaton. “How Do We Respond to Generative AI in Education? Open Educational Practices Give Us a Framework for an Ongoing Process.” Journal of Applied Learning and Teaching 6, no. 1 (June 11, 2023): 16–30. https://doi.org/10.37074/jalt.2023.6.1.34.
Annotation url: urn:x-pdf:bb16e6f65a326e4089ed46b15987c1e7
-
-
-
danallosso.substack.com danallosso.substack.com
-
Remember ChatGPT? It is going to do to the white collar world what robotics and offshoring did to blue collar America. So maybe this isn't the best time to be abandoning the Humanities to focus on vocational training?
This is one of the things that doesn't seem to be explored enough at present, or at least I'm not seeing it outside of the SAG and WGA strikes, where it seems to be a side issue rather than a primary one.
-
-
textfx.withgoogle.com textfx.withgoogle.comTextFX1
-
er.educause.edu er.educause.edu
-
A Generative AI Primer on 2023-08-15 by Brian Basgen
ᔥGeoff Corb in LinkedIn update (accessed:: 2023-08-26 01:34:45)
-
- Jul 2023
-
arxiv.org arxiv.org
-
Epstein, Ziv, Hertzmann, Aaron, Herman, Laura, Mahari, Robert, Frank, Morgan R., Groh, Matthew, Schroeder, Hope et al. "Art and the science of generative AI: A deeper dive." ArXiv, (2023). Accessed July 21, 2023. https://doi.org/10.1126/science.adh4451.
Abstract
A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society. Understanding the impact of generative AI - and making policy decisions around it - requires new interdisciplinary scientific inquiry into culture, economics, law, algorithms, and the interaction of technology and creativity. We argue that generative AI is not the harbinger of art's demise, but rather is a new medium with its own distinct affordances. In this vein, we consider the impacts of this new medium on creators across four themes: aesthetics and culture, legal questions of ownership and credit, the future of creative work, and impacts on the contemporary media ecosystem. Across these themes, we highlight key research questions and directions to inform policy and beneficial uses of the technology.
-
- Jun 2023
-
www.imm.dtu.dk www.imm.dtu.dk
-
Reflection enters the picture when we want to allow agents to reflect upon themselves and their own thoughts, beliefs, and plans. Agents that have this ability we call introspective agents.
-
-
learn-us-east-1-prod-fleet01-xythos.content.blackboardcdn.com learn-us-east-1-prod-fleet01-xythos.content.blackboardcdn.com
-
The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart
AI has recently become such a big thing in our lives. For a while I was seeing ChatGPT and Snapchat AI all over the media. I feel like people ask these sites stupid questions that they already know the answer to because they don't want to take a few minutes to think about the answer. I found a website stating how many people use AI, and not surprisingly, it shows that 27% of Americans say they use it several times a day. I can't imagine how many people use it per year.
Tags
Annotators
URL
-
-
docdrop.org docdrop.org
-
there is a scenario, possibly a likely scenario, where we live in a Utopia where we really never have to worry again, where we stop messing up our planet, because intelligence is not a bad commodity; more intelligence is good. The problems in our planet today are not because of our intelligence, they are because of our limited intelligence
-
limited (machine) intelligence
- cannot help but exist
- if the original (human) authors of the AI code are themselves limited in their intelligence
-
comment
- this limitation is essentially what will result in AI progress traps
- indeed, progress and its shadow artefacts, progress traps, are the proper framework to analyze the existential dilemma posed by AI
-
-
-
www.youtube.com www.youtube.com
-
OEG Live: Audiobook Versions of OER Textbooks (and AI Implications)
Host: Alan Levine<br /> Panelists: Brian Barrick (LA Harbor College), Delmar Larsen, Brenna, Jonathan, Amanda Grey (KPU), Steel Wagstaff (Pressbooks).
Find out more information and discuss this topic on OEG Connect: https://oeg.pub/439V1Bc
-
-
writeout.ai writeout.ai
-
Recommended by Steel Wagstaff at OEG Live 2023-06-02.
-
-
adjacentpossible.substack.com adjacentpossible.substack.com
-
Project Tailwind by Steven Johnson
-
I’ve also found that Tailwind works extremely well as an extension of my memory. I’ve uploaded my “spark file” of personal notes that date back almost twenty years, and using that as a source, I can ask remarkably open-ended questions—“did I ever write anything about 19th-century urban planning” or “what was the deal with that story about Houdini and Conan Doyle?”—and Tailwind will give me a cogent summary weaving together information from multiple notes. And it’s all accompanied by citations if I want to refer to the original direct quotes for whatever reason.
This sounds like the sort of personalized AI tool I've been wishing for since the early ChatGPT models if not from even earlier dreams that predate that....
-
- May 2023
-
-
Deep Learning (DL): A Technique for Implementing Machine Learning
- Subfield of ML that uses specialized techniques involving multi-layer (2+) artificial neural networks
- Layering allows cascaded learning and abstraction levels (e.g. line -> shape -> object -> scene)
- Computationally intensive; enabled by clouds, GPUs, and specialized HW such as FPGAs, TPUs, etc.
[29] AI - Deep Learning
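The cascaded-abstraction idea above can be sketched as a minimal two-layer forward pass. This is purely illustrative (random, untrained weights; layer sizes chosen arbitrarily), not the architecture any particular system uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The non-linearity between layers is what lets abstractions cascade
    return np.maximum(0.0, x)

# Layer 1: raw input (e.g. pixel-like features) -> low-level features ("lines")
W1 = rng.normal(size=(8, 4))
# Layer 2: low-level features -> higher-level features ("shapes")
W2 = rng.normal(size=(4, 2))

def forward(x):
    h = relu(x @ W1)       # first level of abstraction
    return relu(h @ W2)    # second, more abstract level

x = rng.normal(size=(1, 8))    # one input example
y = forward(x)
print(y.shape)  # (1, 2)
```

Stacking more such layers (and training the weights) is what the "2+" layering in the slide refers to; each layer builds on the representations the one below it computed.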
-
-
en.wikiquote.org en.wikiquote.org
-
The object of the present volume is to point out the effects and the advantages which arise from the use of tools and machines ;—to endeavour to classify their modes of action ;—and to trace both the causes and the consequences of applying machinery to supersede the skill and power of the human arm.
[28] AI - precedents...
-
-
openai.com openai.comGPT-41
-
Safety & alignment
[25] AI - Alignment
Tags
Annotators
URL
-
-
ourworldindata.org ourworldindata.orgBooks1
-
A book is defined as a published title with more than 49 pages.
[24] AI - Bias in Training Materials
-
-
www.notepage.net www.notepage.net
-
Epidemiologist Michael Abramson, who led the research, found that the participants who texted more often tended to work faster but score lower on the tests.
[21] AI - Skills Erosion
-
-
www.technologyreview.com www.technologyreview.com
-
An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.
[21] AI Nuances
-
-
serokell.io serokell.io
-
According to him, there are several goals connected to AI alignment that need to be addressed:
[20] AI - Alignment Goals
-
-
cointelegraph.com cointelegraph.com
-
The AI developers came under intense scrutiny in Europe recently, with Italy being the first Western nation to temporarily ban ChatGPT
[19] AI - Legal Response
-
-
www.visualcapitalist.com www.visualcapitalist.com
-
The following table lists the results that we visualized in the graphic.
[18] AI - Increased sophistication
-
-
bard.google.com bard.google.comBard1
-
Meet Bard: your creative and helpful collaborator, here to supercharge your imagination, boost your productivity, and bring your ideas to life.
-
-
hamlet.andromedayelton.com hamlet.andromedayelton.com
-
https://hamlet.andromedayelton.com/
- Given a thesis, find out which other theses are most conceptually similar.
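HAMLET itself uses neural document embeddings (doc2vec); a much simpler illustration of "find the most conceptually similar document" is bag-of-words cosine similarity. The following is a hypothetical toy sketch, not HAMLET's actual method:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two word-count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

theses = {
    "t1": "neural networks for machine translation",
    "t2": "statistical machine translation systems",
    "t3": "medieval manuscript preservation techniques",
}
vecs = {k: Counter(v.split()) for k, v in theses.items()}

# Rank the other theses by similarity to t1
query = vecs["t1"]
ranked = sorted((k for k in vecs if k != "t1"),
                key=lambda k: cosine(query, vecs[k]), reverse=True)
print(ranked[0])  # t2 — it shares "machine translation" with t1
```

Embedding-based approaches like doc2vec go further by also matching documents that use different words for the same concepts, which raw word counts cannot do.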
-
-
librarian.aedileworks.com librarian.aedileworks.com
-
The promise of using machine learning on your own notes to connect with external sources is not new. Andromeda Yelton’s HAMLET is six years old.
-
Asking a computer to create a glossary for you doesn’t make you any smarter than having a book that comes with a glossary.
-
-
www.youtube.com www.youtube.com
-
Tagging and linking with AI (Napkin.one) by Nicole van der Hoeven
https://www.youtube.com/watch?v=p2E3gRXiLYY
Nicole underlines the value of a good user interface for traversing one's notes. She'd had issues with tagging things in Obsidian using their #tag functionality, but never with their [[WikiLink]] functionality. Something about the autotagging done by Napkin's artificial intelligence makes the process easier for her. Some of this may be down to how their user interface makes it easier/more intuitive as well as how it changes and presents related notes in succession.
Most interesting however is the visual presentation of notes and tags in conjunction with an outliner for taking one's notes and composing a draft using drag and drop.
Napkin as a visual layer over tooling like Obsidian, Logseq, et al. would be a much more compelling choice for me in terms of taking my pre-existing data and doing something useful with it rather than just creating yet another digital copy of all my things (and potentially needing sync to keep them up to date).
What is Napkin doing with all of their user's data?
-
- Apr 2023
-
crsreports.congress.gov crsreports.congress.gov
-
Abstract
Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as Open AI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection as well as how training and using these programs might infringe copyrights in other works.
-
-
-
It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.
This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like GPT's.
-
- Mar 2023
-
dl.acm.org dl.acm.org
-
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.
Would the argument here for stochastic parrots also potentially apply to or could it be abstracted to Markov monkeys?
-
-
www.nytimes.com www.nytimes.com
-
A.I. Is Mastering Language. Should We Trust What It Says?<br /> by Steven Johnson, art by Nikita Iziev
Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.
When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing, papering over the problems and helping bring the bad things too easily to pass?
-
We know from modern neuroscience that prediction is a core property of human intelligence. Perhaps the game of predict-the-next-word is what children unconsciously play when they are acquiring language themselves: listening to what initially seems to be a random stream of phonemes from the adults around them, gradually detecting patterns in that stream and testing those hypotheses by anticipating words as they are spoken. Perhaps that game is the initial scaffolding beneath all the complex forms of thinking that language makes possible.
Is language acquisition a very complex method of pattern recognition?
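The "predict-the-next-word game" can be made concrete with a toy bigram model: count which word follows which, then predict the most frequent continuation. This is a deliberately crude sketch; real language models condition on far longer contexts than one word:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Bigram statistics: count which word follows which
following = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    following[w][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in "training"
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

The child-as-hypothesis-tester analogy in the passage maps onto updating these counts as more speech is heard, then checking predictions against what is actually said next.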
-
How do we make them ‘‘benefit humanity as a whole’’ when humanity itself can’t agree on basic facts, much less core ethics and civic values?
-
Another way to widen the pool of stakeholders is for government regulators to get into the game, indirectly representing the will of a larger electorate through their interventions.
This is certainly "a way", but history has shown, particularly in the United States, that government regulators are unlikely to get involved until it's far too late, if at all. Typically they regulate not just after maturity, but only when massive failure may cause issues for the wealthy, and then the "regulation" is to bail them out.
Suggesting this here is so pie-in-the sky that it only creates a false hope (hope washing?) for the powerless. Is this sort of hope washing a recurring part of
-
OpenAI has not detailed in any concrete way who exactly will get to define what it means for A.I. to ‘‘benefit humanity as a whole.’’
Who gets to make decisions?
-
Whose values do we put through the A.G.I.? Who decides what it will do and not do? These will be some of the highest-stakes decisions that we’ve had to make collectively as a society.’’
A similar set of questions might be asked of our political system. At present, the oligopolic nature of our electoral system is heavily biasing our direction as a country.
We're heavily underrepresented on a huge number of axes.
How would we change our voting and representation systems to better represent us?
-
Should we build an A.G.I. that loves the Proud Boys, the spam artists, the Russian troll farms, the QAnon fabulists?
What features would we design society towards? Stability? Freedom? Wealth? Tolerance?
How might long term evolution work for societies that maximized for tolerance given Popper's paradox of tolerance?
-
Right before we left our lunch, Sam Altman quoted a saying of Ilya Sutskever’s: ‘‘One thing that Ilya says — which I always think sounds a little bit tech-utopian, but it sticks in your memory — is, ‘It’s very important that we build an A.G.I. that loves humanity.’ ’’
Tags
- ethics
- language acquisition
- artificial intelligence bias
- tech solutionism
- OpenAI
- techbros
- Ilya Sutskever
- oligopolies
- evolution of technology
- decision making
- Proud Boys
- humanity
- read
- power over
- leadership
- representation
- tolerance
- QAnon
- paradox of tolerance
- Karl Popper
- quotes
- governmental regulation
- open questions
- thinking
- shiny object syndrome
- diversity equity and inclusion
- ethical technology
- artificial intelligence
- governance
- ChatGPT
- cultural anthropology
- pattern recognition
Annotators
URL
-
-
www.amazon.com www.amazon.com
-
Impromptu: Amplifying Our Humanity Through AI by Reid Hoffman
via Friends of the Link
-
-
www.nybooks.com www.nybooks.com
-
Primary care physician Gavin Francis reviews two books on the importance of forgetting, as part of a larger reflection on memory.
-
-
web.hypothes.is web.hypothes.is
-
-
Annotation and AI Starter Assignments<br /> by Jeremy Dean
- students as fact-checkers
- students as content experts
- students as editors
-
-
-
www.wired.com www.wired.com
-
the apocalypse they refer to is not some kind of sci-fi takeover like Skynet, or whatever those researchers thought had a 10 percent chance of happening. They’re not predicting sentient evil robots. Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force.
Not Skynet, but social disruption
-
-
chat.openai.com chat.openai.comChatGPT1
-
ChatGPT: This is a free research preview. 🔬 Our goal is to get external feedback in order to improve our systems and make them safer. 🚨 While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.
-
- Feb 2023
-
www.youtube.com www.youtube.comYouTube1
-
Sam Matla talks about the collector's fallacy in a negative light, and for many/most, he might be right. But for some, collecting examples and evidence of particular things is crucially important. The key is to have some idea of what you're collecting and why.
Historians collecting small facts over time may seem this way, but out of their collection can emerge patterns which otherwise would never have been seen.
cf: Keith Thomas article
concrete examples of this to show the opposite?
Relationship to the idea of AI coming up with black box solutions via their own method of diffuse thinking
-
-
Local file Local file
-
Certainly, computerization might seem to resolve some of the limitations of systems like Deutsch’s, allowing for full-text search or multiple tagging of individual data points, but an exchange of cards for bits only changes the method of recording, leaving behind the reality that one must still determine what to catalogue, how to relate it to the whole, and the overarching system.
Despite the affordances of recording, searching, tagging made by computerized note taking systems, the problem still remains what to search for or collect and how to relate the smaller parts to the whole.
customer relationship management vs. personal knowledge management (or perhaps more important knowledge relationship management, the relationship between individual facts to the overall whole) suggested by autocomplete on "knowl..."
-
One might then say that Deutsch’s index developed at the height of the pursuit of historical objectivity and constituted a tool of historical research not particularly innovative or limited to him alone, given that the use of notecards was encouraged by so many figures, and it crystallized a positivistic methodology on its way out.
Can zettelkasten be used for methodologies other than positivistic ones?
-
-
www.cyberneticforests.com www.cyberneticforests.com
-
https://www.cyberneticforests.com/ai-images
Critical Topics: AI Images is an undergraduate class delivered for Bradley University in Spring 2023. It is meant to provide an overview of the context of AI art making tools and connects media studies, new media art, and data ethics with current events and debates in AI and generative art. Students will learn to think critically about these tools by using them: understand what they are by making work that reflects the context and histories of the tools.
-
-
wordcraft-writers-workshop.appspot.com wordcraft-writers-workshop.appspot.com
-
Sloan, Robin. “Author’s Note.” Experimental fiction. Wordcraft Writers Workshop, November 2022. https://wordcraft-writers-workshop.appspot.com/stories/robin-sloan.
brilliant!
-
"I have affirmed the premise that the enemy can be so simple as a bundle of hate," said he. "What else? I have extinguished the light of a story utterly.
How fitting that the amanuensis in a short story written with the help of artificial intelligence has done the opposite of what the author intended!
-
-
wordcraft-writers-workshop.appspot.com wordcraft-writers-workshop.appspot.com
-
Wordcraft Writers Workshop by Andy Coenen (PAIR), Daphne Ippolito (Brain Research), Ann Yuan (PAIR), Sehmon Burnam (Magenta)
cross reference: ChatGPT
-
LaMDA was not designed as a writing tool. LaMDA was explicitly trained to respond safely and sensibly to whomever it’s engaging with.
-
LaMDA's safety features could also be limiting: Michelle Taransky found that "the software seemed very reluctant to generate people doing mean things". Models that generate toxic content are highly undesirable, but a literary world where no character is ever mean is unlikely to be interesting.
-
A recurring theme in the authors’ feedback was that Wordcraft could not stick to a single narrative arc or writing direction.
When does using an artificial intelligence-based writing tool make the writer an editor of the computer's output rather than the writer themself?
-
If I were going to use an AI, I'd want to plug in and give massive priority to my commonplace book and personal notes, followed secondarily by the materials I've read, watched, and listened to.
-
Several participants noted the occasionally surreal quality of Wordcraft's suggestions.
Wordcraft's hallucinations can create interesting and creatively surreal suggestions.
How might one dial up or down the ability to hallucinate or create surrealism within an artificial intelligence used for thinking, writing, etc.?
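One such "dial" already exists in most text generators: the sampling temperature, which sharpens or flattens the model's next-token distribution before sampling. A generic sketch with made-up logits (not Wordcraft's actual API):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(42)):
    # temperature < 1 sharpens the distribution (safer, more predictable words);
    # temperature > 1 flattens it (weirder, more "surreal" choices)
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits), weights=probs, k=1)[0]

# Hypothetical next-word scores from a model
logits = {"walked": 3.0, "flew": 1.0, "dissolved": 0.2}

# At very low temperature the top-scoring word dominates almost surely;
# at high temperature even "dissolved" becomes a plausible pick
print(sample_with_temperature(logits, temperature=0.1))
```

Turning this knob up is one concrete way to invite the surreal suggestions the writers noticed; turning it down trades surprise for coherence.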
-
Writers struggled with the fickle nature of the system. They often spent a great deal of time wading through Wordcraft's suggestions before finding anything interesting enough to be useful. Even when writers struck gold, it proved challenging to consistently reproduce the behavior. Not surprisingly, writers who had spent time studying the technical underpinnings of large language models or who had worked with them before were better able to get the tool to do what they wanted.
Because one may need to spend an inordinate amount of time filtering through potentially bad suggestions of artificial intelligence, the time and energy spent keeping a commonplace book or zettelkasten may pay off magnificently in the long run.
-
Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.
Examples of artificial intelligence pushing toward pre-existing biases based on training data sets.
-
Wordcraft tended to produce only average writing.
How to improve on this state of the art?
-
“...it can be very useful for coming up with ideas out of thin air, essentially. All you need is a little bit of seed text, maybe some notes on a story you've been thinking about or random bits of inspiration and you can hit a button that gives you nearly infinite story ideas.” - Eugenia Triantafyllou
Eugenia Triantafyllou is talking about crutches for creativity and inspiration, but seems to miss the value of collecting interesting tidbits along the road of life that one can use later. Instead, the emphasis here shifts to relying on an artificial intelligence to do it for you at the "hit of a button". If this is the case, then why not just let the artificial intelligence do all the work for you?
This is the area where the cultural loss of mnemonics used in orality or even the simple commonplace book will make us easier prey for (over-)reliance on technology.
Is serendipity really serendipity if it's programmed for you?
-
The authors agreed that the ability to conjure ideas "out of thin air" was one of the most compelling parts of co-writing with an AI model.
Again note the reference to magic with respect to the artificial intelligence: "the ability to conjure ideas 'out of thin air'".
-
Wordcraft shined the most as a brainstorming partner and source of inspiration. Writers found it particularly useful for coming up with novel ideas and elaborating on them. AI-powered creative tools seem particularly well suited to sparking creativity and addressing the dreaded writer's block.
Just as using a text for writing generative annotations (having a conversation with a text) is a useful exercise for writers and thinkers, creative writers can stand to have similar textual creativity prompts.
Compare Wordcraft affordances with tools like Nabokov's card index (zettelkasten) method, Twyla Tharp's boxes, MadLibs, cadavre exquis, et al.
The key is to have some sort of creativity catalyst so that one isn't working in a vacuum or facing the dreaded blank page.
-
We like to describe Wordcraft as a "magic text editor". It's a familiar web-based word processor, but under the hood it has a number of LaMDA-powered writing features that reveal themselves depending on the user's activity.
The engineers behind Wordcraft refer to it as a "magic text editor". For many, this is a cop-out compared to a more concrete description of what is actually happening under the hood of the machine.
It's also similar to, though subtly different from, the idea of the "magic of note taking", by which writers are talking about the emergent and combinatorial creativity that occurs in that space.
-
The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.
Is LaMDA really able to "learn higher-level concepts" or is it just a large, straightforward information theoretic-based prediction engine?
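The "simple machine" described above, predicting the most likely next word from a score over a vocabulary, can be illustrated with a toy sketch. This is not LaMDA's actual code; the vocabulary and scores below are invented, and a real model computes its scores with a large neural network. The sketch also shows one common way (temperature-scaled sampling) that such systems can be dialed between predictable and surprising output, which bears on the earlier question about tuning surrealism.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over next words.
    Lower temperature sharpens the distribution (predictable picks);
    higher temperature flattens it (more surprising, 'surreal' picks)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(vocab, probs, rng):
    """Draw one word from the vocabulary according to its probability."""
    r = rng.random()
    cumulative = 0.0
    for word, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return word
    return vocab[-1]

# Hypothetical vocabulary and made-up scores for a prompt like
# "The hero drew a ..." -- the model assigns the cliché the top score.
vocab = ["sword", "map", "blueprint", "banjo"]
logits = [4.0, 2.0, 1.0, 0.1]

predictable = softmax(logits, temperature=0.5)
surreal = softmax(logits, temperature=2.0)

rng = random.Random(42)
print(sample_next_word(vocab, predictable, rng))
```

At low temperature "sword" dominates almost entirely, which echoes the writers' complaint above that generations fall into clichés; at high temperature the off-beat continuations become markedly more likely, at the cost of coherence.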
-
Our team at Google Research built Wordcraft, an AI-powered text editor centered on story writing, to see how far we could push the limits of this technology.
Tags
- user interface
- Wordcraft
- training sets
- safety
- programmed creativity
- in-context learning
- structural bias
- magic of note taking
- rhetoric
- technophobia
- writing vs. editing
- combinatorial creativity
- artificial intelligence for writing
- text editors
- writing tools
- hallucination
- creativity catalysts
- blank page brainstorming
- Eugenia Triantafyllou
- magic of artificial intelligence
- predictive text
- cadavre exquis
- emergence
- card index for creativity
- surrealism
- surprise
- limits of creativity
- creative writing
- brainstorming
- Twyla Tharp
- training
- magic
- information theory
- commonplace books
- large language models
- zettelkasten
- technophilia
- Vladimir Nabokov
- blank page
- artificial intelligence bias
- structural racism
- digital amanuensis
- writer's block
- content moderation
- LaMDA
- prompt engineering
- read
- ChatGPTedu
- press of a button
- quotes
- group creativity
- human computer interaction
- open questions
- affordances
- definitions
- PAIR (Google)
- examples
- Mad Libs
- experimental fiction
- tools for creativity
- tools for thought
- serendipity
- corpus linguistics
- artificial intelligence
- Eloi vs Morlocks
- storytelling
- Weapons of Math Destruction
Annotators
URL
-
-
pair.withgoogle.com
-
People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities.
-
-
Local file
-
Ippolito, Daphne, Ann Yuan, Andy Coenen, and Sehmon Burnam. “Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers.” arXiv, November 9, 2022. https://doi.org/10.48550/arXiv.2211.05030.
See also: https://wordcraft-writers-workshop.appspot.com/learn
A Google project that entered the public sphere just as ChatGPT was released and becoming popular.
For additional experiences, see: https://www.robinsloan.com/newsletters/authors-note/
-
-
www.robinsloan.com
-
Author's note by Robin Sloan, November 2022
-
I have to report that the AI did not make a useful or pleasant writing partner. Even a state-of-the-art language model cannot presently “understand” what a fiction writer is trying to accomplish in an evolving draft. That’s not unreasonable; often, the writer doesn’t know exactly what they’re trying to accomplish! Often, they are writing to find out.
-
First, I’m impressed as hell by the Wordcraft team. Daphne Ippolito, Ann Yuan, Andy Coenen, Sehmon Burnam, and their colleagues engineered an impressive, provocative writing tool, but/and, more importantly, they investigated its use with sensitivity and courage.
-
-
www.politifact.com
-
PolitiFact - People are using coded language to avoid social media moderation. Is it working? by Kayla Steinberg, November 4, 2021
-