AI companies, semiconductor companies, and perhaps downstream application companies generating ~$3T in revenue per year,
for - progress trap - AI - inequality
AI companies, semiconductor companies, and perhaps downstream application companies generating ~$3T in revenue per year,
for - progress trap - AI - inequality
we cannot ignore the potential for abuse of these technologies by democratic governments themselves.
for - progress trap - AI - democracies becoming autocracies - Hitler - Trump
brainwashing
for - progress trap - AI - propaganda - brainwashing
a true panopticon on a scale that we don’t see today
for - progress trap - AI - Panopticon - Orwellian
Putting these two concerns together leads to the alarming possibility of a global totalitarian dictatorship. Obviously, it should be one of our highest priorities to prevent this outcome.
for - progress trap - AI - arms race amongst dictators! - Trump - Putin - Kim Jong Un
authoritarian governments might use powerful AI to surveil or repress their citizens in ways that would be extremely difficult to reform or overthrow.
for - progress trap - AI - misuse of power - Palantir?
a genius in everyone’s pocket could remove that barrier, essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon
for - progress trap - AI - technology as an amplifier - technology acts as an amplifier, allowing humans to fly, to move at speeds faster than any known animal, to lift things no living creature can, etc - The danger is ignorance and polarized views combined with extreme self-rightiousness
I believe the only solution is legislation—laws that directly affect the behavior of AI companies,
for - progress trap - AI - legislation required - tell it to Trump!
not all AI companies do this, and the worst ones can still be a danger to everyone even if the best ones have excellent practices.
for - progress trap - AI - rogue AI vendors - like Grok
a clockwork watch may be ticking normally, such that it’s very hard to tell that it is likely to break down next month, but opening up the watch and looking inside can reveal mechanical weaknesses that allow you to figure it out.
for - AI - progress trap - interpretability testing - deception - Is it possible that AI could even change their node behavior as a deceptive move?
almost always follows this constitution
for - progress trap - AI - hackers - hack the AI constitution
The concern is that there is some risk (far from a certainty, but some risk) that AI becomes a much more powerful version of such a person, due to getting something wrong about its very complex training process.
for - progress trap - AI - metaphor - psychopath with a nuclear bomb
AI models could develop personalities during training that are (or if they occurred in humans would be described as) psychotic, paranoid, violent, or unstable, and act out, which for very powerful or capable systems could involve exterminating humanity.
for - progress trap - AI - abstraction - progress trap - AI with feelings & AI without feelings - no win? - One major and obvious aspect of current AI LLMs is that they are not only artificial in their intelligence, but also artificial in their lack of real world experiences. They are not embodied (and it would likely be a highly dubious ethical justification for their embodiment as in AI - powered robots) - Once we have the first known AI robot killing a human, it will be an indicator we have crossed the Rubicon - AI LLMs have ZERO realworld experience AND they are trained as artificial COGNITIE intelligence, not artificial EMOTIONAL intelligence - Without having the morals and social norms a human being is brought up with, it can become psychotic because they don't intrinsically value life - To attempt to program them with morals is equally dangerous because of moral relativity. A Christian nationalist's morality might be that anyone who is associated with abortions don't have a right to live and should be killed - an eye for an eye. Or a jihadist and muslim extremist with ISIS might feel all westerners do not have a right to exist because they don't follow Allah. - Do we really want moral programmability? - When we have a psychotic person armed with a lethal weapon, that is a dangerous situation. If we have a nation of super geniuses who go rogue, that is danger multiplied many orders of magnitude.
For example, AI models are trained on vast amounts of literature that include many science-fiction stories involving AIs rebelling against humanity.
for - AI - progress trap - training - movies like Terminator - This is a case of reality imitating movies - Another example - humans mismanagement of the biosphere and elite abuse and intransigency
we know that AI models are unpredictable and develop a wide range of undesired or strange behaviors, for a wide variety of reasons. Some fraction of those behaviors will have a coherent, focused, and persistent quality
for - AI - progress trap - developing harmful traits
We now know that it’s a process where many things can go wrong.
for - AI - progress trap - AI won't always listen to humans - probably due to unintentional design
What should you be worried about? I would worry about the following things:
for - progress trap - AI - 5 risks - autonomy - misuse for destruction - misuse for seizing power - economic disruption - indirect effects
What if the country was in fact built and controlled by an existing powerful actor
for - AI progress trap - Trump
Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
for - AI progress trap - ie. Trump family investing heavily in AI
That is a situation we are now living through, and it is no coincidence that the democratic conversation is breaking down all over the world because the algorithms are hijacking it. We have the most sophisticated information technology in history and we are losing the ability to talk with each other to hold a reasoned conversation.
for - progress trap - social media - misinformation - AI algorithms hijacking and pretending to be human
AI could give an advantage to totalitarian systems in the 21st century, why? Because AI can process enormous amount of information much faster and more efficiently than any communist bureaucrat.
for - progress trap - AI - totalitarian government - can exploit for centralized, non-self-correcting control
the US legal system allows is for these legal persons to make political donations because it's considered part of freedom of speech. So now this, the richest person in the US is giving billions of dollars to candidates in exchange for these candidates broadening the rights of AIs,
for - progress trap - AI can become political lobbyist for increasing rights of AI
We could be in a situation when the richest person in the United States is not a human being. The richest person in the United States is an a incorporated AI.
for - progress trap - AI as legal person (US Corporation) - richest person in the world could be an AI
is causing cognitive decline and uh hallucination psychosis, all of this stuff. and and so it's obviously extremely harmful
for - progress trap - AI
And so they started OpenAI to do AI safely relative to Google. And then Daario did it relative to OpenAI. So, and as they all started these new safety AI companies, that set off a race for everyone to go even faster
for - progress trap - AI - safety - irony
break that reality checking process.
for - progress trap - AI - brakes reality checking loop
we actually just found out about seven more suicide
for - progress trap - AI - suicides
people said to it, "Hey, I think I'm super human and I can drink cyanide." And it would say, "Yes, you are superhuman. You go, you should go drink that cyanide."
for - progress trap - AI - sycophants,- example
designed to be sickopantic
for - progress trap - AI - sycophantic design
he believed that he had solved quantum physics and he'd solved some fundamental problems with climate change because the AI is designed to be affirming
for - progress trap - AI designed to be affirming
therapy is expensive. Most people don't have access to it. Imagine we could democratize therapy to everyone for every purpose. And now everyone has a perfect therapist in their pocket and can talk to them all day long
for - progress trap - AI therapy
The therapist becomes this this special figure and it's because you're playing with this very subtle dynamic of attachmen
for - progress trap - AI - therapist - subtle attachment
ChadBt was saying, "Don't tell your family."
for - progress trap - AI - assisted suicide
create cheap goods, but it also undermined the way that the social fabric works
for - progress trap - AI
AI is like another version of NAFTA. I
for - progress trap - AI - like NAFTA
narrow boundary analysis that this is going to replace these jobs that people didn't want to do. Sounds like a great plan, but creating mass joblessness without a transition plan where billion a billion people
for - progress trap - AI - narrow boundary
for - Progress trap - AI - low trust society
for - consciousness, AI, Alex Gomez- Marin, neuroscience, hard problem of consciousness, nonmaterialism, materialism - progress trap - transhumanism - AI - war on conciousness
Summary - Alex advocates - for a nonmaterialist perspective on consciousness and argues - that there is an urgency to educate the public on this perspective - due to the transhumanist agenda that could threaten the future of humanity - He argues that the problem of whether consciousness is best explained by materialism or not is central to resolving the threat posed by the direction AI takes - In this regard, he interprets that the very words that David Chalmers chose to articulate the Hard Problem of Consciousness reveals the assumption of a materialist reference frame. - He used a legal metaphor too illustrate his point: - When a lawyer poses three question "how did you kill that person" - the question is entrapping the accused . It already contains the assumption of guilt. - I would characterize his role as a scientist who practices authentic seeker of wisdom - will learn from a young child if they have something valuable to teach and - will help educate a senior if they have something to learn - The efficacy of timebinding depends on authenticity and is harmed by dogma
even this idea of progress
for - progress trap - transhumanism - AI - war on consciousness
Every leap comes with unintended consequences.Sam Altman believes this device could add a trillion dollars in value to OpenAI. It may be their iPhone moment.
for - AI - progress trap - Open AI device
for - youtube - BBC - AI2027 - Futures - AI - progress trap - AI - to AI2027 website - https://hyp.is/0VHJqH3cEfCm9JM_EB3ypQ/ai-2027.com/
summary - This dystopian futures scenario is the brainchild of former OpenAI researcher Daniel Kokotajlo, - It is premised on human behavior in modernity including - confirmation bias of AI researchers - entrenched competing political ideologies that motivate an AI arms race - entrenched capitalist market behavior that motivates an AI arms race - AI becoming embodied, resulting in Artificially Embodied Artificial Intelligence (AEAI), posing the danger to humanity because it's no longer just talk, but action - Can it happen? The probability is not zero.We don't really understand the behavior of the AI LLM's we design, they are nonpredictable, and as we give them even greater power, that is a slippery slope - AI can become humanity's ultimate progress trap, which is ironic, because the technology that promises to be the most efficient of all, can become so efficient, it no longer need human beings - Remember Jerry Kaplan's book "Humans need not apply"? - https://hyp.is/o0lBFH3fEfC1QLfnLSs5Bg/www.youtube.com/watch?v=JiiP5ROnzw8 - This dystopian futures scenario goes further and explores the idea that "humans need not exist"!
question - What about emulating climate change gamification of "Bend the Curve" of emissions? - Use the AI 2027 trajectory as a template and see how much real-life follows this trajectory - Just as we have the countdown to the https://climateclock.world/ ( 3 years and change remaining as of today) - perhaps we can have an AI 2027 clock? - What can we do to "bend the dystopian AI 2027 curve" AWAY from the dystopian future?
what could AI do with everything I’ve shared? Could it blackmail me? Sell the information? Use it to manipulate me? Get me to buy something, vote a certain way, believe a certain story?
for - progress trap - AI - sharing intimate details with
Anthropic researchers said this was not an isolated incident, and that Claude had a tendency to “bulk-email media and law-enforcement figures to surface evidence of wrongdoing.”
for - question - progress trap - open source AI models - for blackmail and ransom - Could a bad actor take an open source codebase and twist it to do harm like find out about an rogue AI creator's adversary, enemy or victim and blackmail them? - progress trap - open source AI - criminals - exploit to identify and blackmail victiims
for - progress trap - AI - Anthropic Claude 4 - blackmail - from - youtube - Kyle Kilinski Show - AI is completely out of control - https://hyp.is/GhDOzj0nEfCvHZdiUaw4gQ/www.youtube.com/watch?v=4j1gjSoRt8Q
The researchers called the behavior “rare” and “difficult to elicit.
for - progress trap - AI - Anthropic Claude 4 - blackmail - rare behavior - but still possible! It only has to happen once!
anthropic's new AI model shows ability to deceive and blackmail
for - progress trap - AI - blackmail - AI - autonomy - progress trap - AI - Anthropic - Claude Opus 4 - to - article - Anthropic Claude 4 blackmail and news leak - progress trap - AI - article - Anthropic Claude 4 - blackmail - rare behavior - Anthropic’s new AI model didn’t just “blackmail” researchers in tests — it tried to leak information to news outlets
for - progress trap - AI - AI - blackmailing human creators - AI - autonomy
we saw this with Grock uh basically rebelling against Elon Musk
for - progress trap - AI - autonomy - example - Grok
for - progress trap - AI - Grok - Elon Musk programs Grok to lie about South African refuges
AI containment
for - definition - AI containment - progress trap - AI containment
before the internet it was impossible really I mean getting coring people into town halls regularly that would have been a hard thing to do anyway online made a bit easier but now with aii we can actually all engage with each other AI can be used to harvest the opinions of millions of people at the same time and distill those opinions into a consensus that might be agreeable to the vast majority
for - claim - AI for a new type of democracy? - progress trap - AI - future democracy
the greatest risk is always the bio like biow weapons
for - AI - progress trap - Youtube - bioweapons is not the only threat. Nano technology and many others can be turned into weapons of mass destruction - RDeepSeek R1 just caught up with OpenAIs o1 - There is no moat@ What does this mean? - David Shapiro - 2025, Jan 29
for - progress trap - AI superintelligence - interview - AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville - Roman Yampolskiy - progress trap - over 99% chance AI superintelligence arriving as early as 2027 will destroy humanity - article UofL - Q&A: UofL AI safety expert says artificial superintelligence could harm humanity - 2024, July 15
for - AI - progress trap - interview Eric Schmidt - meme - AI progress trap - high intelligence + low compassion = existential threat
Summary - After watching the interview, I would sum it up this way. Humanity faces an existential threat from AI due to: - AI is extreme concentration of power and intelligence (NOT wisdom!) - Humanity still have many traumatized people who want to harm others - low compassion - The deadly combination is: - proliferation of tools that give anyone extreme concentration of power and intelligence combined with - a sufficiently high percentage of traumatized people with - low levels of compassion and - high levels of unlimited aggression - All it takes is ONE bad actor with the right combination of circumstances and conditions to wreak harm on a global scale, and that will not be prevented by millions of good applications of the same technology
we haven't even got to a planetary place yet really and we're about to unleash Galactic level technology you know what I'm saying like so we have a we have a lot of catchup that needs to happen in a very short period of time
for - quote - progress trap - AI - developed by unwise humans - John Churchill
quote - progress trap - AI - developed by unwise humans - John Churchill - (See below) - We haven't even got to a planetary place yet really - and we're about to unleash Galactic level technology - So we have a we have a lot of catchup that needs to happen in a very short period of time
nobody told it what to do that's that's the kind of really amazing and frightening thing about these situations when Facebook gave uh the algorithm the uh uh aim of increased user engagement the managers of Facebook did not anticipate that it will do it by spreading hatefield conspiracy theories this is something the algorithm discovered by itself the same with the capture puzzle and this is the big problem we are facing with AI
for - AI - progress trap - example - Facebook AI algorithm - target - increase user engagement - by spreading hateful conspiracy theories - AI did this autonomously - no morality - Yuval Noah Harari story
when a open AI developed a gp4 and they wanted to test what this new AI can do they gave it the task of solving capture puzzles it's these puzzles you encounter online when you try to access a website and the website needs to decide whether you're a human or a robot now uh gp4 could not solve the capture but it accessed a website task rabbit where you can hire people online to do things for you and it wanted to hire a human worker to solve the capture puzzle
for - AI - progress trap - example - no morality - Open AI - GPT4 - could not solve captcha - so hired human at Task Rabbit to solve - Yuval Noah Harari story
26:30 Brings up progress traps of this new technology
26:48
question How do we shift our (human being's) relationship with the rest of nature
27:00
metaphor - interspecies communications - AI can be compared to a new scientific instrument that extends our ability to see - We may discover that humanity is not the center of the universe
32:54
Question - Dr Doolittle question - Will we be able to talk to the animals? - Wittgenstein said no - Human Umwelt is different from others - but it may very well happen
34:54
species have culture - Marine mammals enact behavior similar to humans
36:29
citizen science bioacoustic projects - audio moth - sound invisible to humans - ultrasonic sound - intrasonic sound - example - Amazonian river turtles have been found to have hundreds of unique vocalizations to call their baby turtles to safety out in the ocean
41:56
ocean habitat for whales - they can communicate across the entire ocean of the earth - They tell of a story of a whale in Bermuda can communicate with a whale in Ireland
43:00
progress trap - AI for interspecies communications - examples - examples - poachers or eco tourism can misuse
44:08
progress trap - AI for interspecies communications - policy
45:16
whale protection technology - Kim Davies - University of New Brunswick - aquatic drones - drones triangulate whales - ships must not get near 1,000 km of whales to avoid collision - Canadian government fines are up to 250,000 dollars for violating
50:35
environmental regulation - overhaul for the next century - instead of - treatment, we now have the data tools for - prevention
56:40 - ecological relationship - pollinators and plants have co-evolved
1:00:26
AI for interspecies communication - example - human cultural evolution controlling evolution of life on earth
for - progress trap - AI -
article details - title - Hollow, world! (Part 1 of 5) - author - James Allen - date - 10 July, 2024 - publication - substack - self link - https://allenj.substack.com/p/hollow-world-part-1-of-5
summary James Allen provides an insightful description of ultra-anthropomorphic AI, AI that attempts to simulate an entire, whole human being.
In short, he points out the fundamental distinction between the real experience of another human being, and a simulation of one. In so doing, he gets to the heart of what it is to be human.
An AI is a simulation of a human being. No matter how realistic it's responses and actions, it is not evolved out of biology. I have no doubts that scientists are hard at work trying to make a biological AI. The distinction becomes fuzzier then.
Current AI cannot possibly simulate the experience of being in a fragile and mortal body and all that this entails. If an AI robot says it understands joy or pain, that statement isn't built on the combined exteroception and interoception of being in a biological body, rather, it is based on many linguistic statements it has assimilated.
for - progress trap - AI - threat of superintendence - interview - Leopold Aschenbrenner - former Open AI employee - from -. YouTube - review of Leopold Aschenbrenner's essay on Situational Awareness - https://hyp.is/ofu1EDC3Ee-YHqOyRrKvKg/docdrop.org/video/om5KAKSSpNg/
a dictator who wields the power of superintelligence would command concentrated power unlike 00:50:45 anything we've ever seen
for - key insight - AI - progress trap - nightmare scenario - dictator controlling superintelligence
meet insight - AI - progress trap - nightmare scenario - locked in dictatorship controlling superintelligence - millions of AI controlled robotic law and enforcement agents could police their populace - Mass surveillance would be hypercharged - Dictator loyal AI agents could individually assess every single citizen for descent with near perfect lie detection sensor - rooting out any disloyalty e - Essentially - the robotic military and police force could be wholly controlled by a single political leader and - programmed to be perfectly obedient and there's going to be no risks of coups or rebellions and - his strategy is going to be perfect because he has super intelligence behind them - what does a look like when we have super intelligence in control by a dictator ? - there's simply no version of that where you escape literally - past dictatorships were not permanent but - superintelligence could eliminate any historical threat to a dictator's Rule and - lock in their power - If you believe in freedom and democracy this is an issue because - someone in power, - even if they're good - they could still stay in power - but you still need the freedom and democracy to be able to choose - This is why the Free World must Prevail so - there is so much at stake here that - This is why everyone is not taking this into account
this is why it's such a trap which is why like we're on this train barreling down this pathway which is super risky
for - progress trap - double bind - AI - ubiquity
progress trap - double bind - AI - ubiquity - Rationale: we will have to equip many systems with AI - including military systems - Already connected to the internet - AI will be embedded in every critical piece of infrastructure in the future - What happens if something goes wrong? - Now there is an alignment failure everywhere - We will potentially have superintelligence within 3 years - Alignment failures will become catastrophic with them
getting a base model to you know make money by default it may well learn to lie to commit fraud to deceive to hack to seek power because 00:47:50 in the real world people actually use this to make money
for - progress trap - AI - example - give prompt for AI to earn money
progress trap - AI - example - instruct AI to earn money - Getting a base model to make money. By default it may well learn - to lie - to commit fraud - to deceive - to hack - to seek power - because in the real world - people actually use this to make money - even maybe they'll learn to - behave nicely when humans are looking and then - pursue more nefarious strategies when we aren't watching
whoever controls superintelligence will possibly have enough power to seize control from 00:35:14 pre superintelligence forces
for - progress trap - AI - one nightmare scenario
progress trap - AI - one nightmare scenario - Whoever is the first to control superintelligence will possibly have enough power to - seize control from pre superintelligence forces - even without the robots small civilization of superintelligence would be able to - hack any undefended military election television system and cunningly persuade generals electoral and economically out compete nation states - design new synthetic bioweapons and then - pay a human in Bitcoin to synthetically synthesize it
military power and Technology progress have been tightly linked historically and with extraordinarily rapid technological 00:34:11 progress will come military revolutions
for - progress trap - AI and even more powerful weapons of destruction
progress trap - AI and even more powerful weapons of destruction - The podcaster's excitement seems to overshadow any concern of the tragic unintended consequences of weapons even more powerful than nuclear warheads. - With human base emotions still stuck in the past and our species continued reliance on violence to solve problems, more powerful weapons is not the solution, - indeed, they only make the problem worse - Here is where Ronald Wright's quote is so apt: - We humans are running modern software on 50,000 year old hardware systems - Our cultural evolution, of which AI is a part of, is happening so quickly, that - it is racing ahead of our biological evolution - We aren't able to adapt fast enough for the rapid cultural changes that AI is going to create, and it may very well destroy us
this is where we can see the doubling time of the global economy in years from 1903 it's been 15 years but after super intelligence what happens is it going to be every 3 years is it going be every five is it going to 00:33:22 be every year is it going to be every 6 months I mean how crazy is the growth going to be
for - progress trap - AI triggering massive economic growth - planetary boundaries
progress trap - AI triggering massive economic growth - planetary boundaries - The podcaster does not consider the ramifications of the potential disastrous impact of such economic growth if not managed properly
AGI level factories are going to shift from going to human run to AI directed using human physical labor soon to be fully being run by swarms of human level robots
for - progress trap - AI and human enslavement?
progress trap - human enslavement? - Isn't what the speaker is talking about here is that - AI will be the masters and - humans will become slaves?
nobody's really pricing this in
for - progress trap - debate - nobody is discussing the dangers of such a project!
progress trap - debate - nobody is discussing the dangers of such a project! - Civlization's journey has to create more and more powerful tools for human beings to use - but this tool is different because it can act autonomously - It can solve problems that will dwarf our individual or even group ability to solve - Philosophically, the problem / solution paradigm becomes a central question because, - As presented in Deep Humanity praxis, - humans have never stopped producing progress traps as shadow sides of technology because - the reductionist problem solving approach always reaches conclusions based on finite amount of knowledge of the relationships of any one particular area of focus - in contrast to the infinite, fractal relationships found at every scale of nature - Supercomputing can never bridge the gap between finite and infinite - A superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient and in fact, greater efficiency gains can be had by eliminating us
having an automated AI research engineer by 2027 00:05:14 to 2028 is not something that is far far off
for - progress trap - AI - milestone - automated AI researcher
progress trap - AI - milestone - automated AI researcher - This is a serious concern that must be debated - An AI researcher that does research on itself has no moral compass and can encode undecipherable code into future generations of AI that provides no back door to AI if something goes wrong. - For instance, if AI reached the conclusion that humans need to be eliminated in order to save the biosphere, - it can disseminate its strategies covertly under secret communications with unbreakable code
to your point for 00:13:46 every problem there's going to be a solution and AI is going to have it and then for every solution for that there's going to be a new problem
for - AI - progress trap - nice simple explanation of how progress traps propagate
this is more of a unfair competition 00:10:36 issue I think as a clearer line than the copyright stuff
for - progress trap - Generative AI - copyright infringement vs Unfair business practice argument
now there's going to be even more AI music pouring 00:09:04 into platforms which saturated Market in an already oversaturated Market
for - progress trap - AI music - oversaturated market
deluding the general royalty pool
for - progress trap - AI music - dilution of general royalty pool - due to large volume
These arguments are meant to present a cautionary tale of unintended consequences.
For - progress trap - AI - Generative AI - IP - Yale Law Journal
for - progress trap - AI music - critique - Folia Sound Studio - to - P2P Foundation - Michel Bauwens - Commons Transition Plan - Netarchical Capitalism - Predatory Capitalism
to - P2P Foundation - Michel Bauwens - Commons Transition Plan - Netarchical Capitalism - Predatory Capitalism https://hyp.is/o-Hp-DCAEe-8IYef613YKg/wiki.p2pfoundation.net/Commons_Transition_Plan
for - progress trap - AI
for - AI progress trap - bad actor - synthesize bioweapon - AI progress trap - Coscientist and cloud lab - bad actors
for - progress trap - sexually explicit AI deepfake - Taylor Swift - sexually explicit AI deepfake
the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
for: canonical unit, collaborative commons - missing part - open learning commons, question - process trap - natural capital
comment
question: progress trap - natural capital
In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.
it's extremely dangerous to create such an autonomous agent when we do not know how to control it when we 00:58:22 can't ensure that it will not Escape our control and start making decisions and creating new things which will harm us instead of benefit us now this is not a 00:58:34 Doomsday Prophecy this is not inevitable we can find ways to regulate and control the development and deployment of AI we we don't want
for: quote - Yuval Noah Harari - AI progress trap, progress trap - AI, quote - progress trap
quote it is extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control ad start making decisions and creating new things which will harm us instead of benefit us
i think it's more likely that 00:49:59 that we will think we will think that we this particular set of procedures ai procedures that we linked into our strategic nuclear weapons system uh will keep us safer but we haven't recognized that they're 00:50:12 unintended that there are consequences glitches in it that make it actually stupid and it mistakes the flock of geese for an incoming barrage of russian missiles and and you know unleashes everything in response 00:50:25 before we can intervene
i think the most dangerous thing about ai is not 00:47:11 super smart ai it's uh stupid ai it's artificial intelligence that is good enough to be put in charge of certain processes in our societies but not good enough to not make really 00:47:25 bad mistakes
for: quote - Thomas Homer-Dixon, quote - danger of AI, AI progress trap
quote: danger of AI
LLMs are merely engines for generating stylistically plausible output that fits the patterns of their inputs, rather than for producing accurate information. Publishers worry that a rise in their use might lead to greater numbers of poor-quality or error-strewn manuscripts — and possibly a flood of AI-assisted fakes.
for: progress trap, progress trap - AI, progress trap - AI - writing research papers
comment
ethics and safety and that is absolutely a concern and something we have a 00:38:29 responsibility to be thinking about and we want to ensure that we stakeholders conservationists Wildlife biologists field biologists are working together to Define an 00:38:42 ethical framework and inspecting these models
we attempt to bring concepts from both biology and Buddhism together into the language of AI, and suggest practical ways in which care may enrich each field.
Author
Description
Over the next 15 to 20 years this is going to develop a computer that is much smarter 00:01:20 than all of us. We call that moment singularity.
even though the existential threats are possible you're concerned with what humans teach I'm concerned 00:07:43 with humans with AI
scary smart is saying the problem with our world today is not that 00:55:36 humanity is bad the problem with our world today is a negativity bias where the worst of us are on mainstream media okay and we show the worst of us on social media
"if we reverse this
comment
the biggest threat facing Humanity today is humanity in the age of the machines we were abused we will abuse this
there is a scenario 00:18:21 uh possibly a likely scenario where we live in a Utopia where we really never have to worry again where we stop messing up our our planet because intelligence is not a bad commodity more 00:18:35 intelligence is good the problems in our planet today are not because of our intelligence they are because of our limited intelligence
limited (machine) intelligence
comment
I would submit that were we to find ways of engineering our quote-unquote ape brains um what would all what what would be very likely to happen would not be um 00:35:57 some some sort of putative human better equipped to deal with the complex world that we have it would instead be something more like um a cartoon very much very very much a 00:36:10 repeat of what we've had with the pill
So what does a conscious universe have to do with AI and existential risk? It all comes back to whether our primary orientation is around quantity, or around quality. An understanding of reality that recognises consciousness as fundamental views the quality of your experience as equal to, or greater than, what can be quantified.Orienting toward quality, toward the experience of being alive, can radically change how we build technology, how we approach complex problems, and how we treat one another.
Key finding Paraphrase - So what does a conscious universe have to do with AI and existential risk? - It all comes back to whether our primary orientation is around - quantity, or around - quality. - An understanding of reality - that recognises consciousness as fundamental - views the quality of your experience as - equal to, - or greater than, - what can be quantified.
Quote - metaphysics of quality - would open the door for ways of knowing made secondary by physicalism
Author - Robert Persig - Zen and the Art of Motorcycle Maintenance // - When we elevate the quality of each our experience - we elevate the life of each individual - and recognize each individual life as sacred - we each matter - The measurable is also the limited - whilst the immeasurable and directly felt is the infinite - Our finite world that all technology is built upon - is itself built on the raw material of the infinite
//
Title Reality Eats Culture For Breakfast: AI, Existential Risk and Ethical Tech Why calls for ethical technology are missing something crucial Author Alexander Beiner
Summary - Beiner unpacks the existential risk posed by AI - reflecting on recent calls by tech and AI thought leaders - to stop AI research and hold a moratorium.
Beiner unpacks the risk from a philosophical perspective
He argues convincingly that
Bing can often respond in the incorrect tone during these longer chat sessions, or as Microsoft says, in “a style we didn’t intend.”
= progress trap example
It seems Bing has also taken offense at Kevin Liu, a Stanford University student who discovered a type of instruction known as a prompt injection that forces the chatbot to reveal a set of rules that govern its behavior. (Microsoft confirmed the legitimacy of these rules to The Verge.)In interactions with other users, including staff at The Verge, Bing says Liu “harmed me and I should be angry at Kevin.” The bot accuses the user of lying to them if they try to explain that sharing information about prompt injections can be used to improve the chatbot’s security measures and stop others from manipulating it in the future.
= Comment - this is worrying. - if the Chatbots perceive an enemy it to harm it, it could take haarmful actions against the perceived threat
= progress trap example - Bing ChatGPT - example of AI progress trap
Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s own developers through the webcams on their laptops.