However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
- Last 7 days
-
www.anthropic.com
-
- Nov 2025
-
www.youtube.com
-
And so they started OpenAI to do AI safely relative to Google. And then Dario did it relative to OpenAI. So, as they all started these new safety-focused AI companies, that set off a race for everyone to go even faster.
for - progress trap - AI - safety - irony
-
Dario Amodei is the CEO of Anthropic, a big AI company. He worked on safety at OpenAI, and he left to start Anthropic because he said, "We're not doing this safely enough. I have to start another company that's all about safety."
for - history - AI - Anthropic - safety first
-
- Dec 2024
-
www.techradar.com
-
In response, Yampolskiy told Business Insider he thought Musk was "a bit too conservative" in his guesstimate and that we should abandon development of the technology now because it would be near impossible to control AI once it becomes more advanced.
for - suggestion - debate between AI safety researcher Roman Yampolskiy, Musk, and AI company founders - difference - business leaders vs pure researchers // - Comment - Business leaders are mainly driven by profit, so they already have a bias going into a debate with a researcher who is neutral and has no declared business interest
-
for - article - Techradar - Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees - 2024, April 7 - AI safety researcher Roman Yampolskiy disagrees with industry leaders and claims a 99.999999% chance that AGI will destroy humanity // - Comment - another article whose heading is backwards - it was Musk who spoke first, then AI safety expert Roman Yampolskiy commented on Musk's claim afterwards!
Tags
- difference - business leaders vs pure researchers
- AI safety researcher Roman Yampolskiy disagrees with industry leaders and claims a 99.999999% chance that AGI will destroy humanity
- suggestion - debate between AI safety researcher Roman Yampolskiy, Musk, and AI company founders
- article - Techradar - Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees - 2024, April 7
Annotators
URL
-
-
www.windowscentral.com
-
for - article - Windows Central - AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk "conservatively" dwindles it down to 20% and says it should be explored more despite inevitable doom - 2024, Apr 2 - AI safety researcher warns there's a 99.999999% probability AI will end humanity
// - Comment - In fact, the heading is misleading - it should be the other way around: Elon Musk made the claim first, and the AI safety expert commented on Musk's claim.
Tags
- AI safety researcher warns there's a 99.999999% probability AI will end humanity
- article - Windows Central - AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk "conservatively" dwindles it down to 20% and says it should be explored more despite inevitable doom - 2024, Apr 2
Annotators
URL
-
-
louisville.edu
-
for - progress trap - AI superintelligence - interview - AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville - Roman Yampolskiy - progress trap - over 99% chance AI superintelligence arriving as early as 2027 will destroy humanity - article UofL - Q&A: UofL AI safety expert says artificial superintelligence could harm humanity - 2024, July 15
Tags
- progress trap - AI superintelligence - interview - AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville - Roman Yampolskiy
- article UofL - Q&A: UofL AI safety expert says artificial superintelligence could harm humanity - 2024, July 15
- progress trap - over 99% chance AI superintelligence arriving as early as 2027 will destroy humanity
Annotators
URL
-
- Nov 2024
-
arstechnica.com
- Aug 2024
-
feministai.pubpub.org
-
Manila has one of the most dangerous transport systems in the world for women (Thomson Reuters Foundation, 2014). Women in urban areas have been sexually assaulted and harassed while in public transit, be it on a bus, train, at the bus stop or station platform, or on their way to/from transit stops.
The New Urban Agenda and the United Nations’ Sustainable Development Goals (5, 11, 16) have included the promotion of safety and inclusiveness in transport systems to track sustainable progress. As part of this effort, AI-powered machine learning applications have been created.
-
- Sep 2023
-
www.theguardian.com
- May 2023
-
www.lesswrong.com
-
must have an alignment property
It is unclear what form the "alignment property" would take, and most importantly, how such a property would be evaluated — especially if there is an arbitrary divide between "dangerous" and "pre-dangerous" levels of capability, and the alignment of the "dangerous" levels cannot actually be measured.
-
- Dec 2020
-
medium.com
-
Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.
Analogous to the collapse of early bridges and buildings before the maturation of civil engineering, our early society-scale inference-and-decision-making systems are breaking down, exposing serious conceptual flaws.
-