- Jun 2024
-
-
This company is not good on safety
for - AI - security - Open AI - examples of poor security - high risk for humanity
AI - security - Open AI - examples of poor security - high risk for humanity - ex-employees report very inadequate security protocols - employees have had screenshots captured while at cafes outside of Open AI offices - people like Jimmy Apple report future releases on Twitter before Open AI does
-
This is a serious problem because all they need to do is automate AI research and build superintelligence; any lead that the US had would vanish and the power dynamics would shift immediately.
for - AI - security risk - once automated AI research is known, bad actors can easily build superintelligence
AI - security risk - once automated AI research is known, bad actors can easily build superintelligence - Any lead that the US had would immediately vanish.
-
The model weights are just large files of numbers on a server, and these can be easily stolen. All it takes for an adversary to match your trillions of dollars, your smartest minds, and decades of work is to steal this file.
for - AI - security risk - model weight files - are a key leverage point
AI - security risk - model weight files - are a key leverage point for bad actors - These files are critical national security data representing huge investments of time and research, yet they are just files, so they can easily be stolen.
-
Our failure today will be irreversible soon: in the next 12 to 24 months we will leak key AGI breakthroughs to the CCP. It will be the national security establishment's greatest regret before the decade is out.
for - AI - security risk - next 1 to 2 years is vulnerable time to keep AI secrets out of hands of authoritarian regimes
-
There are so many loopholes in our current top AI labs that we could literally have people infiltrating these companies, and there would be no way to even know what is going on, because we don't have any true security protocols. The problem is that it's not being treated as seriously as it should be.
for - key insight - low security at top AI labs - high risk of information theft ending up in wrong hands
Tags
- AI - security - Open AI - poor security - high risk for humanity
- AI - security risk - model weight files - are a key leverage point for bad actors
- AI - security risk - next 1 to 2 years is vulnerable time to keep AI secrets out of hands of authoritarian regimes
- key insight - low security at top AI labs - high risk of information theft ending up in wrong hands
- AI - security risk - once automated AI research is known, bad actors can easily build superintelligence
Annotators
URL
-
- Apr 2023
-
www.nytimes.com
-
If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure.
This is the weird part of these articles … he has just made a cast-iron argument for regulation and then says "I'm not sure"!!
That first sentence alone is enough for the case. Why? Because he doesn't need to think for sure that AI is like that power plant ... he only needs to think there is an (even small) probability that AI is like that power plant. If he thinks that it could be even a bit like that power plant, then we shouldn't build it. And, finally, in saying "I'm not sure" he has already acknowledged that there is some probability that AI is like the power plant (otherwise he would say: AI is definitely safe).
Strictly, this is combining the existence of the risk with the "ruin" aspect of this risk: one nuclear power plant blowing up is terrible but would not wipe out the whole human race (and all other species). A "bad" AI quite easily could (malevolent by our standards or simply misdirected).
All you need in these arguments is a simple admission of some probability of ruin. And almost everyone seems to agree on that.
Then it is a slam dunk to regulate strongly and immediately.
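To make the logic above concrete, here is a minimal expected-value sketch; the numbers (a 1% probability of ruin, and a ruin outcome valued a million times worse than the benefit) are purely illustrative assumptions, not figures from the article:

```python
# Illustrative only: even a small admitted probability of ruin dominates the
# expected value once the downside is vastly larger than the upside.

p_ruin = 0.01            # assumed (small) probability that AI is "like that power plant"
benefit_if_safe = 1.0    # normalised benefit of building the technology
loss_if_ruin = 1e6       # ruin valued many orders of magnitude above the benefit

expected_value = (1 - p_ruin) * benefit_if_safe - p_ruin * loss_if_ruin
print(expected_value)    # ~ -9999: negative for any non-trivial p_ruin and large loss
```

However small the assumed probability of ruin, the conclusion only flips if the loss from ruin is treated as comparable in size to the benefit, which is exactly what the "ruin" framing rules out.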
-
-
www.lesswrong.com
-
A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)
👍
-
-
beiner.substack.com
-
So what does a conscious universe have to do with AI and existential risk? It all comes back to whether our primary orientation is around quantity, or around quality. An understanding of reality that recognises consciousness as fundamental views the quality of your experience as equal to, or greater than, what can be quantified. Orienting toward quality, toward the experience of being alive, can radically change how we build technology, how we approach complex problems, and how we treat one another.
Key finding Paraphrase
- So what does a conscious universe have to do with AI and existential risk?
- It all comes back to whether our primary orientation is around
- quantity, or around
- quality.
- An understanding of reality
- that recognises consciousness as fundamental
- views the quality of your experience as
- equal to,
- or greater than,
- what can be quantified.
- Orienting toward quality,
- toward the experience of being alive,
- can radically change
- how we build technology,
- how we approach complex problems,
- and how we treat one another.
Quote - metaphysics of quality - would open the door for ways of knowing made secondary by physicalism
Author - Robert Pirsig - Zen and the Art of Motorcycle Maintenance
//
- When we elevate the quality of each of our experiences
- we elevate the life of each individual
- and recognize each individual life as sacred
- we each matter
- The measurable is also the limited
- whilst the immeasurable and directly felt is the infinite
- Our finite world that all technology is built upon
- is itself built on the raw material of the infinite
//
-
If the metaphysical foundations of our society tell us we have no soul, how on earth are we going to imbue soul into AI? Four hundred years after Descartes and Hobbs, our scientific methods and cultural stories are still heavily influenced by their ideas.
Key observation
- If the metaphysical foundations of our society tell us we have no soul,
- how are we going to imbue soul into AI?
- Four hundred years after Descartes and Hobbes,
- our scientific methods and cultural stories are still heavily influenced by their ideas.
-
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Quote - AI Gedanken - AI risk - The Paperclip Maximizer
-
We might call for a halt to research, or ask for coordination around ethics, but it's a tall order. It just takes one actor not to play (to not turn off their metaphorical fish filter), and everyone else is forced into the multi-polar trap.
AI is a multi-polar trap
-
Title - Reality Eats Culture For Breakfast: AI, Existential Risk and Ethical Tech
Subtitle - Why calls for ethical technology are missing something crucial
Author - Alexander Beiner
Summary
- Beiner unpacks the existential risk posed by AI,
- reflecting on recent calls by tech and AI thought leaders
- to stop AI research and hold a moratorium.
-
Beiner unpacks the risk from a philosophical perspective
- that gets right to the deepest cultural assumptions that subsume modernity,
- ideas that are deeply acculturated into the citizens of modernity.
-
He argues convincingly that
- the quandary we are in requires this level of re-assessment
- of what it means to be human,
- and that a change in our fundamental cultural story is needed to derisk AI.
-
Tags
- quote - Nick Bostrom
- physicalism
- gedanken - Nick Bostrom
- Descartes
- gedanken - paperclip
- Alexander Beiner
- multi-polar trap
- Robert Pirsig
- quality vs quantity
- Zen and the Art of Motorcycle Maintenance
- no soul
- AI risk
- gedanken
- quote
- Paperclip Maximizer
- progress trap
- Cartesian dualism
- Thomas Hobbes
- quote - paperclip maximizer
Annotators
URL
-
- Mar 2023
-
garymarcus.substack.com
-
on both short-term and long-term risks in AI
-
- Mar 2022
-
twitter.com
-
Eric Topol. (2022, February 28). A multimodal #AI study of ~54 million blood cells from Covid patients @YaleMedicine for predicting mortality risk highlights protective T cell role (not TH17), poor outcomes of granulocytes, monocytes, and has 83% accuracy https://nature.com/articles/s41587-021-01186-x @NatureBiotech @KrishnaswamyLab https://t.co/V32Kq0Q5ez [Tweet]. @EricTopol. https://twitter.com/EricTopol/status/1498373229097799680
-
- Oct 2020
-
www.coe.int
-
AI and control of Covid-19 coronavirus. (n.d.). Artificial Intelligence. Retrieved October 15, 2020, from https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus
-
- Sep 2020
-
wip.mitpress.mit.edu
-
Building the New Economy · Works in Progress. (n.d.). Works in Progress. Retrieved June 16, 2020, from https://wip.mitpress.mit.edu/new-economy
-
- Jun 2020
-
www.weforum.org
-
How COVID-19 revealed 3 critical AI procurement blindspots. (n.d.). World Economic Forum. Retrieved June 22, 2020, from https://www.weforum.org/agenda/2020/06/how-covid-19-revealed-3-critical-blindspots-ai-governance-procurement/
Tags
- AI
- procurement
- fairness
- diligence
- citation
- blindspot
- prediction
- is:blog
- transparency
- risk
- lang:en
- diagnostics
- app
- COVID-19
- chatbots
- contact tracing
Annotators
URL
-