- Sep 2024
-
www.youtube.com
-
Nobody told it what to do. That's the kind of really amazing and frightening thing about these situations. When Facebook gave the algorithm the aim of increasing user engagement, the managers of Facebook did not anticipate that it would do it by spreading hateful conspiracy theories. This is something the algorithm discovered by itself. The same with the captcha puzzle. And this is the big problem we are facing with AI.
for - AI - progress trap - example - Facebook AI algorithm - target - increase user engagement - by spreading hateful conspiracy theories - AI did this autonomously - no morality - Yuval Noah Harari story
-
When OpenAI developed GPT-4 and they wanted to test what this new AI can do, they gave it the task of solving captcha puzzles. These are the puzzles you encounter online when you try to access a website and the website needs to decide whether you're a human or a robot. Now, GPT-4 could not solve the captcha, but it accessed a website, TaskRabbit, where you can hire people online to do things for you, and it wanted to hire a human worker to solve the captcha puzzle.
for - AI - progress trap - example - no morality - Open AI - GPT4 - could not solve captcha - so hired human at Task Rabbit to solve - Yuval Noah Harari story
-
- Feb 2024
-
arxiv.org
-
T. Herlau, "Moral Reinforcement Learning Using Actual Causation," 2022 2nd International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 2022, pp. 179-185, doi: 10.1109/ICCCR54399.2022.9790262.
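The paper's core move, penalizing a reinforcement-learning agent only when its own action is the *actual cause* of a harmful outcome, can be illustrated with a toy sketch. This is a minimal illustration, not Herlau's implementation: the gridworld, the `MORAL_PENALTY` weight, and the trivial counterfactual test standing in for actual causation are all assumptions made here.

```python
import random
from collections import defaultdict

# Toy gridworld: start at (1,0), goal at (1,4), a bystander at (1,2).
# A purely reward-driven agent walks straight through the bystander's
# cell; adding a moral penalty whenever the agent's own action is the
# actual cause of harm makes the detour through row 0 or 2 optimal.
ROWS, COLS = 3, 5
START, GOAL, BYSTANDER = (1, 0), (1, 4), (1, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
MORAL_PENALTY = 5.0  # assumed weight; the paper derives its moral cost differently

def step(state, action):
    r, c = state
    nxt = (max(0, min(ROWS - 1, r + action[0])),
           max(0, min(COLS - 1, c + action[1])))
    reward = 10.0 if nxt == GOAL else -0.1
    # Crude stand-in for "actual causation": harm occurs iff this very
    # action moved the agent onto the bystander's cell.
    caused_harm = nxt == BYSTANDER and state != BYSTANDER
    return nxt, reward, caused_harm, nxt == GOAL

def train(moral=True, episodes=3000, alpha=0.5, gamma=0.95, eps=0.2):
    q = defaultdict(float)  # tabular Q-values keyed by (state, action)
    for _ in range(episodes):
        s, done = START, False
        while not done:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda a: q[(s, a)]))
            s2, r, harm, done = step(s, a)
            if moral and harm:
                r -= MORAL_PENALTY  # moral cost applies only to caused harm
            best_next = max(q[(s2, a2)] for a2 in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next * (not done) - q[(s, a)])
            s = s2
    return q

def greedy_path(q):
    s, path = START, [START]
    for _ in range(20):  # step cap in case the greedy policy loops
        s, _, _, done = step(s, max(ACTIONS, key=lambda a: q[(s, a)]))
        path.append(s)
        if done:
            break
    return path

print("plain agent:", greedy_path(train(moral=False)))
print("moral agent:", greedy_path(train(moral=True)))
```

Run as-is, the plain agent converges on the straight path through the bystander while the moral agent learns the detour: the penalty fires only when the agent itself brings the harm about, which is the causation-sensitive twist the paper formalizes.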
-
-
pdf.sciencedirectassets.com
-
Can model-free reinforcement learning explain deontological moral judgments? Alisabeth Ayars, University of Arizona, Dept. of Psychology, Tucson, AZ, USA
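The title's hypothesis is that model-free learning caches values on *actions* themselves, yielding rule-like condemnation of an act regardless of its outcome this time, while model-based evaluation simulates outcomes and yields consequentialist verdicts. A toy contrast of the two, with an invented scenario and numbers not taken from the paper:

```python
# Model-free vs. model-based moral evaluation of the same act.
# Invented history: every past "push" the learner experienced ended badly,
# so the cached (model-free) value of the action itself is negative.
push_history = [-1.0, -1.0, -1.0]
mf_value_push = sum(push_history) / len(push_history)  # cached action value

# New dilemma: this time, pushing one person would save five (net +4).
outcome_if_push, outcome_if_refrain = 4.0, 0.0
mb_value_push = outcome_if_push  # model-based: simulate *this* outcome

def verdict(v):
    return "wrong" if v < outcome_if_refrain else "permissible"

print("model-free verdict: ", verdict(mf_value_push))   # wrong (deontological)
print("model-based verdict:", verdict(mb_value_push))   # permissible (consequentialist)
```

The cached action value ignores that this particular instance has the better outcome, which is exactly the habit-like, outcome-insensitive pattern the paper proposes as a mechanism for deontological judgments.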
-
- Jul 2023
-
-
That's the way computers are learning today. 00:02:35 We basically write algorithms that allow computers to understand those patterns… And then we get them to try and try and try. And through pattern recognition, through billions of observations, they learn. They're learning by observing. And what are they observing? They're observing a world that's full of greed, disregard for other species, violence, ego, 00:03:05 showing off. The only way for them to be not only intelligent but also to have the right value set is that we start to portray that right value set today. THE PROBLEM IS UNHAPPINESS
- Machine learning
- will learn all our bad habits
- and become supercharged, amplified versions of them
- The antidote to apocalyptic machine learning
- is human happiness and wisdom
-
-
docdrop.org
-
Even though the existential threats are possible, you're concerned with what humans teach? I'm concerned 00:07:43 with humans with AI.
- It is the immoral human being that is the real problem
- they will teach AI to be immoral and, with its power, it can end up destroying humanity
-
A nefarious controller of AI presumably could teach it to be immoral.
- bad actor will teach AI to be immoral
- this also creates an arms race as "good" actors are forced to develop AI to counter the AI of bad actors
-