- Apr 2023
-
www.nytimes.com
-
If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure.
This is the weird part of these articles: he has just made a cast-iron argument for regulation and then says "I'm not sure"!
That first sentence alone is enough to make the case. Why? Because he doesn't need to be certain that AI is like that power plant; he only needs to think there is some (even small) probability that it is. If AI could be even a bit like that power plant, then we shouldn't build it. And in saying "I'm not sure" he has already acknowledged that there is some probability that AI is like the power plant (otherwise he would say: AI is definitely safe).
Strictly, this combines the existence of the risk with its "ruin" aspect: one nuclear power plant blowing up is terrible but would not wipe out the whole human race (and all other species). A "bad" AI quite easily could, whether malevolent by our standards or simply misdirected.
All you need in these arguments is a simple admission of some probability of ruin. And almost everyone seems to agree on that.
Then it is a slam dunk to regulate strongly and immediately.
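The asymmetry between bounded and unbounded risk can be sketched as a toy expected-value calculation (the probabilities, costs, and benefits below are illustrative assumptions, not estimates of any real risk):

```python
# Toy sketch of the "ruin" argument: a small probability of a bounded
# disaster can be worth accepting, but the same small probability of an
# unbounded ("ruin") outcome swamps any finite benefit.

def expected_net(p_ruin, ruin_cost, benefit):
    """Expected net outcome of proceeding: benefit minus p * cost."""
    return benefit - p_ruin * ruin_cost

# Ordinary risk: a single plant failure is terrible but bounded.
# (All numbers here are made up for illustration.)
plant = expected_net(p_ruin=0.001, ruin_cost=1e6, benefit=1e4)

# Ruin risk: the cost is effectively unbounded, so even the same tiny
# probability dominates the decision.
ai = expected_net(p_ruin=0.001, ruin_cost=1e13, benefit=1e9)

print(plant)  # positive: the bounded risk can be worth taking
print(ai)     # negative: the ruin risk is not
```

The point of the sketch is only structural: once "ruin_cost" is allowed to be arbitrarily large, no admission of a nonzero `p_ruin`, however small, leaves the expected value positive.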
-
- May 2018
-
www.theatlantic.com
-
Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?
Politically, people have been pushing deregulation for decades, but we have regulations for a reason, and these questions illustrate why.
-