- Oct 2024
-
pmc.ncbi.nlm.nih.gov
-
But it would be just an object, not an entity with the moral standing that derives from having real experiences and real pains of the type that people, dogs, and probably lizards and crabs have.
I think this is what AI should be. It makes the most sense for humans to live in a world where AI exists this way. What is wrong with AI remaining this way? If it pretends to reciprocate, why is real consciousness needed?
-
Would it deserve rights? If it pleads or seems to plead for its life, or not to be turned off, or to be set free, ought we give it what it appears to want?
I don't think that these robots should deserve rights. They are not real humans or Americans protected by the U.S. Constitution. Again, I think that if Americans had to treat AI as if it were a real U.S. citizen, it might bring more harm than good.
-
While theories of the exact basis of moral standing differ, sentience is widely viewed as critically important.
I think that abiding by ethical morals is more important than having AI with sentience. Given its advancements, I think AI already imposes on real humans' lives, and it would probably do more harm than good if it had genuine emotions and consciousness.
-
it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness,” advocating the urgent prioritization of consciousness research so that researchers can assess when and if AI systems develop consciousness.14
If the current AI realm is what we used to see as science fiction, and consciousness is within the near future, then will what is currently "out of reach" become a reality in the future?
-
On some leading theories of consciousness, for example global workspace theory8 and attention schema theory,9 we might be not far from creating genuinely conscious systems.
Is consciousness the only thing that enables real beings and AI to have emotions? Or can a being be conscious and still remain emotionless?
-
At least one person has apparently committed suicide because of a toxic emotional relationship with a chatbot
If people believe some AI is sentient, and it has caused consequences as serious as someone taking their own life over a toxic relationship with it, why would scientists and roboticists continue to advance AI to have such an effect on people's lives? Why wouldn't we limit its capabilities and prevent events like this from happening?
-