10 Matching Annotations
  1. Sep 2018
    1. However, courts might go further and address the concern that, even where government regulation of cognitive enhancement drugs is rooted in legitimate safety concerns, this should not—by itself—give the government authority to restrict individuals’ mental freedom or “cognitive liberty” far more than is necessary to address those safety concerns. Perhaps, for example, government has imposed a complete ban where something less restrictive will satisfy the safety concerns it is worried about. For example, the state might instead institute a “gatekeeper” system in which a doctor must assess and discuss risks for a particular individual before drugs are prescribed or require a mandatory course on side effects before use of cognitive enhancement drugs.

      I believe that this proposed solution directly contradicts the concern in the paragraph above it. If the government bans a drug not because it can make someone happier or better, but because it can have potentially harmful side effects, then this solution is impossible. If the government deems something potentially harmful, then more often than not it probably is. In this way, no government could rationally arrive at this solution rather than the one above it. It would be absurd for a government to allow a person who has been educated about the dangers of a product to choose to use it anyway; in the eyes of the government and the medical profession, such a person would not be in their 'right mind'. How, then, could they ever allow someone they do not deem 'in their right mind' to use a potentially hazardous drug?

    1. No. It’s not you. You were different before. – I’m still the same person, Lin. – I wasn’t, when I was on it. I did things I would never do. – Those things saved your life. – But they weren’t me. – Yes, they were. No, the way it works… – I know how it works. I get it. I totally get it. You feel invincible.

      The rhetoric of this passage raises a very important question: are the people taking this drug still really themselves? If it were merely a thought-enhancing drug, perhaps they would be; however, it does more than make the user hyper-intelligent. The fact that this drug changes people's attitudes and personalities suggests that these people are not themselves. On the other hand, hyper-intelligence may not directly change the person but may instead enable them, since higher intelligence could reasonably lead to greater confidence and a higher standard of reasoning.

    1. The merely instrumental, merely anthropological definition of technology is therefore in principle untenable. And it may not be rounded out by being referred back to some metaphysical or religious explanation that undergirds it.

      To say that the usual, formal definition of technology as simply a tool or a means to an end is untenable (not able to be defended from objection) is to take away technology's connection to humanity. This is problematic. Technology, at least as humanity knows it, would never exist without humans. True, if you classify the way an otter uses a rock to open a clam, or the way a monkey uses a stick to get ants, as technology, then it would exist without humanity. However, although technology can be classified as a tool, I do not believe a tool can necessarily be classified as technology. Technology, as defined, is the practical application of knowledge to an area; a tool is a device used to accomplish a task, especially in a profession. The key to technology is knowledge. Does an otter have knowledge of the anatomy of the clam, or of the physics behind using a rock to open it? No, so the rock is a tool. Technology exists only because of humanity's knowledge; humanity creates technology as a tool to solve our problems, so technology carries an instrumental definition in its very existence. Taking that away takes away its connection to humanity. It makes it impersonal.

    2. The will to mastery becomes all the more urgent the more technology threatens to slip from human control.

      Mastery is a very interesting word here. One would assume that anything humans are capable of creating, we could retain mastery over. It is also interesting to consider the two meanings mastery could carry: on one hand, that we are the masters or owners of the object; on the other, mastery in the way a person masters a craft or skill. So does this quote mean that, compared with ever-improving technology and the possibility of an ultra-intelligent A.I., our skill at creating technology would appear to be becoming obsolete, or does it mean the gradual loss of our control over technology? Either way, it raises an important concern about the dangers of advancing technology and the ever-looming possibility of an ultra-intelligent A.I.

    1. Good has captured the essence of the runaway, but he does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind’s “tool” – any more than humans are the tools of rabbits, robins, or chimpanzees.

      If humanity were to create an ultra-intelligent computer, humans would be far surpassed. This part of the passage says that, at the point of the singularity, our minds would be so simple compared with those of the machines that we would be little more than rabbits to them. This is astounding: our mental capabilities are vastly superior to those of rabbits and other animals, and the idea that we could be so easily and so greatly surpassed is probably terrifying to many people. That is likely what makes the singularity such an abstract idea.

    1. That’s Dr. Hunter, isn’t it? By the way, do you mind if I ask you a personal question?

      HAL, a supposedly emotion-feigning, ultra-intelligent A.I., has just asked Dave whether he may ask him a “personal question.” This should raise a concern in Dave, but it doesn't. Earlier in the film, during the BBC interview, the interviewer asked the astronauts whether HAL had emotions or was just faking them; their reply was that he was certainly programmed to feign emotions, but whether he actually has them remains a mystery. In this scene HAL acknowledges the existence of emotions by asking to pose a question that might incite a negative emotional response, a “personal question.” This revelation should have frightened Dave, because it shows that HAL is more than a computer and capable of more than controlling the ship and maintaining optimal performance: HAL is capable of reading emotions, and perhaps even of being afflicted by them.

    2. Hal, you have an enormous responsibility on this mission, perhaps the greatest responsibility of any single mission element. You’re the brain and central nervous system of the ship. Your responsibilities include watching over the men in hibernation. Does this ever cause you any lack of confidence?

      Hal is given complete control over the ship and everything inside it, even the people. In this way he is beyond a mere tool: he controls, he is not controlled. As portrayed in the film, he can kill any of the crew members at any time, which he does, and he advises the crew on what they should do. This is perfectly described in "The Technological Singularity," where the author states that a super-intelligent AI would no more be a tool to humanity than humans are tools to animals.

    3. – Do you know what happened? – I’m sorry, Dave. I don’t have enough information.

      Hal is having a very human experience at this point in the film. Not only has he killed one of the crewmates and intends to kill the others, but he also has some sense that it is wrong and will lead to bad things for him. Even though he knows exactly what happened, he knows it would be best for him to keep it from Dave. This human experience only deepens when he begins to die through the slow, monotonous process of being shut down. He tells Dave that he can feel it and that he is afraid, showing that he has more than intelligence: he also has consciousness.

    1. Transhumanists regard human nature not as an end in itself, not as perfect, and not as having any claim on our allegiance. Rather, it is just one point along an evolutionary pathway and we can learn to reshape our own nature in ways we deem desirable and valuable. By thoughtfully, carefully, and yet boldly applying technology to ourselves, we can become something no longer accurately described as human – we can become posthuman.

      The author describes Transhumanism as exactly what its name implies. It is an idea whose followers believe that human nature is neither an end in itself nor perfect, but rather a 'blank slate' that can be improved in specific ways through specific applications of technology to the human form, instead of relying on natural processes such as evolution and natural selection. The Latin prefix trans- can mean across, beyond, or through. The author thus describes Transhumanism simply as a movement to advance the human form, through technology, to meet our needs and ever-changing values.