19 Matching Annotations
  1. Nov 2017
    1. Finally, there are differences of opinion about how much testing is needed. Philip Koopman, associate professor of electrical and computer engineering at Carnegie Mellon University, says there’s so much uncertainty around the technology that you might need close to a billion miles of test-driving data to ensure safety on roads populated with both human and machine-driven cars. Koopman also says he worries the industry is seriously underestimating how hard it will be to build innate safety features into artificially intelligent cars. “There’s a possibility at least some companies are just going to put the technology out there and roll the dice,” Koopman says. “My fear is this will really happen, and it will be bad technology.” 

      Companies underestimate the difficulty of producing a truly autonomous product. To be completely safe, a vehicle must be fully autonomous and make its own decisions, and no current AI can do that.

    1. The separate threads of automated vehicles and cooperative ITS have not yet been thoroughly woven together, but this will be a necessary step in the near future because the cooperative exchange of data will provide vital inputs to improve the performance and safety of the automation systems. This means that it is at least important to start thinking about the cybersecurity implications of cooperative automated vehicle systems.

      It is important to examine the cybersecurity of AVs.

    1. The current literature on self-driving cars tends to focus on ethical complexities related to an individual vehicle, including “trolley-type” scenarios. This is a necessary but insufficient step towards determining how the technology will impact human lives and society more generally. Ethical, legal, and policy deliberations about self-driving cars need to incorporate a broader, system level of analysis, including the interactions and effects that these cars will have on one another and on the socio-technical systems in which they are embedded.1 Of course, there are many types of self-driving vehicles that are not cars, including autonomous trucks (Anderson 2015), and they carry with them their own interesting ethical issues. For example, self-driving public transportation, like taxis, could have environmental and other important benefits (Greenblatt and Saxena 2015). Though much of the discussion is applicable to a range of ground-based vehicles, the focus here is on privately owned cars.

      must consider communication with other AVs

    1. The social dilemma of autonomous vehicles. Jean-François Bonnefon (Toulouse School of Economics, Institute for Advanced Study in Toulouse, Center for Research in Management, CNRS, University of Toulouse Capitole, Toulouse, France), Azim Shariff (Department of Psychology, University of Oregon, Eugene, OR, USA), Iyad Rahwan (The Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA). Science, 24 Jun 2016: Vol. 352, Issue 6293, pp. 1573–1576. DOI: 10.1126/science.aaf2654

       Codes of conduct in autonomous vehicles: When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles (see the Perspective by Greene). Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle. Science, this issue p. 1573; see also p. 1514.

       Abstract: Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

       The year 2007 saw the completion of the first benchmark test for autonomous driving in realistic urban environments (1, 2). Since then, autonomous vehicles (AVs) such as Google’s self-driving car covered thousands of miles of real-road driving (3). AVs have the potential to benefit the world by increasing traffic efficiency (4), reducing pollution (5), and eliminating up to 90% of traffic accidents (6). Not all crashes will be avoided, though, and some crashes will require AVs to make difficult ethical decisions in cases that involve unavoidable harm (7). For example, the AV may avoid harming several pedestrians by swerving and sacrificing a passerby, or the AV may be faced with the choice of sacrificing its own passenger to save one or more pedestrians (Fig. 1).

       [Fig. 1: Three traffic situations involving imminent unavoidable harm. The car must decide between (A) killing several pedestrians or one passerby, (B) killing one pedestrian or its own passenger, and (C) killing several pedestrians or its own passenger.]

       Although these scenarios appear unlikely, even low-probability events are bound to occur with millions of AVs on the road. Moreover, even if these situations were never to arise, AV programming must still include decision rules about what to do in such hypothetical situations. Thus, these types of decisions need be made well before AVs become a global commodity. Distributing harm is a decision that is universally considered to fall within the moral domain (8, 9). Accordingly, the algorithms that control AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm (10). Manufacturers and regulators will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers.

      morality and decision making

    1. Progressively autonomous technologies already in development, such as military robots, driverless cars or trains and service robots in the home and for healthcare, will be involved in moral situations that directly affect the safety and well-being of humans. An autonomous bomb disposal robot might in the future be faced with the decision which bomb it should defuse first, in order to minimize casualties. Similarly, a moral decision that a driverless car might have to make is whether to brake for a crossing dog or avoid the risk of causing injury to the driver behind it. Such decisions require judgment. Currently operators make such moral decisions, or the decision is already inscribed in the design of the computer system. Machine ethics, Wallach and Allen argue, goes one step beyond making engineers aware of the values they build into the design of their products, as it seeks to build ethical decision-making into the machines.

      Autonomous vehicles face the challenge of making ethical decisions.

    1. Some carmakers fear that—even more than reactor operators or professional pilots—untrained motorists may only worsen the problem when suddenly required to take control of an otherwise fully automated system. Ford believes it is better to skip Level 3 altogether, and go straight to Level 4, even if it takes longer.

      Having the person take over while the car is otherwise in full control is like having two drivers fighting for control of the vehicle.

    2. Level 3 autonomous driving is even more controversial. The main difference is that, while the driver must still remain vigilant and ready to intervene in an emergency, responsibility for all the critical safety functions is shifted to the car. This has a lot of engineers worried. Experience has not been good with control systems that relegate the operator to a managerial role whose only job is to intercede in the case of an emergency.

      At Level 3, the driver must still remain attentive even while the car is in control.

    1. All our rules about driving — from who pays for a speeding ticket to who is liable for a crash — are based on having a human behind the wheel. That is going to have to change.

      All of the rules of driving will have to change

    1. Accidents are, however, happening at an increasing rate as autonomous cars become more common. On Valentine’s Day this year, one of Google’s autonomous cars caused its first crash when it pulled out in front of a bus. Fortunately, no one was hurt. Just three months later, on May 7, a Tesla S driving autonomously on a 65mph (about 105kmh) limit road drove into a truck turning across the highway. The driver, Joshua Brown, who was sitting in the driving seat of the Tesla was killed. According to reports, he was watching a Harry Potter movie.

      As autonomous cars become more common on the roads, more accidents occur.

  2. Sep 2017
    1. And yet, as important as these economic and technological spinoffs of science are, knowledge, in itself, is still at the center of the scientific enterprise. In this respect, perhaps the greatest benefit of science for society is how it transforms our culture. Science provides us with a new perspective on our place in the cosmos and a better understanding of ourselves as human beings. It helps us overcome our otherwise myopic preconceptions about how the world works.

      Science transforms our culture and provides us with a new perspective on our place in the cosmos.

    2. It’s tempting to locate the utility of science in the technology it produces, and in the way that technology improves the human condition. But the truth is that science is not equivalent to technology alone—nor is it equivalent to a set of facts, which can then be contrasted with another set of “alternative” facts to decide which set one prefers. Science is a process for deriving facts about nature. It’s a process for enhancing our understanding of the world around us, and for separating nonsense from sense via empirical investigation, logical reasoning, and constant testing. Trying to define science as an activity that upholds “the common good” or is “in the national interest” distorts the fact that science is nothing more or less than a remarkably successful empirical process for uncovering the way the world works. At its best, this process is open-ended and curiosity-driven.

      Science isn't just about facts; it's about uncovering the truth about the world.

    1. Cheaper alternative energy is the best hope the world has left. People are not willing to fundamentally change their lives for problems far off in the future, even ones as potentially catastrophic as climate change. To avoid the worst effects of climate change, alternative energies need to become as cheap and reliable as their carbon-emitting counterparts, and quickly.

      Technology and science help create alternative forms of energy, which can help save the planet.

    2. 2. Doctors and scientists have used technology to tackle problems that once seemed insurmountable. Doctors have played a role too of course. HIV has been transformed from a death sentence to a manageable disease in just thirty years. Bigger things are still on the horizon. According to the most recent data, venture capital firms poured $11 billion into healthcare companies in 2014, a 30 percent jump over the previous year. These funds are being used to develop supercomputers that crunch mountains of data to offer better diagnosis and treatment, and to better understand our genetic building blocks and how to use them to fight off disease.

      Science helps the world conquer deadly diseases.

    1. Many experts say that a big factor driving this trend is the lack of role models in the upper divisions of academia, which have been slow to change.

      lack of role models

      Role models have a huge impact on people: we unconsciously choose role models because of their success. If women don't see many role models in society, they won't have that unconscious drive that pushes them to achieve the success of their would-be role models.

    2. On the first day of class, “he looked around and said 'I see women in the classroom. I don't believe women have any business in engineering, and I'm going to personally see to it that you all fail'.”

      An instance where a professor was openly against all women in engineering. In the 1970s, blatant sexism was more common, and the women affected by it may in turn have steered their own daughters away from careers in STEM.

    3. Disparities can also be found in grant funding in some countries.

      gap in funding in science

    1. Stereotypes are engrained so early; parents may try to minimize the effects of stereotypes by employing their own treatments. For example, a recent study found that although mothers talk to female babies more, sex differences exist in the type of talk presented to female babies in relation to that presented to male babies. Mothers were shown to engage in more science learning and literacy related talk with male babies than female babies (Tenenbaum, Snow, Roach, & Kurland, 2005). These early experiences of science talk may affect the developmental course of babies depending on the gendered experiences they were exposed to by their parents early in life.

      early stereotypes developed in childhood

      These stereotypes are engrained in all of our minds, e.g. mothers engaging in more science-related talk with male babies.

    1. "There is significant evidence suggesting that gender stereotyping starts to show up as early as age five, through things such as girls toys (dolls, dressing up) and boys toys (cars, building blocks, adventure games)," comments Naomi Climer, former president of the IET and the first woman to hold the position. "At an unconscious level, girls and boys are absorbing messages about what they're supposed to be like and - again, often at an unconscious level - parents and teachers are giving them the same messages. It's a serious issue that makes it harder for boys and girls to pursue their dream or interest if it doesn't fit with the perceived norm."

      Stereotyping begins in childhood with how children are raised, e.g. girls' toys vs. boys' toys.

      Parents and children unconsciously implement and conform to these stereotypes.

    1. This implies that the demand for students in different majors is biased in favour of the minority gender: for example, the share of female students who are admitted to major in maths and physics jumps from 8% to 12%. These results show that professors’ evaluations are not directly driven by simplistic stereotypes such as ‘girls are no good at science’.

      The opposite effect favors the minority gender. This offers a counterargument: the selection process is no longer a root of the gender gap, and may actually help women today because institutions are trying to stand out by promoting diversity.