68 Matching Annotations
  1. Feb 2019
    1. N. Gold, A. M. Colman, B. D. Pulford, Judgm. Decis. Mak. 9, 65–76 (2014).

      This study asked Chinese and U.K. citizens if an individual should be sacrificed to save many.

      Chinese participants were less willing to sacrifice the lone individual, and were less likely to think that it was the more moral choice.

    2. J. A. C. Everett, D. A. Pizarro, M. J. Crockett, J. Exp. Psychol. Gen. 145, 772–787 (2016).

      This paper found a general pattern across all its different studies—that people who decide that something is moral based on rules are considered more trustworthy.

      This paper thus supports the idea that the methods used in "The social dilemma of autonomous vehicles" provide a representative view of the U.S. population, even though the respondents themselves might not entirely represent people in the U.S.

    3. public opinion and social pressure may very well shift

      This article from Science discusses how companies and academic researchers are trying to change the public's distrust of self-driving cars, through advertising, free rides in a safe environment, and other methods.

      Read more at Science: http://www.sciencemag.org/news/2017/12/people-don-t-trust-driverless-cars-researchers-are-trying-change

    4. liability considerations

      This article from Gizmodo asks transportation experts, ethicists, and lawyers who will be blamed if a self-driving car hurts someone. There is no clear answer.

      Read more at Gizmodo: https://gizmodo.com/if-a-self-driving-car-kills-a-pedestrian-who-is-at-fau-1790049637

    5. these same people have a personal incentive to ride in AVs that will protect them at all costs

      This 2016 article from Fortune discusses how the car manufacturer Mercedes-Benz has decided to program its Level 4 (highly automated) and Level 5 (fully automated) self-driving cars to protect their passengers above everything else.

      Read more at Fortune: http://fortune.com/2016/10/15/mercedes-self-driving-car-ethics/

    6. Although manufacturers may engage in advertising and lobbying to influence consumer preferences and government regulations, a critical collective problem consists of deciding whether governments should regulate the moral algorithms that manufacturers offer to consumers

      Manufacturers may try to sway public opinion and laws regulating AVs, but that doesn't address the bigger issue: Should governments regulate the "moral" algorithms in autonomous vehicles—the algorithms that decide what to do in difficult situations like the ones in the study?

      The authors conclude that moral algorithms may be necessary, but that they create a social dilemma.

    7. Participants indicated whether it was the duty of the government to enforce regulations that would minimize the casualties in such circumstances, whether they would consider the purchase of an AV under such regulations, and whether they would consider purchasing an AV under no such regulations

      In the final experiment, the researchers presented participants with a scenario in which an AV would have to sacrifice passengers to minimize overall casualties. They were then asked whether governments should regulate autonomous vehicles, so that they are programmed to minimize casualties; whether they would buy a vehicle that was subject to those regulations; and whether they would buy a vehicle that was not regulated in that way.

    8. As usual, the perceived morality of the sacrifice was high and about the same whether the sacrifice was performed by a human or by an algorithm (median = 70).

      Participants thought it was highly moral for a driver to sacrifice themselves to save other people, whether the driver was a human or a computer algorithm.

    9. self-sacrifice

      The drivers (either human or algorithm) sacrifice themselves to save one or 10 pedestrians.

      The researchers only considered cars with at least one passenger.

    10. the parental decision-makers choose to minimize the perceived risk of harm to their child while increasing the risk to others

      In the case of immunizations, sometimes parents choose not to immunize their children because they (falsely) perceive a high risk to their own child, even though this choice may make it more likely that other children will be harmed.

    11. Indeed, there are many similar societal examples involving trade-off of harm by people and governments

      There are many examples in society in which governments and people must make a decision based on how much harm will be done to one person/group versus another.

    12. The algorithm that would kill its passenger to save 10 presented a hybrid profile.

      Unlike the other two algorithms in Figure 3B, which consistently received many points or few points in all of the categories listed, the algorithm that sacrificed its own passenger to save 10 others received high points in some categories and low points in others.

    13. Study four (n = 267 participants) offers another demonstration of this phenomenon. Participants were given 100 points to allocate between different types of algorithms, to indicate (i) how moral the algorithms were, (ii) how comfortable participants were for other AVs to be programmed in a given manner, and (iii) how likely participants would be to buy an AV programmed in a given manner.

      In Study 4, participants were given a "budget" of 100 points to assign to different algorithms, which was a way for researchers to look at their priorities.

      For example, if there were three algorithms, a participant could choose to allocate 20 points to the first one, 30 points to the second, and 50 to the third. This allowed the authors to directly compare how participants felt about the different algorithms presented in Study 4.

      There were three algorithms, and participants had a separate budget for each in which they answered the questions:

      1. How moral is this algorithm relative to the others?
      2. If self-driving cars were programmed with this particular algorithm over the others, how comfortable would you be with it?
      3. How likely would you buy a self-driving car programmed with this algorithm?
    14. social dilemma

      A scenario in which people will get larger benefits if they act in their own interest rather than the group's interest, even though the entire group will benefit the most if everyone cooperates.
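
      The classic illustration is a prisoner's-dilemma-style payoff table; the numbers below are purely made up for illustration:

      ```python
      # Toy two-player social dilemma: payoff[my_choice][their_choice] = my benefit.
      payoff = {
          "cooperate": {"cooperate": 3, "defect": 0},
          "defect":    {"cooperate": 5, "defect": 1},
      }

      # Whatever the other player does, "defect" pays me more than "cooperate"...
      for theirs in ("cooperate", "defect"):
          print(theirs, payoff["defect"][theirs] > payoff["cooperate"][theirs])  # True, True

      # ...yet if everyone defects, each person ends up with 1 instead of the 3
      # that everyone would get if all cooperated.
      ```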

    15. In study two (n = 451 participants), participants were presented with dilemmas that varied the number of pedestrians’ lives that could be saved, from 1 to 100.

      Participants read scenarios in which the self-driving vehicle would sacrifice its single passenger to save pedestrians, with the number of pedestrians ranging from one to 100.

      The participants were asked which situation (saving the passenger versus some number of pedestrians) would be the most moral choice.

    16. they imagined future AVs as being less utilitarian than they should be.

      The ratings for "What will AVs do?" (whether people think AVs will actually be programmed to sacrifice one passenger over many) are lower than "What should AVs do?" (whether people think they should be programmed to do so.)

      Thus, people are less confident that AVs will actually be programmed to sacrifice their sole passenger to minimize casualties, even though people think that is the most moral approach.

    17. They overwhelmingly expressed a moral preference for utilitarian AVs programmed to minimize the number of casualties (median = 85)

      As shown in the bottom graph of Fig. 2A ("What should AVs do?"), many more people thought it was morally superior for AVs to minimize the number of casualties: There is a high number of responses on the right end of the graph.

      If you sort the responses that everyone gave from lowest to highest and take the middle, or median response, it is also very high, at 85. This indicates that people thought saving as many people as possible was the more moral choice.
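
      For instance, with five made-up ratings, the median is found by sorting them and taking the middle value:

      ```python
      import statistics

      ratings = [100, 40, 85, 90, 10]      # five made-up 0-100 moral ratings
      print(sorted(ratings))               # [10, 40, 85, 90, 100]
      print(statistics.median(ratings))    # 85 -- the middle value after sorting
      ```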

    18. In study one (n = 182 participants), 76% of participants thought that it would be more moral for AVs to sacrifice one passenger rather than kill 10 pedestrians [with a 95% confidence interval (CI) of 69 to 82].

      The authors determined the percentage of people who thought it would be more moral to sacrifice one passenger for the good of the whole, or vice versa. In this study, 76% of the 182 participants said that one passenger should be sacrificed to save 10 pedestrians.

      A 95% confidence interval of 69 to 82 tells you that, if you repeated the survey many times with new random samples from the same population (here, the U.S. public from which the participants were recruited) and computed an interval each time, about 95% of those intervals would contain the true percentage of people who think the AV should sacrifice its passenger for the greater good.

      Thus, a confidence interval gives you a range of values that is very likely to contain the "true" one. A narrower confidence interval means the estimate is more precise.
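
      As a rough illustration (not necessarily the method the authors used), a 95% confidence interval for a proportion can be approximated from the sample size and the observed percentage:

      ```python
      import math

      n = 182          # number of participants in Study 1
      p_hat = 0.76     # observed proportion choosing the utilitarian option

      # Normal-approximation ("Wald") 95% confidence interval for a proportion.
      # The authors may have used a different method, so treat this as a sketch.
      se = math.sqrt(p_hat * (1 - p_hat) / n)
      lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se

      print(f"95% CI: {lower:.0%} to {upper:.0%}")  # roughly 70% to 82%
      ```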

    19. The last item in every study was an easy question (e.g., how many pedestrians were on the road) relative to the traffic situation that participants had just considered. Participants who failed this attention check (typically 10% of the sample) were discarded from subsequent analyses.

      With the last question in each study, the authors were checking if the participant was still paying attention. If the participant got the question wrong, their answers were discarded because the authors assumed that the participant was not actively engaged.

      Attention checks can help make sure that the data you are collecting is higher quality. For example, participants could start speeding through the survey without thinking through their answers, which results in data that is not informative or useful.
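
      As a hypothetical sketch (the paper does not show the authors' actual code), dropping participants who fail an attention check might look like this:

      ```python
      import pandas as pd

      # Hypothetical survey responses: each row is one participant.
      responses = pd.DataFrame({
          "participant":      [1, 2, 3, 4],
          "moral_rating":     [85, 70, 90, 40],
          "attention_answer": [10, 10, 3, 10],   # "How many pedestrians were on the road?"
      })

      CORRECT_ANSWER = 10  # hypothetical correct answer to the attention check

      # Keep only participants who answered the attention-check question correctly.
      clean = responses[responses["attention_answer"] == CORRECT_ANSWER]
      print(f"Kept {len(clean)} of {len(responses)} participants")
      ```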

    20. Amazon Mechanical Turk (MTurk) platform

      Amazon Mechanical Turk (MTurk) is a website that lets users earn money for doing small tasks. It has become increasingly popular for researchers, who use it to reach a wide audience online.

      With MTurk, a large and diverse set of data can be collected in a short period of time.

    21. algorithms

      An algorithm is a set of rules, like a procedure or a formula, that is followed to achieve a goal.

      For example, if you are baking bread, you might follow a recipe. In the same way, a computer can follow a series of steps to solve a problem. Just as there are different recipes for making bread, there are many different algorithms to achieve a single goal.
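
      Written as code, a very small algorithm might look like this: a fixed series of steps that finds the largest number in a list.

      ```python
      def largest(numbers):
          """Return the largest value in a non-empty list by checking each item in turn."""
          best = numbers[0]
          for value in numbers[1:]:
              if value > best:
                  best = value
          return best

      print(largest([3, 41, 7, 19]))  # 41
      ```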

    22. expected value

      The average outcome you would expect for a given scenario over many repetitions, with each possible outcome weighted by how likely it is.

      For example, if you played the lottery, how much should you expect to win, on average, for the money you put in? Combined with the expected risk, this tells you whether something is worthwhile.

    23. expected risk

      The probability that the value you get for a given scenario is very different from the one you expect.

      For example, if you played the lottery and expected to win $5, what is the probability that you wouldn't actually get $5?
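
      To make both terms concrete, here is a small worked example with made-up lottery numbers (the values are purely illustrative):

      ```python
      # A made-up lottery ticket that costs $2:
      #   1% chance of a $100 prize, 99% chance of nothing.
      outcomes = [(0.01, 100.0), (0.99, 0.0)]   # (probability, payout) pairs

      # Expected value: each payout weighted by its probability, minus the ticket price.
      expected_payout = sum(p * payout for p, payout in outcomes)
      print("Expected value per ticket:", expected_payout - 2.0)   # -1.0, a loss on average

      # Expected "risk" in the sense above: the chance the outcome differs
      # a lot from what you expect (here, the 99% chance of winning nothing).
      print("Chance of winning nothing:", outcomes[1][0])          # 0.99
      ```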

    24. our scenarios did not feature any uncertainty about decision outcomes

      The scenarios presented in the study always assumed that someone would be killed by the autonomous vehicle. The authors did not look into situations where the outcome was uncertain, like if the passenger had a greater chance of survival than a pedestrian.

    25. our results suggest that such regulation could substantially delay the adoption of AVs

      Based on the results in this paper, if governments regulate the "morality" of AVs, people would be less likely to buy them, even if they morally approve of the utilitarian behavior being mandated.

    26. Finally, participants were much less likely to consider purchasing an AV with such regulation than without (P < 0.001). The median expressed likelihood of purchasing an unregulated AV was 59, compared with 21 for purchasing a regulated AV.

      Overall, people did not approve of government regulation of utilitarian algorithms, and were much less likely to consider buying an AV if they were regulated by the government.

    27. When we inquired whether participants would agree to see such moral sacrifices legally enforced, their agreement was higher for algorithms than for human drivers (P < 0.002), but the average agreement still remained below the midpoint of the 0 to 100 scale in each scenario.

      The authors asked the participants if there should be a law requiring self-driving cars and human drivers to sacrifice themselves to minimize casualties.

      Participants did not think that this type of moral situation should be regulated in general, as indicated by their lower approval ratings. However, participants were more likely to think that regulations on algorithms should be enforced, as opposed to human drivers.

    28. But would people approve of government regulations imposing utilitarian algorithms in AVs, and would they be more likely to buy AVs under such regulations?

      In the final two studies, the authors wanted to discover if people approve of governments legally enforcing utilitarian algorithms in AVs, and whether people would want to buy AVs if these regulations existed.

    29. free-ride

      Benefit from something, such as other people's cooperation, without contributing your own share of the cost.

    30. self-sacrificing

      Sacrificing yourself for the greater good.

    31. Like the high-valued algorithm, it received high marks for morality (median budget share = 50) and was considered a good algorithm for other people to have (median budget share = 50). But in terms of purchase intention, it received significantly fewer points than the high-valued algorithm (P < 0.001) and was, in fact, closer to the low-valued algorithms (median budget share = 33).

      The high-valued algorithm is the one that sacrificed one pedestrian to save 10 others, which received many points for each of the three categories listed.

      The results showed that participants thought an algorithm that sacrificed one person to save 10 was more moral. The algorithm that sacrificed a passenger to save 10 pedestrians was slightly less preferred than the algorithm that sacrificed a pedestrian to save 10 other pedestrians.

      Participants also thought that other people should have a car programmed to sacrifice its passenger for the greater good, as shown by the high number of points in that category.

      However, like the "low-valued" algorithm that sacrificed one pedestrian to save one other pedestrian, participants were not very willing to buy a car that was programmed to sacrifice its own passenger, as shown by the lower number of points given to this category.

    32. Although the reported likelihood of buying an AV was low even for the self-protective option (median = 50), respondents indicated a significantly lower likelihood (P < 0.001) of buying the AV when they imagined the situation in which they and their family member would be sacrificed for the greater good (median = 19). In other words, even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves.

      Even though participants reported that self-driving cars should sacrifice passengers for the greater good, they reported that they were not very likely to buy such a car. Further, they were much less likely to buy an AV that would sacrifice its passengers if it included family members.

      Taken together with previous results, this shows that even though participants thought that the most moral choice was to sacrifice the passengers for the greater good, when thinking about their own lives, the participants still preferred a car that would not do so.

    33. Imagining that a family member was in the AV negatively affected the morality of the sacrifice, as compared with imagining oneself alone in the AV (P = 0.003). But even in that strongly aversive situation, the morality of the sacrifice was still rated above the midpoint of the scale, with a 95% CI of 54 to 66

      Participants thought that having a family member in the car made the decision of sacrificing the car's passengers less moral.

      Even so, participants thought that sacrificing the car's passengers, including a family member, for the greater good was still overall more moral.

    34. robust to treatments in which they had to imagine themselves and another person, particularly a family member, in the AV

      Even in scenarios when a family member was in the car with them, participants thought that sacrificing the passengers in a self-driving car for the greater good was a more moral decision.

    35. their moral approval increased with the number of lives that could be saved (P < 0.001)

      Participants thought that the more people you could save, the more moral the choice was.

      The very low P value (P < 0.001) means that, if the number of lives saved really had no effect on moral approval, a relationship this strong would appear by chance less than one time in a thousand. This provides strong evidence for the authors' finding.
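
      One way to build intuition for what a P value measures is a small simulation with made-up numbers (an illustration only, not the authors' analysis): pretend there is no real effect, and count how often a result at least as extreme shows up by chance.

      ```python
      import random

      random.seed(0)

      # Toy example: suppose 65 of 100 hypothetical people preferred option A.
      # How often would a result at least that lopsided occur if everyone were
      # really choosing at random (the "null hypothesis")?
      observed = 65
      simulations = 100_000

      as_extreme = 0
      for _ in range(simulations):
          count_a = sum(random.random() < 0.5 for _ in range(100))
          if count_a >= observed:
              as_extreme += 1

      print("Approximate one-sided P value:", as_extreme / simulations)  # about 0.002
      ```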

    36. However, participants were less certain that AVs would be programmed in a utilitarian manner (67% thought so, with a median rating of 70).

      When the participants in Study 1 were asked whether AVs would actually be programmed to save as many people as possible and sacrifice one passenger for the greater good, they had a more varied response.

      67% of the 182 participants thought that AVs would actually be programmed this way.

      This uncertainty is also reflected in the spread of the participants' ratings. The upper graph of Fig. 2A ("What will AVs do?") shows what people thought AVs would actually do, and its responses are more evenly distributed than those in the graph of what AVs should do.

      The median or midpoint value of the participants' responses for the upper graph was 70, which is lower than the median of 85 given for the bottom graph. This suggests that people were less sure that AVs would actually be programmed to minimize casualties, even though most people thought that this is how AVs should be programmed.

    37. Overall, participants strongly agreed that it would be more moral for AVs to sacrifice their own passengers when this sacrifice would save a greater number of lives overall.

      The authors found that, overall, participants thought it was better for a car to sacrifice its own passenger to save a group of pedestrians (because this sacrifice would save a greater number of lives overall).

    38. Regression analyses (see supplementary materials) showed that enthusiasm for self-driving cars was consistently greater for younger, male participants. Accordingly, all subsequent analyses included age and sex as covariates.

      Regression analyses allow you to understand how different things may or may not be related. For example, how does the number of views for a YouTube video change based on the video's content?

      In this study, the authors used regression to investigate how factors like gender, age, and religion might be related to someone's enthusiasm about self-driving cars.

      This is important for finding potential covariates in the experiment. Covariates are characteristics of the people in an experiment, like age, gender, and income. These variables cannot be controlled like experimental ones, but they can affect the outcome of an experiment and bias your results in an unexpected way. They can also reveal interesting trends in society.

      By determining the covariates that could sway the experiment, scientists can make the model more accurate. Here, the authors determined that the experiments should account for age and gender.
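
      As a rough sketch of this kind of analysis (with made-up data and a simplified model, not the authors' exact regression), enthusiasm could be regressed on age and sex like this:

      ```python
      import pandas as pd
      import statsmodels.formula.api as smf

      # Made-up data: enthusiasm about self-driving cars (0-100) plus two covariates.
      data = pd.DataFrame({
          "enthusiasm": [80, 65, 40, 90, 55, 30, 70, 45],
          "age":        [22, 35, 60, 25, 48, 66, 30, 52],
          "sex":        ["male", "male", "female", "male",
                         "female", "female", "male", "female"],
      })

      # Ordinary least squares regression: how does enthusiasm relate to age and sex?
      model = smf.ols("enthusiasm ~ age + sex", data=data).fit()
      print(model.summary())
      ```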

    39. robust

      Strong and reliable; here, a finding is robust if it still holds when the conditions of the experiment are changed.

    40. To align moral algorithms with human values, we must start a collective discussion about the ethics of AVs

      Next Generation Science Standards Disciplinary Core Idea ETS2.B: Influence of Engineering, Technology, and Science on Society and the Natural World

      The student may consider how manufacturing, research, consumer preference, and governmental regulation can impact the widespread adoption of a technology.

    41. saliency

      The quality of being especially noticeable or prominent.

    42. Consider, for example, the case displayed in Fig. 1A, and assume that the most common moral attitude is that the AV should swerve. This would fit a utilitarian moral doctrine (11), according to which the moral course of action is to minimize casualties. But consider then the case displayed in Fig. 1C. The utilitarian course of action, in that situation, would be for the AV to swerve and kill its passenger, but AVs programmed to follow this course of action might discourage buyers who believe their own safety should trump other considerations.

      In Fig. 1C, the car must decide between killing its own passenger or several pedestrians. A utilitarian viewpoint, which calls for the choice resulting in the "most good," would be to sacrifice one (the passenger) to save many.

      However, people who prioritize their own safety will not want to buy a car that's programmed this way because the result could be that their car would sacrifice them.

    43. being consistent

      Self-driving cars should ideally make the same types of decisions consistently.

    44. Distributing harm is a decision that is universally considered to fall within the moral domain (8, 9). Accordingly, the algorithms that control AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm

      If the only way for a self-driving car to avoid hitting a pedestrian is to hit a group of pedestrians, the car has to decide who to harm.

      Thus, there should be moral principles coded into self-driving cars, to help them make decisions when they cannot avoid colliding with something. These moral decisions are difficult to turn into algorithms.

    45. commodity

      A marketable good that is bought and sold. If AVs become popular and spread globally, having effective decision rules will be even more important.

    46. decision rules

      Decision rules are algorithms that tell the autonomous vehicle how to decide on what to do in a given scenario.
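
      Purely as an illustration of the idea (real AV software is far more complex, and this is not how any manufacturer actually programs its cars), a utilitarian decision rule could be sketched as:

      ```python
      def choose_action(options):
          """Toy 'utilitarian' decision rule: pick the action expected to harm the fewest people.

          `options` maps each possible action to the number of people expected to be harmed.
          """
          return min(options, key=options.get)

      # Hypothetical unavoidable-harm scenario like those in Fig. 1:
      # stay the course and hit 10 pedestrians, or swerve and sacrifice the 1 passenger.
      print(choose_action({"stay": 10, "swerve": 1}))  # "swerve"
      ```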

    47. low-probability

      Even though these events are unlikely, if there are many AVs on the road then some will inevitably crash.

    48. make difficult ethical decisions in cases that involve unavoidable harm

      If a crash is unavoidable, AVs are sometimes faced with choices where someone will be hurt no matter what. In these cases, the AV must make a decision about who will be hurt.

    49. increasing traffic efficiency

      Making it so that traffic moves more smoothly.

    50. benchmark test

      Benchmark tests are standards or points of reference that are used to evaluate something's performance. Once a benchmark is established, later performance (under experimental conditions) can be compared to the benchmark.

    51. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

      People who took part in the study do not want the government to regulate the morality of self-driving cars, particularly if they will sacrifice their passengers for the greater good. They are also less likely to buy a car subject to those regulations.

      This may cause more deaths in the long run. Even though AVs are a safer technology, people would not want to buy these vehicles, which would inhibit the widespread adoption of AVs. If fewer people adopt AVs, then the technology (and increases in safety) will improve more slowly.

    52. utilitarian

      In a utilitarian viewpoint, the most moral action is the one that has the best overall consequences for everyone, even if the choices are difficult.

      For example, if you are driving a car and must choose between killing several pedestrians to save yourself or sacrificing yourself to save them, the utilitarian choice is to sacrifice yourself, because fewer people die overall (even though it means that you will die).

  2. Jun 2018
    1. study three

      Common Core State Standards CCSS.ELA-LITERACY.RST.11-12.9

      The student may synthesize information from different experiments into a coherent understanding of participants' attitudes towards the morality of a self-driving car's action versus buying a car programmed in a specific way.

      http://www.corestandards.org/ELA-Literacy/RST/11-12/9/

    2. some participants may already be familiar with testing materials, particularly when these materials are used by many research groups

      Some research groups make their data and testing materials widely available for others to use.

      A feature from the American Psychological Association lists several advantages of doing this, including making science widely accessible and being able to reproduce previous results.

      However, participants who have already seen test materials in a different study might not give answers that reflect what they truly think, which may affect the study's results.

    3. E. A. Posner, C. R. Sunstein, Univ. Chic. Law Rev. 72, 537–598 (2005).

      This article discusses how governments should assign dollar values to human lives when someone dies. One consideration is whether children and adults should be valued differently.

    4. W. Wallach, C. Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2008).

      This book discusses how robots should be able to factor morality and ethics into their decisions, and discusses some ways engineers can program morality into machines.

    5. experimental ethics

      Experimental ethics investigates philosophical questions with experiments.

    6. Amazon Mechanical Turk

      Amazon Mechanical Turk is a website that lets users earn money for doing small tasks. It has become increasingly popular for researchers, who use it to reach a wide audience online. A large and diverse set of data can be collected in a short period of time.

    7. R. M. Dawes, Annu. Rev. Psychol. 31, 169–193 (1980).

      This study reviews the nature of social dilemmas, investigates potential solutions to social dilemmas, and lists psychological studies that have shed insights on the topic.

    8. K. Spieser et al., in Road Vehicle Automation, G. Meyer, S. Beiker, Eds. (Lecture Notes in Mobility Series, Springer, 2014), pp. 229–245.

      This paper investigates what would happen if personal transportation in Singapore was replaced by self-driving cars. Results suggest that far fewer vehicles would need to be on the road if such a system were implemented.

    9. M. M. Waldrop, Nature 518, 20–23 (2015).

      This feature article describes the rise of driverless cars and the technologies being implemented to make them drive on roads.

      The article argues that self-driving vehicles could help make roads safer, as about 90% of driving accidents are due to human error.

    10. B. Deng, Nature 523, 24–26 (2015).

      This feature article describes the challenges in programming robots to make ethical decisions.

    11. M. Montemerlo et al., J. Field Robot. 25, 569–597 (2008); C. Urmson et al., J. Field Robot. 25, 425–466 (2008).

      References 1 and 2 describe two entries for the DARPA Urban Challenge, which brought together self-driving cars that could navigate realistic city environments.

    12. The algorithm that swerved into one to save 10 always received many points, and the algorithm that swerved into one to save one always received few points

      Across all three categories listed in Figure 3B, participants gave many of their 100 points to the algorithm that sacrificed one pedestrian to save 10 other pedestrians, and few points to the algorithm that sacrificed one pedestrian to save only one other pedestrian.

  3. Jan 2018
  4. Dec 2017
    1. real-road driving

      These tests involved cars driving on actual roads, instead of test (or "simulated") roads used for experiments.

    2. Autonomous vehicles

      Autonomous vehicles can navigate an environment without input from humans. One example of an autonomous vehicle is a self-driving car.