4 Matching Annotations
  1. Jan 2026
    1. One set of powers that researchers now have is the ability to observe people’s behavior without their consent or awareness. Researchers could, of course, do this in the past, but in the digital age, the scale is completely different, a fact that has been proclaimed repeatedly by many fans of big data sources.

      The discussion of unanticipated secondary use connects modern data practices to historical harms such as the use of census data during the Holocaust and other genocides. This challenges the assumption that data collected for benign purposes will remain benign. It also suggests that ethical evaluation must consider future political and social changes, not just present-day intentions.

    2. In the analog age, most social research had a relatively limited scale and operated within a set of reasonably clear rules. Social research in the digital age is different. Researchers—often in collaboration with companies and governments—have more power over participants than in the past, and the rules about how that power should be used are not yet clear. By power, I mean simply the ability to do things to people without their consent or even awareness. The kinds of things that researchers can do to people include observing their behavior and enrolling them in experiments. As the power of researchers to observe and perturb is increasing, there has not been an equivalent increase in clarity about how that power should be used. In fact, researchers must decide how to exercise their power based on inconsistent and overlapping rules, laws, and norms. This combination of powerful capabilities and vague guidelines creates difficult situations.

      The author defines power as the ability to observe or experiment on people without their consent or awareness. This framing is important because it shifts the ethical focus away from intent and toward capacity. Even if researchers have good intentions, the mere ability to act without consent introduces ethical responsibility. This makes digital research fundamentally different from analog research, where scale limited this kind of power.

    1. For example, during the 2014 Ebola outbreak, public health officials wanted information about the mobility of people in the most heavily infected countries in order to help control the outbreak. Mobile phone companies had detailed call records that could have provided some of this information. Yet ethical and legal concerns bogged down researchers’ attempts to analyze the data (Wesolowski et al. 2014; McDonald 2016). If we, as a community, can develop ethical norms and standards that are shared by both researchers and the public—and I think we can do this—then we can harness the capabilities of the digital age in ways that are responsible and beneficial to society.

      The Ebola mobility example illustrates what the author calls the chilling effect of ethical uncertainty: ambiguity about the rules can block ethical and important research. This connects to debates during COVID-19 about contact tracing apps, where privacy concerns slowed adoption even though population-level data could have supported public health responses. It raises a justice question about who bears the cost when ethical caution delays research, especially when vulnerable populations are most affected.

    2. There is currently uncertainty about the appropriate conduct of some digital-age social research. This uncertainty has led to two related problems, one of which has received much more attention than the other. On the one hand, some researchers have been accused of violating people’s privacy or enrolling participants in unethical experiments. These cases—which I’ll describe in this chapter—have been the subject of extensive debate and discussion. On the other hand, the ethical uncertainty has also had a chilling effect, preventing ethical and important research from happening, a fact that I think is much less appreciated. For example, during the 2014 Ebola outbreak, public health officials wanted information about the mobility of people in the most heavily infected countries in order to help control the outbreak. Mobile phone companies had detailed call records that could have provided some of this information.

      The author argues that ethical uncertainty in the digital age comes from researchers’ rapidly increasing ability to observe and experiment on people without consent or awareness. How should we evaluate responsibility when harm is unintentional but foreseeable? For example, if researchers could reasonably anticipate privacy risks from large-scale data linkage, does beneficence require them to refrain from the study entirely, or only to mitigate harm after the fact?