16 Matching Annotations
  1. Oct 2020
  2. Jul 2020
  3. Jun 2020
  4. May 2020
  5. Feb 2020
    1. Research ethics concerns issues, such as privacy, anonymity, informed consent and the sensitivity of data. Given that social media is part of society’s tendency to liquefy and blur the boundaries between the private and the public, labour/leisure, production/consumption (Fuchs, 2015a: Chapter 8), research ethics in social media research is particularly complex.
  6. Mar 2019
    1. Worse yet, it wouldn’t surprise me if we saw more unethical people publish data as a strategic communication tool, because they know people tend to believe numbers more than personal stories. That’s why it’s so important to have that training on information literacy and methodology.”

      Like the way unethical people use statistics in general? This should be a concern, especially as government data, long considered the gold standard of data, undergoes attacks that would skew it toward political ends. (See the 2020 census.)

  7. Aug 2018
    1. as boyd and Crawford argue, ‘without taking into account the sample of a data set, the size of the data set is meaningless’ (2012: 669). Furthermore, many techniques used by the state and corporations in big data analysis are based on probabilistic prediction which, some experts argue, is alien to, and even incomprehensible for, human reasoning (Heaven 2013). As Mayer-Schönberger stresses, we should be ‘less worried about privacy and more worried about the abuse of probabilistic prediction’ as these processes confront us with ‘profound ethical dilemmas’ (in Heaven 2013: 35).

      Primary problems to resolve regarding the use of "big data" in humanitarian contexts: dataset size and sampling, predictive analytics that run contrary to human reasoning, and ethical abuses of PII.

  8. Mar 2018
    1. For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it.

      We have hardly begun to engineer data ethics appropriate for our extraordinary information carnival.

  9. Oct 2017
    1. OxTREC Reference: 593-16

      Kudos for listing the identifier. I googled that, which brought me to http://researchsupport.admin.ox.ac.uk/governance/ethics/committees/oxtrec, and under "Approved studies 2016", I found the entry

      593-16 Elizabeth Pisani What makes data sharing work? WWARN case study Minimal Risk N/A 16/5/16

      which is way more transparent than most ethics statements in published papers.

      Also kudos to the Central University Research Ethics Committee (CUREC) for the decision highlighted under the "Approved studies 2016" headline:

      It was agreed by CUREC in early 2016 that, in the interest of transparency, OxTREC should make publicly available a list of studies that it has approved. The list below sets out those studies that have been approved by OxTREC since January 2016.

  10. Sep 2016
    1. the risk of re-identification increases by virtue of having more data points on students from multiple contexts

      Very important to keep in mind. Not only do we realise that re-identification is a risk, but this risk is exacerbated by the increase in “triangulation”. Hence some discussions about Differential Privacy.
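
      To make the Differential Privacy idea concrete: the standard approach is to add calibrated noise to query results so that any one student's presence or absence barely changes what is released. Below is a minimal, illustrative sketch of the Laplace mechanism for a counting query; the student records and the epsilon value are hypothetical, not from any real system.

      ```python
      import math
      import random

      def laplace_noise(scale: float) -> float:
          """Sample zero-centred Laplace noise via the inverse-CDF method."""
          u = random.random() - 0.5
          sign = 1.0 if u >= 0 else -1.0
          return -scale * sign * math.log(1.0 - 2.0 * abs(u))

      def private_count(records, predicate, epsilon: float) -> float:
          """Release a count with epsilon-differential privacy.

          A counting query has sensitivity 1: adding or removing one
          student changes the true count by at most 1, so Laplace noise
          with scale 1/epsilon suffices.
          """
          true_count = sum(1 for r in records if predicate(r))
          return true_count + laplace_noise(1.0 / epsilon)

      # Hypothetical student records: (student_id, logged_in_after_midnight)
      records = [("s1", True), ("s2", False), ("s3", True)]
      noisy = private_count(records, lambda r: r[1], epsilon=0.5)
      ```

      Smaller epsilon means more noise and stronger privacy; the point for the triangulation concern above is that the guarantee holds even when an adversary combines the released count with other data sources.
      
      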

    2. the automatic collection of students’ data through interactions with educational technologies as a part of their established and expected learning experiences raises new questions about the timing and content of student consent that were not relevant when such data collection required special procedures that extended beyond students’ regular educational experiences

      Useful reminder. It sounds a bit like “now that we have easier access to data, we have to be particularly careful”. Probably not the first reflex of most researchers before they send forms to their IRBs. Important for this to be explicitly designated as a concern in IRB processes.

    3. Responsible Use

      Again, this is probably a more felicitous wording than “privacy protection”. Sure, it takes as a given that some use of data is desirable. And the preceding section makes it sound like Learning Analytics advocates mostly need ammun… arguments to push their agenda. Still, the notion that we want to advocate for responsible use is more likely to find common ground than this notion that there’s a “data faucet” that should be switched on or off depending on certain stakeholders’ needs. After all, there exists a set of data use practices which are either uncontroversial or, at least, accepted as “par for the course” (no pun intended). For instance, we probably all assume that a registrar should receive the grade data needed to grant degrees and we understand that such data would come from other sources (say, a learning management system or a student information system).

    1. the use of data in scholarly research about student learning; the use of data in systems like the admissions process or predictive-analytics programs that colleges use to spot students who should be referred to an academic counselor; and the ways colleges should treat nontraditional transcript data, alternative credentials, and other forms of documentation about students’ activities, such as badges, that recognize them for nonacademic skills.

      Useful breakdown. Research, predictive models, and recognition are quite distinct from one another, and the approaches to data that they imply are quite different. In a way, the “personalized learning” model at the core of the second topic is close to the Big Data attitude (collect all the things and sense will come through eventually) with corresponding ethical problems. Though projects vary greatly, research has a much more solid base in both ethics and epistemology than the kind of Big Data approach used by technocentric outlets. The part about recognition, though, opens the most interesting door. Microcredentials and badges are part of a broader picture. The data shared in those cases need not be so comprehensive, and learners have a lot of agency in the matter. In fact, when Charles Tsai, then at Ashoka, interviewed Mozilla executive director Mark Surman about badges, the message was quite clear: badges are a way to rethink education as a learner-driven “create your own path” adventure. The contrast between the three models reveals a lot: from the abstract world of research, to the top-down models of Minority Report-style predictive educating, all the way to a form of heutagogy. Lots to chew on.

  11. Dec 2015
    1. The idea was to pinpoint the doctors prescribing the most pain medication and target them for the company’s marketing onslaught. That the databases couldn’t distinguish between doctors who were prescribing more pain meds because they were seeing more patients with chronic pain or were simply looser with their signatures didn’t matter to Purdue.