353 Matching Annotations
  1. Jun 2024
    1. https://web.archive.org/web/20240630131807/https://www.w3.org/TR/ethical-web-principles/

      Dated June 2024, a set of 'ethical' principles for the web. Curiously, it never mentions linking, not even in the context of the principle of enabling verification of information.

      Some of it makes for a handy checklist to run against my own website / web activities though.

  2. Feb 2024
    1. T. Herlau, "Moral Reinforcement Learning Using Actual Causation," 2022 2nd International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 2022, pp. 179-185, doi: 10.1109/ICCCR54399.2022.9790262. keywords: {Digital control;Ethics;Costs;Philosophical considerations;Toy manufacturing industry;Reinforcement learning;Forestry;Causality;Reinforcement learning;Actual Causation;Ethical reinforcement learning}

    1. Joy, Bill. “Why the Future Doesn’t Need Us.” Wired, April 1, 2000. https://www.wired.com/2000/04/joy-2/.

      Annotation url: urn:x-pdf:753822a812c861180bef23232a806ec0

      Annotations: https://jonudell.info/h/facet/?user=chrisaldrich&url=urn%3Ax-pdf%3A753822a812c861180bef23232a806ec0&max=100&exactTagSearch=true&expanded=true

    2. Verifying compliance will also require that scientists and engineers adopt a strong code of ethical conduct, resembling the Hippocratic oath, and that they have the courage to whistleblow as necessary, even at high personal cost.

      another suggestion for professional ethics oaths for scientists and engineers...

    3. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.
  3. Jan 2024
    1. the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
      • for: canonical unit, collaborative commons - missing part - open learning commons, question - process trap - natural capital

      • comment

        • in this context, indyweb and Indranet are not the canonical unit, but then, it seems the model is fundamentally missing the functionality provided by the Indyweb and Indranet, which is an open learning system.
        • without such an open learning system that captures the essence of how humans learn, the activity of problem-solving cannot be properly contextualised, along with all of its limitations leading to progress traps.
        • The entire approach of posing a problem, then solving it is inherently limited due to the fractal intertwingularity of reality.
      • question: progress trap - natural capital

        • It is important to be aware that there is a real potential for a progress trap to emerge here, as any metric is liable to be abused
      • for: elephants in the room - financial industry at the heart of the polycrisis, polycrisis - key role of finance industry, Marjorie Kelly, Capitalism crisis, Laura Flanders show, book - Wealth Supremacy - how the Extractive Economy and the Biased Rules of Capitalism Drive Today's Crises

      • Summary

        • This talk really emphasizes the need for the Stop Reset Go / Deep Humanity Wealth to Wellth program
        • Interviewee Marjorie Kelly started Business Ethics magazine in 1987 to show the positive side of business. After 30 years, she found that it was still tinkering at the edges. Why? Because it wasn't addressing the fundamental issue.
        • Why there hasn't been noticeable change in spite of all these progressive efforts is because we avoided questioning the fundamental assumption that maximizing returns to shareholders and gains to shareholder portfolios is good for people and planet. It turns out that it isn't. It's fundamentally bad for civilization and has played a major role in shaping today's polycrisis.
        • Why wealth supremacy is entangled with white supremacy
        • Financial assets are the subject
          • Equity and bonds used to be equal to GDP in the 1950s.
          • Now it's 5 times as much
        • Financial assets extract too much from common people
        • Question: Families are swimming in debt. Who owns all this financial debt? ...The financial elites do.
      • meme

        • wealth supremacy and white supremacy are entangled
  4. Dec 2023
    1. Their ideas of possible action vary from important-looking signed pronouncements and protests to the withholding of services and the refusal to assist in technical developments that may be misapplied.

      Not too dissimilar from programmers who add licensing to their work now to prevent it from being misused.



  5. Nov 2023
    1. I am even more attuned to creative rights. We can address algorithms of exploitation by establishing creative rights that uphold the four C’s: consent, compensation, control, and credit. Artists should be paid fairly for their valuable content and control whether or how their work is used from the beginning, not as an afterthought.

      Consent, compensation, control, and credit for creators whose content is used in AI models

  6. Oct 2023
    1. https://web.archive.org/web/20231019053547/https://www.careful.industries/a-thousand-cassandras

      "Despite being written 18 months ago, it lays out many of the patterns and behaviours that have led to industry capture of 'AI Safety'," writes co-author Rachel Coldicutt (with Anna Williams and Mallory Knodel, for Open Society Foundations).

      For Open Society Foundations by 'careful industries', which is a UK-based research/consultancy founded in 2019. Subscribed to the 2 authors on M, and the blog.

      A Thousand Cassandras in Zotero.

    1. Foster international collaboration on PPDSA through promotion of partnerships and an international policy environment that furthers the development and adoption of PPDSA technologies and supports common values while protecting national and economic security

      Another cross-over from paper with Anna

    2. Elevate and promote foundational and use-inspired research through investments in multidisciplinary research that will advance practical deployment of PPDSA approaches and exploratory research to develop the next generation of PPDSA technologies.
    3. Advance governance and responsible adoption through the establishment of a multi-partner steering group to help develop and maintain a healthy PPDSA ecosystem, greater clarity on the use of PPDSA technologies within the statutory and regulatory environments, and proactive risk mitigation measures.

      Some implications for the Research Harmonization paper with Anna

    4. PPDSA technologies will be created and used in a manner that stimulates responsible scientific research and innovation, and enables individuals and society to benefit equitably from the value derived from data sharing and analytics

      Data Sharing and Ethics

  7. Sep 2023
      • for: bio-buddhism, buddhism - AI, care as the driver of intelligence, Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, care drive, care light cone, multiscale competency architecture of life, nonduality, no-self, self - illusion, self - constructed, self - deconstruction, Bodhisattva vow
      • title: Biology, Buddhism, and AI: Care as the Driver of Intelligence
      • author: Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane
      • date: May 16, 2022
      • source: https://www.mdpi.com/1099-4300/24/5/710/htm

      • summary

        • a trans-disciplinary attempt to develop a framework to deal with a diversity of emerging non-traditional intelligence from new bio-engineered species to AI based on the Buddhist conception of care and compassion for the other.
        • very thought-provoking and some of the explanations and comparisons to evolution actually help to cast a new light on old Buddhist ideas.
        • this is a trans-disciplinary paper synthesizing Buddhist concepts with evolutionary biology
    1. We would appeal to his trust and how people would not trust him on the future. Also, he is assuming I would do the same. He thinks I would act in the same way

    1. synthetic bioengineering provides a really astronomically large option space for new bodies and new minds that don't have 00:04:28 standard evolutionary backstories
      • for: cultural evolution, cumulative cultural evolution, CCE, bioengineering, novel life form, culturally evolved life, bioethics, progress trap, progress trap - bioengineering, progress trap - genetic engineering
      • comment
        • cultural evolution, which itself emerges from biological evolution, is acting upon itself to create new life forms that have no evolutionary backstory
        • this is tantamount to playing God
        • progress traps often emerge out of the large speed mismatch between cultural and biological/genetic evolution.
        • Nowhere is this more profound than in bioengineering of new forms of life with no evolutionary history
        • This presents profound ethical challenges
  8. Aug 2023
    1. One of the most common examples was in the field of criminal justice, where recent revelations have shown that an algorithm used by the United States criminal justice system had falsely predicted future criminality among African-Americans at twice the rate as it predicted for white people

      holy shit....bad!!!!!

    2. automated decisions

      What are all the automated decisions currently being made by AI systems globally? How to get a database/list of these?

    3. The idea that AI algorithms are free from biases is wrong since the assumption that the data injected into the models are unbiased is wrong

      Computational != objective! Common idea rests on lots of assumptions

  9. Jun 2023
      Overview of how tech changes work moral changes. Seems to me a detailing of [[Monstertheorie 20030725114320]] diving into a specific part of it, where cultural categories are adapted to fit new tech in. #openvraag do the sources contain refs to either Monster theory by Smits or the anthropological work of Mary Douglas? Checked: it doesn't, but does cite refs by PP Verbeek and Marianne Boenink, so no wonder there's a parallel here.

      The first example mentioned points in this direction too: the 70s redefinition of death as brain death, where it used to be the heart stopping (now heart failure is a cause of death), was a redefinition of cultural concepts to assimilate tech change. The third example is a direct parallel to my [[Empathie verschuift door Infrastructuur 20080627201224]] [[Hyperconnected individuen en empathie 20100420223511]]

      Where Monster theory is a tool to understand and diagnose discussions of new tech, wherein the assimilation part (both cultural categories and tech get adapted) is the pragmatic route (where the mediation theory of PP Verbeek is located), it doesn't as such provide ways to act / intervene. Does this taxonomy provide agency?

      Or is this another way to locate where moral effects might take place, but still the various types of responses to Monsters still may determine the moral effect?

      Zotero antilib Mechanisms of Techno-moral Change

      Via Stephen Downes https://www.downes.ca/post/75320

    1. As the EU heads toward significant AI regulation, Altman recently suggested such regulation might force his company to pull out of Europe. The proposed EU regulation, of course, is focused on copyright protection, privacy rights, and suggests a ban on certain uses of AI, particularly in policing — all concerns of the present day. That reality turns out to be much harder for AI proponents to confront than some speculative future

      While wrongly describing the EU regulation on AI, author rightly points to the geopolitical reality it is creating for the AI sector. AIR is focused on market regulation, risk mitigation wrt protection of civic rights and critical infrastructure, and monopoly-busting/level playing field. Threatening to pull out of the EU is an admission you don't want to be responsible for your tech at all. And it thus belies the ethical concerns voiced through proximate futurising. Also AIR is just one piece of that geopolitical construct, next to GDPR, DMA, DSA, DGA, DA and ODD which all consistently do the same things for different parts of the digital world.

    2. In 2010, Paul Dourish and Genevieve Bell wrote a book about tech innovation that described the way technologists fixate on the “proximate future” — a future that exists “just around the corner.” The authors, one a computer scientist, and the other a tech industry veteran, were examining emerging tech developments in “ubiquitous computing,” which promised that the sensors, mobile devices, and tiny computers embedded in our surroundings would lead to ease, efficiency, and general quality of life. Dourish and Bell argue that this future focus distracts us from the present while also absolving technologists of responsibility for the here and now.

      Proximate Future is a future that is 'nearly here' but never quite gets here. Ref posits this is a way to distract from issues around a tech now and thus lets technologists dodge responsibility and accountability for the now, as everyone debates the issues of a tech in the near future. It allows the technologists to set the narrative around the tech they develop. Ref: [[Divining a Digital Future by Paul Dourish Genevieve Bell]] 2010

      Vgl the suspicious call for reflection and pause wrt AI by OpenAI's people and other key players. It's a form of [[Ethics futurising dark pattern 20190529071000]]

      It may not be a fully intentional bait and switch all the time though: tech predictions, including the Gartner hype cycle, put future key events a steady 10 years into the future. And I've noticed the same when it comes to open data readiness, and before that knowledge management (present vs desired, [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]). It simply seems the human capacity to project ourselves into the future has a horizon of about 10 years.

      Contrast with: adjacent possible which is how you make your path through [[Evolutionair vlak van mogelijkheden 20200826185412]]. Proximate Future skips actual adjacent possibles to hypothetical ones a bit further out.

    1. Technology is valuable and empowering, but at what end direct cost? Consumers don't have available data for the actual costs of the options they're choosing in many contexts.

      What if that reprocessing costs the equivalent of three glasses of water? Is it worth it for our environment, especially when the direct costs to the "consumer" are hidden in advertising models?

      (via Brenna)

  10. May 2023
    1. the Carthusian monks decided in 2019 to limit Chartreuse production to 1.6 million bottles per year, citing the environmental impacts of production, and the monks' desire to focus on solitude and prayer.[10] The combination of fixed production and increased demand has resulted in shortages of Chartreuse across the world.

      In 2019, Carthusian monks went back to their values and decided to scale back their production of Chartreuse.

    1. must have an alignment property

      It is unclear what form the "alignment property" would take, and most importantly how such a property would be evaluated especially if there's an arbitrary divide between "dangerous" and "pre-dangerous" levels of capabilities and alignment of the "dangerous" levels cannot actually be measured.

    1. study done this past December to get a sense of how possible this is: "Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers" – Catherine Gao, et al. (2022). Blinded human reviewers were given a mix of real paper abstracts and ChatGPT-generated abstracts for submission to 5 of the highest-impact medical journals.

      I think these types of tests can only result in showing humans failing at them, because the test is reduced to judging only the single artefact as a thing in itself, no context etc. That's the basic element of all cons: make you focus narrowly on something, where the facade is, and not where you would find out it's fake. Turing isn't about whether something is human, but whether we can be made to believe it is human. And humans can be made to believe a lot. Turing needs to keep you from looking behind the curtain / in the room to make the test work, even in its shape as a thought experiment. The study (judging by the sentences here) is a Turing test in the real world. Why would you not look behind the curtain? This is the equivalent of MIT's tedious trolley problem fixation and calling it ethics of technology, without ever realising that the way out of their false dilemmas is acknowledging that nothing is ever a di-lemma but always a multi-lemma: there are always myriad options to go for.

  11. Apr 2023
    1. just than the State

      I think this is yet to be seen. Although it is true that the computer always gives the same output given the same input code, a biased network with oppressive ideologies could simply transform, instead of change, our current human judiciary enforcement of the law.

    1. In other words, the currently popular AI bots are ‘transparent’ intellectually and morally — they provide the “wisdom of crowds” of the humans whose data they were trained with, as well as the biases and dangers of human individuals and groups, including, among other things, a tendency to oversimplify, a tendency for groupthink, and a confirmation bias that resists novel and controversial explanations

      Not just trained with, also trained by. Is it fully transparent though? Perhaps from the trainers'/tools' standpoint, but users are likely to fall for the tool abstracting its origins away, ELIZA style, and project agency and thus morality on it.

    1. In Vice, Maggie Puniewska points to the moral foundations theory, according to which liberals and conservatives prioritize different ethics: the former compassion, fairness and liberty, the latter purity, loyalty and obedience to authority.
    1. Recommended Source

      Under the "More on Philosophies of Copyright" section, I recommended adding the scholarly article by Chinese scholar Peter K. Yu that explains how the Chinese philosophy of Yin-Yang can address the contradictions in effecting or eliminating intellectual property laws. One of the contradictions is in intellectual property laws protecting individual rights while challenging sustainability efforts for future generations (as climate change destroys more natural resources).

      Yu, Peter K., Intellectual Property, Asian Philosophy and the Yin-Yang School (November 19, 2015). WIPO Journal, Vol. 7, pp. 1-15, 2015, Texas A&M University School of Law Legal Studies Research Paper No. 16-70, Available at SSRN: https://ssrn.com/abstract=2693420

      Below is a short excerpt from the article that details Chinese philosophical thought on IP and sustainability:

      "Another area of intellectual property law and policy that has made intergenerational equity questions salient concerns the debates involving intellectual property and sustainable development. Although this mode of development did not garner major international attention until after the 1992 Earth Summit in Rio de Janeiro, the Yin-Yang school of philosophy—which “offers a normative model with balance, harmony, and sustainability as ideals”—provides important insight into sustainable development."

    1. But I also don’t think that a company that creates harmful technology should be excused simply because they’re bad at it.

      Being crap at doing harm doesn't allow you to claim innocence of doing harm.

  12. Mar 2023
    1. Ganguli, Deep, Askell, Amanda, Schiefer, Nicholas, Liao, Thomas I., Lukošiūtė, Kamilė, Chen, Anna, Goldie, Anna et al. "The Capacity for Moral Self-Correction in Large Language Models." arXiv, (2023). https://doi.org/10.48550/arXiv.2302.07459.


      We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing harmful outputs -- if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.

    1. A.I. Is Mastering Language. Should We Trust What It Says? by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.

      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing, papering over the problems and letting the bad things come too easily to pass?

    2. How do we make them ‘‘benefit humanity as a whole’’ when humanity itself can’t agree on basic facts, much less core ethics and civic values?
    3. ‘‘So long as so-called A.I. systems are being built and deployed by the big tech companies without democratically governed regulation, they are going to primarily reflect the values of Silicon Valley,’’ Emily Bender argues, ‘‘and any attempt to ‘teach’ them otherwise can be nothing more than ethics washing.’’

      ethics washing!

    1. For open educators, this runs counter to the very reason we use OER in the first place. Many open educators choose OER because there are legal permissions that allow for the ethical reuse of other people’s material — material the creators have generously and freely made available through the application of open licenses to it. The thought of using work that has not been freely gifted to the commons by the creator feels wrong for many open educators and is antithetical to the generosity inherent in the OER community.
    1. “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical Al development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”

      Synthetic human behavior as AI bright line

      Quote from Bender

  13. Feb 2023
    1. In my most recent field (software), the engineer is placated with the delusion that the purpose is to give the customer what ey wants, whether that solves the customer’s problem or not. This is a lazy sort of self-imposed servitude that entirely avoids the actual purpose of engineering.

      One reason why software engineering isn't "real" engineering: no ethical obligation to "the public good".

    1. https://web.archive.org/web//https://www.binnenlandsbestuur.nl/bestuur-en-organisatie/io-research/morele-vragen-rijksambtenaren-vaak-onvoldoende-opgevolgd

      #nieuw reden:: Different kinds of moral dilemmas among (national) civil servants according to I&O. Note: only around the policy process, not digital / data matters. #openvraag is there a qualitative difference between those 2 kinds of questions, or are they different manifestations of the same questions? How do the questions relate to the 7 standards of public governance? Why isn't that the classification then? #2022/06/30

    1. Ghosting is only a problem for the ghosted if the ghosted values the opinion/support/endorsement of the ghoster.

      So the real questions are: Why does one value the ghoster? Is that valuation warranted?

  14. Jan 2023
    1. Students working together in a group trying to make meaning out of their own data could find themselves in a similar situation. Your lack of imagination about your own data may not result in a lack of imagination by others about what they think of you. Putting students in these situations without preparing them about assumptions they might make of others could lead to embarrassment and misunderstanding. 

      This is a really interesting point about all kinds of self-disclosure in the classroom, but especially disclosing what third-parties think about you.

    1. Environmentalists say bulldozing the village to expand the Garzweiler mine would result in huge amounts of greenhouse gas emissions. The government and utility company RWE argue the coal is needed to ensure Germany's energy security. The regional and national governments, both of which include the environmentalist Green party, reached a deal with RWE last year allowing it to destroy the abandoned village in return for ending coal use by 2030, rather than 2038. Some speakers at Saturday's demonstration assailed the Greens, whose leaders argue that the deal fulfils many of the environmentalists' demands and saved five other villages from demolition. "It's very weird to see the German government, including the Green party, make deals and compromise with companies like RWE, with fossil fuel companies, when they should rather be held accountable for all the damage and destruction they have caused," Thunberg said. "My message to the German government is that they should stop what's happening here immediately, stop the destruction, and ensure climate justice for everyone."

      Assuming the facts are correct and complete here, it's surprisingly naive of Thunberg to take this view. One unknown is whether the displaced villagers were suitably compensated for being evicted. Still, taking 8 years off the deadline to end coal use - that's a pretty massive win and could set the stage for even more in the future.

    1. Excellent article on the complex nature of rape. The key point for me is that too many people think it's always a black-and-white matter. In fact, the boundary between rape and not-rape is not that crisp. There is a boundary layer here. I think that if more people realized every boundary is really a boundary layer, there would be fewer conflicts about such matters.

  15. Dec 2022
    1. "If you don’t know, you should just say you don’t know rather than make something up," says Stanford researcher Percy Liang, who spoke at a Stanford event Thursday.

      Love this response

    1. https://shkspr.mobi/blog/2022/12/the-ethics-of-syndicating-comments-using-webmentions/

      Not an answer to the dilemma, though I generally take the position of keeping everything unless someone asks me to take it down or that I might know that it's been otherwise deleted. Often I choose not to delete my copy, but simply make it private and only viewable to me.

      On the deadnaming and related issues, it would be interesting to create a webmention mechanism for the h-card portions so that users might update these across networks. To some extent Automattic's Gravatar system does this in a centralized manner, but it would be interesting to see it separately. Certainly not as big an issue as deadnaming, but there's a similar problem on some platforms like Twitter where people will change their display name regularly for either holidays, or lately because they're indicating they'd rather be found on Mastodon or other websites.

      The webmention spec does contain details for both editing/deleting content and resending webmentions to edit and/or remove the original. Ideally this would be more broadly adopted and used in the future to eliminate the need for making these choices by leaving the choice up to the original publisher.
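      The edit/delete mechanics the spec describes boil down to re-sending the exact same notification: create, update, and delete all use one POST of a source/target pair, and the receiver re-fetches the source to see what changed. A minimal sketch (the URLs are hypothetical, and endpoint discovery is omitted):

```python
from urllib.parse import urlencode

def webmention_payload(source: str, target: str) -> bytes:
    """Form-encoded body for a Webmention notification (W3C spec).

    The same POST serves create, update, and delete: the receiver
    re-fetches `source` and, if it now returns 410 Gone or no longer
    links to `target`, updates or removes its stored copy.
    """
    return urlencode({"source": source, "target": target}).encode()

# Hypothetical URLs; re-send this same payload after editing or
# deleting the source page to propagate the change to receivers.
body = webmention_payload(
    "https://example.com/my-reply",
    "https://other.example/original-post",
)
# POST `body` to the receiver's advertised Webmention endpoint with
# Content-Type: application/x-www-form-urlencoded
print(body.decode())
```

      In practice this is why broader adoption would dissolve the dilemma above: the original publisher just re-notifies, and receivers reconcile.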

      Beyond this, often on platforms that don't have character limits (Reddit for example), I'll post at the bottom of my syndicated copy of content that it was originally published on my site (along with the permalink) and explicitly state that I aggregate the replies from various locations, which also helps to let people know that they might find additional context or conversation at the original post should they be interested. Doing this on Twitter, Mastodon, et al. is much harder due to space requirements obviously.

      While most responses I send would fall under fair use for copying, I also have a Creative Commons license on my text in an effort to help others feel more comfortable with having copies of my content on their sites.

      Another ethical layer to this is interactions between sites which both have webmentions enabled. To some extent this creates an implicit bi-directional relationship which says, I'm aware that this sort of communication exists and approve of your parsing and displaying my responses.

      The public norms and ethics in this area will undoubtedly evolve over time, so it's also worth revisiting and re-evaluating the issue over time.

  16. Nov 2022
    1. For whom do we make knowledge and why? This question could not be timelier as humanists and administrators seek to make disciplines appear more relevant to students, applicable to social problems, and attendant to political, social, and economic exigencies” (2019, p. 351)
    1. Data minimisation

      Collect only the data that we need to meet our research objectives.

    2. The GDPR places obligations on both: the ‘data controller’, which ‘alone, or jointly with others, determines the purposes and means of the processing of personal data’; and the ‘data processor’, which ‘processes personal data on behalf of the controller’.

      We would be the data controller

    3. It is your responsibility to ensure that your research complies with the data protection laws in all the Member States in which your research data are processed, as well as the GDPR. See in particular Articles 9(4), 8 and 89(3) GDPR.

      Watch out for Germany

    4. The increasing impact of these and other new technologies on our everyday lives and activity is reflected in the letter and spirit of the EU’s 2016 General Data Protection Regulation (GDPR)

      letter about the use of new technologies (artificial intelligence)

    5. Ethics and data protection

      Ethics and data protection



    1. This section concerns research involving goods, software and technologies covered by the EU Export Control Regulation No 482/2009.

      Do the sensors we use have to be evaluated under this regulation?

    2. Pseudonymisation and anonymisation are not the same thing. ‘Anonymised’ means that the data has been rendered anonymous in such a way that the data subject can no longer be identified (and therefore is no longer personal data and thus outside the scope of data protection law). ‘Pseudonymised’ means to divide the data from its direct identifiers so that linkage to a person is only possible with additional information that is held separately. The additional information must be kept separately and securely from the processed data to ensure non-attribution.

      Differences between anonymisation and pseudonymisation
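      The split the quoted definition describes can be made concrete. A minimal sketch (field names invented for illustration), where the key table stands in for the "additional information" that must be held separately and securely:

```python
import secrets

def pseudonymise(records, id_field="name"):
    """Divide data from its direct identifiers (GDPR pseudonymisation).

    Returns (working_data, key_table): the working data carries only a
    random pseudonym; re-identification is possible only via the key
    table, which must be stored separately and securely. Truly
    anonymised data would have no key table at all.
    """
    working, key_table = [], {}
    for rec in records:
        pseudonym = secrets.token_hex(8)  # random, not derived from the identifier
        key_table[pseudonym] = rec[id_field]
        cleaned = {k: v for k, v in rec.items() if k != id_field}
        cleaned["pid"] = pseudonym
        working.append(cleaned)
    return working, key_table

data, keys = pseudonymise([{"name": "Ana", "score": 7}])
# data[0] now has "pid" and "score" but no "name";
# keys maps each pid back to the original identifier
```

      Because the pseudonym is random rather than a hash of the identifier, the working data alone cannot be linked back to a person, which is the point of the separation.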

    3. Collecting personal data (e.g. on religion, sexual orientation, race, ethnicity, etc.) that is not essential to your research may expose you to allegations of ‘hidden objectives’ or ‘mission creep’

      Justify why we want to collect this information

    4. Ethics issues checklist

      Ethics issues checklist - Personal data

    5. 1) Declaration confirming compliance with the laws of the country where the data was collected.

      Here we must specify that the laws of the country where the data is collected are complied with.

    6. name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person (art. 2(a) EU General Data Protection Regulation (GDPR)).

      What makes a person identifiable

    7. Participants must be given an informed consent form and detailed information sheets that:
       - are written in a language and in terms they can fully understand
       - describe the aims, methods and implications of the research, the nature of the participation and any benefits, risks or discomfort that might ensue
       - explicitly state that participation is voluntary and that anyone has the right to refuse to participate and to withdraw their participation, samples or data at any time, without any consequences
       - state how biological samples and data will be collected, protected during the project and either destroyed or reused subsequently
       - state what procedures will be implemented in the event of unexpected or incidental findings (in particular, whether the participants have the right to know, or not to know, about any such findings).

      Details of informed consent

    8. Does it involve invasive techniques (e.g. collection of human cells or tissues, surgical or medical interventions, invasive studies on the brain, TMS etc.)?

      It seems we do not

    9. vulnerable individuals or groups?

      How do they define vulnerable?

    10. Ethics issues checklist

      Ethics issues checklist - Research on human beings

    11. How to complete your ethics self-assessment

      How to complete your ethics self-assessment

    1. But we cannot have become social beings unless we assumed limitations on freedom of action in the struggle for existence. Hence we must have become ethical before we became rational.



  17. Sep 2022
    1. it is now necessary to distinguish “virtue ethics” (the third approach) from “virtue theory”, a term which includes accounts of virtue within the other approaches.

      virtue ethics and virtue theory are somewhat distinct, where the latter deals with virtue from the perspectives of deontology and utilitarianism

    2. Whereas consequentialists will define virtues as traits that yield good consequences and deontologists will define them as traits possessed by those who reliably fulfil their duties, virtue ethicists will resist the attempt to define virtues in terms of some other concept that is taken to be more fundamental. Rather, virtues and vices will be foundational for virtue ethical theories and other normative notions will be grounded in them.
    1. humanity as not only the source and context for technology and its use, but its ultimate yardstick for the constructive use and impact of technology. This may sound obvious, it certainly does to me, but in practice it needs to be repeated to ensure it is used as such a yardstick from the very first design stage of any new technology.

      Vgl [[Networked Agency 20160818213155]] wrt having a specific issue to address that is shared by the user group wielding a tech / tool, in their own context.

      Vgl [[Open data begint buiten 20200808162905]] wrt the only yardstick for open data stems from its role as policy instrument: impact achieved outside in the aimed for policy domains through the increased agency of the open data users.

      Tech impact is not to be measured in eyeballs, usage, revenue etc. That's (understandably) the corporation's singular and limited view, the rest of us should not adopt it as the only possible one.

  18. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. The critical writing on data science has taken the paradoxical position of insisting that normative issues pervade all work with data while leaving unaddressed the issue of data scientists’ ethical agency. Critics need to consider how data scientists learn to think about and handle these trade-offs, while practicing data scientists need to be more forthcoming about all of the small choices that shape their decisions and systems.

      In my opinion, data science is a huge field with a lot of unresolved issues. I believe that critics of data science must be able to understand all the analytics that data scientists use in order to have a valid critique. I also believe that critical writing about data science doesn't do as much justice as actually understanding the analytics would. How would analytics play a role in critical writing?

  19. Aug 2022
    1. We should all transition from thinking about logic as a field of great dead white men and as a field of “geniuses”, to recognizing those men for the flawed creatures they were, whose “genius” relied on the subjugation of many women and BIPOC around them, and ensuring that the Wikipedia, SEP, etc., pages for these logicians acknowledge that.

      This is the wrong approach, because it imposes modern norms on past times. It's illogical and superficial.

      It would be appropriate, though, to carefully review the histories of past logicians and to document more fully the roles that others played in their work, with a clinical and factual dispassion, and with the intention of being accurate and attributing progress to whoever actually did the work.

    1. The purpose of this secular knowledge system is not intrinsically about wellbeing, ethics and goodness per se; it is about the search for truth and efficacy—be that for competing ideas about what is good, for the purposes of competitive advantage in commerce or national prestige, or for destructive purposes linked to warfare and security.
  20. Jul 2022
  21. May 2022
  22. Apr 2022
  23. Mar 2022
    1. Importantly, we had a human in the loop with a firm moral and ethical ‘don’t-go-there’ voice to intervene.

      The human-in-the-loop was a key breakpoint between the model's findings as concepts and the physical instantiation of the model's findings. As the article goes on to say, unwanted outcomes come from both taking the human out of the loop and replacing the human in the loop with someone with a different moral or ethical driver.

  24. Feb 2022
  25. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. The gaps between data scientists and critics are wide, but critique divorced from practice only increases them.

      In my opinion this struggle needs to play out. Data science can always be streamlined to remove the bias that everyone has. The conflict this article speaks of is just the natural progression of what has been happening already. The only way this will get better is through constant struggle, just like everything else.

    1. The paper did not seem to have consent from participants for: (a) Agreeing to participate in the different types of interventions (which have the potential of hurting the health of citizens and could even lead to their death.); (b) using their personal health data to publish a research paper.

      Given that the authors are government actors who can easily access millions of citizens and put them in the study groups they desire, without even telling citizens that they are in a study, I worry that this practice will be popularized by governments in Latin America and put citizens in danger.

      I also want to note that this is a new type of disinformation, where government actors can post content on these repositories and, given that most citizens do NOT know it is NOT peer reviewed, it can help the government actors to validate their public policy. The research paper becomes political propaganda, and the international repository helps to promote the propaganda.

  26. Jan 2022
    1. And there are, again, ethical questions that must be asked and answered when dealing with the quantitative study of human atrocity, which is what we’re ultimately doing when we bring statistical and mathematical methods to the study of slavery.
    1. According to the Nobel laureate Indian economist Amartya Sen, even generosity is a disguised hypocrisy, a self-interested altruism, because if you are good to people because it makes you feel better, then your compassion is self-interested and not selfless.

      Instead of admitting that selflessness is a long-term strategy of self-interest, it cynically abolishes the concept and keeps only self-interest, so that it fits neoliberalism; a circular argument that impoverishes both the word and the mind.

  27. Dec 2021
    1. How do I allow students to voice contentious, ugly, or even ignorant views, so that they can learn without fear of recrimination?

      Too broad of a spectrum here. And why should students not fear recrimination? This is coddling, pure and simple.

    1. people end up being told their needs are not important, and their lives have no intrinsic worth. The last, we are supposed to believe, is just the inevitable effect of inequality; and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society. Presumably it will always be with us. It’s just a matter of degree.

      People being told they don't matter and don't have intrinsic worth is a hallmark of colonialism. It's also been an ethical issue in the study of anthropology for the past 150 years.

      Anthropologist Tim Ingold in Anthropology: Why It Matters touches on some of this issue of comparing one group of people with another rather than looking at and appreciating the value of each separately.

  28. Nov 2021
    1. The dopamine reward system has also been shown to be stimulated by most drugs of abuse and plays an important role in addiction [33]. An important question is whether jhana meditators are subject to addiction and tolerance effects that can result from stimulation of the dopamine reward system.

      The question of potential addiction to self-induced states that activate the dopamine (and/or other neurochemical) reward system(s) is important. From a more philosophical angle, should we welcome beneficial addictions that, if cultivated, might significantly improve individual and group quality of life? Isn't this related to our high regard for replacing detrimental with positive habits? Habit formation and maintenance also depends on activation of neural reward systems (see Nir Eyal's book, Hooked).

    1. https://thedispatch.com/p/a-note-to-our-readers-from-steve

      Center-right journalists Steve Hayes and Jonah Goldberg of The Dispatch have severed ties with Fox News over a misinformation campaign from Tucker Carlson based on the January 6 events.

      Kudos to them for drawing a line on this issue.

    1. Not that everyone really wants an apology. One former journalist told me that his ex-colleagues “don’t want to endorse the process of mistake/apology/understanding/forgiveness—they don’t want to forgive.” Instead, he said, they want “to punish and purify.” But the knowledge that whatever you say will never be enough is debilitating. “If you make an apology and you know in advance that your apology will not be accepted—that it is going to be considered a move in a psychological or cultural or political game—then the integrity of your introspection is being mocked and you feel permanently marooned in a world of unforgivingness,” one person told me. “And that is a truly unethical world.”

      How can restorative justice work in a broader sense when public apologies aren't more carefully considered by the public-at-large? If the accuser accepts an apology, shouldn't that be enough? Society-at-large can still be leery of the person and watch their behavior, but do we need to continue ostracizing them?

      An interesting example to look at is that of Monica Lewinsky who in producing a version of her story 20+ years later is finally able to get her own story and framing out. Surely there will be political adherents who will fault her, but has she finally gotten some sort of justice and reprieve from a society that utterly shunned her for far too long for an indiscretion which happens nearly every day in our society? Compare her with Hester Prynne.

      Are we moving into a realm in which everyone is a public figure on a national if not international stage? How do we as a society handle these cases? What are the third and higher order effects besides the potential for authoritarianism which Applebaum mentions?

  29. Oct 2021
    1. This is a nice introduction to some issues of concern to me. For instance, the absence of pain is good - but why is it good? The empirical reason for this is that it satisfies evolved instinct. So again, what is good tracks to what is natural. But the naturalistic fallacy undermines that. And most importantly, there is no known scientific connection between evolution and instinct on the one hand, and "good" on the other. My answer is: morality is not natural, it is an artifice of humanity. And since it's an artifice, we can make it whatever we want.

    1. UX is now "user exploitation."

      As a student in the new generation of UX designers, I've been more sensitive to these matters, but I do see signs of alternative behaviors emerging (the right to repair, for example) which could benefit hugely from our discipline's methods and learnings.

    1. Victor Papanek’s book includes an introduction written by R. Buckminster Fuller, Carbondale, Illinois. (Sadly, the Thames & Hudson 2019 Third Edition does not include this introduction. Monoskop has preserved this text as a PDF file of images. I have transcribed a portion here.)

    1. The real conspiracies are hiding in plain sight.

      The big difference between the paranoiac's conspiracy theories and the real ones is that in the fake ones the conspirators are "in it together" and form a like-minded group. In reality, the billionaires would be very happy to throw each other under the bus if they could.

      So it's not so much that there are real conspiracies as that there is a known set of methods and tools - known to everyone, everywhere - that allows this gross power imbalance to be created. These methods and tools are known to all but can only be used by the rich because they are themselves very costly.

  30. Sep 2021
    1. In fact, researchers must decide how to exercise their power based on inconsistent and overlapping rules, laws, and norms.

      It shouldn't be decided by researchers.

    1. Under the no-harm principle, we owe it to our communities to undertake responsible scholarship that minimizes the possibilities of harm.

      No harm principle



    1. should we do research that is not consciousness raising for the participants? Is such research an oppressive process that of necessity exploits the subject?

      Do the researchers have any responsibility to safeguard the wellbeing of their interviewees, given that research on difficult topics and discussions can be emotionally challenging or restimulating for the participants?

    2. We were pushed to develop our analysis further by women in the study whom we asked to read the manuscript. They were hesitant about being negative, but were clearly critical.

      Justice: allowing the women to read and critique the manuscript. Result: researcher-participant design changes; the participants asked the researchers for a sociological interpretation. They were looking for insights.

    3. right suggests that researchers must take care not to make the research relationship an exploitative one.

      Don't exploit the relationship



    1. The authors propose a feminist ethics of care approach to CTCs in order to address the gendered power dynamics that often define and shape existing infomediary practices, distribute care work, and make existing care work visible.
    1. liberty of conscience

      "Liberty of conscience" is a phrase Roger Williams uses in a religious context to denote the freedom for one to follow his or her religious or ethical beliefs. It is an idea that refers to conscience-based thought and individualism. Each person has the right to their own conscience. It is rooted in the idea that all people are created equal and that no culture is better than another.

      This idea is strongly tied to: freedom from coercion of conscience (own thoughts and ideas), equality of rights, respect and toleration. It is a fundamental element of what has come to be the "American idea of religious liberty". Williams spoke of liberty of conscience in reference to a religious sense. This concept of individualism and free belief was later extrapolated in a general sense. He believed that government involvement ended when it came to divine beliefs.

      Citation: Eberle, Edward J. "Roger Williams on Liberty of Conscience." Roger Williams University Law Review, vol. 10, iss. 2, article 2, pp. 288-311. http://docs.rwu.edu/rwu_LR/vol10/iss2/2. Accessed 8 Sept. 2021.

    1. The tunnel far below represented Nevada’s latest salvo in a simmering water war: the construction of a $1.4 billion drainage hole to ensure that if the lake ever ran dry, Las Vegas could get the very last drop

      Deep Concept: Modern America is mostly corrupt from its own creation of wealth. Wealth is power, power corrupts, and absolute power corrupts absolutely! Money and wealth have completely changed the underlying foundation of America. Modern America is the corrupted result of wealth. Morality and ethics in modern America have been reshaped to "fit" European aristocracy, ironically the same European aristocracy America fled in the Revolutionary War.

      Billions and billions of taxpayer money are spent on projects that could never pass rigorous examination and a best-public-ROI test. Political authoritative conditions rule public tax money for the benefit of a few at the expense of the many. The public "cult-like" sheep have no clue how they are being abused.

      The authoritative abusers (politicians) follow the "mostly" corrupt American (fuck-you) form of government and individual power tactics that have been conveniently embedded in corrupt modern morality and ethics, used by corrupted lawyers and judges to codify the fundamental moral code that underpins the original American Constitution.

  31. Aug 2021
  32. Jul 2021
    1. "For example, human annotators rarely reached agreement when they were asked to label tweets that contained words from a lexicon of hate speech. Only 5% of the tweets were acknowledged by a majority as hate speech, while only 1.3% received unanimous verdicts."

      This seems shocking to me.

    2. Well, no. I oppose capital punishment, just as (in my view) any ethical person should oppose capital punishment. Not because innocent people might be executed (though that is an entirely foreseeable consequence) but because, if we allow for capital punishment, then what makes murder wrong isn't the fact that you killed someone, it's that you killed someone without the proper paperwork. And I refuse to accept that it's morally acceptable to kill someone just because you've been given permission to do so.

      Most murders are System 1-based and spur-of-the-moment.

      System 2-based murders are even more deplorable because in most ethical systems it means the person actively spent time and planning to carry the murder out. The second category includes pre-meditated murder, murder-for-hire as well as all forms of capital punishment.