13 Matching Annotations
  1. Jan 2022
    1. So I was thinking about the brief conversation we'd had about [[effective altruism]], and I started writing, and I wrote a lot, so my preamble is that I mean here to put words to a seed of a heuristic I'm working with, not just criticize. But I don't really have a clean phrase for the topic... so I'm tossing this in my daily note, and maybe it'll make sense to move later?

      Thank you so much, this is awesome [[maya]]!

  2. Dec 2021
    1. It is also related to the EA movement in that, despite no official relationship between SFF and EA, despite the person who runs SFF not considering himself an Effective Altruist (Although he definitely believes, as I do, in being effective when being an altruist, and also in being effective when not being an altruist), despite SFF not being an EA organization, despite the words ‘altruist’ or ‘effective’ not appearing on the webpage, at least this round of the SFF process and its funds were largely captured by the EA ecosystem. EA reputations, relationships and framings had a large influence on the decisions made. A majority of the money given away was given to organizations with explicit EA branding in their application titles (I am including Lightcone@CFAR in this category). 

Indeed. Because the people funding it think like that: they operate within a given worldview.

    2. Whether or not they would consider themselves EAs as such, the other recommenders effectively thought largely in Effective Altruist frameworks, and seemed broadly supportive of EA organizations and the EA ecosystem as a way to do good. One other member shared many of my broad (and often specific) concerns to a large extent; mostly the others did not. While the others were curious and willing to listen, there was some combination of insufficient bandwidth and insufficient communicative skill on our part, which meant that while we did get some messages of this type across on the margin and this did change people’s decisions in impactful ways, I think we mostly failed to get our central points across more broadly.

      +1.

  3. Oct 2021
    1. Using evidence and reason to find the most promising causes to work on. Taking action, by using our time and money to do the most good we can.

      I think I learned about effective altruism through Ezra Klein and the Future Perfect podcast.

    2. human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously

      Effective Altruism

      The shift from an attention economy to an intention economy

  4. Jul 2017
  5. Sep 2016
    1. EA principles can work in areas outside of global poverty. He was growing the movement the way it ought to be grown, in a way that can attract activists with different core principles rather than alienating them.
    2. Effective altruism is not a replacement for movements through which marginalized peoples seek their own liberation. And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.
    3. The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That's by no means an obvious position, and tons of philosophers dispute it. Among other things, it implies what's known as the Repugnant Conclusion: the idea that the world should keep increasing its population until the absolute maximum number of humans are alive, living lives that are just barely worth living. But if you say that people who only might exist count less than people who really do or really will exist, you avoid that conclusion, and the case for caring only about the far future becomes considerably weaker.
    4. The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell." Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives.
    5. At one point, Russell set about rebutting AI researcher Andrew Ng's comment that worrying about AI risk is like "worrying about overpopulation on Mars," countering, "Imagine if the world's governments and universities and corporations were spending billions on a plan to populate Mars." Musk looked up bashfully, put his hand on his chin, and smirked, as if to ask, "Who says I'm not?"

      In other words, we are supposed to worry now about the imaginary risks of investments that neither governments nor universities are actually making toward a "sci-fi apocalypse," instead of worrying about the real problems. Absurd!