25 Matching Annotations
  1. Nov 2024
    1. I wasn’t immune to the incentive gradient, either. After I was dismissed from the crypto hedge fund I’d planned to work for in February 2022, I kept my distance from EA for a few months, wary of what I perceived as wastefulness and superficiality in the slice of the community I had encountered. But by May, I needed a job, and it was not hard to see that the fastest path to prosperity in the Effective Altruism world included a pit stop in the Bahamas. So I bought a plane ticket to Nassau, and within two weeks of my trip I had a fantastic position at an exciting new nonprofit organization funded by the FTX Foundation. I don’t know how to feel now about that plane ticket. On the one hand, the job I ended up in was a perfect fit. I was eminently qualified, and both I and the organization were substantially better off as a result of me joining. It introduced me to a community of earnest, introspective, devoted people, banded together to try to change the world for good, a community that I feel extraordinarily lucky to now call home. On the other hand, I was a willing participant in a web of incentives that likely compromised my epistemics and ethics. Participating in it had such high expected value — first in dollar terms, when I planned to trade crypto, and then in impact-on-the-world terms, when I went in search of an altruistic job. It seemed absurd to keep my distance just because the “vibes felt off” in the world of FTX and EA (at that point, the two were interchangeable in my mind), with no concrete cause for concern or evidence of wrongdoing in my field of vision. But if the incentives hadn’t been so strong, would I have paid more attention to the suspicious feelings in my gut? I think sometimes about the versions of me out there who would have held back from buying that plane ticket. There are alternate-universe-Rickis who smelled something rotten in FTX land and decided to stay away from that rot despite the enormous incentives not to.
Those Rickis don’t end up in the Effective Altruism world. I think we would have benefited from having more of them around.

      Indeed ... and what a coincidence those other Rickis are not the author. We desperately want it to be others who took the bullet, who committed to the costly collective action whilst we stayed home (or got out of jail early, etc.).

  2. Nov 2023
  3. Oct 2023
    1. we weren't gifted with that virtuous extra-caring that prominent altruists must have

      what if new generations could be better at this? and they don't know how to assume?

    2. learned not to trust their care-o-meters

      I would say we should "train our internal care-o-meter" instead of living against it

    3. caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.

      our brain can bypass moral feelings in multiple ways. But aren't our moral feelings themselves produced by the brain?

    4. instead of just asking his gut how much he cares about de-oiling lots of birds, he shuts up and multiplies.

      what else could we do if we train our brain to ignore the feeling and just act on numbers?

    1. Theories of change that focus solely on overturning current societal structures generally lack concreteness

      In my opinion, it is nearly impossible for any individual to fully understand and control the entire world. Rather, the world progresses through a collective effort shaped by countless individuals. This implies that no one person can directly alter the overall direction, but over time, with sufficient influence, the direction can indeed change.

    2. if they live in a high-income country, even an average person earning a modest salary is often wealthy compared to the rest of the world

      What about wealth beyond income? If you have a family without assets or inheritance, and your only source of support is your salary, should you invest in a home to safeguard your family when you can no longer work? How do you manage the high cost of living in wealthy countries? For many families, their entire financial stability relies on their ability to work. Moreover, what can be done about living costs that increase faster than income growth? And how do we address jobs with poor conditions that cannot be sustained until retirement?

  4. Mar 2023
    1. In the new collection, The Good It Promises, The Harm It Does, activists and scholars address the deeper problems that EA poses to social justice efforts. Even when EA is pursued with what appears to be integrity, it damages social movements by asserting that it has top-down answers to complex, local problems, and promises to fund grass-roots organizations only if they can prove that they are effective on EA’s terms.
    2. Despite the liberating intentions of many of its advocates, EA is irredeemably conservative. It favors welfare-oriented interventions that increase countable measures of well-being and both neglects and diverts funds from social movements that address injustices and agitate for social change, particularly in marginalized communities both in the US and in the Global South.
      • Inherent in its design, Effective Altruism treats symptoms instead of the root of social ills.
      • Despite the liberating intentions of many of its advocates,
      • EA is irredeemably conservative.
      • It favors welfare-oriented interventions that increase countable measures of well-being
      • It does harm to social movements that address injustices and agitate for social change
      • particularly in marginalized communities both in the US and in the Global South
      • by:
        • neglecting them and diverting funding away from them
        • funding an “effective” organization’s expansion into another country
      • encourages colonialist interventions that impose elite institutional structures
      • and sideline community groups whose local histories and situated knowledges are invaluable guides to meaningful action.
  5. Dec 2022
    1. You’re walking to work and you see a burning mansion. You’ve been in that mansion and know that there’s a Picasso worth $100 million. (Quick math: 100,000 lives saved.) You’re about to run into the mansion to save the Picasso…But right next to you, there’s a lake. And in that lake, there’s a drowning child.

      not all good actions can necessarily be quantified.

  6. Jan 2022
    1. So I was thinking about the brief conversation we'd had about [[effective altruism]], and I started writing, and I wrote a lot, so my preamble is that I mean here to put words to a seed of a heuristic I'm working with, not just criticize. But I don't really have a clean phrase for the topic... so I'm tossing this in my daily note, and maybe it'll make sense to move later?

      Thank you so much, this is awesome [[maya]]!

  7. Dec 2021
    1. It is also related to the EA movement in that, despite no official relationship between SFF and EA, despite the person who runs SFF not considering himself an Effective Altruist (Although he definitely believes, as I do, in being effective when being an altruist, and also in being effective when not being an altruist), despite SFF not being an EA organization, despite the words ‘altruist’ or ‘effective’ not appearing on the webpage, at least this round of the SFF process and its funds were largely captured by the EA ecosystem. EA reputations, relationships and framings had a large influence on the decisions made. A majority of the money given away was given to organizations with explicit EA branding in their application titles (I am including Lightcone@CFAR in this category). 

      Indeed. Because the people funding it think like that. They operate within a given worldview.

    2. Whether or not they would consider themselves EAs as such, the other recommenders effectively thought largely in Effective Altruist frameworks, and seemed broadly supportive of EA organizations and the EA ecosystem as a way to do good. One other member shared many of my broad (and often specific) concerns to a large extent, mostly the others did not. While the others were curious and willing to listen, there was some combination of insufficient bandwidth and insufficient communicative skill on our part, which meant that while we did get some messages of this type across on the margin and this did change people’s decisions in impactful ways, I think we mostly failed to get our central points across more broadly.

      +1.

  8. Oct 2021
    1. Using evidence and reason to find the most promising causes to work on. Taking action, by using our time and money to do the most good we can.

      I think I learned about effective altruism through Ezra Klein and the Future Perfect podcast.

    1. human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously

      Effective Altruism

      The shift from an attention economy to an intention economy

  9. Jul 2017
  10. Sep 2016
    1. EA principles can work in areas outside of global poverty. He was growing the movement the way it ought to be grown, in a way that can attract activists with different core principles rather than alienating them.
    2. Effective altruism is not a replacement for movements through which marginalized peoples seek their own liberation. And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.
    3. The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That's by no means an obvious position, and tons of philosophers dispute it. Among other things, it implies what's known as the Repugnant Conclusion: the idea that the world should keep increasing its population until the absolutely maximum number of humans are alive, living lives that are just barely worth living. But if you say that people who only might exist count less than people who really do or really will exist, you avoid that conclusion, and the case for caring only about the far future becomes considerably weaker
    4. The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell." Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives.
    5. At one point, Russell set about rebutting AI researcher Andrew Ng's comment that worrying about AI risk is like "worrying about overpopulation on Mars," countering, "Imagine if the world's governments and universities and corporations were spending billions on a plan to populate Mars." Musk looked up bashfully, put his hand on his chin, and smirked, as if to ask, "Who says I'm not?"

      In other words, we should worry now about the imaginary risks of investments that neither governments nor universities are actually making toward a "sci-fi apocalypse," instead of worrying about real problems. Absurd!