3 Matching Annotations
  1. Jan 2026
    1. Narrowness of mind is something that makes me uneasy about the tech world. Effective altruists, for example, began with sound ideas like concern for animal welfare as well as cost-benefit analyses for charitable giving. But these solid premises have launched some of the movement’s members towards intellectual worlds very distant from moral intuitions that most people hold; they’ve also sent a few to jail.

      yes, [[Effective Altruism 20200713101714]] as utilitarianism ad absurdum.

  2. Apr 2024
    1. https://web.archive.org/web/20240420102854/https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

      Oxford shut down the 'Future of Humanity Institute'. Cf. [[Jaan Tallinn]], Nick Bostrom. Part of the philosophy department, but with less and less philosophy on staff. The original list of existential threats seemed balanced; over time, not-yet-existent AI became the sole focus, ignoring clear and present dangers like climate change.

  3. Feb 2023
    1. the problem is particularly acute in EA. The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them. “It’s this white knight savior complex,” says Sonia Joseph, a former EA who has since moved away from the movement partially because of its treatment of women. “Like: we are better than others because we are more rational or more reasonable or more thoughtful.” The movement “has a veneer of very logical, rigorous do-gooderism,” she continues. “But it’s misogyny encoded into math.”

      Lofty goals can serve as a 'moral shield', excusing immoral behaviour in other situations, because the higher ends 'prove' the actor's ultimate morality.