41 Matching Annotations
  1. Jan 2022
    1. So I was thinking about the brief conversation we'd had about [[effective altruism]], and I started writing, and I wrote a lot, so my preamble is that I mean here to put words to a seed of a heuristic I'm working with, not just criticize. But I don't really have a clean phrase for the topic... so I'm tossing this in my daily note, and maybe it'll make sense to move later?

      Thank you so much, this is awesome [[maya]]!

  2. Dec 2021
    1. It is also related to the EA movement in that, despite no official relationship between SFF and EA, despite the person who runs SFF not considering himself an Effective Altruist (Although he definitely believes, as I do, in being effective when being an altruist, and also in being effective when not being an altruist), despite SFF not being an EA organization, despite the words ‘altruist’ or ‘effective’ not appearing on the webpage, at least this round of the SFF process and its funds were largely captured by the EA ecosystem. EA reputations, relationships and framings had a large influence on the decisions made. A majority of the money given away was given to organizations with explicit EA branding in their application titles (I am including Lightcone@CFAR in this category). 

      Indeed. Because the people funding it think like that; they operate within a given worldview.

    2. Whether or not they would consider themselves EAs as such, the other recommenders effectively thought largely in Effective Altruist frameworks, and seemed broadly supportive of EA organizations and the EA ecosystem as a way to do good. One other member shared many of my broad (and often specific) concerns to a large extent; mostly the others did not. While the others were curious and willing to listen, there was some combination of insufficient bandwidth and insufficient communicative skill on our part, which meant that while we did get some messages of this type across on the margin and this did change people’s decisions in impactful ways, I think we mostly failed to get our central points across more broadly.

      +1.

  3. Oct 2021
    1. Using evidence and reason to find the most promising causes to work on. Taking action, by using our time and money to do the most good we can.

      I think I learned about effective altruism through Ezra Klein and the Future Perfect podcast.

    1. human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously

      Effective Altruism

      The shift from an attention economy to an intention economy

  4. Aug 2021
    1. Organizations: Charities and other organizations that work on popular EA cause areas, or otherwise have some connection to the movement.

       Global Development: Abdul Latif Jameel Poverty Action Lab, Against Malaria Foundation, Bill & Melinda Gates Foundation, Copenhagen Consensus Center, Development Media International, Deworm the World Initiative, Dispensers for Safe Water, The END Fund, Evidence Action, Food Fortification Initiative, GiveDirectly, GiveWell, Global Alliance for Improved Nutrition, Global Health and Development Fund, Happier Lives Institute, Helen Keller International, Iodine Global Network, Innovations for Poverty Action, Lead Exposure Elimination Project, Living Goods, Malaria Consortium, Médecins Sans Frontières, New Incentives, Policy Entrepreneurship Network, Precision Development, Sanku - Project Healthy Children, SCI Foundation, Sightsavers, Suvita, Target Malaria, Zusha!

       Animal Welfare: Albert Schweitzer Foundation, Anima International, Animal Advocacy Careers, Animal Ask, Animal Charity Evaluators, Animal Ethics, Animal Welfare Fund, Aquatic Life Institute, Cellular Agriculture Society, Faunalytics, Fish Welfare Initiative, Good Food Institute, Humane Slaughter Association, The Humane League, Mercy for Animals, New Harvest, Sentience Institute, Sentience Politics, Wild Animal Initiative

       Artificial Intelligence: AI Impacts, Anthropic, AI Safety Camp, AI Safety Support, Alignment Research Center, Center for Human-Compatible Artificial Intelligence, Center for Security and Emerging Technology, Centre for Long-Term Resilience, Centre for the Governance of AI, Charity Science Foundation, DeepMind, Leverhulme Center for the Future of Intelligence, Machine Intelligence Research Institute, Nonlinear Fund, OpenAI, Ought

       Long-Term Risks / Flourishing: ALLFED, All-Party Parliamentary Group for Future Generations, Berkeley Existential Risk Initiative, Bulletin of the Atomic Scientists, Center for Emerging Risk Research, Center for Reducing Suffering, Center on Long-Term Risk, Centre for the Study of Existential Risk, Foresight Institute, Forethought Foundation, Future of Humanity Institute, Future of Life Institute, Global Catastrophic Risk Institute, Global Challenges Foundation, Global Priorities Institute, Guarding Against Pandemics, Long-Term Future Fund, Longview Philanthropy, Nuclear Threat Initiative, Ploughshares Fund, Simon Institute for Longterm Governance, Stanford Existential Risks Initiative, Survival and Flourishing Fund

       EA Community / Fundraising: .impact, 80,000 Hours, Ayuda Efectiva, Centre for Effective Altruism, Centre for Enabling EA Learning & Research, Charity Entrepreneurship, Doebem, Donational, Effective Altruism and Consulting Network, Effective Altruism Anywhere, Effective Altruism Foundation, Effective Altruism Funds, Effective Altruism Hub, Effective Altruism Infrastructure Fund, Effective Thesis, Effektiv-Spenden.org, Founders Pledge, Generation Pledge, GiEffektivt.no, Giving What We Can, Good Growth, Good Ventures, High Impact Athletes, Let's Fund, The Life You Can Save, Local Effective Altruism Network, Longtermist Entrepreneurship Fellowship, One for the World, Open Philanthropy, Raising for Effective Giving

       Highly Ineffective Charities: Scared Straight

       Other / Multiple Areas: Cambridge Summer Programme in Applied Reasoning, Canopie, Center for Applied Rationality, Center for Election Science, Democracy Defense Fund, Effective Altruism Coaching, European Summer Program on Rationality, Giving Green, Giving Multiplier, High Impact Careers in Government, Johns Hopkins Center for Health Security, Legal Priorities Project, Leverage Research, LessWrong, Metaculus, Organisation for the Prevention of Intense Suffering, Our World in Data, Probably Good, Oxford Prioritization Project, Qualia Research Institute, Quantified Uncertainty Research Institute, RC Forward, Rethink Charity, Rethink Priorities, SparkWave, Society for the Diffusion of Useful Knowledge, Summer Program on Applied Rationality and Cognition, SoGive, WANBAM
    2. Cause Areas: Problems people work on, and concepts related to those problems.

       Global health and development: Aid and paternalism, Burden of disease, Deworming, Economic growth, Education, Family planning, Foreign aid, Foreign aid skepticism, Global poverty, Immigration reform, Malaria, Mass distribution of long-lasting insecticide-treated nets, Micronutrient programs, Research into neglected tropical diseases, Smallpox Eradication Programme, Tobacco control, Universal basic income

       Global Catastrophic Risk (other): Asteroids, Biosecurity, Civilizational collapse, Cuban Missile Crisis, Climate change, Climate engineering, Conservation, Dystopia, Existential risks from fundamental physics research, Geomagnetic storms, Great power conflict, Human extinction, Manhattan Project, Nuclear warfare, Nuclear winter, Nuclear disarmament movement, Pandemic preparedness, Russell–Einstein Manifesto, Terrorism, Trinity, Supervolcano, Weapon of mass destruction

       Animal welfare: Animal product alternatives, Corporate cage-free campaigns, Cultured meat, Dietary change, Farmed animal welfare, Fish welfare, Invertebrate welfare, Logic of the larder, Meat-eater problem, Speciesism, Welfare biology, Wild animal welfare

       Building effective altruism: Altruistic motivation, Building effective altruism, Community, Competitive debating, Consultancy, Effective altruism education, Effective altruism groups, Effective altruism in the media, Effective altruism messaging, Effective altruism outreach in schools, Event strategy, Field building, Fundraising, Global outreach, Moral advocacy, Movement collapse, Network building, Public giving, Request for proposal, Scalably using labour, Value drift, Value of movement growth

       Other causes: Anti-aging research, Armed conflict, Autonomous weapon, Cause candidates, Cause X, Cluster headaches, Cognitive enhancement, COVID-19 pandemic, Criminal justice reform, Electoral reform, Global priorities research, Institutional decision-making, Land use reform, Less-discussed causes, Life extension, Life sciences research, Local priorities research, Mental health, Meta-science, Moral circle expansion, Near-term AI ethics, Research, Risks from malevolent actors, Space colonization

       Global Catastrophic Risk (AI): AI alignment, AI boxing, AI ethics, AI forecasting, AI governance, AI risks, AI safety, AI skepticism, AI takeoff, AI winter, Anthropic capture, Artificial intelligence, Artificial sentience, Basic AI drive, Capability control method, Collective superintelligence, Comprehensive AI Services, Computation hazard, Human-level artificial intelligence, Indirect normativity, Infrastructure profusion, Instrumental convergence, Intelligence explosion, Malignant AI failure mode, Mind crime, Motivation selection method, Oracle AI, Orthogonality thesis, Perverse instantiation, Quality superintelligence, Sovereign AI, Speed superintelligence, Superintelligence, Tool AI, Whole brain emulation
    3. Other Concepts: Concepts that apply to multiple causes, or the entire project of trying to do more good.

       Moral Philosophy: Animal cognition, Animal sentience, Applied ethics, Astronomical waste, Axiology, Classical utilitarianism, Cluelessness, Consciousness research, Consequentialism, Cosmopolitanism, Demandingness of morality, Deontology, Ethics of existential risk, Ethics of personal consumption, Excited vs. obligatory altruism, Future of humanity, Hedonism, Hedonium, Infinite ethics, Intrinsic value vs. instrumental value, Introspective hedonism, Intuition of neutrality, Longtermism, Metaethics, Moral offsetting, Moral patienthood, Moral uncertainty, Moral weight, Naive vs. sophisticated consequentialism, Negative utilitarianism, Non-wellbeing sources of value, Normative ethics, Normative uncertainty, Other moral theories, Pain and suffering, Patient altruism, Person-affecting views, Personal identity, Philosophy of mind, Population ethics, Prioritarianism, Sentience, Subjective wellbeing, Suffering-focused ethics, Universe's resources, Utilitarianism, Valence, Virtue ethics, Welfarism, Wellbeing

       Long-Term Risks and Flourishing: Alternative food, Anthropogenic existential risk, Anthropic shadow, Broad vs. narrow interventions, Compound existential risk, Decisive strategic advantage, Defense in depth, Differential progress, Estimation of existential risk, Existential catastrophe, Existential risk, Existential risk factor, Existential security, Fermi paradox, Flourishing futures, Global catastrophic risk, Global catastrophic biological risk, Hellish existential catastrophe, Hinge of history, Indirect long-term effects, Institutions for future generations, Long reflection, Long-term future, Natural existential risk, Non-humans and the long-term future, S-risk, Singleton, Speeding up development, State vs. step risk, Technological completion conjecture, Time of perils hypothesis, Timing of existential risk mitigation, Total existential risk, Trajectory changes, Transformative development, Transhumanism, Unknown existential risk, Unprecedented risks, Value lock-in, Vulnerable world hypothesis, Warning shot

       Decision Theory and Rationality: Acausal trade, Alternatives to expected value theory, Altruistic coordination, Altruistic wager, Anthropics, Bayesian epistemology, Bounded rationality, Cause neutrality, Cause prioritization, Cognitive bias, Counterfactual reasoning, Credal resilience, Crucial consideration, Debunking argument, Decision theory, Decision-theoretic uncertainty, Definition of effective altruism, Disentanglement research, Doomsday argument, Epistemic deference, Epistemology, Evolution heuristic, Expected value, Fanaticism, Fermi estimation, Forecasting, Game theory, Ideological Turing test, Information hazard, Inside vs. outside view, Instrumental vs. epistemic rationality, Intervention evaluation, Long-range forecasting, Marginal charity, Measuring and comparing value, Model uncertainty, Models, Moral cooperation, Moral psychology, Moral trade, Prediction markets, Principle of epistemic deference, Psychology research, Randomized controlled trials, Research methods, Reversal test, Risk aversion, Scope neglect, Simulation argument, Statistical methods, Status quo bias, Thinking at the margin, Unilateralist's curse, Value of information

       Economics and Finance: Adjusted life year, Blockchain, Cost-benefit analysis, Divestment, Impact investing, International trade, Macroeconomic policy, Mechanism design, Microfinance, Welfare economics

       Politics, Policy, and Culture: Ballot initiative, Conflict theory vs. mistake theory, Cultural evolution, Cultural lag, Cultural persistence, Democracy, Electoral politics, Global governance, International organization, International relations, Law, Leadership, Misinformation, Peace and conflict studies, Polarity, Policy, Political polarization, Progress studies, Safeguarding liberal democracy, Social and intellectual movements, Space governance, Systemic change, Surveillance, Totalitarianism

       Effective Giving: Cash transfers, Certificate of impact, Charity evaluation, Constraints on effective altruism, Cost-effectiveness, Cost-effectiveness analysis, Diminishing returns, Donation choice, Donation matching, Donation pledge, Donation writeup, Donor lotteries, Effective altruism funding, Funding high-impact for-profits, Giving and happiness, Impact assessment, Importance, Interpersonal comparisons of wellbeing, Investing, ITN framework, Market efficiency of philanthropy, Markets for altruism, Neglectedness, Org strategy, Philanthropic coordination, Philanthropic diversification, Problem framework, Room for more funding, Socially responsible investing, Temporal discounting, Timing of philanthropy, Tractability, Volunteering, Workplace activism

       Career choice: Academia, Career capital, Career choice, Career framework, Earning to give, Effective altruism hiring, Entrepreneurship, Expertise, Fellowships & internships, Independent research, Job satisfaction, Operations, Personal fit, Public interest technology, Replaceability, Research careers, Research training programs, Role impact, Software engineering, Supportive conditions, Working at EA vs. non-EA orgs

       Other: Atomically precise manufacturing, China, Computational power of the human brain, Computronium, Cryonics, European Union, Extraterrestrial intelligence, Fabianism, Gene drives, History, History of philanthropy, India, Information security, Iterated embryo selection, Kidney donation, Rationality community, Philippines, Philosophic Radicals, Queen's Lane Coffee House, Religion, Russia, Scientific progress, Semiconductors, United States politics, Utilitarian Society, Transparency
  5. May 2021
  6. Oct 2020
  7. Sep 2020
  8. Aug 2020
  9. Jul 2020
  10. May 2020
    1. In evolutionary terms, certainly, because the individuals that show these traits have a higher chance of survival in the long term.

      Not surprisingly, nature is a great teacher. Game theory was not developed until John von Neumann's work in the mid-twentieth century, and it was later found that tit for tat with forgiveness is the optimal strategy in repeated games. In other words, altruism, or as Henry Ford called it, enlightened self-interest (https://www.wikiwand.com/en/Game_theory).
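      The tit-for-tat-with-forgiveness strategy mentioned above can be sketched as a minimal iterated prisoner's dilemma simulation. The payoff values and the 10% forgiveness rate below are illustrative assumptions, not figures from the source:

```python
import random

# Standard prisoner's dilemma payoffs for (my move, their move);
# these exact numbers are illustrative (any T > R > P > S works).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat_forgiving(opp_history, forgiveness=0.1, rng=random):
    """Cooperate first; then copy the opponent's last move,
    but forgive a defection with a small probability."""
    if not opp_history:
        return "C"
    if opp_history[-1] == "D" and rng.random() >= forgiveness:
        return "D"
    return "C"

def always_defect(opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; each strategy sees the other's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

      Two forgiving tit-for-tat players lock into mutual cooperation, while against an unconditional defector the strategy loses only the occasional forgiven round, which is the rough intuition behind calling it "enlightened self-interest."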

  11. Apr 2020
  12. Feb 2020
    1. there is little else on this list which can be considered part of a life history strategy if ‘life history’ is meant to be anchored in evolutionary biological research

      Aren't altruism, the willingness and ability to sacrifice for kin, the rate of drop-off of willing altruism, etc., all directly connected to evolutionary biology?

  13. Nov 2017
    1. “The practical implications of this positive feedback loop could be that engaging in one kind deed (e.g., taking your mom to lunch) would make you happier, and the happier you feel, the more likely you are to do another kind act,”
  14. Jul 2017
    1. We do not help everyone equally—some people just seem to be more worthy of help than others. Our cognitions about people in need matter as do our emotions toward them.

      *Social experiment*: Our cognitive perception of others has an effect on whether we decide to help or not.

  15. Sep 2016
    1. EA principles can work in areas outside of global poverty. He was growing the movement the way it ought to be grown, in a way that can attract activists with different core principles rather than alienating them.
    2. Effective altruism is not a replacement for movements through which marginalized peoples seek their own liberation. And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.
    3. The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That's by no means an obvious position, and tons of philosophers dispute it. Among other things, it implies what's known as the Repugnant Conclusion: the idea that the world should keep increasing its population until the absolutely maximum number of humans are alive, living lives that are just barely worth living. But if you say that people who only might exist count less than people who really do or really will exist, you avoid that conclusion, and the case for caring only about the far future becomes considerably weaker
    4. The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell." Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives.
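      The expected-value arithmetic in the wizard example can be made explicit. A probability of 0.00000000000000001 percent is 1 in 10^19; for the quote's result of 10^32 expected lives to come out, the assumed stakes must be about 10^51 lives, a figure back-derived here from the quote rather than stated in it:

```python
# Pascal's-mugging arithmetic from the wizard example.
# lives_at_stake (1e51) is an assumption back-derived from the
# quote's stated result of 10^32 expected lives saved.
probability = 0.00000000000000001 / 100  # 1e-17 percent -> 1e-19
lives_at_stake = 1e51
expected_lives_saved = probability * lives_at_stake  # ~1e32

cost_dollars = 10
expected_lives_per_dollar = expected_lives_saved / cost_dollars
```

      The point of the example is that with astronomically large stakes, even an absurdly tiny probability still yields a huge expected value, so naive expected-value maximization can be made to "justify" almost any expense.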
    5. At one point, Russell set about rebutting AI researcher Andrew Ng's comment that worrying about AI risk is like "worrying about overpopulation on Mars," countering, "Imagine if the world's governments and universities and corporations were spending billions on a plan to populate Mars." Musk looked up bashfully, put his hand on his chin, and smirked, as if to ask, "Who says I'm not?"

      In other words, we should worry now about the imaginary risks of investments that neither governments nor universities are making, for a "sci-fi apocalypse," instead of worrying about the real problems. Absurd!

  16. Sep 2015
    1. Heroism is about one thing: It’s about a concern for other people in need, a concern to develop, to defend a moral cause knowing there is a personal cost or risk. That’s the key. And you do it without expectation of reward. So altruism is heroism light. Compassion is a virtue that may lead to heroism, but we don’t know. Nobody’s established that link.
    2. A second line of research is about "elevation," which refers to the warm, uplifting feeling we get when we witness someone else's good deed. Research by moral psychologist Jonathan Haidt, as well as by Simone Schnall, has found that elevation systematically motivates people to perform altruistic acts themselves.
    1. Giving has also been linked to the release of oxytocin, a hormone (also released during sex and breast feeding) that induces feelings of warmth, euphoria, and connection to others. In laboratory studies, Paul Zak, the director of the Center for Neuroeconomics Studies at Claremont Graduate University, has found that a dose of oxytocin will cause people to give more generously and to feel more empathy towards others,
    2. A study by James Fowler of the University of California, San Diego, and Nicholas Christakis of Harvard, published in the Proceedings of the National Academy of Science, shows that when one person behaves generously, it inspires observers to behave generously later, toward different people. In fact, the researchers found that altruism could spread by three degrees—from person to person to person to person. “As a result,” they write, “each person in a network can influence dozens or even hundreds of people, some of whom he or she does not know and has not met.”
    3. The happier participants felt about their past generosity, the more likely they were in the present to choose to spend on someone else instead of themselves. Not all participants who remembered their past kindness felt happy. But the ones who did were overwhelmingly more likely to double down on altruism.
    1. Some evolutionary biologists argue that organisms may sometimes put themselves at risk in order to help another because they expect that the other organism will return the favor down the line, a concept known as reciprocal altruism.
    2. Altruism: Altruism is when we act to promote someone else’s welfare, even at a risk or cost to ourselves
    3. Taken together, our strands of evidence suggest the following. Compassion is deeply rooted in human nature; it has a biological basis in the brain and body. Humans can communicate compassion through facial gesture and touch, and these displays of compassion can serve vital social functions, strongly suggesting an evolutionary basis of compassion. And when experienced, compassion overwhelms selfish concerns and motivates altruistic behavior.