80 Matching Annotations
  1. Last 7 days
    1. PRELIMINARY DRAFT — This page is under development. Questions and response formats are subject to change. Feedback welcome.

      It's no longer a "preliminary draft"; it's still in progress, but you don't need such a strong caveat in the banner.

    2. Technical subquestions (CM_12-20)

      give a 1 sentence preamble before the technical subquestions ... "These questions depend on several cost-relevant factors" (unfold below)

    1. the most pivotal and tractable question for animal welfare funding decisions.

      That's too strong a claim. Maybe leave that out entirely or change it to something like 'a high-value tractable question'

    2. Following our evaluation of Rethink Priorities' cultured meat forecasting work and ongoing TEA evaluations, this workshop focuses on what the evidence tells us about cultivated meat's production cost trajectory. We recognize that consumer acceptance, regulatory pathways, and environmental implications also matter — but we're centering on costs because this seems among the most pivotal and tractable questions right now, and we want to bring focused expertise to bear. Pivotal Questions Initiative → 📊 Cost Modeling Dashboard → EA Forum: CM Viability → CM_01 on Metaculus → RP Evaluation →

      this feels overwhelming/too many links -- find a way to make it less cluttered

    3. Or mark your availability on the grid (optional) Click cells for any time blocks you could join. Click a date to select that row, a time header to select that column, or a week label to select the whole week. All times US Eastern; hover for UK/CET.

      Adjust this to start on April 15th and go through the first week of May. #implement
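The requested window can be generated programmatically when rebuilding the grid. A minimal sketch; the start and end dates (April 15 through May 8, 2026) are assumptions based on the note above:

```python
from datetime import date, timedelta

def grid_dates(start: date, end: date) -> list[date]:
    """Return every date from start through end, inclusive, for the grid rows."""
    return [start + timedelta(days=i) for i in range((end - start).days + 1)]

# Requested window: April 15 through the first week of May 2026.
rows = grid_dates(date(2026, 4, 15), date(2026, 5, 8))
```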

    4. Note: This workshop is still in early planning. We're gathering initial interest and availability. Final dates and agenda will be confirmed once we have responses from key participants.

      Make it clear that we're planning for the late April or very early May #implement

    5. Your primary role in this conversation (optional)

      Give a free-response field for the 'other' box -- in general, use the improved schedule version from https://uj-wellbeing-workshop-archive.netlify.app/interest #implement

    1. Risner et al. (2024), "Environmental impacts of cultured meat," ACS Food Science & Technology. Life cycle assessment finding CM's global warming potential could be 4–25x greater than retail beef if pharmaceutical-grade purification is required. GFI published a formal critique. — ACS Food Sci & Tech. LCA raising environmental cost concerns.

      But mention the Swartz rebuttal here too

    2. showing Humbird's AA costs were 2–10x too high.

      Don't say "showing" -- that's too definitive. That's the claim, and perhaps they provide evidence, but we shouldn't agree with them in this doc without further consideration.

    1. The paper’s object is an abstract characterization of strategy-proof social choice rules for selecting a public-good level. While public decision rules can matter in principle, the abstract theorem is not tied to a concrete policy domain, institution, or implementation setting. There is no evident link to a specific decision-maker, welfare question, or operational policy lever where an evaluation would affect choices at scale.

      So why did you rate it 10/10 for decision relevance?

    2. This is a strong Unjournal candidate: it is directly about improving job recommendation systems used by a public employment service, has clear welfare implications for job seekers, and uses randomized field experiments rather than purely predictive metrics. The paper addresses a decision-relevant policy question—how to design algorithms that improve worker outcomes rather than platform clicks/applications—and appears to offer actionable guidance for public and private labor-market intermediaries. As a working paper with experimental evidence and a model-based welfare metric, it has high timing value and likely benefit from independent evaluation.

      I don't see what global priorities relevant decision this targets. Not sure why this was prioritized.

    1. Environmental & broader context

      Add a tool tip or note that this workshop is focused more on the production cost aspect rather than the environmental consequences.

    1. 🔬 Upcoming Workshop: Cultivated Meat Cost Trajectories We’re organizing an online expert workshop (late April / early May 2026) to dig into the key cost cruxes — media costs, bioreactor scale-up, and the gap between TEA projections and commercial reality. This model is one of the tools we’ll use. Workshop details & signup → · State your cost beliefs → 💬 We Want Your Feedback! Comment directly on this page using Hypothesis — click the < tab on the right edge. Highlight any text, parameter, or result to annotate it. We actively monitor comments and will respond to questions, incorporate suggestions, and improve the model based on your feedback. 🎧 Listen: Technical Review (22 min MP3) — Audio walkthrough of model architecture and areas for review

      Should probably be folded a bit more. It's taking up too much space on the page until we get to the actual model.

    1. Comment directly on this page using the Hypothes.is sidebar (look for the < tab on the right edge of the page). Highlight any text and add your annotation — visible to all Hypothes.is users. You can also use the feedback buttons on each paper card.

      Add a filter by year as well.

  2. Mar 2026
    1. We're continuing the discussion asynchronously and will be publicly sharing key materials soon. This site is evolving into a resource page.

      We're continuing the discussion asynchronously and will be publicly sharing key materials soon. This site is evolving into a resource page and hub for feedback, dialogue, and belief elicitation.

    1. Evaluation: Cash Transfers vs Psychotherapy in Liberia (McGuire et al.) Unjournal Evaluation Summary Applied Comparison Direct experimental comparison of cash transfers and psychotherapy in an LMIC context. Particularly relevant because it measures multiple outcomes—psychological distress, consumption, life satisfaction—allowing cross-metric comparison. Evaluation Summary

      This is not the title nor the authors -- fix this hallucination

    2. Essential

      'essential' is too strong. Maybe 'Most important for discussion'. And note there's no way to do a thorough read of all of these in 2 hours. Just leave that 'time allotment' out.

    1. 📚 Further Reading: Unjournal Evaluations The Unjournal has commissioned independent evaluations of papers relevant to this debate: → StrongMinds & Friendship Bench Evaluation — Critical assessment of HLI's meta-analysis and cost-effectiveness claims → Long-Run Effects of Psychotherapy on Depression — Cuijpers et al. meta-analysis on therapy durability → Cash Transfers vs Psychotherapy: Comparative Impact — McGuire et al. direct comparison in Liberia → Mental Health Therapy as a Core Strategy (Ghana) — Barker et al. on scaling community-based therapy

      Put this somewhere else - I don't think it belongs within the focal case folding box. It should have its own folding box in the reading section and references

    1. Practical guidance for funders now Given the uncertainties above, what should funders actually do? This section offers a decision-oriented framework, not a single prescription.

      I didn't want the AI to give this 'practical guidance' -- that's meant to come out of the session!!

    1. Zoom chat for quick reactions;

      No, I only want the Zoom chat to be used by the session organizers and mainly just to guide people on the structure of the workshop and where we're going next

    2. Segment structure is set; timing may adjust slightly. Updated March 11, 2026

      12 Mar 2026 -- Not entirely set -- we may add some small things. But close to set, and trying to harden the timings so we can send out a schedule soon that people can trust

    1. calibrated

      Give the definition of 'calibration' here as a footnote/tooltip. Roughly: when you say something will happen X% of the time, it in fact occurs X% of the time, not much more nor less.

      If you are asked to give 80% CIs, the true values should fall in those intervals close to 80% of the time. If it happens less than 8/10 times, you're being overconfident, and stating too narrow intervals. If it happens more than 8/10 times, you're being underconfident, and stating overly wide intervals
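The 80% CI rule above amounts to a simple coverage check. A sketch, with made-up intervals and outcomes purely for illustration:

```python
def ci_coverage(intervals, outcomes):
    """Fraction of outcomes that fall inside their stated [low, high] interval."""
    hits = sum(low <= y <= high for (low, high), y in zip(intervals, outcomes))
    return hits / len(outcomes)

# Ten stated 80% CIs and the realized values (invented data).
cis = [(2, 8), (1, 4), (5, 9), (0, 3), (4, 7),
       (3, 6), (2, 5), (6, 10), (1, 7), (0, 2)]
truths = [5, 2, 7, 4, 6, 5, 3, 8, 9, 1]

rate = ci_coverage(cis, truths)
# Coverage well below 0.8 suggests overconfidence (intervals too narrow);
# well above 0.8 suggests underconfidence (intervals too wide).
```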

    2. Consider the value obtained when using the best feasible measure for cross-intervention comparison in contexts like the focal context. What share of this value is obtained, in expectation, from using the simple linear WELLBY measure (as defined above) for all interventions?

      Above the 'operationalized version' Add a discussion box here for people to answer the more general question.

    1. We're organizing the discussion around four key questions:

      Restate this to more directly address the question in the heading on "what we want to achieve".

      We want to:
      - Help researchers understand practitioners' highest-value questions, considerations, and trade-offs.
      - Help practitioners understand the most relevant and useful up-to-date research and its implications.
      - Enable communication and collaboration by getting on the same page, agreeing on terminology, identifying points of consensus and high-value cruxes, etc.
      - State and measure our beliefs about key issues and questions openly, with precision and calibrated uncertainty, driving high "value of information" Bayesian updating.
      - Drive better decisions over measuring the impact of interventions in LMICs and using existing measures, leading to better funding decisions.

      (This is a bit long -- just adjust the basic first sentence a tiny bit, and then footnote this more detailed theory of change. ) #implement

    2. The neutral point is the life satisfaction level representing neither positive nor negative welfare—essentially the boundary between "life worth living" and "suffering." Estimates range from 2-5 on the 0-10 scale. Peasgood et al. (2018) tentatively estimate ~2.

      Add: "This is particularly important for comparing interventions that have impacts on mortality (and perhaps fertility). We should discuss this in this workshop to an extent, but we might de-emphasize it to avoid overstretching the scope, depending on interest and timing."

    3. Other measures include QALYs (quality-adjusted life years), income-equivalent measures, and multi-dimensional poverty indices. QALYs are similar to DALYs but measure health gained rather than lost.

      This is being adjusted. NB we focus more on DALY than QALY because it's used a lot more in the LMIC intervention context, largely due to its ease of collection

    4. Unlike WELLBYs, DALYs are based on expert-derived disability weights rather than self-reported wellbeing—weights are constructed through surveys of health professionals rating hypothetical health states.

      Are you sure that it's through surveys of health professionals? I thought the surveys were of people in the general population. And this explanation doesn't mention how an individual's DALY is constructed based on asking them about their health states or something. What's the data used?

    5. Vignette exercises: respondents rate hypothetical people's life satisfaction based on descriptions, revealing how individuals anchor the scale and enabling cross-person calibration.

      Do they actually do this in the paper? doublecheck

    6. Calibration questions ask respondents to rate well-defined scenarios (e.g., "How satisfied would you be if you won $1,000?"). By observing how people rate the same reference points, researchers can estimate individual differences in scale use.

      Is this a reasonable example? Do they ask questions like that in the exercises mentioned in the paper?

    7. Cost-effectiveness estimates vary by an order of magnitude depending on how WELLBYs are valued relative to DALYs.

      What's the source for this OOM claim?? Find and link it with a verbatim quote . #implement

      Also it's not in our 'evaluation summary as far as I know'
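The claimed sensitivity can at least be illustrated with a toy calculation while the source is tracked down. All numbers below are invented; the WELLBY-to-DALY exchange rate is exactly the contested parameter:

```python
def cost_per_daly_equiv(cost, wellbys_gained, wellbys_per_daly):
    """Cost-effectiveness in $ per DALY-equivalent, given an assumed
    WELLBY<->DALY exchange rate (a contested, uncertain parameter)."""
    return cost / (wellbys_gained / wellbys_per_daly)

cost, gain = 10_000, 50  # illustrative program cost and WELLBY gain
low = cost_per_daly_equiv(cost, gain, wellbys_per_daly=1)
high = cost_per_daly_equiv(cost, gain, wellbys_per_daly=10)
# A 10x spread in the assumed exchange rate produces a 10x spread
# in the cost-effectiveness estimate -- an order of magnitude.
```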

    1. Each scale point represents an equal welfare increment. If violated, summing is invalid and interventions targeting different baselines become incomparable.

      David Reinstein --- personally, this is the one I find least plausible and most important.
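The stakes of the equal-increment assumption can be shown with two hypothetical interventions. Under linearity they are equally valuable; under a convex true-welfare mapping (chosen purely for illustration, not an empirical claim) the ranking changes:

```python
# Two hypothetical interventions on a 0-10 life-satisfaction scale:
# A moves one person 1 -> 3 (+2 points at a low baseline);
# B moves one person 7 -> 9 (+2 points at a high baseline).

def linear(ls):
    return ls

def convex(ls):
    # Illustrative mapping where each extra scale point is worth
    # more than the last; not an empirical claim.
    return ls ** 2

gain_A_linear = linear(3) - linear(1)
gain_B_linear = linear(9) - linear(7)
gain_A_convex = convex(3) - convex(1)
gain_B_convex = convex(9) - convex(7)
# Linear: A and B tie. Convex: B dominates, so summing raw scale
# points would misrank interventions targeting different baselines.
```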

    2. Interpersonal Comparability: LS_A = 7 ≈ LS_B = 7 implies U_A ≈ U_B. When two people report the same score, they experience similar welfare. Scale-use heterogeneity violates this assumption.

      I don't think this one is necessary if we can (instead) assume that differences are equivalent. For example, if we assume that person A is actually experiencing higher welfare at all levels of reported score, but the differences between the scores are comparable, then comparisons of interventions via measured differences in well-being shouldn't be affected.

      I think it could also still be reliable if the distribution between the two populations is the same, even though we don't have specific inter-person comparability between any two compared individuals.
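The point that a constant level shift washes out of difference-based comparisons can be shown directly. A sketch with made-up scores; the shift size is arbitrary:

```python
# Person B uses the scale with a constant upward shift relative to
# person A, but score *differences* mean the same thing for both.
before_A, after_A = 4, 6
shift = 1.5  # B reports 1.5 points higher at any true welfare level
before_B, after_B = before_A + shift, after_A + shift

# Levels disagree, yet the measured change from the intervention is
# identical, so comparisons based on differences are unaffected.
delta_A = after_A - before_A
delta_B = after_B - before_B
```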

    3. 1 WELLBY = 1-point increase on a 0-10 life satisfaction scale × 1 person × 1 year. W = Σ_i Σ_t LS_{i,t}

      Those are not clearly defined here, nor the indexing
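One plausible reading of the formula (i indexing persons, t indexing years, entries being life-satisfaction point changes) can be sketched as code; the variable names and data are illustrative, not from the source:

```python
def wellbys(ls_changes):
    """Total WELLBYs: sum over persons i and years t of the change
    in 0-10 life satisfaction, delta_LS[i][t]."""
    return sum(sum(person_years) for person_years in ls_changes)

# Two people followed for three years; entries are yearly LS gains.
delta_ls = [
    [1.0, 0.5, 0.0],  # person 1
    [0.5, 0.5, 0.5],  # person 2
]
total = wellbys(delta_ls)
```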

    1. We'll produce a practitioner-focused summary document, belief elicitation results with confidence intervals, and structured notes.

      Change this to "we hope to" and "We will share outputs". -- I can't guarantee right now that we'll get enough input or have bandwidth to produce this. #implement

    2. (Note: QALYs may be more directly comparable than DALYs for this purpose.)

      Leave out the QALYs parentheses bit here. Add "(or QALYs)" after "~1 SD in DALYs". #implement

    3. scale?

      Add "Is a move from 1–3 for one person as good as a move from 1–2 for two people?" at the end of this paragraph... "even if these don't hold, does the linear WELLBY aggregation yield 'nearly as much value' for decisionmaking as other potential measures"? #adjust #implement

    4. When comparing a mental health intervention (measured in WELLBYs) to a physical health intervention (measured in DALYs)

      Either of these, especially the physical health intervention, could be measured either way. This overstates it a bit. Perhaps, just to give this as an example, suppose there is a case... #adjust #implement

    5. but more work is needed.

      "More work is needed" is very vague. It would be nice to have at least one specific point suggesting that the difference in scale means potentially matters and merits more study.

    6. Each has strengths and limitations—and how they relate to each other, and whether either reliably captures what matters for human welfare, directly affects which interventions get prioritized.

      I'm allergic to platitudes. IIRC you should have some notes somewhere providing at least one case where this matters .

  3. Feb 2026
    1. adversarial manipulation.

      I don't think we discussed adversarial manipulation or have any results on it, so I'm a little worried that whatever generated this discussion is doing a sort of generic pandering and putting in what it generally expects to see in papers like this.

    2. Our results support AI as structured screening and decision support rather than full automation,

      This seems like a sort of milquetoast generic caveat. In what sense is this what our AI results support? This seems a bit pandering.

    3. exhibiting consistent failure modes: compressed rating scales, uneven criterion coverage, and variable identification of expert-flagged concerns.

      I'm guessing this is a bit premature/too much rounding up a few observations to general conclusions, but let me look at the results a bit more carefully.

    4. often approach the ceiling implied by human inter-rater variability on several criteria,

      This is interesting and strong. It comes across maybe a little bit overstated, so we just need to be careful about how we're framing this result.

    5. high-quality but noisy reference signal

      I think this is right, but the term "reference signal" sounds technical in an information theoretic sense, and we want to make sure we're not misapplying it.

    6. narrative critiques

      Yes, we focus on the critiques here, but the Unjournal evaluations do more than just critique. They discuss, they offer suggestions, implications, et cetera.

    7. covering economics and social-science working papers

      "covering ... working papers" Is mostly accurate but not quite right. We don't cover all working papers, and we have a specific focus on research relevant to global priorities. We can also evaluate post-journal publication, but I'm not sure how to best summarize this in a simple way in the abstract.

      The idea of "open evaluation platform" also could be a bit confusing here because it's not mainly about crowd sourcing. Yes, the "paid expert review packages" cover this, but I don't quite think this is worded in the best possible way.

    8. Peer review is strained, and AI tools generating referee-like feedback are already adopted by researchers and commercial services—yet field evidence on how reliably frontier LLMs can evaluate research remains scarce.

      This is a decent first sentence, although it bears the marks of AI-generated text. But also I'm not sure if it's really in line with our newest spin on this.

  4. Nov 2025
    1. returned file id keyed by path, size, and modification time.

      what does this mean? "Keyed by" ?

      This implies it is kept on the server and won't need a later upload.

  5. Sep 2025
    1. Zhang and Abernethy (2025) propose deploying LLMs as quality checkers to surface critical problems instead of

      Is this the only empirical work? I thought there were others underway. Worth our digging into. Fwiw I can do an elicit.org query.