we monitor and respond to all comments)
And we make adjustments.
PRELIMINARY DRAFT — This page is under development. Questions and response formats are subject to change. Feedback welcome.
It's no longer a "preliminary draft"; it's still in progress, but you don't need such a strong caveat in the banner.
Technical subquestions (CM_12-20)
Give a one-sentence preamble before the technical subquestions, e.g. "These questions depend on several cost-relevant factors" (unfold below)
the most pivotal and tractable question for animal welfare funding decisions.
That's too strong a claim. Maybe leave that out entirely or change it to something like 'a high-value tractable question'
Following our evaluation of Rethink Priorities' cultured meat forecasting work and ongoing TEA evaluations, this workshop focuses on what the evidence tells us about cultivated meat's production cost trajectory. We recognize that consumer acceptance, regulatory pathways, and environmental implications also matter — but we're centering on costs because this seems among the most pivotal and tractable questions right now, and we want to bring focused expertise to bear. Pivotal Questions Initiative → 📊 Cost Modeling Dashboard → EA Forum: CM Viability → CM_01 on Metaculus → RP Evaluation →
this feels overwhelming/too many links -- find a way to make it less cluttered
Async Discussion & Suggestions
we'll just do this so remove this question. #implement
Workshops in 2026: Wellbeing Measurement WELLBY reliability & DALY conversion · March 2026
This one already happened -- make it clear. #implement
Or mark your availability on the grid (optional)
make this a folding box, folded by default #implement
Or mark your availability on the grid (optional) Click cells for any time blocks you could join. Click a date to select that row, a time header to select that column, or a week label to select the whole week. All times US Eastern; hover for UK/CET.
Adjust this to start on April 15th and go through the first week of May #implement
Note: This workshop is still in early planning. We're gathering initial interest and availability. Final dates and agenda will be confirmed once we have responses from key participants.
Make it clear that we're planning for the late April or very early May #implement
Your primary role in this conversation (optional)
give a Free response for the 'other' box -- in general, use the improved schedule version from https://uj-wellbeing-workshop-archive.netlify.app/interest #implement
Other Pivotal Questions Workshops 🧠 Wellbeing Measurement (held Mar 16)
Say "March 2026" -- if you just say "Mar 16", people might interpret that as 2016.
Plant-Based Alternatives (May 2026)
This might need to be postponed until June.
Risner et al. (2024), "Environmental impacts of cultured meat," ACS Food Science & Technology. Life cycle assessment finding CM's global warming potential could be 4–25x greater than retail beef if pharmaceutical-grade purification is required. GFI published a formal critique.
But mention the Swartz rebuttal here too
our
our --> Unjournal's
more
More than what?
showing Humbird's AA costs were 2–10x too high.
Don't say "showing" -- that's too definitive. That's the claim, and perhaps they provide evidence, but we shouldn't agree with them in this doc without further consideration.
. The 10x gap
That's not a 10x gap. The numbers you just gave make it look more like a 5x gap.
The paper’s object is an abstract characterization of strategy-proof social choice rules for selecting a public-good level. While public decision rules can matter in principle, the abstract theorem is not tied to a concrete policy domain, institution, or implementation setting. There is no evident link to a specific decision-maker, welfare question, or operational policy lever where an evaluation would affect choices at scale.
So why did you rate it 10/10 for decision relevance?
This is a strong Unjournal candidate: it is directly about improving job recommendation systems used by a public employment service, has clear welfare implications for job seekers, and uses randomized field experiments rather than purely predictive metrics. The paper addresses a decision-relevant policy question—how to design algorithms that improve worker outcomes rather than platform clicks/applications—and appears to offer actionable guidance for public and private labor-market intermediaries. As a working paper with experimental evidence and a model-based welfare metric, it has high timing value and would likely benefit from independent evaluation.
I don't see what global priorities relevant decision this targets. Not sure why this was prioritized.
Environmental & broader context
Add a tool tip or note that this workshop is focused more on the production cost aspect rather than the environmental consequences.
🔬 Upcoming Workshop: Cultivated Meat Cost Trajectories We’re organizing an online expert workshop (late April / early May 2026) to dig into the key cost cruxes — media costs, bioreactor scale-up, and the gap between TEA projections and commercial reality. This model is one of the tools we’ll use. Workshop details & signup → · State your cost beliefs → 💬 We Want Your Feedback! Comment directly on this page using Hypothesis — click the < tab on the right edge. Highlight any text, parameter, or result to annotate it. We actively monitor comments and will respond to questions, incorporate suggestions, and improve the model based on your feedback. 🎧 Listen: Technical Review (22 min MP3) — Audio walkthrough of model architecture and areas for review
Should probably be folded a bit more. It's taking up too much space on the page until we get to the actual model.
Comment directly on this page using the Hypothes.is sidebar (look for the < tab on the right edge of the page). Highlight any text and add your annotation — visible to all Hypothes.is users. You can also use the feedback buttons on each paper card.
Add a filter by year as well.
We're continuing the discussion asynchronously and will be publicly sharing key materials soon. This site is evolving into a resource page.
We're continuing the discussion asynchronously and will be publicly sharing key materials soon. This site is evolving into a resource page and hub for feedback, dialogue, and belief elicitation.
1. WELLBY Reliability and Value
make an anchorable link here and for the other headers.
Join the discussion (Google Doc)
probably moving to have this discussion more in hypothes.is on web content and less in that Google doc; it's hard to make the Gdoc attractive and organized.
Evaluation: Cash Transfers vs Psychotherapy in Liberia (McGuire et al.) Unjournal Evaluation Summary Applied Comparison Direct experimental comparison of cash transfers and psychotherapy in an LMIC context. Particularly relevant because it measures multiple outcomes—psychological distress, consumption, life satisfaction—allowing cross-metric comparison. Evaluation Summary
This is not the title nor the authors -- fix this hallucination
Essential
'essential' is too strong. Maybe 'Most important for discussion'. And note there's no way to do a thorough read of all of these in 2 hours. Just leave that 'time allotment' out.
The Controversy: Happier Lives Institute estimated StrongMinds as
Use This link instead -- https://www.happierlivesinstitute.org/report/strongminds-cost-effectiveness-analysis/
@Samuel_Dupret let me know if you think a better link is appropriate.
You might be wondering why I'm still bothering with this at the workshop - I want to turn this into a resource page for further practical work and discussion.
potentially more cost-effective than AMF. GiveWell's 2023 assessment disagreed, citing concerns about: (1) mapping depression scales to LS, (2) assumed effect duration, (3) demand effects in self-reported outcomes, and (4) publication bias.
Link needs fixing -- https://www.givewell.org/international/technical/programs/strongminds-happier-lives-institute
Also mention and link HLI's response to this assessment here
Peasgood et al. (unpublished)
We have a copy
Unit-change comparability
I'm not sure this is stated correctly. It seems to overlap with the cardinality assumption.
📚 Further Reading: Unjournal Evaluations The Unjournal has commissioned independent evaluations of papers relevant to this debate: → StrongMinds & Friendship Bench Evaluation — Critical assessment of HLI's meta-analysis and cost-effectiveness claims → Long-Run Effects of Psychotherapy on Depression — Cuijpers et al. meta-analysis on therapy durability → Cash Transfers vs Psychotherapy: Comparative Impact — McGuire et al. direct comparison in Liberia → Mental Health Therapy as a Core Strategy (Ghana) — Barker et al. on scaling community-based therapy
Put this somewhere else - I don't think it belongs within the focal case folding box. It should have its own folding box in the reading section and references
mortality-focused interventions
When comparing among interventions, some of which affect mortality.
Practical guidance for funders now Given the uncertainties above, what should funders actually do? This section offers a decision-oriented framework, not a single prescription.
I didn't want the AI to give this 'practical guidance' -- that's meant to come out of the session!!
Zoom chat for quick reactions;
No, I only want the Zoom chat to be used by the session organizers and mainly just to guide people on the structure of the workshop and where we're going next
Segment structure is set; timing may adjust slightly. Updated March 11, 2026
12 Mar 2026 -- Not entirely set -- we may add some small things. But close to set, and trying to harden the timings so we can send out a schedule soon that people can trust
calibrated
Give the definition of 'calibration' here as a footnote/tooltip. Roughly, things that when you say something will happen X% of the time it in fact occurs X% of the time, not much more nor less.
If you are asked to give 80% CIs, the true values should fall in those intervals close to 80% of the time. If it happens less than 8/10 times, you're being overconfident, and stating too narrow intervals. If it happens more than 8/10 times, you're being underconfident, and stating overly wide intervals
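The coverage idea above can be sketched as a quick check (hypothetical intervals and outcomes, not data from the workshop):

```python
# Sketch: empirical coverage of stated 80% confidence intervals.
# Calibration means the true value lands inside your 80% CIs about
# 80% of the time; much less => overconfident (intervals too narrow),
# much more => underconfident (intervals too wide).

def coverage(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

# Hypothetical forecasts: five 80% CIs and the realized values.
intervals = [(2, 8), (10, 20), (0, 1), (5, 6), (100, 300)]
truths = [4, 25, 0.5, 5.5, 150]

print(coverage(intervals, truths))  # 0.8 -- compare against the stated 0.80
```

With many forecasts, comparing this fraction against the nominal level is the basic calibration diagnostic the comment describes.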
Consider the value obtained when using the best feasible measure for cross-intervention comparison in contexts like the focal context. What share of this value is obtained, in expectation, from using the simple linear WELLBY measure (as defined above) for all interventions?
Above the 'operationalized version' Add a discussion box here for people to answer the more general question.
Consider the value obtained
add a sub-sub-header "Operationalized version" here
We're organizing the discussion around four key questions:
Restate this to more directly address the question in the heading on "what we want to achieve".
We want to:
- Help researchers understand practitioners' highest-value questions, considerations, and trade-offs.
- Help practitioners understand the most relevant and useful up-to-date research and its implications.
- Enable communication and collaboration by getting on the same page, agreeing on terminology, and identifying points of consensus and high-value cruxes.
- State and measure our beliefs about key issues and questions openly, with precision and calibrated uncertainty, driving high "value of information" Bayesian updating.
- Drive better decisions over measuring the impact of interventions in LMICs and using existing measures, leading to better funding decisions.
(This is a bit long -- just adjust the basic first sentence a tiny bit, and then footnote this more detailed theory of change. ) #implement
The neutral point is the life satisfaction level representing neither positive nor negative welfare—essentially the boundary between "life worth living" and "suffering." Estimates range from 2-5 on the 0-10 scale. Peasgood et al. (2018) tentatively estimate ~2.
Add: "This is particularly important for comparing interventions that have impacts on mortality (and perhaps fertility). We should discuss this in this workshop to an extent, but we might de-emphasize it to avoid overstretching the scope, depending on interest and timing."
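The mortality point could be illustrated with a minimal sketch (all numbers hypothetical): WELLBYs from an extra life-year depend on life satisfaction relative to the neutral point, so the chosen neutral point can flip the sign of a life-extending intervention's value.

```python
# Sketch: why the neutral point matters for mortality comparisons.
# WELLBYs from extra life-years at life satisfaction `ls` are
# (ls - neutral) * years. With ls = 4, a neutral point of 2 makes
# the added years positive; a neutral point of 5 makes them negative.

def wellbys_from_life_years(ls, neutral, years):
    return (ls - neutral) * years

ls = 4.0  # hypothetical average life satisfaction of beneficiaries
print(wellbys_from_life_years(ls, neutral=2.0, years=10))  # 20.0
print(wellbys_from_life_years(ls, neutral=5.0, years=10))  # -10.0
```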
evaluation summary
Link it here https://unjournal.pubpub.org/pub/evalsumstrongminds/ -- however, I don't see anything in that summary that provides details suggesting this order of magnitude thing. Find a better reference.
QALYs (quality-adjusted life years)
Link one authoritative external resource presenting these in detail
instruments like EQ-5D
dead link
Other measures include QALYs (quality-adjusted life years), income-equivalent measures, and multi-dimensional poverty indices. QALYs are similar to DALYs but measure health gained rather than lost.
This is being adjusted. NB we focus more on DALY than QALY because it's used a lot more in the LMIC intervention context, largely due to its ease of collection
—and what would change their minds?
remove 'and what would change their minds' -- this doesn't fit. #implement
Unlike WELLBYs, DALYs are based on expert-derived disability weights rather than self-reported wellbeing—weights are constructed through surveys of health professionals rating hypothetical health states.
Are you sure that it's through surveys of health professionals? I thought the surveys were of people in the general population. And this explanation doesn't mention how an individual's DALY is constructed based on asking them about their health states or something. What's the data used?
Vignette exercises: respondents rate hypothetical people's life satisfaction based on descriptions, revealing how individuals anchor the scale and enabling cross-person calibration.
Do they actually do this in the paper? Double-check.
Calibration questions ask respondents to rate well-defined scenarios (e.g., "How satisfied would you be if you won $1,000?"). By observing how people rate the same reference points, researchers can estimate individual differences in scale use.
Is this a reasonable example? Do they ask questions like that in the exercises mentioned in the paper?
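Whatever example the page settles on, the anchoring mechanism itself could be sketched like this (illustrative numbers, not taken from the paper): shared reference vignettes let you estimate each person's scale-use offset and re-anchor their self-reports.

```python
# Sketch: estimating a person's scale-use offset from shared vignettes.
# If everyone rates the same reference vignettes, a person's mean rating
# of them minus the group mean estimates that person's personal offset,
# which can then be subtracted from their own self-report.

def scale_offset(person_vignette_ratings, group_vignette_means):
    diffs = [p - g for p, g in zip(person_vignette_ratings, group_vignette_means)]
    return sum(diffs) / len(diffs)

group_means = [3.0, 5.0, 8.0]  # group's average ratings of 3 vignettes
person_a = [4.0, 6.0, 9.0]     # person A rates everything one point higher

offset = scale_offset(person_a, group_means)
print(offset)         # 1.0
print(7.0 - offset)   # 6.0 -- A's self-report of 7, re-anchored
```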
Cost-effectiveness estimates vary by an order of magnitude depending on how WELLBYs are valued relative to DALYs.
What's the source for this OOM claim? Find and link it with a verbatim quote. #implement
Also it's not in our 'evaluation summary as far as I know'
Open Philanthropy
It's now "Coefficient Giving" -- correct this on every page. And hyperlink "https://coefficientgiving.org/research/cost-effectiveness/" here. #implement
Each scale point represents an equal welfare increment. If violated, summing is invalid and interventions targeting different baselines become incomparable.
David Reinstein --- personally, this is the one I find least plausible and most important.
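The stakes of the cardinality assumption can be made concrete with a toy example (the welfare mapping is purely illustrative): if welfare is convex in the reported score, rankings by scale-point gains and by welfare gains can disagree.

```python
# Sketch: when scale points are NOT equal welfare increments, summing
# score changes can mis-rank interventions. `welfare` is a hypothetical
# convex mapping where top-of-scale points are worth more.

def welfare(score):
    return score ** 2

# Intervention X moves one person 2 -> 4; intervention Y moves one person 7 -> 8.
x_scale_gain, y_scale_gain = 4 - 2, 8 - 7          # 2 points vs 1 point
x_welfare_gain = welfare(4) - welfare(2)           # 16 - 4 = 12
y_welfare_gain = welfare(8) - welfare(7)           # 64 - 49 = 15

print(x_scale_gain > y_scale_gain)      # True: X wins on scale points...
print(x_welfare_gain > y_welfare_gain)  # False: ...but Y wins on welfare
```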
Interpersonal Comparability: LS_A = 7 ≈ LS_B = 7 implies U_A ≈ U_B. When two people report the same score, they experience similar welfare. Scale-use heterogeneity violates this assumption.
I don't think this one is necessary if we can (instead) assume that differences are equivalent. For example, if person A actually experiences higher welfare at every level of reported score, but the differences between scores are comparable across people, then comparisons of interventions via measured differences in well-being shouldn't be affected.
I think it could also still be reliable if the distribution between the two populations is the same, even though we don't have specific inter-person comparability between any two compared individuals.
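The first point above can be sketched numerically (illustrative numbers): if each person's scale use differs only by a constant offset, before/after differences are unaffected, so comparisons of measured wellbeing changes survive without level comparability.

```python
# Sketch: constant person-level scale offsets cancel out of changes,
# so estimated treatment effects on *differences* are unaffected.

def mean_change(before, after):
    return sum(post - pre for pre, post in zip(before, after)) / len(before)

before = [4.0, 5.0, 6.0]
offsets = [0.0, 1.5, -0.5]  # each person's idiosyncratic scale shift
gain = 1.0                  # true, common wellbeing gain

after = [b + gain for b in before]
shifted_before = [b + o for b, o in zip(before, offsets)]
shifted_after = [a + o for a, o in zip(after, offsets)]

print(mean_change(before, after))                  # 1.0
print(mean_change(shifted_before, shifted_after))  # 1.0 -- offsets cancel
```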
requires four implicit assumptions
Give a linked source and citation for this.
1 WELLBY = 1-point increase on a 0-10 life satisfaction scale × 1 person × 1 year. W = Σ_i Σ_t LS_{it}
Those are not clearly defined here, nor the indexing
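One plausible reading of the indexing (an assumption to confirm against the page: i over persons, t over years, with LS changes measured in scale points) could be spelled out alongside the formula:

```python
# Sketch of the aggregation as the definition suggests: W sums
# life-satisfaction changes over persons i and years t, so
# 1 WELLBY = 1 LS point x 1 person x 1 year. Indexing is an
# assumption to be confirmed, not taken from the page.

def total_wellbys(ls_change):
    """ls_change[i][t]: person i's LS change (0-10 scale points) in year t."""
    return sum(sum(person_years) for person_years in ls_change)

# Two people over two years: person 0 gains 1 point in each year,
# person 1 gains 0.5 points in year one only.
ls_change = [[1.0, 1.0],
             [0.5, 0.0]]
print(total_wellbys(ls_change))  # 2.5
```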
We'll produce a practitioner-focused summary document, belief elicitation results with confidence intervals, and structured notes.
Change this to "we hope to" and "We will share outputs". -- I can't guarantee right now that we'll get enough input or have bandwidth to produce this. #implement
Participants can opt out of recording for specific segments if needed
Add "and we will ask for final approval before posting anything". #implement
(Note: QALYs may be more directly comparable than DALYs for this purpose.)
Leave out the QALYs parentheses bit here. Add "(or QALYs)" after "~1 SD in DALYs". #implement
scale?
Add "is a move from 1-3 for one person as good as a move from 1-2 for 2 people"? At the end of this paragraph... "even if these don't hold, does the linear WELLBY aggregation yield 'nearly as much value' for decisionmaking as other potential measures"? #adjust #implement
Where is the "neutral point" on the scale?
Remind me why the neutral point is important.
When comparing a mental health intervention (measured in WELLBYs) to a physical health intervention (measured in DALYs)
Either of these, especially the physical health intervention, could be measured either way. This overstates it a bit. Perhaps, just to give this as an example, suppose there is a case... #adjust #implement
but more work is needed.
"More work is needed" is very vague -- it would be nice to have at least one specific point suggesting that the difference in scale means potentially matters and merits more study.
Each has strengths and limitations—and how they relate to each other, and whether either reliably captures what matters for human welfare, directly affects which interventions get prioritized.
I'm allergic to platitudes. IIRC you should have some notes somewhere providing at least one case where this matters.
adversarial manipulation.
I don't think we discussed adversarial manipulation or have any results on it, so I'm a little worried that whatever generated this discussion is doing a sort of generic pandering and putting in what it generally expects to see in papers like this.
Our results support AI as structured screening and decision support rather than full automation,
This seems like a sort of milquetoast generic caveat. In what sense is this what our AI results support? This seems a bit pandering.
exhibiting consistent failure modes: compressed rating scales, uneven criterion coverage, and variable identification of expert-flagged concerns.
I'm guessing this is a bit premature/too much rounding up a few observations to general conclusions, but let me look at the results a bit more carefully.
often approach the ceiling implied by human inter-rater variability on several criteria,
This is interesting and strong. It comes across maybe a little bit overstated, so we just need to be careful about how we're framing this result.
high-quality but noisy reference signal
I think this is right, but the term "reference signal" sounds technical in an information theoretic sense, and we want to make sure we're not misapplying it.
narrative critiques
Yes, we focus on the critiques here, but the Unjournal evaluations do more than just critique. They discuss, they offer suggestions, implications, etc.
covering economics and social-science working papers
"covering ... working papers" Is mostly accurate but not quite right. We don't cover all working papers, and we have a specific focus on research relevant to global priorities. We can also evaluate post-journal publication, but I'm not sure how to best summarize this in a simple way in the abstract.
The idea of "open evaluation platform" also could be a bit confusing here because it's not mainly about crowd sourcing. Yes, the "paid expert review packages" cover this, but I don't quite think this is worded in the best possible way.
Peer review is strained, and AI tools generating referee-like feedback are already adopted by researchers and commercial services—yet field evidence on how reliably frontier LLMs can evaluate research remains scarce.
This is a decent first sentence, although it bears the marks of AI-generated text. But also I'm not sure if it's really in line with our newest spin on this.
“high” reasoning effort
Not relevant to Pro -- cut this
OpenAI Responses API
"Responses" is the newer one (as of 4 Nov 2025)
returned file id keyed by path, size, and modification time.
What does this mean? "Keyed by"?
This implies it is kept on the server and won't need a later upload.
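If a gloss would help the revision, one plausible reading of "keyed by" is a client-side cache of the returned file id, indexed by a (path, size, mtime) tuple so the file is only re-uploaded when it changes. This is a hypothetical sketch, not the actual client code; `upload` is a stand-in for the real API call.

```python
import os

# Hypothetical client-side cache: reuse a previously returned file id
# unless the file's path, size, or modification time has changed.
_file_id_cache = {}  # (path, size, mtime) -> file_id

def get_file_id(path, upload):
    """`upload` stands in for the real upload call and returns a file id."""
    stat = os.stat(path)
    key = (path, stat.st_size, stat.st_mtime)
    if key not in _file_id_cache:
        _file_id_cache[key] = upload(path)  # only hits the server on change
    return _file_id_cache[key]
```

On this reading, an unchanged file is uploaded once and its server-side id is reused afterward, matching the "won't need a later upload" interpretation.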
d the best performance from top reasoning models
Best relative to what? Better than the 'non-top reasoning models'? @valik
Zhang and Abernethy (2025) propose deploying LLMs as quality checkers to surface critical problems instead of
Is this the only empirical work? I thought there were others underway. Worth our digging into. Fwiw I can do an elicit.org query.
but still recommend human oversight.
why? based on some evidence of LLM limitations or risks?
emphasize
I'd say 'they argue' instead of 'emphasize'; the latter seems like a statement of absolute truth that we agree with.
The population of papers
Should we adjust "the population of papers" to "the reference is" ? to be more explicit?