1,143 Matching Annotations
  1. Last 7 days
    1. Given the available collected data [...], how should [funders] measure the impact on wellbeing? [...] What measures of well-being should charities, NGOs, and RCTs collect for impact analysis?

      Let's split up the answer boxes within this question to ask separately about the best use of currently collected data for these cases, and also ask what data should be collected in the future.

    2. How reliable is the WELLBY measure [...] relative to other available measures in the 'wellbeing space'? How much insight is lost by using WELLBY and when will it steer us wrong?

      Signpost more clearly that we are talking about the very simple use of the WELLBY measure

    3. More detailed questions on WELLBY reliability

      Should be 'on WELLBY reliability and wellbeing measures' ... but also the folding box is still not ideal here -- better for this to link out to another page/subpage (open in new window)

    4. "Meaningful change" = at least one intervention currently in the top 5 moves out of the top 5, OR the #1 ranked intervention changes. This assumes future RCTs incorporate these methods and Founders Pledge updates their CEA accordingly.

      This one is nice -- is it the same in the PQ table?

    5. If you propose a measure other than linear WELLBY in your answer above, how much more would it cost to achieve the same welfare improvement using linear WELLBY instead?

      Make it clear that 'speculative' is OK here

    6. Currently, standard practice (used by HLI and Founders Pledge) treats SDs on different mental health instruments as interconvertible with WELLBY SDs on a roughly 1:1 basis.

      I don't know if that's standard practice -- these are two EA-linked or adjacent groups -- so we should moderate this claim

    7. Key assumptions

      Skip the 'key assumptions' part -- I don't think this gets it right anyway. E.g., comparisons of SD units shouldn't require linearity per se; that would be sufficient but not necessary. I'm not sure interpersonal comparability in levels is necessary either: if we had linearity and cardinality, the measured changes wouldn't depend on the starting points.

    8. If it's unreliable or systematically misleading, billions of dollars in funding decisions could be poorly directed.

      This point seems obvious. Maybe skip it

    9. Is the WELLBY (linear, 0–10 life satisfaction) a useful and reliable measure for comparing interventions—particularly those involving mental health, consumption, and life-saving—in the context that organizations like Founders Pledge use it?

      Claude rephrased these in simpler but less rigorous and, IMO, less useful ways. This page should use the actual wording of the key questions on the page https://coda.io/d/Unjournal-Public-Pages_ddIEzDONWdb/Wellbeing-PQ_suPg8sEH#_luVrD0mE -- use quotes where possible, ellipses where necessary, and link or fold the details.

      I find 'useful and reliable' a bit too vague perhaps

    1. If "7 out of 10" means something very different to different people, that's a fundamental challenge for the WELLBY as a tool for comparing interventions.

      This is a bit too simple. Note that WELLBY, as used in the simplest approaches, mainly requires differences to be comparable -- and even linear -- across individuals. Moving from 1 to 3 is valued equally to moving from 4 to 6 or from 8 to 10; it gets twice the value in this measure as moving one person from 3 to 4, and the same value as moving two people from 3 to 4 (see the formalization below).
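      To make the additivity explicit (a minimal formalization of the point above; the notation is mine, not from the source):

      \[ \text{WELLBYs gained} = \sum_i \left( \text{LS}_i^{\text{post}} - \text{LS}_i^{\text{pre}} \right) \times \text{years}_i \]

      so one person moving from 1 to 3 for a year counts +2, the same as two people each moving from 3 to 4, and twice as much as one person moving from 3 to 4.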

    2. Two measures dominate these analyses. The DALY (disability-adjusted life year) comes from health economics and captures years of healthy life lost to disease or disability.

      Not sure 'dominate' is accurate. Is DALY used more than QALY? Is WELLBY on par with either of these? Are there other heavily used measures?

  2. Feb 2026
    1. Our highest priority will be to avoid wasting author time. We’re very cognizant from first-hand experience that poor conversion quality, perhaps requiring back-and-forth with the author, is very unpleasant and a huge time suck.

      I suspect this is much less of an issue in the days of Claude

    2. Following acceptance, authors may pass their manuscript to the journal in any reasonable format (LaTeX or markdown preferred; Word and PDF acceptable). The document will be published in a “web-first” format, such as the Distill version of R Markdown. This allows reflowable text and mobile readability. We currently do not plan to support interactive content, as we do not think the large effort is worth the modest benefit.

      You don't have to host -- why not just evaluate and curate?

      Or you can have a compromise -- a 'traditional summary' in the journal, linking to the interactive version created by the author, the latter being the canonical one

      NB, I think interactive content is high value, but the authors can produce it, especially given Claude code etc

    3. The review process will be done using a manuscript in PDF format, which can be generated by the authors using whatever software they prefer (e.g., LaTeX). This avoids wasting the time of authors of papers that are later rejected.

      Not sure you even need pdf -- markdown should be acceptable, for example

    4. The journal Alignment will be a fast and rigorous venue for theoretical AI alignment—research on agency, understanding, and asymptotic behavior of advanced and potentially self-modifying synthetic agents

      Definitely theoretical alignment, not AI governance?

    5. Many potential criticisms of papers are “NP” (can be checked easily), so credentials of reviewer should be irrelevant

      I see that as a reasonable steelmanning of what PREreview is doing. In contrast, at The Unjournal we look for legible signs of expertise when we source and commission evaluators, although we do also encourage a separate "independent evaluation" mode (which has had very little take-up).

    6. If confidential: Massive reviewer effort (the report) boiled down to a single bit (!)

      Yeah, that's the most obvious limitation of the journal system. That's why we say "publicly evaluate and rate, don't accept/reject"

    7. . Our bet is that we can could unify and expand the field of alignment by establishing a legitimate academic journal with an unorthodox review pipeline.

      There are some costs here -- maybe you can have your thing ALSO be an overlay-journal / publish-review-curate thing, at least for those interested

    8. Since they are regarded as informal by institutional academia, time spent on such outputs is dead time, from the perspective of institutional research performance indicators and career progression

      This is big -- and something Unjournal is also hoping to remedy in a sense

    9. Our experimental solution to address this problem is to publish each accepted paper with a “reviewer abstract”.  Its main goal is to help a potential reader decide — on the paper’s merits — if the paper is worth reading.

      I like this idea. We ask for "abstracts" too but I particularly like the way you have phrased it, targeted at a potential reader

    10. We think it’s very reasonable to spend an average of ~$3k per paper on reviewer payments.

      We spend less on the 'evaluators', but something of this magnitude overall, once you include eval manager time, my own time, etc.

    11. We intend to experiment with LLM recommendations to surface candidates that might not be salient to the editors.

      We have experience with this and I can suggest some good tools

    12. Author identity known to reviewers

      This would not work well in situations I'm familiar with. Need to provide the opportunity for single-blind review, especially if there is some meaningful rating or filtering. Otherwise you just get back-slapping

    13. If a submission is published in the journal, the AF post is updated to reflect this, and the reviewer abstract is added. The reviewer abstract can be upvoted on AF, with the reviewers with AF accounts who sign the abstract receiving karma as appropriate.

      Why filter rather than just rate and sort? ... and let users choose how to filter?

    14. Journal Not Conference

      OK, your situation is rather different from Unjournal's -- I guess you are trying to build credibility and institutional structure for a new and fledgling field

    15. We are tentatively planning on making the journal archival, meaning that publication there constitutes the “version of record”, in contrast to a workshop publication. (Preprints of course are allowed.)

      Bad idea IMO ... although these concerns may vary by field

    16. Public review avoids this, but introduces additional problems due to lack of confidentiality: less honest, more combative and defensive conversations between authors and reviewers. Public review also produces an artifact that is poorly suited to a reader because the [...]

      anonymous public review exists

  3. Jan 2026
    1. Full LLM evaluations, human adjudication, new human evaluations

      This is likely to occur over a longer period; there will be overlap between these phases.

  4. Dec 2025
    1. To assess whether different LLMs produce systematically different evaluations, we collected ratings from multiple providers: OpenAI (GPT-4o-mini), Anthropic (Claude Sonnet 4), and Google (Gemini 2.0 Flash)

      We should note that none of these are frontier models, other than GPT Pro (which version?). @Valentin

    2. We first use the earlier GPT‑5 Pro evaluation run that covered all papers in our Unjournal sample with a simpler JSON‑schema prompt

      @Valentin which 'simpler prompt' was this? We should link it

    1. temporal “data leakage” between the multi‑year regrowth label and contemporaneous predictors;

      Did it really show this? OK we can check this in the appendix -- but it might be good to present a bit more side-by-side comparison.

      OK I checked the appendix and I couldn't find any mention of the temporal data leakage issue. It mentioned other issues that I interpreted as more about fitting a model on one time period and expecting it to pertain to another period, but that's not 'leakage'.

    2. We start by examining selected evaluations in detail. In the next step we will juxtapose these LLM assessments with the human evaluators’ written reports.

      Let's put a table of the relative ratings here (human vs AI, for each category etc), especially for this subset

    3. sensitivity of return‑on‑investment calculations to assumptions about donor lifetime value and unobserved costs.

      iirc this overlaps the human evaluation

    4. potential spillovers and spatial correlation across postal codes

      this was raised by the authors themselves, and they had an approach to accounting for it

    5. Case study: Williams et al. (2024)

      Valentin: Did an LLM write this comparison or did you? It's so very detailed that I am wondering how you had time to dig in this much.

    6. not fully propagate uncertainty

      Did the model really mention this? If so, awesome. (Although I'm a bit concerned about whether our evaluations entered its corpus, because I don't think it did so in the last version.)

    7. the treatment of “biophysical potential” versus business‑as‑usual regrowth,

      Is this the same as something the LLM identified? Maybe better to present the exact language in parallel (for human vs. LLM)?

    8. predictors; incomplete or optimistic treatment of uncertainty around the headline 215 Mha estimate; a broad and permissive definition of land “available for natural regeneration”; limitations of the carbon overlay and permanence assumptions; and only partial openness of code and workflows, which increases barriers to full replication.

      I would want to look at this correspondence between human and LLM critiques more closely. (Can also ask LLMs to check that)

    9. relatively

      Relative to other papers, or relative to other rating categories? I don't think we showed summary statistics or plots for the overall set of evaluations to compare against, so we can't tell whether it is 'relatively high'.

      Or maybe you meant 'relative to the human evaluations'?

    10. This October 2025 run asked the model only for numeric ratings and journal‑tier scores (no diagnostic summary or reasoning trace);

      I thought we asked it for 'reasoning for each rating'?

    11. To understand what GPT‑5 Pro is actually responding to, we re‑ran the model on four focal papers (Adena and Hager 2024; Peterman et al. 2024; Williams et al. 2024; Green, Smith, and Mathur 2025) using a refined prompt (as shown in the previous section).

      It's not clear to me how the prompt used here differs from the prompt used on the rest of the papers.

    1. Fourth, when sufficient information to compute a standardized mean difference (SMD) was lacking and the text reported a “null,” outcomes were set to an “unspecified null” of 0.01. This imputation is transparent but ad hoc; it could bias pooled estimates upward (relative to zero) and may not reflect the true variance of those effects. The manuscript would benefit from sensitivity checks setting these to 0, excluding them, or modeling them with conservative variances.

      IIRC this echoes the human evaluation (although one of the evaluators had a particularly detailed suggestion for this)

    2. For “tier_will,” given its status as a WZB discussion paper and the need to disregard actual publication knowledge, I might predict it will land around 3.2 to 4.0.

      This suggests/confirms that the model does not have access to the latest 'news' about the publication (in Management Science)

    3. Heterogeneity analyses suggest stronger effects in urban areas and in PLZs with higher employment, more children, and more Catholics, and with higher predicted giving potential. These patterns can guide targeting but also indicate that the ITT estimates average over meaningful heterogeneity.

      Seems to miss the issue of MHT (multiple hypothesis testing), and some very surprising heterogeneity suggests spurious estimates.

      Also, divergence from the PAP (pre-analysis plan) -- although I'm not sure it had access to the PAP.

    4. Data construction choices appear reasonable but introduce some judgment calls. Winsorizing PLZ-day donations at €1,000 reduces variance from heavy tails; the authors show that results are directionally robust, but precision trades off.

      Good that it noted the Winsorizing -- something Reiley emphasized.

    5. The most important methodological limitations concern exposure heterogeneity and spillovers. Treatment is assigned at the PLZ level, but impressions are probabilistic and sparse (roughly one in ten Facebook users in treated PLZs received at least one impression), so the estimates are ITT and likely attenuated relative to the effect of actually seeing the ad; the TOT is not estimated. The allocation strategy partly allows Facebook to endogenously concentrate impressions, creating within-treatment variation in exposure that is not exploited for causal TOT analysis (e.g., using randomized budgets as an instrument in a dose–response framework). Spillovers across PLZs are plausible (algorithmic leakage of geotargeting and social diffusion). The authors document positive “share of treated neighbors” effects and argue the main estimates are lower bounds, but the neighbor-treatment share is not itself randomized, and spatial correlation or common shocks could inflate these coefficients; the spillover analysis should be interpreted cautiously. Robustness to spatial correlation in errors is only partly addressed by robust standard errors and randomization inference; alternative SEs (e.g., spatial HAC or clustering at larger administrative units) and placebo geographies would further strengthen inference.

      At a first look (and from my memory) this seems like an extremely useful and plausible report!

    1. In this project, we test whether current large language models (LLMs) can generate research evaluations that are comparable, in structure and content, to expert human reviews.

      This is only a part of the project though

    2. a high‑stakes, policy‑relevant domain, and as the first step toward a broader benchmark and set of tools for comparing and combining human and AI research evaluations.

      Last sentence seems relevant to the grant application language

    3. model reliably identifies many of the same methodological and interpretive issues

      "reliably identifies" feels too strong ... or at least I haven't seen the evidence yet.

    4. and to produce a narrative assessment anchored in the PDF of each paper.

      I don't understand 'anchored in the PDF of each paper' -- maybe an LLM wrote this?

  5. Nov 2025
    1. Strong pantropical mapping, but several methodological and interpretive risks remain. Training data on natural regrowth include substantial omission error in humid biomes; pseudo-absences and misclassification may bias the model. Random forests were trained on class-balanced points and probabilities are treated as calibrated; no prevalence correction or probability calibration is shown, yet expected areas/carbon rely on these values. Validation is not fully spatial; accuracy likely inflated by autocorrelation (declines at greater distances); no formal spatial block cross-validation or predictive uncertainty mapping. Reported “confidence intervals” for total area/carbon are effectively deterministic sums, not uncertainty; overall uncertainty is understated. Predictions at 30 m depend on several coarser predictors (300 m–1 km), so effective map resolution is coarser and may mislead fine-scale planning. Final maps omit socioeconomic predictors (despite similar accuracy), assuming stationarity from 2000–2016 to 2030 and potentially overstating practical feasibility. Carbon estimates exclude permanence/leakage dynamics and use coarse downscaled inputs. Data products are open, but code is only “on request,” limiting full reproducibility.

      This new one seems to show some potential to reflect the key concerns, but I need to check this in more detail, as it could just be credible-sounding garbage. It still doesn't seem to pick up the key 'data leakage' concern.

      But actually I'm a bit puzzled as to what data is being piped in there, because if I recall correctly, the latest version we had didn't ask for rationales for specific categories. So where is it getting this from?

    2. 78

      Much higher than the humans, but this category is rated lowest or second-lowest for both ... so perhaps close to a monotonic transformation of sorts.

  6. Oct 2025
    1. Show code

      @Valentin These forest plots are really hard to read, it's so dense without spacing. Let's work together on some ways of making it more informative.

      I'm also puzzled as to why so many papers are shown with only one type of rating and not the other. I know that some of our evaluators did not give ratings like this, and in some cases we didn't even encourage it. But why is it missing for some of the LLMs? Did it just not have time to finish processing?

      Maybe it's a display issue? It seems that the papers that were rated highest in terms of these tiers by the human raters did not get rated by the LLMs. Or maybe it just didn't show up in the graph?

  7. Sep 2025
    1. Inter-rater reliability (Cohen’s κ): We treated the AI as one “reviewer” and the average human as another, and asked: how often do they effectively give similar ratings, beyond what chance alignment would predict? For overall scores, Cohen’s κ (with quadratic weighting for partial agreement) came out around 0.25–0.30. This would be characterized as “fair” agreement at best – a significant gap remains between AI and human judgments. For most specific criteria, the weighted κ was lower, often in the 0.1–0.2 range (and for one category, effectively 0). A κ of 0 would mean no more agreement than random chance (given the distribution of scores), so some categories are bordering on that. In contrast, typical inter-human agreement on these kinds of scoring tasks can also be low, but usually one would hope for κ in the 0.3–0.6 range among trained reviewers on well-defined criteria. Our finding of κ < 0.3 in all cases suggests that the AI’s ratings are not interchangeable with a human reviewer’s – at least not without further calibration.

      @Valentin I want to focus on interpreting statistics like these. I think we could make info-theoretic statements about things like "how often would the LLM's relative ranking of 2 randomly chosen papers agree with a human's ranking". Is that kappa? Let's dig in -- see the sketch below.
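      Here is a rough sketch of what I have in mind, assuming a data frame `scores` with one row per paper and (hypothetical) columns human_overall and llm_overall on the 0-100 scale; this is illustrative, not our analysis code:

      ```r
      # Quadratic-weighted Cohen's kappa, written out by hand
      weighted_kappa <- function(x, y, levels = sort(unique(c(x, y)))) {
        k <- length(levels)
        O <- prop.table(table(factor(x, levels), factor(y, levels)))   # observed joint proportions
        E <- outer(rowSums(O), colSums(O))                             # expected under independence
        W <- outer(seq_len(k), seq_len(k), function(i, j) (i - j)^2) / (k - 1)^2  # quadratic disagreement weights
        1 - sum(W * O) / sum(W * E)
      }

      # "How often would the LLM and a human order two randomly chosen papers the same way?"
      # Ignoring ties, this is (Kendall's tau + 1) / 2 -- related to, but not the same as, kappa.
      pairwise_agreement <- function(x, y) {
        (cor(x, y, method = "kendall") + 1) / 2
      }

      with(scores, weighted_kappa(round(human_overall / 10), round(llm_overall / 10)))  # coarsened to 0-10 bins
      with(scores, pairwise_agreement(human_overall, llm_overall))
      ```

      So the pairwise-ordering question is answered by a (transformed) rank correlation rather than by kappa itself; weighted kappa additionally penalizes disagreement in levels, relative to chance.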

    2. Correlation (Pearson’s r) between the AI’s and human scores across papers: This tells us, for example, if a paper that humans gave a high score also tended to get a high score from AI (regardless of absolute difference). For the Overall scores, Pearson r ≈ 0.30, indicating a weak-to-moderate positive correlation.

      @Valentin Important -- let's put these measures in context. How does this correlation compare with what other papers found, and with inter-human ratings, for example?

      We might also find a way to introduce information-theoretic measures that say something in an intuitive absolute sense.

    3. Table 3.1 show agreement metrics across rating criteria. To quantify the agreements and differences observed, we calculated several statistics comparing LLM scores to the human scores, aggregated by criterion:

      @Valentin perhaps we should start with the measures of agreement and then get into unpacking the discrepancies?

    4. has a distinct “taste,” elevating some work and devaluing other work differently than human referees.

      I don't think we can call this 'taste' yet. It might be random noise or perhaps ~bias (to top institutions, authors, journals, etc.).

    5. For example, Aghion et al. 2017 was among the top few for human reviewers, but the LLM overall score put it notably lower relative to others, hence a downward green curve.

      @Valentin I don't think that was the greatest discrepancy -- should we identify some with a greater discrepancy here? (Ideally, we even soft-code it, as this is likely to change as we adjust the prompts, anonymize, etc.)

    6. high quality due to its robust modeling and accessible methodologies, making it relevant for climate mitigation and biodiversity

      This is also not particularly coherent. Strange. If we just ask the LLM to evaluate the paper, it tends to give a much better response.

    7. on the right, the papers are ordered by the AI’s overall score (rank 1 = highest rated by AI)

      labeling-wise, would it be better to add the paper authors on the right as well? @valentin.klotzbuecher@posteo.de

    8. For example, in the Logic & Communication column, we see many light-orange cells – the AI often thought papers were a bit clearer or better argued (by its judgment) than the human evaluators did.

      I wonder if we should normalize this in a few ways, at least as an alternative measure.

      I suspect the AI's distribution of ratings may differ from the human distribution of ratings overall, and the "bias" may also differ by category.

      Actually, that might be something to do first -- compare the distributions of ratings (midpoints for now; something more sophisticated later) for humans and for LLMs in an overall sense.

      One possible normalization would be to state these as percentiles relative to the other stated ratings within that group (humans, LLMs), or even within categories of paper/field/cause area -- see the sketch below. I suspect there's some major difference between the more applied and niche-EA work and the standard academic work (the latter is also probably concentrated in GH&D and environmental econ). On the other hand, the systematic differences between LLM and human ratings on average might also tell us something interesting, so I wouldn't want to only use normalized measures.

      I think a more sophisticated version of this normalization just becomes a statistical (random effects?) model where you allow components of variation along several margins.

      It's true the ranks thing gets at this issue to some extent, as I guess Spearman also does? But I don't think it fully captures it.
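      A minimal sketch of that within-group percentile normalization, assuming a long data frame `ratings` with (hypothetical) columns rater_type ("human"/"llm"), metric, paper, and midpoint, one row per rater type x paper x metric (e.g., the human average):

      ```r
      library(dplyr)
      library(tidyr)

      # Percentile of each midpoint within its rater group and rating category
      ratings_pct <- ratings |>
        group_by(rater_type, metric) |>
        mutate(percentile = 100 * (rank(midpoint) - 0.5) / n()) |>
        ungroup()

      # Average LLM-minus-human gap per category, on the normalized scale
      ratings_pct |>
        pivot_wider(id_cols = c(paper, metric),
                    names_from = rater_type, values_from = percentile) |>
        mutate(gap = llm - human) |>
        group_by(metric) |>
        summarise(mean_gap = mean(gap, na.rm = TRUE), .groups = "drop")
      ```

      The same grouping could be extended to paper/field/cause area, or replaced by the random-effects model mentioned above.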

    9. Indeed, GPT often noted lack of code or data sharing in papers and penalized for it, whereas some human reviewers may have been more forgiving or did not emphasize open-science practices as strongly (especially if they focused more on content quality). As a result, for many papers the AI’s Open Science score is 5–10 points below the human average.

      This is interesting. The human evaluators may have had low expectations because they don't expect the open code and data to be provided until the paper has been published in a peer-reviewed journal. Here I would agree more with the LLM, in a "what should be" sense.

    10. Figure 3.2: Relative ranking (overall) by LLM and Human evaluators

      A quick impression: the LLMs tend to rank the papers from prominent academic authors particularly high?

    1. The brief rationales clarify what evidence in the paper drove each score.

      I don't think we can be confident that whatever it puts here accurately reflects the reasoning or process that determined the rating. (I added discussion of this in the text.)

      Also, is there some way to extract the 'thinking steps' from the process ... the reasoning thread the models show you (which may or may not reflect its true reasoning)?

      @valik

    2. - tier_should = where the paper deserves to publish if quality-only decides. - tier_will = realistic prediction given status/noise/connections.

      This seems like an LLM abbreviation of our instructions. @valik can you put the actual instructions back in?

    3. We use a single‑step call with a reasoning model that supports file input. One step avoids hand‑offs and summary loss from a separate “ingestion” stage. The model reads the whole PDF and produces the JSON defined above. We do not retrieve external sources or cross‑paper material for these scores; the evaluation is anchored in the manuscript itself.

      We should probably give a citation for this point.

      But is this the same point you made above?

    4. • Default = arithmetic mean of the other six midpoints (rounded).

      @Valentin I'm not sure why this should be the default. Note that in an earlier version of our evaluation framework, we used a weighted scheme, which we dropped

      I suspect it would be better for you to ask the question the same way we ask the evaluators here ... which would simply mean getting rid of this bullet point, I think.

      The second bullet point is interesting, though. I would be curious to hear how it weighed the relevance of each metric in its overall score ... although I expect that asking that question might alter the overall score.

      @valik Maybe GPT suggested this default averaging? It also seems suboptimal because we're asking it for something that we could easily compute ourselves.

    5. plus a short rationale

      @valik @Valentin

      I don't see where in the prompt you explained what the rationale is supposed to mean. I only see some discussion of how to use it. It should be stated whether it overrides the simple mean used to aggregate the other categories into the overall score.

      (And see my other comment on why I think we should remove the request to do simple means.)

    6. The credible intervals communicate uncertainty rather than false precision

      @Valentin That's the intention, sure, but this paragraph makes it seem like the JSON schema somehow ensures that this is used. I'll try to adjust.

    7. # Environment setup

      Note to self -- this chunk is set to 'eval=false'; it doesn't need re-running every time (and has trouble playing nicely in the Quarto/RStudio environment)

    8. Direct ingestion preserves tables, figures, equations, and sectioning, which ad‑hoc text scraping can mangle. It also avoids silent trimming or segmentation choices that would bias what the model sees.

      Useful to add a citation here.

    9. The sample includes `r dim(research_done)[1]` papers

      @Valentin the number comes out in my RStudio but not on render -- wondering what I did wrong here. I think it may be syntax, but also the session starts again with every chapter, so we need to load the data in first for this discussion (see the sketch below).
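      A possible fix, as a sketch (the file path is a placeholder): in a Quarto book each chapter renders in a fresh R session, so the data has to be (re)loaded in a setup chunk before any inline expressions, and the inline reference needs backticks, e.g. `r nrow(research_done)`.

      ```r
      #| label: setup
      #| include: false
      # Quarto chunk options above; this runs at the top of each chapter that uses the data
      research_done <- readr::read_csv("data/research_done.csv")  # placeholder path
      ```

      With that in place, "The sample includes `r nrow(research_done)` papers" should render with the number filled in (nrow() is equivalent to dim()[1] here).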

    10. per criterion (and noting the range of individual scores).

      @valentin we should probably do something more sophisticated at the next pass ... either using each evaluation as a separate observation in the analysis, or imputing the median while taking the stated credible intervals into account with some Bayesian procedure (see the sketch below).
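      One simple version of that, as a sketch: treat each evaluation as Normal(midpoint, sd) with the sd backed out from the stated 90% interval, then combine with precision weights. The column names (midpoint, lower, upper, paper, metric) are placeholders for whatever we use in the harmonized data:

      ```r
      library(dplyr)

      z90 <- qnorm(0.95)  # ~1.645 for a 90% interval under normality

      evals <- evals |>
        mutate(sd = (upper - lower) / (2 * z90),
               precision = 1 / sd^2)

      # Precision-weighted pooled midpoint and its (optimistic) pooled sd, per paper and criterion
      pooled <- evals |>
        group_by(paper, metric) |>
        summarise(pooled_mid = sum(precision * midpoint) / sum(precision),
                  pooled_sd  = sqrt(1 / sum(precision)),
                  .groups = "drop")
      ```

      This treats evaluators as independent, unbiased measurements of the same quantity, which is a strong assumption; the linear-pooling alternative sketched further below relaxes it.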

    1. Please turn on the hypothes.is plugin, and view public annotations to see Latex math representations

      What if I annotate this in hypothes.is -- is it preserved usefully? \(CRF(r,n) = \frac{r(1+r)^n}{(1+r)^n - 1}\)

      Yes, it seems to stay in the same place in the notebook even if the notebook is edited.

      Wait now it's an orphan?

  8. Aug 2025
    1. Figure 3.4: Human uncertainty (CI width) vs |LLM − Human|. Spearman correlation reported.

      This one is rather intricate; it might need some more analysis and talking through.

    2. Table 3.3: CI coverage: does one interval contain the other’s point estimate?

      I think DHK also looked at 'human vs. human' -- might as well add this as a comparator?

    3. Table 3.1: Agreement metrics by metric (Pearson, Spearman, mean/median bias, κ unweighted/weighted).

      I assume this is referring to the correlation of (the average of?) the human ratings and the LLM ratings?

      Because we could also look at the agreement between humans (something David HJ was working on ... Might be useful to share this with him for his thoughts if he's interested at all.)

    4. As another indicator of agreement

      Exactly. It's sort of a measure of agreement, although I'm not sure quite how to interpret it. It's not a measure of calibration per se because we were asking them to rate these things, not to predict them.

      Although we could ask the LLM to do this as a prediction exercise, with or without training data -- that might be interesting. And then, of course, the calibration would be meaningful.

    5. The horizontal radius covers the union of all human evidence for that paper — combining individual raters’ point scores and their CIs

      It's not fully clear to me what this does. I guess you center it (horizontally) at the midpoints of the human ratings.

      Are you using the outer bounds of both raters' CIs?

      Probably the best thing to do ultimately would be to impute a distribution over each individual rating and CI, and then do some sort of belief aggregation. Horizontal linear aggregation actually feels the most intuitive to me from what I've read ... and then give the implied CIs for that -- see the sketch below.
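      A sketch of that horizontal (linear) pooling, with toy numbers and the same normal-from-90%-CI assumption as above: mix the raters' distributions with equal weights and read the implied interval off the mixture's quantiles.

      ```r
      # Linear opinion pool of several (midpoint, 90% CI) ratings, via simulation
      pool_linear <- function(mids, lowers, uppers, n_draw = 1e5) {
        sds <- (uppers - lowers) / (2 * qnorm(0.95))
        i <- sample(seq_along(mids), n_draw, replace = TRUE)   # pick a rater at random...
        draws <- rnorm(n_draw, mean = mids[i], sd = sds[i])    # ...then draw from that rater's distribution
        c(mid = median(draws), quantile(draws, c(0.05, 0.95))) # pooled midpoint and implied 90% interval
      }

      pool_linear(mids = c(62, 75), lowers = c(50, 68), uppers = c(74, 85))  # toy numbers, not real ratings
      ```

      Unlike precision weighting, this keeps disagreement between raters visible as a wider implied interval.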

    6. For each paper (selected metric), the ellipse is centered at the pair of midpoints (Human, LLM).

      This is pretty clear but it took me a second to get this, so maybe mention human = horizontal, LLM = vertical in the text.

    7. robust CI ellipses

      Robust in what sense? Also might be worth mentioning that we are talking about 90% credible intervals (at least that's what we asked the humans to give us).

    8. Uncertainty fields. Where available, we carry lower/upper bounds for both LLM and humans; these are used in optional uncertainty checks but do not affect the mid‑point comparisons below.

      I suspect we ultimately should do something more sophisticated with this ... like some Bayesian updating/averaging. It's also not entirely clear what you mean by "we carry".

      But of course, the "right" way to do this will depend on what precisely is the question that we're asking. Something we should have some nice chats about.

    9. Human criteria are recoded to the LLM schema (e.g., claims → claims_evidence, adv_knowledge → advancing_knowledge, etc.)

      This is just about coding the variables, right? Not really about the content?

    10. Sources. LLM ratings come from results/metrics_long.csv (rendered in the previous chapter). Human ratings are imported from your hand‑coded spreadsheet and mapped to LLM paper IDs via UJ_map.csv.

      Okay, I see these are basically notes to ourselves here.

    11. We (i) harmonize the two sources to a common set of metrics and paper IDs

      This is fine for our own notes for now, but it's not something that outsiders need to read, I guess.

    1. Limited International Scope: The focus on US data and markets (acknowledged in footnote 20) may limit generalizability, particularly given different regulatory environments and consumer preferences globally.

      Perhaps we should footnote something more about how we're focusing on US data because of its availability -- and thus framing the question around the US -- but will encourage making reasonable extrapolations to global consumption. #todo? Although we already have a footnote about it.

    2. Supply Chain Complexity: The analysis doesn't fully account for how restaurant vs. grocery substitution patterns might differ, despite acknowledging this limitation.

      This kind of asks for too much detail -- but maybe we didn't state the purpose of the post as clearly as we should have?

    3. The explicit recognition that 'given the range of unknowns involved' makes direct comparison infeasible demonstrates strategic research thinking that EA organizations should emulate.

      @ozzie does your tool have explicit background instructions talking about "EA organizations", or is it getting this from the context of the post? Obviously the former would make it hard to apply in a more general range of settings.

    4. Score: 78/100

      @ozzie this is a critique of the above evaluation of our post. That's a very cool idea! Given that our post is already a bit meta, my brain is struggling to grok this, but I like it.

    5. Temporal Dynamics: The document doesn't adequately address how substitution patterns might change as plant-based products become more mainstream or as consumer familiarity increases.

      That's a fair point, and one we might want to incorporate. #todo?

    6. Meta-Analysis Limitations: While the document presents cross-price elasticity estimates, it could better address how traditional meta-analysis approaches might be inappropriate given the fundamental methodological concerns raised.

      This is a mostly valid comment; it's a pretty reasonable thing to ask for here. We probably don't want to go into this detail here, but it's a natural thing someone might ask.

      OK. #todo? (maybe a footnote)

    7. This nuanced view prevents oversimplified dismissal of conflicting results.

      But does this tool reward valid nuance or just any type of "it's complicated because of" statements?

    8. However, the document could strengthen its analysis by exploring potential explanations for these inconsistencies more systematically.

      This perhaps misses the point of the post. It's not meant to be an actual research exploration. It's just demonstrating the problem to help motivate and guide the next step in our project.

    9. The progression from broad goal-focused questions to specific operationalized questions follows sound research design principles.

      I'm wondering how much the tool is just being agreeable with our reasoning and how much it is challenging it. I wonder whether it just focuses on whether the author seems to be saying a lot about "why" and thinking/overthinking, and rewards that.

      That said, I do think this is a post that we've been putting a lot of care into, so it should get at least a decent rating here.

    10. supermarkets raising PBA prices during economic windfalls effectively illustrates how naive analysis could overstate substitution effects.

      This would be a useful insight for a regular layman's post (the current context), but it would be kind of trivial if we were applying this to an economics paper.

      But maybe the tool adapts to that.

    11. This represents sophisticated epistemic humility—recognizing that documenting limitations transparently provides significant value even when definitive answers remain elusive

      Okay, but this also seems like something that might be easy to fake with shibboleths and hollow phrases.

    12. This document presents The Unjournal's approach to investigating whether plant-based meat alternatives effectively displace animal product consumption—a critical question for animal welfare funding decisions. The analysis demonstrates sophisticated awareness of methodological limitations while proposing a structured approach to synthesize conflicting evidence.

      The language is a little too formal and flowery for my taste.

    13. This document presents The Unjournal's Pivotal Questions initiative investigating plant-based meat substitution effects on animal product consumption. The analysis reveals significant methodological challenges in the existing research, with conflicting findings across studies using similar data sources. The document proposes a focused operationalized question about Impossible/Beyond Beef price changes and chicken consumption, while acknowledging fundamental limitations in current estimation approaches. Key insights include the recognition of endogeneity problems, the value of transparent uncertainty quantification, and the need for more rigorous experimental designs to inform animal welfare funding decisions.

      The summary is accurate, although it emphasizes methodology more than I thought the post did. That said, it might be that the post actually does that more than we had intended.

    1. Extracted forecast: Will Impossible Beef reach price-parity with conventional ground beef by December 31, 2030? Quality scores: Precision: 85 | Verifiable: 90 | Important: 75 | Robustness: 65 Our reasoning: Based on 2 independent analyses, the estimated probability is 72.3%. There is high consensus among the forecasts. Individual model forecasts: Model 1: 72.3% - "Plant-based meat costs have declined significantly due to scale economies and R&D investments, while conventional beef prices face upward pressure from environmental regulations and feed costs, making price parity likely within the 5+ year timeframe."

      @ozzie this is indeed directly helpful for us!

    2. Based on 2 independent analyses, the estimated probability is 7.5%. There is high consensus among the forecasts.

      @ozzie ah, it seems you've set it up to only make predictions for binary things.

    3. 1. "What will be the price of IB+ on January 1, 2030?"

      @ozzie I'm puzzled here by what's going on. These were not our predictions. These were just questions we asked for others to predict. I guess it defaults to 'author predicts 50%' for such questions?

    1. Top 3 Most Overconfident Predictions 1. "Will most cultured meat (by volume) be produced wi..." Reality check: ~32.6% | Gap: 17.4%

      @ozzie I'm rather puzzled by what it did here, as we didn't make a prediction; we were just proposing questions for others to predict on.

  9. Jul 2025
    1. Figure 2.2: Per-paper mid-point scores across all metrics. Darker green → higher percentile. Columns ordered by each paper’s overall average. [Interactive Altair heatmap embed omitted; it shows mid-point scores by paper and metric.]
2023", "midpoint": 80, "short": "Walker (2023)"}, {"metric": "claims_evidence", "paper": "Walker et al. 2023", "midpoint": 85, "short": "Walker (2023)"}, {"metric": "overall", "paper": "Walker et al. 2023", "midpoint": 80, "short": "Walker (2023)"}]}}, {"mode": "vega-lite"});

      This is really nice in principle, but it needs some tweaking: something about the colors and spacing makes it hard to differentiate the papers here. Love the 'hover titles' though!!

  10. Jun 2025
  11. May 2025
    1. including probability judgement and robustness suggestions.

      --> 'making a probabilistic judgement about the accuracy of this claim, and making suggestions for robustness checks'

    1. → scoring ssrn-4071953.pdf …

      What was the run time? -- put in a timer? Something like 5 minutes for 3 papers. (A minimal timing sketch is below.)

      NB: some papers are too long for o3's context window -- they can be chunked and summarized by another model (LangChain helps here), or we can use a larger-context model like Gemini.
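
      A rough sketch of the kind of timer meant above -- score_paper and pdf_paths are hypothetical placeholders standing in for the actual pipeline objects, not its real code:

      ```python
      import time

      def score_paper(path: str) -> dict:
          """Hypothetical stub for the real LLM scoring call; replace with the pipeline's function."""
          time.sleep(0.1)  # stand-in for the API round trip
          return {"paper": path, "overall_percentile": None}

      pdf_paths = ["ssrn-4071953.pdf"]  # hypothetical list; in practice, every paper to be scored

      start = time.perf_counter()
      for path in pdf_paths:
          t0 = time.perf_counter()
          result = score_paper(path)
          print(f"{path}: {time.perf_counter() - t0:.1f}s")
      print(f"Total: {time.perf_counter() - start:.1f}s for {len(pdf_paths)} paper(s)")
      ```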

    2. Table 2.1: Overall assessments (percentiles)

      Let's try one out on a paper that our evaluators gave low scores ... or one we think would be likely to receive low scores in general, for comparison

    3. # 1. Point to the papers you want to score

      Looks like you put the PDFs in the local directory and the git repo? Let's think about ways to more automatically 'feed these in' in bulk -- e.g., globbing a folder or pulling from a shared drive (rough sketch below).
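
      One simple option, sketched here: glob every PDF in a single folder instead of hard-coding file names. The folder name is a hypothetical placeholder.

      ```python
      from pathlib import Path

      # Hypothetical folder; point this at wherever the PDFs actually live (local dir, synced drive, etc.)
      pdf_dir = Path("papers_to_score")
      pdf_paths = sorted(str(p) for p in pdf_dir.glob("*.pdf"))
      print(f"Found {len(pdf_paths)} PDFs to score")
      ```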

    4. We call GPT‑4o on each PDF, asking for a strict JSON percentile rating and a short rationale, then collect the results.

      I suspect 4o is not particularly insightful? But we can compare it to the (~manual) results we previously got from o3.
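
      For reference, a minimal sketch of the kind of call being described -- the OpenAI Python client with JSON-mode output. The prompt wording, rating fields, and truncation are illustrative assumptions, not the pipeline's actual prompt:

      ```python
      import json
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def rate_paper_text(paper_text: str) -> dict:
          """Ask for a percentile rating and a short rationale, returned as strict JSON."""
          response = client.chat.completions.create(
              model="gpt-4o",  # could swap in another model for comparison
              response_format={"type": "json_object"},
              messages=[
                  {"role": "system",
                   "content": ("You are rating an academic paper. Return JSON with keys "
                               "'overall_percentile' (0-100) and 'rationale' (one short paragraph).")},
                  {"role": "user", "content": paper_text[:100_000]},  # crude truncation; chunk/summarize long papers instead
              ],
          )
          return json.loads(response.choices[0].message.content)
      ```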

    1. Explore LangChain, OpenAI Evals

      Is LangChain compatible with this Quarto approach? ... probably yes. But we can first do some things more manually/simply.

    1. r in the following formula: P(H1|D)/P(H0|D) = [P(D|H1)/P(D|H0)] × [P(H1)/P(H0)]; "Posterior Probability = Bayes Factor × Prior Probability". A Bayesian analysis of data requires specifying the prior. Here, we will continu

      formula should refer to 'odds' not 'probability'
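
      For reference, the odds form the comment is pointing to (the standard identity; the left-hand side is the posterior odds):

      ```latex
      \frac{P(H_1 \mid D)}{P(H_0 \mid D)}
        = \underbrace{\frac{P(D \mid H_1)}{P(D \mid H_0)}}_{\text{Bayes factor}}
          \times
          \underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}}
      ```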

  12. Apr 2025
    1. Consider how choice of measure might affect assessment of policies targeting different population groups or levels of well-being.

      This seems a bit more actionable, but at a quick look the linked text doesn't seem to provide insight.

    2. Measure Selection: Choice between DALYs, QALYs, WELLBYs, or other measures can significantly impact estimated benefits of interventions. Airoldi (2007) demonstrates that DALYs consistently yield smaller values than QALYs for the same intervention, potentially affecting cost-effectiveness rankings and resource allocation decisions.

      This seems to argue 'this stuff does actually matter'. Not sure whether this shows it still matters after the SD ~normalization though

    3. Comparison of mental well-being and health-related quality of life measures: Johnson et al. (2016) compared the EQ-5D-3L with the Warwick-Edinburgh Mental Well-being Scale (WEMWBS). They found that WEMWBS better distinguished between different health levels due to the absence of a ceiling effect. This implies that standard deviation changes in mental well-being measures might capture more variation at higher levels of health or well-being compared to traditional health-related quality of life measures.

      Not sure I understand this, but the ceiling-effect issue seems important at a glance. If one measure is hitting ceilings, that can mess up 'like-for-like SD comparisons'.

    4. Sensitivity of subjective well-being measures: Christopher et al. (2014) found that subjective well-being measures were less sensitive to health differences and changes compared to SF-6D values. This suggests that standard deviation changes in subjective well-being measures (which are related to WELLBYs) might not correspond directly to changes in health-related quality of life measures (which are more closely related to DALYs).

      Seems 'important if true'

    5. We didn’t find direct comparisons of standard deviation changes in WELLBYs versus DALYs in the included studies.

      Is that a bad omen for this modeling choice?

    6. ound that subjective well-being measures were less sensitive or responsive to health differences and changes compared to SF-6D values

      What 'health differences'? Should this be seen as a gold standard comparison?

    7. They found that WEMWBS better distinguishes health levels due to the absence of a ceiling effect, which is present in the EQ-5D-3L.

      So why are others using 'WELLBYs' instead? Easier to collect?

    8. Johnson et al. (2016) compared the Warwick-Edinburgh Mental Well-being Scale (WEMWBS) with the EQ-5D-3L.

      Factor analysis. EFA and CFA validation here. (Something I've never fully been able to grok; it seems like there are too many degrees of freedom if the elements can load onto any factor and the factors can be correlated ... but this seems to be established practice.)

    9. Johnson et al. (2016) compared the Warwick-Edinburgh Mental Well-being Scale (WEMWBS) with the EQ-5D-3L.

      Lots of factor analysis to validate this measure. I've not fully been able to understand the logic behind EFA and CFA myself, even after some trying. But it seems to be well-accepted.

    10. First, proposed conversion factors between different well-being measures (such as the 0.3 QALY/DALY ratio) showed significant variation across contexts and populations. Second, while WELLBYs were proposed as an alternative to traditional health-related quality of life measures, the studies indicated limitations in their current validation. Third, the reviewed empirical work found that subjective well-being measures demonstrated lower sensitivity to health status changes compared to established measures like SF-6D, particularly in health intervention contexts.

      This strikes me as particularly worth digging into!

    1. Here we present an initial set of forecasts from a panel of paid forecasters (including Linch and Neil). We plan to expand forecasting on similar questions in a Metaculus tournament[1] so that we can see how forecasts are affected by news of supposedly important breakthroughs.

      A small team. This aspect might be seen as informal.

  13. Mar 2025
    1. These systems provide no structural advantages or disadvantages to either the Democratic or Republican parties or to any single politician.

      This seems like it's probably an overstatement; may be shorthand here

  14. Nov 2024
    1. Dr McCulloch has provided expert input, based on his research, to provide a solid evidence base to the Better Deal for Animals campaign coordinated by over 40 leading NGOs. McCulloch’s proposed animal welfare impact assessment has been supported by HMG Official Opposition Labour Party in Parliament and the UK Government has stated in official correspondence that it is considering how to implement post-Brexit sentience policy in response to coordinated lobbying strongly underpinned by McCulloch’s research.

      REF impact ... frameworks for The Unjournal?

  15. Oct 2024
    1. Publish-then-Review; Transparent and Open Peer Review

      The Unjournal supports this. I think we have done this with more depth and rigor than other initiatives in economics and social science.

  16. Aug 2024
    1. Table 3 shows loadings for the first three components.

      We should discuss and analyze this further. What's a reasonable interpretation? How much of the variation does each of these components explain? (A quick sketch of how to check this is below.)
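
      A minimal sketch of how one could check the explained-variance question, using simulated stand-in data rather than the real ratings matrix:

      ```python
      import numpy as np
      from sklearn.decomposition import PCA

      # Simulated stand-in for the real ratings matrix: rows = evaluations, columns = rating categories.
      rng = np.random.default_rng(0)
      ratings = rng.normal(size=(39, 7))

      pca = PCA(n_components=3)
      pca.fit(ratings)
      print("Share of variance explained by each component:", pca.explained_variance_ratio_)
      print("Loadings (components x categories):")
      print(pca.components_)
      ```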

    2. The table above already looks a bit worrying: typically no more than half of our evaluators’ midpoints fall within their co-evaluator’s 90% credible interval

      I'd like to dig into this to find some specific instances. Are these all the 'real CIs' and not the ones we imputed from the earlier Likert CI data? (A rough sketch of the coverage check is below.)
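
      A rough sketch of the coverage check itself, with made-up numbers and illustrative column names (not the real dataset's):

      ```python
      import pandas as pd

      # Hypothetical paired-evaluation frame: evaluator 1's midpoint vs. evaluator 2's 90% credible interval.
      pairs = pd.DataFrame({
          "paper":      ["A", "A", "B", "B"],
          "midpoint_1": [70, 60, 85, 80],
          "ci_lower_2": [55, 65, 60, 70],
          "ci_upper_2": [80, 90, 75, 95],
      })

      pairs["covered"] = pairs["midpoint_1"].between(pairs["ci_lower_2"], pairs["ci_upper_2"])
      print(pairs)
      print("Coverage rate:", pairs["covered"].mean())
      ```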

    3. Krippendorff's Alpha

      Some comparison numbers would also be helpful. I asked ChatGPT:

      give some comparison alphas from other contexts to help understand this measure

      Response:

      Krippendorff's alpha is used across a variety of disciplines to measure inter-rater reliability. Below are some examples from different contexts to provide a point of reference for interpreting the values in your data:

      1. Content Analysis (Media Studies): A typical Krippendorff's alpha for coding reliability in content analysis (e.g., categorizing themes in media) might range from 0.6 to 0.8. Values below 0.67 are often considered insufficient for drawing strong conclusions, though thresholds can vary depending on the field. For example, a study evaluating news articles might report an alpha around 0.7.

      2. Healthcare (Medical Diagnostics): In medical diagnostics, particularly in assessments involving subjective judgment (e.g., radiology, pathology), alpha values are expected to be relatively high -- often above 0.75. An alpha of 0.8 or higher is often considered acceptable for diagnostic consistency. For instance, a study on agreement among radiologists in interpreting mammograms might report an alpha of around 0.81.

      3. Education (Grading & Evaluation): In educational assessment, inter-rater reliability for grading essays or open-ended responses typically yields alphas in the range of 0.6 to 0.9, depending on the specificity of the rubric and training provided to raters. A study on grading consistency among multiple teachers might report an alpha of 0.65 to 0.75.

      4. Psychology (Behavioral Coding): In psychology, for tasks like coding behavior or assigning labels based on psychological interviews, Krippendorff's alpha values of 0.4 to 0.6 are relatively common, but higher values (above 0.7) are preferred for more reliable and replicable findings. For example, coding nonverbal behaviors in therapy sessions might produce an alpha of 0.5.

      5. Social Science (Survey Coding): When coding open-ended survey responses, social science researchers often see alpha values ranging from 0.6 to 0.7, reflecting moderate agreement. An alpha of 0.65 might be reported in a study on the reliability of coding responses to survey questions about political preferences.

      6. Linguistics (Speech Annotation): In linguistics, particularly in tasks such as speech transcription or annotation, inter-rater reliability can vary widely depending on the complexity of the task. Alphas around 0.7 are often considered acceptable. For example, a study transcribing dialectical variations in speech might report an alpha of 0.72.

      General Guidelines for Interpreting Krippendorff's Alpha:

      • >0.80: Strong agreement, generally considered acceptable for high-stakes decision-making.
      • 0.67 - 0.80: Moderate to strong agreement, often considered sufficient for exploratory studies or content analysis.
      • 0.50 - 0.67: Moderate agreement, acceptable in some fields, but may require more consistency for high-stakes or formal evaluations.
      • <0.50: Low agreement, indicates significant variability in ratings; further training or clarification of criteria may be needed.

      Comparison to Your Data:

      • Your highest alpha values (e.g., 0.804 for "gp_relevance" and 0.778 for "merits_journal") indicate strong agreement, similar to the reliability found in high-stakes fields like medical diagnostics or well-defined content analysis tasks.
      • The lower values (e.g., 0.271 for "adv_knowledge" and 0.134 for "methods") suggest substantial disagreement among evaluators. These values are below what would be considered acceptable in most research contexts, even for exploratory analysis, and might require addressing the clarity or consistency of evaluation criteria.

      These comparisons provide a framework for assessing where your data stands in relation to typical expectations in various research areas.

    4. p_e = 1/6. Krippendorff's alpha is a widely accepted statistic that corrects for p_e and also defines “agreement” appropriately for different levels of measurement.

      I think you are missing the basic characterization here ...

      1 indicates perfect agreement, 0 indicates no agreement beyond chance, and negative values indicate systematic disagreement.
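
      For completeness, the standard definition behind that characterization, where D_o is the observed disagreement and D_e the disagreement expected by chance:

      ```latex
      \alpha = 1 - \frac{D_o}{D_e}
      ```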

    5. Krippendorff's alpha statistics for our quantitative measures. N = 21 papers, 39 evaluations.

      Do we have an interpretation of this? Are these high, low, reasonable?

      (By the way, if you wanted to integrate this into a permanent dashboard, you might not want to add a narrative about the actual values, but you could still add a general discussion of 'what is considered a high alpha'.)

    6. There is a single paper with three evaluations; adding this in would give us many missing values in the “third evaluation” column, and we’d have to use more advanced techniques to deal with these.

      We should find some way to integrate this in. There's so little data that it's a shame to drop these.