6 Matching Annotations
  1. Nov 2025
    1. More important than any specific method on show is the recognition that linguistic analysis is a larger enterprise than any single scholar or team of researchers, and we appreciate that B&GN have used this exercise to highlight a number of ways that methodological robustness can be better executed in linguistic typology.

      What stood out to me is the reminder that good research depends on collaboration and transparency. The passage shows how improving methods benefits the whole field, not just one study or researcher.

    2. For research to be replicable, transparency in methods is required so others can follow those methods with new data. For research to be reproducible, transparency in both the method and the data is required, which widens the scope of open research. Therefore we find it useful to distinguish between reproduction of analysis using original data and replication of methods on new data.

      The distinction between “replication” and “reproduction” became much clearer here. It helped me understand that replication uses new data, while reproduction uses the same dataset, which is an important concept for evaluating research quality.

    3. Becker and Guzmán Naranjo’s (2025) (henceforth B&GN) exploration of new methods for analysing existing typological data is a great example of the importance of transparency in research. This includes transparency both in regard to the data analysed and in the methods of analysis. This open approach to data and methods requires open-mindedness on the part of the current researchers, those whose work is being built on, and the reviewers and editors who facilitate this research.

      This paragraph introduces the main idea of the article: transparency in data and methods. I learned that transparency is not only about sharing data but also about maintaining an open mindset that supports clear and honest research practices.

    1. As mentioned in Section 2, an important but largely ignored function of replication is the evaluation of the methodological robustness of the statistical methods. Roberts (2018) notes that “if the same core components cause the same result across a range of alternative models, then the results are robustly due to those core components.” We adapt this idea in the present study by evaluating how robust effects in the data are when using a different statistical approach for analysis. Crucially, we use the same dataset, i.e. sample and annotation, as in the original study. This leads to a controlled environment where we can test how much the results depend on the analysis alone, having eliminated variation across samples and annotation decisions. If the results of the previous studies can be replicated when using more advanced statistical techniques, we can be somewhat more confident about the effects found in the original studies. If our replications lead to different results, we should interpret the original results as less certain.

      The authors discuss the idea of methodological robustness conceptually but do not specify any quantitative measures or criteria for evaluating robustness across methods.

    2. Yet, replication has not played a very important role in language typology so far, with most of the discussion around replication concerned with different types of language samples and sampling methods (e.g. Dryer 1989; Haspelmath and Siegmund 2006; Maddieson 2006; Widmann and Bakker 2006).

      I think it might be useful to include a short rationale explaining why replication has been relatively neglected in typology.

    3. Replicability is also referred to as reproducibility in the literature; we regard the two terms as interchangeable and use “replicability” for consistency with the term “replication”.

      The equivalence drawn between “replicability” and “reproducibility”, alongside the earlier distinction between replication and reproduction, is helpful. A visual schematic (e.g., a conceptual diagram) could make these relationships clearer for readers unfamiliar with reproducibility terminology.