44 Matching Annotations
  1. Jul 2025
    1. Beyond these controversies, what distant reading illustrates for our purposes is that "data", rather than being considered a format or a material, can be understood as a mode by which documents are apprehended by human and non-human users

      An interesting idea worth exploring further.

    2. Moreover, documents (of whatever nature) are there reduced to 1s and 0s, encoded in the same format and integrated into a single indexing engine, so that an indexing robot is unconcerned with their nature (which is merely a piece of metadata), or so that a digital humanities algorithm can be, as they say, "agnostic".

      Can we agree that this is a caricature? If so, the proponents of this view are never identified.

    3. an

      One should ask whether the platform offers one interface or several. My suggestion is that what constitutes the platform function is precisely relying on several interfaces addressing different categories of users; the platform's role is to intermediate between these categories of users by articulating the different interfaces with one another.

    4. let us give ourselves a definitio

      It is difficult to invent a definition of "platform" out of thin air while ignoring an entire literature that strives to define the concept.

    5. this is notably the case of OpenEdition, which hosts and disseminates journal

      The sentence is ambiguous: "this is the case of" seems to refer back to the preceding "digital library" and thus to qualify OpenEdition as a digital library, whereas the opposite is what is meant (if I have followed correctly).

    6. of engineers

      Same as my previous remark. Moreover, nothing is really made of this passing qualification: if we say a collective is "composed of engineers", what consequence do we draw from it?

    7. of engineers

      "Engineers" is a meta-category that says little without further specification, especially in France, where the term can refer to very different things (a graduate of an engineering school, or research-support staff such as an ingénieur d'études or ingénieur de recherche, for example).

  2. May 2021
    1. authors

      It is interesting to choose the author as a starting point, but it is quite paradoxical: in the long list of stakeholders involved in the publishing process, the author is currently the weakest. Moreover, I think this section drifts away from its author-centric claim, in particular in the last subsection (systemic changes), thereby equating "author" with "researcher"? In any case, my point is that it is a pity that this section is quite disconnected from what precedes and does not try to imagine what a multi-stakeholder strategy for collective action, based on Ostrom's analytical framework, could look like in order to make a change. A work that still has to be done, in my opinion.

    2. Finally, as administrators with power to change academic policies, senior or former academics could lobby to amend informal and formal rules, and work to transform the culture in their local universities, national institutions, or academic international association

      Individually, an administrator does not have the power to change the rules, in particular because they are immersed in a wider socio-political system than the institution alone.

    3. Systemic changes

      In my opinion, this section fails to deliver what it promises. What is described here is a collection of individual actions inside the policy-making process, but it is not collective action, which is the only means to change the system. See an example below.

    4. More recently, the FAIR principles ‘Findable, Accessible, Interoperable, Reusable’ extended this open access logic to research data.

      Not exactly: the FAIR principles do not imply open access.

    5. CLOCKSS

      It could be interesting to compare preservation systems: CLOCKSS (a not-for-profit organisation), LOCKSS (a distributed network), CINES (a national state-owned institution), etc., and how these differences affect the provision of the service.

    6. GoogleDocs

      Interesting! There should be a real study of Google Docs as the hidden scholarly collaboration infrastructure: what, how much, why, who? etc.

    7. not-for-profit, non-commercial organizations

      In the diamond study, we learned to expand the definition of diamond, in particular because the interactions between commercial/non-commercial and for-profit/non-profit are more intense than expected.

    8. The Rights Retention Strategy may now push authors towards for-profit companies providing immediate APC gold

      It is not clear how. The RRS is quite neutral regarding the subsequent business model of the journal in which the articles are published. So I would say that the RRS does not "push" authors towards gold APC but "is compatible with" it.

    9. lazy consensus

      Is there any difference with the "rough consensus" concept that was used with regard to the governance of internet infrastructure (at the IETF, if my memory does not fail me)?

  3. Jan 2019
    1. Manuscripts, the text of a scholarly argument, regardless of length. Proposals, the initial statement of argument(s), intended audience, engagement with other existing texts, and proposed structure of the work (more typical in monograph publishing). Datasets, the accumulation of evidence by scholars upon which an argument is based, and which can be shared with other scholars who seek to replicate an author’s argument or explore further pathways of inquiry (more typical in the natural and social sciences, but increasingly found in digital humanities fields).

      Incomplete list: sample chapter and previous work are missing.

    2. In proposing a delineation between closed and open forms of review, our organizing principle is the information available to the reader about the content of review. This means that if the identities of authors and reviewers are known to each other, but not to readers, we regard that as a form of closed review.
    1. We now call on such gatherings of stakeholders as the Association of University Presses, the American Council of Learned Societies, the Open Access Scholarly Publishers Association, the Association of College and Research Libraries, and the Library Publishing Forum to take up this conversation both within their own constituencies and in collaboration with each other.
    2. In conversation, participants speculated about creating a standard suite of metadata to describe the peer review process a publication has undergone. These metadata can become a standard part of the Crossref schema and thus a standard part of each article or book’s metadata
    3. It might be the case that an existing entity might see this initiative as a logical extension of its own mission within scholarly communication, seeking to provide the offices of a central registry and the function of oversight.
    4. That said, in both article and (to a greater extent) monograph publishing, there can be a variety of reviewers whose insights shape the final published result—scholarly peers, to be sure, but in addition professional staff editors, a series editor, or members of an editorial board.
    5. Datasets are increasingly viewed not merely as instrumentalities, but as scholarly objects in themselves; and questions touching on the experimental and/or research methodologies by which they were compiled, the validity of statistical significance argued on the basis of the data, and the availability to other scholars of data upon which an argument is based are taken in view by review processes.
    6. Indeed, a search of the term “peer review” in the Google books database (Figure 2) shows that the phrase only emerges in the late 1950s, and only becomes a widespread term of art in the last twenty years of the twentieth century
    7. the purpose of “peer review” had more to do with defending and maintaining the boundaries between members and non-members of the (selective, private) scholarly societies—and, by extension, protecting the reputation, and the exclusivity, of those organizations.
    8. That model (Figure 1) has as a central focus identifying what has been reviewed (a proposal? A complete manuscript? A dataset?); who has done the reviewing (a scholarly peer? An editor? A reader?); and how the work has been reviewed, on a spectrum of fully closed to completely open.
    9. While inherent to the meaning of “scholarly publication” is a process for evaluation—the peer review process—it is by no means clear that scholarly publishers have established a simple, meaningful, and uniform means of providing readers with information about how that process was brought to bear on the work in their hands
    10. We set out a proposed taxonomy of types of peer review, offering the idea of a first-order division between “closed” (or historically, but somewhat archaically, “blind”) and “open” forms of review, acknowledging that new forms of the latter sort are emerging as more responsive to the needs of an increasing number of fields
    11. In undertaking this work, we had in view the example of the system developed and implemented by the efforts of Creative Commons to make possible for authors, artists, and content-creators the communication of the rights they are willing to share with users of their works
    12. But one distinctive and identifying characteristic they hold in common is the practice of some form of review of a proposed publication by qualified expert referees as part of the decision process in committing to publishing—the practice of peer review.
  4. Apr 2018
    1. The two forms of dissemination of the research paper, namely through the journal website and through a repository, operate in parallel and thereby enable the research findings to reach a broader audience

      But it is not the same version.

  5. Mar 2016
    1. When Hubert Guillaud, editor-in-chief of InternetActu, raises the question of the usefulness of the digital humanities in a post on the blog he runs on digital publishing,

      Is the reference relevant, given that it comes from a popular web magazine?

  6. Jun 2015
    1. Editions are a particularly striking example, because they are not being considered scholarly accomplishments (since they don’t deliver an interpretation). They show the gap, in the conception of the research process, between analog and digital methods, even if anyone working with digital methods will, again, tell you that he or she does no more no less deliver an interpretation than his or her colleagues do with analog methods.

      You mean "digital editions"? Because print editions published by Honoré Champion are considered scholarly work, aren't they?

    2. This is different in the digital world. In the digital world, you are constantly self-aware of being constructing something, be it a corpus or a method (often both), and the process of reflecting these work steps makes it almost impossible to separate them from the interpretation, which then is not a final result, but a processual one. Which is why, whenever I give an example, I would need to explain the whole encoding system and the way the corpus is constituted and annotated in order for the hermeneutical value of that example to make sense. And I can’t just start explaining XML every time I have this conversation. Or should I?

      I wonder whether this "difference" is caused by technology itself or by changes in SSH research practices that began before, or for reasons other than, technology. Technology as a cause seems too obvious here in your argumentation.