68 Matching Annotations
  1. Sep 2020
    1. Introduction

      Just food for thought: wouldn't it be better style to use a neutral form? I.e., "Because the user controls" instead of "Because we control".

    2. This specification does not require any particular technology or cryptography to underpin the generation, persistence, resolution or interpretation of DIDs.

      I am not sure this is well formulated. The specification may not require them, but implementations do require a bunch of particular technologies. I think the intention here is to say something like "This specification does not depend on any particular technology..."

    3. A DID document might contain the DID subject itself (e.g. a data model).

      I do not understand this statement. The DID subject is defined as:

      The entity identified by a DID and described by a DID document. A DID has exactly one DID subject. Anything can be a DID subject: person, group, organization, physical thing, digital thing, logical thing, etc. The document cannot contain a person…

    4. DIDs are URLs

      Strictly speaking, they are not. They are URIs, and there is a separate thing called a DID URL…

      This is only an abstract, but it should still be precise…

  2. Feb 2020
    1. The value of id MUST be a single valid DID.

      DID or DID URL?

      If DID URL is used, then the subject and the resource will be different. And that I find problematic.

    2. Implementers are strongly discouraged from using a DID fragment for anything other than a method-independent reference into the DID document to identify a component of a DID document (for example, a unique public key description or service endpoint).

      This should be SHOULD NOT or maybe MUST NOT (my preference).

  3. Aug 2019
    1. opy each additional property of

      Well, not exactly. If, e.g., the item's property is 'name' but the value is a simple string, then it should be turned into a LocalizableString with that value.
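
      A minimal sketch of the coercion I have in mind; the helper names, the { value, lang } shape, and the set of localizable properties are my assumptions for illustration, not the spec's normative algorithm:

      ```javascript
      // Sketch only: coerce a plain string into a LocalizableString-like
      // object. The { value, lang } shape and the property list below are
      // assumptions, not the spec's exact processing rules.
      function toLocalizable(value, lang) {
        if (typeof value === 'string') {
          return { value: value, lang: lang || undefined };
        }
        return value; // already an object carrying language information
      }

      function copyProperty(target, item, name, lang) {
        // Assumed set of localizable properties for this sketch:
        const localizable = ['name', 'description'];
        target[name] = localizable.includes(name)
          ? toLocalizable(item[name], lang)
          : item[name];
      }
      ```

      So a plain `'name': 'Moby-Dick'` would be copied as an object with a value and a language tag, while non-localizable properties are copied verbatim.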

    2. object that represents the processed manifest.

      We should probably refer to the WebIDL here

    3. does not contain the value

      At the moment we say it must be schema.org and the pub-context, in this order. I guess this is what we ought to say here as well.
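
      A minimal check of that ordering could look like the sketch below; the two context URLs are my assumption of what the spec mandates, and `hasRequiredContext` is an invented name:

      ```javascript
      // Sketch: verify that a manifest's @context starts with the
      // schema.org context followed by the publication context, in this
      // order. The two URLs are assumptions based on my reading of the draft.
      const REQUIRED = ['https://schema.org', 'https://www.w3.org/ns/pub-context'];

      function hasRequiredContext(manifest) {
        const ctx = manifest['@context'];
        return Array.isArray(ctx) &&
          ctx.length >= 2 &&
          ctx[0] === REQUIRED[0] &&
          ctx[1] === REQUIRED[1];
      }
      ```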

    1. JSON-LD

      This means we expect DID implementations to include a full-blown JSON-LD 1.1 implementation.

      I just wonder whether this should be elaborated upon. For example, processing should be normatively defined on the flattened form, or something like that.

    2. ollowing URL: did:example:contexts:987654321.

      What does that mean? A DID can refer to a DID document, which is a JSON-LD with some goodies, but is it also usable to refer to any type of document on the Web?

    3. Extensibility

      What is normative and what is not in this section? The reference to non-standard documents (proof, signatures) is a source of a problem

    4. MUST be a valid JSON-LD proof as defined by Linked Data Proofs

      Linked Data Proofs is a CG document, not a standard. I.e., MUST cannot be used in this context...

    5. methods for implementing Authorization and Delegation

      shouldn't there be a standard "authorization" and/or "delegation" term defined for the DID document to separate them from, say, "authentication"?

    6. community

      Which community?

    7. in the Linked Data Cryptographic Suite Registry

      The status of this registry is unclear; the reference is a CG note. For this requirement to be normative we will have to settle the status of the registry, too.

      (Setting up and maintaining registries is a point of discussion in the W3C team, relevant for this case, too.)

    8. https://www.w3.org/2019/did/v1

      At this moment the context does not exist (which is of course o.k.), which also means that the final URL for it is pending. (Personally, I would prefer, for consistency with other specs, to use https://www.w3.org/ns/did)

    9. did:example:123456/path

      Note that this example is valid if an empty method-specific-id is allowed (issue 198)

      (Same note is valid for a number of examples later, too.)

    10. A resource hash of the DID

      This depends on the hash link draft (which, in turn, depends on a separate IETF draft), neither of which is (yet?) a standard. This means that the "hl" parameter name cannot be normative.

  4. May 2019
    1. da*R) = k^-1 (z + da*R – z’ -dA*R)

      hm. da = dA, right?
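
      Assuming this is the standard nonce-reuse derivation (my assumption, and reading the R in the excerpt as the shared value r), the cancellation only works if both symbols denote the same private key d_A:

      ```latex
      % Two signatures (r, s) and (r, s') over hashes z and z', made with
      % the same nonce k and the same private key d_A:
      s - s' = k^{-1}(z + d_A r) - k^{-1}(z' + d_A r)
             = k^{-1}\,(z + d_A r - z' - d_A r)
             = k^{-1}(z - z') \pmod{n},
      \quad\text{hence}\quad k = \frac{z - z'}{s - s'} \pmod{n}.
      ```

      So yes: with da = dA the two terms cancel as intended; with distinct keys they would not.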

    2. 160

      It is probably more than 160 bits for Ethereum... (but I am not sure)

    3. The Hash

      For Ethereum: a variant of SHA-3 (Keccak) is used.

  5. Jan 2019
    1. the main bottleneck

      I agree it is a bottleneck, but not that it is the main one. Formalizing one's thoughts in something like RDF is simply way too hard, even for people in the formal world. We just cannot expect this to happen...

    1. we state that our paper critiques the concept of semantic publishing, but we do not say why and in what way, namely that we claim its interpretation to be not intuitive and not visionary. We are not aware of any ontology that would allow us to express this, and we restricted ourselves for this demonstration to existing resources. More work will be needed on establishing such ontologies and best practices to facilitate more precise and more inclusive formal models of scientific findings and arguments, but the currently existing vocabularies already allow – at least in our case – to achieve a basic level of genuine semantic publishing.

      Would it be possible to have ontologies that cover every aspect of human life that may be subject of science? That looks like a (very) tall order...

    2. Such a formal statement can be taken out of its context and stripped from natural language explanations attached to it, and it still means exactly the same thing, as far as the formal semantics are concerned.

      I wonder whether this is true... maybe true in exact sciences, but less convincing in "soft" sciences (history, sociology, psychology, but even in some areas of medical sciences...)

  6. Sep 2018
    1. digital signatures

      There is work on canonical RDF (and, related to that, signatures), partially deployed but badly documented. We do not need a workshop on this; rather, we should get the 2-3 existing approaches together to see if they can be reconciled. There is a meeting planned at TPAC on this; bringing it to a separate workshop would just slow things down.

    2. Scalability of vocabularies across communities

      NoSQL people often frown upon vocabularies. Have these as a separate workshop???

    3. eyond deduction: incomplete, uncertain and inconsistent knowledge, AI and Machine Learning

      This is even worse than before: it is a HUGE area, enough for its own workshop. Temporal reasoning, fuzzy logic, or logic with uncertainty: there were workshops as well as incubator groups before, and they did not lead to tangible plans. Maybe it is time to look at this again, but, again, in a separate workshop!

    4. Enterprise-wide knowledge graphs

      Do we have Google & Microsoft on board? Without them this is not really meaningful for us. Also, the complexity here is way beyond what this workshop can handle

    5. Graph Databases and Link Annotations

      This section contains enough material for a full workshop!!!

    6. Link Annotations

      Can we use a different term? With a separate Web Annotation spec plus the term "linked data", it is very misleading...

  7. May 2018
    1. It is worth noting that blank node identifiers may be relabeled during processing. If a developer finds that they refer to the blank node more than once,

      How can I express several named graphs with a shared context? Like TriG?
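
      For comparison, a TriG-like dataset can be written in JSON-LD as a top-level object whose @graph members each carry their own @id and a nested @graph, all under one shared context. A sketch (all identifiers below are invented):

      ```javascript
      // Sketch: two named graphs sharing one top-level @context,
      // analogous to a TriG dataset. All IRIs are invented examples.
      const dataset = {
        '@context': { name: 'http://schema.org/name' },
        '@graph': [
          {
            '@id': 'http://example.org/graphs/g1',
            '@graph': [{ '@id': 'http://example.org/alice', name: 'Alice' }]
          },
          {
            '@id': 'http://example.org/graphs/g2',
            '@graph': [{ '@id': 'http://example.org/alice', name: 'Alice B.' }]
          }
        ]
      };
      ```

      Whether blank node identifiers used as graph names survive such a round-trip is exactly the relabeling concern raised in the quoted text.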

  8. Oct 2016
    1. Informative references

      We may want to remove this as a formal reference (for now at least): the PWP document is so utterly out of sync...

    2. Boris Anthony (Rebus Foundation), Luc Audrain (Hachette Livre), Nick Barreto (Invited Expert), Baldur Bjarnason (Rebus Foundation), Timothy Cole (University of Illinois at Urbana-Champaign), Garth Conboy (Google), Dave Cramer (Hachette Livre), Romain Deltour (DAISY Consortium), Brady Duga (Google), Heather Flanagan (Invited Expert), Markus Gylling (IDPF), Ivan Herman (W3C), Deborah Kaplan (Invited Expert), Bill Kasdorf (BISG)

      Add to the list: Marcos, Mike Smith, and Hadrien...

    3. Within their publication, in addition to the print rendition, a publisher may include a fully narrated rendition, or a video with described audio and captioning. This requirement is the same as alternative modalities BigBox Publisher needs to add content such as a braille style sheet, image descriptions, or video captioning (text/descriptive audio) to a PWP.

      Let us not duplicate requirements...

    4. Anita is a school student who knows only uncontracted Hindi Braille

      Can we put a more Indian sounding name here?

    5. Req. 28: There should be a way to indicate whether one or more PWP components contain descriptive metadata. An archiving service needs a reliable way to determine which, if any, PWP components contain descriptive metadata, such as that described in metadata and resources. Without such a mechanism, the archiving service will have to develop and maintain publisher- and/or platform-specific heuristics for locating or parsing out descriptive metadata, making archiving more expensive and decreasing the reliability of reporting.

      If we add a reference to ONIX, as proposed elsewhere, I wonder whether this is a new requirement or whether it could be subsumed by an existing one

    6. A copyright dispute results in the takedown of a published book. An archiving service regularly polls for changes to this book, which it has already archived, and discovers that it has been taken down. It records that the resources that constitute the object are no longer accessible and propagates this update to a preservation repository.

      Isn't this expressed by the previous two requirements, in the technical sense at least?

    7. Req. 26: There should be a way to discover that a new version of one or more PWP components have been published. An archiving service needs a reliable way to learn that a new version of one or more PWP components have been published at the same locations as previously published. This requirement is the same as the constituent resources requirement

      Same comment as before...

    8. Req. 25: The locations of all PWP components should be discoverable. An archiving service needs a reliable way to learn where all of the components that constitute a PWP are located in order to be able to archive it. This requirement is the same as the constituent resources requirement.

      I agree that this requirement should be emphasized, but I would not use it as a formal requirement (as Req. 25). Those requirements are used in references, are listed in the table at the end of the document, and can be used by external documents: we should avoid overlaps.

      Alternatively, we should move this as an explicit use case into the relevant section

    9. Metadata and Resources

      We may want to add a use case to ensure that, say, publishers have the possibility to include, or to include a reference to, major industry-standard metadata in publications, e.g., ONIX data.

    10. Alternative Reading Orders

      Isn't this one subsumed by 5.1?

    11. Alice acquires a PWP through a subscription service and downloads it. When, later on, she decides to unsubscribe from the service, this PWP becomes unavailable to her. Bill acquires a PWP through a re

      I wonder whether these two are not, fundamentally, the same use case: a time range used to limit the access to a publication

    12. He expects to be able to receive the PWP as a file (rather than only having access to it)

      I am not 100% sure that is true. The second half of the sentence is, but whether that is accomplished through a single file or not is probably irrelevant for Ahmed (as long as it is easily done).

    13. in (technically) different

      What does "(technically)" stand for here?

    14. Data

      I think this requirement should move to the previous section. This is, fundamentally, a user requirement and not an implementation one

    15. The system needs to identify which components must be downloaded to a local user’s device to support offline reading The system needs to preload some document components in order to provide a more responsive reading experience. When creating a packaged publication it must be clear which documents should or should not be added to the distribution package The size of the components must be known in advance. The user agents needs to know if a publication contains/requires support for a specified media type (without processing the complete PWP).

      We may want to re-formulate these following the 'personal use case' style. Or can they be removed altogether?

    16. Collections

      Is this, in its current form, an essential requirement? Isn't it better to push it down somewhere? After all, there are also overlaps with 3.3 (Constituent Resources) which is already fairly elaborate

    17. by local customers.

      Why was the use case on device independence removed? The idea was to have an (albeit possibly simple and obvious) use case for each of the horizontal aspects listed in the explanation text.

    1. The service worker being pointed to is on a different origin to that of your app.

      Does it mean that I cannot store the SW somewhere on the Web, separately from my publication?

    1. var urlsToCache = [  '/',  '/styles/main.css',  '/script/main.js'];

      This, of course, can be dynamic, and can get the list from a manifest file (for example)
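
      A sketch of that idea: derive the list from a manifest-like object instead of hard-coding it. The manifest shape (a resources array with url members) and the function name are my assumptions for illustration:

      ```javascript
      // Sketch: build the service worker's cache list from a manifest-like
      // object instead of a hard-coded array. The manifest shape ('resources'
      // entries with a 'url' member) is an assumption for illustration.
      function urlsToCacheFrom(manifest) {
        const urls = (manifest.resources || []).map(function (r) { return r.url; });
        return ['/'].concat(urls); // always cache the start URL as well
      }

      // In a real service worker one would fetch the manifest first, e.g.:
      //   caches.open('v1').then(cache => cache.addAll(urlsToCacheFrom(manifest)));
      ```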

  9. Jun 2016
    1. The Web, in particular, has revolutionised the way we create, disseminate, explore and consume information, and its potentials are not fully exploited yet for scholarly communications.

      Absolutely. It is ironic that we can annotate a paper, yet we cannot do so for publications in general... Papers online are, most of the time, just a frozen, digital version of a printed paper, and none of the possibilities for, say, interacting with data, 3D visualization, etc., are used... And the list of examples would be very long.

    2. As a minimum requirement, the re-search process should be traceable, e.g. by providing access to raw data and docu-menting the research process as well as the (intermediate) results (discussions, research diaries, pre-publications etc.)

      See my comments above: all these should be recognized parts of a scientific career, part of one's assessment. Otherwise it won't happen.

    3. The pos-sibility to reuse data, materials and results enables researchers and communities to learn from each other and to speed up the production of new knowledge.

      True. But that requires a different view of what constitutes scholarly communication, and of what is accepted as such by those who assess researchers. Publishing scientific data, publishing scientific software, participating in the creation of patents or international standards, etc.: these should have a value equal to that of traditional publications. We are not yet there...

    4. uction of scholarly knowledge often happens in a closed system exclud-ing expertise and experiences of scholars outside academia

      I do not really believe that is a separate issue; it is the issue, referred to above, of the restricted, subscription-based access to scholarly content. On the other hand, scholarly communication is not the only reason why these types of cooperation are difficult; the issues go beyond the publication problems.

  10. May 2016
    1. "a Demokrata Párt vezetői mögött (...) Soros Györgyöt kell látnunk" [translation: "behind the leaders of the Democratic Party (...) we must see György Soros"]

      What utter nonsense... as if I were reading 1970s rhetoric about the evil capitalists...

  11. Dec 2015
    1. will continue to update, a list of others ongoing efforts to improve digital scholarship

      This did not really work out :-( The last activity on this goes back to 2011.

    2. Relevant papers and books are listed on the Force11 web

      Did we systematically do that? Can we? If such a listing is based on somebody collecting these data and adding them manually, it would soon become a bottleneck, and the listing would be lost when that person moves on to other areas (I have experienced this myself). Maybe a more distributed/social approach, involving the community as a whole, would be a better match.

    3. Coordinated standard and technology

      I would add IDPF to the list, due to their development of EPUB. In general, I believe that an explicit adoption of EPUB by the scholarly community should be promoted: it looks like an ideal way to provide offline versions of publications without the downsides of PDF.

    4. OA journals, including some that are regarded as comparable with the most highly regarded subscription access publications

      Just an idea: it is still a matter of discussion and, in some ways, of competition what kinds of services scholarly journals (or proceedings) on the Web should provide. There are some great examples (F1000, PLOS, PeerJ; Utopia, although bound to PDF), but there is probably a lot of wheel reinventing happening. Of course, competition is good, but documenting some sort of "minimal" set of services and possibilities may make the development of new Web journals easier. These are sometimes presented at our conferences, but we may need more.

      There was an idea of having our own "Web Publication"; maybe that could serve as a showcase, cooperating with various tool developers to turn it into one.

    5. digital research objects in standardized ways should be closely linked

      The data citation work is a major success. The software citation WG is, crudely speaking, a flop: nothing is happening in that group. I believe that WG should be stopped and/or new people should be found to move it forward, because we cannot afford to keep such a lame group. (The work itself would be more important than ever!)

      Similar "citation" issues may also appear in other areas (workflows, video or audio objects, living organisms like bacteria, etc). We should make a systematic overview on where such work would be needed and concentrate on those.

    6. formal semantic representation in OWL/RDF of the metadata describing these research objects

      I believe this goes back to the fundamental work to be done as part of the concept of ROs; it should not be taken on separately. Such work should be done with a clear proof of usage and of interest from other developers (there has been too much work concentrating on RDF/OWL without any tangible interest from possible users).

    7. Force11OA provides the gateway to new modes of scholarly communication, and is the cornerstone that must be promoted and extended if significant change to the scholarly publishing ecosystem is to take place

      I believe Force11 should be seen as the forum where discussions on business models happen. I am not sure how; the yearly conference is one place, but we should probably do more (we could have online discussions, webinars, a better presence at major events; I am just throwing things up in the air here). This is essential, in my view, to achieve any change.

    8. new communication modes will need to demonstrate tangible value to both producers and consumers.

      See my comments above...

    9. globally unique identifier (URL, DOI, HDL)

      I think our contribution here was noticeable, although the real merit goes to Crossref and similar organizations. We should be instrumental, though, in getting the word out. I still meet many people in the scholarly (or related) communities who do not appreciate the value of these identifiers (let alone identifiers like ORCID or similar ones).

    10. research object[De Roure and Goble, 2009, Bechhofer et al., 2010], a container for a number of related digital objects—for example a paper with associated datasets, workflows, software packages, etc., that are all the products of a research investigation and that together encapsulate some new understanding. Publishing of research objects is not necessarily publishing as we know it today, achieved by the same mechanisms as used for traditional scholarly articles. It consists of providing free and open access to the component parts of the research object, that may or may not have been individually reviewed by others either pre- or post-publication

      Research objects seem to be a very forward-looking concept. However, my feeling is that the concept does not have a real "home" to develop it further and to expose it to the scholarly community, academic institutions, funders, etc. There is a W3C Community Group, but it is fairly silent. I think Force11 may have to make a concerted effort to do something like that. (I say "something" because ROs may evolve significantly, changing their nature, and we have to embrace that if it happens.)

    1. different default values or interpretations

      different interpretations? What does that mean? Isn't that a major problem for interoperability?

  12. Sep 2015
    1. At least on my machine, I have to run the

      source .../virtualenvwrapper.sh

      before any such run. It is not a fixed script somewhere