80 Matching Annotations
  1. Jun 2018
    1. the data model, protocol and vocabulary for annotations have been agreed to by a diverse group of stakeholders

      the most important question not covered in the post: is hypothes.is one of the stakeholders who have agreed to adopt the said spec?



    1. Appendix C. Change History

      needs update

    2. JimConallen


    3. ChrisArmstrong


    4. in both these

      in both of these?

    5. In oslc:Dialog elements, the two optional child elements; oslc:hintWidth and oslc:hintHeight specify the suggested size of the dialog or frame to render the HTML content in. Expected for the size values are defined by CSS length units.

      why is this paragraph here?

    6. simple query syntax

      what is a simple query syntax? cf. my oslc.where note.

    7. oslc.where

      there are optional features in oslc.where, such as wildcards. AM should specify whether they are a MUST too for AM SPs.
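A rough sketch of what an oslc.where clause looks like on the wire, to make the optional-feature question concrete. The property names and the wildcard value below are hypothetical, not taken from the spec:

```python
from urllib.parse import urlencode

def oslc_where(*conditions):
    """Join simple oslc.where terms with the boolean 'and' operator."""
    return {"oslc.where": " and ".join(conditions)}

# Hypothetical filter; whether a server MUST honour the "*" wildcard
# below is exactly the optional-feature question raised in the note.
params = oslc_where('dcterms:title="Engine*"', 'oslc:shortTitle="R-1"')
query_string = urlencode(params)
```

If AM made wildcards a MUST, a client could rely on `Engine*` matching by prefix on every AM Service Provider; today it cannot.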

    8. oslc:usage property MUST NOT be set

      Such formulation allows the implementer to omit it altogether.

    9. will

      that does not sound like normative language

    10. [!OSLCCore3]]


    11. RM Clients

      AM clients.

  2. Mar 2018
    1. Product Lifecycle Management (PLM)

      shall these become informative references?

    2. Domain

      When I talk to people, I am starting to define an OSLC domain as a constrained RDF vocabulary PLUS OSLC shapes. Is there anything wrong with such a definition?

    3. See Server

      this is a cryptic reference to the Terminology section of the respective standards and may not be understood by regular folks who will dare to read the spec

    4. defined by at least on OSLC-based specification

      unclear language

    5. MAY provide

      WARN! a normative ref in a non-normative section

    6. Fig. 1 OSLC Core 3.0 Architecture

      fix the squiggly line

    1. querying resources via HTTP GET or POST

      what are the cases for querying via POST?

    2. Resource creation is done by sending an HTTP POST to a URI that supports resource creation, providing the resource content in the entity request body. Clients can discover resources that support resource creation either through the http://open-services.net/ns/core#creationFactory property of a Service in a ServiceProvider resource, or by using an OPTIONS request on an LDPC to determine if it accepts the POST method.

      Doesn't this conflict with the LDP philosophy? I thought LDP promotes POSTing to the same container that will contain the resource; i.e. a ServiceProvider shall accept POSTs if the POSTed resources will belong to it.
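The second discovery path in the quoted paragraph (OPTIONS on an LDPC) can be sketched as a one-line check; the Allow header value here is a made-up example, not from the spec:

```python
def accepts_creation(allow_header):
    """Per the quoted paragraph, an OPTIONS response whose Allow header
    lists POST marks a container that accepts resource creation."""
    methods = {m.strip().upper() for m in allow_header.split(",")}
    return "POST" in methods
```

Under the LDP reading in my comment, this check on the container itself would be the primary mechanism, and the creationFactory property merely an alternative.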

    3. ServiceProvider resource members include Service LDPCs.

      Vague wording; we need to explain which containers the managed resources can belong to.

    4. 4.1.2 Servers SHOULD minimize the use of HTTP response headers on various HTTP operations as to avoid unnecessary additional response content for clients to consume. This is also to avoid the complexity on server implementations that would be needed to provide such additional content.

      why is it an H4 heading?

    1. specific means

      <del>where is the reference to the OSLC 3.0 discovery mentioned at the top of the page?</del>

      strike that, I see it below. You should not say "beyond the scope" if you then suggest the method, which is within scope (not of precisely this spec, but of the complete set of new OSLC specs).

    2. may

      is it an informative MAY?

    3. RM Servers

      We need to say explicitly that they were merely renamed from 2.0 and that there is no big difference between an RM Service Provider 2.0 and an RM Server 2.1.

    4. 409 Conflict

      bloody compatibility... Anyway, the code 409 CONFLICT is for use when "The request could not be completed due to a conflict with the current state of the target resource. This code is used in situations where the user might be able to resolve the conflict and resubmit the request." https://httpstatuses.com/409

      I think https://httpstatuses.com/400 is more suited here
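To make the 400-vs-409 argument concrete, here is an illustrative decision rule (the condition names are invented, not spec terms): 409 suits failures caused by the resource's *current state*, which the client could resolve and resubmit, while 400 suits requests that are malformed regardless of server state.

```python
def status_for(error):
    """Illustrative only: state-dependent conflicts -> 409, everything
    malformed-by-construction -> 400."""
    state_conflicts = {"stale-etag", "duplicate-identifier"}
    return 409 if error in state_conflicts else 400
```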

    5. A client MAY request

      Side question: does the RM spec specify what a client MAY/SHOULD do, etc.? I thought this spec only defines the behaviour of the RM server, and the clients read the spec.

      In other words, I would rewrite this as "A server MAY support updating a subset of a resource's properties by ... PATCH".

    6. OSLC RM servers SHOULD support pagination of query results and MAY support pagination of a single resource's properties as defined by [OSLCCore3].

      Backwards compat, yes, yes, I know. But how much sense does it make to have §2.8.1 as a MUST while keeping query results pagination a SHOULD? Vice versa would make more sense to me.
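For reference, requesting paginated query results uses the oslc.paging and oslc.pageSize parameters from OSLC Core 2.0; their exact use in this sketch is illustrative, not normative:

```python
from urllib.parse import urlencode

def paged_query_url(base, page_size):
    """Build a query URL that opts in to paging with a given page size
    (oslc.paging / oslc.pageSize per OSLC Core 2.0; usage is a sketch)."""
    return base + "?" + urlencode({"oslc.paging": "true",
                                   "oslc.pageSize": page_size})
```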

    7. OSLC RM servers SHOULD support [OpenIDConnect]

      OpenID is for people; the current W3C direction is to recommend WebID-TLS, which works well both for humans and machines (and does a few more things correctly). I would recommend replacing it with "may support WebID-TLS [xxx] and/or OpenID [xxx]".

    8. RM Servers MUST support RDF/XML representations with media-type application/rdf+xml. RM Clients MUST be prepared to deal with any valid RDF/XML document. RM Servers MUST support XML representations with media-type application/xml. The XML representations MUST follow the guidelines outlined in the OSLC Core Representations Guidance to maintain compatibility with [OSLCCore2]. RM Servers MAY support JSON representations with media-type application/json. The JSON representations MUST follow the guidelines outlined in the OSLC Core Representations Guidance to maintain compatibility with [OSLCCore2].

      see my comments to the GET below, I accidentally provided them for OSLC Query while thinking it's for the resources PLUS my comments in the table above.
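The MUST/MAY media-type matrix in the quoted paragraph boils down to content negotiation. A naive sketch of the server side, ignoring q-values (a real implementation must honour them):

```python
def negotiate(accept, supported):
    """Return the first client-listed media type the server supports;
    '*/*' falls back to the server's preferred type. Simplified: real
    Accept handling weighs q-values and type ranges."""
    wanted = [part.split(";")[0].strip() for part in accept.split(",")]
    for media_type in wanted:
        if media_type in supported:
            return media_type
    return supported[0] if "*/*" in wanted else None
```

With RDF/XML and XML as MUSTs, `supported` always contains at least those two entries; JSON only appears when the server opts in to the MAY.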

    9. RM Servers SHOULD provide an [X]HTML representation and a user interface (UI) preview as defined by UI Preview Guidance

      contradicts the MAY in the table above

    10. OSLC Servers MAY refuse to accept RDF/XML

      SHOULD, because RDF/XML MUST have rdf:RDF as its root: https://www.w3.org/TR/rdf-syntax-grammar/#doc

    11. The OSLC Core describes an example, non-normative algorithm for generating RDF/XML


    12. RM Servers MUST support XML representations with media-type application/xml.

      see my comment above on the signal MUST sends to the implementors.

    13. RM Servers MAY support JSON representations with media-type application/json.

      see my comments above on JSON-LD

    14. http://open-services.net/ns/rm#
      1. We should support more than one format with content neg
      2. The ontology should have more properties filled (see the tab on the right in http://www.visualdataweb.de/webvowl/#iri=http://open-services.net/ns/rm# to see what's missing).
      3. Is anything in the file supposed to change across versions? If yes, I suggest we create http://open-services.net/ns/rm/v2.1# and do a 303 from http://open-services.net/ns/rm# to it in the final spec.
    15. oslc_rm

      should be added to https://prefix.cc/, I created https://github.com/cygri/prefix.cc/issues/24 for that

    16. Core defined error formats

      normative reference?

      Core formats, what is that?

    17. resources that may be referenced by other resources

      that's too vague. any resource with a URI can be referenced from somewhere else, no?

    18. OAuth

      OAuth 1 or OAuth 2?

    19. MUST conform to the OSLC Core Guidelines for JSON
      1. normative reference
      2. I think it should be MUST support JSON-LD and
      3. SHOULD support deprecated JSON (https://www.w3.org/TR/rdf-json/)
    20. MUST

      I think this should become SHOULD now that RDF/XML support is widespread (plus other RDF serialisations) and a non-standard XML should not get such an endorsement as a MUST in the newest spec.

    21. OSLC servers MUST support RDF/XML representations for OSLC Defined Resources

      here we should say it SHOULD support any RDF representation, as the new OSLC 3.0 draft does

    22. XML

      non-RDF application/xml, application/rdf+xml or RDF/XML-ABBREV? Which one?

    23. OSLC Core Guidelines for XML

      Normative reference?

    24. integration use cases documented by the OSLC-RM workgroup

      normative reference?

    25. Requirements Management

      I think Requirements Management as a discipline includes more things. How about The RM specification...

    26. The specification is modified to build on the [OSLCCore3] Specification.

      compatibility: "is modified"... What is that supposed to mean? Is a 2.1 RM provider a fully compliant 2.0 RM provider? And vice versa?

    27. ??

      what are the substantial changes in 2.1-WD01 compared to 2.0?

    28. Working Draft 01

      is it possible to tag the original commit in Git with rm2.1-wd01 or something similar, please?

    29. RESTful web services interface

      Is the domain a RESTful interface?

    1. The syntax defined in this document should not be used unless there is a specific reason to do so. Use of JSON-LD is recommended.

      i.e. the spec is deprecated in favour of JSON-LD

  3. Aug 2017
    1. What we really want in many cases is SQL-like primitives built into our languages, but this is often challenging (.NET's LINQ is one example of a successful approach).

      I would certainly want something like that for SPARQL
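What LINQ-style primitives for SPARQL might feel like, as a minimal sketch; the class and method names are invented for illustration, not from any existing library:

```python
class SparqlSelect:
    """Composable, LINQ-flavoured builder producing a SPARQL string."""
    def __init__(self):
        self._vars, self._patterns, self._limit = [], [], None

    def select(self, *variables):
        self._vars.extend(variables)
        return self

    def where(self, s, p, o):
        self._patterns.append(f"{s} {p} {o} .")
        return self

    def limit(self, n):
        self._limit = n
        return self

    def build(self):
        query = (f"SELECT {' '.join(self._vars)} "
                 f"WHERE {{ {' '.join(self._patterns)} }}")
        return query + (f" LIMIT {self._limit}" if self._limit else "")
```

Usage: `SparqlSelect().select("?req").where("?req", "a", "oslc_rm:Requirement").limit(10).build()`. The real win LINQ gets, and this sketch does not, is compile-time checking of the query against the data model.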

    1. the following code is about as close as you can get in Java:

       public interface Inttoint {
         public int call(int i);
       }

       public static Inttoint foo(final int n) {
         return new Inttoint() {
           int s = n;
           public int call(int i) {
             s = s + i;
             return s;
           }
         };
       }

      Maybe a bit shorter with the modern lambdas, but they still use interfaces underneath.
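For comparison, the same accumulator in Python 3 is a direct closure translation; `nonlocal` lets the inner function mutate the captured variable, which a Java lambda (with its effectively-final captures) still cannot do without a wrapper object:

```python
def foo(n):
    """Paul Graham's accumulator: each call adds i to a running total."""
    def call(i):
        nonlocal n
        n += i
        return n
    return call
```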

    2. (defun foo (n) (lambda (i) (incf n i)))


    3. This is the kind of possibility that the pointy-haired boss doesn't even want to think about. And so most of them don't. Because, you know, when it comes down to it, the pointy-haired boss doesn't mind if his company gets their ass kicked, so long as no one can prove it's his fault. The safest plan for him personally is to stick close to the center of the herd.Within large organizations, the phrase used to describe this approach is "industry best practice." Its purpose is to shield the pointy-haired boss from responsibility: if he chooses something that is "industry best practice," and the company loses, he can't be blamed. He didn't choose, the industry did.
    4. What he's thinking is something like this. Java is a standard. I know it must be, because I read about it in the press all the time. Since it is a standard, I won't get in trouble for using it. And that also means there will always be lots of Java programmers, so if the programmers working for me now quit, as programmers working for me mysteriously always do, I can easily replace them.


    5. we all know that software is best developed by teams of less than ten people. And you shouldn't have trouble hiring hackers on that scale for any language anyone has ever heard of. If you can't find ten Lisp hackers, then your company is probably based in the wrong city for developing software.

      here the issue might be that we are offering an SDK and people are already afraid of linked data; adding a new programming language would do little to help.

    6. Instead of simply writing your application in the base language, you build on top of the base language a language for writing programs like yours, then write your program in it. The combined code can be much shorter than if you had written your whole program in the base language

      Which is exactly what Leo did with OSLC: he defined OSLC in Turtle, then defined basic rules for handling some predicates and then built a whole OSLC server by applying those rules to the OSLC definition on-the-fly.
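A toy sketch of that "define the domain declaratively, then interpret it" approach: the dict below stands in for an OSLC shape definition in Turtle, and one generic rule validates any resource against it (all names are invented for illustration, not Leo's actual code):

```python
# Stand-in for a declarative OSLC shape, as one would load from Turtle.
SHAPE = {
    "oslc_rm:Requirement": {
        "dcterms:title": "required",
        "dcterms:description": "optional",
    },
}

def validate(resource_type, resource):
    """Generic rule applied to the declarative definition: report the
    required predicates missing from a resource."""
    return [p for p, kind in SHAPE[resource_type].items()
            if kind == "required" and p not in resource]
```

The server logic stays constant; extending the domain means editing the declarative definition, not the code, which is exactly the leverage the quoted passage describes.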

    7. As for libraries, their importance also depends on the application. For less demanding problems, the availability of libraries can outweigh the intrinsic power of the language. Where is the breakeven point? Hard to say exactly, but wherever it is, it is short of anything you'd be likely to call an application.

      for us, it is mainly a question of the RDF library availability and some basic REST support.

    8. This is why we even hear about new languages like Perl and Python. We're not hearing about these languages because people are using them to write Windows apps, but because people are using them on servers. And as software shifts off the desktop and onto servers (a future even Microsoft seems resigned to), there will be less and less pressure to use middle-of-the-road technologies.


    9. if you control the whole system and have the source code of all the parts, as ITA presumably does, you can use whatever languages you want. If any incompatibility arises, you can fix it yourself.

      We run in the cloud, and in the age of Docker we can control the environment fully. The problem arises in a few narrow cases where we need Windows, but people have systematically misused that: I once read how some people ran a Jetty server hosting an OSLC adaptor inside an Eclipse IDE (my brain goes puff every time I recall this).

    10. I can think of three problems that could arise from using less common languages. Your programs might not work well with programs written in other languages. You might have fewer libraries at your disposal. And you might have trouble hiring programmers.


    11. the biggest win for languages like Lisp is at the other end of the spectrum, where you need to write sophisticated programs to solve hard problems in the face of fierce competition. A good example is the airline fare search program that ITA Software licenses to Orbitz. These guys entered a market already dominated by two big, entrenched competitors, Travelocity and Expedia, and seem to have just humiliated them technologically.


    12. I would learn more about macros.


    13. Ideas 8 and 9 together mean that you can write programs that write programs. That may sound like a bizarre idea, but it's an everyday thing in Lisp. The most common way to do it is with something called a macro.


    14. A notation for code using trees of symbols and constants. The whole language there all the time. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.
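Python offers a faint echo of "the whole language there all the time": source text can be read into a tree, transformed, compiled, and executed at runtime (though unlike Lisp, the tree is not the surface syntax itself). A small demonstration:

```python
import ast

# Read: parse source into a tree of expression nodes.
tree = ast.parse("1 + 2 * 3", mode="eval")

# Transform: wrap the parsed expression so the whole thing is doubled.
doubled = ast.Expression(ast.BinOp(tree.body, ast.Mult(), ast.Constant(2)))

# Compile and run the transformed tree at runtime.
code = compile(ast.fix_missing_locations(doubled), "<ast>", "eval")
result = eval(code)  # (1 + 2 * 3) * 2
```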


    15. Programs composed of expressions. Lisp programs are trees of expressions, each of which returns a value. This is in contrast to Fortran and most succeeding languages, which distinguish between expressions and statements.


    16. Dynamic typing. In Lisp, all variables are effectively pointers. Values are what have types, not variables, and assigning or binding variables means copying pointers, not what they point to.


    17. A function type. In Lisp, functions are a data type just like integers or strings. They have a literal representation, can be stored in variables, can be passed as arguments, and so on.


    18. So the short explanation of why this 1950s language is not obsolete is that it was not technology but math, and math doesn't get stale. The right thing to compare Lisp to is not 1950s hardware, but, say, the Quicksort algorithm, which was discovered in 1960 and is still the fastest general-purpose sort.


    19. Another way to show that Lisp was neater than Turing machines was to write a universal Lisp function and show that it is briefer and more comprehensible than the description of a universal Turing machine. This was the Lisp function eval


    20. If you look at these languages in order, Java, Perl, Python, you notice an interesting pattern. At least, you notice this pattern if you are a Lisp hacker. Each one is progressively more like Lisp.


    21. But if languages vary, he suddenly has to solve two simultaneous equations, trying to find an optimal balance between two things he knows nothing about: the relative suitability of the twenty or so leading languages for the problem he needs to solve, and the odds of finding programmers, libraries, etc. for each. If that's what's on the other side of the door, it is no surprise that the pointy-haired boss doesn't want to open it.


    22. But all languages are not equivalent, and I think I can prove this to you without even getting into the differences between them. If you asked the pointy-haired boss in 1992 what language software should be written in, he would have answered with as little hesitation as he does today. Software should be written in C++. But if languages are all equivalent, why should the pointy-haired boss's opinion ever change? In fact, why should the developers of Java have even bothered to create a new language?Presumably, if you create a new language, it's because you think it's better in some way than what people already had. And in fact, Gosling makes it clear in the first Java white paper that Java was designed to fix some problems with C++. So there you have it: languages are not all equivalent. If you follow the trail through the pointy-haired boss's brain to Java and then back through Java's history to its origins, you end up holding an idea that contradicts the assumption you started with.


    23. But it's all based on one unspoken assumption, and that assumption turns out to be false. The pointy-haired boss believes that all programming languages are pretty much equivalent. If that were true, he would be right on target. If languages are all equivalent, sure, use whatever language everyone else is using.


    24. The pointy-haired boss miraculously combines two qualities that are common by themselves, but rarely seen together: (a) he knows nothing whatsoever about technology, and (b) he has very strong opinions about it.


  4. Jul 2017
    1. Discovery will always have to start with at least one discovery resource URI to bootstrap discovery

      One such URI might be a SPARQL endpoint for the adaptor, as we may assume triplestores will see more use in future OSLC applications.
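A sketch of bootstrapping discovery from such a SPARQL endpoint: instead of starting from a fixed catalog URI, ask the triplestore for every ServiceProvider it knows about (the endpoint URL is a placeholder; the oslc: prefix is the Core vocabulary namespace):

```python
from urllib.parse import urlencode

DISCOVERY_QUERY = """
PREFIX oslc: <http://open-services.net/ns/core#>
SELECT ?provider WHERE { ?provider a oslc:ServiceProvider . }
"""

def discovery_url(endpoint):
    """Build a GET-style SPARQL protocol request against the endpoint."""
    return endpoint + "?" + urlencode({"query": DISCOVERY_QUERY})
```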