54 Matching Annotations
  1. Jun 2024
    1. https://web.archive.org/web/20240630131807/https://www.w3.org/TR/ethical-web-principles/

      Dated June 2024, a set of 'ethical' principles for the web. Curiously, it never mentions linking, not even in the context of the principle of enabling verification of info.

      Some of it makes a handy checklist to run against my own website / web activities, though.

  2. Jan 2024
    1. The W3C standard for EPUB ebooks. Nav [[How Standard Ebooks serves millions of requests per month with a 2GB VPS; or, a paean to the classic web - Alex Cabal]]

  3. Oct 2023
  4. Sep 2023
    1. From 2014 to 2017, we led the W3C Web Annotation Working Group, helping to develop the vocabulary, data model, protocol, and recommendation that resulted in the standard for web annotation.
  5. Apr 2023
  6. Feb 2023
  7. tantek.com
    Five years ago last Monday, the @W3C Social Web Working Group officially closed^1. Operating for less than four years, it standardized several foundations of the #fediverse & #IndieWeb: #Webmention #Micropub #ActivityStreams2 #ActivityPub. Each of these has numerous interoperable implementations which are in active use by anywhere from thousands to millions of users.

    Two additional specifications also had several implementations as of the time of their publication as W3C Recommendations (which you can find from their Implementation Reports linked near the top of each spec). However today they’re both fairly invisible "plumbing" (as most specs should be) or they haven’t picked up widespread use like the others: #LinkedDataNotifications (LDN) #WebSub

    To be fair, LDN was only one building block in what eventually became SoLiD^2, the basis of Tim Berners-Lee’s startup Inrupt. However, in the post Elon-acquisition of Twitter and subsequent Twexodus, as Anil Dash noted^3, “nobody ran to the ’web3’ platforms”, and nobody ran to SoLiD either.

    The other spec, WebSub, was roughly interoperably implemented as PubSubHubbub before it was brought to the Social Web Working Group. Yet despite that implementation experience, a more rigorous specification that fixed a lot of bugs, and a test suite^4, WebSub’s adoption hasn’t really noticeably grown since. Existing implementations & services are still functioning though. My own blog supports WebSub notifications for example, for anyone that wants to receive/read my posts in real time.

    One of the biggest challenges the Social Web Working Group faced was with so many approaches being brought to the group, which approach should we choose? As one of the co-chairs of the group, with the other co-chairs, and our staff contacts over time, we realized that if we as chairs & facilitators tried to pick any one approach, we would almost certainly alienate and lose more than half of the working group who had already built or were actively interested in developing other approaches.

    We (as chairs) decided to do something which very few standards groups do, and for that matter, have ever done successfully. From 15+ different approaches, or projects, or efforts that were brought^5 to the working group, we narrowed them down to about 2.5, which I can summarize as:

    1. #IndieWeb building blocks, many of which were already implemented, deployed, and showing rough interoperability across numerous independent websites
    2. ActivityStreams based approaches, which also demonstrated implementability, interoperability, and real user value as part of the OStatus suite, implemented in StatusNet, Identica, etc.
    2.5 "something with Linked Data (LD)" — expressed as a 0.5 because there wasn’t anything user-visible “social web” with LD working at the start of the Working Group, however there was a very passionate set of participants insisting that everything be done with RDF/LD, despite the fact that it was less of a proven social web approach than the other two.

    As chairs we figured out that if we were able to help facilitate the development of these 2.5 approaches in parallel, nearly everyone who was active in the Working Group would have something they would feel like they could direct their positive energy into, instead of spending time fighting or tearing down someone else’s approach. It was a very difficult social-technical balance to maintain, and we hit more than a few bumps along the way.
    However we also had many moments of alignment, where two (or all) of the various approaches found common problems, and either identical or at least compatible solutions. I saw many examples where the discoveries of one approach helped inform and improve another approach. Developing more than one approach in the same working group was not only possible, it actually worked.

    I also saw examples of different problems being solved by different approaches, and I found that aspect particularly fascinating and hopeful. Multiple approaches were able to choose & prioritize different subsets of social web use-cases and problems to solve from the larger space of decentralized social web challenges. By doing so, different approaches often explored and mapped out different areas of the larger social web space.

    I’m still a bit amazed we were able to complete all of those Recommendations in less than four years, and everyone who participated in the working group should be proud of that accomplishment, beyond any one specification they may have worked on.

    With hindsight, we can see the positive practical benefits from allowing & facilitating multiple approaches to move forward. Today there is both a very healthy & growing set of folks who want simple personal sites to do with as they please (#IndieWeb), and we also have a growing network of Mastodon instances and other software & services that interoperate with them, like Bridgy Fed^6. Millions of users are posting & interacting with each other daily, without depending on any large central corporate site or service, whether on their own personal domain & site they fully control, or with an account on a trusted community server, using different software & services.

    Choosing to go from 15+ down to 2.5, but not down to 1 approach turned out to be the right answer, to both allow a wide variety^7 of decentralized social web efforts to grow, interoperate via bridges, and frankly, socially to provide something positive for everyone to contribute to, instead of wasting weeks, possibly months in heated debates about which one approach was the one true way.

    There’s lots more to be written about the history of the Social Web Working Group, which perhaps I will do some day. For now, if you’re curious for more, I strongly recommend diving into the group’s wiki https://www.w3.org/wiki/Socialwg and its subpages for more historical details. All the minutes of our meetings are there. All the research we conducted is there.

    If you’re interested in contributing to the specifications we developed, find the place where that work is being done, the people actively implementing those specs, and even better, actively using their own implementations^8.

    You can find the various IndieWeb building blocks living specifications here:
    * https://spec.indieweb.org/

    And discussions thereof in the development chat channel:
    * https://chat.indieweb.org/dev

    If you’re not sure, pop by the indieweb-dev chat and ask anyway! The IndieWeb community has grown only larger and more diverse in approaches & implementations in the past five years, and we regularly have discussions about most of the specifications that were developed in the Social Web Working Group.
    This is day 33 of #100DaysOfIndieWeb #100Days
    ← Day 32: https://tantek.com/2023/047/t1/nineteen-years-microformats
    → 🔮

    Post Glossary:
    ActivityPub https://www.w3.org/TR/activitypub/
    ActivityStreams2 https://www.w3.org/TR/activitystreams-core/ https://www.w3.org/TR/activitystreams-vocabulary/
    Linked Data Notifications https://www.w3.org/TR/ldn/
    Micropub https://micropub.spec.indieweb.org/
    Webmention https://webmention.net/draft/
    WebSub https://www.w3.org/TR/websub/

    References:
    ^1 https://www.w3.org/wiki/Socialwg
    ^2 https://www.w3.org/wiki/Socialwg/2015-03-18-minutes#solid
    ^3 https://mastodon.cloud/@anildash/109299991009836007
    ^4 https://websub.rocks/
    ^5 https://indieweb.org/Social_Web_Working_Group#History
    ^6 https://tantek.com/2023/008/t7/bridgy-indieweb-posse-backfeed
    ^7 https://indieweb.org/plurality
    ^8 https://indieweb.org/use_what_you_make

    - Tantek
  8. Nov 2022
    1. From a technical point of view, the IndieWeb people have worked on a number of simple, easy to implement protocols, which provide the ability for web services to interact openly with each other, but in a way that allows for a website owner to define policy over what content they will accept.

      Thought you might like Web Monetization.
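
      As a concrete illustration of the kind of simple, easy-to-implement protocol the quote describes, here is a hedged sketch of sending a Webmention; the URLs are hypothetical:

      ```python
      # Hedged sketch: send a Webmention for a hypothetical reply post. Endpoint
      # discovery is simplified to HTML <link>/<a> elements; real code should also
      # check the HTTP Link header, as the Webmention spec describes.
      import requests
      from bs4 import BeautifulSoup

      source = "https://my-site.example/replies/42"        # my page, which links to target
      target = "https://alice.example/2022/11/some-post"   # the page I'm responding to

      # 1. Discover the target's Webmention endpoint.
      html = requests.get(target, timeout=10).text
      link = BeautifulSoup(html, "html.parser").find(["link", "a"], rel="webmention")
      endpoint = requests.compat.urljoin(target, link["href"])

      # 2. Notify it with a form-encoded POST naming source and target.
      resp = requests.post(endpoint, data={"source": source, "target": target}, timeout=10)
      print(resp.status_code)  # 201/202 = accepted; the receiver then applies its own policy
      ```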

    1. Donations

      To add some other intermediary services:

      To add a service for groups:

      To add a service that enables fans to support creators directly and anonymously via micro-donations or small donations: fans pre-charge their Coil account and spend it on streaming payments for content or on tipping creators' wallets, through a JavaScript layer that follows the Interledger Protocol proposed to the W3C:

      If you want to know more, head to Web Monetization or Community or Explainer

      Disclaimer: I am a recipient of a grant from the Interledger Foundation, so there would be a Conflict of Interest if I edited directly. Plus, sharing on Hypothesis allows other users to chime in.

  9. Aug 2022
  10. May 2022
    1. DCAT is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web. This document defines the schema and provides examples for its use. DCAT enables a publisher to describe datasets and data services in a catalog using a standard model and vocabulary that facilitates the consumption and aggregation of metadata from multiple catalogs. This can increase the discoverability of datasets and data services. It also makes it possible to have a decentralized approach to publishing data catalogs and makes federated search for datasets across catalogs in multiple sites possible using the same query mechanism and structure. Aggregated DCAT metadata can serve as a manifest file as part of the digital preservation process.
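
      To make the abstract concrete, here is a small, hedged sketch (using rdflib; the catalog, dataset, and distribution IRIs are invented for illustration) of the kind of description DCAT enables:

      ```python
      # Hedged sketch only: a tiny DCAT catalog built with rdflib.
      # The IRIs and titles are invented for illustration, not taken from the spec.
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      DCAT = Namespace("http://www.w3.org/ns/dcat#")

      g = Graph()
      g.bind("dcat", DCAT)
      g.bind("dcterms", DCTERMS)

      catalog = URIRef("https://data.example.org/catalog")
      dataset = URIRef("https://data.example.org/dataset/air-quality")
      csv_dist = URIRef("https://data.example.org/dataset/air-quality/csv")

      # A catalog that lists one dataset, which has one downloadable distribution.
      g.add((catalog, RDF.type, DCAT.Catalog))
      g.add((catalog, DCTERMS.title, Literal("Example open data catalog", lang="en")))
      g.add((catalog, DCAT.dataset, dataset))
      g.add((dataset, RDF.type, DCAT.Dataset))
      g.add((dataset, DCTERMS.title, Literal("Air quality measurements", lang="en")))
      g.add((dataset, DCAT.distribution, csv_dist))
      g.add((csv_dist, RDF.type, DCAT.Distribution))
      g.add((csv_dist, DCAT.downloadURL, URIRef("https://data.example.org/files/air-quality.csv")))

      print(g.serialize(format="turtle"))  # metadata any DCAT-aware aggregator could harvest
      ```
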
  11. Feb 2022
    1. For this reason, the World Wide Web Consortium (W3C) defines various knowledge representation languages that build on one another in terms of content,

      Reason for standardizing the representation languages

    1. The solution taken with Linked Data makes use of existing technologies that were standardized by the W3C as Semantic Web technologies.

  12. Jan 2022
    1. This document contains information about embedding metadata in W3C Technical Reports (TR) using RDFa.
    1. The internet is for end users: any change made to the web platform has the potential to affect vast numbers of people, and may have a profound impact on any person’s life. [RFC8890]
  13. Nov 2021
  14. Jul 2021
  15. Apr 2020
    1. WCAG 2.0 is a stable, referenceable technical standard that helps developers of any kind of online content (from websites to text and PDF files) create or check their materials for accessibility. Many grant givers or governments (like the European Union) even require institutions to follow those guidelines when publishing public sector information or education resources. https://www.w3.org/TR/WCAG20/

      The ultimate resource on accessibility directly from the World Wide Web Consortium (W3C). WCAG 2.0 documentation is the definitive guide for accessibility on the web and is aimed at web developers, designers, and content creators.

    1. pushing the grain-size of the hypertext to the morphemic level

      This ability is clearly reflected in 2017's W3C Web Annotation standard

  16. Jan 2020
    1. The Web Annotation Data Model specification describes a structured model and format to enable annotations to be shared and reused across different hardware and software platforms.

      The publication of this web standard changed everything. I look forward to true testing of interoperable open annotation. The publication of the standard nearly three years ago was a game changer, but the game is still in progress. The future potential is unlimited!
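
      For a sense of what that structured model looks like in practice, here is a minimal, hypothetical annotation serialized as JSON-LD per the model (the IRIs, date, and body text are invented):

      ```python
      # Hedged sketch: one annotation in the Web Annotation Data Model, serialized
      # as JSON-LD. The IRIs, date, and body text are invented for illustration.
      import json

      annotation = {
          "@context": "http://www.w3.org/ns/anno.jsonld",
          "id": "https://annotations.example.org/anno/1",
          "type": "Annotation",
          "created": "2020-01-15T12:00:00Z",
          "body": {
              "type": "TextualBody",
              "value": "A comment any conforming client or server can store and exchange.",
              "format": "text/plain",
          },
          "target": "https://example.org/some-article",  # here the Target is a whole resource
      }

      print(json.dumps(annotation, indent=2))
      ```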

    1. The idea of a system enabling a multiplicity of independent individuals to create lightweight value-added "trails" through a document space was envisaged most prominently by Vannevar Bush as early as 1945 [BUSH]. ComMentor can be thought of as a tool which such "trail blazers" use to add value to contents, and which other people use to find guidance based on these human-created super-structures. The overall architecture can be seen as a platform where value-added providers can provide their services (as a third player next to content providers and end users).

      I'd heard of ComMentor before, but I hadn't noticed that Terry and Martin cited Vannevar and mentioned the notion of trails here. https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

  17. Dec 2018
    1. without opening a separate tab)

      "In general, it is better not to open new windows and tabs since they can be disorienting for people, especially people who have difficulty perceiving visual content."

      https://www.w3.org/TR/WCAG20-TECHS/G200.html

  18. May 2018
  19. Sep 2017
  20. Aug 2017
    1. <script src="https://hypothes.is/embed.js" async></script>

      One line of code adds open, standards-based annotation to any website.

  21. Jun 2017
    1. The whole point of the newly-minted web annotation standard is to enable an ecosystem of interoperable annotation clients and servers, analogous to comparable ecosystems of email and web clients and servers.

      I think this is one of the ideas I'm struggling with here: is web annotation just about research, or about advancing conversation on the web? I sense this is part of decentralization too (thus, an ecosystem), but where does it fit?

  22. Apr 2017
    1. Really useful session, well worth your time! All the longed-for teacher, student, researcher, creator & user annotation desires for the web are at long last on the way to fulfilment!

  23. Feb 2017
    1. Felt way more appropriate to comment here than in the comments at the body of the page :).

      It was humbling to interact with such dedicated researchers and practitioners and to watch these documents take shape.

      Thanks, everyone!

    1. The W3C’s existence depends on its mission being grounded in moral certainty. That certainty is the only substantial obstacle preventing it from being replaced with easier, more pragmatic standardisation efforts that focus exclusively on implementation.

      End of the W3C's influence?

    1. A URI can be further classified as a locator, a name, or both. The term "Uniform Resource Locator" (URL) refers to the subset of URIs that, in addition to identifying a resource, provide a means of locating the resource by describing its primary access mechanism (e.g., its network "location").
    1. Many Annotations refer to part of a resource, rather than all of it, as the Target. We call that part of the resource a Segment (of Interest). A Selector is used to describe how to determine the Segment from within the Source resource.
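
      As a sketch, a Target for this very sentence could carry a TextQuoteSelector describing the Segment (the source URL here is invented):

      ```python
      # Hedged sketch: a Target that refers to a Segment of a resource via a
      # TextQuoteSelector. The source URL is invented; the quoted text is taken
      # from the sentence above.
      import json

      target = {
          "source": "https://example.org/annotation-model.html",
          "selector": {
              "type": "TextQuoteSelector",
              "exact": "a Segment (of Interest)",
              "prefix": "We call that part of the resource ",
              "suffix": ". A Selector is used",
          },
      }

      print(json.dumps(target, indent=2))
      ```
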
  24. Feb 2016
    1. The feed is how stuff enters their content system. But the feed itself is outside, leaving it available for other services to use. It's great when this happens. Rather than doing it via a WG that tends to go on for years and create stuff that's super-complicated, why not design something that works for you, put it out there with no restrictions, and let whatever's going to happen happen?

      Interesting approach for hypothes.is to consider?

  25. Nov 2015
    1. If you have a copy of the ReSpec repository handy, you may see that there is also a respec2html.js tool under tools/. Feel free to try using it instead of the above process, but please note that it is not used much currently and may behave in a somewhat experimental manner (experiences with it vary — but it's worth a shot if you're looking for a way to generate ReSpec output from the command line).

      ReSpec (sadly) doesn't quite have a command-line tool... at least not one whose output is comparable to a browser's.

      Maybe PhantomJS (which Respec uses for tests) would do a better job?

  26. Oct 2015
    1. Having these two axioms in place and given e.g. the information that Sasha is related to Hillary via the property hasWife, a reasoner would be able to infer that Sasha is a man and Hillary a woman.

      Not necessarily. Increasingly same-sex marriages are more widely accepted. W3C should re-visit their documentation to ensure that they're not excluding LGBTQ populations and don't perpetuate heteronormativity.
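
      For reference, the two axioms the passage relies on are simply a domain and a range on hasWife. A hedged sketch of them in Turtle (loaded with rdflib; names are illustrative and no reasoner is actually run here):

      ```python
      # Hedged sketch of the two axioms described above, loaded with rdflib.
      # No reasoner is run here; an RDFS/OWL reasoner given these triples would
      # infer ex:Sasha a ex:Man and ex:Hillary a ex:Woman. Names are illustrative.
      from rdflib import Graph

      g = Graph()
      g.parse(format="turtle", data="""
        @prefix ex:   <http://example.org/> .
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
        @prefix owl:  <http://www.w3.org/2002/07/owl#> .

        ex:hasWife a owl:ObjectProperty ;
            rdfs:domain ex:Man ;      # axiom 1: any subject of hasWife is a Man
            rdfs:range  ex:Woman .    # axiom 2: any object of hasWife is a Woman

        ex:Sasha ex:hasWife ex:Hillary .   # the only fact asserted about the individuals
      """)

      print(len(g), "triples loaded (the gendered conclusions come only from the axioms)")
      ```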

  27. Sep 2015
    1. The W3C Annotation Working Group has a joint deliverable with the W3C Web Application Working Group called “Robust Anchoring”. This deliverable will provide a general framework for anchoring; and, although defined within the framework of annotations, the specification can also be used for other fragment identification use cases. Similarly, the W3C Media Fragments specification [media-frags] may prove useful to address some of the use cases. Finally, the Streamable Package Format draft, mentioned above, also includes a fragment identification mechanism. Would that package format be adopted for EPUB+WEB, that fragment identification may also come to the fore as an important mechanism to consider.

      Anchors are a key issue. Hope that deliverable will suffice.

  28. Aug 2015
    1. Packages can be used to populate caches associated with multiple URLs without making multiple requests.

      Alex Russell, a Googler who loves Chrome but hates app stores, is passionate about this.

      Interesting chatter at https://mobile.twitter.com/fabricedesre/status/636014195893342208

  29. May 2015
    1. Most of these difficulties would be addressed by the fundamental characteristics of a digital annotation system. The digital annotation system would automatically store and link annotations and sources with machine tidiness. As noted above, it is more than likely in a distributed system that annotations will be stored separately from the sources to which they refer. However, unlike the real-world equivalents, they would automatically hold information that links them effectively to the associated source. However, it is incumbent upon that system to display a clear association between annotation and source.

      But the potentially limitless capacity of an electronic writing space, indeed one that expands its viewing size to the later reader commensurate with the size of text inserted, would easily resolve the analogue annotator's problem of insufficient writing space. Moreover, it is worth taking into consideration the change that such expanded capacity may have upon the behaviour of annotators; an uncramped writing space may equally 'uncramp' their style and encourage them to be more expansive and, possibly, more informative. Equally important, there is no limit upon subsequent annotations relating to the original source or, for that matter, to the initial annotation. Clearly an example where the distributed nature of digital annotation presents a clear advantage.

      Even a clearer annotation generally still lacks all or some of the following: an author, or author status, a date or time, and where the annotation relies on other text or supporting evidence, (e.g. "This contradicts his view in Chapter 3"), it may have no clear direct reference either. A further complication might be the annotations, (or even counter-annotations) of another anonymous party. It is worth remarking that a digital system would be able to record the date and time of the annotation action, the source, and give some indication of the person who initiated the marking. If it were deemed unacceptable in certain systems, the annotation could be rejected as giving inadequate content. Once again the advantage of virtually unlimited writing space would allow the annotator to quote, if desired, the text to which (s)he refers elsewhere; alternatively the functionality that permits the annotator to highlight a source could also be adapted to permit the highlighting of a reference item for inclusion in the annotation body as a hypertext link.

      Some, but not necessarily all of the analogue difficulties may have been encountered; but they all serve to illustrate the difficulties that arise the moment annotations cease only to be read by their original author. It is outside that limited context that we largely need to consider annotations in the distributed digital environment. Picking up on this aspect, we might therefore consider the challenge posed by any system of annotation that intends to have an audience of greater than one, and, conceivably of scores or hundreds of annotators and annotation readers. Irrespective of their number, what makes such multiple annotations unreliable is one's ignorance of the kind of person who made the annotation: expert? amateur? joker? authority? Who wrote the annotation probably ranks as more important than any other undisclosed information about it. In this regard an annotator is no different from an author or writer of papers. Understanding the authority with which an annotation is made can be a key determinant in users' behaviour when accessing annotations across a distributed system.

      refs to role of UX in enabling digital annot to best "handwritten annots"

  30. Apr 2015
    1. This part of the Character Model for the World Wide Web covers string matching—the process by which a specification or implementation defines whether two string values are the same or different from one another.
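
      A tiny illustration of the kind of problem such a string-matching definition has to address: two strings that render identically but differ at the code-point level, compared raw and after NFC normalization:

      ```python
      # Illustration of why string matching needs to be defined: these two strings
      # render identically ("café") but differ at the code point level.
      import unicodedata

      composed = "caf\u00e9"      # é as a single precomposed code point
      decomposed = "cafe\u0301"   # e followed by U+0301 COMBINING ACUTE ACCENT

      print(composed == decomposed)              # False: naive code-point comparison
      print(unicodedata.normalize("NFC", composed)
            == unicodedata.normalize("NFC", decomposed))  # True after NFC normalization
      ```
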
  31. Nov 2014
    1. Still, there are hints that, while the discussion of the group is still being framed, the bitcoin industry could assert itself in the process through greater involvement.

      "Hints"? That's the whole point. Participation is all there is. Do it.