15 Matching Annotations
  1. Jul 2022
    1. the company is announcing the release of a three-part open source toolkit to quickly get the technology into developers’ hands and out in the wild. Adobe’s new open source tools include a JavaScript SDK for building ways to display the content credentials in browsers, a command line utility and a Rust SDK for creating desktop apps, mobile apps and other experiences to create, view and verify embedded content credentials.

      Implementation of the C2PA specification

  2. bafybeibbaxootewsjtggkv7vpuu5yluatzsk6l7x5yzmko6rivxzh6qna4.ipfs.dweb.link
    1. The only thing needed is a shared medium or workspace in which clear traces of the work are registered (Heylighen, 2011a; Parunak, 2006). The aggregated trace functions as a collective memory that keeps track of the different contributions and indicates where further work may be needed. This function is typically performed by the community website, such as the Wikipedia site. A more advanced example of this functionality can be found in the issue queue used by Drupal developers (Kiemen, 2011; Zilouchian Moghaddam, Twidale, & Bongen, 2011). This is a community-maintained, ordered list of feature requests or problems that need to be addressed, together with the status of the work being done on each. The issue queue makes it easy for contributors to see where their contribution would be most helpful, and to keep track of the advances made by others. It can be seen as a more spontaneous, self-organizing version of the job ticketing systems that are commonly used in technical support centers, where each incoming problem is assigned a "job ticket", after which the ticket is assigned to one or more employees, and monitored so as to make sure it is adequately dealt with (Heylighen & Vidal, 2008; Orrick, Bauer, & McDuffie, 2000).

      The Indyweb can increase traceability across the entire network through its built-in provenance mechanism.
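
      The issue-queue mechanism described in the excerpt can be sketched as a minimal shared workspace: the queue itself acts as the collective memory, and each ticket carries its own trace of who did what. The class and method names below are illustrative, not Drupal's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    title: str
    status: str = "open"  # open -> in-progress -> done
    history: list = field(default_factory=list)  # the provenance trace

class IssueQueue:
    """A toy stigmergic workspace: the queue itself is the shared memory."""

    def __init__(self):
        self.tickets = []

    def post(self, title):
        ticket = Ticket(title)
        ticket.history.append(("posted", title))
        self.tickets.append(ticket)
        return ticket

    def claim(self, ticket, contributor):
        ticket.status = "in-progress"
        ticket.history.append(("claimed", contributor))

    def close(self, ticket, contributor):
        ticket.status = "done"
        ticket.history.append(("closed", contributor))

    def where_help_is_needed(self):
        # Any contributor can read the aggregated trace to find open work.
        return [t.title for t in self.tickets if t.status == "open"]
```

      Because every state change is appended to the ticket's history, the queue doubles as the kind of provenance record the note above points to.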

  3. May 2022
    1. a society-wide hyperconversation. This hyperconversation operationalizes continuous discourse, including its differentiation and emergent framing aspects. It aims to assist people in developing their own ways of framing and conceiving the problem that makes sense given their social, cultural, and environmental contexts. As depicted in table 1, the hyperconversation also reflects a slower, more deliberate approach to discourse; this acknowledges damaged democratic processes and fractured societal social cohesion. Its optimal design would require input from other relevant disciplines and expertise,

      The public Indyweb is eminently designed as a public space for holding deep, continuous, asynchronous conversations with provenance. That is, if a participant consents to public conversation, their ideas can be publicly tracked: whoever reads your public ideas can be traced, and this paper trail is immutably stored, allowing anyone to see the evolution of ideas in real time.

      In theory, this does away with the need for patents and copyrights, as all ideas are traceable to their contributors and each contribution is also known. This allows the system to embed crowdsourced microfunding, helping the best (most upvoted) ideas surface.

      Participants in the public Indyweb ecosystem are called Indyviduals, and each has their own private data hub called an Indyhub. Since the Indyweb is interpersonal computing, each person is the center of their own Indyweb universe. Through the discoverability built into the Indyweb, anything of immediate salience is surfaced to your private hub. No application can use your data unless you give it exact permission specifying which data to use and how it shall be used. Each user sets the conditions for their own data usage. Instead of a user's data being stored in silos of servers all over the web, as is current practice, any data you generate, whether in conversation, media or data files, is immediately accessible on your own Indyhub.
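
      The consent model described here can be sketched as a toy grant ledger: an application can only access data for which the owner has granted that exact use. The `Indyhub` name comes from the text; the API below is entirely hypothetical.

```python
class Indyhub:
    """Toy personal data hub: the owner stores data and issues
    fine-grained grants of the form (app, data item) -> allowed uses."""

    def __init__(self):
        self.data = {}
        self.grants = {}  # (app, key) -> set of permitted uses

    def store(self, key, value):
        self.data[key] = value

    def grant(self, app, key, uses):
        # The owner names exactly which data and which uses are allowed.
        self.grants[(app, key)] = set(uses)

    def access(self, app, key, use):
        if use not in self.grants.get((app, key), set()):
            raise PermissionError(f"{app} may not {use} {key}")
        return self.data[key]
```

      The point of the sketch is that the default is denial: absent an explicit grant, `access` refuses, mirroring the "no application can use your data unless you give exact permission" rule.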

      Indyweb supports symmathesy, the exchange of ideas based on an appropriate epistemological model that reflects how human INTERbeings learn as a dynamic interplay between individual and collective learning. Furthermore, all data that participants choose to share is immutably stored on content addressable web3 storage forever. It is not concentrated on any server but the data is stored on the entire IPFS network:

      "IPFS works through content adddressibility. It is a peer-to-peer (p2p) storage network. Content is accessible through peers located anywhere in the world, that might relay information, store it, or do both. IPFS knows how to find what you ask for using its content address rather than its location.

      There are three fundamental principles to understanding IPFS:

      Unique identification via content addressing
      Content linking via directed acyclic graphs (DAGs)
      Content discovery via distributed hash tables (DHTs)" (Source: https://docs.ipfs.io/concepts/how-ipfs-works/)
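
      The first and third principles can be illustrated in a few lines: an address derived from the content itself means any peer holding the bytes can serve them, and a plain dictionary can stand in for the distributed hash table. This is a minimal sketch; real IPFS uses multihash-based CIDs, not bare SHA-256 hex digests.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself. Identical bytes always
    yield the identical address, regardless of who stores them where."""
    return hashlib.sha256(data).hexdigest()

# A dict stands in for the DHT: the key depends only on the bytes,
# so any peer holding the content can answer a lookup for it.
dht = {}

def put(data: bytes) -> str:
    cid = content_address(data)
    dht[cid] = data
    return cid

def get(cid: str) -> bytes:
    return dht[cid]
```

      Storing the same bytes twice returns the same address, which is what makes content-addressed storage deduplicating and location-independent.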

      The privacy, scalability, discoverability, public immutability and provenance of the public Indyweb make it ideal for supporting hyperconversations out of which tomorrow's collective solutions can emerge. It is based on the principles of thought augmentation developed by computer industry pioneers such as Doug Engelbart and Ted Nelson, who decades ago presciently foresaw the need for computing tools to augment thought and to form Networked Improvement Communities (NICs) to solve a new generation of complex human challenges.

  4. Mar 2022
    1. Environmental Data & Governance Initiative (EDGI), an organization that sprang up in the wake of the Trump administration in an effort to prevent a climate-denialist administration from reducing public access to critical government-held data about the environment. In the EDGI working group then called Archiving, we were looking at ways to back up datasets such that scientists would be able to use them as proof — implying a strong chain of provenance — even if the original source were to remove access.

      Indylab provenance could be a good match!

  5. Jan 2022
  6. Dec 2020
    1. “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

      Data Provenance

      The discipline of thinking about:

      (1) Where did the data arise?
      (2) What inferences were drawn from it?
      (3) How relevant are those inferences to the present situation?
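
      The three questions can be captured as a minimal record attached to each dataset. The class below is an illustrative sketch, not an existing library, and its relevance check is deliberately crude: an inference transfers only when the present situation matches the one it was drawn in.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    source: str        # (1) where did the data arise?
    inferences: list   # (2) what inferences were drawn from it?
    context: str       # (3) the situation those inferences apply to
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def relevant_to(self, situation: str) -> bool:
        """Crude check: inferences carry over only when the present
        situation matches the context they were drawn in."""
        return situation == self.context
```

      In the ultrasound story below, a record like this would have flagged that an inference drawn on one imaging machine does not automatically transfer to a higher-resolution one.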

    2. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”

      Example of where a global system for inference on healthcare data fails due to a lack of data provenance.
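
      The numbers in the story make the decision concrete: taken at face value, the 1-in-20 marker risk dwarfs the 1-in-300 procedure risk, but if the marker is scanner noise the relevant risk reverts toward a baseline (the figure used below is an assumed placeholder; the article does not give one) and the comparison flips.

```python
p_marker = 1 / 20       # quoted risk given the white-spot marker
p_amnio_loss = 1 / 300  # quoted risk of losing the fetus in amniocentesis

# If the marker is a false positive on the higher-resolution machine,
# the relevant risk reverts toward an age-adjusted baseline.
# 1/700 is an illustrative assumption, not a figure from the article.
p_baseline = 1 / 700

print(p_marker > p_amnio_loss)    # True: marker taken at face value
print(p_baseline > p_amnio_loss)  # False: procedure risk now dominates
```

      The arithmetic is trivial; the provenance question of whether the marker estimate applies to this machine is what actually decides it.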

  7. Jun 2019
    1. She was able to describe in just a paragraph, she was able to get the data, it's just one table.

      now we turn data tables into charts and graphs, and you often can't get the data back out. Atypon showed a neat system, based on some Authorea work, on making interactive figures where the mouse position showed the data points at that position, even in 3D.

    2. What we realize is to do open science properly, you have to do it all the way along, you have to change all the processes, and capture the data, as it's being created, capture this software as it's being used. So we took the same software stack, reworked it a little bit and created something called the analysis preservation portal,

      basically a sort of ELN; still not capturing data directly from instruments, because in particle physics they have a few big instruments, not the many small instruments we deal with in biomed.

    3. Question any parts of that chain

      The idea of knowledge chains ties in nicely with my idea of provenance chains - capturing data at the source.

    4. it was the combined effort of the year of computer scientists, information scientists and physicists to actually make this work

      The CERN open data portal took huge amounts of effort to make useful.

  8. Jan 2018
  9. Apr 2017
  10. Jan 2016
    1. The journal will accommodate data but should be presented in the context of a paper. The Winnower should not act as a forum for publishing data sets alone. It is our feeling that data in absence of theory is hard to interpret and thus may cause undue noise to the site.

      This will also be the case for the data visualizations shown here, once the data is properly curated and verified. Still, data visualizations can start a global conversation without the full paper being translated into English.

  11. Sep 2015
    1. It was transferred to Boehringer-Mannheim as Clone 12H11, resold to Roche and finally bought by Chemicon, and it is now sold as MAB3026.