60 Matching Annotations
  1. Oct 2024
  2. Sep 2023
  3. May 2023
    1. IPFS can open up to 1000 connections by default and suck up all your bandwidth – and that’s just for exchanging keys with other DHT peers.

      imho, the main problem with IPFS is that it does DHT over TCP, which is crazy-inefficient compared to bittorrent, which does DHT over UDP and "just works"

      one reason for DHT over TCP is to make this work in a web browser, which does not support UDP. so instead of teaching web browsers to talk UDP, IPFS took the simple route of "let's take bittorrent and run DHT over TCP"

      IPFS is obsolete; many of its goals can be achieved with bittorrent v2

      generally, treating web browsers as the main target platform (and thus inheriting their limitations) is a bad idea, just as stupid as the "let's run everything in javascript/WASM" idea

  4. Dec 2022
  5. Nov 2022
  6. Aug 2022
    1. Content addressing is the big little idea behind IPFS. With content addressing (CIDs), you ask for a file using a hash of its contents. It doesn't matter where the file lives. Anyone in the network can serve that content. This is analogous to the leap Baran made from circuit switching to packet switching. Servers become fungible, going from K-selected to r-selected.

      Content addressing is when a piece of content has its own permanent address, a URI. Many copies of the content may exist, hosted by many parties in the network, and all copies have the same address. Whoever is best situated to serve you a copy does so. It makes the servers interchangeable. My blogposts have a canonical fixed address, but it’s tied to a specific domain and found on only one server (except when using a CDN).

      IPFS starts from content addressing.

      Content addressing, assuming the intention of a 'protocol for thought' here, does match atomic-note-style PKM systems. All my notes have unique names that, as human-readable names, could map to CIDs. But CIDs change when the content changes, so there's a mismatch with the concept of 'permanent notes', which are permanent in name/location yet have slowly evolving content.
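
      A minimal sketch of that mismatch, assuming a local ipfs CLI (the note file and its contents are hypothetical):

      $ echo "permanent note, v1" > note.md
      $ ipfs add -q note.md    # prints one CID
      $ echo "permanent note, v2" > note.md
      $ ipfs add -q note.md    # prints a different CID: same note name, new content, new address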

    1. Noosphere was presented at the Render conf on tools for thought. There's overlap with Boris' work on IPFS; judging by his tweets, he's involved in this effort too.

      This Noosphere Explainer explains the tech used, but not the 'massive-multiplayer knowledge graph' Noosphere is posited to be, how that graph would come about with this tech, or what it is meant to be for.

  7. Jun 2022
  8. May 2022
    1. a society-wide hyperconversation. This hyperconversation operationalizes continuous discourse, including its differentiation and emergent framing aspects. It aims to assist people in developing their own ways of framing and conceiving the problem that makes sense given their social, cultural, and environmental contexts. As depicted in table 1, the hyperconversation also reflects a slower, more deliberate approach to discourse; this acknowledges damaged democratic processes and fractured societal social cohesion. Its optimal design would require input from other relevant disciplines and expertise,

      The public Indyweb is eminently designed as a public space for holding deep, continuous, asynchronous conversations with provenance. That is, if the participant consents to public conversation, ideas can be publicly tracked. Whoever reads your public ideas can be traced, and this paper trail is immutably stored, allowing anyone to see the evolution of ideas in real time.

      In theory, this does away with the need for patents and copyrights, as all ideas are traceable to their contributors and each contribution is also known. This allows the system to embed crowdsourced microfunding, helping the best (upvoted) ideas surface.

      Participants in the public Indyweb ecosystem are called Indyviduals, and each has their own private data hub called an Indyhub. Since the Indyweb is interpersonal computing, each person is the center of their own Indyweb universe. Through the discoverability built into the Indyweb, anything of immediate salience is surfaced to your private hub. No application can use your data unless you give exact permission on which data to use and how it shall be used. Each user sets the conditions for their own data usage. Instead of a user's data being stored in silos of servers all over the web, as is current practice, any data you generate, whether in conversation, media, or data files, is immediately accessible on your own Indyhub.

      The Indyweb supports symmathesy, the exchange of ideas based on an appropriate epistemological model that reflects how human INTERbeings learn through a dynamic interplay between individual and collective learning. Furthermore, all data that participants choose to share is immutably stored on content-addressable web3 storage forever. It is not concentrated on any one server; rather, the data is stored across the entire IPFS network:

      "IPFS works through content adddressibility. It is a peer-to-peer (p2p) storage network. Content is accessible through peers located anywhere in the world, that might relay information, store it, or do both. IPFS knows how to find what you ask for using its content address rather than its location.

      There are three fundamental principles to understanding IPFS:

      1. Unique identification via content addressing
      2. Content linking via directed acyclic graphs (DAGs)
      3. Content discovery via distributed hash tables (DHTs)"

      (Source: https://docs.ipfs.io/concepts/how-ipfs-works/)

      The privacy, scalability, discoverability, public immutability, and provenance of the public Indyweb make it ideal for supporting hyperconversations from which tomorrow's collectively emergent solutions can arise. It is based on the principles of thought augmentation developed by computer-industry pioneers such as Doug Engelbart and Ted Nelson, who many decades earlier presciently foresaw the need for computing tools to augment thought and to form Networked Improvement Communities (NICs) to solve a new generation of complex human challenges.

  9. Feb 2022
  10. Jan 2022
    1. Making a Memento

      To create an archived version of the page that could be played back properly, I used the Internet Archive’s “Save” feature by going to this URL in my web browser:

      http://web.archive.org/save/http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt

      …which created this snapshot:

      http://web.archive.org/web/20150709104019/http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt
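
      The same save request can also be issued from the command line. A sketch using curl, assuming the Save endpoint redirects to the new snapshot as it did here (-L follows that redirect; -w prints the final snapshot URL):

      $ curl -s -o /dev/null -L -w "%{url_effective}\n" "http://web.archive.org/save/http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt"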

      From here, we can use wget to look at what gets played back:

      $ wget --server-response http://web.archive.org/web/20150709104019/http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt
      

      …giving:

        HTTP/1.0 200 OK
        Server: Tengine/2.1.0
        Date: Thu, 09 Jul 2015 10:41:38 GMT
        Content-Type: text/plain;charset=utf-8
        Content-Length: 13
        Set-Cookie: wayback_server=19; Domain=archive.org; Path=/; Expires=Sat, 08-Aug-15 10:41:38 GMT;
        Memento-Datetime: Thu, 09 Jul 2015 10:40:19 GMT
        Link: <http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt>; rel="original", <http://web.archive.org/web/timemap/link/http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt>; rel="timemap"; type="application/link-format", <http://web.archive.org/web/http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt>; rel="timegate", <http://web.archive.org/web/20150709104019/http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt>; rel="first last memento"; datetime="Thu, 09 Jul 2015 10:40:19 GMT"
        X-Archive-Orig-x-cache-hits: 0
        X-Archive-Orig-x-served-by: cache-sjc3122-SJC
        X-Archive-Orig-cache-control: max-age=600
        X-Archive-Orig-content-type: text/plain; charset=utf-8
        X-Archive-Orig-server: GitHub.com
        X-Archive-Orig-age: 0
        X-Archive-Orig-x-timer: S1436438419.302921,VS0,VE141
        X-Archive-Orig-access-control-allow-origin: *
        X-Archive-Orig-last-modified: Wed, 08 Jul 2015 22:33:03 GMT
        X-Archive-Orig-expires: Thu, 09 Jul 2015 10:50:19 GMT
        X-Archive-Orig-accept-ranges: bytes
        X-Archive-Orig-vary: Accept-Encoding
        X-Archive-Orig-connection: close
        X-Archive-Orig-date: Thu, 09 Jul 2015 10:40:19 GMT
        X-Archive-Orig-via: 1.1 varnish
        X-Archive-Orig-content-length: 13
        X-Archive-Orig-x-cache: MISS
        X-Archive-Wayback-Perf: {"IndexLoad":359,"IndexQueryTotal":359,"RobotsFetchTotal":1,"RobotsRedis":1,"RobotsTotal":1,"Total":371,"WArcResource":10}
        X-Archive-Playback: 1
        X-Page-Cache: MISS
      
    2. Extracting a WARC record

      Once we’ve identified the offset and length of a particular record (in this case, an offset of 1260 bytes and a length of 1085 bytes), we can snip out an individual record like this (tail -c +N starts output at byte N, so an offset of 1260 bytes means starting at byte 1261):

      $ tail -c +1261 hello-world.warc | head -c 1085
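
      Equivalently, dd can take the offset and length directly (bs=1 makes skip and count byte-granular):

      $ dd if=hello-world.warc bs=1 skip=1260 count=1085 2>/dev/null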
      
    3. Making the CDX

      To generate a content index (CDX) file, we have at least two options. There’s JWATTools:

      $ jwattools cdx hello-world.warc
      

      …(which created cdx.unsorted.out), or the cdx-indexer from OpenWayback:

      $ cdx-indexer hello-world.warc > hello-world.warc.cdx
      

      …(which created hello-world.warc.cdx).
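
      CDX indexes are conventionally kept sorted so that replay tools can binary-search them; if a sorted index is needed, a plain sort over the JWATTools output will usually do:

      $ sort cdx.unsorted.out > hello-world.warc.cdx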

    4. Making the WARC

      To create a WARC, we used wget:

      $ wget --warc-file hello-world http://iipc.github.io/warc-specifications/primers/web-archive-formats/hello-world.txt
      

      …which created the compressed hello-world.warc.gz file. These special block-compressed files (each record is compressed as its own gzip member, so records can still be located individually in the compressed file) are often used directly, but in this primer we uncompress the file so we can see what’s going on:

      $ gunzip hello-world.warc.gz
      

      …leaving us with hello-world.warc.
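
      A quick way to eyeball the result (a WARC is mostly plain text: record headers followed by payloads) is:

      $ head -c 400 hello-world.warc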

  11. Oct 2021
  12. Sep 2021
  13. Apr 2021
  14. Jun 2020
    1. An object, on the other hand, refers to a block that follows the Merkle DAG protobuf data format.

      An object is a block that follows the Merkle DAG protobuf data format.

    1. This client library implements the IPFS Core API enabling applications to change between an embedded js-ipfs node and any remote IPFS node without having to change the code. In addition, this client library implements a set of utility functions.

      This implements the IPFS Core API, plus some utility functions.

  15. May 2020
    1. Migration is relatively simple: copying the data over is mostly enough. Note, though, that if the original copy is still running, the duplicated Peer ID will cause problems.

    1. The actions that you take only affect your own IPFS node, not nodes belonging to your peers.

      It feels like being offline: only the local node is affected, not other nodes. A bit puzzling.

    2. the multicodec prefix in an IPFS CID will always be an IPLD codec.

      An IPFS CID always uses an IPLD codec.

    3. Multiformats CIDs are future-proof because they use multihash to support multiple hashing algorithms rather than relying on a specific one.

      Because they support future hash algorithms, CIDs won't become obsolete.

    4. IPFS uses sha2-256 by default, though a CID supports virtually any strong cryptographic hash algorithm.

      Virtually any strong cryptographic hash algorithm is supported.

    1. const cid = results[0].hash

      This line fails when run: the value is undefined! (Likely an API change: newer js-ipfs versions expose cid rather than hash on add results.)

    1. js-ipfs-http-client is a smaller library that controls an IPFS node that is already running via its HTTP API. js-ipfs actually uses this library internally if it detects that another node is already running on your computer

      js-ipfs-http-client is smaller and controls an already-running node.

    2. Whenever reasonable, we recommend the second method (interacting with a separate IPFS node via the HTTP API). Keeping the IPFS node in a separate process (even if it’s one your program spawns) isolates you from any stability problems with the node.

      Recommendation: use a separate node and talk to it via the HTTP API.
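
      A minimal sketch of that second method from the shell, assuming a local daemon on the default API port 5001 (/api/v0/id returns the identity of the running node):

      $ curl -X POST "http://127.0.0.1:5001/api/v0/id"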

    1. The core IPFS team maintain implementations in Golang and Javascript. Those are commonly referred to as go-ipfs and js-ipfs. The official binaries are built from the Go implementation.

      The main IPFS implementations are go-ipfs and js-ipfs; the official binaries are built from the Go implementation.

  16. Nov 2019
  17. May 2019
  18. Oct 2018
    1. InterPlanetary Wayback (ipwb) facilitates permanence and collaboration in web archives by disseminating the contents of WARC files into the IPFS network. IPFS is a peer-to-peer content-addressable file system that inherently allows deduplication and facilitates opt-in replication. ipwb splits the header and payload of WARC response records before disseminating into IPFS to leverage the deduplication, builds a CDXJ index with references to the IPFS hashes returned, and combines the header and payload from IPFS at the time of replay.
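
      A sketch of that pipeline, assuming the ipwb CLI exposes index and replay subcommands (the WARC file name is hypothetical):

      $ ipwb index archive.warc > archive.cdxj    # disseminate records into IPFS, emit a CDXJ index
      $ ipwb replay archive.cdxj                  # replay the archive, pulling content from IPFS
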
    1. This website converts any IPFS-hosted file to an HLS file and reuploads it to IPFS.
  19. Sep 2018
    1. John wants to upload a PDF file to IPFS but only give Mary access. He puts his PDF file in his working directory and encrypts it with Mary’s public key. He tells IPFS he wants to add this encrypted file, which generates a hash of the encrypted file. His encrypted file is now available on the IPFS network. Mary can retrieve it and decrypt it, since she owns the private key associated with the public key that was used to encrypt the file. A malicious party cannot decrypt the file because they lack Mary’s private key.
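
      A minimal sketch of that flow, assuming GnuPG for the public-key step and a local ipfs CLI; the file name and key ID are hypothetical:

      $ gpg --encrypt --recipient mary@example.com report.pdf    # writes report.pdf.gpg
      $ ipfs add -q report.pdf.gpg                               # prints the CID of the ciphertext
      $ ipfs cat <cid> > report.pdf.gpg                          # Mary fetches the ciphertext by CID
      $ gpg --decrypt report.pdf.gpg > report.pdf                # only Mary's private key can decrypt
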
    1. Browse files stored on IPFS easily and securely with Cloudflare’s Distributed Web Gateway, without downloading software. Serve your own content hosted on IPFS from a custom domain over HTTPS.
    1. A peer-to-peer hypermedia protocol to make the web faster, safer, and more open.
  20. Aug 2018
  21. Mar 2018
  22. Nov 2016
    1. This is a picture of the first HTTP web server in the world. It was Tim Berners-Lee's NeXT computer at CERN. Pasted on the machine is an ominous sticker: "This machine is a server, do not power it down!!". The reason it couldn't be powered down is that web sites on other servers were starting to link to it. Once they linked to it, they then depended on that machine continuing to exist. If the machine was powered down, the links stopped working. If the machine failed or was no longer accessible at the same location, a far worse thing happened: the chain between sites became permanently broken, and the ability to access that content was lost forever. That sticker perfectly highlights the biggest problem with HTTP: it erodes.

      This is interesting, since the opening video for https://hypothes.is/ also mentions the early web, in this case for its annotation features that were removed.

      It seems to me that hypothes.is is even more powerful when used on IPFS content identified by hash, since that underlying content cannot change.

      Thanks to both services I'm doing exactly this right now!

    2. I think this is exactly what I've wanted - and what a lot of people have wanted - for a long time. It's certainly not the first time I've seen someone call for using hashes for referring to files, but the design and implementation behind this look like they do a lot of things right.

  23. Jan 2016
    1. ipfs cat /ipfs/QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ/cat.jpg >cat.jpg

      Same with this one. Dropping the /ipfs/ fixed it.
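
      For reference, the form that worked (per the note above) was:

      $ ipfs cat QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ/cat.jpg > cat.jpg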

    2. hash=`echo "I <3 IPFS -$(whoami)" | ipfs add -q`

      Also, this gives me an error from ipfs add:

      $ hash=`echo "I <3 IPFS -$(whoami)" | ipfs add -q`
      Error: Argument 'path' is required
      
      Use 'ipfs add --help' for information about this command
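
      A workaround, assuming this ipfs version simply will not read from a pipe: write the data to a file first, then add the file (plain ipfs add with -q):

      $ echo "I <3 IPFS -$(whoami)" > hello.txt
      $ hash=`ipfs add -q hello.txt`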
      
  24. Sep 2015
  25. Aug 2015
    1. This is neat; as I understand it, it's like KIO slaves in KDE, or GIO, FUSE, etc.

      Might be useful for exposing some of the tagged distributed storage systems as browsable filesystems with JIT access.