101 Matching Annotations
  1. Sep 2023
    1. The shareToken is a long DID key that looks something like this

      Sharing keys is not a good pattern; I think it's best to make do with some short delegation that will allow wiring up the delegation.

      Also note that just like you can delegate access to your email, you could delegate to a friend's email. I think it might be a good idea to leverage that in Fireproof, something along the lines of:

      if (cx.authorized) { cx.share(friendsEmail) }

      Now when the account authorizes with friendsEmail and pulls down delegations, this one will show up.
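
      A sketch of the friend's side, mirroring the cx object quoted above; cx.delegations() is a hypothetical call standing in for whatever "pulls down delegations" ends up looking like:

      // friend's device: authorize with their own email, then pull delegations
      await cx.authorize(friendsEmail)
      await cx.ready
      const delegations = await cx.delegations() // hypothetical
      // the delegation created by cx.share(friendsEmail) would show up here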

    2. cx.ready.then(() => { if (cx.authorized) { // hide email input } else { // get email from user input cx.authorize(email) }})

      This is a nice API ❤️

    3. In practice, users will need to have more than one logged-in device open at the same time to transfer initial credentials. After that, the devices can operate independently.

      This seems unfortunate. I wonder if you could also work around this with some email trick, e.g. send some seed key so that, in conjunction with some phrase, you'd be able to recover.

      I have done an experiment in the past that used the Android unlock pattern and a time (with low precision) to do this.
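
      A minimal sketch of that email-seed + phrase idea using Web Crypto: the emailed seed acts as the salt and the user-remembered phrase as the password, so neither alone is enough to recover the key. Names are illustrative, not an existing API.

      // derive a recovery key from a phrase the user knows and a seed sent by email
      async function recoverKey (phrase, emailedSeed /* Uint8Array */) {
        const material = await crypto.subtle.importKey(
          'raw', new TextEncoder().encode(phrase), 'PBKDF2', false, ['deriveKey']
        )
        return crypto.subtle.deriveKey(
          { name: 'PBKDF2', salt: emailedSeed, iterations: 250000, hash: 'SHA-256' },
          material,
          { name: 'AES-GCM', length: 256 },
          false,
          ['encrypt', 'decrypt']
        )
      }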

  2. Apr 2021
    1. Sigils reserved for future use

      Not a fan of sigils to be honest. Also have you considered emojis instead (not that they are going to be much better)

    2. Quote blocks start with > . Quote blocks SHOULD be presented in a manner that denotes they are quoted text. Visual clients MAY render quote blocks by indenting them, or by rendering them with a line to the left, as seen in many email clients. Non-visual clients, such as screen readers MAY read quote blocks in a different voice style.

      How do you quote multiple paragraphs ? Multiple quotes ? How do you differentiate between disjoint quotes vs same quote ?

    3. hierarchy is probably a sign your note needs to be refactored or unbundled into multiple notes

      But on the flip side it makes sharing such documents a lot more complicated than sharing a single file of text.

      I'm not sure if I have constructive feedback here. Part of me feels like we should just leave textual files behind and embrace more structural, DAG-like things with actual links in them. The other part of me likes the idea of "works with any text editor". I don't know where this sits between the two.

    4. & punctuated-equilibrium.st

      Part of the struggle I have with markdown and HTML is: I want some terms in the body of text to be links, but here is the dilemma

      • Do I turn all occurrences of the term into a link ?
      • Do I turn a single occurrence of the term into a link, and if so which one ?
      • When different terms are used to refer to the same thing, should they all be links to the same thing ?

      I think the idea of just listing links/references separately is neat as it avoids those questions. On the other hand, it also no longer communicates which part of the text a link is in reference to.

      In an ideal world I wish hovering a link provided a sort of heatmap of what it’s related to.

    5. Link blocks (lines starting with &) allow you to link to other files within the flow of a Subtext document

      I'm more and more convinced that the 1:1 mapping has been a mistake. Have you considered links that point to multiple sources ? That way the user agent can decide which is the best place to get it from.

      Part of me wishes links were hashes followed by URLs you can fetch them from.
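
      A sketch of how a user agent could treat such a link, assuming a SHA-256 hex hash followed by a list of mirror URLs (names are illustrative):

      // try each mirror and verify the bytes against the expected hash
      async function fetchVerified (sha256hex, urls) {
        for (const url of urls) {
          const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer())
          const digest = await crypto.subtle.digest('SHA-256', bytes)
          const hex = [...new Uint8Array(digest)]
            .map(b => b.toString(16).padStart(2, '0')).join('')
          if (hex === sha256hex) return bytes
        }
        throw new Error('no source matched the expected hash')
      }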

    6. allows data sources like .csv, or .png to be OPTIONALLY embedded in-place

      Sounds like multipart/form-data, but I'm not sure I understand how embedding in this file format is going to work.

  3. Oct 2020
    1. import blake2b from '@multiformats/blake2/blake2b.js'

      From what I can tell neither of the two has a default export, so it's unclear to me what this import is going to resolve to.

  4. May 2020
    1. Swarm
      • Should probably adapt to conditions
    2. Pubsub
      • Are there reasons to disable it ?
      • Can it be disabled at the API call level ?
    3. Bootstrap

      Seems like something one should be allowed to add through an API call, with user consent, instead.

    4. MDNS

      We should have this via IPFS Desktop

    5. Gateway

      I assume this is irrelevant in browser context.

    6. API

      I assume this is irrelevant in browser context

    7. Delegates: []

      Should probably try to delegate to IPFS Desktop

    1. hop (object) enabled (boolean): Make this node a relay (other nodes can connect through it). (Default: false) active (boolean): Make this an active relay node. Active relay nodes will attempt to dial a destination peer even if that peer is not yet connected to the relay. (Default: false)
      • Does not seem relevant for an in-browser node.
      • Even if practical in the browser, it's probably not the embedder's call but the user's.
    2. options.pass
      • Can we store it in navigator.credentials instead ? And is that better ?
      • Is this for IPNS keys ?
      • Could we derive it with node key + origin instead ?
      • Also appears to be something that should be between the user & node (especially in a cross-origin scenario), not between the user & app.
    3. bits (number) Number of bits to use in the generated key pair. (Default: 2048) privateKey (string/PeerId) A pre-generated private key to use. Can be either a base64 string or a PeerId instance. NOTE: This overrides bits. // Generating a Peer ID: const PeerId = require('peer-id') // Generates a new Peer ID, complete with public/private keypair // See https://github.com/libp2p/js-peer-id const peerId = await PeerId.create({ bits: 2048 }) pass (string) A passphrase to encrypt keys. You should generally use the top-level pass option instead of the init.pass option (this one will take its value from the top-level option if not set).

      Guessing that passing keys is common to facilitate recovery.

      • Can we use navigator.credentials ?
      • Maybe key management could be improved to make this unnecessary ?
      • What are the keys used for other than for peer ID ?
    4. emptyRepo

      Maybe an ipfs.clear() could be added that would clear data for this origin.

    5. options.repo

      I think we should just embrace the current default "ipfs", but we should verify with users that a custom name isn't being used.

      This also doesn't fit the cross-origin use case, as the client should not be dictating the repo name.

    6. options.libp2p

      Seems like the browser node should just come with the right defaults. In some cases options should probably be moved to API call options (e.g. timeout).

    7. sharding

      Do in-browser apps need to deal with this ? Also seems like it should be part of the ipfs.add options instead of a global one.

    8. options.preload

      Seems like this should be a per-API-call option.

    9. enabled (boolean): Enable circuit relay dialer and listener. (Default: true)

      What would be a reason to disable this ?

    10. options.init

      This is probably going to be complicated. I imagine users might be using / wanting to use their own private keys (for recovery etc.). At the same time, maybe the reasons for wanting to pass them could be addressed instead.

    11. options.silent

      Should probably default to silent and have an alternative mechanism for enabling logging, e.g. ipfs.log.enable() / ipfs.log.disable().

    12. options.offline

      This seems important, although it seems like something the user should control globally rather than the app.

      I can also imagine an app wanting certain data read / write ops to be performed offline; however, that option should probably be added to the API calls instead.

    13. options.start

      This doesn't fit the shared node design, as the node could already be started. We could still allow it, in which case the node would not be started unless it is already running, but it seems best not to have it.

    14. options.repoAutoMigrate

      Can we just commit to true ? If someone wants to stay behind they can use their own isolated node.

      Does this go beyond MFS ? Potentially it could be done there if we have MFS per origin. On second thought, even that is going to be problematic, as one tab may choose true and another false.

    15. ipnsPubsub

      What would be the consequence of having this set to true in browsers if you do not publish to IPNS ?

    16. Browser config does NOT include by default all the IPLD formats. Only ipld-dag-pb, ipld-dag-cbor and ipld-raw are included.

      I know Textile uses custom formats. We will need to figure out a distribution format for IPLD formats so they can be loaded on demand. The node could also cache those for the future. Seems like the formats should be hosted on IPFS 🤷‍♂️

    1. If IPFS Desktop replaced its standalone app screens with their equivalents in the browser (in other words, Web UI) — BUT kept the system menubar/tray actions as they are right now — how would that impact your workflow? *

      I think I'd rephrase it to: "if IPFS Desktop loaded its interface in the default browser tab instead of its standalone window".

    2. Other

      I personally use it mostly because it doesn't involve CLI interactions. Not sure if that's a common enough use case to justify a dedicated checkbox.

    3. Web UI

      I'm guessing this refers to https://webui.ipfs.io/ but it was not immediately obvious to me.

    4. Companion

      Not sure it would be clear what Companion is for someone who's not very familiar with the IPFS ecosystem.

    5. Overall development workflow

      I would just say "workflow" so as not to presume development.

  5. Mar 2020
    1. Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.

      Test

  6. Feb 2020
    1. ID should be a unique, beautiful object

      Cannot agree more! After seeing the QR code hanko I have been thinking about how a beautiful object can be an excellent way to bridge digital identity with physical space.

    1. Today, we’re more or less back to the timesharing model of the 1970s

      This is such a good comparison.

    2. Urbit OS makes the server side usable for individuals without the need for MEGACORP

      🤔

    1. Ultimately, we think that new technology is most likely to get adopted if it can provide a much, much better user experience

      👍

  7. Jan 2020
    1. There is currently no mobile build, but a prototype of Beaker browser using a Mozilla mobile reference browser is technically possible.

      There is in fact https://bunsenbrowser.github.io/ (which I heard is not very useful), also @sammacbeth has built a full-featured Beaker browser with the reference browser and his dat-fox extension that uses libdweb under the hood.

    2. although there are tools and services that can host dat content, becoming another peer with more consistent uptime.

      One often overlooked property of this setup is that an arbitrary number of mirroring services like Hashbase could be used per archive. It also implies that service providers can be switched at will without migration costs or URL changes (assuming one uses DNS records and not the Hashbase-provided URL, of course).

    3. although some applications add multiwriter support for Dat.

      There is multiwriter Dat already, although it has not been widely deployed just yet. Under the hood it is still a single author per log; it's just that the canonical log provides a pointer to another author's log, and the projection is a merge of the corresponding logs.

    4. Note that only the latest version's files are stored in a dat, unlike git versioning.

      That is not correct. The log is append-only and therefore contains all the versions. It's just that hyperdrive projects "the latest" version. In fact Beaker allows you to access older versions as well (if you go to beaker://library/dat://.... you can even see a version dropdown).

    5. metadata

      I would not use "metadata" here, as archives consist of an append-only log for data and metadata that serves as an index for that log. Any change is usually a write to both the log & the metadata.

      I would just refrain from using the "metadata" term, as in Dat it tends to refer to a specific thing.

    6. own/create and seed/share.

      You can also fork sites and make them your own!

    7. or simply as a new local file that never leaves its host machine.

      Maybe it would be clearer if this case were compared to a git repo, due to versioning etc.

      Here as well I find "local file" a bit misleading; maybe "data set" or something along those lines would be a better term.

    8. single file

      I'm not sure I follow what you are trying to convey with "single file" here.

    9. secure

      I would omit "secure" or elaborate in more detail on what it means in this context. Valid arguments can be made for and against the security properties of Dat.

    10. dat

      I'd say "dat archive" for clarity

    11. diffable

      Does not seem like the right link here.

    12. Looks like the "draft" overlay makes links unclickable & unhoverable

  8. Jun 2019
    1. In this failed state, the sender can send the message to the recipient's always-online cafe(s), ensuring the message is delivered when the recipient returns online. In practice, this means that, when available, cafe "inbox" addresses are attached to contacts, which get published to the network

      I'd be interested in why delivering a message to an inbox was chosen over publishing a message to, say, an outbox that the recipient can read from instead. In that setup, if both sender and recipient use a cafe, the recipient's cafe could read from the outbox into its inbox. But if the recipient doesn't use a cafe, it can query its own contacts' outboxes to get messages. In fact, if both sender and receiver have no cafe they would still be able to get the message across once both are online. Additionally it opens an opportunity to relay messages through gossiping across the contact graph.

    2. In this failed state, the sender can send the message to the recipient's always-online cafe(s), ensuring the message is delivered when the recipient returns online. In practice, this means that, when available, cafe "inbox" addresses are attached to contacts, which get published to the network.

      Does this not imply that there could be a condition where a message / update is sent to an old inbox that has since been changed but has not yet replicated ?

    3. This involves a combination of authenticated IPFS pinning and thread snapshots. A thread snapshot contains only the metadata and latest known update hash (HEAD) needed to reconstruct the entire thread from scratch, and is encrypted with the client's account key. This means that cafes only issue encrypted backups and are not able to read their clients' threads. This also means that the snapshots are useless if a client loses their account key.

      I think this is not very clear; specifically, I'm not sure what the answers to the following questions are:

      1. Does the cafe replicate all of the account's threads or just a special subset ?
      2. What if I want some data to be replicated by cafe A and other data by cafe B, and maybe some data not to be on any cafe at all ?
    1. Search for contacts

      I don't seem to be able to ever find any contacts, even if I try to find myself (a real account on the iOS Textile Photos app) from a test app with a different account.

    2. participants across the entire network stream results directly to the requester.

      What if I do not want to be discovered ?

    3. A contact is essentially a collection of peers that share the same account.

      This is a bit confusing because in this case it refers to other accounts.

    4. peers

      The term "peer" here is confusing because usually it's meant to refer to others on the network rather than yourself.

    5. If we had any threads, these updates would have been announced to them so that other members could pick up the changes.

      It is unclear what announcing this to other threads implies here.

    1. Snapshots are an encrypted representation of a thread.

      I still don't quite understand how I can enable snapshots in my own custom thread. E.g. if I'm using a JSON CRDT library like Automerge, my ops will be messages in the thread and the state is computed from all the messages. Ideally there would be a way to take occasional state snapshots so that a new participant can trust the inviter to bootstrap quickly with the current state and optionally & lazily fetch the ops that resulted in that state.
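
      A minimal sketch of that idea with the classic Automerge API (exact signatures vary between Automerge versions); thread, latestSnapshot and laterOps are hypothetical stand-ins for posting / reading thread messages, not Textile's actual API:

      import * as Automerge from 'automerge'

      // every local edit gets appended to the thread as an ops message
      let doc = Automerge.change(Automerge.init(), d => { d.title = 'hello' })
      await thread.post({ type: 'ops', ops: Automerge.getAllChanges(doc) })

      // occasionally post a full-state snapshot so newcomers can skip the op log
      await thread.post({ type: 'snapshot', state: Automerge.save(doc) })

      // a new participant bootstraps from the snapshot the inviter points at,
      // then lazily fetches and applies any ops posted after it
      let replica = Automerge.load(latestSnapshot.state)
      replica = Automerge.applyChanges(replica, laterOps)[0] // Automerge 1.x returns [doc, patch]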

    1. "pin": true,

      What does pin mean here & how does it differ from pin in thumb ?

  9. Apr 2019
    1. (add-right 3 [0 1 2]) ;; ==> [0 1 2 3]

      Shouldn't this be ;; ==> [1 2 3 0] instead ?

  10. Mar 2019
  11. Feb 2019
  12. Jan 2019
  13. gozala.hashbase.io gozala.hashbase.io
    1. Every document can also be collaborative edited by many users and they can all do it without being connected to each other

      Documents are in fact far more interesting than just CRDTs. A document is a changelog with pointers to other, similar changelog sources, collectively projecting the state of the document. Better yet, alternative projections can be explored by ignoring or incorporating other sources, implying that anyone could project every possible set of changelogs.

      This also implies that anyone can edit anything without anyone's permission and in collaboration with anyone, each choosing their own tool to do so.

    2. Comments are wellcome

  14. Dec 2018
    1. (do not know how to do later programmatically)

      Jim Pick pointed out that the tls-keygen package on npm does add trusted roots into the system, so it might be worth taking a look at.

    2. Visual representation of that idea Picture

    1. JavaScript - ipfs.cat(ipfsPath, [options], [callback])

      I would suggest the same approach here. Just have a single cat that returns an IPCat, which could have some canonical interface; I'd suggest AsyncIterator again, which can be coerced to a ReadableStream, PullStream, etc. And not all the options need to be included with ipfs itself; that can be a separate library that one pulls in depending on the use case.
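
      A sketch of the suggested shape: cat returns a plain async iterable of Uint8Array chunks, and the stream coercions live in separate helper libraries. ipfs and ipfsPath are assumed from the surrounding docs; run inside an async function:

      const chunks = []
      for await (const chunk of ipfs.cat(ipfsPath)) {
        chunks.push(chunk)
      }
      const file = new Blob(chunks) // or hand chunks to a ReadableStream / pull-stream helper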

    2. JavaScript - ipfs.add(data, [options], [callback]) Where data may be: a Buffer instance a Readable Stream a Pull Stream an array of objects, each of the form:

      I would highly recommend making the exposed interface less generic because:

      1. It makes it straightforward, without any cognitive overhead on the consumer.
      2. It does not require checks / guesses at the implementation site, which allows better optimizations and reduces maintenance overhead.
      3. It allows the implementation to be free from Node- or browser-specific bits.

      What I'd like to suggest instead is the following:

      ipfs.add(files:IPFiles, options):Promise<Res[]>

      While the above makes the API monomorphic, you could still support multiple data formats by pushing the polymorphism into the IPFiles instantiation.

      For example, IPFiles could have the following type:

      type IPFiles = AsyncIterator<{path:string, content:ArrayBuffer}>
      

      And there could be multiple helper functions to instantiate IPFiles from a ReadableStream, PullStream, Buffer, etc. Those helpers don't all need to be bundled with ipfs, as depending on usage you'll likely need one or the other, if any.
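
      A sketch of one such helper (hypothetical name): turning a single path + buffer into the IPFiles shape suggested above.

      async function * fromBuffer (path, content) {
        yield { path, content }
      }

      // usage: const results = await ipfs.add(fromBuffer('hello.txt', bytes))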

  15. Nov 2018
  16. Oct 2018
    1. The privacy model isn’t as robust as Dat since all the files you’re downloading are being broadcast via content discovery.

      Isn't that just the flip side of de-duplicating everything ?

    2. If the value in this chain isn’t the same as what’s stored locally, the connection will be closed and the block will be ignored

      Why do you need to traverse to find the version corresponding to the local version ? Since IPNS updates are signed by the author's key, I would imagine you can verify that the new version is signed by the author, so there is no need to verify further. Or am I missing something ?

    3. With this field in place, one can traverse the history by following the previous links.

      One major downside, though, is that traversing the history would require n steps for n versions back.

    4. Since each IPNS entry is signed, and should have a decreasing sequence number over time, the only additional verification needed will be to check that the sequence number doesn’t decrease by more than one.

      Can you provide some more details on the sequence numbers you're referring to ? I'm lacking some context here.

    1. The data/ folder is a managed object-store folder.

      I probably have made this comment before, so I apologize if I'm starting to become annoying, but I do finally have a more concrete argument.

      Interop at the file / schema level is problematic as it introduces high coordination costs, meaning an update to the format needs to be coordinated between apps. Which is why interop at the message-passing (API) level is my recommendation, as it allows all the interop with much lower costs:

      1. App A can move to a new data format but still provide a legacy API for app B to read / write data.
      2. App C can develop an adapter for app A's new API so that app B and others can continue functioning while they migrate to the new API.
      3. An API can expose multi-faceted transactions as atomic. For example, I'm working on a blog publishing app that publishes posts in multiple formats, .md + .html, but I could expose an API that takes .md as input and still publishes a post in both formats (see the sketch below). Another interesting use case would be adding a contact to an address book, which would include key exchange, claimed-proof validations, etc. Failing a single operation could produce a corrupt dataset, but a carefully designed API would avoid that.

      I suspect that 3 would amplify the need for data schema updates to address some mistakes, which in turn would amplify interop concerns with others.
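
      A sketch of point 3: a message-passing API that exposes a multi-step publish as one atomic operation. The connect helper and the blog app's method names are hypothetical.

      const blog = await connect('blog.example') // hypothetical handle to the blog app

      // the caller only supplies markdown; the blog app writes post.md and
      // post.html (and updates its index) in one transaction, so a failure
      // cannot leave the dataset half-written
      await blog.call('publish', {
        slug: 'hello-world',
        markdown: '# Hello world\n\nFirst post!'
      })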

    2. navigator.requestUserSession() Method to create an access-session with the user's FS. If the user-session doesn't already exist with the given parameters, the user will be prompted with a dialog to sign in and grant the requested permissions.

      I think introducing a sign-in in apps could be a mistake. Again, if you consider the user flow, when you go to an app you often want to do things, and you only care about how they'll be discovered once you're about to publish. With that, here are some of my conclusions / suggestions:

      1. Let everyone have an implicit public account that is there, but you don't have to know it exists unless you care.
      2. When the user is about to publish something, that's an excellent opportunity to introduce the fact that it can be published under that account, which they can customize, or they can choose to publish anonymously under some URL they can share out of band.
      3. If the app ends up publishing to the profile, it either automatically gains the privilege to make updates or there is a good user experience built around publishing an update.
      4. Building a file picker around the same metaphor would also make sense. Root entries would be accounts + an entry containing everything you shared anonymously.
    3. The root folder of the private dat is "protected" and so apps can only write to the various "private folders" unless the user creates new unprotected folders.

      I had some very interesting conversations where I was able to capture some very interesting UX insights (I'll try to turn that into a blog post at some point), which got me convinced that apps having a default, non-user-facing space to write data into without permission prompts would be very important. That would allow natural user flows: when the user wants to share created content, the application has a great opportunity to share it publicly by linking it to the profile, or anonymously, at which point that data can move from the app's local store into the user-facing space.

      That is to say, I think it would be best to sandbox apps by default such that they can accomplish most things without interrupting the user flow, and introduce the concept of a user-facing space only after the user has taken an action that raises that question.

    4. The user filesystem is comprised of two dats called the "Private" and the "Public" dats.

      +1 I think separation of public and private data is very important and I'm happy to see you're heading in this direction.

  17. Sep 2018
    1. The proto.json file would look something like this:

      I think this still kind of assumes that you can statically encode invariants, which I don't believe is possible (for the reasons described in the prior comment, where I drew an analogy with type systems).

      An argument could be made that this will address the 90% use case and therefore it's a good compromise. It's hard to estimate what % of use cases this will or will not be able to address.

      Here is an example of an interaction, off the top of my head, that no static description will be able to address.

      Suppose there is a "secrets" app that lets you store arbitrary data encrypted with some master key. It seems reasonable to allow other apps to store data there as well without sharing the master key with them.

      If you go with the process metaphor over the file metaphor it becomes fairly simple: the "secrets" app would just need to expose read & write operations, letting an arbitrary app read / write files that would be encrypted by a master key that no app but the "secrets" app knows. The secrets app could also do all kinds of user consent along the way.

      In fact, other apps could expose compatible read / write operations for doing different things and achieve interoperability without coordinating with anyone.
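
      A sketch of that process-metaphor version of the hypothetical "secrets" app: other apps never see the master key, they only get message-passing style read & write operations (all names are made up for illustration).

      const secrets = await connect('secrets.example') // hypothetical handle to the secrets app

      // another app storing and retrieving data it can never decrypt itself
      await secrets.call('write', { path: '/tokens/github', value: token })
      const stored = await secrets.call('read', { path: '/tokens/github' })
      // the secrets app encrypts with its own master key and can ask the
      // user for consent before servicing either call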

    2. the browser knows exactly what permission to ask from the user.

      This is a good point and makes me wonder if there should be a general mechanism for an app to communicate with the user before they decide to grant a permission.

      It could all be done in userland, say by displaying an overlay iframe with the permission request, but having some kind of browser-level API might be worth considering.
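
      A rough sketch of the userland version: the app shows its own rationale overlay before triggering the real permission prompt (the overlay URL and message are illustrative).

      const overlay = document.createElement('iframe')
      overlay.src = 'https://app.example/permission-rationale.html'
      overlay.style.cssText = 'position:fixed;inset:0;border:0;width:100%;height:100%'
      document.body.appendChild(overlay)

      window.addEventListener('message', (event) => {
        if (event.data === 'rationale-accepted') {
          overlay.remove()
          // only now ask the browser / filesystem API for the actual permission
        }
      })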

    3. In winter of 2017/18, we proposed adding high-level data semantics to the browser itself. We would create a set of standard data formats, schemas, and APIs which everybody shares. This was somewhat reminiscent of Schema.org, in that it would try to create "one entology to rule them all."

      I would like to point out a well-known problem in languages with a very powerful type system, like Haskell. Typing (or a schema, for that matter) is a powerful tool to make impossible states impossible, or in other words to encode invariants in types. But even with a very powerful type system not all invariants can be encoded in types, and the solution there is to encode invariants at the module level, which in practice means having a type that can hold an invalid state but only exposing operations that are guaranteed to end up in a valid state.

      Which is to say, I don't think a schema is capable of addressing the data corruption problem, and if you take inspiration from typed languages you wind up encoding invariants at the application level, which can expose an API to update data in a way that prevents data corruption.
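
      A sketch of the "encode invariants at the module level" idea in plain JavaScript: the underlying array could hold anything, but the module only exposes operations that keep it sorted, so consumers can never observe an invalid state. The same trick applies to app data hidden behind an API.

      // sorted-list.js (illustrative module)
      export const empty = Object.freeze([])

      export const insert = (list, value) =>
        Object.freeze([...list, value].sort((a, b) => a - b))

      export const toArray = (list) => [...list]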

    4. Identifiers are domain names

      I would encourage you to drop this requirement. It's certainly convenient and could be advertised as the idiomatic way to go about it, but there are a few cases where you'd want something else:

      1. Multiple identities - having a domain per identity, like my work profile, personal profile, family profile.
      2. Contextual identity - sometimes you want to have a contextual identity that could be part of (linked to) your larger identity.
      3. Anonymity - sometimes you want to stay anonymous, and in that instance you probably don't want to tie your identity to a domain.
  18. Jan 2017
    1. ADN : nvALT] GrabLinks went o

      Good luck with this one

    2. Some note

    3. I improved the rollover highlighting so that it doesn’t highlight every parent element, just the one it’s actually going to grab. I added this fix to GrabLinks as well, so if you have that installed you should already be seeing the improvement. Like GrabLinks, Bullseye loads from a GitHub Gist and will “auto-update” when I make changes. No need to keep reinstalling it.

      Just an example.

    1. By Tyler Benedict - Posted on January 28, 2017 by Tyler Benedict - 6:00 pm This year is off to a bang! The last couple week’s we’ve seen a ton of pro bikes from the Tour Down Under and CX Nats, and this past week saw an increase in new product announcements. But more than anything, it gave us the chance to ride some of the biggest new products that’ll hit store shelves in the coming months. Cory had a ride aboard the 9100-series Dura-Ace Di2 and hydraulic disc brakes (with an aside about the rounded rotors for UCI), and Tyler hopped aboard the FSA WE electronic shifting system.

      I really doubt it will look OK once the page updates.

    1. Video: Old Skool x New School – Tom Ritchey meets up with the Bicycle Academy By Zach Overholt - Posted on January 30, 2017 by Zach Overholt - 3:05 pm If you’re in the UK, and you want to learn how to build a bike frame, The Bicycle Academy is quickly becoming one of the premier destinations.

      Example with some content with an image.