940 Matching Annotations
  1. Last 7 days
    1. Again, the discloser can retain the ‘Admit’ message as non-repudiable digital proof that the disclosee has admitted the disclosure of the ACDC

      What stops the disclosee from simply not sending the "Admit"? That would let them repudiate with "I never received the promised and agreed-upon info".

    1. Events in the TEL are sealed (anchored) in a Key Event Log (KEL) using seals.

      Every TEL event gets anchored?

    2. r can provide an API where transaction events or their seal references and their KEL seal references can be looked up by the SAID of the ACDC associated with the transaction event

      I guess lookup is meant to be specified, and not ad-hoc. So software can do it without human (programmer) guidance.

    3. NOT

      This serves to convey different semantics than the above fields. I'd expect it to be in a different field.

    4. When an Edge block does not include a SAID, d field, then the node, n field must appear as the first field in the block.

      Strange. All we need is d. Introducing n makes the format non-uniform. I'd expect d alone, replaced at run time with its content.

    5. Notable is the fact that no top-level type fields exist in an ACDC. This is because the Schema, s, field itself is the type field for the ACDC and its parts
    6. where each Edge block is a leaf of a branch

      Rather, "where each Edge block is a leaf or a branch".

    7. Without the entropy provided the UUID, u, field, an adversary may be able to reconstruct the block contents merely from the SAID of the block and the Schema of the block using a rainbow or dictionary attack on the set of field values allowed by the Schema

      I.e., knowing a schema makes it possible to derive content from its hash.

    1. where K ≤ L

      And K > L / 2, I guess. Otherwise two conflicting blocks could each be authorized in a fork.

    2. The specific details of this recovery are explained later (see Section 11.6). In general, the witnessing policy is that the first seen version of an event always wins, that is, the first verified version is witnessed (signed, stored, acknowledged and maybe disseminated) and all other versions are discarded

      Seems to harm convergence

    1. blockchains are too expensive, too complex, too hard to scale, and too slow-moving in their governance
    2. Open identity systems should stay agnostic to all silos and blockchains

      I.e., identity system should be ledger-agnostic.

    3. consensus network called KERI
  2. May 2024
    1. the message payload inside an IP packet includes a unique identifier provided by the identity system that is exclusive to the sender of the packet

      Including the identifier in the message, coupled with the keypair->identifier mapping, is meant to give authenticity - that the message came from the identifier.

      Problem 1: the keypair will eventually be revoked; how to convey that messages signed before revocation are authentic while later ones are not?

      Solution 1.1: Issue the revocation event with a logical revocation time. I.e., revocation is issued on Friday, but takes effect logically on Thursday.

      Then messages received from Friday onward, signed by that keypair, are deemed not authentic.

      Solution 1.2: Use multiple signatures to prove authenticity.

      Then, even if one gets revoked - authenticity holds.

      Con: metadata overhead, need for keypair agents to be connected, higher delay - meh.

      Is there value in including the identifier, given the recipient still needs to resolve the keypair->identifier mapping and a keypair can be bound to only one identifier?

      Yes, the identifier hints at how to resolve it.

    2. Also more recently the Trusted Computing Group (TCG) uses the term “implicit identity” and “embedded certificate authority” to describe the process whereby device identifiers are automatically generated by the associated computing device [143]
    1. Any selective disclosure is potentially ineffective unless performed within the confines of a contractually protected disclosure that imposes an incentive on the disclosee (verifier) to protect that disclosure (counter-incentive against the exploitation of that discloser).
    1. This makes the DID functionally equivalent to a did:key value, and visually similar, except that a peer DID will have the numeric algorithm as a prefix, before the multibase encoded, multicodec-encoded public key

      Then why have it?

  3. matrix-org.github.io
    1. In order to do this, a form of network-wide election takes place, where the node with the numerically highest ed25519 public key will win

      Eh, that doesn't seem fair. What stops someone from generating keypairs until they get a decently high one, cheating their way to better odds?

    1. Even though the Pinecone address space is considered to be global in scope, it is important to note that there is no true “single” Pinecone network. It is possible for multiple disjoint and disconnected Pinecone network segments to exist
    1. In some cases, source addresses could be sealed/encrypted, although this is not implemented today
    1. but limiting the conference to 7 or 8 participants given all the duplication of the sent video required. In Element Call Beta 2, end-to-end encryption was enabled; easy, given it’s just a set of 1:1 calls.

      I imagine having to transmit stuff to every peer would not work well for low-bandwidth devices.

      Also, complete connectivity graph is a LOT of connections.

      Additionally, if one channel fails, its recipient doesn't see the content while the others do (they could have rebroadcast it, hey!).

      Also, won't scale for streaming.

    2. instead it’s more of a QoSed incremental sync
    3. So if the network roundtrip time to your server is even 100ms, and Sliding Sync is operating infinitely quickly, you’re still going to end up showing a placeholders for a few frames

      What stops loading context around the window?

      Better: start from the window's position and sync everything outward from it by priority.

      I.e., prioritize what gets replicated based on how the user is acting.

    4. Faster Joins (lazy-loading room state when your server joins a room)
    5. The European Union’s Digital Markets Act (DMA) is a huge step in that direction - regulation that mandates that if the large centralised messaging providers are to operate in the EU, they must interoperate
    6. 111,873,374 matrix IDs on the public network, spanning 17,289,201 rooms, spread over 64,256 servers
    1. All API endpoints within the specification are versioned individually. This means that /v3/sync (for example) can get deprecated in favour of /v4/sync without affecting /v3/profile at all. A server supporting /v4/sync would keep serving /v3/profile as it always has

      It's cool, I guess. But then it leaves one wondering: are they compatible? How to tell? There should be a kind of manifest that maps v4 to the set of granular versions that "play well together".

    2. For example, if /test were to be introduced in v1.1 and deprecated in v1.2, then it can be removed in v1.3.

      That's... not intuitive for those familiar with semver.

      Like, bump it to 2.0 instead. It's a breaking change; everybody will get it. What's the point of the second digit if it only guarantees "maybe nothing will be broken" - say it explicitly: is it breaking or is it not?

    3. vX.Y

      The v prefix makes it non-semver.

      Better: semver. It's widely known. You can leave Z in X.Y.Z as 0.

    4. Users may publish arbitrary key/value data associated with their account

      This is personal information; one may wish to choose who gets access to it.

    5. Usage of an IS is not required in order for a client application to be part of the Matrix ecosystem. However, without one clients will not be able to look up user IDs using 3PIDs

      Better: link from the 3rd party to Matrix.

      E.g., by adding on Twitter link to Matrix ID.

    6. Each state event updates the value of a given key.

      I hope it's a CRDT update.

    7. file transfers

      Better would be to include files as-is: put the file's hash in an event and download the file from whoever has it, the IPFS way - only between friends.

    8. Thus if one event is before another, then it must have a strictly smaller depth

      Doesn't seem to be true. There can be forks whose events are not partially ordered.

      Better: derive Version Vector & still compare hashes to mitigate equivocations.

    9. Every event graph has a single root event with no parent

      Weird. That means one user must start a topic. Whereas a topic like "Obama" could be started by multiple folks, not knowing about each other, later on discovering and interconnecting their reasoning, if they so wish.

    10. Events exchanged in the context of a room are stored in a directed acyclic graph (DAG) called an “event graph”.

      Well, better call it "event DAG" then?

      Or, "event hash-DAG" / "Merkle-DAG".

    11. type values MUST be uniquely globally namespaced following Java’s package naming conventions, e.g. com.example.myapp.event

      We're in the Web. Better: URIs.

    12. In a mobile client, it might be acceptable to reuse the device if a login session expires, provided the user is the same

      Yeah, we wouldn't have this problem if the agent's key had been authorized.

    13. each device gets its own copy of the decryption keys

      Whoa! Sharing the one and only private key is.. sub-optimal.

    14. each of which would be its own device

      Better call them "agents". "An app is a device" sounds incorrect.

    15. signing the message in the context of the graph for integrity

      That's weird. The user isn't in charge of creating an event (a user-generated event is not a complete event in the Matrix model; it lacks causal history).

      Relying on a server to create events means you need to be online in order to use apps.

      Better: let the user's device be enough, so the user can create events offline and sync them later. The server stays dumb - just relaying to the user's friends.

    16. Room data is replicated across all of the homeservers whose users are participating in a given room
    17. and shares data with the wider Matrix ecosystem by synchronising communication history with other homeservers and their clients

      That's a con. There's no need to sync globe-wide, creating a giant ledger. You have a set of peers that you want to share your stuff with (friends), leave it at that.

    18. Use of 3rd Party IDs (3PIDs) such as email addresses, phone numbers, Facebook accounts to authenticate, identify and discover users on Matrix

      Good stuff. I.e., associate existing accs.

    19. Managing user accounts (registration, login, logout)

      Better: always log in with a server, unless you choose to migrate.

    20. Extensible user profile management (avatars, display names, etc)

      Better: let peers have personal profiles of their friends.

      Like you do with contacts on your phone: you know their ID (phone number), you give it a name, assign a pic. It's up to you.

    21. Extensible user management (inviting, joining, leaving, kicking, banning) mediated by a power-level based user privilege system

      Additionally: community-based management, ban polls.

      Alternative: per-user configuration of access. Let rooms be topics on which peers discuss. A friend can see what his friends and FOAFs are saying.

    22. REST

      That's a standard - good for devs. Yet it's grotesque.

    23. JSON

      Meh. Verbose, bad types, needs parsing.

    24. The user should know precisely where their data is stored

      And be able to store it locally, on trusted devices, replicated.

    25. no single points of control over conversations or the network as a whole

      Ideally support p2p. Servers/brokers are optional.

    26. Sending and receiving extensible messages

      Better: text messages & structured stuff (data).

    27. across a global open network of federated servers and services

      Better: across devices (p2p) and (optionally) their servers.

    28. Eventually-consistent cryptographically secure synchronisation of room state

      That's good. Matrix Event Graph is cool.

    1. However, certain algorithms executed on the MEG do not scale well with the number of parent events, i.e., they can become very resource intensive, especially when old parts of the MEG are referenced as parents [8]. In practice, the maximum number of parent events therefore has to be restricted to a finite value d.
    1. The protocol flow is therefore modified as follows

      Looks complex. Requires online SI OP. Sub-optimal UX.

    1. Given a set of n agents, at most f of which are faulty and the rest are correct, a fault-resilient supermajority, or supermajority for short, is any fraction of agents greater than (n+f)/2. For example, if f = 0 then a simple majority is a supermajority, and if f < 1/3 then 2/3 is a supermajority
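
      Worked example: with n = 10 and f = 3, a supermajority is any set of more than (n+f)/2 = 6.5 agents, i.e. at least 7. A tiny illustrative check (my own sketch, not from the paper):

      ```python
      def is_supermajority(k: int, n: int, f: int) -> bool:
          """True if k agents out of n (at most f of them faulty) exceed (n + f) / 2."""
          return 2 * k > n + f

      assert is_supermajority(7, 10, 3)       # 7 > 6.5
      assert not is_supermajority(6, 10, 3)   # 6 <= 6.5
      ```
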
    1. b the first demonstration of q being Byzantine

      ^

    2. the blocklace stores not only valid blocks but also blocks that constitute a proof that their creator is Byzantine

      Only the first equivocation evidence is needed to be included, right? Subsequent equivocations can be buffered.

    3. Thus only the valid subset of the blocks will be considered as state by the replicated abstraction (the CRDT).

      Note: here valid seems to refer to both well-formed and valid.

      Detection of this malformedness is possible as function(block), i.e., from the block alone. Thus a correct node will not accept such a block's ops and won't build more atop it.

      An additional check is required to ensure that nodes actually do that^.

      Nodes that do not comply are Byzantine as well, and their ops can likewise be "not observed".

    4. Thus the harm a Byzantine agent may cause is limited to a finite prefix of the computation

      If it's a permission-less system, an agent may spawn more nodes to act Byzantine. So I guess the strategy of "accept equivocation in order to spread the knowledge of Byzantine agents" is suitable for permissioned systems only.

    5. On the contrary, sets of randomly generated numbers or sets of signed-hashes (in the case of a blocklace) do not allow such a compact representation

      Nothing stops us from deriving versions for events as their virtual-chain length. But version vectors are not BFT.

    6. Virtual chain axiom
    1. Conclusions

      Use content-based ids.

      Yet a node may equivocate. Equivocations are not detectable, as no virtual chain is proposed.

      Further, how to limit the harm of equivocations is not mentioned.

    2. For example, in WOOT [33] and YATA [32], an insertion operation must reference the IDs of the predecessor and successor elements, and the algorithms depend on the predecessor appearing before the successor in the sequence. The order of these elements is not apparent from the IDs alone, so the algorithm must inspect the CRDT state to check that the predecessor and successor references are valid

      Yeah, again, have causality in the DAG. Or better, don't use this kind of causality: referring to heads and having an op like "insert at position X" is enough to restore the context - what's before, what's after.

    3. but Byzantine nodes may not set this metadata correctly

      Strange. If we do have causality as references to "heads" by content address, that's not a problem - it won't happen.

      I.e., have causality at the DAG level, not in payloads.

    4. The 3f + 1 assumption means these protocols cannot be deployed in open peer-to-peer systems, since they would be vulnerable to Sybil attacks [15].

      I.e., 3f+1 is not suitable for open peer-to-peer systems.

    5. blockchains [5]

      refs to Hashgraph

    6. If authentication of updates is desired, i.e. if it is important to know which node generated which update, then updates can additionally be cryptographically signed. However, this is not necessary for achieving Strong Eventual Consistency
    7. When updates are generated concurrently and then merged, the next update contains multiple predecessors

      If many updates are merged at once, there are that many pointers. But if we merge right away upon learning of an update, we always merge two updates at a time - hence two pointers.

    8. Consequently, if update u2 depends on update u1, it could happen that some nodes deliver u1 before u2 and hence process both updates correctly, but other nodes may try to deliver u2 and fail because they have not yet delivered u1. This situation also leads to divergence.

      I.e., linking by non-content-based id is not BFT.

    9. Say a Byzantine node generates two different updates u1 and u2 that create two different items with the same ID. If a node has already delivered u1 and then subsequently delivers u2, the update u2 will be rejected, but a node that has not previously delivered u1 may accept u2. Since one node accepted u2 and the other rejected it, those nodes fail to converge, even if we have eventual delivery

      I.e., giving nodes IDs that are not content-based is not a BFT strategy.
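
      A minimal sketch of the content-based-ID idea (names are mine): if an item's ID is the hash of its content, two different items can never share an ID, so the "same ID, different content" divergence above cannot arise.

      ```python
      import hashlib, json

      def item_id(item: dict) -> str:
          """Content-based ID: hash of the canonically encoded item."""
          return hashlib.sha256(json.dumps(item, sort_keys=True).encode()).hexdigest()

      u1 = {"op": "create", "value": "A"}
      u2 = {"op": "create", "value": "B"}
      assert item_id(u1) != item_id(u2)        # different content => different IDs
      assert item_id(u1) == item_id(dict(u1))  # same content => same ID on every node
      ```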

    10. Unfortunately, version vectors are not safe in the presence of Byzantine nodes, as shown in Figure 1. This is because a Byzantine node may generate several distinct updates with the same sequence number, and send them to different nodes (this failure mode is known as equivocation). Subsequently, when correct nodes p and q exchange version vectors, they may believe that they have delivered the same set of updates because their version vectors are identical, even though they have in fact delivered different updates.

      Version vectors are not BFT
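
      A small sketch of the failure mode (structures are made up): a Byzantine node "m" issues two different updates, both numbered seq = 1, and sends one to p and the other to q. Both replicas end up with the identical version vector {"m": 1} even though they hold different updates.

      ```python
      u1 = {"node": "m", "seq": 1, "payload": "hello p"}
      u2 = {"node": "m", "seq": 1, "payload": "hello q"}

      def deliver(vv: dict, update: dict) -> dict:
          """Advance a version vector past the delivered update."""
          vv = dict(vv)
          vv[update["node"]] = max(vv.get(update["node"], 0), update["seq"])
          return vv

      vv_p = deliver({}, u1)
      vv_q = deliver({}, u2)
      assert vv_p == vv_q   # the vectors claim "same state"...
      assert u1 != u2       # ...while the delivered updates differ
      ```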

    11. This algorithm has the downside that many updates may be sent to nodes that have already received them from another node, wasting network bandwidth

      I.e., relying on stale knowledge of what others know in order to sync them may result in many "knew that already" cases.

    12. In some cases, a causal delivery algorithm additionally ensures that when one update has a dependency on an earlier update, the earlier update is delivered before the later update on all replicas
    13. The network is not necessarily complete, i.e. it is not required for every node to be able to communicate with every other node
    14. they must either exercise centralised control over which nodes are allowed to join the network

      Not necessarily. PoS can be used for invitations, with the inviter sharing part of their stake with the invitee, so joining does not affect consensus.

      As stake determines weight in consensus.

      And stake may also determine how often peers choose to sync with a node, thus limiting the harm low-stake Sybils can do at the communication level.

    1. This, in turn, requires either having knowledge of the current replica-set or using an external source of truth (i.e. a blockchain), a system constraint that we did not have before

      Snapshotting requires knowing that everybody already knows the snapshotted stuff.

      Given a derived total order, we can snapshot and garbage-collect ops that have been received and seen by all.

      Given that membership is managed via ops, we know who the members are at any point of the total order.
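
      Rough sketch of that garbage-collection rule (data shapes assumed): an op can be folded into a snapshot once every current member is known to have received it.

      ```python
      def snapshottable_ops(ops, members, acks):
          """ops: op ids in the derived total order;
          acks[m]: set of op ids member m is known to have received."""
          return [op for op in ops if all(op in acks[m] for m in members)]

      ops = ["op1", "op2", "op3"]
      members = {"alice", "bob"}
      acks = {"alice": {"op1", "op2", "op3"}, "bob": {"op1", "op2"}}
      print(snapshottable_ops(ops, members, acks))  # ['op1', 'op2'] can be snapshotted
      ```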

    2. Operations are easy to define, as they just need to be commutative, so that the resulting state will be the same in every replica regardless of the order in which they have received the operations

      And non-commutative ops can be supported by adding derived total order.

    3. we can sign the broadcast messages, thus leaving signatures out of the Merkle-DAG

      We lose trust in messages. They don't prove authenticity. So peers talk "off the record". And we can't trust anything said.

    4. Comparing version vectors between payloads is an inclusion check without the need to perform a DAG-walking

      Version vectors do not represent equivocations / forks.

      E.g., a vector conveys "Alice's 3rd" (which is X remotely), whereas Alice could have equivocated, and locally Alice's 3rd event is Y.

    5. It is clear that this approach will bring some benefits

      Namely, less metadata.

      Perhaps it could be mitigated via metadata compaction on sync and snapshots to garbage-collect history.

    1. Smart Merge is built on a customized adaptation of Myer's diff algorithm and Google's diff-match-patch

      Git's way: snapshots as the source of truth, with deltas derived in order to merge.

  4. Apr 2024
    1. received_orders

      Begs for gossip-about-gossip.

    2. Once a recipient has received one copy of a message, they MUST ignore subsequent copies that arrive, such that resends are idempotent

      Strange. I'd have thought a hash-based equality check would be enough to give idempotence on receipt: "I've seen it already, no-op".
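
      A minimal sketch of that hash-based idempotence (message shape assumed): the recipient keeps the hashes of messages it has already seen and drops duplicates.

      ```python
      import hashlib

      seen: set[str] = set()

      def receive(raw: bytes) -> bool:
          """Return True if the message is new, False if it is a resend."""
          digest = hashlib.sha256(raw).hexdigest()
          if digest in seen:
              return False      # already delivered: resends are no-ops
          seen.add(digest)
          return True

      assert receive(b"bid: 10")
      assert not receive(b"bid: 10")  # duplicate ignored
      ```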

    3. When Alice and Bob are both bidding in an auction protocol, each of them marks their first bid with sender_order: 1, their second bid with sender_order: 2, and so forth.

      If we were to use causal broadcast / Cordial Dissemination - they would not arrive out-of-order.

      Cordial Dissemination would require gossip-about-gossip though.

    4. sender_order

      A way to construct a self-parent chain in the thread's context.

    1. We suggest that the messaging standards all erred by treating public-key encryption and digital signatures as if they were fully independent operations

      I'd expect that

    2. When B receives A's signed & encrypted message, B can't know how many hands it has passed through, even if B trusts A to be careful.

      Craft a new pub key and disclose it only to Alice?

    3. But in reality, when Charlie gets Alice's message via Bob, Charlie very likely will assume that Alice sent it to him directly

      Sign&Encrypt is not meant to be user-facing.

      Devs should take care to avoid ambiguity about the recipient where that matters.

    4. In this case, Alice will be blamed conclusively for Bob's exposure of their company's secrets.

      Again a wrong assumption. Alice signed the "sales plan", yet we know nothing about whether she sent it to anybody, so she can't be blamed for it popping up somewhere.

    5. Here, Bob has misled Charlie to believe that “Alice loves Charlie.”

      Charlie here makes the wrong assumption that "I love you" is meant for him, whereas what Alice expressed with her message is ambiguous - a simple attestation that she loves something.

    1. forces the sender to talk “on the record”

      Signature is "on the record".

    2. centralized servers and certificate authorities perpetuate a power and UX imbalance between servers and clients that doesn’t fit with peer-oriented DIDComm Messaging
    3. web security is provided at the transport level (TLS); it is not an independent attribute of the messages themselves

      I.e., in web, parties that reside on the ends of an encrypted channel authorize each other. Whereas data that's passed between them does not have this authorization built in.

      Taking a reverse approach, akin to having locks on data and not a channel, we can have authorization on data and not the channel.

    1. The Broadcast Channel API allows basic communication between browsing contexts (that is, windows, tabs, frames, or iframes) and workers on the same origin.

      The Broadcast Channel API works on the same origin only.

    1. with tokens provided by their IdP.
    2. The new device needs to receive the secret s without leaking it to any third party
    3. Finally, EL PASSO supports multi-device scenarios. It enables users to easily register new devices (e.g., laptop, phone, tablet) and supports easy identity recovery in case of the theft of one device. It natively supports 2FA: An RP may request and assess that users connect from two different devices in order to sign on their services (multi-device support).
    4. intra-RP linkability

      Perhaps user's ID for that RP can be a hash of userID + RPID.
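
      A sketch of that suggestion (not EL PASSO's actual construction): derive a per-RP pseudonym, so the user stays stable within one RP but shows up under different IDs at different RPs.

      ```python
      import hashlib

      def rp_pseudonym(user_id: str, rp_id: str) -> str:
          return hashlib.sha256(f"{user_id}|{rp_id}".encode()).hexdigest()

      a_shop = rp_pseudonym("alice", "shop.example")
      a_bank = rp_pseudonym("alice", "bank.example")
      assert a_shop != a_bank                                 # unlinkable across RPs
      assert a_shop == rp_pseudonym("alice", "shop.example")  # stable within one RP
      ```

      (In practice the hash input would need a user-held secret rather than a guessable userID, or the pseudonym could be brute-forced.)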

    1. Commonly, this is accomplished through a publicly available list of credential statuses (either listing the revoked or valid credentials).

      Claims about one's identity (authorized devices) could be maintained by a quorum of those devices, or by a quorum of one's devices and one's friends.

    2. Trust anchors affirm the provenance of identities and public attributes (DIDDocs)

      A DID is an ID plus a way to resolve the info associated with that ID (the DIDDoc). It seems strange to couple the two. I'd like to have one ID and plenty of methods to resolve the info behind it - a kind of MultiDID.

    3. Alternatively, a simple public key can serve as an identifier, eliminating the DID/DIDDoc abstraction

      This would require making the private key portable, which is not secure.

    1. If one user marks a word as bold and another user marks the same word as non-bold, there is no final state that preserves both users’ intentions and also ensures convergence

      Having "bold" mean some semantics, e.g., important.

      Then they merge. Alice does not consider it important, Bob does -> render both. E.g., Bob's "importance" expressed as bold, Alices "not important" as grayed text.

    2. If the context has changed

      E.g.,

      "Ny is a great city."

      Alice removes "great".

      Bob wonts to replace it with "gorgeous", by removing "reat" and adding "orgeous".

      Having merged:

      "Ny is a orgeous city."

      Begs for semantic intent preservation, such as "reword great to gorgeous".

    3. Rather than seeing document history as a linear sequence of versions, we could see it as a multitude of projected views on top of a database of granular changes

      That would be nice.

      As well as preserving where ops come from.

      Screams for gossip-about-gossip. I'm surprised people haven't discovered it.

    4. Another approach is to maintain all operations for a particular document version in an ordered log, and to append new operations at the end when they are generated. To merge two document versions with logs L1 and L2 respectively, we scan over the operations in L2, ignoring any operations that already exist in L1; any operations that do not exist in L1 are applied to the L1 document and appended to L1 in the order they appear in L2.

      This is much like Chronofold's subjective logs.

      Do the docs need to be shared in full? The only thing we need is the delta ops.
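
      A quick sketch of that merge rule (op identity by value, names mine): scan L2, skip ops already present in L1, apply and append the rest in L2's order.

      ```python
      def merge_logs(l1_log, l2_log, apply_op):
          """Merge log L2 into log L1, applying each newly seen op to L1's document."""
          seen = set(l1_log)
          for op in l2_log:
              if op not in seen:
                  apply_op(op)
                  l1_log.append(op)
                  seen.add(op)
          return l1_log

      doc = []
      l1 = ["ins a@0", "ins b@1"]
      l2 = ["ins a@0", "ins c@1"]
      merge_logs(l1, l2, apply_op=doc.append)
      print(l1)  # ['ins a@0', 'ins b@1', 'ins c@1']
      ```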

    5. Peritext works by capturing a document’s edit history as a log of operations

      How that works is not mentioned.

      I guess ops are collected into logs, per member, based on their IDs.

      A step short from having gossip-about-gossip.

    6. but since a given client never uses the same counter value twice, the combination of counter and nodeId is globally unique

      What stops him?

    7. Another area for future exploration is moving and duplicating text within a document. If two people concurrently cut-paste the same text to different places in a document, and then further modify the pasted text, what is the most sensible outcome?
    8. span can be deleted once causal stability [2] has ensured that there will be no more concurrent insertions into that span
    9. Fig. 4

      Uhh, I'd imagine "remove" would refer to "bold" annotation.

      Otherwise, there can be another "bold" with t < 20, that would be accidentally removed.

      Syntactic intent is not preserved.

    10. For example, a developer might decide that colored text marks should be allowed to overlap, with the overlap region rendering a blend of the colors

      That would be better for the user to decide.

      I think a good default is to express semantics explicitly.

      I.e., Bob should not automatically have his utterance marked important just because it sits atop Alice's, which she considers important.

      If Bob tries to reword - OK. If Bob wants to add - no.

    11. No

      Given that bold, italic, and colored are syntactic representations of semantics that we do capture, they can overlap.

      Moreover, in Bob's user-defined mapping from semantics to syntax, Bob's "important" can be bold, while Alice's "important" can be italic.

    12. Conflicts occur not only with colors; even simple bold formatting operations can produce conflicts

      Again, let them capture semantics.

      Say, "Alice considers "The fox jumped"" as important. Alice changes mind, only "The" is important. Bob considers "jumped" as important.

      Result: Alice considers "The" important. Bob considers "jumped" important.

    13. Consider assigning colored highlighting to some text

      Color is meant to convey some semantics. Like "accent", or "important". These semantics can coexist, just like italic and bold.

      So a solution may be to: 1. let users express the semantics of their annotations; 2. give user-customizable defaults for how those semantics are rendered.

      Ensuring that the rendering of semantics is composable, i.e., it conveys the originally assigned semantics.

    14. Furthermore, as with plain text CRDTs, this model only preserves low-level syntactic intent, and manual intervention will often be necessary to preserve semantic intent based on a human understanding of the text

      Good remark on syntactic vs. semantic intent preservation.

      Semantics are in the head of a person, who conveys them as syntactic ops; i.e., semantics get specified down to ops.

      Merging syntactically may not always preserve semantics. I.e., one wants to "make defs easier to read by converting them to CamelCase", another wants the same but via snake_case. Having merged them syntactically, we get a Camel-Snake-Case hybrid, which preserves no semantic intent. The semantic intents here are not conflict-free in the first place, though:

          Make defs readable
          /                \
      as CamelCase     as snake_case
          |                 |
      modify to CC     modify to SC

      They diverged at this point, even before getting to syntactic changes.

      The best solution would be to solve the original problem in a different way - let defs be user-specific. But that's blue-sky thinking; although it's done in Unison, we mostly have syntactic systems around.

      So, staying in syntactic land, the best we could do is capture the original intent: "Make defs readable".

      Then we need a smart agent, human or AI, to specify it further.

    15. With this strategy, the rendered result is
    16. The key idea is to store formatting spans alongside the plain-text character sequence, linked to a stable identifier for the first and last character of each span, and then to derive the final formatted text from these spans in a deterministic way that ensures concurrent operations commute

      I.e., let's capture user's intent as ops, not their result.
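
      Loose sketch of that idea (names are mine, not Peritext's actual structures): formatting is stored as spans anchored to stable character IDs rather than to indexes, so concurrent edits don't shift what a span refers to.

      ```python
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class CharId:
          node: str
          counter: int      # (counter, nodeId) is unique per client, so IDs never collide

      @dataclass
      class FormatSpan:
          start: CharId     # ID of the first character covered
          end: CharId       # ID of the last character covered
          mark: str         # e.g. "bold"
          timestamp: int    # orders concurrent span ops deterministically

      span = FormatSpan(CharId("alice", 17), CharId("alice", 29), "bold", 20)
      ```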

    1. Privacy is realized by the group founder creating a special keypair for the group and secretly sharing the private group key with every new group member. When a member is removed from a group or leaves it, the founder has to renew the group’s keypair

      Having locks on data is a nice idea.

      But given that dissemination/disclosure happens among friends only, do we need to encrypt the blocks, when the communication channels are already encrypted?

    2. However, by doing so the sender may reveal additional information through the block’s hash pointers, e.g. the identities of other group members

      Well, when sharing such a block off-group, you may skip transmitting its deps. In social networking that may be alright. And when the off-group agent gets accepted into the group, he's able to get the stuff below.

      However, that does complicate piggybacking, as it'll be seen that the previously off-group agent has some block (but actually he doesn't have its deps).

    3. reserved words

      Perhaps a sort of protobuf is better.

    4. A group creator can invite other agents to become members and remove members at will.

      Goes against democratic principles.

      A democratic way would be to raise a ban poll.

    5. The grassroots WhatsApp-like protocol WL employs a hypergraph (a graph in which an edge may connect any number of vertices). A hyperedge connecting agents P ⊂ Π means that the agents in P are members in a group represented by the hyperedge

      I.e., an edge of a hypergraph is a set of vertices.

      This is akin to a pub/sub topic.

    6. SMS
    7. has an IP address

      Multiaddr could be used instead, to aid connectivity

    8. In practice, an agent is an app running on a smartphone

      Agent = process

    9. Federated systems aim to distribute control, but federated servers are still controlled autocratically
    10. Note that Grassroots Social Networking requires dissemination, and Grassroots Cryptocurrencies require also equivocation exclusion, but neither require consensus. Consensus is needed only by higher-level applications that employ “on-chain” governance such as grassroots social contracts [7], the grassroots counterpart of miner-operated smart contracts, and grassroots representative assemblies (in preparation).

      Consensus is required for smart contracts and governance.

      Payments and social networking require weaker equivocation-exclusion.

    11. and if located behind a firewall that prevents smartphones from communicating directly,

      Huh, such firewalls exist? I thought they can be hole-punched.

    12. In particular, a deep-fake payload that is not attributed to its source can be promptly filtered as spam

      ^

    13. However, since every block in GSN is signed, when one breaches privacy within the protocol the breach carries their signature so the culprit can be identified.

      What stops a culprit from sending off-group a message that is not his own? We can only achieve "culprit detection" by addressing and signing every message we send to A. That is a lot of re-signing, and we won't have a convergent DAG.

    14. A rather elaborate set of protocols and infrastructure (named STUN, TURN, TURNS, ICE) is needed to overcome these Internet limitations.

      And that is not enough if one peer (or is it both?) is behind a symmetric NAT.

    15. Furthermore, each block sent includes the most recent IP address of the sender, allowing agents to keep track of their friend’s changing IP addresses.

      Perhaps it's better to attach the new IP address to a message only once it actually changes. What's the point of repeating the same IP over and over?

    16. Every so often, an agent p sends to every friend q every block p knows and believes that q needs, based on the last block received from q.

      ^

    17. Agents communicate only with their friends

      More like an edge gives a communication path.

      A->B (A follows B) - B can talk to A.

      A<->B - B can talk to A, A can talk to B.

    18. However, their exclusion is not required in social networking, and hence social networking protocols can be simpler than payment systems protocols

      I.e., Equivocation exclusion is not required for social networking.

    19. Grassroots Social Networking safety implies that each member can be held accountable to their own utterances, as well as to utterances that they forward. As Grassroots Social Networking has no controlling entity that can be held accountable in case members of the social network break the law, accountability and law enforcement in Grassroots Social Networking are more similar to real life than to the social networks we are familiar with.

      I.e., protocol creators are not responsible for how it's used.

    20. In WhatsApp, members of a group must trust the service to correctly identify the member who authored an utterance, and utterances forwarded from one group to another have no credible attribution or context

      ^

    21. In existing social networks, utterances by members do not carry intrinsic attribution or provenance. Hence, Twitter members must trust the social network operator to correctly identify the authorship of a tweet

      ^

    22. As a screenshot with a tweet could easily be a fake, an author of a tweet can later repudiate it simply by deleting it, and no proof that the tweet ever existed can be provided, except perhaps by the operator itself.

      ^

    23. Naturally, an actual Grassroots Social Networking application may integrate the two protocols to provide both public feeds and private groups

      Perhaps it's better to have a generic permissions mechanism that can express both cases, and more, so folks can manage it flexibly as they need.

    24. The WhatsApp-like network has private groups, members of each group being both the authors and followers of its feed

      This couples two things: read permissions, write permissions.

      With them defined separately and explicitly, the group is empowered with flexible controls.

      E.g., as in OrbitDB's policies.

    25. The Twitter-like network has public feeds

      So this is akin to pub/sub, with a single authorized publisher.

    26. friends need

      "want" rather?

      "need" would be for refs of a message you don't have. But then such a message could have been required to come with deps.

    27. , free of third-party control

      E.g., Airbnb may refuse access to the service because you're from country X, even though the host is willing to accept you.

    1. The consensus is reached in the same way as for transactions i.e. using hashgraph consensus algorithm. The only difference is that the concerning events in the hashgraph now contain other type of data instead of transactions

      Not necessarily; how to store received events is an implementation detail. One could dump them in a side array - as efficient as an array of pointers to events - where the index in that array is the event's position in the total order.

    2. DL
    1. Composing Implementations

      Any correct implementation can be composed with any other (compatible) correct implementation, and it is guaranteed to be correct .

    2. This implies that any correct run of the implementation that stutters indefinitely has infinitely many opportunities to activate the specification. Under the standard assumption that an opportunity that is presented infinitely often is eventually seized, a live implementation does not deadlock as it eventually activates the specification.
    3. Live

      I.e., there is a possible further computation from y to y', as well as from sigma(y) to sigma(y').

      I.e., from any TS' computable mapped state y there is a computable mapped state y'.

    4. Complete

      Any compute in a TS can be performed in an implementing TS TS'.

      I.e., any compute in TS maps to compute in TS'.

      I.e., any TS compute is translatable to TS'

    5. Safe

      I.e., any compute in an implementing TS TS' can be performed in TS.

      I.e., any compute in TS' maps to compute in TS.

      I.e., any TS' compute is translatable to TS.

    6. implemented transition system (henceforth – specification),

      I.e., the TS being implemented (by TS') is called the specification.

    7. An implementation is correct if it is safe, complete and live.
    8. Given two transition systems TS = (S, s0, T) and TS′ = (S′, s′0, T′), an implementation of TS by TS′ is a function σ : S′ → S where σ(s′0) = s0.
    9. empty if s = s′

      "empty" meaning a no-op / self-transition?

      I guess any s has such an empty computation to itself.

    10. Also note that T and Tf are not necessarily disjoint, for the same reason that even a broken clock shows the correct hour once in a while

      Huuh?

    11. We denote by s →* s′ ∈ T the existence of a correct computation (empty if s = s′) from s to s′
    12. A transition in T f \ T is faulty, and a computation is faulty if it
    13. A transition s → s′ ∈ T is correct, and a computation of correct transitions is correct.
    14. a run of TS is a computation that starts from s0.
    15. A computation of TS is a sequence of transitions s → s′ → · · ·,
    16. A transition system TS = (S, s0, T, Tf) consists of a set of states S, an initial state s0 ∈ S, a set of (correct) transitions T ⊆ S² and a set of faulty transitions Tf ⊆ S². If Tf = ∅ then it may be omitted
    17. the transitions over S are all pairs (s, s′) ∈ S², also written s → s′.
    18. Given a set S, referred to as states,
    19. ?

    20. and σ32 :S3 → S3

      S3 -> S2 ?

    21. What does * mean?

    1. TL;DR: Decoupling data dissemination from metadata ordering is the key mechanism to allow scalable and high throughput consensus systems. Moreover, using an efficient implementation of a DAG to abstract the network communication layer from the (zero communication overhead) consensus logic allows for embarrassingly simple and efficient implementations (e.g., more than one order of magnitude throughput improvement compared to Hotstuff).

      I.e., collecting data about how processes talk is cheap and preserves what happens on the network. Consensus can then be computed locally from that info. Damn simple - data FTW.

    1. Every p-block with a payment in r-coins by a correct trader p ≠ r is eventually approved or disapproved by an r-block [provided p and r are friends or have a common friend in SG(B)].

      Strange that you need to have a friend-path in order to use r's coins. I'd expect r to accept&approve a message from me, given I hold his coin (which he can see from my message).

    2. An r-block b that repays a redemption claim with comment (redeem, P), P = [p1, p2, . . . , pk], k ≥ 1, has a payment in pi-coins, i ∈ [k], provided the balance of r does not include any pj-coins for any 1 ≤ j < i at the time of creating b, and has a payment in r-coins, r ∉ P, provided the balance of r does not include any P-coins at the time of creating b

      Why have signed coins?

      It makes it possible to track which coins have been equivocated.

      But what's the use of that?

      What's the difference between "you can't use these specific coins, as they were already used in an equivocation" and "you can't use this amount of opaque coins, as it is involved in an equivocation"?

      The latter, at least, may succeed if the equivocation issuer has enough balance for both. Although then there's no point in creating the equivocation in the first place. So never mind, it won't happen, except by silly equivocators.

    3. An r-block b with an empty payment and comment (disapprove, h′) points to an r-coin payment block b′, where h′ points to the reason for disapproval: To b′, if it is unbalanced, or to a block b′′ equivocating with b′

      Again, this can be derived from the DAG. Seems redundant.

      It would spare some computation, as one can check the equivocation by following a pointer, but one would still need to ensure that both equivocating blocks are observed by the self-parent chain of the "DISAPPROVE" issuer.

    4. An r-block b with an empty payment and comment approve points to a balanced r-coin payments block b′, where b does not observe a block equivocating with b′

      What's the point of explicit approvals if they can be derived from the DAG?

      It won't spare computation, as one would still need to walk the DAG and ensure the payment is balanced and equivocation-free. You can't trust the issuer of "APPROVE".

    5. Given a blocklace B and two agents p, r ∈ Π, the r-coins balance of p in B is the sum of r-coins payments accepted by p in B minus the sum of r-coins payments issued by p in B.

      Great, so the r-balance of p is derived from the history of ops regarding r. There's no need for p to calculate it and add it to every op - that would be redundant. Not sure this derivation is what the proposed protocol does, though.
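
      Sketch of that derivation (block shape assumed, not the paper's encoding): the r-coin balance of p is a fold over p's own blocks, so no running balance needs to be stored anywhere.

      ```python
      def r_balance(p: str, r: str, blocks: list) -> int:
          """blocks: [{'creator': str,
                       'accepts': [(payer, coin, amount)],
                       'pays':    [(payee, coin, amount)]}]"""
          balance = 0
          for b in blocks:
              if b["creator"] != p:
                  continue
              balance += sum(a for (_, c, a) in b.get("accepts", []) if c == r)
              balance -= sum(a for (_, c, a) in b.get("pays", []) if c == r)
          return balance
      ```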

    6. Finality: A p-block consumes a non-p-block b′ with a payment to p only if r approves b′.

      It would be nice to have finality optional, so as not to incur a round-trip to a possibly offline r. Double-spends will still be detected and punished; given the value of a double-spend is less than its cost, there's no incentive to do it.

    7. and only if r does not have enough p1 coins then r pays any remainder in p2 coins, and only if r does not have enough p2 coins then r pays to p any remainder in r-coins

      This behavior needs to be specified more strictly.

      I'd assume r would be in charge of explicitly describing how he wants redemption to happen.

    8. not letting

      More like "not being able", as it's not up to him, he can't reject redemption.

    9. consisting of r-coins, which can be thought of as IOUs issued and signed by r

      Why do coins need to be signed?

    10. The black agent mints a black coin, increasing its balance from 3 to 4 coins

      Why capture the resulting 4? Ops like burn(10) or mint(1) do the same while being more semantic, as they convey what happens rather than the result.

      E.g., when green has 3 green_coins and we see send(1, green_coin, to black), send(3, green_coins, to green): did green just miscalculate his balance (it should be 2), or did he send and mint one at the same time?
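
      Sketch of the point (hypothetical op format): recording mint/pay/burn as operations keeps the intent unambiguous, and any balance stays derivable.

      ```python
      ops = [
          {"agent": "black", "op": "mint", "coin": "black", "amount": 1},
          {"agent": "green", "op": "pay",  "coin": "green", "amount": 1, "to": "black"},
      ]

      def delta(agent: str, coin: str, op: dict) -> int:
          """Change to `agent`'s balance of `coin` caused by a single op."""
          if op["op"] == "mint" and op["agent"] == agent and op["coin"] == coin:
              return op["amount"]
          if op["op"] == "pay" and op["coin"] == coin:
              if op["agent"] == agent:
                  return -op["amount"]
              if op.get("to") == agent:
                  return op["amount"]
          return 0

      balance_change = sum(delta("black", "black", o) for o in ops)  # +1 from the mint
      ```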

    11. C

      That looks messy, accidentally so, it seems.

      1. The green agent only needs a REDEEM(green_coin) op to convey what he wants.

      2. Self-payments are redundant.

      3. Links other than self-parent and other-parent(s) are redundant. You can derive anybody's balance out of their self-parent chain.

      3.1 Other-parent_s_ make the order of received messages ambiguous.

      4. REPAY is redundant. When a REDEEM is received, and given one can indeed redeem (the recipient has his coin at the moment of receipt), the repayment should be automatic. I.e., plainly observing that the REDEEM was accepted by the recipient is enough to derive either 1) a successful redeem or 2) a failed redeem.
    12. and the red agent accepts the payment, increasing its balance to 6 black coins.

      Why does he need to explicitly accept? Can't it be done by default? Can he reject?

    13. The black agent approves the payment

      Why does he need to approve? It is a means of equivocation detection, but it requires all coins to pass through their creator, incurring latency and possibly indefinite unavailability if the creator goes offline.

      Why is it not optional? We could exchange coins with the recipient directly, and he may come to redeem them later, if he wishes, detecting equivocation at that point.

      Some services, offering say a cup of coffee, would be willing to take the risk of losing $5 of value to a yet-to-be-detected equivocation. Since equivocators will be punished severely, it isn't worth $5 of value to them, so the service can rest assured that nobody's going to do that.

      Now, this example holds if the coffee provider prices in a currency other than its own, say a bank's.

      And banks are generally online. But still, why force it? Let them do the round-trip to the currency owner at their own choice - a tradeoff that's up to them.