- Nov 2023
-
martin.kleppmann.com martin.kleppmann.com
-
In a partially ordered system it is still possible to enforce a total order on events after the fact, as illustrated in Figure 2. We do this by attaching a logical timestamp to each event; Lamport timestamps [45] are a common choice.
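A minimal sketch of the idea, assuming each replica attaches a Lamport counter to its events and ties are broken by replica id (names are illustrative):

```typescript
// Minimal Lamport-clock sketch: each event carries (counter, replicaId);
// sorting by that pair yields one total order consistent with causality.
type LamportEvent = { counter: number; replicaId: string; payload: unknown };

class LamportClock {
  private counter = 0;
  constructor(private readonly replicaId: string) {}

  // Stamp a locally generated event.
  tick(payload: unknown): LamportEvent {
    this.counter += 1;
    return { counter: this.counter, replicaId: this.replicaId, payload };
  }

  // On receiving a remote event, advance past its counter so later local
  // events are ordered after it.
  receive(remote: LamportEvent): void {
    this.counter = Math.max(this.counter, remote.counter);
  }
}

// Total order "after the fact": compare counters, break ties by replica id.
const byLamportOrder = (a: LamportEvent, b: LamportEvent): number =>
  a.counter - b.counter || a.replicaId.localeCompare(b.replicaId);
```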
-
However, other events may be concurrent, which means that neither happened before the other; in this case, different replicas may process those events in a different order [10].
-
If permanent deletion of records is required (e.g. to delete personal data in compliance with the GDPR right to be forgotten [62]), an immutable event log requires extra care.
-
In applications with a high rate of events, storing and replaying the log may be expensive
-
the level of indirection between the event log and the resulting database state adds complexity in some types of applications that are more easily expressed in terms of state mutations
-
it is less familiar to most application developers than mutable-state databases
-
Blockchains and distributed ledgers also use SMR, in which case the chain of blocks (and the transactions therein) constitutes the event log, the ledger (e.g. the balance of every account) is the resulting state, and smart contracts or the network's built-in transaction processing logic are the state transition function [66].
-
it is easy to maintain several different views onto the same underlying event log if needed
-
If the application developers wish to change the logic for processing an event, for example to change the resulting database schema or to fix a bug, they can set up a new replica, replay the existing event log using the new processing function, switch clients to reading from the new replica instead of the old one, and then decommission the old replica [34].
-
well-designed events often capture the intent and meaning of operations better than events that are a mere side-effect of a state mutation [68].
-
- Oct 2023
-
mattweidner.com mattweidner.com
-
To refer to a piece of content, assign it an immutable Unique ID (UID). Use that UID in operations involving the content, instead of using a mutable descriptor like its index in a list.
Or, for a node in a causally aware DAG, use its hash as the ID.
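A tiny sketch of that advice (hypothetical names): operations target an item by its immutable UID, never by its current index.

```typescript
// Hypothetical list where every operation names its target by UID,
// so concurrent inserts/removes elsewhere cannot redirect the operation.
import { randomUUID } from "node:crypto";

type Item<T> = { uid: string; value: T };

class UidList<T> {
  private items: Item<T>[] = [];

  insertAfter(afterUid: string | null, value: T): string {
    const uid = randomUUID();
    const idx =
      afterUid === null ? 0 : this.items.findIndex((it) => it.uid === afterUid) + 1;
    this.items.splice(idx, 0, { uid, value });
    return uid; // callers keep this UID and use it in later operations
  }

  // Mutation addressed by UID, not by a (mutable) index.
  setValue(uid: string, value: T): void {
    const item = this.items.find((it) => it.uid === uid);
    if (item) item.value = value;
  }
}
```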
-
-
lofi.re lofi.re
-
all commits depending on an expiring commit must expire at the same time as, or earlier than the one they’re depending on.
why so?
-
Data deletion is possible by setting an expiry time for the storage objects.
This is akin to specifying "validTo". However, why remove "no longer valid" commits? They can be of use for time-travelling queries, e.g., "give me the state as of that validTime". And performing such queries at a time when something's expired could omit/filter it out.
-
its commits can be garbage collected
those, that are not used in other branches
-
or the branch owner can make a snapshot and compact the commits of the previous branch and thus remove dependencies on earlier commits
Huh, we're losing authorship this way. And commit deduplication.
-
and removing access from past members by excluding those members from the newly encrypted branch definition
why not have a removeAccess kind of commit?
Which would allow to have management of authorized parties without the need for creating a new branch.
-
in order to reduce branching in the DAG
how so?
-
and to allow deduplication of commit bodies
Nice
-
Valid commits are acknowledged by a quorum of publishers in subsequent commits.
We may end up in a scenario where a SPARQL tx that generated commits across multiple repos is only partially valid. In one repo its commit is valid, in another it's considered invalid. Leaving us in a half-horse-half-zebra state.
-
ackDelay
commitAckDelay?
-
Reliability
Transmitting an event twice is a noop when we have causal deps on events. Is this purely for optimization purpose?
-
Causal delivery
Why have it if each commit has its dependencies?
-
public key & secret
Why have secret? Is public key not enough to uniquely ID a repo?
-
Each user is responsible for acquiring access to one or more core nodes of their choice, which can be self-hosted or offered by service providers.
So the brokers are not per pub/sub / repo, but per user. They are a contact point / API / gateway for that user.
-
The core network facilitates communication among remote nodes in different edge networks, and enables asynchronous communication via store-and-forward message brokers that provide routing and storage services.
These brokers take on two responsibilities: 1) overall network health, 2) store-and-forward for a specific overlay.
They'd need to be incentivized. The stakeholders of these responsibilities differ: for 1) the stakeholders are everybody, for 2) the stakeholders are that specific overlay's members.
How is this incentivized? Can IPFS services be used for 2)? Such as Web3Storage for storage.
-
Data that has been removed by a commit remains in the branch, since all commits are kept in the branch.
Since it's content-addressed, can peers agree on not keeping / pinning such content?
-
-
nextgraph.org nextgraph.org
-
how does decentralised identity and authentication work?
e.g., auth as a DID that has public keys on it
-
CRDTs are the answer to this challenge as they bring strong eventual consistency and also offline-first capabilities.
CRDTs at the level of compute-as-data are especially interesting imo
-
E2EE forces the data to be present and manipulated locally, which in turn implies that data processing (query engine) must run in embedded/mobile devices.
Still, a third-party can be authorized with access to data and asked to perform a query on behalf of user.
-
Therefor decentralised PubSub technologies should be used in order to synchronise efficiently semantic data between peers.
Semantic Web PubSub on top of libp2p?
-
-
odin.cse.buffalo.edu odin.cse.buffalo.edu
-
O(m ∗n) time
Hmm, it seems that the proposed algorithm may terminate prior to reaching n, having found the latest dependency. Additionally, the algorithm can be restructured to complete in one pass, as "go through the log until you have found all latest deps". Then the algorithm will have time complexity up to O(n), if I'm not mistaken.
As an alternative, perhaps keeping an index per variable may provide an interesting tradeoff of time for space. UPD: this technique is described in 3.3.1.
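A sketch of that single-pass idea (names hypothetical): walk the log from newest to oldest and stop as soon as every dependency has been resolved.

```typescript
// One backwards pass over the log: O(n) entries visited in the worst case,
// and it terminates early once all latest dependencies are found.
type LogEntry = { id: number; writes: Set<string> };

function findLatestDeps(log: LogEntry[], deps: Set<string>): Map<string, number> {
  const latest = new Map<string, number>();
  for (let i = log.length - 1; i >= 0 && latest.size < deps.size; i--) {
    for (const variable of log[i].writes) {
      if (deps.has(variable) && !latest.has(variable)) {
        latest.set(variable, log[i].id); // first hit from the end = latest write
      }
    }
  }
  return latest;
}
```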
-
Connect the new node with existing DFG nodes to track any AST read dependencies.
It seems this can also be delayed until a read request, since dependencies are needed to perform computation, and computation is only needed for a read.
-
Each node in the DFG is labelled with the positive integer, which specifies their specific position in the ordered sequence.
Could instead store the last AST and have that AST point to dependent ASTs, making one AST DAG. Content-addressing of ASTs seems nice too.
-
-
bricolage.io bricolage.io
-
familiar querying power of SQL in the client.
SQL may not be that familiar for web devs.
GraphQL and dot.notation are what they're used to.
-
They both suggested emulating API request/response patterns through a distributed state machine running on a replicated object.
Have an authority to sign / process updates.
I.e., there's an authority, e.g., a shop, and it would need to process requests.
What the architecture can do is abstract away from URLs/exact endpoints, having an ID of the authority instead, and abstract away the transport layer; i.e., nodes in the network talk with each other and we don't care how, all we care about is expressing intents in data and having them get passed around somehow.
-
-
-
Invoke Invoke
Creating Invocation in order to delegate execution?
-
-
-
I had an off-channel conversation with @expede where she identified the following problems with the task/spawn / task/fork design: it introduces more primitives for things that we can already do (enqueuing more tasks); it introduces nesting of invocations (more cycles and network to discover the entire workflow); we'd need to add a version that says "but don't run it yourself" when that's not a problem that we have today; the label "spawn" reads poorly if it's going to dedup (looks like an imperative command but isn't). One point that especially resonated is that they imply you always ask the executor to run the task, as opposed to finding a prior task execution.
Good points.
-
-
github.com github.com
-
actors
couples producer with consumer. Pubsub would be a simpler approach
-
Task
Receipt, perhaps?
-
WasmTask
Task, perhaps
-
signature
Have it as part of Workflow, signing over its fields? As in UCAN and Invocation spec
-
-
github.com github.com
-
This field helps prevent replay attacks and ensures a unique CID per delegation.
have REQUIRED decisionTime instead?
-
-
github.com github.com
-
which requires gaining a global lock on the job
Perhaps an alternative strategy is to authorize side-effectful resources with "only once" restrictions?
E.g., issue a UCAN that limits the capability to 1 invocation.
E.g., one tweet.
It can be freely delegated further by the executor, but only one invocation will be possible.
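A hypothetical shape for such a restriction; the `max_invocations` caveat is invented here for illustration and is not part of any UCAN spec:

```typescript
// Sketch of a single-use capability expressed as a caveat on the delegation.
// Enforcement would have to sit with the executor/resource, which tracks how
// many invocations (receipts) already exist for this capability.
const singleUseCapability = {
  with: "https://example.com/api/tweets", // hypothetical resource
  can: "tweet/post",
  nb: { max_invocations: 1 }, // invented caveat: one invocation total, however far delegated
};
```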
-
There may simply be a tradeoff for the programmer to say "yes I really need this effect, even though I'll have fewer options".
Decouple effect from pure compute? Make it "await" that compute.
-
-
github.com github.com
-
| Success "ok" -- End task with Success object
ok != handle(error)
-
- Sep 2023
-
-
Yeah, I agree with you, at least in principle! I think that revocation and memoization get harder if we aren't including the proof CIDs in the credential directly. How would you handle these cases?
Memoization:
```
let allUCANs = findAllUCANs(UCANs, UCANRevocations) // a solution of UCANs that allows for op
let opUCANs = findOpUCANs(op, allUCANs)
let whetherWithinTimeBounds = withinTimeBounds?(opUCANs, now)
let whetherStillCan = stillCan?(opUCANs, opUCANRevocations, whetherWithinTimeBounds) // memoized
// becomes false when revocations arrive or time bounds are exceeded
```
-
I do still think that defining and spec-ing unattenuated delegation is a good idea. With all the above I also think that { with: “*”, can: “*” } is the most intuitive way.
`att` represents attenuation / restriction / narrowing down. Perhaps absence of attenuation (the `att` field) is a way to represent "nothing's restricted".
-
-
www.semanticscholar.org www.semanticscholar.org
-
because it now has to be evaluated twice
Even if the BGP evaluation engine caches results, merging the low-selectivity BGP b1 would incur the cost of joining it with the merged-in nodes b2 and b3, which is one merge more than in the original BE-tree.
-
μ1 ∈ Ω1 ∧ μ2 ∈ Ω2 ∧ μ1
Shouldn't these be intersecting / common bindings?
I.e., for each common binding, they should be equivalent.
I.e., Ω1 |><| Ω2 = {μ1 ∪ μ2 | for each common variable Vcommon, μ1(Vcommon) ~ μ2(Vcommon)}
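For reference, the standard SPARQL-algebra formulation this note is pointing at, with compatibility restricted to the shared variables:

```latex
\mu_1 \sim \mu_2 \iff \mu_1(v) = \mu_2(v)\ \text{for all}\ v \in \mathrm{dom}(\mu_1) \cap \mathrm{dom}(\mu_2)
\qquad
\Omega_1 \Join \Omega_2 = \{\, \mu_1 \cup \mu_2 \mid \mu_1 \in \Omega_1,\ \mu_2 \in \Omega_2,\ \mu_1 \sim \mu_2 \,\}
```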
-
1
meant 2, I guess
-
1
meant 2, I guess
-
-
www.semanticscholar.org www.semanticscholar.org
-
(tp1 ✶ tp2) ✶ tp3
But we need tp2 >< tp3.
This example rewrite violates that, as the optional tp2 will be added on top even if it doesn't >< with tp3.
-
E.g
By lifting optionals to the upper level, this example of
((Pa ]>< Pb) >< (Pc ]>< Pd)) ]>< (Pe ]>< Pf)
could be transformed into: ((Pa >< Pc) >< (Pa ]>< Pb) >< (Pc ]>< Pd)) ]>< (Pe ]>< Pf)
In prefix notation with multiple args to functions, it looks like: (]>< (>< Pa Pc) Pb Pd (]>< Pe Pf))
-
However, the inner-join P2 = tp2 ✶ tp3 has to be evaluated before the left-outer-join P1 ✶ P2, due to the restrictions on the reorderability of left-outer-joins.
]>< is kinda "enhancing", i.e., it can be evaluated on top of ><. And so it can be lifted to the upper layer.
E.g., tp1 ]>< (tp2 >< tp3) = tp1 >< ((tp1 ]>< tp2) >< (tp1 ]>< tp2))
E.g., (tp1 ]>< tpo1) >< (tp2 ]>< tpo2) = ((tp1 >< tp2) ]>< tpo1) >< ((tp1 >< tp2) ]>< tpo2)
Or, if we allow joins to be a function of many arguments, in prefix notation, then:
(]>< tp1 tpo1 tpo2)
(]>< (>< tp1 tp2) tpo1 tpo2)
Overall, we can build a plan where non-optionals are evaluated first and then enhanced by optionals. And it makes sense to do so; it's the least computation-expensive strategy.
-
e.g., in the case of Q2 above, left-outer-join between tp1 and tp2 cannot be performed before the inner-join P2 = (tp2 ✶ tp3).
I.e., tp1 ]>< (tp2 >< tp3) not= (tp1 ]>< tp2) >< tp3
This shows that ]>< is not associative. To show that it's not commutative:
tp1 ]>< (tp2 >< tp3) not= (tp2 >< tp3) ]>< tp1
Also, ]>< is not distributive over ><. E.g.,
tp1 ]>< (tp2 >< tp3) not= (tp1 ]>< tp2) >< (tp1 ]>< tp2)
Also, >< is not distributive over ]><. E.g.,
tp1 >< (tp2 ]>< tp3) not= (tp1 >< tp2) ]>< (tp1 >< tp2)
-
-
www.w3.org www.w3.org
-
It is useful to be able to have queries that allow information to be added to the solution where the information is available, but do not reject the solution because some part of the query pattern does not match.
Optional is meant to only accrete solution with values, never restrict a match for a solution.
-
GROUP BY
Can also be used with multiple variables.
-
(?p*(1-?discount) AS ?price)
This, and BIND, could be expressed as CONSTRUCT, allowing for uniform representation of how data's stored - as triples.
E.g.,
```sparql
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX ns: <http://example.org/ns#>

SELECT ?title ?discountedPrice
WHERE { ?x ns:discountedPrice ?discountedPrice }

CONSTRUCT { ?x ns:discountedPrice (?p*(1-?discount)) }
WHERE {
  ?x ns:price ?p .
  ?x dc:title ?title .
  ?x ns:discount ?discount
}
```
I.e.,
```clojure
(-> graph
    match-product-with-discounts
    derive-discounted-price
    (select-keys [:title :discountedPrice]))
```
-
(?p*(1-?discount) AS ?price)
This can be expressed as BIND at the level of the query, sparing the need to introduce another special bind at the level of SELECT. Although SELECT may work on GROUPed solutions.
-
GRAPH ?g
CONSTRUCT returns a graph; perhaps it would be of value to be able to further feed it into queries, perhaps as GRAPH ?constructedGraph {query}.
-
HAVING operates over grouped solution sets, in the same way that FILTER operates over un-grouped ones.
What's the value of HAVING over writing it as FILTERing of a subquery?
-
ToMultiSet
There won't be duplicate values, yet we wrap with a multiset
-
Filter( ?v1 < 3 , LeftJoin( BGP(?s :p1 ?v1), BGP(?s :p2 ?v2), true) , )
Here, the ?s that are to be removed will still get LeftJoined with the optional. Seems like redundant work. Perhaps filter first and then add the optional?
-
-
Local file Local file
-
However, after having received the complete sequence of triples, it turns out that μ2 is a solution for the query but μ1 is not; instead, the following new mapping is another solution in the (sound and complete) query result:
From what I understand, OPTIONAL would behave like that only when appearing before a non-optional clause. When it appears after a non-optional clause, it may only accrete values, but it would not restrict.
-
-
www.semanticscholar.org www.semanticscholar.org
-
In fact, as updates change the content of the RDF store, all the active subscriptions must be checked on the same RDF store snapshot
It would be possible to not block on updates if they were captured with a tx time. Then notification detection would be able to grab the delta at that tx time and run in parallel with incoming updates.
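A rough sketch of that idea (all names hypothetical): updates are tagged with a tx time, and subscription checking runs against the snapshot at that tx time while newer updates keep streaming in.

```typescript
// Non-blocking subscription checking against a versioned store.
type Update = { txTime: number; added: string[]; removed: string[] };

interface VersionedStore {
  apply(update: Update): void;              // ingest keeps going, never blocked
  snapshotAt(txTime: number): Set<string>;  // immutable view as of txTime
}

function checkSubscriptions(
  store: VersionedStore,
  update: Update,
  subscriptions: Array<(snapshot: Set<string>, delta: Update) => void>,
): void {
  const snapshot = store.snapshotAt(update.txTime);
  // Runs in parallel with later apply() calls; it only ever sees data as of txTime.
  for (const notify of subscriptions) notify(snapshot, update);
}
```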
-
-
-
ucan:./* represents all of the UCANs in the current proofs array.
If we are to have UCANs as delta restrictions, then this behaviour would be expressed automatically: all capabilities of the UCANs in proofs would be delegated as-is if no further restrictions are specified.
-
diagram
Perhaps delegation arrows could be reversed to denote that they include / reference.
Also a separate `prf` field would be of use for tracking delegation.
Also arrows between capabilities are misleading; capabilities are not references.
-
All of any scheme "owned" by a DID
", that the issuer is authorized to"?
-
"*"
Is there value in having "*", perhaps instead we could make ucan-selector optional?
-
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCIsInVjdiI6IjAuOC4xIn0
Not clear what's in it, perhaps annotate that it's a UCAN or substitute with a human-readable hash "bafy...UCAN"?
-
Escalation by adding new capability
"Amplification by adding new caveat"?
-
capability
"caveat"?
-
Removes
"Attenuates"?
-
Escalation
"Amplification"?
-
as an empty array means "in no case"
This is contrary to the intuition behind caveats: they are restrictions, and if no restrictions are set, allow all.
-
(an "OR", NOT an "and")
"and" seems useful to have.
E.g., in the example above I'd read it as "and", actually
json "mailto:username@example.com": { "msg/send": [{}], // Proof is (correctly) broader than claimed "msg/receive": [ { "max_count": 5, "templates": [ "newsletter", "marketing" ] } ] },
-
Capabilities composed from multiple proofs (see rights amplification)
Could be unified with 1., having it as
A strict subset (attenuation) of the capability authorities from the prf field
-
Ability
"Operation"? The way it's named in incovation spec.
-
Abilities
"Ability" or "Capability"?
-
Capabilities
"Abilities"?
-
Capabilities composed from multiple proofs (see rights amplification)
What's the value in having amplification?
Such composed tokens may become partially invalid if one of the proofs becomes revoked.
E.g., composing write access to / and /photos/ into /. Having the / proof become invalid would prevent access to /photos/, as it's been merged. Keeping them separate would ensure a more invalidation-resilient token. Although that is a point towards not merging rather than composition.
Also, composition may accidentally attenuate, as the merged / access inherits the timeframe of /, whereas the timeframe of /photos/ could have been larger.
Also, due to differences in caveats, merging by ability with no regard to caveats may lead to the / access attenuating the less-caveated /photos/ access. We could try to determine if the / caveats are more ample than the /photos/ caveats and merge only then, but that may be error-prone and complex to do.
Also, proofs will most likely have different valid timeframes, so the composed token will be partially invalid at some times.
-
This map is REQUIRED but MAY be empty.
Seems redundant. What's the value in having an empty map vs an absent key?
-
until
"to"?
-
identifier for multiple keys instead of an identity
"identify with multiple keys instead of a specific identifier"?
-
broader
These delegated capabilities did not attenuate
-
2.11.1 Valid Time Range
It could be additionally clarified that the delegation's validFrom + validTo time range should be contained in the time range of its proof UCAN.
-
exp
`exp` may be mistakenly thought of as "expires in some time" rather than "expires at time". Perhaps use "validFromTime" `vft` and "validToTime" `vtt` or smth?
-
Yes
Why not optional, as with `nbf`?
-
-
datatracker.ietf.org datatracker.ietf.org
-
Use of this claim is OPTIONAL.
.
-
- Aug 2023
-
github.com github.com
-
alice
bob?
-
bafy...sendBobEmailInvocation
Should it include "cause" field, pointing to updateDnsInvocation?
-
Batched
Should we reverse arrows to indicate that tasks depend / contain?
-
Executor MUST fail Task that Awaits failed output of the successful Task.
What's the value from doing so? Could Executor not run such task at all?
-
Effect
Can we represent Effect as Task?
In order to make spec more compact / generic.
-
version
of what?
-
fork and join
"sync" and "async"
-
Result
How is it different from Receipt?
-
"cause": {"/": "bafy...somePriorInvocation"},
Can it reuse await/ok approach?
-
updateDnsTask
updateDnsInstruction?
-
sendEmailTask
sendEmailInstruction?
-
createBlogPostTask
createBlogPostInstruction?
-
-
github.com github.com
-
"200GB"
Should it be in the same domain model as other byte values? E.g., [200 "giga" "bytes"].
-
[500, "kilo", "bytes"]
Would it be useful for "memory" (and other byte value fields) to support number value in bytes?
-
[500, "milli", "seconds"]
Would it be useful for "timeout" to support number value in milliseconds.
It's a rather standard approach, may be easy to use.
-
-
arxiv.org arxiv.org
-
The operation of adding up all changes is stream integration.
Akin to `reduce(previousDB, tx) => currentDB`
-
ΔV = D(↑Q(DB)) = D(↑Q(I(T)))
^Q can be generalized as yet another T, denoted as ^T (^ hints that this "live" T may be applied on top of other Ts / maintains a "live" view).
This gives the ability for a ^T to depend on other ^Ts.
So, for each ^T in Ts, ^T(I(Ts)) = ^T(DB).
Additionally, DB is a snapshot. Perhaps ^T(DB) is better denoted as ^T(Ts).
Thus the relation can be written as ΔV = D(^T(Ts)).
Additionally, D is akin to Δ; denoting it as such we end up with ΔV = Δ^T(Ts), for each ^T in Ts.
And since Ts are versioned, ^T(TsN) implicitly has access to ^T(TsN-1).
I.e., TsN contains ^T(TsN-1), for each ^T.
Which allows ^T to be incrementally computed over its previous value: ^T(^T(TsN-1), TN).
^T has a function signature akin to that of `reduce`, i.e., ^T(accumulator, sequence element).
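A minimal sketch of ^T as a reduce over the transaction log, reusing its previous value instead of recomputing from scratch (all names hypothetical):

```typescript
// ^T as a fold: each transaction batch is integrated into the previously
// materialized value, i.e. incremental view maintenance.
type Tx = { adds: string[]; removes: string[] };
type View = Set<string>;

// ^T(accumulator, transaction) -> new accumulator
function liveT(prev: View, tx: Tx): View {
  const next = new Set(prev);
  tx.removes.forEach((x) => next.delete(x));
  tx.adds.forEach((x) => next.add(x));
  return next;
}

// Integrating the whole log is just a reduce; incremental maintenance starts
// from the last materialized value instead of folding the log from scratch.
const materialize = (log: Tx[], initial: View = new Set()): View =>
  log.reduce(liveT, initial);
```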
-
-
dreamsongs.com dreamsongs.com
-
However, while developing a system, classes will be defined in various places, and it makes sense to be able to see relevant (applicable) methods adjacent to these classes.
Classes / ontologies are not a core feature of the language.
It's how we have RDF and OWL - they're separate.
Classes can be built on top of pure functions and data - these two are the core, nothing else.
Perhaps even functions can be eliminated from the core. A function is a template of some computation. It can be baked into the program, since names are a user-level feature.
So we end up with data and ops on it as the core, plus some flow control primitives (perhaps `or` and `and` is enough). The rest can be built on top. As to what data to provide, multisets seem to be the most universal / least restrictive data structure, out of which more special data structures can be derived. And with the advent of SSDs we are not limited by performance to sequential reads, so perhaps it'll be not all-too-crazy to switch from lists to multisets as the basic structural block of programs.
-
There will also be means of associating a name with the generic function.
A naming system is not a core part of a language.
A naming system serves two purposes:
1. Create structure of a program
2. Give a user-friendly interface
You don't need 2. in the core of your language. How data (your program) is displayed should be up to the end-user (programmer). If he wants to see it as text formatted as a LISP - his choice; as text in a Java-like style - ok; Haskell-like - sure; visual - no prob.
Having languages as data allows just that. It helps us get rid of the accidental complexity of managing a syntax-heavy bag of text files (and having compilers). E.g., how Unison lang has the AST as a data structure and a text-based interface to tweak it.
Having code as data would also make run-time tweaking easier, bringing us closer to the promise of LISP.
And also all the rest of the neat features on top of content-addressing of code that are now waaay easier to implement, such as incremental compilation, distributed compute, caching.
Have names as a user-level feature, their personal dictionaries. Some will call the reducing function `reduce`, some `fold`, some `foldr`, some will represent it as a triangle (for visual code management).
-
-
more elaborate object-oriented support
It is in no part a core feature of a language.
Mainstream OOP is complex and has many responsibilities.
OOP as envisioned by its creator is the actor model - state management (state + managing actor) paired with linking actors together - a complex approach. It can be broken down into its primitives. And OOP can be constructed out of them, if so desired, but not at the core level.
A good reference of language decomplection is Clojure.
-
In the second layer I include multiple values
Treating single values as a special case of multiple values is generally more performant.
-
the elaborate IO functions
IO is not the core of the language. It's more of a utility layer that allows the language to speak to the outside world.
-
macros
Would be nice to have them at run-time.
-
and very basic object-oriented support.
OOP is an abstraction that is useful for a very narrow range of use-cases, adding accidental complexity to the others. Personally, I'd leave it out of the core.
-
I believe nothing in the kernel need be dynamically redefinable.
This moves us away from the value of LISP as a meta-language that can change itself. We have macros at compile time, not at run-time. Having them at run-time gives us the power we were originally promised by the LISP philosophy. Having no run-time dynamism would not allow for this feature.
I.e., having the codebase as a persistent data structure, tweakable at run-time, sounds powerful.
-
Our environments should not discriminate against non-Lisp programmers the way existing environments do. Lisp is not the center of the world.
A possible approach to that is having LISP hosted, providing a simpler interface on top of established but less simple and expressive environments. E.g., the Clojure way.
-
All interested parties must step forward for the longer-term effort.
This is an effort to battle the LISP Curse. It would be a great movement, as this is one of LISP's weak spots (another is adoption, which Clojure tries to solve).
-
And soon they will have incremental compilation and loading.
That would be great. And it is: Unison Lang gives us this power. Hope for wider adoption.
Having a content-addressable codebase simplifies it a ton.
Perhaps an AST can be built on top of IPVM, akin to Unison Lang, but for any of the 40+ WASM langs out there.
-
and using the same basic data representations
Having common data structures would be great.
Atm each language implements the same data structures on its own.
Would be nice to have them protocol-based, implementation-agnostic.
Likely, IPLD gives us the common data structures.
-
The very best Lisp foreign functionality is simply a joke when faced with the above reality.
Been true. Fixed in Clojure. Clojure is top-tier in integration with host platforms.
-
The real problem has been that almost no progress in Lisp environments has been made in the last 10 years.
That's a good point. Again, not something inherent in LISPs. Possibly due to lack of adoption and the LISP Curse.
-
Seventh, environments are not multi-user when almost all interesting software is now written in groups.
Not an inherent trait of LISP envs. And LISPs do have multiuser envs, e.g., REPL to which you can connect multiple clients.
-
Sixth, using the environment is difficult. There are too many things to know. It’s just too hard to manage the mechanics.
Environments for LISPs are simpler due to simplicity of the language they're for. For a more complex language they'd be only more complex and hence more difficult to use.
-
what is fully defined and what is partially defined
Not sure I get that
-
Fifth, information is not brought to bear at the right times.
How is that a trait of non-LISP environments?
More than that, it seems LISPs give you greater introspection. E.g., because of the REPL. E.g., because of macros that can sprinkle code with introspection in the dev environment and be rid of it in the prod env.
-
Fourth, they do not address the software lifecycle in any extensive way.
How is that an inherent property of LISP environments?
Surely they can be extended with all the mentioned properties.
It does require effort to implement, and with sufficient resources behind an environment it would be wise to invest there.
E.g., Emacs has great docs. The Clojure ecosystem has great docs and is a joy to work in.
I'd say LISPs have greater potential user-friendliness due to the simplicity of the interface. A simple interface + good docs is more user-friendly than a complex interface + great docs.
And you don't need that much docs in the first place for a simple interface.
Also, a simple, well-designed interface can serve as documentation itself, because you can grok it instead of going through docs. You mostly need docs for the mindset.
-
Third, they are not multi-lingual even when foreign interfaces are available.
This is great. It's a desirable trait. But I don't see how that is a unique value available only for Non-Lisp environments.
-
Files are used to keep persistent data -- how 1960s.
What's wrong with that? Files are universally accessible on a machine (and with IPFS - across machines); it seems to be a good design for the times. Any programs can interoperate through files - a common interface.
Files are 'caches' of computations made.
Sure, it would be nice to capture the computations behind them as well, although that was not practical back then and is not that much needed.
But nowadays IPVM does just that at a globe-scale. And, thankfully, we also have data structures as the means to communicate, and not custom text.
I don't see what's wrong with that approach. Taking it further (as IPVM does) gives a next level of simplicity and interoperability, along with immutability/persistence - a game changer.
-
In fact, I believe no currently available Lisp environment has any serious amount of integration.
Well, that's a shame. Composability is a valuable trait of computer programs, user interfaces included. The fact that they're not composable may mean that the problem domain is not well known, so it wasn't clear what the components are. Perhaps with time it'll become clearer. This, interestingly, is a non-the-right-thing approach: UI got shipped to satisfy the need without covering all the cases (integration of UIs). A lean startup approach. E.g., Emacs started as non-composable and is now turning into composable.
-
The virus lives while the complex organism is stillborn. Lisp must adapt, not the other way around.
What's "right" is context-dependent. For programmers the right thing will be a simple and performant and mainstream etc. language.
LISP did not check all the boxes back then. Clojure now tries to get closer to checking what a programmer may need in production, and has a broader success.
Clojure had effort in it's design to make it a simple-interface thing, and it's excellent in that. It had effort in making it easy to adopt. So it's a well-design virus. The right thing. Virality is one of the traits of the right thing, in the context of production programming.
-
You know, you cannot write production code as bad as this in C.
Performance is not the only metric of the "goodness" of code. Simplicity is one of them.
-
The following examples of badly performing Lisp programs were all written by competent Lisp programmers while writing real applications that were intended for deployment
Often performance is not the highest value for business. It is especially so when we have ever-growing, powerful hardware.
LISP allows for simplicity of interface. You can iterate on making the implementation performant later on, if you need to.
-
The lesson to be learned from this is that it is often undesirable to go for the right thing first.
Great, don't go 100% in. Especially since those last 20% take 80% of the time. But please do have a good interface design in those 50%. It is there to stay.
-
The right thing is frequently a monolithic piece of software
Unix is a mono kernel. C is a fixed language, whereas LISP can be extended with macros.
Composability is a trait of good design. I'd expect The Right Thing approach to produce composable products, and the Worse Is Better approach to produce complected ones.
-
there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better
It is more fun to play with simple things. They're more rewarding.
C is simpler in implementation than a LISP; it's more fun to play with its compiler. LISP is simpler in interface than C; it's more fun to play with it as a language (hence the LISP Curse).
I wonder why we don't have a C Curse at the level of the compiler though. Or do we?
-
and third will be improved to a point that is almost the right thing
Huge doubts there. The original interface will stay there in some way or another. And it is complex. So users will pay its cost from then on.
E.g., Java is OOP; it introduces functional style, but OOP stays there. Some folks would like to switch to functional, but there is pressure from the legacy codebase and the legacy mindset around to carry on in the same fashion.
E.g., C is still C. C++ and other C* are not far away in the simplicity of their interface.
Unix is still a text-based chatter mono kernel.
It seems hard to impossible to change the core design, so in Worse Is Better it stays Worse.
-
namely, implementation simplicity was more important than interface simplicity.
Can you go far with such a design before getting bogged down in accidental complexity from having complex abstractions?
Abstracting away is the bread and butter of programming. In order to do it efficiently you need simple abstractions - a simple interface. For this task interface simplicity is way more valuable than implementation simplicity. E.g., you may have a functional interface on top of an imperative implementation.
-
Early Unix and C are examples of the use of this school of design
C is complex, compared to LISPs.
Unix has a mono kernel and a ton of C.
Are they the crux of simplicity, which is the highest value of Worse is Better?
-
-
windowsreport.com windowsreport.com
-
it’s about half of the global population at 47.1 percent
.
-
there are 7.26 billion mobile phone users worldwide
.
-
- Jun 2023
-
en.wikipedia.org en.wikipedia.org
-
and is often intentionally limited further to reduce instability introduced by a fluctuating tickrate
Although an alternative approach of processing inputs as soon as they arrive is used as well, and may provide for better experience.
E.g., as seen in CS2.
-
-
github.com github.com
-
The second solution for persistent replication has to do with swapping pubsub for IPNS.
This is a fine solution for discovering the latest known version of a machine's log. However, using it as the primary way would mean machines need to: store the log in IPFS, publish it to IPNS, while others need to resolve IPNS -> IPFS (and perhaps get notified in some way to know that there's a change) and fetch from IPFS. As a solution for syncing state between two machines it will be pretty costly in the time and computation required. As a solution to persist a local log for others' occasional offline access, it seems fine.
-
since pubsub messages are not persistent
They can be made persistent by storing a log of messages to IPFS though, as OrbitDB does.
The fact that they're not persistent by default may be a plus, as persistence is cost-heavy and can be done when required rather than always.
-
- May 2023
-
www.w3.org www.w3.org
-
The rule QuadData, used in INSERT DATA and DELETE DATA, must not allow variables in the quad patterns.
.
-
because there is no match of bindings and so no solutions are eliminated.
didn't get it, I thought MINUS acts as disjoin
-
-
www.w3.org www.w3.org
-
Variables in QuadDatas are disallowed in INSERT DATA requests
.
-
-
arxiv.org arxiv.org
-
Furthermore, there are concerns regarding relevance and trustworthiness of results, given that sources are selected dynamically.
Perhaps immutability can be provided by having domain name of URL point to an immutable value, say content-addressable RDF or content-addressable log of txes.
-
-
-
Loading
.
-
-
oparu.uni-ulm.de oparu.uni-ulm.de
-
This leads to an eventually consistent semantics for queries.
Hmm. The query store may be eventually consistent with the command store. However, the issued query returned a stale result and there is no 'eventually getting the correct result' for it.
So we may well end up in an inconsistent state, where there is a command that transacts stuff based on a stale query.
Capturing the query dependencies of a command in the command itself would allow for re-evaluation of queries on a consistent state.
-
Where CRUD offers four operations to apply modifications, event-sourced systems are restrained to only one operation: append-only.
CRUD operations can be captured as events in an event log though.
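A small sketch of that point (illustrative types): the CRUD mutations recorded as appended events, with current state derived by replay.

```typescript
// CRUD captured as events in an append-only log; reads query the derived state.
type CrudEvent =
  | { kind: "created"; id: string; value: unknown }
  | { kind: "updated"; id: string; value: unknown }
  | { kind: "deleted"; id: string };

function replay(log: CrudEvent[]): Map<string, unknown> {
  const state = new Map<string, unknown>();
  for (const e of log) {
    if (e.kind === "deleted") state.delete(e.id);
    else state.set(e.id, e.value);
  }
  return state;
}
```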
-
-
docs.libp2p.io docs.libp2p.io
-
Peerings are bidirectional
That's a strange restriction. A full-message connection has its cost, and I'd imagine peers would like to minimize that by being able to set it unidirectionally and per-topic.
-
-
github.com github.com
-
Run your own instance of *-star signalling service. The default ones are under high load and should be used only for tests and development.
.
-
-
www.w3.org www.w3.org
-
QuadData denotes triples to be removed and is as described in INSERT DATA, with the difference that in a DELETE DATA operation neither variables nor blank nodes are allowed
.
-
-
github.com github.com
-
path.friends.filter
Can this be captured as SPARQL FILTER instead? Then we can keep on building the query and delegate filtering to SPARQL engine.
-
-
arrow.apache.org arrow.apache.org
-
Array slots which are null are not required to have a particular value; any “masked” memory can have any value and need not be zeroed, though implementations frequently choose to zero memory for null values.
May not zeroing result in unintended access to data previously stored in that physical layout?
E.g., if I intentionally create fully zeroed arrays and read the previous physical layout.
-
-
arrow.apache.org arrow.apache.org
-
The Apache Arrow format allows computational routines and execution engines to maximize their efficiency when scanning and iterating large chunks of data.
This will be damn handy for ECS engines. The memory layout, shown in the associated figure, organizes data in the way such engines are querying for it.
-
- Apr 2023
-
telegra.ph telegra.ph
-
As for the benefit of breakfast for weight loss: it also has not yet been established.
Time-restricted feeding causes fat mass reduction, according to Dr. Satchin Panda.
It may be implemented as skipping breakfast, but it should not be breakfast necessarily.
However, there are benefits found in adopting early time-restricted feeding (skipping supper), as presented in this video.
-
-
-
In the study by Burke DG [6], scientists examined the 24-hour excretion of creatine and its breakdown product creatinine from the body and concluded that no more than 50 mg/kg of the supplement is absorbed per day; everything else is excreted in urine.
Not sure that's the takeaway of the study
-
This means there is no point in taking more than 5-7 g of creatine per day.
Loading phase does increase muscle creatine levels drastically and it relies on ~20g/day. So there are effects from high dosages.
-
immediately after the workout, and not before it
That may not be correct.
According to this figure, metabolic changes that increase absorption begin to appear during the workout. If we are to take advantage of them, perhaps we'd like to have the peak creatine level at that point. However, it takes about 45 minutes for blood creatine levels to peak. Thus, it may be beneficial to ingest creatine 45 minutes prior to the workout.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Figure 1
.
-
-
gist.github.com gist.github.com
-
Trusty URIs are URLs just for convenience, so you can use good old HTTP to get them (in most cases), but the framework described above gives you several options that are highly unlikely to all fail at once.
To me Trusty URIs seem to have complected two concepts, which makes them bad at both.
These concepts are: mutable name + immutable content.
If you care about content-based addressing - then a mutable name as part of it is of no value.
If you care about resolution of immutable content from a name - you're locked into the domain name service, whereas there may be many parties online that have the content you want and it could have been resolved from them.
To me it seems IPNS got it right, decoupling the two, allowing mutable names on top of immutable content.
So you can resolve name -> content-based name, from peers.
So you can refer by content-based name.
So you can deref content-based name, from peers.
-
-
inqlab.net inqlab.net
-
it prevents the right to be forgotten
It seems that by maintaining a 'blacklist' of removed entries per DMC we can both preserve the log of ops intact and remove the information of the removed log op entry.
-
Implementations MAY also maintain a set of garbage collected block references.
I'd imagine it's a MUST. Otherwise a replica may receive content it just removed and happily keep on storing it. Keeping such a 'blacklist' is an elegant solution. Such lists are ever-growing however, and perhaps could be trimmed in some way, e.g., after everybody who's been holding the content signed that it's been removed. Although even then nothing stops somebody from uploading the newly deleted content.
I guess another solution would be not to delete somebody's content but to depersonalize that somebody. So the content stays intact, signed by that somebody, however any personal information of that somebody gets removed, leaving only their public key. That would require personal information to be stored in one mutable place that is not content-addressed.
-
However, this creates permanent links to past operations that can no longer be forgotten. This prevents the right to be forgotten and does not seem like a viable solution for DMC (Section 6.2.1).
That is a valid concern.
Perhaps we could have them both: partial order + removal of entities.
I guess it could be achieved by having an op log with a 'remove entity by its hash' op. Logs would not preserve data for such entities. However, in order to not re-hash log entries from the 'removed' log entry onward, logs could keep the hash of the removed entry (but not its data).
Maybe that's the way removal's done in OrbitDB.
-
Public-key cryptographic signatures are used to ensure that operations are issued by authorized entitites.
Speaking about mutable graphs, signing ops seems to be a superior technique compared to signing entities, as when signing ops the triples get signed, giving finer granularity. So there may exist entities that are combined out of triples signed by different authorities. Finer profiling.
-
Operation to add an additional authorized key to a container (dmc:AddKey see Section 4.6)
Reifying key addition as yet another op seems like a good idea. More generally it's about managing the container's meta in the container's state. One great benefit from it is that meta-ops and ops converge.
-
mutating operations and key revocations do not commute.
Perhaps having a deterministic order for operations would solve that problem. Then if key revocation happens before the op with that key, the op is dropped; if after, it's preserved.
Akin to how ops are ordered in ipfs-log.
That requires key revocation to be reified - to be a plain op log entry.
-
-
-
Why not Matrix?
Also it's mutable - server authorities have moderation powers.
-
-
docs.unity3d.com docs.unity3d.com
-
For example, if you want to find all entities that have component types A and B, you can find all the archetypes with those component types, which is more performant than scanning through all individual entities.
Archetypes seem to be a kind of index. However, for the example given that index does not get used for its purpose. It seems a more fitting solution would be to keep an index of entities per set of components that your code actually filters by; e.g., such sets would come from Systems.
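A rough sketch of the per-query component-set index imagined here (hypothetical, not Unity's actual implementation):

```typescript
// Entities grouped by exactly the component sets that systems query for,
// so answering a registered query is a single map lookup.
type Entity = number;

class ComponentSetIndex {
  private index = new Map<string, Set<Entity>>(); // key = sorted component names

  private key(components: Iterable<string>): string {
    return [...components].sort().join("+");
  }

  // Called once per system/query, e.g. register(["A", "B"]).
  register(queryComponents: string[]): void {
    const k = this.key(queryComponents);
    if (!this.index.has(k)) this.index.set(k, new Set());
  }

  // Called whenever an entity's component set changes.
  onEntityChanged(entity: Entity, components: Set<string>): void {
    for (const [k, entities] of this.index) {
      const needed = k.split("+");
      if (needed.every((c) => components.has(c))) entities.add(entity);
      else entities.delete(entity);
    }
  }

  query(queryComponents: string[]): Set<Entity> {
    return this.index.get(this.key(queryComponents)) ?? new Set();
  }
}
```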
-
-
carbonunits.org carbonunits.org
-
Responsibilities as a fiscal host.
We may remove it, as this reference was meant for internal use
-
Benefits from using OpenCollective (OC) with CarbonUnits (CU):
We can rephrase it as: "Benefits from using Open Collective with Carbon Units:"
-
OpenCollective
To be more professional we can write it in a more correct way: Open Collective
-
OC
We can replace it with the full name: Open Collective
-
This also allows for currently censored teammates (e.g., in Russia) to get paid.
This can be removed.
-
Bring more services to Web3Workshop
Can be rephrased to: "Provide more services to Web3Workshop participants"
-
(perhaps) Close integration with IPFS project-raising ecosystem
Can be rephrased to: "Close integration with ProtocolLabs project-raising ecosystem"
-
(perhaps) Attract the first customer that wants a Web3 application
Can be rephrased to: "Attract customers that want Web3 applications"
-
- Mar 2023
-
github.com github.com
-
sig
.
-
-
comunica.dev comunica.dev
-
In order to allow Comunica to produce more efficient query plans, you can optionally expose a countQuads method that has the same signature as match, but returns a number or Promise<number> that represents (an estimate of) the number of quads that would match the given quad pattern.
The amount of quads in a source may be ever-growing.
Then we couldn't count them. Is that a problem for Comunica or will it handle infinite streams fine?
I.e., the stream returned by `match()` keeps accreting new values (e.g., as they are being produced by somebody).
-
If Comunica does not detect a countQuads method, it will fallback to a sub-optimal counting mechanism where match will be called again to manually count the number of matches.
Can't Comunica count quads as they arrive through the stream returned by `match()`?
-
-
www.w3.org www.w3.org
-
WITH <http://example/bookStore> DELETE { ?book ?p ?v } WHERE { ?book dc:date ?date ; dc:type dcmitype:PhysicalObject . FILTER ( ?date < "2000-01-01T00:00:00-02:00"^^xsd:dateTime ) ?book ?p ?v }
Can the DELETE clause be right below the above INSERT clause, so we don't need to repeat the same WHERE twice?
Also it would be nice to have transactional guarantees on the whole query.
-
-
docs.opencollective.com docs.opencollective.com
-
Fiscal hosting enables Collectives to transact financially without needing to legally incorporate.
.
-
The fiscal host is responsible for taxes, accounting, compliance, financial admin, and paying expenses approved by the Collective’s core contributors (admins).
.
-
-
-
An improvement for helia would be to switch this around and have the monorepo push changes out to the split-out repos, the reason being the sync job runs on a timer and GitHub disables the timer if no changes are observed for a month or so, which means a maintainer has to manually go through and re-enable the timer for every split-out repo periodically - see ipfs-examples/js-ipfs-examples#44
An ideal design seems to me to be the monorepo pulling from its dependent repos. It would allow for granular codebase management in repos, yet you can have discoverability via the monorepo. Also it would not complect repos with knowledge of the monorepo.
-
-
pl-strflt.notion.site pl-strflt.notion.site
-
Graded Q4 OKRs
Having more aligned developers improving PL Stack is a #1 goal of EngRes in 2022.
-
- Feb 2023
-
-
What about companies for whom core-js helped and helps to make big money? It's almost all big companies. Let's rephrase this old tweet: Company: "We'd like to use SQL Server Enterprise" MS: "That'll be a quarter million dollars + $20K/month" Company: "Ok!" ... Company: "We'd like to use core-js" core-js: "Ok! npm i core-js" Company: "Cool" core-js: "Would you like to help contribute financially?" Company: "lol no"
Corps optimise for money. Giving away money for nothing in return goes against their nature.
-
-
expathub.ge expathub.ge
-
Turnover (within the last 12 month period) of the sponsoring LLC or IE should exceed 50,000 GEL for each foreigner (director or employee) in the business
.
-
- Dec 2022
-
-
"scopes": { "/scope2/": { "a": "/a-2.mjs" }, "/scope2/scope3/": { "b": "/b-3.mjs" } }
An alternative domain model could be:
{"a": "/a.js" "b": "/b.js" "path1": {"a": "/a1.js" {"path11": {"b": "/b11.js"}}}}
This way path to a name is a 'scope'/'namespace'. Also we're spared the need of "/" in scope's names. It does look harder to parse visually than a flat model.. -
"scopes": { "/scope2/": { "a": "/a-2.mjs" }, "/scope2/scope3/": { "b": "/b-3.mjs" } }
An alternative domain model could be:
{:imports {} :scopes {:imports {} :scopes {}}}
This is more verbose, but more uniform.
-
- Nov 2022
-
docs.datomic.com docs.datomic.com
-
Attributes of type :db.type/bytes cannot be found by value in queries (see bytes limitations).
Bytes could be treated as an entity and resolved by a CID.
-
Datomic does not know that the variable ?reference is guaranteed to refer to a reference attribute, and will not perform entity identifier resolution for ?country
Datomic may learn at query execution time that it's a ref attribute; I'd expect it to be able to resolve a ref by its id in such a case.
-
- Oct 2022
-
expathub.ge expathub.ge
-
In Georgia, and many other countries, income which is earned through actual work performed within the country, whether the income comes from a foreign source or not, is considered to be Georgian source income. So, if you have a client who pays you from abroad, direct to a foreign bank account, even if the money never arrives in Georgia, it is still Georgian source income as the work was performed here.
.
-
- Sep 2022
-
book.fulcrologic.com book.fulcrologic.com
-
The form state support is about just that: form state. It basically keeps a "pristine" copy of the data of one or more entities and allows you to track if (and how) a particular field has been interacted with.
Having modifications done to the actual entity in the db results in reactive behaviour.
E.g., you have an app called "Test App", and you display "Test App" in the navbar in place of a logo. Also you have a form that lets you modify it. When you modify "Test App" to "My Supa Test App" it gets updated in the navbar as you type (and across all other places). That may not be what the user wants, as likely they want to set a new value. This is akin to the problem of validation, where we don't want to show "field's incorrect" when the user did not touch the field or is still touching it.
Perhaps keeping the "dirty" form state separate, and making it the subject of modification, would be a solution to that, keeping the actually used value in the DB pristine. I.e., swap "dirty" and "pristine".
-
-
pathom3.wsscode.com pathom3.wsscode.com
-
#com.wsscode.pathom3.connect.operation.Resolver
Also may contain `inferred-input` (seems it's Pathom's effort to guess what the input is by analyzing fn params destructuring).
-
#com.wsscode.pathom3.connect.operation.Resolver
Also may contain `docstring`
-
requires
Lists only the required attrs; optionals (via pco/?) will be listed under `optionals`.
-
- Aug 2022
-
lepiller.eu lepiller.eu
-
DISPLAY=$DISPLAY
Can't we pass it via --share as we did with others?
-
XAUTHORITY=$XAUTHORITY
Didn't we already pass the XAUTHORITY env via --share?
-
$PROFILE/lib=/lib64
Couldn't we add a symlink from /lib64 pointing to /lib of this environment?
-