3,021 Matching Annotations
  1. May 2022
  2. autonomous-data.noeldemartin.com
    1. Autonomous

      This term is well-suited for the sort of thing I was going for with S4/BYFOB.

      @tomcritchlow's comment about being hobbled by CORS in his attempt to set up an app[1] that is capable of working with his Library JSON is relevant. With a BYFOB "relay" pretty much anyone can get around CORS restrictions (without the intervention of their server administrator). The mechanism of the relay is pretty deserving of the "autonomous" label—perhaps even more so than Noel's original conception of what Autonomous Data really means...

      1. https://library-json-node-2.tomcritchlow.repl.co/library?url=https://tomcritchlow.com/library.json
    1. instead of the “Mastodon appraoch” we take the “Replit approach”

      I'm confused by the continual references to Replit. Once you have Replit-style power, you can do Mastodon interop—but it keeps you dependent on third-party SaaS. Continuing to violate the principle of least power isn't really any improvement. If you're going to shoot for displacing the status quo, it should be to enable public participation from people who have nothing more than a Neocities account or a static site published with GitHub Pages or one of the many other providers. Once you bring "live" backends into this (in contrast to "dead" media like RSS/Atom), you've pretty much compromised the whole thing.

    2. Here’s a super rough proof of concept Replit tiny library.

      There's nothing about this that requires Replit (or NodeJS, for that matter). The whole thing can be achieved by writing a program to run on the script engine that everyone already has access to—the one in the browser. No servers required.
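
      A minimal sketch of what that can look like, run entirely in the browser. The library.json shape assumed here ({ "books": [{ "title": ..., "author": ... }] }) is hypothetical and may not match the real file, and the fetch is still subject to the CORS issue mentioned above unless a relay is involved:

        async function renderLibrary(url) {
          const response = await fetch(url);  // still subject to CORS without a relay
          const library = await response.json();
          const list = document.createElement("ul");
          for (const book of library.books ?? []) {
            const item = document.createElement("li");
            item.textContent = book.title + " by " + book.author;
            list.appendChild(item);
          }
          document.body.appendChild(list);
        }

        renderLibrary("https://tomcritchlow.com/library.json");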

    3. Here’s a real example. A while back I posted up some thoughts about a decentralized Goodreads: Library JSON - A Proposal for a Decentralized Goodreads. The idea is that a million individual static sites can publish their book lists in a way that allows us to build Goodreads-esque behavior on top of it.

      A sort of "backend on the frontend".

      A similar "BYFOB" design principle was the basis for a proposal to bring "Solid[-like] services for static sites" into existence. I submitted this proposal to NLnet in their call for applications for their user-operated Internet fund. It was not accepted.

    4. What happens if - maybe! - there’s a model of decentralization that feels more like a bunch of weird Replits networking with each other.

      Get rid of the networking, and make it more like the RSS/Atom model.

      ActivityPub, for example, shouldn't really require active server support if you just want to publish to the clear Web (i.e. have no use for DMs). Anyone, anywhere can add RSS/Atom "support" to their blog—it's just dumping another asset on their (possibly static!) site. Not so with something like Mastodon, which is unfortunate. It violates the Principle of Least Power at a fundamental level.

    5. There’s no export button - everything is automatically replicated in all three places
    1. I wrote about my idea for Library.json a while back. It’s this idea that we might be able to rebuild these monolithic centralized services like Goodreads using nothing by a little RSS.

      See also this thread with Noel De Martin, discussing a (Solid-based) organizer for your media library/watchlist: https://noeldemartin.social/@noeldemartin/105646436548899306

      It shouldn't require Solid-level powers to run this. A design based upon "inert" data like RSS/Atom/JSON feeds (that don't require a smart backend to take on the role of an active participant in the protocol) would beat every attempt at Solid, ActivityPub, etc. that has been tried so far. "Inert"/"dead" media that works by just dumping some content on a Web-reachable endpoint somewhere, including a static site, is always going to be more accessible/approachable than something that requires either a server plug-in or a whole new backend to handle.

      The litmus test for any new proposal for a social protocol should be, "If I can't join the conversation by thumping on my SSG to get it to produce the right kind of output—the way that it's possible with RSS/Atom—then the design is fundamentally flawed and needs to be fixed."

    1. Maybe Mozilla could buy up Glitch and integrate it natively inside Firefox? Maybe BeakerBrowser will get enough traction and look beyond the P2P web? Maybe the Browser Company will do something like this?

      Before Keybase died, I had hopes that they would do something kind of like this. It'd work by installing worker services in the Keybase client and/or allowing you to connect to network-attached compute like AWS or DigitalOcean (or some Keybase-operated service) to seamlessly process worker requests when your laptop was offline. The main draw would be a friendly UI in the Keybase client for managing your workers. Too bad!

    2. Imagine if node.js shipped inside Chrome by default!

      There was something like that, in HTML5's pre-history: Google Gears.

      I've thought for a long time that someone should resurrect it (in spirit, that is) for the world's modern needs. Instead of running around getting everyone to install NodeJS and exhorting them to npm install && npm run, people can install the "Gears 2" browser extension which drastically expands the scope of the browser's capabilities, and you can distribute app bundles that get installed "into" Gears.

      Beaker (mentioned later in this post) was an interesting attempt. I followed them for a while. But its maintainers didn't seem to appreciate the value of a frictionless onboarding experience, which could have been made possible by e.g. working to allow people to continue using legacy Web browsers and distributing an optional plug-in, in the vein of what I just described about Gears.

    3. Can you imagine if the beginner version of Node.js came pre-installed with a GUI for managing and running your code?

      A graphical JS interpreter? That's the browser! And it just so happens that it's already installed, too (everywhere; not just on Macs).

    4. build a browser that comes pre-installed with node.js

      Nah. Just stop programming directly against NodeJS to start with!

      The Web platform is a multi-vendor standardized effort involving broad agreement to implement a set of common interfaces. NodeJS is a single implementation of a set of APIs that seemed good (to the NodeJS developers) at the time, and that could change whenever the NodeJS project decides it makes sense to.

      (Projects like WebRun which try to provide a shim to let people continue to program against NodeJS's APIs but run the result in the browser are a fool's errand. Incredibly tempting, but definitely the wrong way to go about tackling the problem.)

    5. But… on installing node.js you’re greeted with this screen (wtf is user/local/bin in $path?), and left to fire up the command line.

      Agreed. NodeJS is developer tooling. It's well past the time when we should have started packaging up apps/utilities that are written in JS so that they can run directly in* the browser—instead of shamelessly targeting NodeJS's non-standard APIs (on the off-chance everyone in your audience is a technical user and/or already has it installed).

      This is exactly the crusade I've been on (intermittently) when I've had the resources (time/opportunity) to work on it.

      Eliminate implicit step zero from software development. Make your projects' meta-tooling accessible to all potential contributors.

      * And I do mean "in the browser"—not "on a server somewhere that you are able to use your browser to access, à la modern SaaS/PaaS"

    6. An incomplete list of things I’ve tried and failed to do
    1. To run it you need node.js installed, and from the command line run npm install once inside that directory to install the library dependencies. Then node run.js <yourExportedDirectory>

      Why require Node?

      Everything that this script does could be better accomplished (read: be made more accessible to a wider audience) if it weren't implemented by programming against NodeJS's non-standard APIs and it were meant to run in the browser instead.

    1. Theoretically, there are many plugins for webservers adding support for scripting using any scripting language you can name. These are sometimes used to host full-blown web applications but I don't see them being used to facilitate mildly dynamic functionality.

      All in all, despite its own flaws, I think this piece hints at a useful ontology for understanding the nuanced, difficult-to-name, POLP-violating design flaws in stuff like Mastodon/ActivityPub—and why BYFOB/S4 is a better fit, esp. for non-technical people.

      https://hypothes.is/search?q=%22black+and+dead+is+all+you+need%22+user:mrcolbyrussell

    2. the former allows me to give an URL to a piece of code

      But you're not! When you wield PHP like this, there is no URL for the piece of code per se—only its (potentially fleeting) output—unless you take special care to make that piece of code available as content otherwise. PHP snippets are just as deserving of a minted identifier issued for them as, say, JS and CSS resources are—perhaps even just as deserving as the content being served up on the site, but PHP actually discourages this.

    3. it's far easier for me to write a PHP script and rsync it to a web server of mine
    4. It's long been fairly apparent to me that the average modern web developer has no comprehension of what the web actually is3

      Agreed, but it's a very ironic remark, given the author's own position...

    5. The only reasonable implementation options are JavaScript and PHP.

      I argue that PHP is not reasonable here. The only appropriate thing for this use case is (unminified) JS—or some other program text encoded as a document resource permitting introspection and that the user agent just happens to be able to execute/simulate.*

      • Just like the advocates of "a little jQuery", the author here doesn't seem to realize that the use of PHP was the first step towards what is widely acknowledged to be messed up about the "modern" Web. People can pine for the days of simple server-side rendering, but there's no use denying that today's Web is the natural result of an outgrowth that began with abuses of the fundamental mechanisms underpinning the Web—abuses that first took root with PHP.

      * Refer to the fourth and sixth laws of "Sane Personal Computing", esp. re "reveals purpose"

    6. how does one support comments? Answer: Specialist third-party services like Disqus come into existence. Now, you can have comments on your website just by adding a <script> tag, and not have to traverse the painful vertical line of making your website itself even slightly dynamic.

      Controversial opinion: this is actually closer to doing the Web the way that it should be done, taking the intent of its design into account. NB: this is not exculpatory of minified JS bundles (where "megabyte" is the appropriate order of magnitude for measuring their weight) or anything about "modern" SPAs that thumb their nose at graceful degradation.

    7. an URL

      "an URL"? C'mon.

    8. It's not surprising at all, therefore, that people tend not to do this nowadays.

      I dunno how sound this conclusion is. Even for static sites, there are lower friction ways to do them, but people usually opt for the higher friction paths...

    9. You can read the “Effort” axis as whatever you like here; size, complexity, resource consumption, maintenance burden.

      Hey, look, it's an actually good example of the "steep learning curve".

      (I never understood why people insist that referring to it as a steep curve is wrong; clearly the decisions about your axes are going to have an impact on the thing. It seems that everyone who brings this up is insisting on laying out their graph the wrong way and implicitly arguing that other people need to take responsibility for it.)

    10. Perhaps each page load shows a different, randomly chosen header image.

      That makes them constitute separate editions. It makes things messy.

    11. They might have a style selector at the top of each page, causing a cookie to be set, and the server to serve a different stylesheet on every subsequent page load.

      Unnecessary violation of the Principle of Least Power.

      No active server component is necessary for this. It can be handled by the user agent's content negotiation.
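
      A minimal sketch of one no-server way to handle it: client-side switching persisted by the user agent instead of a cookie plus server logic (content negotiation proper would be another route). The element ID and attribute names here are illustrative; the stylesheets would key off of html[data-style="..."] selectors:

        const picker = document.querySelector("#style-picker"); // a <select> of style names
        const saved = localStorage.getItem("preferred-style");
        if (saved) document.documentElement.setAttribute("data-style", saved);
        picker.addEventListener("change", () => {
          document.documentElement.setAttribute("data-style", picker.value);
          localStorage.setItem("preferred-style", picker.value); // nothing on the server changes
        });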

    1. I grew up on PHP, it was the first thing beyond BASIC I ever wrote

      Should we lean into that? Maybe some sort of "server BASIC" is what we need.

      NB: it need not (read: "should not") actually be a BASIC; it's more about a shared spirit (see also: Hypercard)

    1. My argument for the use of the Web as a medium for publishing the procedures by which the documents from a given authority are themselves published shares something in common with the argument for exploiting Lisp's homoiconicity to represent a program as a data structure that is expressed like any other list.

      There are traces here as well from the influence of the von Neumann computational model, where programs and data are not "typed" such that they belong to different "classes" of storage—they are one and the same.

    1. However when you look UNDERNEATH these cloud services, you get a KERNEL and a SHELL. That is the "timeless API" I'm writing to.

      It's not nearly as timeless as a person might have themselves believe, though. (That's the "predilection" for certain technologies and doing things in a certain way creeping in and exerting its influence over what should otherwise be clear and sober unbiased thought.)

      There's basically one timeless API, and that means written procedures capable of being carried out by a human if/when everything else inevitably fails. The best formats that we have for conveying the content comprising those procedures are the formats native to the Web browser—esp. HTML. Really. Nothing else even comes close. (NB: pixel-perfect reproduction à la PDF is out of scope, and PDF makes a bunch of tradeoffs to try to achieve that kind of fidelity which turn out to make it unsuitable/unacceptable in a way that HTML is not, if you're being honest with your criteria, which is something that most people who advocate for PDF's benefits are not—usually having deceived even themselves.)

      Given that Web browsers also expose a programming environment, the next logical step involves making sure these procedures are written to exploit that environment as a means of automation—for doing the drudge work in the here and now (i.e., in the meantime, when things haven't yet fallen apart).

    1. who hosts that?

      Answer: it's hosted under the same auspices as the main content. The "editor" is first-class content (in the vein of ANPD); it's really just another document describing detailed procedures for how the site gets updated.

    1. Lines 1-7 represent quads, where the first element constitutes the graph IRI.

      Uh, it's the last element, though, not the first—right?

    2. Square brackets represent here a blank node. Predicate-object pairs within the square brackets are interpreted as triples with the blank node as subject. Lines starting with '#' represent comments.

      Bad idea to introduce this notation here at the same time as the (unexplained) use of square brackets to group a list of objects.

    3. RDF provides no standard way to convey this semantic assumption (i.e., that graph names represent the source of the RDF data) to other readers of the dataset.

      Lame.

    4. The datatype is appended to the literal through a ^^ delimiter.

      Were parens taken?

    5. A resource without a global identifier, such as the painting's cypress tree, can be represented in RDF by a blank node.

      Terrible choice for a name.

      What was wrong with some variation of "anonymous"?

    1. Cool URIs Don't Change

      But this one did. Or rather, it used to be resolvable as http://infomesh.net/2001/08/swtips/ (note the trailing slash), but now that returns 404 and is only available as http://infomesh.net/2001/08/swtips (no trailing slash).

    1. I like to keep things on the web if I can, permanently archived, because you never know when somebody will find them useful or interesting anyway.

      But Semantic Web Tips http://infomesh.net/2001/08/swtips/ is returning 404...

    1. The events list is created with JS, yes. But that's the only thing on the whole site (~25 pages) that works that way.Here's another site I maintain this way where the events list is plain HTML: https://www.kingfisherband.com

      There's an unnecessary dichotomy here between "uses JS" and "page is served as HTML". There's a middle ground, where the JS can do the same thing that it does now, but it only does so at edit time—in the post author's own browser, but not in others'. Once the post author is ready to publish an update, the client-side generated content is captured as plain HTML, and then they upload that. It still "uses JS", but crucially it doesn't require the visitor to have their browser do it (and for it to be repeated N times, once per page visit)...
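
      A minimal sketch of that edit-time capture step, to be run in the author's own browser once the JS has generated the events list; the file name is illustrative:

        function captureRenderedPage() {
          // Serialize the fully-rendered DOM (events list included) to plain HTML.
          const html = "<!DOCTYPE html>\n" + document.documentElement.outerHTML;
          const blob = new Blob([html], { type: "text/html" });
          const link = document.createElement("a");
          link.href = URL.createObjectURL(blob);
          link.download = "index.html"; // upload this in place of the served page
          link.click();
        }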

    1. A great case study in how the chest-puffing associated with the certain folks in certain segments of the compiled languages crowd can be cover for some truly embarrassing blunders.

      (Also a great antidote against a frequent argument by self-taught "full stack" devs; understanding the runtime complexity of your program is important.)

    1. At one level this is true, but at another level how long is the life of the information that you're putting into your wiki now, and how sure are you that something this could never happen to your wiki software over that lifetime?

      I dunno. Was the wiki software in question MediaWiki?

      I always thought it was weird when people would set up a wiki and go for something that wasn't MediaWiki (even though I have my own quibbles with it). MediaWiki was always the clear winner to me, even in 2012 without the benefit of another 10 years of hindsight.

    1. copying and pasting into an online html  editor, then hitting the clean up button?   Copy this cleaned up html into one of your  posts, save it, and view.

      This could/should be part of Zonelets itself.

    1. Updating the script

      This is less than ideal. Besides non-technical people needing to wade into the middle of (what very well might appear to them to be a blob of) JS to update their site, here are some things that Zonelets depends on JS for:

      1. The entire contents of the post archives page
      2. The footer
      3. The page title

      This has real consequences for e.g. the archivability of a Zonelets site.

      The JS-editing problem itself could be partially ameliorated with something like the polyglot trick used on flems.io and/or the way triple scripts do runtime feature detection using shunting. When the script is sourced via a script element from another page, it behaves as JS, but when visited directly as the browser destination it is treated like HTML and has its own DOM tree for the script itself to make the necessary modifications easier. Instead of requiring the user to edit it as freeform text, provide a structured editing interface, so e.g. adding a new post is as simple as clicking the synthesized "+" button in the list of posts, copying the URL of the post in question, and then pasting it in—to a form field. The Zonelets script itself should take care of munging it into the appropriate format upon form "submission". It can also, right there, take care of the escaping issue described in the FAQ—allow the user to preview the generated post title and fix it up if need be.
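
      For the escaping piece of that, a minimal sketch of what the form handler behind the synthesized "+" button might do; the entry shape is hypothetical and not Zonelets' actual internal format:

        function escapeHtml(text) {
          return text
            .replace(/&/g, "&amp;")
            .replace(/</g, "&lt;")
            .replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;");
        }

        function makePostEntry(url, title) {
          // Munge the pasted URL and title into the format the script expects,
          // instead of asking the author to hand-edit freeform JS.
          return { url: url.trim(), title: escapeHtml(title.trim()) };
        }

        console.log(makePostEntry(" posts/hello.html ", 'Me & my "blog" <3'));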

      Additionally, the archives page need not be dynamically generated by the client—or rather, it can be dynamically filled in exactly once per update—on the author's machine, and then be reified into static HTML, with the user being instructed to save it and overwrite the served version. This gets too unwieldy for next/prev links in the footer, but (a) those are non-essential, and don't even get displayed for no-JS users right now, anyway; and (b) can be seen to violate the entire "UNPROFESSIONAL" ethos.

      Alternatively, the entire editing experience can be complemented with bookmarklets.

    1. Linux itself only started as an amateur project, not professional like Minix, right

      That should be "big and professional like gnu".

    1. it’s hard to look at recent subscription newsletter darling, Substack, without thinking about the increasingly unpredictable paywalls of yesteryear’s blogging darling, Medium. In theory you can simply replatform every five or six years, but cool URIs don’t change and replatforming significantly harms content discovery and distribution.
  3. Apr 2022
    1. "Show me the proof," they said. Here it is. That's the source code. All of it. In all of it's beautiful, wild and raw mess.

      This is how to open source something. "Open source" means that it's published under an open source license. That's it. There's nothing else required.

    1. A ZIP file MUST have only one "end of central directory record"

      There are a few ways to interpret this, one of them unintuitive: it is actually acceptable for a given bytestream to contain multiple blobs that look like the end of central directory record (having the right signature and size/shape), but only one of them is actually the end of central directory record. The requirement that a ZIP have only one means that all the others aren't actually end of central directory records; they are nonetheless free to appear in the bytestream, because not being an end of central directory record means their existence doesn't violate the spec.

    1. Without special care you'd get files that aren't supposed to exist or errors from trying to overwrite existing files.

      Yes, and that's just one of the reasons why scanning from the front is invalid. There's nothing special about the signature in file records—it's just a four-byte sequence that might make its way into the resulting ZIP without any intent to denote a file record. If you scan from the front and assume that encountering the signature means a file exists there without cross-referencing the central directory, it means your implementation treats junk bytes as meaningful to the structure of the file, which is not a good idea.

    2. That suggests the central directory might not reference all the files in the zip file

      Sure, but that doesn't mean that it's valid to treat the existence of those bytes as if that file is still "in" the ZIP. They should be treated exactly as any other blob that just happens to have some bytes that match the shape of what a file record would look like if there were actually supposed to be a file there.

    3. their

      "there"

    4. What if the offset to the central directory is 1,347,093,766? That offset is 0x504b0506 so it will appear to be end central directory header.

      This is, I think, the only legitimate criticism here so far. All the others that amount to questions of "back-to-front or front-to-back?" can be answered: back-to-front.

      This particular issue, however, can be worked around by padding the central directory one byte (or four) so that it's not at offset 1,347,093,766. Even then, the flexibility in the format and this easy solution mean that even this criticism is mostly defanged.

    1. function Zip(_io, _parent, _root) {
         this._io = _io;
         this._parent = _parent;
         this._root = _root || this;
         this._read();
       }
       Zip.prototype._read = function() {
         this.sections = [];
         var i = 0;
         while (!this._io.isEof()) {
           this.sections.push(new PkSection(this._io, this, this._root));
           i++;
         }
       }

      Although the generated code is very useful...

      This is wrong. It treats the ZIP format as if (à la PNG) it's a concatenated series of records/chunks marked by ZIP's characteristic "PK"-prefixed, 4-byte magic numbers. It isn't. The only way to read a ZIP bytestream is to start from the end, look for the signature that denotes the possible presence, at the current byte offset, of the record containing the central directory metadata, proceed to validate* the file based on that, and then operate on it appropriately. (* If validation fails, you can continue scanning backwards from the offset that was thought to be the signature.)

      The first validation attempt that passes when carried out in this manner (from back to front) "wins"—validation may succeed starting from more than one offset, but only the candidate nearest the end of the bytestream is authoritative. If every validation attempt fails, the file may be corrupt, and the implementation may attempt to "repair" it (not necessarily by making on-disk modifications, but merely by being generous with its interpretation of the bytestream—perhaps presenting several different options to the user), or, alternatively, it may be the case that the file is simply not a ZIP archive.

      This is because a ZIP file is permitted to have its records be little embedded "data islands" (in a sea of unrelated bytes). This is what allows spanned/multi-disk archives and for the ZIP to be modified by updating the bytestream in an append-only way (or selectively rubbing out parts of the existing central directory and updating the pointers/offsets in-place). It's also what allows self-extracting archives to be self-extracting: foremost, they conform to the binary executable format and include code for being able to open the very same executable, process the records embedded within it, and write them to disk.
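
      A minimal sketch (not a full validator) of the back-to-front reading described above, written against a plain Uint8Array so it runs in the browser. A real reader would go on to validate the central directory this record points to, and keep scanning backwards on failure:

        function findEocdOffset(bytes) {
          const SIG = [0x50, 0x4b, 0x05, 0x06]; // "PK\x05\x06"
          const MIN_EOCD = 22;                  // size of the fixed part of the record
          // The record may be followed only by its own comment (at most 65535
          // bytes), so the backwards search window is bounded.
          const lowest = Math.max(0, bytes.length - MIN_EOCD - 0xffff);
          for (let i = bytes.length - MIN_EOCD; i >= lowest; i--) {
            if (bytes[i] === SIG[0] && bytes[i + 1] === SIG[1] &&
                bytes[i + 2] === SIG[2] && bytes[i + 3] === SIG[3]) {
              return i; // candidate nearest the end; validate starting from here
            }
          }
          return -1; // no candidate: corrupt, or not a ZIP at all
        }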

    1. I wanted all of my Go code to just deal with JSON HTTP bodies

      Lame. Hopefully it's at least checking the content type and returning an appropriate status code with a helpful message.

      (PS: it wouldn't be multipart/form-data, anyway; the default is application/x-www-form-urlencoded.)

    2. I'm not sure what $name is

      This post is filled with programming/debugging missteps that are the result of nothing other than overlooking what's already right in front of the person who's writing.

    3. comparing the event and window.event isn't enough to know if event is a variable in scope in the function or if it's being looked up in the window object

      Sounds like a good use case for an expansion to the jsmirrors API.

    1. Over the past half-century, the number of men per capita behind bars has more than quadrupled.

      I haven't read the original source.

      Is it possible that this has to do with stricter enforcement of existing laws (or even new ones that criminalize previously "acceptable" behavior, like drunk driving)? Today, being arrested is a pretty big deal—a black mark for sure. Subjectively, it seems that on the whole people of the WWII, Korean War, and Vietnam War eras were more rambunctious and society was more tolerant of it (since it was a lot more common, any of the potentially aggrieved parties would have likely engaged in similar stuff themselves).

    1. What I like best about pdf files is that I can just give them to someone and be almost certain that any questions will be about the content rather than the format of the file.

      Almost every time I've used FedEx's "Print and Go" for a PDF I've created by "printing" e.g. HTML (and that I've verified looks good when previewing it on-screen), it comes out mangled when actually printed to paper.

    1. One significant point of design that Tschichold abandoned was the practice of subordinating the organization of all text elements around an invisible central axis (stay with me here.) What that means is that a designer builds out all the design elements of a book from that nonexistent axis “as if there were some focal point in the center of a line which would justify such an arrangement,” Tschichold wrote. But this, he determined, imposed an artificial central order on the layout of a text. It was an illogical practice, because readers don’t start reading from the center of a book, but from the sides.

      Okay, I stuck it out like the author here requested but I'm still left wondering what any of this is in reference to.

    2. folios (the word used by designers for page numbers)

      Huh? You sure about that?

    1. Notes from Underground

      Standard Ebooks's search needs to incorporate alternate titles. I tried searching first for "the underground man" (my fault) but then I tried "notes from the underground", which turned up nothing. I then began to try searching for Dostoyevsky, but stopped myself when I realized the fruitlessness, because even being unsure if search worked across author names, I knew that I had no idea which transliteration Standard Ebooks was using.

    2. Why is Standard Ebooks sending content-security-policy: default-src 'self';? This is not an appropriate use. (And it keeps things like the Hypothesis sidebar from loading.)

    1. <title>Notes from Underground, by Fyodor Dostoevsky. Translated by Constance Garnett - Free ebook download - Standard Ebooks: Free and liberated ebooks, carefully produced for the true book lover.</title>

      This is way too long. (And when I try saving the page, Firefox stops me because the suggested file name—derived from the title—is too long. NB: that's a bug in both standardebooks.org and Firefox.)

    1. I’ll also note that there’s the potential of a reply on Hypothes.is to a prior reply to a canonical URL source. In that case it could be either marked up as a reply to the “parent” on Hypothesis and/or a reply to the canonical source URL, or even both so that webmentions could be sent further upstream.

      You can also "reply" by annotating the standalone (/a/...) page for a given annotation.

    2. Webmention functioning properly will require this canonical URL to exist on the page to be able to send notifications and have them be received properly

      It's also just annoying when trying to get at the original resource (or its URL for reference).

    3. all the data on this particular page seems to be rendered using JavaScript rather than being raw HTML
    1. could a few carefully-placed lines of jQuery

      Look, jQuery is not lightweight.* It's how we got into this mess.

      * Does it require half a gigabyte of dev dependencies and regular dependencies to create a Hello, World application? No, but it's still not lightweight.

    2. The SE server is also responsible for building ebooks when they get released or updated. This is done using our ebook production command line toolset.

      It would be great if these tools were also authored to be a book—a comprehensive, machine-executable runbook.

    3. inscrutible

      Should be "inscrutable".

    4. There's way too much excuse-making in this post.

      They're books. If there's any defensible* reason for making the technical decision to go with "inert" media, then a bunch of books has to be it.

      * Even this framing is wrong. There's a clear and obvious impedance mismatch between the Web platform as designed and the junk that people squirt down the tubes at people. If there's anyone who should be coming up with excuses to justify what they're doing, that burden should rest upon the people perverting the vision of the Web and treating it unlike the way it's supposed to be used—not folks like acabal and amitp who are doing the right thing...

    5. Everything is rendered server-side before it reaches your browser.

      How about removing the "rendering" completely and making it a static site?

    1. even in their own personal spaces

      But your blog post on my screen is not in your personal space any more than your book/pamphlet/whatever lying open on my desk is (which is to say: not at all)... it's my space.

    2. thinking they’re simply querying texts.

      Huh?

    1. There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole.
    1. File not found (404 error)

      This is perverse (i.e. an instance of Morissette's false irony).

    1. it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise
    1. At a higher level of grouping, we have more trouble. This is the level of DLLs, microservices, and remote APIs. We barely have language for talking about that. We have no word for that level of structure.
  4. small-tech.org
    1. Ongoing research Building on our work with Site.js, we’ve begun working on two interrelated tools: NodeKit The successor to Site.js, NodeKit brings back the original ease of buildless web development to a modern stack based on Node.js that includes a superset of Svelte called NodeScript, JSDB, automatic TLS support, WebSockets, and more.

      "How much of your love of chocolate has to do with your designs for life that are informed by your religious creed? Is it incidental or essential?"

    1. The percentage of Democrats who are worried about speaking their mind is just about identical to the percentage of Republicans who self-censor: 39 and 40 percent, respectively

      What are Republicans worrying about when they self-censor? Being perceived as too far right and trying to appear more moderate, or catching criticism from their political peers if they were to express skepticism about some of the goofiest positions that Republicans are associated with at the moment?

    2. knowing that we could lose status if we don’t believe in something causes us to be more likely to believe in it to guard against that loss. Considerations of what happens to our own reputation guides our beliefs, leading us to adopt a popular view to preserve or enhance our social positions

      Belief, or professed belief? Probably both, but how much of this is conscious/strategic versus happening in the background?

    3. Interestingly, though, expertise appears to influence persuasion only if the individual is identified as an expert before they communicate their message. Research has found that when a person is told the source is an expert after listening to the message, this new information does not increase the person’s likelihood of believing the message.
    4. Many have discovered an argument hack. They don’t need to argue that something is false. They just need to show that it’s associated with low status. The converse is also true: You don’t need to argue that something is true. You just need to show that it’s associated with high status.
    1. This comment makes the classic mistake of mixing up the universal quantifier ("for all X") and the existential quantifier ("there exists [at least one] X"), when (although neither is used) the only thing implied is the latter.

      https://en.wikipedia.org/wiki/Universal_quantification

      https://en.wikipedia.org/wiki/Existential_quantification

      What the average teen is like doesn't compromise Stallman's position. If one "gamer" (is that a taxonomic class?) follows through, then that's perfectly in line with Stallman's mission and the previously avowed position that "Saying No to unjust computing even once is help".

      https://www.gnu.org/philosophy/saying-no-even-once.html

    1. the C standard — because that would be tremendously complicated, and tremendously hard to use

      "the C standard [... is] tremendously complicated, and tremendously hard to use [...] full of wrinkles and [...] complex rules"

    2. It should be illegal to sell a computer that doesn't let users install software of their own from source code.

      Should it? This effectively outlaws a certain type of good: the ability to purchase a computer of that sort, even if that's what you actually want.

      Perhaps instead it should be illegal to offer those types of computers for sale if the same computer without restrictions on installation isn't also available.

    1. myvchoicesofwhatto.attemptandxIiwht,not,t)a-ttemptwveredleterni'ine(toaneiiubarma-inohiomlreatextentbyconsidlerationsofclrclfea-sibilitv

      my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility.

    1. I am now ready to start rendering in GIMP. I use GIMP because it's good and free, but I prefer PAINT for the drawing.

      GIMP needs an MSPaint "persona".

    1. In his response, Shirky responds that he was unable to keep up with overhead of running a WordPress blog, so it fell into disrepair and eventually disappeared.

      Is this proof, from one of the biggest social critics on the topic of crowdsourcing (known for taking a favorable stance on it), that it doesn't work? Separately, is the promise of the Web itself a false one?

      I'm not going to say "yes", but I will say—and have said before—that the "New Social" era of the 2010s saw a change of environment from when Here Comes Everybody was written. I think it highlights a weakness of counter-institutional organization—by definition the results aren't "sticky"—that's the purview of institutions. What's more, even institutions aren't purely cumulative.

    1. I have a theory that most people conceptualize progress as this monotonically increasing curve over time, but progress is actually punctuated. It's discrete. And the world even tolerates regress in this curve.
    1. I was already aware that images cannot be inserted in the DOM like you would any normal image. If you write <img src="https://my-pod.com/recipes/ramen.jpg">, this will probably fail to render the image. That happens because the image will be private, and the POD can't return its contents without proper authentication.
    1. hopefully feed readers can treat permanent redirects as a sign to permanently update their feed URLs, then I can remove it. They probably don't, much like bookmarks don't
    2. I saw the need for a JSON feed format

      Did you really, though? Probably there was a need for a feed data source that was as easy to work with as parsing a JSON object, I'm betting. And I'd wager that there was no real obstacle to writing a FeedDataSource that would ingest the Atom feed and then allow you to call toString("application/json") to achieve the same effect.
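
      A minimal sketch of that hypothetical FeedDataSource, using the browser's DOMParser; the class name, the toString signature, and the output shape are all illustrative rather than any real library's API:

        class FeedDataSource {
          constructor(atomText) {
            this.doc = new DOMParser().parseFromString(atomText, "application/xml");
          }
          toString(type) {
            if (type !== "application/json") throw new Error("unsupported type: " + type);
            const items = [...this.doc.querySelectorAll("entry")].map((entry) => ({
              title: entry.querySelector("title")?.textContent ?? "",
              url: entry.querySelector("link")?.getAttribute("href") ?? "",
              updated: entry.querySelector("updated")?.textContent ?? "",
            }));
            return JSON.stringify({
              title: this.doc.querySelector("feed > title")?.textContent ?? "",
              items,
            });
          }
        }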

    1. The model of high fixed cost, low marginal cost applies to pretty much every consumer good or service sold in the industrial age. If you think publishing the first copy of a piece of software is hard, try setting up a new production line in a factory, or opening a restaurant, or producing a music album or movie.
    1. The marginal cost to Zoom of onboarding a new customer is almost zero
    2. When you’re building a house, you have a pretty good idea of how many people that house will impact. The market has already demonstrated that they’ll pay for a roof and a walk-in shower and a state of the art heated toilet seat. If you erect a sturdy 4 bedroom, 2 ½ bathroom house with these amenities in a desirable neighborhood, you can rest easy knowing that you’ll be able to sell it. This is not how software works. The main problem here is that houses and software have wildly different marginal costs. If I build a house and my neighbor really likes it and wants to live in a house that’s exactly the same, it will cost them almost as much to build as it cost me. Sure, they might be able to save a few thousand dollars on architectural fees, but they’ll still need wood, wires, and boatloads of time from skilled plumbers, electricians, and carpenters to assemble those raw materials into something resembling a house. Marginal cost is just the cost to produce one more unit of something - in this case, one more house. Contrast this with software, which lives in the realm of often-near-zero marginal costs.

      Devops-heavy software changes this. Is GitHub standard fare the way it is because developers are knowingly or unknowingly trying to keep themselves employed—like the old canard/conspiracy theory about how doctors aren't interested in healing anyone, because they want them sick?

    1. To drive this point home:

      I sometimes get people who balk at my characterization of GitHub-style anti-wikis as being inferior to, you know, actual wikis. "You can just use the GitHub UI to edit the files", they'll sometimes say.

      A case study: a couple days ago, I noticed that the lone link in the current README for Jeff Zucker's solid-rest project is a 404. I made a note of it. Just now, I reset my GitLab password, logged in to solid/chat, and notified Jeff https://gitter.im/solid/chat?at=611976c009a1c273827b3bd1. Jeff's response was, "I'll change it".

      This case is rich with examples of what makes unwikis so goddamn inefficient to work with. First, my thought upon finding the broken link was to take note of it (i.e. so that it can eventually be taken care of) rather than fixing it immediately, as would have been the case with a wiki. More on this in a bit. Secondly, my eventual action was still not something that directly addressed the problem—it was to notify someone else† of the problem so that it might be fixed by them, due to the unwiki nature of piles of Git-managed markdown. Thirdly, even Jeff's reflex is not to immediately change it—much like my reaction, his is to note the need for a fix himself and then to tell me he's going to change it, which he will presumably eventually do. Tons of rigamarole just to get a link fixed‡ that remains broken even after having gone through all this.

      † Any attempt to point the finger at me here (i.e. coming up short from having taken the wrong action—notifying somebody rather than doing it myself) would be getting it wrong. First, the fact that I can't just make an edit without taking into account the myriad privacy issues that GitHub presents is materially relevant! Secondly, even if I had been willing to ignore that thorn (or jump through the necessary hoops to work around it) and had used the GitHub web UI as prescribed, it still would have ended up as a request for someone else to actually take action on, because I don't control the project.

      ‡ Any attempt to quibble here that I'm talking about changing a README and not (what GitHub considers) a wiki page gets it wrong. We're here precisely because GitHub's unwikis are a bunch of files of markdown. The experience of changing an unwiki page would be rife with the same problems as encountered here.

    1. Yes, the website itself is a project that welcomes contributions. If you’d like to add information, fix errors, or make improvements (especially to the visual design), talk to @ivanreese in the Slack or open a PR.

      Contribution: make it a wiki. (An actual wiki, not GitHub-style anti-wikis.)

    2. If that project doesn’t pan out, we’ll set up an existing wiki or similar collaborative knowledge system at the start of 2021.

      Oops.

    1. Each entry is a Markdown file stored in the _pages directory.

      Nah, dude. That's not a wiki.

    1. Meta note: I always have difficulty reading Linus Lee. I don't know what it is. Typography? Choices in composition (that is, writing)? Other things to do with composition (that is, visually)?

    1. JavaScript is popular outside of the browser almost entirely on the merit of its ecosystem, its tooling, and the trivial debugging experience enabled by the repl.

      Disagree. JS is popular because its tooling was easy to get started with (i.e. did have merit)—but that's not really the case any more.

    1. I find the tendency of people to frame their thinking in terms of other people's involvement in the projects in question being to make money (as with GitHubbers "contributing"/participating by filing issues only because they're trying to help themselves on some blocker they ran into at work) pretty annoying.

      The comments here boil down to, "You say it should be rewarding? Yeah, well, if you're getting paid for it, you shouldn't have any expectations that the act itself will be rewarding."

      I actually agree with this perspective. It's the basis of my comments in Nobody wants to work on infrastructure that "if you get an infusion of cash that leaves you with a source of funding for your project[...] then the absolute first thing you should start throwing money at is making sure all the boring stuff that [no] one wants to work". I just don't think it's particularly relevant to what Kartik is arguing for here.

      Open source means that apps are home-cooked meals. Some people get paid to cook, but most people don't. Imagine, though, if the state of home cooking were such that it took far more than one hour's effort (say several days, or a week) before we could reap an appreciable reward—tonight's dinner—and that this were true, by and large, for almost every one of your meals. That's not the case with home cooking, fortunately, but that is what open source is like. The existence of professional chefs doesn't change that. There's still something wrong that could stand to be fixed.

    2. great, I'm genuinely happy for you. But

      In other words, "allow me to go on and recontextualize this to the point of irrelevancy to the original purpose of this piece of writing".

    3. the employment contract I've signed promises a year of payment after a year of effort

      I find that hard to believe. It could be true—I might be wrong—but it's pretty odd if so and still far from the norm.

      The very fact that this "year of payment after a year of effort" probably really means "an annual salary of c$100k per year (for some c > 1), paid fractionally every two weeks" pretty much undermines this bad attempt at a retort.

    1. I feel like better code visualization would solve a lot of my problems. Or at least highlight them.

      The other commenter talks about a typical sw.eng. approach to visualization (flame graphs), but I want programs visualized as a manufacturing/packing/assembly line on a factory floor. Almost like node editors like Unreal's Blueprints, but in three dimensions, and shit visibly moving around between tools on the line in a way that you can actually perceive. Run the testcase on a loop, so you have a constant stream of "shit visibly moving around", and it runs at fractional speed so the whole process takes, say 10 seconds from front-to-back instead of near instantaneously like it normally would (and allow the person who's debugging to control the time scaling, of course). You find bugs by walking the line and seeing, "oh, man, this purple shit is supposed to be a flanged green gewgaw at this point in the line", so you walk over and fix it.

      (This is what I want VR to really be used for, instead of what's capturing people's attention today—games and lame substitutes for real world interaction like Zuckerberg is advocating for.)

    1. I want an hour of reward

      marktani asks (not unreasonably):

      what is an hour of reward?

      Kartik's response[1] is adequate, I feel.

      1. https://news.ycombinator.com/item?id=30041447
    2. NB: This piece has been through many revisions (or, rather, many different pieces have been published with this identifier).

      Check it out with the Wayback Machine: https://web.archive.org/web/*/http://akkartik.name/about

    3. A big cause of complex software is compatibility and the requirement to support old features forever.

      I don't think so. I think it's rather the opposite. Churn is one of the biggest causes for what makes modifying software difficult. I agree, however, with the later remarks about making it easy to delete code where it's no longer useful.

    4. An untrusted program must inspire confidence that it can be run without causing damage.

      I don't think you can get this with anything other than a sandbox, cf Web browsers.

      I'm not sure I understand what it means to say that programs must "tell the truth about their side effects". What if they don't? (How do you make sure what they're telling is the truth?)

    5. Most programs today yield insight only after days or weeks of unrewarded effort. I want an hour of reward for an hour (or three) of effort.
    1. to allow ample space for editing the source code of scripts, the form overflows the page, which requires scrolling down to find the "save" button and avoid losing changes

      The "Edit" button should morph into a "Save" button instead.

    2. bag

      In honor of Haketilo's original name, Hachette, bags might be called "sachets" instead.

    1. it's worthwhile to learn the ins and outs of coding a page and having the page come out exactly the way you want

      No. Again, this is exactly what I don't want. (I don't mean as an author—e.g. burdened with hand-coding my own website—I mean as a person whose eyes will land on the pages that other people have authored with far more frequency than I look at my own content.)

      As I mentioned—not just in my earlier responses to this piece, but elsewhere—the constant hammering on about how much control personal homepages give the author over the final product is absolutely the wrong way to appeal to people. In the first place, it only appeals to probably about a tenth of as many people as the people advocating think it does. In the second place, the results can be, and usually are, not good; cf. MySpace.

      The best thing about Facebook, Twitter, etc? Finally getting people to separate content from presentation. Hand-coded CSS doesn't get you there alone. That's basically a talisman (false shibboleth). Facebook content is in a database somewhere. The presentation gets layered on through subsequent transformations into different HTML+CSS views.

    2. The web is a mire of toxic and empty content because we, as users, took the easy path and decided to consume content instead of creating it.

      As I've said elsewhere about the fediverse: a funny thing to me is that Mastodon, while architecturally guilty of the same sort of things as the big social networks outlined in this post (centralization around nodes instead of small, personal, digital homes), is often touted as being able to deliver a lot of the same benefits, but I don't see them. For one thing, start a webring and get to know your virtual neighbors sounds a lot like "you get to pick your own instance". But secondly, and most relevant to the passage here, is that the types of people I run across in the fediverse for the most part all seem to be of a certain "type". As someone who doesn't use Twitter, joining Mastodon put me in contact with a lot more toxicity than not. I don't see how followthrough on the grand vision here—which will certainly almost exclusively be undertaken by the same sorts of people you find in the fediverse (with this piece, the author shows they know their audience and go right in on the pandering)—won't result in much difference from what you can get on any typical Mastodon instance—i.e. exactly the sort of thing that's basically the template for pieces like this one.

    3. Tim Berners-Lee released the code for the web so that no corporation could control the web or force users to pay for it.

      While the precondition is true, there is an ahistorical suggestion/understanding of the history of the Web at play here.

    4. So seek out new and interesting sites, and link to them on your site. Reach out to them, and see if they'll link to you. Start a dialog. The way to build a better web is to build a better web of people.

      About half the touted benefits of the approach this piece advocates for could be better achieved by instead convincing people to be more forthcoming with making themselves available for contact by e-mail.

      Hand-coding websites (something that I actually do for myself) does not inherently presage the sorts of things the author thinks that it does. It could be true that we would be just as well off—if not better—if everyone were to go out and sign up for Owlstown and treat it like a content depository.

    5. Building amateur web pages increases the quality of content on the web as well.
    6. The anti-vaccine and anti-mask groups are a prime example

      Anti-vax positions are the product of anonymity? Pretty sure the harbingers of the anti-vax nonsense are groups of Facebook moms...

    7. Hidden behind a veil of anonymity

      I'm not sure that modern conventions have resulted in heightened use (and abuse) of anonymity. The trend seems to be in the opposite direction. Most major social networks, incl. Facebook, Twitter, and GitHub, either require/favor use of real names or manage to cultivate a sense of obligation from most of their users to do so. Reddit is more old-style Web than any of the others, and anonymity is by far the norm. It's actually very, very weird to go on Reddit and use your real name unless you're a public figure doing an AMA, and it's only slightly less uncommon to post under a pseudonym but have your real-life identity divulged elsewhere, out-of-band. Staying anonymous on Reddit is almost treated as sacred.

      Likewise in the actual old Web, during the age of instant messengers and GeoCities, people pretty much did everything behind a "screen name".

    8. the real reason nobody was collecting massive amounts of data on users was that nobody had thought to do it yet. Nobody had foreseen that compiling a massive database of individual users likes, dislikes, and habits would be valuable to advertisers.
    9. This appeal would have a greater effect if it weren't itself published in a format that exhibits so much of what was less desirable about the pre-modern Web—fixed layouts that show no concern for how I'm viewing this page and cause horizontal scrollbars, overly stylized MySpace-ish presentation, and a general imposition of the author's preferences and affinity for kitsch above all else—all things that we don't want.

      I say this as someone who is not a fan of the trends in the modern Web. Responsive layouts and legible typography are not casualties of the modern Web, however. Rather, they exhibit the best parts of its maturation. If we can move the Web out of adolescence and get rid of the troublesome aspects, we'd be doing pretty good.

    1. Not realizing that you need to remove the roadblocks that prevent you from scaling up the number of unpaid contributors and contributions is like finding a genie and not checking to see if your first wish could be for more wishes.

      Corollary: nobody wants janky project infrastructure to be a roadblock to getting work done. It should not take weeks or days of unrewarded effort in the unfamiliar codebase of a familiar program before a user is confident enough to make changes of their own.

      I used to use the phrase 48-hour competency. Kartik says "an hour (or three)". Yesterday I riffed on the idea that "you have 20 seconds to compile". I think all of these are reasonable metrics for a new rubric for software practice.

    1. except its codebase is completely incomprehensible to anyone except the original maintainer. Or maybe no one can seem to get it to build, not for lack of trying but just due to sheer esotericism. It meets the definition of free software, but how useful is it to the user if it doesn't already do what they want it to, and they have no way to make it do so?

      Kartik made a similar remark in an older version of his mission page:

      Open source would more fully deliver on its promise; are the sources truly open if they take too long to grok, so nobody makes the effort?

      https://web.archive.org/web/20140903010656/http://akkartik.name/about

    1. CuteDepravity asks:

      With minimal initial funding (ex: $10k) How would you go about making 1 million dollars in a year ?

      My response is as follows:

      Buy $10k of precious metals (or pick your preferred funge, e.g. cryptocurrency), and then turn around and sell it for a net loss of $1. Repeat this 100 more times, using the sale price of your last sale as the capital for your next purchase and selling it for a $1 net loss over the purchase price. A year should give you enough time to make 202 trades. Your last trade should take you from under a million to just over a million in revenue—from $994,950 to $1,004,849 and with your balance at EOY being $9,899. Deduct your losses from the reward you collect from whomever you made this million dollar bet with. (Hopefully the wager was for greater than $101 and whatever this does to your taxes.)
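
      A quick check of the arithmetic, for anyone who wants to verify the trade count and the totals:

        let balance = 10000, revenue = 0, trades = 0;
        for (let i = 0; i < 101; i++) {
          const sellPrice = balance - 1; // sell the whole position at a $1 loss
          revenue += sellPrice;
          balance = sellPrice;
          trades += 2;                   // one purchase plus one sale
        }
        console.log(trades, revenue, balance); // 202 1004849 9899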

    1. a complex problem should not be regarded immediately in terms of computer instructions, bits, and "logical words," but rather in terms and entities natural to the problem itself, abstracted in some suitable sense

      Likewise, a program being written (especially one written anew instead of adapted from an existing one) should be expressed in terms of capabilities that make sense for the needs of the greater program, not by programming directly against the platform APIs. In the former case, you end up with a readable program (that is also often portable); in the latter, what you end up writing amounts to a bunch of glue between existing system components, incomprehensible to the half of the audience that isn't already intimately familiar with the platform in question but is no less capable of making meaningful contributions.
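
      A toy sketch of the distinction (the names are hypothetical, not drawn from the paper): the rest of the program speaks in its own vocabulary, and only one small corner of it knows which platform API happens to sit underneath.

      // The rest of the program deals in "notes"; only these two functions
      // know that we happen to be sitting on top of localStorage.
      function saveNote(note) {
        localStorage.setItem(`note:${note.id}`, JSON.stringify(note));
      }
      function loadNote(id) {
        return JSON.parse(localStorage.getItem(`note:${id}`));
      }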

    2. must not consist of a bag of tricks and trade secrets, but of a general intellectual ability

      I often think about how many things like Spectre/Meltdown remain undiscovered because the associated infrastructure is so esoteric and unapproachable that someone with a solid lead can't follow through on their investigation.

    3. In short, it became clear that any amount of efficiency is worthless if we cannot provide reliability

      Ousterhout says:

      The greatest performance improvement of all is when a system goes from not-working to working

      https://hypothes.is/a/yfo-zAB-EeyUTqcA1iAC2g#https://web.stanford.edu/~ouster/cgi-bin/sayings.php

    4. The law of the "Wild West of Programming" was still held in too high esteem! The same inertia that kept many assembly code programmers from advancing to use FORTRAN is now the principal obstacle against moving from a "FORTRAN style" to a structured style.
    5. The amount of resistance and prejudices which the farsighted originators of FORTRAN had to overcome to gain acceptance of their product is a memorable indication of the degree to which programmers were preoccupied with efficiency, and to which trickology had already become an addiction
    1. Why not add one more category — Personal Wiki — tied to nothing specific, that I can reuse wherever I see fit?

      This is insufficiently explained.

    2. why the heck am I going to write a comment that is only visible from this one page? There are hundreds (maybe thousands) of pages on the internet making use of the fact that there is no clear explanation of this on the web.

      I.

      In You can't tell people anything, Chip Morningstar recalls, 'People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “who’s going to pay to make all those links?”'

      II.

      Ted's vision for Xanadu was that all things connected would be shown to be connected.

      III.

      The people we're supposed to laugh at—the ones asking who's going to create all the links—had it right. There's just no way to enforce Ted's vision—no way for all links to be knowable and visible. That's because Ted's vision is fundamentally at odds with reality, just like the idea of unbreakable DRM. The ability of two people to whisper to one another in the corner of a cozy pub about a presentation you gave to the team during work hours, without your ever knowing that commentary is being exchanged (or even the existence of the pub meeting where it happens), is Xanadu's analog hole.

    1. Meanwhile I’ve been humming dackolupatoni to myself. Haven’t come up with a song yet but it feels like it has “Giacomo fina ney” potential.

      Prisencolinensinainciusol is probably more appropriate.

    2. But I feel like once you truly discover the web, you can’t turn your back on it.

      Empirically, this appears to be untrue.

    3. it would nearly impossible to understand the web if you have never used it

      But TBL is talking about "finding out about" the Web by experiencing it, which is not the same thing as understanding it. As the personal reflection that this post opened with shows, people are capable of experiencing the Web without understanding it.

    1. You need to be in the triplescripts.org group to see this annotation.

      Membership is semi-private, but only as a consequence of current limitations of the Hypothes.is service. Anyone can join the group.

  5. www.research-collection.ethz.ch www.research-collection.ethz.ch
    1. Ihavelearnttoabandonsuchattemptsofadaptationfairlyquickly,andtostartthedesignofanewprogramaccordingtomyownideasandstandards

      I have learnt to abandon such attempts of adaptation fairly quickly, and to start the design of a new program according to my own ideas and standards

    1. The two notable exceptions are the Lisp machine operating system, which is simply an extension to Lisp (actually a programming language along with the operating system) and the UNIX operating system, which provides facilities for ‘patching’ programs together.

      Oberon is a better example of this than UNIX. Internally, the typical UNIX program (written in C) uses function calls and shared access to in-memory "objects", but must use a different mechanism entirely (file descriptors) for programs to communicate.

    1. Explain clipboard macros as one use case for bookmarklets. E.g.:

      navigator.clipboard.writeText(`Hey there.  We don't support custom domains right now, but we intend to allow that in the future.`)
      

      Explain that they can be parameterized, too. E.g.:

      let name = String(window.getSelection());
      navigator.clipboard.writeText(`Hey, ${name}...`);
      

      They can create a menu of bookmarklets for commonly used snippets.
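
      For completeness, a snippet like the first one becomes a bookmarklet by wrapping it in a javascript: URL (the standard form; same hypothetical reply text as above):

      javascript:(() => { navigator.clipboard.writeText(`Hey there.  We don't support custom domains right now, but we intend to allow that in the future.`); })()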

    1. we're susceptible to CSP if we try to write an inline script into the manager doc

      Too late! We're already doing that to post the key during the handshake...

    1. Feature request (implement something that allows the following):

      1. From any page containing a bookmarklet, invoke the user-stored bookmarklet בB
      2. Click the bookmarklet on the page that you wish to be able to edit in the Bookmarklet Creator
      3. From the window that opens up, navigate to a stored version of the Bookmarklet Creator
      4. Invoke bookmarklet בB a second time from within the Bookmarklet Creator

      Expected results:

      The bookmarklet from step #2 is decoded and populates the Bookmarklet Creator's input.

      To discriminate between invocation type II (from step #2) and invocation type IV (from step #4), the Bookmarklet Creator can use an appropriate class (e.g. https://w3id.example.org/bookmarklets/protocol/#code-input) or a meta-based pragma or link relation.
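
      A rough sketch of the check בB might make, assuming the link-relation variant (none of these names are settled; purely illustrative):

      // Inside בB: does the current page identify itself as the Bookmarklet Creator?
      const marker = document.querySelector(
        'link[rel="https://w3id.example.org/bookmarklets/protocol/#code-input"]');
      if (marker) {
        // invocation type IV: hand the previously remembered bookmarklet to the Creator's input
      } else {
        // invocation type II: remember the bookmarklet clicked on this page
      }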

    2. <title>Bookmarklet Creator</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

      We should perhaps include a rel=canonical or rel=alternate here. But what are the implications for shares and remixes? This also perhaps exposes a shortcoming in Hypothes.is's resource equivalence scheme, cf:

      Maybe the document itself, when the application "wakes up" should check the basics and then insert them as appropriate?

    3. output.addEventListener("click", ((event) => { if (event.target == output.querySelector("a.bookmarklet")) { alert("It's not an ordinary link; you need to bookmark it");

      This should use the registered control service pattern (or something). It's too hard to override this behavior. For example, I could remix the page and remove it, but I should also be able to write a bookmarklet that achieves the same effect.

    1. they walked into that cafe, looked around, and decided I was the easy prey.

      I had a similar feeling in 2016 after a recruitment attempt matching exactly the process described in theogravity's comment on this post. We first came into contact at the grocery store, exchanged numbers, and met up at a coffee shop (a meeting arranged via text message within a couple of days/weeks). I was surprised to find out that it was an MLM something-or-other, and then I made it clear that I wasn't interested (and subtly made him feel that he should leave while I stayed and finished the hot cider I had ordered). But for weeks afterward, I felt embarrassed and insulted that I must have been giving off some vibe of being easily dupable. I couldn't reconcile it with the fact that in our first "chance" meeting, I mentioned that I'd left Samsung a few weeks earlier, didn't exactly have any concrete work plans, and wasn't especially worried about it. Somehow, though, I guess I was perceived as being susceptible to some get-rich-quick nonsense...?

      Having said that—the author here recounts four meetings and still no revelation about the specifics of what their agenda was. That doesn't sound like "easy" prey to me. That's a pretty involved trap, assuming it is one.

    2. And again, there were these repeated implications that we were special, that we were deeper than other people.

      Maybe they were Fourth Dimensionists. (NB: Not actually a book I can recommend.)

    1. “I didn’t mean to back into your car in the parking lot.” Or, “I didn’t intend to hurt you with my remarks.”

      Worth noting that one of these is concrete (and concretely bad) and avoidable; the other one less so. It even permits claims that are unfalsifiable.

    2. tenet

      should be "tenant"

    3. “We judge ourselves by our intentions and others by their behavior.” Stephen Covey This statement, made by author Stephen Covey

      It doesn't look like it; seems that Covey actually wrote "motives", not "intentions", and the maxim is not original to him https://quoteinvestigator.com/2015/03/19/judge-others/

    1. Pop up with a mouse hover, effortless

      Mmm… no, it just takes moderately less effort than clicking. And it doesn't apply to mouseless form factors.

    1. You don’t need semantic triples, you just need links

      What can triples do that you couldn't do with pairs?

    1. Reading the nodes straight through from top to bottom of the index will result in one kind of landscape for the text, but not the only or probably the best
    1. In Firefox, click "Bookmarks," then select "Bookmark this Page"

      Fails if the page is already bookmarked.

    1. This is pretty much a requirement if you intend to implement something like custom tooltips on your site which need to be dynamically absolutely positioned.

      wat

    2. Unless unsafe-inline is set on style-src, all inline style attributes are blocked.

      This is where things really go off the rails. There is no legitimate use case for this regime no matter how much people have looked for a reason to justify it or use sleight of hand to make it seem appropriate.

    1. So far it works great. I can now execute my bookmarklets from Twitter, Facebook, Google, and anywhere else, including all https:// "secure" websites.

      In addition to the note above about this being susceptible to sites that deny execution of inline scripts, this also isn't really solving the problem. At this point, these are effectively GreaseMonkey scripts (not bookmarklets), except initialized in a really roundabout way...

    2. work-around

      Bookmarklets and the JS console seem to be the workaround.

      For very large customizations, you may run into browser limits on the effective length of the bookmarklet URI. For a subset of well-formed programs, there is a way to store program parts in multiple bookmarklets, possibly loaded with the assistance of a separate bookmarklet "bootloader", although this would be tedious. The alternative is to use the JS console.
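
      A sketch of what that bootloader might look like (entirely hypothetical; the part names and how the parts get stashed are up to you):

      // Bootloader bookmarklet body (wrapped in javascript:(() => { ... })() and
      // collapsed onto one line when saved as a bookmark). It assumes each "part"
      // bookmarklet, when clicked earlier, stashed its chunk of source on window.
      const src = [window.__part1, window.__part2, window.__part3].join("\n");
      new Function(src)();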

      In Firefox, you can open a given script that you've stored on your computer by pressing Ctrl+O/Cmd+O, selecting the file as you would in any other program, and then pressing Enter. (Note that this means you might need to press Enter twice, since opening the file in question merely puts its contents into the console input and does not automatically execute it—sort of a hybrid clipboard thing.) I have not tested the limits of the console input, e.g. on input size.

      As far as I know, you can also use the JS console to get around the design of the dubious WebExtensions APIs—by ignoring them completely, going back to the old days, and using XPCOM/Gecko "private" APIs. The way you do this is to open about:addons by pressing Ctrl+Shift+A (or whatever), open the console there, paste or open the code you want to run, and then press Enter. This should, I think, give you access to all the old familiar Mozilla internals. Note, though, that all bookmarklet functionality is disabled on about:addons (not just bookmarklets that would otherwise violate CSP by loading e.g. an external script or dumping an inline one on the page).

    3. CSP is taking away too much of the user's power and control over their browser use
    4. Apparently there is a CSP ability to stop inline scripts from executing. I have not come across any sites that use that feature and/or the browser I am using does not support it.

      There're lots.

    1. This article is crazy. Ostensibly, it's about a somewhat disappointing Emacs package (Emacs menus), but it's filled with all sorts of asides that are treasures.

    1. future software development should increasingly be oriented toward making software more self-aware, transparent, and adaptive

      From "Software finishing":

      once software approaches doneness[...] pour effort into fastidiously eliminating hacks around the codebase [...] presents [sic] the affected logic in a way that's clearer [...] judiciously cull constructs of dubious readability[...]

      one of the cornerstones of the FSF/GNU philosophy is that it focuses on maximizing benefit to the user. What could be more beneficial to a user of free software than ensuring that its codebase is clean and comprehensible for study and modification?

    2. When something goes wrong with a computer, you are likely to be stuck. You can't ask the computer what it was doing, why it did it, or what it might be able to do about it. You can report the problem to a programmer, but, typically, that person doesn't have very good ways of finding out what happened either.
    3. Another obstacle is the macho culture of programming.
    4. Lieberman, H., Guest Ed. The debugging scandal special section. Commun. ACM 40, 3 (Mar. 1997).

      That should be CACM 40, 4.

      The article can be found here: https://cacm.acm.org/magazines/1997/4/8423-introduction/abstract

      Also available through Lieberman's homepage: https://web.media.mit.edu/~lieber/Lieberary/Softviz/CACM-Debugging/#Intro

    5. Fry's Law states that programming-environment performance doubles once every 18 years, if that.
    6. software will work only if we provide the tools to fix it when it goes wrong
    1. Knowledge work should accrete

      Related: I'm fond of the position that technological progress should be cumulative—we should avoid churn.

    1. I end up with responsibility (friends complaining to me about this, that, and the other) without control (I can't affect any of those things)

    1. Email is local-first.Email is social. You own your own social graph.

      See also Email is your electronic memory. (NB: Not an endorsement of Fastmail. The company is apparently full of assholes-in-tech-type dudes.)

    2. What if every button in your app had an email address? What if apps could email each other?

      Also, what if every app could email you? I mean the app itself—not the product team.

    1. it's minified before encoding the link (with encoding) is only 224 characters, instead of 337

      Not even close to the dumbest thing I've ever read, but still very, very dumb.

    1. quietly interesting
    2. I think there are some systems design insights here that might be valuable for p2p, Web3, dweb, and other efforts to reform, reboot, or rethink the internet.

      Indeed. Why did Dat/Beaker/Fritter fizzle out? Answer: failure to exapt.

    3. Software has more cybernetic variety than hardware
    4. This is possible because the internet isn’t designed around telephone networking hardware. It isn’t designed around any hardware at all. Instead, the internet runs on ideas, a set of shared protocols. You can implement these protocols over a telephone, over a fiberoptic cable, or over two tin cans connected with string.
    5. Infrastructure has decades-long replacement cycles
    1. Scale is also killing open source, for the record. Beyond a certain codebase size or rate of churn, you need to be a mega-corp to contribute to a nominally open-source project.

      What Stephen ignores is that the software this applies to is pretty much limited to the sort that only megacorps are interested in to begin with. As Jonathan pointed out in the post, most "user-facing" software is still not open source. (Side note: that's the real shame of open source.)

      Who cares how hard it is to contribute to the kinds of devops shovelware that GitHubbers have disproportionately concerned themselves with?

    1. The difficulty encountered by authors today when they create metadata for hypertexts points to the risk that adaptive hypermedia and the semantic Web will be initiatives that fit only certain, well-defined communities because of the skills involved
    2. The cognitive overhead that is introduced when the user needs to input a formal representation that collides with his immediate task-dependent needs has to be minimized.
    1. Here's a xanalink to a historical piece-- Here's a xanalink to a scientific work-- Here's a xanalink to an early definition of "hypertext"

      No, no, and no. This is an interesting fallout from Ted's fatwa that links be kept "outside the file".