3,402 Matching Annotations
  1. May 2022
    1. It's long been fairly apparent to me that the average modern web developer has no comprehension of what the web actually is3

      Agreed, but it's a very ironic remark, given the author's own position...

    2. The only reasonable implementation options are JavaScript and PHP.

      I argue that PHP is not reasonable here. The only appropriate thing for this use case is (unminified) JS—or some other program text encoded as a document resource permitting introspection and that the user agent just happens to be able to execute/simulate.*

      • Just like the advocates of "a little jQuery", author here doesn't seem to realize that the use of PHP was the first step towards what is widely acknowledged to be messed up about the "modern" Web. People can pine for the days of simple server-side rendering, but there's no use denying that today's Web is the natural result of an outgrowth that began with abuses of the fundamental mechanisms underpinning the Web—abuses that first took root with PHP.

      * Refer to the fourth and sixth laws of "Sane Personal Computing", esp. re "reveals purpose".

    3. how does one support comments? Answer: Specialist third-party services like Disqus come into existence. Now, you can have comments on your website just by adding a <script> tag, and not have to traverse the painful vertical line of making your website itself even slightly dynamic.

      Controversial opinion: this is actually closer to doing the Web the way that it should be done, taking the intent of its design into account. NB: this is not exculpatory of minified JS bundles (where "megabyte" is the appropriate unit order of magnitude for measuring their weight) or anything about "modern" SPAs that thumb their nose at graceful degradation.

    4. It's not surprising at all, therefore, that people tend not to do this nowadays.

      I dunno how sound this conclusion is. Even for static sites, there are lower friction ways to do them, but people usually opt for the higher friction paths...

    5. You can read the “Effort” axis as whatever you like here; size, complexity, resource consumption, maintenance burden.

      Hey, look, it's an actually good example of the "steep learning curve".

      (I never understood why people insist that referring to it as a steep curve is wrong; clearly the decisions about your axes are going to have an impact on the thing. It seems that everyone who brings this up is insisting on laying out their graph the wrong way and implicitly arguing that other people need to take responsibility for it.)

    1. My argument for the use of the Web as a medium for publishing the procedures by which the documents from a given authority are themselves published shares something in common with the argument for exploiting Lisp's homoiconicity to represent a program as a data structure that is expressed like any other list.

      There are traces here as well from the influence of the von Neumann computational model, where programs and data are not "typed" such that they belong to different "classes" of storage—they are one and the same.

    1. However when you look UNDERNEATH these cloud services, you get a KERNEL and a SHELL. That is the "timeless API" I'm writing to.

      It's not nearly as timeless as a person might have themselves believe, though. (That's the "predilection" for certain technologies and doing things in a certain way creeping in and exerting its influence over what should otherwise be clear and sober unbiased thought.)

      There's basically one timeless API, and that means written procedures capable of being carried out by a human if/when everything else inevitably fails. The best formats that we have for conveying the content comprising those procedures are the ones native to the Web browser—esp. HTML. Really. Nothing else even comes close. (NB: pixel-perfect reproduction à la PDF is out of scope, and PDF makes a bunch of tradeoffs to try to achieve that kind of fidelity which turn out to make it unsuitable/unacceptable in a way that HTML is not, if you're being honest with your criteria, which is something that most people who advocate for PDF's benefits are not—usually having deceived even themselves.)

      Given that Web browsers also expose a programming environment, the next logical step involves making sure these procedures are written to exploit that environment as a means of automation—for doing the drudge work in the here and now (i.e., in the meantime, when things haven't yet fallen apart).

    1. Square brackets represent here a blank node. Predicate-object pairs within the square brackets are interpreted as triples with the blank node as subject. Lines starting with '#' represent comments.

      Bad idea to introduce this notation here at the same time as the (unexplained) use of square brackets to group a list of objects.

    1. The events list is created with JS, yes. But that's the only thing on the whole site (~25 pages) that works that way. Here's another site I maintain this way where the events list is plain HTML: https://www.kingfisherband.com

      There's an unnecessary dichotomy here between uses JS and page is served as HTML. There's a middle ground, where the JS can do the same thing that it does now, but it only does so at edit time—in the post author's own browser, but not in others'. Once the post author is ready to publish an update, the client-side generated content is captured as plain HTML, and then they upload that. It still "uses JS", but crucially it doesn't require the visitor to have their browser do it (and for it to be repeated N times, once per page visit)...
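
      A rough sketch of that edit-time capture (the data-edit-time attribute is an invented marker, not something the site in question uses): the author runs it once the events list has been generated, then pastes the result over the served HTML.

      // Run once in the author's browser, after the existing script has rendered
      // the events list. Copies the fully rendered page so it can be uploaded as
      // plain HTML.
      (function captureRenderedPage() {
        // Drop the generator script so visitors' browsers never re-run it
        // (assumes it is marked with a hypothetical data-edit-time attribute).
        for (const script of document.querySelectorAll("script[data-edit-time]")) {
          script.remove();
        }
        const html = "<!DOCTYPE html>\n" + document.documentElement.outerHTML;
        navigator.clipboard.writeText(html).then(
          () => console.log("Rendered HTML copied; paste it over the served page."),
          (err) => console.error("Clipboard write failed:", err)
        );
      })();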

    1. A great case study in how the chest-puffing associated with certain folks in certain segments of the compiled languages crowd can be cover for some truly embarrassing blunders.

      (Also a great antidote against a frequent argument by self-taught "full stack" devs; understanding the runtime complexity of your program is important.)

    1. At one level this is true, but at another level how long is the life of the information that you're putting into your wiki now, and how sure are you that something like this could never happen to your wiki software over that lifetime?

      I dunno. Was the wiki software in question MediaWiki?

      I always thought it was weird when people would set up a wiki and go for something that wasn't MediaWiki (even though I have my own quibbles with it). MediaWiki was always the clear winner to me, even in 2012 without the benefit of another 10 years of hindsight.

    1. Updating the script

      This is less than ideal. Besides non-technical people needing to wade into the middle of (what very well might appear to them to be a blob of) JS to update their site, here are some things that Zonelets depends on JS for:

      1. The entire contents of the post archives page
      2. The footer
      3. The page title

      This has real consequences for e.g. the archivability of a Zonelets site.

      The JS-editing problem itself could be partially ameliorated with something like the polyglot trick used on flems.io and/or the way triple scripts do runtime feature detection using shunting. When the script is sourced via script element from another page, it behaves as JS, but when visited directly as the browser destination it is treated like HTML and has its own DOM tree for the script itself to make the necessary modifications easier. Instead of requiring the user to edit it as freeform text, provide a structured editing interface, so e.g. adding a new post is as simple as clicking the synthesized "+" button in the list of posts, copying the URL of the post in question, and then pasting it in—to a form field. The Zonelets script itself should take care of munging it into the appropriate format upon form "submission". It can also, right there, take care of the escaping issue described in the FAQ—allow the user to preview the generated post title and fix it up if need be.

      Additionally, the archives page need not be dynamically generated by the client—or rather, it can be dynamically filled in exactly once per update—on the author's machine, and then be reified into static HTML, with the user being instructed to save it and overwrite the served version. This gets too unwieldy for next/prev links in the footer, but (a) those are non-essential, and don't even get displayed for no-JS users right now, anyway; and (b) can be seen to violate the entire "UNPROFESSIONAL" ethos.

      Alternatively, the entire editing experience can be complemented with bookmarklets.
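
      A rough sketch of what the structured "add a post" flow floated above could look like (every id here is invented for illustration; this is not Zonelets' actual markup or API):

      // Sketch only: wire a form field and a "+" button to the posts list, fetch
      // the new post's title, and append a link for it. Using textContent for the
      // title sidesteps the escaping issue from the FAQ.
      function enableStructuredEditing() {
        const list = document.getElementById("postList");
        const urlField = document.getElementById("postUrlField");
        const addButton = document.getElementById("addPostButton");

        addButton.addEventListener("click", async () => {
          const url = urlField.value.trim();
          if (!url) return;

          const response = await fetch(url);
          const doc = new DOMParser().parseFromString(await response.text(), "text/html");
          const title = (doc.querySelector("h1") ?? doc.querySelector("title"))?.textContent ?? url;

          const item = document.createElement("li");
          const link = document.createElement("a");
          link.href = url;
          link.textContent = title; // no manual HTML escaping needed
          item.append(link);
          list.append(item);
        });
      }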

  2. Apr 2022
    1. "Show me the proof," they said. Here it is. That's the source code. All of it. In all of it's beautiful, wild and raw mess.

      This is how to open source something. "Open source" means that it's published under an open source license. That's it. There's nothing else required.

    1. A ZIP file MUST have only one "end of central directory record"

      There are a few ways to interpret this, one of them unintuitive: it is actually acceptable for a given bytestream to have multiple blobs that look like the end of central directory record (having the right signature and size/shape), but only one of them is actually an end of central directory record. The requirement that a ZIP have only one means that the others aren't actually end of central directory records, but they are nonetheless free to appear in the bytestream; since they aren't end of central directory records, their existence doesn't violate the spec.

    1. Without special care you'd get files that aren't supposed to exist or errors from trying to overwrite existing files.

      Yes, and that's just one of the reasons why scanning from the front is invalid. There's nothing special about the signature in file records—it's just a four-byte sequence that might make its way into the resulting ZIP without any intent to denote a file record. If you scan from the front and assume that encountering the signature means a file exists there without cross-referencing the central directory, it means your implementation treats junk bytes as meaningful to the structure of the file, which is not a good idea.

    2. That suggests the central directory might not reference all the files in the zip file

      Sure, but that doesn't mean that it's valid to treat the existence of those bytes as if that file is still "in" the ZIP. They should be treated exactly as any other blob that just happens to have some bytes matching the shape of what a file record would look like if there were actually supposed to be a file there.

    3. What if the offset to the central directory is 1,347,093,766? That offset is 0x504b0506 so it will appear to be end central directory header.

      This is, I think, the only legitimate criticism here so far. All the others that amount to questions of "back-to-front or front-to-back?" can be answered: back-to-front.

      This particular issue, however, can be worked around by padding the central directory by one byte (or four) so that it's not at offset 1,347,093,766. Even then, the flexibility in the format and this easy solution mean that even this criticism is mostly defanged.

    1. function Zip(_io, _parent, _root) {
         this._io = _io;
         this._parent = _parent;
         this._root = _root || this;
         this._read();
       }
       Zip.prototype._read = function() {
         this.sections = [];
         var i = 0;
         while (!this._io.isEof()) {
           this.sections.push(new PkSection(this._io, this, this._root));
           i++;
         }
       }

      Although the generated code is very useful...

      This is wrong. It treats the ZIP format as if (à la PNG) it's a concatenated series of records/chunks marked by ZIP's characteristic, "PK" off-set, 4-byte magic numbers. It isn't. The only way to read a ZIP bytestream is to start from the end, look for the signature that denotes the possible presence, at the current byte offset, of the record containing the central directory metadata, proceed to validate* the file based on that, and then operate on it appropriately. (* If validation fails, you can continue scanning backwards from the offset that was thought to be the signature.)

      The first validation attempt that passes when carried out in this manner (from back to front) "wins"—there may be more than one offset at which validation succeeds, but only the one nearest the end of the bytestream is authoritative. If no validation attempt succeeds, the file may be corrupt, and the implementation may attempt to "repair" it (not necessarily by making on-disk modifications, but merely by being generous with its interpretation of the bytestream—perhaps presenting several different options to the user), or, alternatively, it may be the case that the file is simply not a ZIP archive.

      This is because a ZIP file is permitted to have its records be little embedded "data islands" (in a sea of unrelated bytes). This is what allows spanned/multi-disk archives and for the ZIP to be modified by updating the bytestream in an append-only way (or selectively rubbing out parts of the existing central directory and updating the pointers/offsets in-place). It's also what allows self-extracting archives to be self-extracting: foremost, they conform to the binary executable format and include code for being able to open the very same executable, process the records embedded within it, and write them to disk.
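
      A minimal sketch of that back-to-front scan (plain JS, following the appnote's field layout; it only locates and sanity-checks the end of central directory record):

      // `bytes` is a Uint8Array holding the whole file.
      function findEndOfCentralDirectory(bytes) {
        const SIG = [0x50, 0x4b, 0x05, 0x06]; // "PK\x05\x06"
        const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);

        // The fixed part of the record is 22 bytes; a trailing comment can push
        // the signature up to 65,535 bytes further from the end of the stream.
        const earliest = Math.max(0, bytes.length - 22 - 0xffff);
        for (let i = bytes.length - 22; i >= earliest; i--) {
          if (bytes[i] !== SIG[0] || bytes[i + 1] !== SIG[1] ||
              bytes[i + 2] !== SIG[2] || bytes[i + 3] !== SIG[3]) continue;

          const centralDirSize = view.getUint32(i + 12, true);
          const centralDirOffset = view.getUint32(i + 16, true);
          const commentLength = view.getUint16(i + 20, true);

          // Validate: the comment must run exactly to the end of the stream, and
          // the central directory the record points at must fit in front of it.
          if (i + 22 + commentLength === bytes.length &&
              centralDirOffset + centralDirSize <= i) {
            return { offset: i, centralDirOffset, centralDirSize };
          }
          // Otherwise these were just bytes that happened to look like the
          // signature; keep scanning backwards.
        }
        return null; // possibly corrupt, or simply not a ZIP archive
      }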


    1. I wanted all of my Go code to just deal with JSON HTTP bodies

      Lame. Hopefully it's at least checking the content type and returning an appropriate status code with a helpful message.

      (PS: it wouldn't be multipart/form-data, anyway; the default is application/x-www-form-urlencoded.)
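
      For what it's worth, the check being hoped for is only a few lines in any stack. A sketch in plain Node (not the post author's actual Go setup):

      import { createServer } from "node:http";

      const server = createServer((req, res) => {
        const type = (req.headers["content-type"] || "").split(";")[0].trim();
        if (req.method !== "GET" && type !== "application/json") {
          // Appropriate status code plus a helpful message.
          res.writeHead(415, { "Content-Type": "application/json" });
          res.end(JSON.stringify({
            error: `expected application/json, got "${type || "no content type"}"`,
          }));
          return;
        }
        // ...hand off to the normal JSON-only handlers here...
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ ok: true }));
      });

      server.listen(8080);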

    1. Over the past half-century, the number of men per capita behind bars has more than quadrupled.

      I haven't read the original source.

      Is it possible that this has to do with stricter enforcement of existing laws (or even new ones that criminalize previously "acceptable" behavior, like drunk driving)? Today, being arrested is a pretty big deal—a black mark for sure. Subjectively, it seems that on the whole people of the WWII, Korean War, and Vietnam War eras were more rambunctious and society was more tolerant of it (since it was a lot more common, any of the potentially aggrieved parties would have likely engaged in similar stuff themselves).

    1. What I like best about pdf files is that I can just give them to someone and be almost certain that any questions will be about the content rather than the format of the file.

      Almost every time I've used FedEx's "Print and Go" for a PDF I've created by "printing" e.g. HTML (and that I've verified looks good when previewing it on-screen), it comes out mangled when actually printed to paper.

    1. One significant point of design that Tschichold abandoned was the practice of subordinating the organization of all text elements around an invisible central axis (stay with me here.) What that means is that a designer builds out all the design elements of a book from that nonexistent axis “as if there were some focal point in the center of a line which would justify such an arrangement,” Tschichold wrote. But this, he determined, imposed an artificial central order on the layout of a text. It was an illogical practice, because readers don’t start reading from the center of a book, but from the sides.

      Okay, I stuck it out like the author here requested but I'm still left wondering what any of this is in reference to.

    1. Notes from Underground

      Standard Ebooks's search needs to incorporate alternate titles. I tried searching first for "the underground man" (my fault) but then I tried "notes from the underground", which turned up nothing. I then began to try searching for Dostoyevsky, but stopped myself when I realized the fruitlessness, because even being unsure if search worked across author names, I knew that I had no idea which transliteration Standard Ebooks was using.

    1. <title>Notes from Underground, by Fyodor Dostoevsky. Translated by Constance Garnett - Free ebook download - Standard Ebooks: Free and liberated ebooks, carefully produced for the true book lover.</title>

      This is way too long. (And when I try saving the page, Firefox stops me because the suggested file name—derived from the title—is too long. NB: that's a bug in both standardebooks.org and Firefox.)


    1. I’ll also note that there’s the potential of a reply on Hypothes.is to a prior reply to a canonical URL source. In that case it could be either marked up as a reply to the “parent” on Hypothesis and/or a reply to the canonical source URL, or even both so that webmentions could be sent further upstream.

      You can also "reply" by annotating the standalone (/a/...) page for a given annotation.

    1. could a few carefully-placed lines of jQuery

      Look, jQuery is not lightweight.* It's how we got into this mess.

      * Does it require half a gigabyte of dev dependencies and regular dependencies to create a Hello, World application? No, but it's still not lightweight.

    2. The SE server is also responsible for building ebooks when they get released or updated. This is done using our ebook production command line toolset.

      It would be great if these tools were also authored to be a book—a comprehensive, machine-executable runbook.

    3. There's way too much excuse-making in this post.

      They're books. If there's any defensible* reason for making the technical decision to go with "inert" media, then a bunch of books has to be it.

      * Even this framing is wrong. There's a clear and obvious impedance mismatch between the Web platform as designed and the junk that people squirt down the tubes at people. If there's anyone who should be coming up with excuses to justify what they're doing, that burden should rest upon the people perverting the vision of the Web and treating it unlike the way it's supposed to be used—not folks like acabal and amitp who are doing the right thing...

    1. even in their own personal spaces

      But your blog post on my screen is not in your personal space any more than your book/pamphlet/whatever lying open on my desk is (which is to say: not at all)... it's my space.

    1. There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole.

      The internet has made defensive writers of us all

  3. small-tech.org
    1. Ongoing research Building on our work with Site.js, we’ve begun working on two interrelated tools: NodeKit The successor to Site.js, NodeKit brings back the original ease of buildless web development to a modern stack based on Node.js that includes a superset of Svelte called NodeScript, JSDB, automatic TLS support, WebSockets, and more.

      "How much of your love of chocolate has to do with your designs for life that are informed by your religious creed? Is it incidental or essential?"

    1. The percentage of Democrats who are worried about speaking their mind is just about identical to the percentage of Republicans who self-censor: 39 and 40 percent, respectively

      What are Republicans worrying about when they self-censor? Being perceived as too far right and trying to appear more moderate, or catching criticism from their political peers if they were to express skepticism about some of the goofiest positions that Republicans are associated with at the moment?

    2. knowing that we could lose status if we don’t believe in something causes us to be more likely to believe in it to guard against that loss. Considerations of what happens to our own reputation guides our beliefs, leading us to adopt a popular view to preserve or enhance our social positions

      Belief, or professed belief? Probably both, but how much of this is conscious/strategic versus happening in the background?

    3. Interestingly, though, expertise appears to influence persuasion only if the individual is identified as an expert before they communicate their message. Research has found that when a person is told the source is an expert after listening to the message, this new information does not increase the person’s likelihood of believing the message.
    4. Many have discovered an argument hack. They don’t need to argue that something is false. They just need to show that it’s associated with low status. The converse is also true: You don’t need to argue that something is true. You just need to show that it’s associated with high status.
      This comment makes the classic mistake of mixing up the universal quantifier ("for all X") and the existential quantifier ("there exists [at least one] X"), when (although neither is used explicitly) the only thing implied is the latter.

      https://en.wikipedia.org/wiki/Universal_quantification

      https://en.wikipedia.org/wiki/Existential_quantification

      What the average teen is like doesn't compromise Stallman's position. If one "gamer" (is that a taxonomic class?) follows through, then that's perfectly in line with Stallman's mission and the previously avowed position that "Saying No to unjust computing even once is help".

      https://www.gnu.org/philosophy/saying-no-even-once.html

    1. It should be illegal to sell a computer that doesn't let users install software of their own from source code.

      Should it? This effectively outlaws a certain type of good: the ability to purchase a computer of that sort, even if that's what you actually want.

      Perhaps instead it should be illegal to offer those types of computers for sale if the same computer without restrictions on installation isn't also available.

    1. myvchoicesofwhatto.attemptandxIiwht,not,t)a-ttemptwveredleterni'ine(toaneiiubarma-inohiomlreatextentbyconsidlerationsofclrclfea-sibilitv

      my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility.

    1. In his response, Shirky responds that he was unable to keep up with overhead of running a WordPress blog, so it fell into disrepair and eventually disappeared.

      Is this proof, from one of the biggest social critics on the topic of crowdsourcing (known for taking a favorable stance on it), that it doesn't work? Separately, is the promise of the Web itself a false one?

      I'm not going to say "yes", but I will say—and have said before—that the "New Social" era of the 2010s saw a change of environment from when Here Comes Everybody was written. I think it highlights a weakness of counter-institutional organization—by definition the results aren't "sticky"—that's the purview of institutions. What's more, even institutions aren't purely cumulative.

    1. I was already aware that images cannot be inserted in the DOM like you would any normal image. If you write <img src="https://my-pod.com/recipes/ramen.jpg">, this will probably fail to render the image. That happens because the image will be private, and the POD can't return its contents without proper authentication.
    1. I saw the need for a JSON feed format

      Did you really, though? Probably there was a need for a feed data source that was as easy to work with as parsing a JSON object, I'm betting. And I'd wager that there was no real obstacle to writing a FeedDataSource that would ingest the Atom feed and then allow you to call toString("application/json") to achieve the same effect.
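
      A rough browser-side sketch of what that hypothetical FeedDataSource might amount to (the class name and toString signature are just the ones from the note above, nothing standard):

      // Ingest an Atom feed once, then hand back the same data as JSON on demand.
      class FeedDataSource {
        constructor(atomText) {
          this.doc = new DOMParser().parseFromString(atomText, "application/xml");
        }

        toString(contentType = "application/json") {
          if (contentType !== "application/json") {
            throw new Error(`unsupported content type: ${contentType}`);
          }
          const text = (node, selector) => node.querySelector(selector)?.textContent ?? null;
          const entries = [...this.doc.querySelectorAll("entry")].map((entry) => ({
            id: text(entry, "id"),
            title: text(entry, "title"),
            updated: text(entry, "updated"),
            link: entry.querySelector("link")?.getAttribute("href") ?? null,
          }));
          return JSON.stringify({ title: text(this.doc, "feed > title"), entries });
        }
      }

      Usage would be something like new FeedDataSource(await (await fetch(feedUrl)).text()).toString("application/json").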

    1. The model of high fixed cost, low marginal cost applies to pretty much every consumer good or service sold in the industrial age. If you think publishing the first copy of a piece of software is hard, try setting up a new production line in a factory, or opening a restaurant, or producing a music album or movie.
    1. When you’re building a house, you have a pretty good idea of how many people that house will impact. The market has already demonstrated that they’ll pay for a roof and a walk-in shower and a state of the art heated toilet seat. If you erect a sturdy 4 bedroom, 2 ½ bathroom house with these amenities in a desirable neighborhood, you can rest easy knowing that you’ll be able to sell it. This is not how software works. The main problem here is that houses and software have wildly different marginal costs. If I build a house and my neighbor really likes it and wants to live in a house that’s exactly the same, it will cost them almost as much to build as it cost me. Sure, they might be able to save a few thousand dollars on architectural fees, but they’ll still need wood, wires, and boatloads of time from skilled plumbers, electricians, and carpenters to assemble those raw materials into something resembling a house. Marginal cost is just the cost to produce one more unit of something - in this case, one more house. Contrast this with software, which lives in the realm of often-near-zero marginal costs.

      Devops-heavy software changes this. Is GitHub standard fare the way it is because developers are knowingly or unknowingly trying to keep themselves employed—like the old canard/conspiracy theory about how doctors aren't interested in healing anyone, because they want them sick?

    1. To drive this point home:

      I sometimes get people who balk at my characterization of GitHub-style anti-wikis as being inferior to, you know, actual wikis. "You can just use the GitHub UI to edit the files", they'll sometimes say.

      A case study: a couple days ago, I noticed that the lone link in the current README for Jeff Zucker's solid-rest project is a 404. I made a note of it. Just now, I reset my GitLab password, logged in to solid/chat, and notified Jeff https://gitter.im/solid/chat?at=611976c009a1c273827b3bd1. Jeff's response was, "I'll change it".

      This case is rich with examples of what makes unwikis so goddamn inefficient to work with. First, my thought upon finding the broken link was to take note of it (i.e. so that it can eventually be taken care of) rather than fixing it immediately, as would have been the case with a wiki. More on this in a bit. Secondly, my eventual action was still not something that directly addressed the problem—it was to notify someone else† of the problem so that it might be fixed by them, due to the unwiki nature of piles of Git-managed markdown. Thirdly, even Jeff's reflex is not to immediately change it—much like my reaction, his is to note the need for a fix himself and then to tell me he's going to change it, which he will presumably eventually do. Tons of rigamarole just to get a link fixed‡ that remains broken even after having gone through all this.

      † Any attempt to point the finger at me here (i.e. coming up short from having taken the wrong action—notifying somebody rather than doing it myself) would be getting it wrong. First, the fact that I can't just make an edit without taking into account the myriad privacy issues that GitHub presents is materially relevant! Secondly, even if I had been willing to ignore that thorn (or jump through the necessary hoops to work around it) and had used the GitHub web UI as prescribed, it still would have ended up as a request for someone else to actually take action on, because I don't control the project.

      ‡ Any attempt to quibble here that I'm talking about changing a README and not (what GitHub considers) a wiki page gets it wrong. We're here precisely because GitHub's unwikis are a bunch of files of markdown. The experience of changing an unwiki page would be rife with the same problems as encountered here.

    1. Yes, the website itself is a project that welcomes contributions. If you’d like to add information, fix errors, or make improvements (especially to the visual design), talk to @ivanreese in the Slack or open a PR.

      Contribution: make it a wiki. (An actual wiki, not GitHub-style anti-wikis.)

    1. Meta note: I always have difficulty reading Linus Lee. I don't know what it is. Typography? Choices in composition (that is, writing)? Other things to do with composition (that is, visually)?

    1. I find the tendency of people to frame their thinking in terms of other people's involvement in the projects in question being to make money (as with GitHubbers "contributing"/participating by filing issues only because they're trying to help themselves on some blocker they ran into at work) pretty annoying.

      The comments here boil down to, "You say it should be rewarding? Yeah, well, if you're getting paid for it, you shouldn't have any expectations that the act itself will be rewarding."

      I actually agree with this perspective. It's the basis of my comments in Nobody wants to work on infrastructure that "if you get an infusion of cash that leaves you with a source of funding for your project[...] then the absolute first thing you should start throwing money at is making sure all the boring stuff that [no] one wants to work". I just don't think it's particularly relevant to what Kartik is arguing for here.

      Open source means that apps are home-cooked meals. Some people get paid to cook, but most people don't. Imagine, though, if the state of home cooking were such that it took far more than one hour's effort (say several days, or a week) before we could reap an appreciable reward—tonight's dinner—and that this were true, by and large, for almost every one of your meals. That's not the case with home cooking, fortunately, but that is what open source is like. The existence of professional chefs doesn't change that. There's still something wrong that could stand to be fixed.

    2. great, I'm genuinely happy for you. But

      In other words, "allow me to go on and recontextualize this to the point of irrelevancy to the original purpose of this piece of writing".

    3. the employment contract I've signed promises a year of payment after a year of effort

      I find that hard to believe. It could be true—I might be wrong—but it's pretty odd if so and still far from the norm.

      The very fact that this "year of payment after a year of effort" probably really means "an annual salary of c$100k per year (for some c > 1), paid fractionally every two weeks" pretty much undermines this bad attempt at a retort.

    1. I feel like better code visualization would solve a lot of my problems. Or at least highlight them.

      The other commenter talks about a typical sw.eng. approach to visualization (flame graphs), but I want programs visualized as a manufacturing/packing/assembly line on a factory floor. Almost like node editors like Unreal's Blueprints, but in three dimensions, and shit visibly moving around between tools on the line in a way that you can actually perceive. Run the testcase on a loop, so you have a constant stream of "shit visibly moving around", and it runs at fractional speed so the whole process takes, say 10 seconds from front-to-back instead of near instantaneously like it normally would (and allow the person who's debugging to control the time scaling, of course). You find bugs by walking the line and seeing, "oh, man, this purple shit is supposed to be a flanged green gewgaw at this point in the line", so you walk over and fix it.

      (This is what I want VR to really be used for, instead of what's capturing people's attention today—games and lame substitutes for real world interaction like Zuckerberg is advocating for.)

    1. A big cause of complex software is compatibility and the requirement to support old features forever.

      I don't think so. I think it's rather the opposite. Churn is one of the biggest causes for what makes modifying software difficult. I agree, however, with the later remarks about making it easy to delete code where it's no longer useful.

    2. An untrusted program must inspire confidence that it can be run without causing damage.

      I don't think you can get this with anything other than a sandbox, cf Web browsers.

      I'm not sure I understand what it means to say that programs must "tell the truth about their side effects". What if they don't? (How do you make sure what they're telling is the truth?)

    1. to allow ample space for editing the source code of scripts, the form overflows the page, which requires scrolling down to find the "save" button and avoid losing changes

      The "Edit" button should morph into a "Save" button instead.

    1. it's worthwhile to learn the ins and outs of coding a page and having the page come out exactly the way you want

      No. Again, this is exactly what I don't want. (I don't mean as an author—e.g. burdened with hand-coding my own website—I mean as a person whose eyes will land on the pages that other people have authored with far more frequency than I do look at my own content.)

      As I mentioned—not just in my earlier responses to this piece, but elsewhere—the constant hammering on about how much control personal homepages give the author over the final product is absolutely the wrong way to appeal to people. In the first place, it only appeals to probably about a tenth of as many people as the people advocating think it does. In the second place, the results can be, and usually are, not good; cf. MySpace.

      The best thing about Facebook, Twitter, etc? Finally getting people to separate content from presentation. Hand-coded CSS doesn't get you there alone. That's basically a talisman (false shibboleth). Facebook content is in a database somewhere. The presentation gets layered on through subsequent transformations into different HTML+CSS views.

    2. The web is a mire of toxic and empty content because we, as users, took the easy path and decided to consume content instead of creating it.

      As I've said elsewhere about the fediverse: a funny thing to me is that Mastodon, while architecturally guilty of the same sort of things as the big social networks outlined in this post (centralization around nodes instead of small, personal, digital homes), is often touted as being able to deliver a lot of the same benefits, but I don't see them. For one thing, "start a webring and get to know your virtual neighbors" sounds a lot like "you get to pick your own instance". But secondly, and most relevant to the passage here, is that the types of people I run across in the fediverse for the most part all seem to be of a certain "type". As someone who doesn't use Twitter, joining Mastodon put me in contact with a lot more toxicity than not. I don't see how followthrough on the grand vision here—which will certainly almost exclusively be undertaken by the same sorts of people you find in the fediverse (with this piece, the author shows they know their audience and go right in on the pandering)—won't result in much difference from what you can get on any typical Mastodon instance—i.e. exactly the sort of thing that's basically the template for pieces like this one.

    3. Tim Berners-Lee released the code for the web so that no corporation could control the web or force users to pay for it.

      While the precondition is true, there is an ahistorical suggestion/understanding of the history of the Web at play here.

    4. So seek out new and interesting sites, and link to them on your site. Reach out to them, and see if they'll link to you. Start a dialog. The way to build a better web is to build a better web of people.

      About half the touted benefits of the approach this piece advocates for could be better achieved by instead convincing people to be more forthcoming with making themselves available for contact by e-mail.

      Hand-coding websites (something that I actually do for myself) does not inherently presage the sorts of things the author thinks that it does. It could be true that we would be just as well off—if not better—if everyone were to go out and sign up for Owlstown and treat it like a content depository.

    5. The anti-vaccine and anti-mask groups are a prime example

      Anti-vax positions are the product of anonymity? Pretty sure the harbinger of the anti-vax nonsense are groups of Facebook moms...

    6. Hidden behind a veil of anonymity

      I'm not sure that modern conventions have resulted in heightened use (and abuse) of anonymity. The trend seems to be in the opposite direction. Most major social networks, incl. Facebook, Twitter, and GitHub, either require/favor use of real names or manage to cultivate a sense of obligation from most of their users to do so. Reddit is more old-style Web than any of the others, and anonymity is by far the norm. It's actually very, very weird to go on Reddit and use your real name unless you're a public figure doing an AMA, and it's only slightly less uncommon to post under a pseudonym but have your real-life identity divulged elsewhere, out-of-band. Staying anonymous on Reddit is almost treated as sacred.

      Likewise in the actual old Web, during the age of instant messengers and GeoCities, people pretty much did everything behind a "screen name".

    7. the real reason nobody was collecting massive amounts of data on users was that nobody had thought to do it yet. Nobody had foreseen that compiling a massive database of individual users likes, dislikes, and habits would be valuable to advertisers.
    8. This appeal would have a greater effect if it weren't itself published in a format that exhibits so much of what was less desirable of the pre-modern Web—fixed layouts that show no concern for how I'm viewing this page and causes horizontal scrollbars, overly stylized MySpace-ish presentation, and a general imposition of the author's preferences and affinity for kitsch above all else—all things that we don't want.

      I say this as someone who is not a fan of the trends in the modern Web. Responsive layouts and legible typography are not casualties of the modern Web, however. Rather, they exhibit the best parts of its maturation. If we can move the Web out of adolescence and get rid of the troublesome aspects, we'd be doing pretty good.

    1. Not realizing that you need to remove the roadblocks that prevent you from scaling up the number of unpaid contributors and contributions is like finding a genie and not checking to see if your first wish could be for more wishes.

      Corollary: nobody wants janky project infrastructure to be a roadblock to getting work done. It should not take weeks or days of unrewarded effort in the unfamiliar codebase of a familiar program before a user is confident enough to make changes of their own.

      I used to use the phrase 48-hour competency. Kartik says "an hour (or three)". Yesterday I riffed on the idea that "you have 20 seconds to compile". I think all of these are reasonable metrics for a new rubric for software practice.

    1. except its codebase is completely incomprehensible to anyone except the original maintainer. Or maybe no one can seem to get it to build, not for lack of trying but just due to sheer esotericism. It meets the definition of free software, but how useful is it to the user if it doesn't already do what they want it to, and they have no way to make it do so?

      Kartik made a similar remark in an older version of his mission page:

      Open source would more fully deliver on its promise; are the sources truly open if they take too long to grok, so nobody makes the effort?

      https://web.archive.org/web/20140903010656/http://akkartik.name/about

    1. CuteDepravity asks:

      With minimal initial funding (ex: $10k) How would you go about making 1 million dollars in a year ?

      My response is as follows:

      Buy $10k of precious metals (or pick your preferred funge, e.g. cryptocurrency), and then turn around and sell it for a net loss of $1. Repeat this 100 more times, using the sale price of your last sale as the capital for your next purchase and selling it for a $1 net loss over the purchase price. A year should give you enough time to make 202 trades. Your last trade should take you from under a million to just over a million in revenue—from $994,950 to $1,004,849 and with your balance at EOY being $9,899. Deduct your losses from the reward you collect from whomever you made this million dollar bet with. (Hopefully the wager was for greater than $101 and whatever this does to your taxes.)
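
      For the record, the arithmetic checks out. A quick sketch to verify it (plain JS, nothing assumed beyond the numbers above):

      // 101 buy/sell cycles (202 trades), each sale $1 below its purchase price,
      // starting from $10,000.
      let capital = 10_000;
      let revenue = 0;
      let trades = 0;

      for (let cycle = 0; cycle < 101; cycle++) {
        const purchase = capital;  // buy (first trade of the cycle)
        const sale = purchase - 1; // sell at a $1 net loss (second trade)
        revenue += sale;
        capital = sale;
        trades += 2;
      }

      console.log({ trades, revenue, capital });
      // -> { trades: 202, revenue: 1004849, capital: 9899 }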

    1. a complex problem should not ~be regarded immediately in terms of computer instruc- tions, bits, and "logical words," but rather in terms and entities natural to the problem itself, abstracted in some suitable sense

      Likewise, a program being written (especially one being written anew instead of by adapting an existing one) should be written in terms of capabilities from the underlying system that make sense for the needs of the greater program, and not by programming directly against the platform APIs. In the former case, you end up with a readable program (that is also often portable), whereas in the latter case, what you end up writing amounts to a bunch of glue between existing system components that may not work together in any comprehensible way to the half of the audience who is not already intimately familiar with the platform in question, but is no less capable of making meaningful contributions.
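
      To illustrate (the names here are invented): the bulk of the program is written against a small capability layer phrased in the problem's own terms, and the platform API shows up in exactly one place.

      // Capability layer: phrased in terms the greater program cares about.
      const drafts = {
        async save(id, text) {
          localStorage.setItem(`draft:${id}`, text); // the platform detail lives here only
        },
        async load(id) {
          return localStorage.getItem(`draft:${id}`) ?? "";
        },
      };

      // The rest of the program reads in problem terms, not platform terms.
      async function restoreEditor(editor, draftId) {
        editor.value = await drafts.load(draftId);
      }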

    2. must not consist of a bag of tricks and trade secrets, but of a general intellectual ability

      I often think about how many things like Spectre/Meltdown are undiscovered because of how esoteric and unapproachable the associated infrastructure is that might otherwise better allow someone with a solid lead to follow through on their investigation.

    3. The amount of resistance and prejudices which the farsighted originators of FORTRAN had to overcome to gain acceptance of their product is a memorable indication of the degree to which programmers were preoccupied with efficiency, and to which trickology had already become an addiction
    1. why the heck am I going to write a comment that is only visible from this one page? There are hundreds (maybe thousands) of pages on the internet making use of the fact that there is no clear explanation of this on the web.

      I.

      In You can't tell people anything, Chip Morningstar recalls, 'People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “who’s going to pay to make all those links?”'

      II.

      Ted's vision for Xanadu was that all things connected would be shown to be connected.

      III.

      The people we're supposed to laugh at—the ones asking who's going to create all the links—had it right. There's just no way to enforce Ted's vision—no way for all links to be knowable and visible. That's because Ted's vision is fundamentally at odds with reality, just like the idea of unbreakable DRM. The ability of two people to whisper to one another in the corner of a cozy pub about a presentation you gave to the team during work hours, without your ever knowing that commentary is being exchanged (or even the existence of the pub meeting where it happens), is Xanadu's analog hole.

    1. it would nearly impossible to understand the web if you have never used it

      But TBL is talking about "finding out about" the Web by experiencing it, which is not the same thing as understanding it. As the personal reflection that this post opened with showed, people are capable of experiencing the Web without understanding it.

  4. www.research-collection.ethz.ch
    1. The two notable exceptions are the Lispmachine operating system, which is simply an extension to Lisp (actually a programminglanguage along with the operating system) and the UNIX operating system, which providesfacilities for ‘patching’ programs together.

      Oberon is a better example of this than UNIX. Internally, the typical UNIX program (written in C) uses function calls and shared access to in-memory "objects", but must use a different mechanism entirely (file descriptors) for programs to communicate.

    1. Explain clipboard macros as one use case for bookmarklets. E.g.:

      navigator.clipboard.writeText(`Hey there.  We don't support custom domains right now, but we intend to allow that in the future.`)
      

      Explain that they can be parameterized, too. E.g.:

      let name = String(window.getSelection());
      navigator.clipboard.writeText(`Hey, ${name}...`);
      

      They can create a menu of bookmarklets for commonly used snippets.


    1. Feature request (implement something that allows the following): 1. From any page containing a bookmarklet, invoke the user-stored bookmarklet בB 2. Click the bookmarklet on the page that you wish to be able to edit in the Bookmarklet Creator 3. From the window that opens up, navigate to a stored version of the Bookmarklet Creator 4. Invoke bookmarklet בB a second time from within the Bookmarklet Creator

      Expected results:

      The bookmarklet from step #2 is decoded and populates the Bookmarklet Creator's input.

      To discriminate between invocation type II (from step #2) and invocation type IV (from step #4), the Bookmarklet Creator can use an appropriate class (e.g. https://w3id.example.org/bookmarklets/protocol/#code-input) or a meta-based pragma or link relation.

    2. <title>Bookmarklet Creator</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

      We should perhaps include a rel=canonical or rel=alternate here. But what are the implications for shares and remixes? This also perhaps exposes a shortcoming in Hypothes.is's resource equivalence scheme, cf:

      Maybe the document itself, when the application "wakes up" should check the basics and then insert them as appropriate?
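
      A sketch of that "wake up and check the basics" idea (the URL is a placeholder, not a real published location):

      // If the document doesn't already declare a canonical URL, insert one
      // pointing at a known home for the application.
      const CANONICAL_URL = "https://example.org/bookmarklet-creator/";

      if (!document.querySelector('link[rel="canonical"]')) {
        const link = document.createElement("link");
        link.rel = "canonical";
        link.href = CANONICAL_URL;
        document.head.append(link);
      }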

    3. output.addEventListener("click", ((event) => { if (event.target == output.querySelector("a.bookmarklet")) { alert("It's not an ordinary link; you need to bookmark it");

      This should use the registered control service pattern (or something). It's too hard to override this behavior. For example, I could remix the page and remove it, but I should also be able to write a bookmarklet that achieves the same effect.

    1. they walked into that cafe, looked around, and decided I was the easy prey.

      I had a similar feeling in 2016 after a recruitment attempt matching exactly the process described in theogravity's comment on this post. After first coming into contact at the grocery store, we exchanged numbers and met up at a coffee shop (a meeting that was arranged via text message within a couple days/weeks). I was surprised to find out that it was an MLM something-or-other, then I made it clear that I wasn't interested (and subtly made him feel that he should leave while I stayed and finished the hot cider I ordered). But for weeks afterward, I felt embarrassed and insulted that I must have been giving off some vibe as easily dupable. I couldn't reconcile it with the fact that in our first "chance" meeting, I mentioned that I'd left Samsung a few weeks earlier and didn't exactly have any concrete work plans and wasn't especially worried about it. Somehow, though, I guess I was perceived as being susceptible to some get-rich-quick nonsense...?

      Having said that—the author here recounts four meetings and still no revelation about the specifics of what their agenda was. That doesn't sound like "easy" prey to me. That's a pretty involved trap, assuming it is one.

    1. “I didn’t mean to back into your car in the parking lot.” Or, “I didn’t intend to hurt you with my remarks.”

      Worth noting that one of these is concrete (and concretely bad) and avoidable, the other one less so. It even permits claims that are unfalsifiable.

    1. So far it works great. I can now execute my bookmarklets from Twitter, Facebook, Google, and anywhere else, including all https:// "secure" websites.

      In addition to the note above about this being susceptible to sites that deny execution of inline scripts, this also isn't really solving the problem. At this point, these are effectively GreaseMonkey scripts (not bookmarklets), except initialized in a really roundabout way...

    2. work-around

      Bookmarklets and the JS console seem to be the workaround.

      For very large customizations, you may run into browser limits on the effective length of the bookmarklet URI. For a subset of well-formed programs, there is a way to store program parts in multiple bookmarklets, possibly loaded with the assistance of a separate bookmarklet "bootloader", although this would be tedious. The alternative is to use the JS console.

      In Firefox, you can open a given script that you've stored on your computer by pressing Ctrl+O/Cmd+O, selecting the file as you would in any other program, and then pressing Enter. (Note that this means you might need to press Enter twice, since opening the file in question merely puts its contents into the console input and does not automatically execute it—sort of a hybrid clipboard thing.) I have not tested the limits of the console input for e.g. input size.

      As far as I know, you can also use the JS console to get around the design of the dubious WebExtensions APIs—by ignoring them completely and going back to the old days and using XPCOM/Gecko "private" APIs. The way you do it is to open about:addons by pressing Ctrl+Shift+A (or whatever), opening or pasting the code you want to run, and then pressing Enter. This should, I think, give you access to all the old familiar Mozilla internals. Note, though, that all bookmarklet functionality is disabled on about:addons (not just affecting bookmarklets that would otherwise violate CSP by loading e.g. an external script or dumping an inline one on the page).
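
      A sketch of the multi-bookmarklet "bootloader" idea mentioned above (window.__parts is an invented name; each javascript: line is saved as its own bookmark and clicked in order):

      // Part bookmarklets: each stashes one chunk of the program on the page.
      javascript:void(((window.__parts = window.__parts || [])[0] = "const name = document.title;"))
      javascript:void(((window.__parts = window.__parts || [])[1] = "alert('Running on: ' + name);"))

      // Bootloader bookmarklet: stitch the stored chunks together and execute them.
      javascript:void(new Function((window.__parts || []).join("\n"))())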

    1. future software development should increasingly be oriented toward making software more self-aware, transparent, and adaptive

      From "Software finishing":

      once software approaches doneness[...] pour effort into fastidiously eliminating hacks around the codebase [...] presents [sic] the affected logic in a way that's clearer [...] judiciously cull constructs of dubious readability[...]

      one of the cornerstones of the FSF/GNU philosophy is that it focuses on maximizing benefit to the user. What could be more beneficial to a user of free software than ensuring that its codebase is clean and comprehensible for study and modification?

    2. When something goes wrong with a computer, you are likely to be stuck. You can't ask the computer what it was doing, why it did it, or what it might be able to do about it. You can report the problem to a programmer, but, typically, that person doesn't have very good ways of finding out what happened either.


    1. What if every button in your app had an email address? What if apps could email each other?

      Also, what if every app could email you? I mean the app itself—not the product team.

    1. I think there are some systems design insights here that might be valuable for p2p, Web3, dweb, and other efforts to reform, reboot, or rethink the internet.

      Indeed. Why did Dat/Beaker/Fritter fizzle out? Answer: failure to exapt.

    2. This is possible because the internet isn’t designed around telephone networking hardware. It isn’t designed around any hardware at all. Instead, the internet runs on ideas, a set of shared protocols. You can implement these protocols over a telephone, over a fiberoptic cable, or over two tin cans connected with string.
    1. Scale is also killing open source, for the record. Beyond a certain codebase size or rate of churn, you need to be a mega-corp to contribute to a nominally open-source project.

      What Stephen ignores is that the sort of software that this applies to is pretty much limited to the sort of software that only megacorps are interested in to begin with. As Jonathan pointed out in the post, most "user-facing" software is still not open source. (Side note: that's the real shame of open source.)

      Who cares how hard it is to contribute to the kinds of devops shovelware that GitHubbers have disproportionately concerned themselves with?

    1. The difficulty encountered by authors today when they create metadata for hypertexts points to the risk that adaptive hypermedia and the semantic Web will be initiatives that fit only certain, well-defined communities because of the skills involved
    1. Here's a xanalink to a historical piece-- Here's a xanalink to a scientific work-- Here's a xanalink to an early definition of "hypertext"

      No, no, and no. This is an interesting fallout from Ted's fatwa that links be kept "outside the file".

  5. Mar 2022
    1. Why spend time, effort and ultimately money on improving productivity when you can just get stuff for free?

      As a rejoinder, have you ever undertaken any serious endeavor to work out exactly how difficult it would be to pay for the software you use that happens to be open source? It's massively difficult.

      It's hard enough getting people who sell SaaS/PaaS stuff to let you pay them a fair price for letting you use their "free" tier if none of their other (often pricey, enterprise-/B2B-oriented) plans offer you anything of value—and these are entities that already have payment processing infrastructure set up!

    2. Open source strongly favors maintenance and incremental improvement on a stable base. It also encourages cloning and porting. So we get an endless supply of slightly-different programming languages encouraging people to port everything over.

      Huh? The argument here is that what's killing the progress of software development is... forks of programming languages? We live in different worlds.

    1. she became wise for a world that's no longer here

      This might just be true for Tim's grandma, but something people neglect generally is that you can find just as much bad advice elsewhere—where wisdom from another time is not the reason. Some people just have no idea what they're talking about—and they're never going to be subjected to a fitness function that culls the nonsense they dispense.

    1. Many of the items in the docuverse are not static, run-of-the-mill materials, i.e. unformatted text, graphics, database files, or whatever. They are, in fact, executable programs, materials that from a docuverse perspective can be viewed as Executable Documents (EDs). Such programs run the gamut from the simplest COBOL or C program to massive expert systems and FORTRAN programs. Since the docuverse address scheme allows us to link documents at will, we can link together compiled code, source code, and descriptive material in hypertext fashion. Now, if, in addition, we can prepare and link to an executable document an Input-Output Document (IOD), a document specifying a program's input and output requirements and behavior, and an RWI describing the IOD, we can entertain the notion of integrating data and programs that were not originally designed to work together.

      (NB: RWI — Real World Interpretation)

    1. If you happen to annotate page three, and then weeks or years later visit the single page view wouldn’t you want to see the annotation you made? If the tool you are using queries for annotations using only the URL of the document you are viewing you won’t see it.
    2. A primary challenge is that documents, especially online ones, change frequently.

      But they don't. They get re-issued (https://hypothes.is/a/FWEDwInLEeyhtVeF1gArUw) and their associated identifier (URI) gets re-used for the new version, and the original issues get routinely "disappeared". This is a failure to carry out TBL's original vision for the Web in practice–where every article is given a name that can be systematically (mechanically) resolved, or if not that, then at least used as a handle to unambiguously refer to the thing.

    1. I wish education was based around this principle.

      This is a recurring grievance of mine with the way most people approach writing documentation. Closely related: code comments should explain the why, not the what (or worse the how—which just ends up rehashing the code itself).

      Too many people try to launch right in and explain what they're doing—their solution to a problem—without ever adequately outlining the problem itself. This might seem like too much of a hassle for your readers, but often when the problem is explained well enough, people don't actually need to read your explanation of the solution.