  1. May 2022
    1. The thrill of getting "hello world" on the screen in Symbian/S60 is not something I'll ever forget. Took ages of battling CodeWarrior and I think even a simple app was something like 6 files
    1. Knuth recommended getting familiar with the program by picking up one particular part and "navigating" the program to study just that part. (See https://youtu.be/D1jhVMx5lLo?t=4103 at 1:08:25, transcribed a bit at https://shreevatsa.net/tex/program/videos/s04/) He seems to find using the index (at the back of the book, and on each two-page spread in the book) to be a really convenient way of "navigating" the program (and indeed randomly jumping through code, as you said), and he thinks that one of the convenient things about the "web" format is that you can explore it the way you want. This is really strange (to us), as the affordances we're used to from IDEs / code browsers etc. are really not there

      I can't help but think that currentgen programmers misunderstand Knuth and anachronize him. They're products of the current programming regime, where most never lived in a world without, say, structured programming. So when we hear "literate programming", we try to understand it by building off our conception of current programming practices, working out what Knuth could mean given widespread modern affordances as a precondition. Really, though, Knuth is just advocating for something that approximates (with ink and paper) currentgen tooling. His LP is in fact more primitive than the reference point we're treating as the thing he's improving upon, but it's an improvement nonetheless over something more primitive still.

    1. I still stick by the fact that web software used to be like: point, click, boom. Now it is like: let's build a circuit board from scratch, make everyone learn a new language, require VPS type hosting, and get the RedBull ready because it's going to take a long time to figure out.
    1. Unlike conventional blogs which are read-only this blog contains active code, so you can click on any of the links in this tiddler and see the code and see how the system behaves.
    1. I have no doubt that the four children, now in their 70s and 80s, who spent their early years in the basement house would never tolerate such activity in their own neighborhoods today.
    1. I always joke that I'm fluent in jQuery, but no absolutely no javascript[...] There are still banks and airlines that run COBOL

      The joke being that both jQuery and COBOL were crummy right from the beginning, and now they're crummy and old, right?

    1. i noted first that the headline performance gain was 10% & 11% this for a team given far more time and resources to optimize for their goal than most

      What does this even mean?

      I also find it laughable/horrifying the comparison between jQuery and greybeardism (or, elsewhere, a bicycle[1]). jQuery is definitely not that. It is the original poster child for bloat.

      For all the people feigning offense at the "greybeard" comments at the thread start, it's much more common, unfortunately, to find comments like this one, with people relishing it when it comes to jQuery, because it confers an (undeserved) sense of quasi-moral superiority, wisdom, and parsimony, even though—once again—jQuery represents anything but those qualities.

      1. https://news.ycombinator.com/item?id=31440670
    1. I’d just like to point out that the problem with jQuery today is it was a library built to smooth over and fix differences between browser JavaScript engines

      jQuery is primarily a DOM manipulation library and incidentally smoothed over differences in browsers' DOM implementations. To the extent that there were any significant differences in browsers' JS implementations, jQuery offered little if anything to fix that.

    1. jQuery-style syntax for manipulating the DOM

      This is 70+% of the reason why I end up ripping out jQuery from old/throwaway projects when I start trying to hack on them. The jQuery object model is really confusing (read: tries to be too cute/clever), the documentation sucks, relatively speaking, and the code is an impenetrable blob, which means reading through it to figure out WTF it's supposed to do is a non-option.

    1. The problem is that a lot of old school website devs can write jQuery and very very little actual JavaScript.

      This happens to be true of many of the new/up-to-date Web developers I see, too.

      Anecdote: I never really did StackOverflow, either as a reader or a contributor. One day several years ago (well after StackOverflow had taken off), I figured that since I see people complain about JS being confusing all the time and since I know JS well, then I'd go answer a bunch of questions. The only problem was that when I went to the site and looked at the JS section, it was just a bunch of jQuery and framework shit—too much to simply ignore and try to find the ones that were actually questions about JS-the-language. "I know," I thought. "I'm in the JS section. I'll just manually rewrite the URL to jump to the ECMAScript section, which surely exists, right?" So I did that, and I just got redirected to the JS section...


    1. I'm not going to write document.querySelector every time I have to select some nodes, which happens quite often.

      This remark manages to make this one of the dumbest comments I've ever read on HN.
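
      (For context, a minimal sketch of the obvious alternative: if typing document.querySelector every time is the pain point, a two-line alias gives you jQuery-terse selection with zero dependencies. The $/$$ names here are just conventional picks, nothing standard.)

        // hypothetical aliases: terse selection without the library
        const $  = (sel, scope = document) => scope.querySelector(sel);
        const $$ = (sel, scope = document) => [...scope.querySelectorAll(sel)];

        // usage reads about like the jQuery it replaces:
        $$(".post .title").forEach((el) => el.classList.add("highlight"));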

    1. now something breaks elsewhere that was unsuspected and subtle. Maybe it’s an off-by-one problem, or the polarity of a sign seems reversed. Maybe it’s a slight race condition that’s hard to tease out. Nevermind, I can patch over this by changing a <= to a <, or fixing the sign, or adding a lock: I’m still fleshing out the system and getting an idea of the entire structure. Eventually, these little hacks tend to metastasize into a cancer that reaches into every dependent module because the whole reason things even worked was because of the “cheat”; when I go back to excise the hack, I eventually conclude it’s not worth the effort and so the next best option is to burn the whole thing down and rewrite it…but unfortunately, we’re already behind schedule and over budget so the re-write never happens, and the hack lives on.

      I'm having real difficulty understanding what is going on here and in what situations such cascading problems occur.

      Is it a case of under-abstraction?

    2. Furthermore, its release philosophy is supposed to avoid what I call “the problem with Python”: your code stops working if you don’t actively keep up with the latest version of the language.
    1. the skills to tweak an app or website into what they need

      Does "what they need" here implicitly mean "a design that no one really benefits from but you can bill a client for $40+/hr for"? Because that's how Glitch comes off to me—more vocational (and even less academic) than a bootcamp without the structure.

      What was that part about home-cooked meals?

    2. The biggest barriers to coding are technical complexity around processes like collaboration and deployment, and social obstacles like gatekeeping and exclusion — so that's what we've got to fix
    3. Building and sharing an app should be as easy as creating and sharing a video.

      This is where I think Glitch goes wrong. Why such a focus on apps (and esp. pushing the same practices and overcomplicated architecture as people on GitHub trying to emulate the trendiest devops shovelware)?

      "Web" is a red herring here. Make the Web more accessible for app creation, sure, but what about making it more accessible (and therefore simpler) for sharing simple stuff (like documents comprising the written word), too? Glitch doesn't do well at this at all. It feels less like a place for the uninitiated and more like a place for the cool kids who are already slinging/pushing Modern Best Practices hang out—not unlike societal elites who feign to tether themself to the mast of helping the downtrodden but really use the whole charade as machine for converting attention into prestige and personal wealth. Their prices, for example, reflect that. Where's the "give us, like 20 bucks a year and we'll give you better alternative to emailing Microsoft Office documents around (that isn't Google Sheets)" plan?

    4. as if the only option we had to eat was factory-farmed fast food, and we didn’t have any way to make home-cooked meals

      See also An app can be a home-cooked meal along with this comment containing RMS's remarks with his code-as-recipe metaphor in the HN thread about Sloan's post:

      some of you may not ever write computer programs, but perhaps you cook. And if you cook, unless you're really great, you probably use recipes. And, if you use recipes, you've probably had the experience of getting a copy of a recipe from a friend who's sharing it. And you've probably also had the experience — unless you're a total neophyte — of changing a recipe. You know, it says certain things, but you don't have to do exactly that. You can leave out some ingredients. Add some mushrooms, 'cause you like mushrooms. Put in less salt because your doctor said you should cut down on salt — whatever. You can even make bigger changes according to your skill. And if you've made changes in a recipe, and you cook it for your friends, and they like it, one of your friends might say, “Hey, could I have the recipe?” And then, what do you do? You could write down your modified version of the recipe and make a copy for your friend. These are the natural things to do with functionally useful recipes of any kind.

      Now a recipe is a lot like a computer program. A computer program's a lot like a recipe: a series of steps to be carried out to get some result that you want. So it's just as natural to do those same things with computer programs — hand a copy to your friend. Make changes in it because the job it was written to do isn't exactly what you want. It did a great job for somebody else, but your job is a different job. And after you've changed it, that's likely to be useful for other people. Maybe they have a job to do that's like the job you do. So they ask, “Hey, can I have a copy?” Of course, if you're a nice person, you're going to give a copy. That's the way to be a decent person.

    5. If you’re a coder, when’s the last time you just quickly built something to solve a problem for yourself or simply because it was a fun idea?

      And how future-proof was the result or how easy was it to make sure you could share it with others in a form that they could make use of (and not be dependent on you or some third-party or their internet connection)?

    1. I’ve been using heroku for years and while some might complain that it has stagnated and mostly stayed the same since the salesforce acquistion, I think that’s been an asset.
    1. Cold starts depend a lot on what people actually deploy. They're really fast for an optimized Go binary, really slow for most Node apps.
    1. Before deploying, we need to do one more thing. goStatic listens on port 8043 by default, but the default fly.toml assumes port 8080.

      When I created a blank app with flyctl launch, it gave me a fly.toml with 8080. The fly.toml cloned from the repo, however, already has it set to 8043.

      It's possible that the quoted section is still correct, but it's ambiguous.

    1. This column will continue only if I hear from people who use literate-programming systems that they have not designed themselves.

      And it did not continue.
    1. To keep tiny mistakes from crashing our software or trashing our data, we write more software to do error checking and correction.

      This is supposed to be the justification for increasing code size. So what's the excuse for projects today? Software of today is not exactly known for adding more "error checking and correction". It feels more like growth for growth's sake, or stimulating some developer's sense of "wouldn't it be cool if [...]?".

    1. The atomic unit of developer productivity ought then to be one iteration of the inner loop. The appropriate unit is not code quantity, but iteration frequency. We might call this unit of quantity developer hertz.
    1. Because we didn’t have real marketing people, we updated the product to became more and more interesting to us, the developers, and less interesting to potential buyers.
    1. suppose when you needed to make a permanent edit to the style sheet on your homepage, you opened up the CSS viewer, made the edit, and the result persists—not just in your browser, but by changing the very style sheet itself
    1. because of the "LP will never be mainstream" belief, I'm still thinking of targeting mainstream languages, with "code" and "comments"

      No need to constrain yourself to comments, though. Why comments? We can do full-fledged, out-of-line doclets.

    2. an acknowledgement of network effects: LP is unlikely to ever catch on enough to be the majority, so there needs to be a way for a random programmer using their preferred IDE/editor to edit a "literate" program

      This is part of the reason why I advocate for language skins for comparatively esoteric languages like Ada.

    3. in other words, there would be no "weave" step

      Well, there could still be a weave step—same as there is with triple scripts (to go from compilation form back to the original modules) or with Markdown, which should be readable both as plain text and in rendered form.

    1. in an ideal LP system, the (or at least a) source format would just simply be valid files in the target language, with any LP-related markup or whatever in the comments. The reason is so that LP programs can get contributions from "mainstream" programmers. (It's ok if the LP users have an alternative format they can write in, as long as edits to the source file can be incorporated back.)

      (NB: the compilation "object" format here would, much like triple scripts, be another form of human readable source.)

  2. geraldmweinberg.com
    1. Welcome to the Gerald M. Weinberg Fan Site!

      Do we have to wait for people to die before these kinds of digital fanclubs can materialize for people who aren't in entertainment?

    1. Code can't explain why the program is being written, and the rationale for choosing this or that method. Code cannot discuss the reasons certain alternative approaches were taken.

      Having trouble sourcing this quote? That's because some shithead who happens to run a popular programming blog changed the words but still decided to present it as a quote.

      Raskin's actual words:

      the fundamental reason code cannot ever be self-documenting and automatic documentation generators can’t create what is needed is that they can’t explain why the program is being written, and the rationale for choosing this or that method. They cannot discuss the reasons certain alternative approaches were taken. For example:

      :Comment: A binary search turned out to be slower than the Boyer-Moore algorithm for the data sets of interest, thus we have used the more complex, but faster method even though this problem does not at first seem amenable to a string search technique. :End Comment:

      From "Comments Are More Important Than Code" https://dl.acm.org/ft_gateway.cfm?id=1053354&ftid=300937&dwn=1

    1. This is a good case study for what I mean when I talk about the fanclub economy.

      Wouldn't it be better if gklitt were a willing participant in this aggregation and republishing of his thoughts, even if that only meant that there were a place set up in the same namespace as his homepage that would allow volunteers ("fans") to attach notes that you wouldn't otherwise be aware of if you made the mistake of thinking that his homepage were his digital home, instead of the place he's actually chosen to live—on Twitter?

    1. I loved the Moto G (the original—from during the brief time when Google owned Motorola). I used it for 6 years. It's not even an especially small phone. Checking the dimensions, it's actually slightly smaller (or, arguably, about the same size) when compared to either the iPhone 12 Mini and the iPhone 13 Mini—which I'd say makes those deceivingly named. They're nothing like that one that Palm made, which is called, uh... "Palm", I guess. (Described by Palm as about the size of a credit card.)

    1. memory usage and (lack of) parallelism are concerns

      Memory usage is a concern? wat

      It's a problem, sure, if you're programming the way NPMers do. So don't do that.

      This is a huge problem I've noticed when it comes to people programming in JS—even, bizarrely, people coming from other languages like Java or C#, where you'd expect them to at least try to keep doing things in JS the way they're comfortable doing them in their own language. Just because it's there (i.e. possible in the language, e.g. dynamic language features) doesn't mean you have to use it...

      (Relevant: How (and why) developers use the dynamic features of programming languages https://users.dcc.uchile.cl/~rrobbes/p/EMSE-features.pdf)

      The really annoying thing is that the NPM style isn't even idiomatic for the language! So much of what the NodeJS camp does is so clearly done in frustration and the byproduct of a desire to work against the language. Case in point: the absolutely nonsensical attitude about always using triple equals (as if to ward off some evil spirits) and the undeniable contempt that so many have for this.

    2. on baggage: "package.json", for example, is not an ECMA standard

    3. My argument: JavaScript is a future-proof programming language

    1. Also (related moreso to Future-proof), Haxe? Dart?

    2. Absent from consideration: * Gecko/XULRunner (check out the way that Zotero is built) * GtkJS

      Both deserve a look

    1. C# is a great language and the .NET standard library is probably the most thoughtfully crafted standard library I've ever used

      My reaction to the author's take on future-proof programming languages even before I got to this post (and this part in it) was that he should look at porting (or getting ported) the C# standard libraries (to JS—as part of my argument that JS is the future-proof programming language Krinke is looking for).

    1. My argument for the use of the Web as a medium for publishing the procedures by which the documents from a given authority are themselves published shares something in common with the argument for exploiting Lisp's homoiconicity to represent a program as a data structure that is expressed like any other list.

      There are traces here as well from the influence of the von Neumann computational model, where programs and data are not "typed" such that they belong to different "classes" of storage—they are one and the same.

    1. You give up private channels (DMs)

      Consider ways to build on the static node architecture that wouldn't require you to give this up:

      • PGP blobs instead of plain text (see the sketch after this list)
      • messages relayed through WebRTC when both participants are online
      • you could choose to delegate to a message service for your DMs, to guarantee availability, just like in the olden days with telephones
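
      A rough sketch of that first bullet, using the standard Web Crypto API as a stand-in for PGP (key distribution, and the payload size limits of raw RSA-OAEP, are hand-waved away here):

        // seal a DM so it can be dumped on the static node as just another
        // asset; only the holder of the matching private key can read it
        async function sealDm(text, recipientPublicKey) {
          const ciphertext = await crypto.subtle.encrypt(
            { name: "RSA-OAEP" },
            recipientPublicKey,
            new TextEncoder().encode(text)
          );
          // base64 so the blob can live in a plain text/JSON file
          return btoa(String.fromCharCode(...new Uint8Array(ciphertext)));
        }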
    1. I’d start using a library written by somebody else to get started, then eventually replace it with my own version
  3. www.mindprod.com
    1. local: a (e.g. aPoint)
       param: p (e.g. pPoint)
       member instance: m (e.g. mPoint)
       static: s (e.g. sPoint)

      This is really only a problem in languages that make the unfortunate mistake of allowing references to unqualified names that get fixed up as if the programmer had written this.mPoint or Foo.point. Even if you're writing in a language where that's possible, just don't write code like that! Just because you can doesn't mean you have to.

      The only real exception is distinguishing locals from parameters. Keep your procedures short and it's less of a problem.
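
      A sketch of what "just don't write code like that" means in practice: qualify every member access explicitly, and the prefixes have nothing left to disambiguate.

        class Point {
          constructor(x, y) {
            this.x = x; // members are always written as this.x, never as a
            this.y = y; // bare x that the language fixes up behind your back
          }
          translate(dx, dy) {
            // short procedure: params (dx, dy) vs. members (this.x, this.y)
            // stay obvious without any pPoint/mPoint decoration
            return new Point(this.x + dx, this.y + dy);
          }
          static origin = new Point(0, 0); // referenced as Point.origin, not sOrigin
        }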

    2. Show me a switch statement as if it had been handled with a set of subclasses. There is underlying deep structure here. I should be able to view the code as if it had been done with switch or as if it had been done with polymorphism. Sometimes you are interested in all the facts about Dalmatians. Sometimes you are interested in comparing all the different ways different breeds of dogs bury their bones. Why should you have to pre-decide on a representation that lets you see only one point of view?

      similar to my strawman for language skins

    3. We would never dream of handing a customer such error prone tools for manipulating such complicated cross-linked data as source code. If a customer had such data, we would offer a GUI-based data entry system with all sorts of point and click features, extreme data validation and ability to reuse that data, view it in many ways and search it by any key.

      This old hat description captures something not usually brought up by CLI supremacists: GUIs as ways to validate and impose constraints on structured data.

    1. I'd have to set up the WP instance and maintain it.

      (NB: this is in response to the question Why not just use wordpress + wysiwyg editor similar to *docs, and you're done?.)

      This is as good an explanation as any for local-first software.

      A natural response (to potatolicious's comment) is, "Well, somebody has to maintain these no-code Web apps, too, right? If there's someone in the loop maintaining something, the question still stands; wouldn't it make more sense for that something to be e.g. a WordPress instance?"

      Answer: yeah, the no-code Web app model isn't so great, either. If service maintenance is a problem, it should be identified as such and work done to eliminate it. What that would look like is that the sort of useful work that those Web apps are capable of doing should be captured in a document that you can copy to your local machine and make full use of the processes and procedures that it describes in perpetuity, regardless of whether someone is able to continue propping up a third-party service.

    1. software engineers who do web development are by far among the worst at actually evaluating solutions based on their engineering merit

      There's plenty of irrationality to be found in opposing camps, too. I won't say that it's actually worse (because it's not), but it's definitely a lot more annoying, because it usually also carries overtones that there's a sort of well-informed moral and technological high ground—when it turns out it's usually just a bunch of second panel thinkers who themselves don't even understand computers (incl. compilers, system software, etc.) very well.

      This is what makes it hard to have discussions about reforming the practices in mainstream Web development. The Web devs are doing awful things, but at least ~half of the criticism that these devs are actually exposed to ends up being junk because lots of the critics unfortunately just have no fucking idea what they're talking about and are nowhere near the high ground they think they're standing on—often taking things for granted that just don't make sense when actually considered in terms of the technological uppercrust that they hope to invoke. Just a kneejerk "browser = bad" association from people who can't meaningfully distinguish between JS (the language), browser APIs, and the NPM corpus (though most NPM programmers are usually guilty of exactly the same...).

      It's a very "the enemy of my enemy is not my friend" sort of thing.

    1. Instead of being parsed, it was `import`-ed and `include`-d

      Flems does something like this:

      To allow you to use Flems with only a single file to be required the javascript and the html for the iframe runtime has been merged into a single file disguised as flems.html. It works by having the javascript code contained in html comments and the html code contained in javascript comments. In that way if loaded like javascript the html is ignored and when loaded as html the javascript part is ignored.

      https://github.com/porsager/flems#html-script-tag
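
      A minimal sketch of the same mechanism (not Flems's actual file, which is the merged runtime; this just illustrates the trick, which leans on JS treating <!-- and --> at the start of a line as comments):

        <!-- as JS, this whole line is a comment; as HTML, a comment opens here
        console.log("loaded via <script src=...>: only this part runs");
        /* a JS block comment opens here to swallow the HTML below
        -->
        <p>loaded as a page: the JS above sat inside the HTML comment</p>
        <!--
        */
        -->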

    1. I've referred to a similar (but unrelated) architecture diagram for writing platform-independent programs as being a Klein bowtie. There is a narrow "waist" (the bowtie knot), with either end of the bowtie being system-specific routines (addressed by a common interface in the right half of the diagram).

    2. From https://news.ycombinator.com/item?id=31378614:

      There's this tendency of languages to want to be the be-all end-all, i.e. to pretend that they are at the center of the universe. Instead, they should focus on interoperating with other languages[...] This is one reason I'm working on https://www.oilshell.org -- with a focus on INTEROPERABILITY

    1. wrt the sentiment in the McHale tweet:

      See Tom Duff's "Design Principles" section in the rc manual http://doc.cat-v.org/plan_9/4th_edition/papers/rc, esp.:

      It is remarkable that in the four most recent editions of the UNIX system programmer’s manual the Bourne shell grammar described in the manual page does not admit the command who|wc. This is surely an oversight, but it suggests something darker: nobody really knows what the Bourne shell’s grammar is.

    1. This can get much worse than the above example; the number of \'s required is exponential in the nesting depth. Rc fixes this by making the backquote a unary operator whose argument is a command, like this: size=`{wc -l `{ls -t|sed 1q}}
  4. www.dreamsongs.com
    1. the very existence of a master plan means, by definition, that the members of the community can have little impact on the future shape of their community,
    2. You cannot expect to evolve a window system into a spreadsheet.

      Reference to a Kiczales talking point, e.g. Why are Black Boxes so Hard to Reuse?.

    3. This sentence is compressed enough that the meaning of the strange word morat is clear

      This is a very odd place to apply the word "compression", since pretty much the opposite is happening: there's enough redundancy that a single signal error doesn't bungle the entire attempt to communicate.

    1. a little space where you can have chat, docs and shared calendar/gmail in one place…

      I think the only thing I liked that got close to this was Keybase.

    1. The argument used to propose its use is to avoid the construction of multiple volatile objects. This supposed advantage is not real in virtual machines with efficient garbage collection mechanisms.

      Consider a Sufficiently Smart Compiler/Runtime where a multiply-instanced class has the exact same runtime characteristics as code that has been hand-"tuned" to use a singleton.

    1. To be on time you must be early; it’s nearly impossible to be precisely on time – time is moving too fast. For instance, if a meeting starts at 1:00 you can’t walk in 1:00 – that occurs in a milli-second and then becomes the past. You must arrive before 1:00.

      This is a fine perspective as long as you're not penalizing people who arrive at 12:59:59 — "If you are on time, you are late" is a stupid mantra that, while my sample size is low, I've only heard from people who were themselves egregious time wasters and made the remarks as a way of honoring Ra.

      (I'd argue further that anyone who arrives at any time between [13:00:00, 13:01:00) is doing okay, so long as they're willing to accept that no one is obligated to wait for them. I.e. what "the meeting is at 1:00 PM" means is that everyone has permission to start the meeting at 13:00:00, whether you're there or not.)

    1. most papers are written unnecessarily complex, likely to make them appear more impressive than they actually are
    1. the reviewer wanted _more_ academese. It was the last paper of my grad school career and I was sick of academese. In so many words, I told them to pound sand. That was my only paper to never get published.
    1. When creating a singleton, the author is making the assumption that no program will ever have any reason to have more than one instance of the class.  This is a big assumption, and it often proves wrong later on when the requirements change.
    2. Avoid singletons from the start.
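
      A sketch of the least-effort way to follow this advice: export a class (or a factory), not an instance, and let the entry point decide how many there are. (Names here are made up for illustration.)

        class Config {
          constructor(path) {
            this.path = path; // each instance owns its own state
          }
        }

        // "there will only ever be one" is now the caller's (revisable) decision
        const appConfig  = new Config("/etc/app.conf");
        const testConfig = new Config("./fixtures/test.conf"); // requirements changed? no rewrite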
    1. That people show off these illegible globs in public only makes sense from a signaling perspective: They are saying, “look at how many nodes I have in my brain, amazing nodes

      See also: GitHub contribution graphs

    1. Why should the reader have to read every citation or trust that an author is not taking a citation out of context, when hyperlinks are available?

      How do hyperlinks neutralize the ability for people to take things out of context? I see it all the time. The backchannels of Wikipedia are rife with it.

    2. consider how silly it is to include MLA-style citations at the bottom of a text

      Academic citations are an amazing piece of technology, though.

    3. On the other hand, the notion of the “document” that is intrinsic to web development today is overdetermined by the legacy of print media.

      I dunno. I think all the things about dynamism and liveness that follow this claim are true in the minds of most people. The rarity is for people to conceive of content on the Web (or elsewhere rendered to a computer screen) as capable of being imbued with the fixity of print. Everything feels transient and rests on a shaky, fleeting foundation.

    4. let me sing the praises of documents for a moment. People often get carried away when they discover the original vision of hypertext, which involves a network of documents, portions of which are “transcluded” (included via hypertext) into one another. The implication is that readers could follow any reference and see the source material—and granted, this would be transformative. However, there’s a limit to the effectiveness of the knowledge network as a reading experience.
    1. That said, I've since realized I was wrong of course. Trying to maintain projects that haven't been touched in more than a year led to hours of fixing dependency issues.
    1. There are significant / very vocal people demand open source to be about the community. And Community Driven. And dumping code out isn't very "open source" by their standards.

      This is an easy one: those people are wrong.

    1. it's not as simple as copying homepage.php to homepagenew.php, hacking on it until it works, and then renaming homepagenew.php to homepage.php

      It actually can be easier than that if the only reason PHP is involved is for templating and you don't want CGI. (NB: This is admittedly contra "the mildly dynamic website" described in the article).

      If you're Alice of alice.example.com, then you open up your local copy of e.g. https://alice.example.com/sop/pub.app.htm, proceed to start "hacking on it until it works", then rsync the output.

    1. absolutely

      @45:18

      In fact, after I did this, Nigel sent me a CL to do exactly that and[...] it's probably better, and it probably fixed what performance problems there may be, if any, but it just wasn't as pretty so I pushed back. I like this better.

    1. Psst: Philip gave me a copy of this piece (just like he did many others before he decided to unpublish it). I will totally let you peek at my copy (if you want, and if you happen to be in Austin—I'll respect Philip's commandment not to distribute unauthorized copies, but to reiterate: you're free to look at the copy I already have). It's a great article, even if lots of the feedback at the time unnecessarily focused on quibbling with his decision to characterize the issue as being inherent to "command-line bullshittery", rather than charitably* interpreting it as general discontent with the familiar pain** of setting up/configuring/working with any given toolchain and the problems that crop up (esp. when it comes at the price of discouraging smart or even brilliant people who have interesting ideas they wish to pursue.)

      * See also https://pchiusano.github.io/2014-10-11/defensive-writing.html

      ** See also http://lighttable.com/2014/05/16/pain-we-forgot/

    1. Linux (and Wine) may prove to be an alternative here.

      If what we're discussing here is the decision to no longer opt in to playing along with the "Western" regime for IP, then why would they limit themselves to Linux and Wine—two products of attempts to play by the rules of the now-deprioritized regime? Why wouldn't they react by shamelessly embracing "pirated" forms of the (Windows) systems that they clearly have a revealed preference for? If hackability is the issue*, then that's ameliorated by the fact that NT/2000 source code and XP source code was leaked a while ago—again: the only thing stopping anyone from embracing those before was a willingness to play along and recognize that some game states are unreachable when (artificially) restricting one's own options to things that are considered legal moves. But that's not important anymore, right?

      * i.e. malleability, and it's not obvious that it should be—it wasn't already, so what does this change?

    1. Are you limited to PHP?

      No, but further: the question (about being "limited") presupposes something that isn't true.

      If you're doing PHP here, you're doing it wrong—unless the PHP application is written with great care (i.e. unidiomatically) and has some way to reveal its own program text (as first-class content). Otherwise, that's a complete failure to avoid the "elsewhere"-ness that we're trying to eradicate.

    2. sounds like literate programming for a shell script to me

      The difference being that you don't read shell scripts, except in the course of editing them. Shell isn't a very good language for writing checklists/SOPs, anyway.

    3. If I really wanted to make my blog portable and timeless, I would port it to NearlyFreeSpeech, which I estimate would probably be one day of work

      "one day" is really only considered mlniscule by the likes of programmers, etc. Taking an entire day to work on something that would be seen as incidental is likely not to get approval in many organizations. (Places like Samsung exist.) And would they be wrong? Why should it require any porting effort at all?

    4. "predilection for certain systems and ways of working"

      "everything looks like a nail"

    5. who hosts that?

      Answer: it's hosted under the same auspices as the main content. The "editor" is first-class content (in the vein of ANPD); it's really just another document describing detailed procedures for how the site gets updated.

    1. State exact versions and checksums of all deps plus run your own server hosting the deps

      In other words, do a lot of work to route around the problems introduced by the way that using npm routes around your existing version control system.

    1. all the exception handling these packages do

      These packages don't/can't do the amount of exception handling suggested by this comment.

    1. typeof v === "number"

      Using triple equals to check the results of typeof is totally unnecessary, and a sure sign that someone, somewhere has unthinkingly adopted some dubious advice; it's a code smell/red flag.

      A standard equality comparison (using ==) does exactly the right thing while also avoiding association with questions/doubts about the quality of the surrounding code.
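
      To spell it out: typeof always evaluates to a string, so both operands are strings and no coercion can even occur.

        // typeof's result is always a string; == and === are observably
        // identical here, so there are no evil spirits to ward off
        if (typeof v == "number") {
          // v is a number (including NaN and Infinity)
        }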

    1. Templates for recurring projects.

      Food for thought.

      How do companies that produce goods (lined notebooks and graph paper) turn a profit? (For either of the two aforementioned examples, anyone can make their own from possibly cheaper primitives. Making graph paper, for example, is just putting lines on paper.)

      Those goods being disposable and tangibly/temporally "rivalrous" in the physical world is one part of it, but it's not everything. People pay for convenience. (Look to e.g. paid Netflix subscriptions versus just pirating the stuff.) "Buy this ready-to-consume ebook" is one example of potentially profitable convenience that has made at least some inroads with the general public. Why shouldn't "buy this ready-to-use template" belong to the same set? Yeah, you could make your own, but this is graph paper.

    1. But then you end up with the old patronage system, where artists need to be on good terms with the ultra-rich.

      Isn't what we have now, i.e. the situation for most artists, pretty close to that?

      When "ordinary" artists (i.e. not career creatives) make money from their work, how much do IP laws have to do with that? It seems like the overriding factor is a culture of patronage and/or paying for convenience. (And are those two even distinct? Does one explain the other?)

    1. Requirements: Ruby and Bundler should be installed.

      wat

      This site has a total of two pages! Just reify them as proper documents instead of compilation artifacts emitted from an SSG.

    1. pretty-feed.xsl

      Shouldn't the spirit of this project demand that, instead of this link pointing to a GitHub pretty-printed render of the style sheet source code, this link point instead to a hosted copy—which can itself be styled by XSL, rather than merely being presented in its raw XML form?

      PS: Can an XSL style sheet point to itself to specify how it should be styled?

    1. (I know calling it "a philosophy" is confusing, I'll search a better word)
    2. I recently stopped working on it to learn Solid

      Needs to be resurrected. "Autonomous Data" is a way better name (being both cooler and less subject to ambiguity) than either "Solid" or "zero data [application]".

  5. autonomous-data.noeldemartin.com
    1. Autonomous

      This term is well-suited for the sort of thing I was going for with S4/BYFOB.

      @tomcritchlow's comment about being hobbled by CORS in his attempt to set up an app[1] that is capable of working with his Library JSON is relevant. With a BYFOB "relay" pretty much anyone can get around CORS restrictions (without the intervention of their server administrator). The mechanism of the relay is pretty deserving of the "autonomous" label—perhaps even more so than Noel's original conception of what Autonomous Data really means...

      1. https://library-json-node-2.tomcritchlow.repl.co/library?url=https://tomcritchlow.com/library.json
    1. instead of the “Mastodon appraoch” we take the “Replit approach”

      I'm confused by the continual references to the Replit. Once you have Replit-style power, you can do Mastodon interop—but it keeps you dependent on third-party SaaS. Continuing to violate the principle of least power isn't really any improvement. If you're going to shoot for displacing the status quo, it should be to enable public participation from people who have nothing more than a Neocities account or a static site published with GitHub Pages or one of the many other providers. Once you bring "live" backends into this (in contrast to "dead" media like RSS/Atom), you've pretty much compromised the whole thing.

    2. Here’s a super rough proof of concept Replit tiny library.

      There's nothing about this that requires Replit (or NodeJS, for that matter). The whole thing can be achieved by writing a program to run on the script engine that everyone already has access to—the one in the browser. No servers required.

    3. Here’s a real example. A while back I posted up some thoughts about a decentralized Goodreads: Library JSON - A Proposal for a Decentralized Goodreads. The idea is that a million individual static sites can publish their book lists in a way that allows us to build Goodreads-esque behavior on top of it.

      A sort of "backend on the frontend".

      A similar "BYFOB" design principle was the basis for a proposal to bring "Solid[-like] services for static sites" into existence. I submitted this proposal to NLnet in their call for applications for their user-operated Internet fund. It was not accepted.

    4. What happens if - maybe! - there’s a model of decentralization that feels more like a bunch of weird Replits networking with each other.

      Get rid of the networking, and make it more like the RSS/Atom model.

      ActivityPub, for example, shouldn't really require active server support if you just want to publish to the clear Web (i.e. have no use for DMs). Anyone, anywhere can add RSS/Atom "support" to their blog—it's just dumping another asset on their (possibly static!) site. Not so with something like Mastodon, which is unfortunate. It violates the Principle of Least Power at a fundamental level.

    5. There’s no export button - everything is automatically replicated in all three places
    1. I wrote about my idea for Library.json a while back. It’s this idea that we might be able to rebuild these monolithic centralized services like Goodreads using nothing by a little RSS.

      See also this thread with Noel De Martin, discussing a (Solid-based) organizer for your media library/watchlist: https://noeldemartin.social/@noeldemartin/105646436548899306

      It shouldn't require Solid-level powers to run this. A design based upon "inert" data like RSS/Atom/JSON feeds (that don't require a smart backend to take on the role of an active participant in the protocol) would beat every attempt at Solid, ActivityPub, etc. that has been tried so far. "Inert"/"dead" media that works by just dumping some content on a Web-reachable endpoint somewhere, including a static site, is always going to be more accessible/approachable than something that requires either a server plug-in or a whole new backend to handle.

      The litmus test for any new proposal for a social protocol should be, "If I can't join the conversation by thumping on my SSG to get it to produce the right kind of output—the way that it's possible with RSS/Atom—then the design is fundamentally flawed and needs to be fixed."

    1. Maybe Mozilla could buy up Glitch and integrate it natively inside Firefox? Maybe BeakerBrowser will get enough traction and look beyond the P2P web? Maybe the Browser Company will do something like this?

      Before Keybase died, I had hopes that they would do something kind of like this. It'd work by installing worker services in the Keybase client and/or allowing you to connect to network-attached compute like AWS or DigitalOcean (or some Keybase-operated service) to seamlessly process worker requests when your laptop was offline. The main draw would be a friendly UI in the Keybase client for managing your workers. Too bad!

    2. Imagine if node.js shipped inside Chrome by default!

      There was something like that, in HTML5's pre-history: Google Gears.

      I've thought for a long time that someone should resurrect it (in spirit, that is) for the world's modern needs. Instead of running around getting everyone to install NodeJS and exhorting them to npm install && npm run, people can install the "Gears 2" browser extension which drastically expands the scope of the browser capabilities, and you can distribute app bundles that get installed "into" Gears.

      Beaker (mentioned later in this post) was an interesting attempt. I followed them for a while. But its maintainers didn't seem to appreciate the value of a frictionless onboarding experience, which could have been made possible by e.g. working to allow people to continue using legacy Web browsers and distributing an optional plug-in, in the vein of what I just described about Gears.

    3. Can you imagine if the beginner version of Node.js came pre-installed with a GUI for managing and running your code?

      A graphical JS interpreter? That's the browser! And it just so happens that it's already installed, too (everywhere; not just on Macs).

    4. build a browser that comes pre-installed with node.js

      Nah. Just stop programming directly against NodeJS to start with!

      The Web platform is a multi-vendor standardized effort involving broad agreement to implement a set of common interfaces. NodeJS is a single implementation of a set of APIs that seemed good (to the NodeJS developers) at the time, and that could change whenever the NodeJS project decides it makes sense to.

      (Projects like WebRun which try to provide a shim to let people continue to program against NodeJS's APIs but run the result in the browser is a fool's errand. Incredibly tempting, but definitely the wrong way to go about tackling the problem.)

    5. But… on installing node.js you’re greeted with this screen (wtf is user/local/bin in $path?), and left to fire up the command line.

      Agreed. NodeJS is developer tooling. It's well past the time when we should have started packaging up apps/utilities that are written in JS so that they can run directly in* the browser—instead of shamelessly targeting NodeJS's non-standard APIs (on the off-chance everyone in your audience is a technical user and/or already has it installed).

      This is exactly the crusade I've been on (intermittently) when I've had the resources (time/opportunity) to work on it.

      Eliminate implicit step zero from software development. Make your projects' meta-tooling accessible to all potential contributors.

      * And I do mean "in the browser"—not "on a server somewhere that you are able to use your browser to access, à la modern SaaS/PaaS"

    6. An incomplete list of things I’ve tried and failed to do
    1. To run it you need node.js installed, and from the command line run npm install once inside that directory to install the library dependencies. Then node run.js <yourExportedDirectory>

      Why require Node?

      Everything that this script does could be better accomplished (read: be made more accessible to a wider audience) if it weren't implemented by programming against NodeJS's non-standard APIs and it were meant to run in the browser instead.
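
      A sketch of what that can look like for a script that chews through an exported directory: no npm install, no command line, just browser APIs (webkitdirectory is nonstandardly named but shipped everywhere):

        <input type="file" id="export-dir" webkitdirectory>
        <script>
          document.getElementById("export-dir").addEventListener("change", (e) => {
            // a flat FileList of every file under the chosen directory;
            // webkitRelativePath preserves the tree structure
            for (const file of e.target.files) {
              file.text().then((text) => {
                console.log(file.webkitRelativePath, text.length, "chars");
              });
            }
          });
        </script>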

    1. Theoretically, there are many plugins for webservers adding support for scripting using any scripting language you can name. These are sometimes used to host full-blown web applications but I don't see them being used to facilitate mildly dynamic functionality.

      All in all, despite its own flaws, I think this piece hints at a useful ontology for understanding the nuanced, difficult-to-name, POLP-violating design flaws in stuff like Mastodon/ActivityPub—and why BYFOB/S4 is a better fit, esp. for non-technical people.

      https://hypothes.is/search?q=%22black+and+dead+is+all+you+need%22+user:mrcolbyrussell

    2. the former allows me to give an URL to a piece of code

      But you're not! When you wield PHP like this, there is no URL for the piece of code per se—only its (potentially fleeting) output—unless you take special care to make that piece of code available as content otherwise. PHP snippets are just as deserving of a minted identifier issued for them as, say, JS and CSS resources are—perhaps even just as deserving as the content being served up on the site, but PHP actually discourages this.

    3. it's far easier for me to write a PHP script and rsync it to a web server of mine
    4. It's long been fairly apparent to me that the average modern web developer has no comprehension of what the web actually is3

      Agreed, but it's a very ironic remark, given the author's own position...

    5. The only reasonable implementation options are JavaScript and PHP.

      I argue that PHP is not reasonable here. The only appropriate thing for this use case is (unminified) JS—or some other program text encoded as a document resource permitting introspection and that the user agent just happens to be able to execute/simulate.*

      • Just like the advocates of "a little jQuery", the author here doesn't seem to realize that the use of PHP was the first step towards what is widely acknowledged to be messed up about the "modern" Web. People can pine for the days of simple server-side rendering, but there's no use denying that today's Web is the natural result of an outgrowth that began with abuses of the fundamental mechanisms underpinning the Web—abuses that first took root with PHP.

      * Refer to the fourth and sixth laws of "Sane Personal Computing", esp. re "reveals purpose"

    6. how does one support comments? Answer: Specialist third-party services like Disqus come into existence. Now, you can have comments on your website just by adding a <script> tag, and not have to traverse the painful vertical line of making your website itself even slightly dynamic.

      Controversial opinion: this is actually closer to doing the Web the way that it should be done, taking the intent of its design into account. NB: this is not exculpatory of minified JS bundles (where "megabyte" is the appropriate unit order of magnitude for measuring their weight) or anything about "modern" SPAs that thumb their nose at graceful degradation.

    7. an URL

      "an URL"? C'mon.

    8. It's not surprising at all, therefore, that people tend not to do this nowadays.

      I dunno how sound this conclusion is. Even for static sites, there are lower friction ways to do them, but people usually opt for the higher friction paths...

    9. You can read the “Effort” axis as whatever you like here; size, complexity, resource consumption, maintenance burden.

      Hey, look, it's an actually good example of the "steep learning curve".

      (I never understood why people insist that referring to it as a steep curve is wrong; clearly the decisions about your axes are going to have an impact on the thing. It seems that everyone who brings this up is insisting on laying out their graph the wrong way and implicitly arguing that other people need to take responsibility for it.)

    10. Perhaps each page load shows a different, randomly chosen header image.

      That makes them constitute separate editions. It makes things messy.

    11. They might have a style selector at the top of each page, causing a cookie to be set, and the server to serve a different stylesheet on every subsequent page load.

      Unnecessary violation of the Principle of Least Power.

      No active server component is necessary for this. It can be handled by the user agent's content negotiation.
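
      Concretely, the least-power version is two link elements; browsers like Firefox surface the choice under View > Page Style, with no cookie and no server logic involved:

        <link rel="stylesheet" href="default.css" title="Default">
        <link rel="alternate stylesheet" href="high-contrast.css" title="High contrast">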

    1. I grew up on PHP, it was the first thing beyond BASIC I ever wrote

      Should we lean into that? Maybe some sort of "server BASIC" is what we need.

      NB: need not (read: "should not") actually be a BASIC; moreso a shared spirit (see also: Hypercard)

    1. However when you look UNDERNEATH these cloud services, you get a KERNEL and a SHELL. That is the "timeless API" I'm writing to.

      It's not nearly as timeless as a person might have themselves believe, though. (That's the "predilection" for certain technologies and doing things in a certain way creeping in and exerting its influence over what should otherwise be clear and sober unbiased thought.)

      There's basically one timeless API, and that means written procedures capable of being carried out by a human if/when everything else inevitably fails. The best format that we have for conveying the content comprising those procedures are the formats native to the Web browser—esp. HTML. Really. Nothing else even comes close. (NB: pixel-perfect reproduction à la PDF is out of scope, and PDF makes a bunch of tradeoffs to try to achieve that kind of fidelity which turns out to make it unsuitable/unacceptable in a way that HTML is not, if you're being honest with your criteria, which is something that most people who advocate for PDF's benefits are not—usually having deceived even themselves.)

      Given that Web browsers also expose a programming environment, the next logical step involves making sure these procedures are written to exploit that environment as a means of automation—for doing the drudge work in the here and now (i.e., in the meantime, when things haven't yet fallen apart).

    1. Lines 1-7 represent quads, where the first element constitutes the graph IRI.

      Uh, it's the last element, though, not the first—right?

    2. Square brackets represent here a blank node. Predicate-object pairs within the square brackets are interpreted as triples with the blank node as subject. Lines starting with '#' represent comments.

      Bad idea to introduce this notation here at the same time as the (unexplained) use of square brackets to group a list of objects.

    3. RDF provides no standard way to convey this semantic assumption (i.e., that graph names represent the source of the RDF data) to other readers of the dataset.

      Lame.

    4. The datatype is appended to the literal through a ^^ delimiter.

      Were parens taken?

    5. A resource without a global identifier, such as the painting's cypress tree, can be represented in RDF by a blank node.

      Terrible choice for a name.

      What was wrong with some variation of "anonymous"?

    1. Cool URIs Don't Change

      But this one did. Or rather, it used to be resolvable as http://infomesh.net/2001/08/swtips/ (note the trailing slash), but now that returns 404 and is only available as http://infomesh.net/2001/08/swtips (no trailing slash).

    1. I like to keep things on the web if I can, permanently archived, because you never know when somebody will find them useful or interesting anyway.

      But Semantic Web Tips http://infomesh.net/2001/08/swtips/ is returning 404...

    1. The events list is created with JS, yes. But that's the only thing on the whole site (~25 pages) that works that way.Here's another site I maintain this way where the events list is plain HTML: https://www.kingfisherband.com

      There's an unnecessary dichotomy here between uses JS and page is served as HTML. There's a middle ground, where the JS can do the same thing that it does now, but it only does so at edit time—in the post author's own browser, but not in others'. Once the post author is ready to publish an update, the client-side generated content is captured as plain HTML, and then they upload that. It still "uses JS", but crucially it doesn't require the visitor to have their browser do it (and for it to be repeated N times, once per page visit)...
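
      A sketch of the capture step (the clipboard call is just one convenient way to get the markup out):

        // run in the author's browser once the generated events list is in
        // place; visitors then get plain HTML, with no script re-run per visit
        const snapshot = "<!doctype html>\n" + document.documentElement.outerHTML;
        navigator.clipboard.writeText(snapshot); // paste over the page source and rsync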

    1. A great case study in how the chest-puffing associated with certain folks in certain segments of the compiled languages crowd can be cover for some truly embarrassing blunders.

      (Also a great antidote against a frequent argument by self-taught "full stack" devs; understanding the runtime complexity of your program is important.)

    1. At one level this is true, but at another level how long is the life of the information that you're putting into your wiki now, and how sure are you that something this could never happen to your wiki software over that lifetime?

      I dunno. Was the wiki software in question MediaWiki?

      I always thought it was weird when people would set up a wiki and would go for something that wasn't MediaWiki (even though I have my own quibbles with it). MediaWiki was always the clear winner to me, even in 2012 without the benefit of another 10 years of hindsight.

    1. copying and pasting into an online html editor, then hitting the clean up button? Copy this cleaned up html into one of your posts, save it, and view.

      This could/should be part of Zonelets itself.

    1. Updating the script

      This is less than ideal. Besides non-technical people needing to wade into the middle of (what very well might appear to them to be a blob of) JS to update their site, here are some things that Zonelets depends on JS for:

      1. The entire contents of the post archives page
      2. The footer
      3. The page title

      This has real consequences for e.g. the archivability of a Zonelets site.

      The JS-editing problem itself could be partially ameliorated with something like the polyglot trick used on flems.io and/or the way triple scripts do runtime feature detection using shunting. When the script is sourced via script element from another page, it behaves as JS, but when visited directly as the browser destination it is treated like HTML and has its own DOM tree for the script itself to make the necessary modifications easier. Instead of requiring the user to edit it as freeform text, provide a structured editing interface, so e.g. adding a new post is as simple as clicking the synthesized "+" button in the list of posts, copying the URL of the post in question, and then pasting it into a form field. The Zonelets script itself should take care of munging it into the appropriate format upon form "submission". It can also, right there, take care of the escaping issue described in the FAQ—allow the user to preview the generated post title and fix it up if need be.

      Additionally, the archives page need not be dynamically generated by the client—or rather, it can be dynamically filled in exactly once per update, on the author's machine, and then be reified into static HTML, with the user instructed to save it and overwrite the served version. This gets too unwieldy for next/prev links in the footer, but (a) those are non-essential and don't even get displayed for no-JS users right now anyway; and (b) they can be seen to violate the entire "UNPROFESSIONAL" ethos.

      Alternatively, the entire editing experience could be complemented with bookmarklets.
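
      E.g., a bookmarklet (a hypothetical sketch; the file name is arbitrary) that snapshots the fully generated page so the author can save it and upload the result:

        javascript:(() => {
          // Serialize the current (post-JS) DOM and trigger a
          // download of it as a static HTML file.
          const html = '<!doctype html>\n' + document.documentElement.outerHTML;
          const a = document.createElement('a');
          a.href = URL.createObjectURL(new Blob([html], { type: 'text/html' }));
          a.download = 'snapshot.html';
          document.body.appendChild(a);
          a.click();
          a.remove();
        })();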

    1. Linux itself only started as an amateur project, not professional like Minix, right

      That should be "big and professional like gnu".

    1. it’s hard to look at recent subscription newsletter darling, Substack, without thinking about the increasingly unpredictable paywalls of yesteryear’s blogging darling, Medium. In theory you can simply replatform every five or six years, but cool URIs don’t change and replatforming significantly harms content discovery and distribution.
  6. Apr 2022
    1. "Show me the proof," they said. Here it is. That's the source code. All of it. In all of it's beautiful, wild and raw mess.

      This is how to open source something. "Open source" means that it's published under an open source license. That's it. There's nothing else required.

    1. A ZIP file MUST have only one "end of central directory record"

      There are a few ways to interpret this, one of them unintuitive: namely, that it is actually acceptable for a given bytestream to contain multiple blobs that look like an end of central directory record (having the right signature and size/shape), but only one of them actually is the end of central directory record. The requirement that a ZIP have only one means that the others aren't actually end of central directory records—and they are nonetheless free to appear in the bytestream, because not being end of central directory records means their presence doesn't violate the spec.

    1. Without special care you'd get files that aren't supposed to exist or errors from trying to overwrite existing files.

      Yes, and that's just one of the reasons why scanning from the front is invalid. There's nothing special about the signature in file records—it's just a four-byte sequence that might make its way into the resulting ZIP without any intent to denote a file record. If you scan from the front and assume that encountering the signature means a file exists there, without cross-referencing the central directory, then your implementation treats junk bytes as meaningful to the structure of the file, which is not a good idea.

    2. That suggests the central directory might not reference all the files in the zip file

      Sure, but that doesn't mean it's valid to treat the existence of those bytes as if that file is still "in" the ZIP. They should be treated exactly like any other blob that just happens to contain some bytes matching the shape of what a file record would look like if there were actually supposed to be a file there.

    1. function Zip(_io, _parent, _root) {
         this._io = _io;
         this._parent = _parent;
         this._root = _root || this;
         this._read();
       }
       Zip.prototype._read = function() {
         this.sections = [];
         var i = 0;
         while (!this._io.isEof()) {
           this.sections.push(new PkSection(this._io, this, this._root));
           i++;
         }
       }

      Although the generated code is very useful...

      This is wrong. It treats the ZIP format as if (à la PNG) it's a concatenated series of records/chunks marked by ZIP's characteristic "PK"-prefixed, 4-byte magic numbers. It isn't. The only way to read a ZIP bytestream is to start from the end, look for the signature that denotes the possible presence, at the current byte offset, of the record containing the central directory metadata, proceed to validate* the file based on that, and then operate on it appropriately. (* If validation fails, you can continue scanning backwards from the offset that was thought to be the signature.)

      The first validation attempt to pass when carried out in this manner (from back to front) "wins"—validation passes beginning at more than one offset may succeed, but only the one nearest the end of the bytestream is authoritative. If no validation attempt succeeds, the file may be corrupt, and the implementation may attempt to "repair" it (not necessarily by making on-disk modifications, but merely by being generous with its interpretation of the bytestream—perhaps presenting several different options to the user); alternatively, it may be the case that the file is simply not a ZIP archive.

      This is because a ZIP file is permitted to have its records be little embedded "data islands" (in a sea of unrelated bytes). This is what allows spanned/multi-disk archives, and it's what allows a ZIP to be modified by updating the bytestream in an append-only way (or by selectively rubbing out parts of the existing central directory and updating the pointers/offsets in place). It's also what allows self-extracting archives to be self-extracting: foremost, they conform to the binary executable format, and they include code for opening the very same executable, processing the records embedded within it, and writing them to disk.
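
      As a rough sketch of the back-to-front reading described above (Node.js; it assumes a non-spanned archive small enough to read into memory, and the "validation" here is only a minimal shape check, not the full central-directory cross-referencing a real implementation would do):

        const fs = require('fs');

        // End of central directory record: signature 0x06054b50,
        // 22 bytes of fixed fields, then a variable-length comment
        // whose length is stored in the last two fixed bytes.
        const EOCD_SIG = 0x06054b50;
        const EOCD_MIN = 22;

        function findEocd(buf) {
          // Scan backwards; the candidate nearest the end that
          // validates wins. This is also why front-to-back scanning
          // is wrong: earlier signature-shaped bytes may be junk.
          for (let i = buf.length - EOCD_MIN; i >= 0; i--) {
            if (buf.readUInt32LE(i) !== EOCD_SIG) continue;
            const commentLen = buf.readUInt16LE(i + 20);
            // Minimal check: the stated comment length must run
            // exactly to the end of the bytestream. A real reader
            // would go on to validate the central directory this
            // record points at, and keep scanning on failure.
            if (i + EOCD_MIN + commentLen === buf.length) return i;
          }
          return -1; // corrupt, or simply not a ZIP archive
        }

        const buf = fs.readFileSync(process.argv[2]);
        console.log('EOCD offset:', findEocd(buf));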

    1. I wanted all of my Go code to just deal with JSON HTTP bodies

      Lame. Hopefully it's at least checking the content type and returning an appropriate status code with a helpful message.

      (PS: it wouldn't be multipart/form-data, anyway; the default is application/x-www-form-urlencoded.)
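
      A minimal sketch of that kind of check (shown in Node.js-style JavaScript rather than Go; the handler shape and names here are hypothetical):

        // Reject non-JSON bodies up front with 415 Unsupported Media
        // Type instead of failing mysteriously downstream.
        function requireJson(req, res) {
          const type = (req.headers['content-type'] || '').split(';')[0].trim();
          if (type === 'application/json') return true;
          res.writeHead(415, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify({
            error: 'expected application/json, got ' + (type || 'nothing'),
          }));
          return false;
        }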

    2. I'm not sure what $name is

      This post is filled with programming/debugging missteps that are the result of nothing other than overlooking what's already right in front of the person who's writing.

    3. comparing the event and window.event isn't enough to know if event is a variable in scope in the function or if it's being looked up in the window object

      Sounds like a good use case for an expansion to the jsmirrors API.

    1. Over the past half-century, the number of men per capita behind bars has more than quadrupled.

      I haven't read the original source.

      Is it possible that this has to do with stricter enforcement of existing laws (or even new ones that criminalize previously "acceptable" behavior, like drunk driving)? Today, being arrested is a pretty big deal—a black mark for sure. Subjectively, it seems that on the whole people of the WWII, Korean War, and Vietnam War eras were more rambunctious and society was more tolerant of it (since it was a lot more common, any of the potentially aggrieved parties would have likely engaged in similar stuff themselves).

    1. What I like best about pdf files is that I can just give them to someone and be almost certain that any questions will be about the content rather than the format of the file.

      Almost every time I've used FedEx's "Print and Go" for a PDF I've created by "printing" e.g. HTML (and that I've verified looks good when previewing it on-screen), it comes out mangled when actually printed to paper.

    1. One significant point of design that Tschichold abandoned was the practice of subordinating the organization of all text elements around an invisible central axis (stay with me here.) What that means is that a designer builds out all the design elements of a book from that nonexistent axis “as if there were some focal point in the center of a line which would justify such an arrangement,” Tschichold wrote. But this, he determined, imposed an artificial central order on the layout of a text. It was an illogical practice, because readers don’t start reading from the center of a book, but from the sides.

      Okay, I stuck it out like the author here requested but I'm still left wondering what any of this is in reference to.

    2. folios (the word used by designers for page numbers)

      Huh? You sure about that?

    1. Notes from Underground

      Standard Ebooks's search needs to incorporate alternate titles. I tried searching first for "the underground man" (my fault), but then I tried "notes from the underground", which turned up nothing. I then began to try searching for Dostoyevsky, but stopped myself when I realized the fruitlessness: quite apart from being unsure whether search worked across author names, I knew I had no idea which transliteration Standard Ebooks was using.

    2. Why is Standard Ebooks sending content-security-policy: default-src 'self';? This is not an appropriate use. (And it keeps things like the Hypothesis sidebar from loading.)

    1. <title>Notes from Underground, by Fyodor Dostoevsky. Translated by Constance Garnett - Free ebook download - Standard Ebooks: Free and liberated ebooks, carefully produced for the true book lover.</title>

      This is way too long. (And when I try saving the page, Firefox stops me because the suggested file name—derived from the title—is too long. NB: that's a bug in both standardebooks.org and Firefox.)

    1. I’ll also note that there’s the potential of a reply on Hypothes.is to a prior reply to a canonical URL source. In that case it could be either marked up as a reply to the “parent” on Hypothesis and/or a reply to the canonical source URL, or even both so that webmentions could be sent further upstream.

      You can also "reply" by annotating the standalone (/a/...) page for a given annotation.

    2. Webmention functioning properly will require this canonical URL to exist on the page to be able to send notifications and have them be received properly

      It's also just annoying when trying to get at the original resource (or its URL for reference).

    3. all the data on this particular page seems to be rendered using JavaScript rather than being raw HTML
    1. could a few carefully-placed lines of jQuery

      Look, jQuery is not lightweight.* It's how we got into this mess.

      * Does it require half a gigabyte of dev dependencies and regular dependencies to create a Hello, World application? No, but it's still not lightweight.

    1. even in their own personal spaces

      But your blog post on my screen is not in your personal space any more than your book/pamphlet/whatever lying open on my desk is (which is to say: not at all)... it's my space.

    2. thinking they’re simply querying texts.

      Huh?

    1. There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole.
    1. File not found (404 error)

      This is perverse (i.e. an instance of Morissette's false irony).

    1. it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise
    1. At a higher level of grouping, we have more trouble. This is the level of DLLs, microservices, and remote APIs. We barely have language for talking about that. We have no word for that level of structure.
  7. small-tech.org
    1. Ongoing research Building on our work with Site.js, we’ve begun working on two interrelated tools: NodeKit The successor to Site.js, NodeKit brings back the original ease of buildless web development to a modern stack based on Node.js that includes a superset of Svelte called NodeScript, JSDB, automatic TLS support, WebSockets, and more.

      "How much of your love of chocolate has to do with your designs for life that are informed by your religious creed? Is it incidental or essential?"

    1. The percentage of Democrats who are worried about speaking their mind is just about identical to the percentage of Republicans who self-censor: 39 and 40 percent, respectively

      What are Republicans worrying about when they self-censor? Being perceived as too far right and trying to appear more moderate, or catching criticism from their political peers if they were to express skepticism about some of the goofiest positions that Republicans are associated with at the moment?

    2. knowing that we could lose status if we don’t believe in something causes us to be more likely to believe in it to guard against that loss. Considerations of what happens to our own reputation guides our beliefs, leading us to adopt a popular view to preserve or enhance our social positions

      Belief, or professed belief? Probably both, but how much of this is conscious/strategic versus happening in the background?

    3. Interestingly, though, expertise appears to influence persuasion only if the individual is identified as an expert before they communicate their message. Research has found that when a person is told the source is an expert after listening to the message, this new information does not increase the person’s likelihood of believing the message.
    4. Many have discovered an argument hack. They don’t need to argue that something is false. They just need to show that it’s associated with low status. The converse is also true: You don’t need to argue that something is true. You just need to show that it’s associated with high status.
    1. This comment makes the classic mistake of mixing up the universal quantifier ("for all X") and the existential quantifier ("there exists [at least one] X"): although neither is used explicitly, the only thing implied is the latter.

      https://en.wikipedia.org/wiki/Universal_quantification

      https://en.wikipedia.org/wiki/Existential_quantification

      What the average teen is like doesn't compromise Stallman's position. If one "gamer" (is that a taxonomic class?) follows through, then that's perfectly in line with Stallman's mission and the previously avowed position that "Saying No to unjust computing even once is help".

      https://www.gnu.org/philosophy/saying-no-even-once.html
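
      Spelled out in notation (with made-up predicates for illustration), the two claims being conflated:

        \forall x\, (\mathrm{Gamer}(x) \rightarrow \mathrm{FollowsThrough}(x))  % "every gamer follows through": the strawman reading
        \exists x\, (\mathrm{Gamer}(x) \land \mathrm{FollowsThrough}(x))        % "at least one gamer follows through": all the position needs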

    1. the C standard — because that would be tremendously complicated, and tremendously hard to use

      "the C standard [... is] tremendously complicated, and tremendously hard to use [...] full of wrinkles and [...] complex rules"

    2. It should be illegal to sell a computer that doesn't let users install software of their own from source code.

      Should it? This effectively outlaws a certain type of good: it eliminates the ability to purchase a computer of that sort, even if that's what you actually want.

      Perhaps instead it should be illegal to offer those types of computers for sale unless the same computer without restrictions on installation is also available.

    1. myvchoicesofwhatto.attemptandxIiwht,not,t)a-ttemptwveredleterni'ine(toaneiiubarma-inohiomlreatextentbyconsidlerationsofclrclfea-sibilitv

      my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility.