2,954 Matching Annotations
  2. Mar 2024
    1. Safety Tip: Always use === (triple equals) and !== when testing for equality and inequality in JavaScript.
  3. gitlet.maryrosecook.com
    1. a quick introduction to what happens when you run the basic Git commands

      This is the root cause of the issue re failure to understand.

    1. we write functions as functionName rather than functionName(); the latter is more common, but people don’t use objectName{} for objects or arrayName[] for arrays, and the empty parentheses makes it hard to tell whether we’re talking about “the function itself” or “a call to the function with no parameters”

    1. Its performance is not very different from the system versions of grep, which shows that the recursive technique is not too costly and that it's not worth trying to tune the code.
    2. The occurrence of a do-while instead of a while should always raise a question: why isn't the loop termination condition being tested at the beginning
    1. A Window object represents the actual window of the web browser.

      No it doesn't. Window is pretty obviously a recapitulation of the W3C DOMWindow.

  4. Feb 2024
    1. So What Would a Static Site Generator for the Rest of Us Look Like?

      Not like a static site generator, that's for sure. Normal people don't want a step in between input source code and the output. They don't want a difference between input and output at all. Programmers want a compilation step, because they're programmers.

    2. Not a web developer? Sucks to be you. The vast majority of the static site generator tools out there are run from the command line, powered by things you've never heard of like Node, Grunt, or Babel.
    3. can build a site in a jiff using any number of site builders like Jekyll
  5. Jan 2024
    1. Wirth himself realized the problems of Pascal and his later languages are basically improved versions of Pascal -- Modula, Modula-2, and Oberon. But these languages didn't even really displace Pascal itself let alone C -- but maybe if he had named them in a way that made it clear to outsiders that these were Pascal improvements they would have had more uptake.

      Modula and Oberon should have been codenames rather than independent projects.

    1. "=" to mean assignment and resorting to a special symbol for equality, rather than the obviously better reverse
    2. Pascal largely lost to its design opposite, C, the epitome of permissiveness, where you can (for example) add anything to almost anything

      C programmers balk and cry, "JavaScript!"

    3. Englebart

      NB: "Engelbart"

    1. in Java, the vulgar Latin of programming languages. I figure if you can write it in Java, you can write it in anything

      One of my favorite turns of phrase about programming. I come back to it multiple times a year.

    2. You can do this with recursive descent, but it’s a chore.

      Jonathan Blow recently revisited this topic with Casey Muratori. (They last talked about this 3 years ago.)

      What's a little absurd is that (a) the original discussion is something like 3–6 hours long and doesn't use recursive descent—instead they descended into some madness about trying to work out from first principles how to special-case operator precedence—and (b) they start out in this video pooh-poohing people who speak about "recursive descent", saying that it's just a really obnoxious way to say "writing ordinary code"—again, all this after they went out of their way three years ago to not "just" write "normal" code—and (c) they do this while launching into yet another 3+ hour discussion about how to do it right—in a better, less confusing way this time, with Jon explaining that he spent "6 or 7 hours" working through this "like 5 days ago". Another really perverse thing is that when he talks about Bob's other post (Parsing Expressions) that ended up in the Crafting Interpreters book, he calls it stupid because it's doing "a lot" for something so simple. Again: this is to justify spending 12 hours to work out the vagaries of precedence levels and reviewing a bunch of papers instead of just spending, I dunno, 5 or 10 minutes or so doing it with recursive descent (the cost of which mostly comes down to just typing it in).

      So which one is the real chore? Doing it the straightforward, fast way, or going off and attending to one's unrestrained impulse that you for some reason need to special-case arithmetic expressions (and a handful of other types of operations) like someone is going to throw you off a building if you don't treat them differently from all your other ("normal") code?

      Major blind spots all over.

    1. There’s not much of a market for what I’m describing.

      There is, actually. Look at Google Docs, Office 365, etc. Those are all an end-run around the fact that webdevs are self-serving and haven't made desktop publishing for casual users a priority.

      The webdev industry subverts users' ability to publish to the Web natively, and Google, MS et al subvert native Web features in order to capture users.

      The users are there.

    1. "I've been thinking about the problem with division of labor for 7 years now, and I think I've boiled it down to two sentences. Why division of labor is disempowering: 1. (the setup) Power = capability - supervision. 2. Division of labor tends to discourage supervision."

      I think this is too pithy. It's hard to make out what applies to which actors and what's supposed to be good or bad; in order for me to understand this, I have to know a priori Kartik's position on division of labor (it's bad), then work backwards to see what the equations are saying and try to reconstruct his thinking. That's the opposite of what you want! The equations are supposed to be themselves the explanatory aid—not the thing needing explanation.

    2. Division of labor is an extremely mature state for a society. Aiming prematurely for it is counterproductive. Rather than try to imitate more mature domains, start from scratch and see what this domain ends up needing."
    1. Experts without accountability start acting in their own interests rather than that of their customers/users. And we don’t know how to hold programmers accountable without understanding the code they write.
    1. In a healthy community people do their reading in private, and come together to discuss what they read.
    1. Looking at the screen captures, one thing I like about HIEW is that it groups octets into sets of 32 bits in the hex view (by interspersing hyphens (-) throughout). Nice.

    1. (Sounds a little antisocial, sure, but you can imagine good reasons.)

      Geez. What?

      I'm not even sure Brent actually believes this so much as that he felt the need to post a defense. Or maybe he really does believe it. But it needs no defense.

    2. And then a couple months went by and Apple introduced Swift—decidedly not a scripting language—and eventually Brent bought in almost all the way.

  6. Dec 2023
    1. I think librarians, like all users of web-based information systems, should be unpleasantly surprised when they find that their systems haven't been engineered in the common sense ways that make them friendly to ad hoc integration.
    1. peak
    2. Over thirty years later, in 2021, we finally got to see some of the original source code for the World Wide Web. In June of this year, Berners-Lee put an NFT (non-fungible token) of nearly 10,000 lines of the code up for sale at Sothebys.

      This suggests that the source code wasn't available before the NFT auction. It's been public domain for 30+ years.

    1. getting into a position to think

      Often when I think about the problem of disruptions, environmental distractions, &c. which often results in total productivity death, I'm reminded of Licklider's "getting into a position to think" quip. It's not what he meant when he said it, and when I read him, I understand what he meant, but I somehow always forget and instead most strongly associate it with the process of eliminating disruptions.

    1. However, after finding the magic number, unzip does not check if the comment length correctly describes the comment that must follow. Rather, unzip only checks to make sure the comment length is small enough to not cause an out-of-bounds read beyond the end of the zip file. This means that unzip tolerates arbitrary data append to the end of a zipfile without even so much as a warning. The zip file spec does not allow this arbitrary data

      Yeah, no.

    2. The only way to find the End of Central Directory Record is to do a linear search backwards from the end of the file, but even that is not guaranteed to find it. This is because the comment itself can be anything; it can be any bytes; it can even contain the magic number we're looking for.

      It's not that difficult.

      Scan backwards for the magic number. If you find it, keep scanning and look for other occurrences. If you only found one, then congratulations: you're done—you found the end of central directory record.

      The fact that metadata defining the bounds of the comment block are in the end of central directory record at a fixed offset makes this super easy: for each candidate record, assume that it's a well-formed record and compute the boundaries of the comment block. Also compute what would be the boundaries of the start and end of the central directory record. If any of the boundaries are somehow illegal (e.g. they lie past the end of the file), then clearly this candidate is not the right one. If the offset of any candidate record lies within the boundaries of the comment block defined by an earlier candidate record, then the earlier record takes primacy and the later record should be eliminated as a candidate. Of the candidates that remain, choose the one nearest the end. That's it.
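
      A sketch of that candidate-elimination scan in JS (the function name and the Uint8Array input are mine; the offsets follow the published EOCD layout—comment length at +20, central directory size/offset at +12/+16):

      ```javascript
      // Find the End of Central Directory record by candidate elimination.
      // `buf` is a Uint8Array holding the entire zip file.
      function findEocd(buf) {
        const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
        const candidates = [];
        // Collect every well-formed candidate (an EOCD record is 22 bytes + comment).
        for (let i = buf.length - 22; i >= 0; i--) {
          if (view.getUint32(i, true) !== 0x06054b50) continue; // magic "PK\x05\x06"
          const cdSize = view.getUint32(i + 12, true);
          const cdOffset = view.getUint32(i + 16, true);
          const commentEnd = i + 22 + view.getUint16(i + 20, true);
          // Reject candidates whose implied boundaries are illegal.
          if (commentEnd > buf.length) continue;
          if (cdOffset + cdSize > i) continue;
          candidates.push({ offset: i, commentEnd });
        }
        // A candidate lying within an earlier candidate's comment block is just
        // comment bytes; the earlier record takes primacy.
        const survivors = candidates.filter((c) =>
          !candidates.some((e) => e.offset < c.offset && c.offset < e.commentEnd));
        // Of the candidates that remain, choose the one nearest the end.
        return survivors.length ? Math.max(...survivors.map((c) => c.offset)) : -1;
      }
      ```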

    1. Available Formats CSV

      This would be a good candidate for WebCSV (also known by the more official but definitely worse name CSVW).

    2. Note that this registry omits things such as NOTIFY and M-SEARCH from SSDP (part of the UPnP spec and described on the Cloudflare blog as "poorly standardised"[1] but used nonetheless for various devices, such as Roku[2]).

      1. https://blog.cloudflare.com/ssdp-100gbps/

      2. https://developer.roku.com/docs/developer-program/dev-tools/external-control-api.md

    1. it's a miracle actually it's not you know even if somebody's copying something it doesn't mean it's not America it could still be a miracle I'm not precluding a miracle there I'm just saying somebody copied

      [Laughter] Said, "Yeah, okay. It's a miracle." And I said, "Actually, it's not— you know, even if somebody's copying something, it doesn't mean it's not a miracle. It could still be a miracle. I'm not precluding a miracle there. I'm just saying somebody copied."

      NB: this isn't logically consistent.

    1. No more bugzilla and GitHub morning triage.

      Well, that's something you chose, not something imposed upon you.

    1. Reading text with a simple, clear, uncluttered layout without any animation or embedded videos or sidebars full of distracting, unrelated extras. If you use the "Reader Mode" in your web browser a lot and you love it because you think that 99% of the time it makes webpages ten times easier to use by throwing out all the useless clutter and just giving you what you want

      So sidestepping the sorts of things that result in dark blue text with red links on darkish green backgrounds?

    1. thanks to the complexity of JDSL it took days to do coding work that should only take minutes
    2. “Let me know if you have any more questions,”

      Here's one: "But why?" In other words, "What problem does this solve?"

    3. all you have to do
    4. Scott laughed. “You wouldn’t want to ‘just’ run it.
    5. the non-technical interviewer’s comment that it was all “built on top of Subversion” which he assumed was a simple misunderstanding

      The author describes in this article a pathological instance of what I've been calling "orthogonal version control systems".

      And I think this whole story was made up in jest, but it's basically the principle behind NodeJS/NPM's package.json design—subvert your project's source tree and well-founded version control discipline with your own cockamamie scheme.

    1. My work is part of a larger effort to reframe what we think about Victorian life

      Okay. What do we think about Victorian life?

    1. JS is used pervasively with Gnome. As prior art, JS had always been a major part of the Firefox codebase—the app was built with XUL widgets and XBL, which was essentially JSX and Web Components before those ever existed. With a lot of focus on making JS engines fast after Google introduced V8 with Chrome, Gnome started looking at alternatives to GTK-with-C for app development on Gnome. About a year or two before GitHub released Atom, the Gnome folks convened and said that JS was going to be not just a tier-1 language for GTK, but the language that the project would push for Gnome desktop app development. By then integration was pretty mature and had proven itself.

      This upset a lot of people on Planet Gnome, though, and they basically revolted. Gnome as a project ended up putting out Gnome Shell, but sort of softened the prior commitment to JS. Too bad. Instead what we got was NPM and Electron, which, in addition to being bad enough on their own, have also gone on to infect the places where you'd have traditionally encountered JS (i.e. web development).

      Most people who boot into a Gnome desktop and open up Firefox and then proceed to opportunistically rail in online forums against "JS" (when what they mean is "the NodeJS community and the way that NPM programmers do things") are either unaware of the state of affairs, or are aware but constantly forgetting—i.e. acting and speaking indistinguishably from the sort of people who don't know these things. It's weird. JS isn't slow. It isn't bloated. (Certainly not in comparison to, say, Python.) You can write command-line utilities that finish before equivalent programs that are written in Java do, and if you avoid antipatterns peddled as best practices (basically everything that people associated with Electron suggest you do), you can make desktop apps snappy enough that no one even knows what's happening behind the scenes.

      It's a massive shame that the package.json cult has cannibalized such a productive approach to computing.

    1. Every time I changed labs and computers during my postdoc years, I had to spend a day or two to reinstall everything I needed
    1. When the designer on the team, who also writes CSS, went to go make changes, it was a lot harder for them to implement them. They had to figure out which file to look in, open up command line, run a build step, check that it worked as expected, and then deploy the code.
    1. I hate npm so much. I had a situation where I couldn't work on a project because I couldn't get the dev environment running locally.
    1. “Various people asked to do various things with it, and they referred them to this guy who didn't respond,” Brand says. “And so it was just frustrating for decades.”
    1. This comes as an inevitable consequence of the fact that we changed the world once, and are lining up to do so again.

      I'd call this quaint in hindsight, but it was obvious with basic levels of foresight that Firefox OS was going to fail.

    1. No author WANTS to mark emphasis or important text.

      lol, what?

    2. Can you say that EVERY SINGLE TIME I want bold text that will match the semantics of <strong>? If that's true, then it shouldn't be called <strong>, it should be called <bold>.

      You're almost there, buddy. You're so close.

      This whole thread feels like an apolitical art project from the types of people who hang out in /r/SelfAwareWolves.

  7. Nov 2023
    1. curl (including libcurl) ships a new version at least once every eight weeks. We merge bugfixes at a rate of around three bugfixes per day.

      Interesting that the way this is framed tries to give it an incredibly positive spin. In reality, you might as well say, "Look how many bugs we're able to write (and still get people to use the project)."

    1. There’s an idea in the science-fiction community called steam-engine time, which is what people call it when suddenly twenty or thirty different writers produce stories about the same idea. It’s called steam-engine time ­because nobody knows why the steam engine happened when it did. Ptolemy demonstrated the mechanics of the steam engine, and there was nothing technically stopping the Romans from building big steam engines. They had little toy steam engines, and they had enough metalworking skill to build big steam tractors. It just never occurred to them to do it.
  8. srconstantin.wordpress.com
    1. When Ra is active, you’ll see a persistent disposition, in otherwise intelligent people, to misunderstand trade or negotiation scenarios as dominance/submission scenarios.

      Fuck. I just noticed that this line was in here!

    2. Nastasya Philipovna, in The Idiot, demonstrates this kind of anger; when she meets the man who embodies her moral ideal, instead of reaching out to him as a lover, she is outraged that he’s being shabby and noble and ignoring the “way of the world”, and she actively ruins his life. It’s not that she doesn’t appreciate goodness; it’s that it freaks her out.  People ought not be that good. It disturbs the universe.  Myshkin is missing something — it’s not clear what, because if you look at his words and actions explicitly he seems to be behaving quite sensibly and moderately — but he’s missing some intuition about the “way of the world”, and that enrages everyone around him.
    1. The presence of such features can beoutright dangerous if a web application is used for controlling a medical system or a nuclear plant.

      Untrue. It is not the presence of these things that "can be outright dangerous". If the programmer is reckless—doing things he or she shouldn't be—then certainly things can get dangerous. But it's a basic responsibility of the programmer not to be reckless.

    2. This is:

      Taivalsaari, Antero, Tommi Mikkonen, Dan Ingalls, and Krzysztof Palacz. 2008. “Web Browser as an Application Platform: The Lively Kernel Experience.”

    1. Boy, this is hard to read. I know Marcel has blogged about this, so I won't mention my usual prescription that every academic article needs to be accompanied by a blog post. But I do wish every academic article were required to come with a single page cover sheet that authors are required to fill out and that starts with the words "check this out" or something else of the author's choosing if it can be shown to be equally compelling. It should not be subject to the template that the journal uses.

    2. This is:

      Weiher, Marcel, and Robert Hirschfeld. 2019. “Standard Object Out: Streaming Objects with Polymorphic Write Streams.” In Proceedings of the 15th ACM SIGPLAN International Symposium on Dynamic Languages, 104–16. DLS 2019. Athens, Greece: Association for Computing Machinery. https://doi.org/10.1145/3359619.3359748

    1. My NGVCS dream implies defacto centralization.

      I'm not seeing it.

    2. It'd be a hell of a lot easier to contribute to open source projects if step 0 wasn't "spend 8 hours configuring environment to build"

      I call this implicit step zero.

    1. Side note: I have for a long time (>10 years) been an advocate for the unbundling of browser history and bookmarks from the browser itself—not unlike the way Firefox was extracted as a standalone app from the Mozilla project. Firefox just didn't go far enough. I shouldn't have separate app-managed browsing histories for both Chrome and Firefox. (Syncing is not the answer here.) Each should just read and write to the same place on my machine. Same story for bookmarks. Same story for downloads. (Download management, that is—downloads can be written wherever, but when a download is initiated, it should be managed by the system download manager.)

    2. The natural conclusion of most tools for thought is a relational database with rich text as a possible column type. So that’s essentially what I built: an object-oriented graph database on top of SQLite.

      Dude, just embrace the Web already.

      (NB: By "the Web" I really do mean the Web (URLs, etc) and not browser-based tech like HTML, JS, and CSS.)

    3. drowned in an ocean of banality

      ... or an ocean of utility, even. See: https://en.wikipedia.org/wiki/Map–territory_relation.

    4. Organizing collections with the filesystem is difficult, because of the hierarchical nature of the filesystem

      Sure, imposing hierarchies on data that doesn't fit is a problem, but file systems support symbolic links. And there's the seldom-exercised option of having multiple hard links, too.
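
      For instance, with Node (a sketch; the paths are made up), the same note can live in two "collections" at once:

      ```javascript
      const fs = require("node:fs");
      // A hard link: one file, filed under a second collection.
      fs.linkSync("notes/zip-eocd.md", "topics/zip/zip-eocd.md");
      // A symlink, for when crossing filesystems (or linking directories) matters;
      // the target is resolved relative to the symlink's own directory.
      fs.symlinkSync("../../notes/zip-eocd.md", "topics/formats/zip-eocd.md");
      ```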

    5. the higher the activation energy to using a tool, the less likely you are to use it. Even a small amount of friction can cause me to go, oh, who cares, can’t be bothered
    6. And yet I don’t use them.

      The same is true of most personal sites, generally; the gamedev metaphor can be adapted to blog software vs using Twitter. (Twitter would always win.)

    1. In 1945, Vannevar Bush proposed the idea of memex, a hypertext system.

      Bush. As We May Think. The Atlantic. 1945.

    2. They are added as simple, unidirectional links by the original authors of whatever it is you’re reading. You can’t add your own link between two pages on New York Times that you find relevant. You can’t create a “trail” of web documents, photographs and pages that are somehow relevant to a topic you’re researching.

      This is confused. You are every bit as able to do that as with the medium described in As We May Think. What you can't do is take, say, a copy of an issue of The Atlantic, add links to it, and expect them to magically show up in all copies of the original. But then you can't do that with memex, either, and Bush doesn't say otherwise.

    3. On the web, documents aren’t yours. Almost universally, what you read on the internet is on someone’s else server. You cannot edit it, or annotate it.

      Actually, you can.

    1. A user on HN writes on the topic of blogging that they've reverted to a publishing regime where they "just create github gists now" and "stopped trying to make something fancy". They're not wrong to change their practices, but it's a non sequitur to give up maintaining control of their own content.

      The problem to identify is that they were building thing X—a personal website probably with a traditional (or at least fashionable) workflow centered around a static site generator and maybe even CI/CD—but they never really wanted X, they wanted Y—in this case GitHub Gists (or something like it). Why were they trying to do X in the first place? Probably some memetic notion that this is what it looks like when you do a personal website. Why is that a meme? Who knows!

      Consider that if you want a blogging workflow built around a gist-like experience, you can change your setup to work that way instead. In other words, instead of trying to throw up a blog based on some notion that it should look and feel a certain blog-like way, you could just go out and literally clone the GitHub Gists product. Along the way, you'll probably realize you don't actually want that, either. How important is it, really, that there's a link to the GitHub API in the footer, for example?

      The point is, though, that you shouldn't start with trying to imagine what your work should look like based on trends of people blogging about blogging setups that they never use and then assume that you'll like it. Start with something that you know you like and then ask, "What can I get rid of such that dropping it either leaves my experience no worse or actually improves it?"

      See also:

      - Blogging vs. blog setups
      - New city, new job, new... website?

    1. there are probably more infrequent developers for any popular language than you might think

      Truth.

    2. Since infrequent developers spend relatively little time dealing with the language, setting up and running additional pieces of software is a much higher overhead for them and is generally not worth it if they have a choice.
    1. if you're going to write a 00:35:14 plugin for an ide prepare for your hello world to be days of learning and pages of code just to do the hello world

      If you're going to write a plugin for an IDE, prepare for your hello-world to be days of learning and pages of code just to do the hello-world.

    2. now i would love someday to do a plug-in for intellij that understands all of the 00:33:01 custom stuff for my game code right you know i would love to but you know that's that's a project

      Now, I would love someday to do a plug-in for IntelliJ that understands all of the custom stuff for my game code. Right? You know, I would love to, but you know, that's... that's a project.

    1. started a Patreon to help support the exploding usage

      Crazy. Consider how this compares to sharing the same stuff via blog posts + RSS.

    1. Not all of this is necessary to make a fast, fluid API

      Mm... These should be table stakes.

    2. You’re meticulous about little micro-optimizations (e.g. debouncing, event delegation

      "Meticulous" (and calling these "micro-optimizations") is a really generous way to label what's described here...

    3. There’s not much you can do in a social media app when you’re offline

      I dunno. That strikes me as a weird perspective. You should be able to expect that it will do at least as much as a standard email client (which can do a lot—at a minimum, reading/viewing existing messages and searching through them and the ability to compose multiple drafts that can be sent when you go back online).

    4. someone did a recent analysis showing that Pinafore uses less CPU and memory than the default Mastodon frontend

      Given what the Mastodon frontend is like, it would be pretty concerning if that weren't true.

    5. the fact that Mastodon has a fairly bog-standard REST API makes it pretty difficult to implement offline support

      Huh? This comes across as a non sequitur.

    6. it would be a pure DX (Developer Experience) improvement, not a UX (User Experience) improvement

      This raises questions about how much the original approach made for good DX in the first place (and whether or not the new approach would). That is, when measured against not using a framework.

      The whole point of these purported DX wins is supposed to be just that—DX wins. When framed in the terms of this post, however, they're clear liabilities...

    7. it’s a lot of work to manually migrate 200+ components to what is essentially a new framework
    1. The web started off as a simple, easy-to-use, easy-to-write-for infrastructure. Programmers have remodelled HTML in their own image, and made it complicated, hard to implement, and hard to write for, excluding many potential creators.
    1. When can we expect the Web to stop pretending to be the old things, and start being what it really ought to be?

      The Web already is what it is, at least—and what that is is not an imitation of the old. If anything, it ought to be more like the old, cf Tschichold.

      Things like citability are crucial, not just generally, but in that they are fundamental to what the Web was supposed to have been, and modern Web practices overwhelmingly sabotage it.

    2. This conference imitating the old

      Providing papers for this conference is a choice between latex (which is a pre-web technology) or Word! There's a page limit! There's a styleguide on how references should be visually displayed! IT'S ALL ABOUT PAPER!
    1. This post is a narrative rant (in the same vein of Dan Luu's "Everything is Broken" post) about my problems one afternoon getting a Fancy New Programming Language to work on my laptop.
    1. Some people are extremely gifted mathematicians with incredible talent for algorithmic thinking, yet can be totally shut down by build configuration bullshit.
    1. The repo was 3 years old. Surely it wouldn't be that hard to get running again? Ha!

      Here's what went well.

      Installing Android Studio. I remember when this was a chore - but I just installed the Flatpak, opened it, and let it update.

      Cloning the repo. Again, simple.

      Importing the project. Couple of clicks. Done.

      Then it all went to hell.
    1. I sometimes find myself hacking together a quick console-based or vanilla JS prototype for an idea and then just stop there because messing with different cloud providers, containers, react, webpack and etc is just soul draining. I remember when I was 14 I'd throw up a quick PHP script for my project, upload it to my host and get it up and running in just a few minutes. A month ago I spent week trying to get Cognito working with a Serverless API and by the time I figured it out I was mentally done with the project. I cannot ever seem to get over this hump. I love working on side projects but getting things up and running properly is just a huge drag these days.
    1. My husband reviews papers. He works a 40h/wk industry job; he reviews papers on Saturday mornings when I talk to other people or do personal projects, pretty much out of the goodness of his heart. There is no way he would ever have time to download the required third party libraries for the average paper in his field, let alone figure out how to build and run it.
    1. I was trying to make it work with Python 2.7 but, after installing the required packages successfully I get the following error:
    2. Cidraque · 2016-Oct-23: Only linux? :(

      Matt Zucker · 2016-Oct-23: It should work on any system where you can install Python and the requirements, including windows.
    3. Hi there, I can't run the program, it gives me this output and I can't solve the problem by myself
    1. My first experience with Scheme involved trying and failing to install multiple Scheme distributions because I couldn’t get all the dependencies to work.
    1. The hidden curriculum consists of the unwritten rules, unspoken norms, and field-specific insider knowledge that are essential for student success but are not taught in classes. Examples include social norms about how to interact with authority figures, where to ask for unadvertised career-related opportunities, and how to navigate around the official rules of a bureaucracy.

      Clever. I also like the framing of MIT's "Missing Semester" https://missing.csail.mit.edu/

    1. The fact that most free software is privacy-respecting is due to cultural circumstances and the personal views of its developers
    1. Thereafter, I would need to build an executable, which, depending on the libraries upon which the project relies could be anything from straightforward to painful.
    1. @1:24:40

      Starting from main is almost never a good way to explain a program, unless the program is trivial.

    1. We estimate that by 2025, Signal will require approximately $50 million dollars a year to operate—and this is very lean

      Nah. Wrong.

    1. This is:

      Hsu, Hansen. 2009. “Connections between the Software Crisis and Object-Oriented Programming.” SIGCIS: Michael Mahoney and the Histories of Computing.

    2. We undoubtedly produce software by backward techniques. We undoubtedly get the short end of the stick in confrontations with hardware people because they are the industrialists and we are the crofters. Software production today appears in the scale of industrialization somewhere below the more backward construction industries. I […] would like to investigate the prospects for mass production techniques in software.

      Hsu only cites Mahoney for this, but the original McIlroy quote is from "Mass Produced Software Components".

    1. Firefox seems to impose a limit (at least in the latest release that I tested on) of a length* of 2^16 i.e. 65536. You can test this by creating a bookmarklet that starts javascript:"65525+11/// followed by 65512 other slashes and then a terminating quote. If you modify it to be any longer, the bookmarks manager will reject it (silently failing to apply the change). If you select another bookmarklet and then reselect the one you edited, it will revert to original "65525+11" one.

      * haven't checked whether this is bytes or...
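
      A quick way to generate the test bookmarklet (my snippet; the 3 slashes already shown above plus the 65512 others make 65515 total):

      ```javascript
      // 11 ("javascript:") + 1 (quote) + 8 ("65525+11") + 65515 slashes + 1 (quote)
      const url = 'javascript:"65525+11' + "/".repeat(65515) + '"';
      console.log(url.length); // 65536, i.e. 2^16
      ```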

    1. This snippet removes some of the empty a elements to make the headings anchors instead:

      ```javascript
      ([ ...document.querySelectorAll("a[name] +h1, a[name] +h2, a[name] +h3, a[name] +h4, h1 +a[name], h2 +a[name], h3 +a[name], h4 +a[name]") ]).map((x) => {
        if (x instanceof HTMLHeadingElement) {
          var link = x.previousElementSibling;
          var heading = x;
        } else {
          var link = x;
          var heading = x.previousElementSibling;
        }
        link.parentElement.removeChild(link);
        heading.setAttribute("id", link.name);
      });
      ```

    2. The HTML encoding of this document contains several errors, some of which substantially affect the way it's read. This fixes one of those problems in Appendix II:

      ```javascript
      ([ ...document.querySelectorAll("op") ]).reverse().forEach((op) => {
        let f = document.createDocumentFragment();
        f.append(document.createTextNode("<OP>"), ...op.childNodes);
        op.parentElement.replaceChild(f, op);
      });
      ```

      The problem should be apparent on what is, at the time of this writing, line 4437:

      ```html
      <code>IF ?w THEN ?x<OP>?y ELSE ?z<OP>?y</code>
      ```

      (The angle brackets around the occurrences of "OP" should be encoded as HTML entities. Because they aren't, they end up getting parsed as HTML op elements (which isn't a thing) and screwing up the document tree.)

    1. I keep repeating this in the hopes that it sticks, because too much OO code is written like Java, and too many programmers believe that OO is defined by Java.

      This reads like a total non-sequitur at this point in the post.

    2. The key and only feature that makes JavaScript object-oriented is the humble and error-prone this
    1. If you don’t own your platform (maybe you’re publishing to Substack or Notion), you can at least save your website to the Wayback Machine. I would also advise saving your content somewhere you control.

      The Wayback Machine should provide an easy way for website authors to upload archives that can be signed and validated with the same certificate you're serving on your domain, so that neither you nor the Internet Archive needs to waste more resources than necessary having the Wayback Machine crawl your site in the ordinary way.

    2. When talking to Ollie about this, he told me that some people leave their old websites online at <year>.<domain> and I love that idea

      At the expense of still breaking everyone's links.

      If you know you're going to do this, publish all your crap at <year>.<domain> now. Or even <domain>/<year>/. Oh wait, we just partially re-invented the recommendations of a bunch of static site generators.

      Better advice: don't touch anything once you've published it. (Do you really need to free up e.g. /articles/archive-your-old-projects articles from your namespace? Why?)

    1. we should be able to utilize tabs for any application and combine tabs between them

      Microsoft had a demo of this. It got shelved.

    1. I've mentioned it before, but what I find interesting is the idea of really parsing shell (scripts) like a conventional programming language—e.g. where what would ordinarily be binary invocations are actually function calls i.e. to built-ins (and all that implies, such as inlining, etc).

    1. Thompson observed that backtracking required scanning some parts of the input string multiple times. To avoid this, he built a VM implementation that ran all the threads in lock step: they all process the first character in the string, then they all process the second, and so on.

      What about actual concurrency (i.e. on a real-world CPU using e.g. x86-64 SMP) and not just a simulation? This should yield a speedup on lexing, right? Lexing a file containing n tokens under those circumstances should then take about as long as lexing the same number of tokens in a language that only contains a single keyword foo—assuming you can parallelize up to the number of keywords you have, with no failed branching where you first tried to match e.g. int, long, void, etc before finally getting around to the actual match.
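
      For reference, the lock-step simulation Russ describes is simple enough to sketch in JS (a hypothetical state encoding; no epsilon-cycle guards or other production niceties):

      ```javascript
      // States: {type:'char', ch, out}, {type:'split', out1, out2}, {type:'match'}.
      // Each input character is examined exactly once; no backtracking, no rescanning.
      function addState(set, s) {
        if (!s || set.has(s)) return;
        if (s.type === "split") { // follow epsilon edges eagerly
          addState(set, s.out1);
          addState(set, s.out2);
          return;
        }
        set.add(s);
      }

      function matches(start, input) {
        let current = new Set();
        addState(current, start);
        for (const c of input) {
          const next = new Set();
          for (const s of current) {
            if (s.type === "char" && s.ch === c) addState(next, s.out); // advance thread
          }
          current = next; // all threads move in lock step
        }
        return [...current].some((s) => s.type === "match");
      }

      // The NFA for /^ab$/:
      const m = { type: "match" };
      const b = { type: "char", ch: "b", out: m };
      const a = { type: "char", ch: "a", out: b };
      console.log(matches(a, "ab"), matches(a, "aab")); // true false
      ```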

    1. The next article in this series, “Regular Expression Matching: the Virtual Machine Approach,” discusses NFA-based submatch extraction. The third article, “Regular Expression Matching in the Wild,” examines a production implementation. The fourth article, “Regular Expression Matching with a Trigram Index,” explains how Google Code Search was implemented.

      Russ's regular expression article series makes for a good example when demonstrating the Web's pseudomutability problem. It also works well to discuss forward references.

    2. A more efficient but more complicated way to simulate perfect guessing is to guess both options simultaneously

      NB: Russ talking here about flattening the NFA into a DFA that has enough synthesized states to represent e.g. in either state A or state B. He's not talking about CPU-level concurrency. But what if he were?

    1. A contributor license agreement, or CLA, usually (but not always) includes an important clause: a copyright assignment.

      Mm, no.

      There are CLAs, and there are copyright assignments, and there are some companies that have CLAs that contain a copyright assignment, but they don't "usually" include a copyright assignment.

    1. People are greedy. They tend to be event-gluttons, wishing to receive far more information than they actually intend to read, and rarely remember to unsubscribe from event streams.
    2. Relative economies of scale were used by Nikunj Mehta in his dissertation to compare architectural choices: “A system is considered to scale economically if it responds to increased processing requirements with a sub-linear growth in the resources used for processing.”

      Wait, why is sub-linear growth a requirement...?

      Doesn't it suffice if there are some c₁ and c₂ such that costs are characterized by U(x) = rᵤx + c₁ and returns are V(x) = rᵥx + c₂ where rᵤ < rᵥ and the business has enough capital to reach the break-even point, past which U(x) < V(x)? For instance, take U(x) = 2x + 100 and V(x) = 3x: costs grow linearly—not sub-linearly—yet returns exceed costs for every x > 100.

    3. the era of specialization: people writing about technical subjects in a way that only other scientists would understand. And, as their knowledge grew, so did their need for specialist words to describe that knowledge. If there is a gulf today, between the man-in-the-street and the scientists and the technologists who change his world every day, that’s where it comes from.
    4. A few people even complained that my dissertation is too hard to read. Imagine that!

      To be fair: it's not an example of particularly good writing. As Roy himself says:

      ["hypertext as the engine of hypermedia state"*] is fundamental to the goal of removing all coupling aside from the standardized data formats and the initial bookmark URI. My dissertation does not do a good job of explaining that (I had a hard deadline, so an entire chapter on data formats was left unwritten) but it does need to be part of REST when we teach the ideas to others.

      https://web.archive.org/web/20080603222738/http://intertwingly.net/blog/2008/03/23/Connecting#c1206306269z

      I'm actually surprised that Fielding's dissertation gets cited so often. Fielding and Taylor's "Principled Design of the Modern Web Architecture" is much better.

      * sic

    5. The problem is that various people have described “I am using HTTP” as some sort of style in itself and then used the REST moniker for branding (or excuses) even when they haven’t the slightest idea what it means.
    6. It isn’t RESTful to use POST for information retrieval when that information corresponds to a potential resource, because that usage prevents safe reusability and the network-effect of having a URI.

      Controversial opinion: response bodies should never have been allowed for POST requests.

    7. the methods defined by HTTP are part of the Web’s architecture definition, not the REST architectural style

      See also: Roy's lamentations in "On software architecture".

    8. most folks who use the term are talking about REST without the hypertext constraint
    9. what application means in our industry: applying computing to accomplish a given task
  9. citeseerx.ist.psu.edu
    1. This is:

      Dahl, Ole-Johan, and Kristen Nygaard. “SIMULA: An ALGOL-Based Simulation Language.” Communications of the ACM 9, no. 9 (September 1966): 671–78. https://doi.org/10.1145/365813.365819

    1. How about an example that doesn't make you cringe: a piece of code known as Foo.java from conception through all its revisions to the most recent version maintains the same identity. We still call it Foo.java. To reference a specific revision or epoch is what Fielding is getting at with his "temporally varying member function MR(t), where revision r or time t maps to a set of spatial parts" stuff. In short, line 15 of Foo.java is just as much a part as version 15 of Foo.java, they just reference different subsets of its set of parts (one spatial and one temporal).
    1. it’s definitely too late for a clearer naming scheme so let’s move on

      No way. Not too late for a better porcelain that keeps the underlying data model but discards the legacy nomenclature entirely.

    2. it sounds like it’s some complicated technical internal thing

      it is

    3. after almost 15 years of using git, I’ve become very used to git’s idiosyncracies and it’s easy for me to forget what’s confusing about it
    1. I think that the website code started to feel like it had bitrotted, and so making new blog posts became onerous.
    1. almost every other time I've had the misfortune of compiling a c(++) application from scratch it's gone wildly wrong with the most undiagnose-able wall of error messages I've ever seen (and often I never manyage to figure it out even after over a day of trying because C developers insist on using some of the most obtuse build systems conceivable)
  10. Oct 2023
    1. where I have access to the full reply chain, of which my own instance often captures only a subset

      extremely frustrating

      The experience is so bad, I don't know why Mastodon even bothers trying to synthesize and present these local views to the user. I either have to click through every time, or I'm misled into thinking that my instance has already shown me the entire discussion, so I forget to go to the original.

    2. I realized that what I wanted is not a better Mastodon client, but a better Mastodon workflow

      If you remove the word "Mastodon" from this sentence, this insight holds for a lot of things.

    1. In many ways, computing security has regressed since the Air Force report on Multics was written in June 1974.
    2. the modern textual archive format

      The ar format is underrated.

    1. The solution, Hickey concludes, is that we ought to model the world not as a collection of mutable objects but a collection of processes acting on immutable data.

      Compelling offer when you try to draw upon your experience to visualize the opportunity cost of proceeding along the current path by focusing on the problem described, but it's basically a shell game; the solution isn't a solution. It rearranges the deck chairs—at some cost.

    1. HTML had blown open document publishing on the internet

      ... which may have really happened, per se, but it didn't wholly incorporate (subsume/cannibalize) conventional desktop publishing, which is still in 2023 dominated by office suites (a la MS Word) or (perversely) browser-based facsimiles like Google Docs. Because the Web as it came to be used turned out to be a sui generis medium, not exactly what TBL was aiming for, which was giving everything (everything—including every existing thing) its own URL.

    1. Hixie does have a point (though he didn't make it explicitly) and that is that the script doesn't really add anything semantic to the document, and thus would be better if it was accessed as an external resource

      Interesting distinction.

    1. or slightly more honestly as “RESTful” APIs

      I don't think that arises from honesty. I'm pretty sure most people saying "RESTful" don't have any clue what REST really is. I think they just think that RESTful was cute, and they're not trying to make a distinction between "REST" and "RESTful" (i.e. "REST... ish", or "REST-inspired" if we're being really generous). Not most of them, at least.

    2. REST purists

      I really hate this phrase. It's probably one of the leading causes of misunderstanding. It's unfortunate that it's used here.

    3. Fielding’s dissertation isn’t about web service APIs at all
    4. how REST became a model for web service APIs

      It didn't, though. It became a label applied to Web service APIs, despite having nothing to do with REST.

    5. today it would not be at all surprising to find that an engineering team has built a backend using REST even though the backend only talks to clients that the engineering team has full control over

      It's probably not REST, anyway.

    6. why REST is relevant there

      "could be relevant" (if you don't really understand it)

    7. Fielding came up with REST because the web posed a thorny problem of “anarchic scalability,” by which Fielding means the need to connect documents in a performant way across organizational and national boundaries. The constraints that REST imposes were carefully chosen to solve this anarchic scalability problem.

      There are better ways to put this.

    8. the common case of the Web

      Good way to put it.

    9. REST gets blindly used for all sorts of networked applications now

      the label¹, at least

      1. "REST", that is
    10. inspired by Unix pipes

      More appropriate might be "extracted from (the use of) UNIX pipes".

    11. should

      I don't know if that's totally accurate. "Could", maybe.

      We could ask, I guess, but.

    12. He was interested in the architectural lessons that could be drawn from the design of the HTTP protocol; his dissertation presents REST as a distillation of the architectural principles that guided the standardization process for HTTP/1.1.

      I don't think this is the best way to describe it. He was first interested in extracting an abstract model from the implementation of the Web itself (i.e. how it could be and was often experienced at the time—by simply using it). His primary concern was using that as a rubric against which proposals to extend HTTP would have to survive in order to be accepted by those working on standardization.

    13. The biggest of these misconceptions is that the dissertation directly addresses the problem of building APIs.

      "The biggest of these misconceptions [about REST] is that [Fielding's] dissertation directly addresses the problem of building APIs."

      For example (another HN commenter you can empathize with), danbruc insists on trying to understand REST in terms of APIs—even while the correct description is being given to him—because that's what he's always been told: https://news.ycombinator.com/item?id=36963311

    1. This is:

      Ciortea, Andrei, Olivier Boissier, and Alessandro Ricci. “Engineering World-Wide Multi-Agent Systems with Hypermedia.” In Engineering Multi-Agent Systems, edited by Danny Weyns, Viviana Mascardi, and Alessandro Ricci, 11375:285–301. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2019. https://doi.org/10.1007/978-3-030-25693-7_15.

    2. To illustrate this principle, an HTML page typically provides the user with a number of affordances, such as to navigate to a different page by clicking a hyperlink or to submit an order by filling out and submitting an HTML form. Performing any such action transitions the application to a new state, which provides the user with a new set of affordances. In each state, the user’s browser retrieves an HTML representation of the current state from a server, but also a selection of next possible states and the information required to construct the HTTP requests to transition to those states. Retrieving all this information through hypermedia allows the application to evolve without impacting the browser, and allows the browser to transition seamlessly across servers. The use of hypermedia and HATEOAS is central to reducing coupling among Web components, and allowed the Web to evolve into an open, world-wide, and long-lived system. In contrast to the above example, when using a non-hypermedia Web service (e.g., an implementation of CRUD operations over HTTP), developers have to hard-code into clients all the knowledge required to interact with the service. This approach is simple and intuitive for developers, but the trade-off is that clients are then tightly coupled to the services they use (hence the need for API versioning).
    1. Finally, it allows an author to reference the concept rather than some singular representation of that concept, thus removing the need to change all existing links whenever the representation changes

      I'm against this, because on net it has probably been more harmful than beneficial.

      At the very least, if the mapping is going to change—and it's known/foreseeable that it will change—then it should be returning 3xx rather than 200 with varying payloads across time.
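
      A sketch of the difference with Node's http module (the URLs and versioning scheme are invented):

      ```javascript
      const http = require("node:http");
      http.createServer((req, res) => {
        if (req.url === "/spec/latest") {
          // The stable name redirects to the current realization; when the
          // mapping changes, only this Location header changes.
          res.writeHead(302, { Location: "/spec/v2" });
          res.end();
        } else if (req.url === "/spec/v2") {
          // The realization itself never changes out from under its URL.
          res.writeHead(200, { "Content-Type": "text/plain" });
          res.end("spec v2\n");
        } else {
          res.writeHead(404);
          res.end();
        }
      }).listen(8080);
      ```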

    2. A resource can map to the empty set, which allows references to be made to a concept before any realization of that concept exists

      A very nice property—

      These are not strictly subject to the constraints of e.g. Git commits, blockchain entities, other Merkle tree nodes.

      You can make forward references that can be fulfilled/resolved when the new thing actually appears, even if it doesn't exist at the time that you're referring to it.

    1. Messages are delineated by newlines. This means, in particular, that the JSON encoding process must not introduce newlines within a message. Note however that newlines are used in this document for readability.

      Better still: separate messages by double linefeed (i.e., a blank line in between each one). It only costs one byte and it means that human-readable JSON is also valid in all readers—not just ones that have been bodged to allow non-conformant payloads under special circumstances (debugging).
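
      A sketch of a reader under that convention (the function and variable names are mine). Because typical pretty-printing never emits a blank line inside a message, the same reader handles compact and human-readable messages alike:

      ```javascript
      function* readMessages(stream) { // `stream` is a string here, for simplicity
        for (const chunk of stream.split("\n\n")) {
          if (chunk.trim() !== "") yield JSON.parse(chunk);
        }
      }

      // One compact message, then one pretty-printed message.
      const wire = '{"id": 1}\n\n{\n  "id": 2,\n  "ok": true\n}\n';
      console.log([...readMessages(wire)]); // [ { id: 1 }, { id: 2, ok: true } ]
      ```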

    1. without raising an error

      ... since HTML/XML is not part of the JS grammar (at least not in legacy runtimes, i.e. those at the time of this writing).

    2. ECMA-262 grammar

      So, at minimum, we won't get any syntax errors. But the semantics of the constructs we use mean that it's a valid expectation that the browser can execute this code itself—even though it is not strictly JS—because the expected semantics here conveniently overlap with some of JS's semantics.

    3. offline documents

      "[...] that is, ones not technically on the Web"

    4. This poses a problem that we'll need to address.

      Add a liaison/segue sentence here (after this one) that says "Browsers, in fact, were not designed with triple scripts in mind at all."

    5. Browsers

      "Web browsers"

    6. This is of course ideal

      huh?

    7. Our main here is an immediately invoked function expression, so it runs as soon as it is encountered. An IIFE is used here since the triple script dialect has certain prohibitions on the sort of top-level code that can appear in a triple script's global scope, to avoid littering the namespace with incidental values.

      Emphasize that this corresponds to the main familiar from other programming systems—that triple scripts doesn't just permit arbitrary use of IIFEs at the top level, so long as you write them that way. This is in fact the correct way to denote the program entry point; it's special syntax.

    8. The code labelled the "program entry point" (containing the main function) is referred to as shunting block.

      Preface this with "In the world of triple scripts"?

      Also, we can link to the wiki article for shunting blocks.

    9. Note that by starting with LineChecker.prototype.getStats before later moving on to LineChecker.analyze, we're not actually practicing top-down programming here...

    10. It expects the system read call to return a promise that resolves to the file's contents.

      Just say "It expects the read call to resolve to the file contents."?

    11. system.print("\nThis file doesn't end with a line terminator.");

      I don't like this. How about:

      system.print("\n");
      system.print("This file doesn't end with a line terminator.");
      

      (This will separate the last line from the preceding section by two blank lines, but that's acceptable—who said there must only be one?)

    12. What about a literate programming compiler that takes as input this page (as either Markdown or HTML) and then compiles it into the final program?

    13. and these tests can be run with Inaft. Inaft allows tests to be written in JS, which is very similar to the triple script dialect. Inaft itself is a triple script, and a copy is included at tests/harness.app.htm.

      Reword this to say "[...] can be run with Inaft, which is included in the project archive. (Inaft itself is a triple script, and the triplescripts.org philosophy encourages creators to make and use triple scripts that are designed to be copied into the project, rather than being merely referenced and subsequently downloaded e.g. by an external tool like a package manager.)"

    14. We need to embed the Hypothesis client here to invite people to comment on this. I've heard that one of the things that made the PHP docs so successful is that they contained a comment section right at the bottom of every page.

      (NB: I'm not familiar at all with the PHP docs through actual firsthand experience, so it may actually be wrong. I've also seen others complain about this, too. But seems good, on net.)

    15. The project archive's subtree

      Find a better way to say this. E.g. "The subdirectory for Part 1 from the project archive source tree"

    16. [ 0, 0, 0, 1 ]

      And of course there's a bug here. This should be [1, 0, 0, 1].

    17. returns [ 0, 0, 0, 1 ]

      We can afford to emphasize the TYPE family constants here by saying something like:

      Or, to put it another way, given a statement let stats = checker.getStats(), the following results are true:

      stats[LineChecker.TYPE_NONE] // evaluates to `1`
      stats[LineChecker.TYPE_CR]   // evaluates to `0`
      stats[LineChecker.TYPE_LF]   // evaluates to `0`
      stats[LineChecker.TYPE_CRLF] // evaluates to `1`
      
    18. propertes

      "properties"

    19. returned

      "... by getStats."

    20. In fact, this is the default for DOS-style text-processing utilities.

      Note that the example cited is "a single line of text". We should emphasize that this isn't what we mean when we say that this is the default for DOS-style text files. (Of course DOS supports multi-line text files. It's just that the last line will have no CRLF sequence.)

    1. The hack uses some clever multi-language comments to hide the HTML in the file from the script interpreter, while ensuring that the documentation remains readable when the file is interpreted as HTML.

      flems.io uses this to great effect.

      The (much simpler) triplescripts.org list-of-blocks file format relies on a similar principle.