  1. Last 7 days
    1. From an optimistic perspective:

      ~10 articles per month sounds like the perfect publishing pace for a blog, and more academics should be blogging.

      What if the submissions to the aforementioned low-quality journals were just a bunch of 6-pagers that were originally intended for syndication to the author's website and just happened to also get wider distribution in those sucky journals?

      And a separate question: might it be conceivable that this could turn out to be an effective way to actually bootstrap high-quality research output and a healthy ecosystem of professional/academic blogs (in the vein of the parable of the pottery class https://austinkleon.com/2020/12/10/quantity-leads-to-quality-the-origin-of-a-parable/)?

    1. Response #1

      It would be nice to be able to have good subject lines. I'm thinking that if you register your form, then you should also be able to register a template string that can be used to build a good subject line (like "Riku key request for riku@x.colbyrussell.com"—built from "Riku key request for ${email}").
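
      (A minimal sketch of the idea, using Python's string.Template—which happens to use the same ${...} placeholder syntax. The registration API is invented; only the substitution is real:)

        from string import Template

        # Hypothetical: the template string a form owner registers
        # alongside their form.
        registered_subject = Template("Riku key request for ${email}")

        # When a submission arrives, the mail gateway builds the subject
        # line from the submitted fields.
        submission = {"email": "riku@x.colbyrussell.com"}
        subject = registered_subject.substitute(submission)
        print(subject)  # Riku key request for riku@x.colbyrussell.com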

    1. In my first year of college, I was biking back through campus one lovely fall day when I saw a large group of fellow students gathered enthusiastically around a truck from the California burger chain In-N-Out. Maybe they craved a taste of home. But in Michigan, where I was from, there are no In-N-Outs. I’d never heard of it. Feeling excluded from the burger party, I biked off in a huff to eat my lunch in the dining hall alone. I remember thinking, I’m not standing in line for a burger! What was my problem? As an 18-year-old, I certainly didn’t want to think of myself as feeling that I did not belong in college. And I definitely didn’t want to think that an In-N-Out truck could trigger that feeling. How ridiculous that would be.
    1. This is frankly a really good phishing email. Breaking it down: It greets the user personally with their NPM username. This makes it look personalized, so people are more likely to trust it. People are used to the idea of changing passwords for security. With that in mind, at a glance the idea of changing your two-factor auth credentials "for security reasons" isn't completely unreasonable. NPM has always been kinda weird compared to other open source package repositories, so them requiring something strange like that reads as reasonable. It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link. It links to a website (I'm assuming it's on npm.help), and that website is used to get the two-factor credentials somehow and then start publishing new packages with the exploit code.

      What many analyses fail to highlight is that NPM has been sending a lot of really pushy emails about two-factor authentication settings over the last couple years.

      Perversely, this cargo culted best practice led to worse security.

    1. Even though some of the affected versions are currently being removed from npm, some are still available. So please use overrides in your package.json. A malicious package can still be pulled in if another dependency requires a vulnerable version range. Use the overrides feature in your package.json to force a specific, safe version of any package across your entire project.

      Perhaps the most irresponsible thing of the last week is that, among possible mitigations, priority is being given to exercising even more features of the poorly conceived package.json-based system, especially where it is redundant with (but inferior to) a mitigation scheme that consists of checking the given source code revision into the revision control system.

      This omission is especially absurd in relation to stuff like the has-ansi package, which hasn't had a substantial change in years.
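
      (For concreteness, the mitigation the quoted advice is pushing looks like this in package.json—the pinned version here is purely illustrative:)

        {
          "overrides": {
            "has-ansi": "5.0.1"
          }
        }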

    1. It works as well as you’d expect an operating system to work in a browser.

      The author also keeps saying "operating system" like this as a euphemism for "desktop environment" or "graphical shell", which is kind of annoying.

    2. Often times, I’ll want to refer to different pages at the same time. So I’ll CMD + click “a couple times” while browsing around and before I know it, I have 12 new tabs open – all indistinguishable from each other because they share the same favicon. PostHog.com has the same problem – especially as the site has grown from supporting a handful of paid products to over a dozen. As I looked for ways to solve this explosion of pages, I started to question many of the typical patterns that marketing & docs websites have today. Long-form scrolling. Oversized footers. Absurd whitespace. These websites encourage scrolling, but just to get people to the bottom of the page? And then what? Why are we doing this? What if we just made better ways to consume content? That’s the idea behind the new PostHog.com.

      The absolute last thing I want here is to delegate decisions about interaction style and the implementation of application-level affordances to the person supplying the content.

  2. Sep 2025
    1. There's some additional complexity because of something called the "lexer hack". Essentially, when parsing C you want to know if something is a type name or variable name (because that context matters for compiling certain expressions), but there's no syntactic distinction between them: int int_t = 0; is perfectly valid C, as is typedef int int_t; int_t x = 0;. To know if an arbitrary token int_t is a type name or a variable name, we need to feed type information from the parsing/codegen stage back into the lexer. This is a giant pain for regular compilers that want to keep their lexer, parser, and codegen modules pure and platonically separate, but it's actually not very hard for us! I'll explain it more when we get to the typedef section, but basically we just keep types: set[str] in Lexer, and when lexing, check if a token is in that set before giving it a token kind

      It's strange how much appeal "the lexer hack" has for being such a bad solution to the problem.

      The most reasonable thing is to just not care whether your lexer can distinguish between an identifier that refers to a type and any other sort of identifier. Just report that it's an identifier. In practice, this doesn't much change how you have to implement the parser, and the symbol table can remain local to the higher-level parser machinery where it was always going to be anyway.

      The lexer hack sucks.
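
      (A sketch of the alternative, with invented token and symbol-table shapes: the lexer stays dumb and reports every name as a plain identifier, and the parser—which owns the symbol table anyway—decides what the identifier means:)

        # The lexer never needs type feedback: every name is just IDENT.
        KEYWORDS = {"int", "typedef", "return"}

        def lex_word(word):
            kind = "KEYWORD" if word in KEYWORDS else "IDENT"
            return (kind, word)

        # The parser owns the symbol table it was going to need anyway.
        typedef_names = set()

        def is_type_name(token):
            kind, text = token
            return (kind, text) == ("KEYWORD", "int") or (
                kind == "IDENT" and text in typedef_names)

        # After parsing "typedef int int_t;" the parser records the name...
        typedef_names.add("int_t")
        # ...and "int_t x = 0;" parses fine: the same plain IDENT is
        # recognized as a type name here, with no lexer involvement.
        assert is_type_name(lex_word("int_t"))
        assert not is_type_name(lex_word("x"))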

    1. You can see where we're going. If our goal is to minimize copying, it would be better to copy a fundamental type once than to generate a pointer, copy that, then dereference that pointer to get the underlying value. That is the crux of this subtle optimization trick.

      This isn't subtle. It's intuitive and obvious, being presented as if it's non-obvious. I kept waiting for the punchline.

      I guess you can get here if your main mode of thinking is dealing in opaque "best practices" like "use references because it makes your code faster".

  3. Aug 2025
    1. I video taped the interview in order to document any divergences

      This struck me as an obviously good idea the first time I came across something similar where Wikileaks published their recording of an interview of Julian Assange by some large news organization (who published their own copy that they owned the rights to).

    1. we could learn from the history and design a better file format:

       • Each part of a format must be unambiguously located. It is a bad idea to rely on fragile signature searching.
       • Conflicting data resolution should be clearly defined, ideally by avoiding redundant data in the first place.
       • Leave room for backward-compatible feature extensions. Make it clear whether an extension is enabled or not.
       • Fields that are allowed to be silently ignored should not contain security-sensitive data. For example, the extra fields in ZIP should not contain filenames and sizes.

      Ideally, one would (above all) maintain compatibility with at least ISO/IEC 21320-1:2015 and, as a baseline concern, follow the lead of the design of PNG: specify a means by which to identify definitively (a) whether a data block at a given offset is a ZIP-level "chunk" versus random noise, and if so (b) the extent—i.e. size—of said chunk, so that implementations can confidently determine chunk boundaries and skip over them, even if they don't have intrinsic knowledge of the offsets and bitwidths of the fields in that struct.
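
      (A sketch of the sort of framing I mean, with a made-up chunk layout—4-byte big-endian length, then a 4-byte tag—to show that skipping a chunk requires no knowledge of its internals:)

        import struct

        def walk_chunks(data):
            """Yield (tag, payload) for each length-prefixed chunk."""
            pos = 0
            while pos + 8 <= len(data):
                length, tag = struct.unpack_from(">I4s", data, pos)
                yield tag, data[pos + 8 : pos + 8 + length]
                pos += 8 + length  # skip confidently, even for unknown tags

        blob = (struct.pack(">I4s", 5, b"NOTE") + b"hello"
                + struct.pack(">I4s", 3, b"MYST") + b"???")
        for tag, payload in walk_chunks(blob):
            print(tag, payload)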

    2. Normalize the ZIP file. To exploit ZIP parsing ambiguities, the attacker usually needs to carefully manipulate the fields of a ZIP file, which cannot be achieved by a regular ZIP archiver. Most ambiguities will disappear if the ZIP file is first extracted and then repacked. Therefore, if we care about only the contents but not the integrity of the whole ZIP file, we can normalize the ZIP file by extracting and repacking it before processing.

      This really seems like the best strategy—to ensure that Parser2 sees content the same way that Parser1 sees it, you can effectively give Parser2 Parser1's powers by having Parser1 "talk" to Parser2 by way of a specially prepared input. (It just happens to be the case that that input also looks like a ZIP archive.)

      This does require some validation of the suitability for Parser1 and Parser2 to work together, but it's not any more difficult than the previous task at hand whereby you expect/require Parser1 and Parser2 to behave exactly the same for all inputs. The task is made slightly easier. And to this end, a test suite (derived from the work from this paper) could be distributed for the purpose of benchmarking implementations for compatibility matches.

      This could be facilitated by the appropriate libraries/packages all being patched to support a "normalize" (i.e. "repack") operation (and for this to be the norm among all implementations) explicitly for this purpose.
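
      (A sketch of such a "normalize" operation using Python's zipfile: whatever this parser's interpretation of the archive is becomes the only interpretation that downstream consumers ever see:)

        import zipfile

        def normalize(src_path, dst_path):
            # Extract-and-repack: re-serialize the archive so downstream
            # parsers see exactly what this parser saw.
            with zipfile.ZipFile(src_path) as src, \
                 zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
                for info in src.infolist():
                    dst.writestr(info.filename, src.read(info))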

    1. ZIP is just not a very well defined format

      The purported ambiguity in the implementor's notes is overstated. With exactly one exception, all of the concrete criticisms I've ever come across (including, unfortunately, some of the specific ones woodruffw echoed in the Astral blog post earlier this month) come down to implementors just flat-out doing the wrong thing (e.g. disregarding the central directory entirely), or, when presented with the choice of:

      1. doing the obviously correct thing, versus

      2. taking a really, really, really obtuse position that a particularly strained reading justifies an implementor's decision to do certain things (like trying to infer the position of the central directory instead of just using the field that is explicitly labeled for that purpose—bonus points if accompanied by complaints that the field is "redundant")

      ... they go for the latter, out of a seeming inability to say, "Whoops, how silly of me," and just fix their damn (mental model and corresponding) implementation.

    1. These design considerations meant that the ZIP standard is complicated to implement, and in many ways is ambiguous in what the "result" of extracting a valid ZIP file should be.

      Not really. The correct behavior is, well, correct. There are just a lot of people who either succumb to taking (frankly bizarre) shortcuts or stubbornly insist on reading the spec the wrong way.

    2. multiple End of Central Directory headers

      This surely means "multiple End of Central Directory records". To reject archives with multiple headers would mean accepting only archives that contain a single file.

    1. WE DO NOT BREAK USERSPACE

      If only full-stack developers actually cared about the user who is sitting in front of a Web browser (i.e. end users) as much as they cared about the "users" of their APIs who are sitting in front of a text editor (i.e., programmers).

    1. What a waste of total bandwidth and disk space everywhere.

      Wait until you're motivated to articulate what you thought the value proposition of NPM-style build-time package fetchers actually ever was, and then eventually settle into the realization of how little disk space has been saved in the whole scheme, how much bandwidth has been wasted on thousands of redundant package fetches, and the immeasurable amount of human toil and the resulting hit to the world's creative output from overlay version control systems, instead of just leaving the version control to the DVCS that you're using.

    1. There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.

      Of course, this is another example where people say "JavaScript" and mean "NPM".

    2. Git can be configured to automatically convert LF to CRLF on checkout and CRLF breaks bash scripts.

      The last time I looked, around 8 years ago, of the multiple settings that Git offers wrt line endings, there was no combination of any of them that would do the most reasonable thing, and all of the advice and documentation around the use of these settings was wrong. The best course of action was to use the defaults and just deal with it.

    3. Image dimensions in EXIF metadata are cursed: The dimensions in EXIF metadata can be different from the actual dimensions of the image, causing issues with cropping and resizing.

      It's strange that EXIF even includes such properties—what's wrong with the image's existing dimension fields?

    1. APPNOTE section 1.4.

      This section doesn't appear in all revisions of the APPNOTE. Notably, it doesn't appear in revision 1.0, nor does it appear in 6.2.0, which is the basis for the ISO standardized ZIP spec.

    2. Python's zipfile interprets it as relative to the EOCDR

      Huh? For the reasons above, I would be very, very surprised if this were true and Woodruff isn't just confused. This would make it incompatible with the vast majority (virtually all) ZIP-producing software. (Is there any implementation which actually works this way?)

    3. Unfortunately, the ZIP specification is ambiguous about the nature of this offset: it's not described as either absolute (i.e. from the start of the ZIP) or relative (i.e. from the EOCDR's own offset).

      It would take a very strained reading to end up with a misunderstanding aligning with the latter behavior. (And it's very unfortunate that the "relative"/"absolute" terminology was chosen here, as where "relative" appears in the spec, it is used to mean almost the exact opposite of how it's used here—the relativity of the "relative offsets" mentioned in the spec is explicitly described as being relative to the start of the "disk", which in almost all cases constitutes the only "disk" comprising the archive—i.e. it's the offset from the start of the file.)

    1. I don't think I've seen a single person bring up the classism inherent in dictating gentlemanly manners.

      Here, or in general?

      I do think about this a lot. This is a nice, succinct way to put it. (Critique, though: "classism" is not the best way to put it. For better or worse, "privilege" is probably one of the best words we have for this. Separately: since "privilege" became a staple of common rhetoric, I've mused a lot about trying to convince people to minimize the focus on "privilege" (to avoid the familiar kneejerk reactions from those hearing it who have come to associate it with overuse), with the intent being to sway people instead by speaking about privilege without actually using the word "privilege"—speaking exclusively in terms of affordances*.)

      See: https://hypothes.is/a/TCB5zClKEeyrIOu9mp-5TA and tag:"privilege vs affordance". (NB: Hypothes.is doesn't linkify the tag in the preceding annotation correctly.)

    1. find out that I didn't have the whole picture, the problem was messier than it first appeared, and there were perfectly valid reasons for the code being that way

      I've tried using a hiking metaphor to describe a similar phenomenon (specifically, and perversely, as a preface when trying to explain second panel syndrome).

  4. Jul 2025
    1. When you own your domain name this is trivial.

      It's not. It assumes that if you have your own domain, you're also able to configure/patch the server to be an active participant in the Webfinger protocol—you can't do anything to add Webfinger support to a static site, for example. This is one of the things that makes Webfinger a bad protocol.

    1. Hence the difficulty in people seeing the point of getting Solid for just one app.

      I explained in a post to Mastodon somewhere that the issue lies in trying to sell, to users, the infrastructure.

      For Solid to succeed, it needs to get into the hands of people, and to get into the hands of people, it needs an app that incidentally uses Solid—you're never going to convince the majority of end users to adopt Solid on its merits. They need to experience it—something that TBL should be acutely aware of, since this is what drove adoption of the WWW.

    1. Students read each sentence out loud and then interpreted the meaning in their own words—a process Ericsson and Simon (220) called the “think-aloud” or “talk-aloud” method. In this 1980 article, the writers defend this strategy as a valid way to gather evidence on cognitive processing.

      Speaking (or typing, as in the case of transcription) has a substantially negative effect on my ability to process the same amount of information as silent reading.

      Later, the authors of this paper state that subjects who were "uncomfortable" with reading out loud had the option to read silently.

      I don't doubt the conclusions of the paper, but I suspect that reading aloud actually has a deleterious effect, especially for those who are performing the act of reading without showing signs of having achieved the comprehension desired.

    1. I use a jekyll/CI/static hosting workflow, and even though I make a zillion git commits a day, somehow branching, editing, PRing, and merging one to my website seems like friction.

      This is at the root of the infamous "Blogging vs. blog setups" comic https://rakhim.org/honestly-undefined/19/.

      The fact that this is true is also the entire basis for wikis. It is reasonable to find it irksome that people, perversely, refer to Git repos full of Markdown documentation as "wikis"—which they aren't. They are fully the opposite.

    1. However, the high-quality, widely-available free software that is most likely to get beginners hooked on programming – to turn users into programmers – are almost always written in a UNIX style. That's why my article focuses on this culture.

      Note that while this isn't true now in 2025, it wasn't exactly true in 2013 when this post was written, either. Eclipse, IntelliJ, and NetBeans were all popular IDEs that were free (in both senses) at the time this post was written.

    2. How about we use Python to process real-world data and then draw a few charts? Okay sure, let's fire up our trusty 1960s-era text editor (not Microsoft Word) and write some code. Wait, first we need to install the proper add-on libraries such as NumPy and Matplotlib. [an hour of troubleshooting later, especially for Windows users ...] Okay, let's write some code. [type, type, type] Yeah, isn't this fun and intuitive? Python makes it all so easy ...
    1. Note how the Select declares the property id="lines". That makes lines a reactive variable.

      That's a strange way to put it. Surely this makes lines refer to the Select element—and the properties (such as lines.value below) are just natural, first-class properties of that element?

      If not, and there's something else going on here, that's an unforgivable overloading of the "id" attribute, which is already intrinsically significant in XML for orthogonal purposes. This needs to go.

    1. I declare that IMGUI performance is pretty gosh darn good! Some readers might have predicted IMGUI to be significantly worse than RMGUI. Instead we see numbers that are in the same ballpark.

      Maybe this is true, but it's not a conclusion we can draw from these tests. In choosing programs to benchmark, he selected notorious behemoths to pit against the IMGUI set—which, it should be noted, contains stuff like Dear ImGui and EGUI, which aren't even apps.

      And later, in the Windows benchmarks, he shows that the two apps—the only apps—in the IMGUI set actually perform worse than the others. The fact that RAD Debugger has a heavier power draw than clunky Electron apps like VSCode, of all things, should be considered conclusive evidence in the other direction.

    1. @58:18

      I'm very anti- this whole, like, NPM thing that happened where all these web people like— not only is their program like not a mathematical object, it's like defined by things not even on their computer—out in the world that might change at some point or go away. That's the worst possible reality, and so I think you want in source control, ideally, everything that you're using.

    1. Software interfaces are much too rigid for that. I vaguely remember Alan Kay speaking about more lenient interface mechanisms - if anyone has a reference to share, please leave a comment!

      Kay has certainly spoken about such things (albeit not to my knowledge in any way that illuminates a generalizable approach that solves the problem). I'll elide a reference to anything specific (and the prerequisite side quest to track it down), and instead include a reference to Sussman at Strange Loop: We Really Don't Know How To Compute

    2. the few we have impose unpleasant restrictions

      NB: Unpleasant restrictions consist of things like, "You cannot read files from the user's local disk outside the file(s)/folder(s) that the user has given the software permission to access, and the person who authored it can't write to arbitrary locations on the user's disk"—both of which are eminently reasonable (and relate to things which you would hope would go on to be addressed (and the quicker, the better) if it were the case that the environment didn't already provide protection against these sorts of things).

    3. What I call software collapse is what is more commonly referred to as software rot: the fact that software stops working eventually if it is not actively maintained. The rot/maintenance metaphor is not appropriate in my opinion because it blames the phenomenon on the wrong part. Software does not disintegrate with time. It stops working because the foundations on which it was built start to move.
    1. both sides

      Third option: acknowledge that you chose the wrong foundations to build upon, and that just because the present structures have proven to be (what I'd have argued was a predictable) failure, that doesn't mean there is no foundation upon which these things can be built while delivering the sort of stability desired.

    1. What is also sorely missing is a straightforward way to package an application program with all its dependencies in such a way that it can be installed with reasonable effort on all common platforms.

      Assuming the "common platform" is something reasonable (i.e. depends only a runtime that can be expected to be present on all machines) this is as straightforward as zip -r ./package.zip research/.

      (The problem isn't figuring out how to do it. It's getting people to stop sleepwalking along with all the "best practices" that are outright inimical to the reproducibility/replicability goals. Almost everyone—including to an extent the author of this post—is unwilling to cast aside their attachments.)

    1. More work is clearly required. But it will only happen if larger parts of the scientific community agree that it is worth doing

      Yeah. It's a major social problem. Not so much a technical one.

    2. Now let's consider two popular recipes for reproducibility

      Imagine if there were a readily accessible World Wide Wruntime that anyone with any commodity computing device could use to run arbitrary code (and its corresponding documentation) written by other people...

    3. Alice: I couldn't compile your code. Look at this error message!
       Bob: It works for me! You use Debian 12? I still run Debian 9. That's surely what makes the difference. But I also have good news: I managed to run your code on my machine. The only problem is that... I get 0.8 nm.
       Alice: I use libode version 3.4. The documentation says it must be compiled with gcc 10 or later. You probably have an older gcc.
       Bob: Uhhh... Well... I will have to install a virtual machine with Debian 12, and you with Debian 9. Shall we meet again in a week?
    1. Everything I write in these posts will be a normal, 64-bit, Windows program. We'll be using Windows because that is the OS I'm running on all of my non-work machines

      Terrible pedagogy.

      It's trivial for someone who has only a single device to get their hands on a Linux image and run it in a lightweight VM if they're coming from Mac or Windows. The preceding sentence doesn't hold true for any other permutation of { Linux, Mac, Windows }.

    1. Many JavaScript websites will advise you to never use the “==” and “!=” JavaScript operators, because when they compare variables containing different data types, JavaScript will coerce one of the operands to a matching type, sometimes in unexpected ways. We can thank the early days of JavaScript for this feature, when it was trying to be extraordinarily forgiving of sloppy code. I’m not going to list all the odd results that can arise from JavaScript’s operand coercion, because there are more than enough examples on the web already. To avoid unexpected type coercion, and thus unexpected matches and/or mismatches, the usual advice is to always use strict equality operators (“===” and “!==”). I disagree.
    1. This is part of a deeper instinct in modern life, I think, to explain everything. Psychologically, scientifically, evolutionarily. Everything about us is caused, categorised

      We have a word for this—"pathologize"—but the author doesn't use it anywhere in this piece. The general thrust behind the word's existence was an implicit understanding that to pathologize X is generally the wrong approach to trying to explain or address X, because it doesn't really explain anything—you just sort of bottom out (or hit a wall; pick your metaphor). It operates on the same principle behind the degenerate behavior that Feynman observed and criticized: people treating knowledge of the name for something as a substitute for understanding the thing itself. Or, when asked to explain magnets, he asked the questioner what they thought they were asking, and pointed out that there isn't really an answer (of the sort they thought they wanted) for anything else they think they understand better than magnets but didn't ask about.

    1. Apps reacting Let's check how different applications react to this file.

      It's notable that the author left out the canonical implementation (PKZIP).

      I'd also expect them to have at least tried to test the ZIP support in Mac OS Finder and the Windows shell.

    2. technically both the offset of the Central Directory and the size of the Central Directory are redundant when you consider that the End of Central Directory Record should be, well, at the end of the Central Directory. In a proper ZIP file the following equation should be true: position_of_EoCDR - size_of_CD == offset_of_CD

      Not true—they're not redundant; it is not correct to assume that this relation holds.

      The only correct way to compute the offset is to use the field with information about the offset. It is not correct to do otherwise.
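
      (A sketch of the correct approach, with Python's struct: find the End of Central Directory record by its signature, then read the offset out of the field that exists for exactly this purpose—no inference:)

        import struct

        def central_directory_offset(data):
            # Scan backward for the EOCDR signature (necessary because the
            # record is followed by a variable-length comment).
            eocdr = data.rfind(b"PK\x05\x06")
            if eocdr < 0:
                raise ValueError("no End of Central Directory record")
            # In the fixed EOCDR layout, the 4-byte "offset of start of
            # central directory" field sits 16 bytes into the record.
            (offset,) = struct.unpack_from("<I", data, eocdr + 16)
            return offset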

    1. that people use as Web addresses

      "Web address" is so awkward as a term. Better to just make it concrete, anyway, by providing a familiar example like "The URLs that appear in the address bar in a Web browser […]".

    2. <Bob> <is a> <person>.
       <Bob> <is a friend of> <Alice>.
       <Bob> <is born on> <the 4th of July 1990>.
       <Bob> <is interested in> <the Mona Lisa>.
       <the Mona Lisa> <was created by> <Leonardo da Vinci>.
       <the video 'La Joconde à Washington'> <is about> <the Mona Lisa>.

      The syntax highlighting here is very strange. It seems to follow the rule that the first atom inside a set of angle brackets is bold, and the subsequent atoms (if any) are not.

    3. the Mona Lisa is the subject of one and the object of two triples

      The natural way to phrase this would be "the subject of one triple and the object of two". (There should probably be a "both" thrown in there, too, to make the sentence structure even more predictable.)

  5. Jun 2025
    1. D’s static reflection and code generation capabilities make it an ideal candidate to implement a codebase that needs to be called from several different languages and environments (e.g. Python, Excel, R, …). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL) then translating that to the supported languages, with a wire protocol to go along with it. With D, none of that is necessary.
    1. When I try to imagine what an optimistic far future will be like, say 100 or 200 years from now, my best answer so far is, “like today but more evenly distributed.”

      This is a better, but more verbose, way to state what I summarized as "evenly distributing yesterday's future".

    1. if the user knows a little bit of JavaScript, they can tweak user scripts written in Tampermonkey directly in the browser

      I'm confused about why this namechecks a proprietary clone of Greasemonkey instead of just, you know, mentioning Greasemonkey.

    2. Malleable software does not imply everybody creating all of their own tools from scratch.

      This is a superfluous clarification.

      (Maybe if it appeared nearer to the earlier discussion of situated software?)

      "Malleable software" implies the opposite, right on its face.

    3. Modifying a serious open source codebase usually requires significant expertise and effort. This applies even for making a tiny change, like changing the color of a button. Even for a skilled programmer, setting up a development environment and getting acquainted with a codebase represents enough of a hurdle that it’s not casually pursued in the moment.
    4. every layer of the modern computing landscape has been built upon the assumption that users are passive recipients rather than active co-creators. What we need instead are computing systems that invite every user to gradually become a creator.

      Is "creator" the right word here? There's lots of software that falls outside of what is the subject of this paper that enables creators. Indeed, it has been a common refrain in criticisms of the FOSS movement that for the types of software that creatives need and/or simply desire to use the proprietary apps tend to have no equals.

    1. it("basic reviver with multiple features"

      At this point, when you're writing things like this, it's worth re-examining why in the hell you've got all this stuff underneath that you don't even understand and certainly aren't using correctly.

      "It basic reviver with multiple features"? That makes no sense.

    1. All languages considered fully object-oriented feature encapsulation, polymorphism, and inheritance.

      Inheritance seems to be the odd item out in this list—inheritance is not essential for OO.

    2. It is easy to see how interchangeable parts could help in manufacturing. But manufacturing involves replicating a standard product, while programming does not. Programming is not an assembly-line business but a build-to-order one, more akin to plumbing than gun manufacturing.

      It's strange that there is a ready metaphor that goes unused here—

      the practice not of manufacturing parts (which is readily automated), but the labor that goes into designing the machines that enable that automation—design work which there is no known way to automate.

    3. , “No Silver Bullet Revisited,” American Programmer Journal (November 1995), http://virtualschool.edu/cox/pub/NoSilverBulletRevisted/

      As I mentioned in another annotation, Cox's article in American Programmer was actually published with the title '"No Silver Bullet" Reconsidered'.

    4. It appears that we have few specific environments (factory facilities) for the economical production of programs. I contend that the production costs are affected far more adversely by the absence of such an environment than by the absence of any tools in the environment… A factory supplies power, work space, shipping and receiving, labor distribution, and financial controls, etc. Thus a software factory should be a programming environment residing upon and controlled by a computer. Program construction, checkout and usage should be done entirely within this environment. Ideally it should be impossible to produce programs exterior to this environment… Economical products of high quality […] are not possible (in most instances) when one instructs the programmer in good practice and merely hopes that he will make his invisible product according to those rules and standards. This just does not happen under human supervision. A factory, however, has more than human supervision. It has measures and controls for productivity and quality.[18]

      Hsu again cites only Mahoney for this, and the passage here is presented as one quote, but it's actually a quote within a quote: first Bemer, then Mahoney. The original Bemer quote ends with the second sentence ("I contend that the production costs are affected far more adversely by the absence of such an environment than by the absence of any tools in the environment…"—truncated here, but ending in the original with the parenthetical "e.g. writing a program in PL/1 is using a tool"), and the remainder is Mahoney's commentary.

      The Bemer source is:

      R.W. Bemer, "Position Paper for Panel Discussion [on] the Economics of Program Production", Information Processing 68, North-Holland Publishing Company, 1969, vol. II, p. 1626.

    5. yet in spite of, but possibly precisely because of this, becoming the most widely used object-oriented programming language in the industry,[8] thus making it the most likely OO language of choice for managers

      This is such a strange thing to see in a paper from 2009.

      The source cited as support is a book from 1995.

    1. Some might find this example hard to believe. This really occurred in some code I’ve seen:

         (defun make-matrix (n m)
           (let ((matrix ()))
             (dotimes (i n matrix)
               (push (make-list m) matrix))))

         (defun add-matrix (m1 m2)
           (let ((l1 (length m1))
                 (l2 (length m2)))
             (let ((matrix (make-matrix l1 l2)))
               (dotimes (i l1 matrix)
                 (dotimes (j l2)
                   (setf (nth i (nth j matrix))
                         (+ (nth i (nth j m1))
                            (nth i (nth j m2)))))))))

       What’s worse is that in the particular application, the matrices were all fixed size, and matrix arithmetic would have been just as fast in Lisp as in FORTRAN. This example is bitterly sad: The code is absolutely beautiful, but it adds matrices slowly. Therefore it is excellent prototype code and lousy production code.

      Strong Python vibes.

    1. all too late I read a paper that explains why the Web beat Xanadu. It's an article by Richard Gabriel and written in 1987 and called 'Good News, Bad News, and How to Win Big, and Why the Thing that Does the First 50% Flashilly Wins'

      Wikipedia says it was written in 1989. Gabriel says it was published in 1991. Presumably the latter refers narrowly to the actual publication, when it was cut down to 9 pages for AI Expert in June 1991.

      Ted is referring to the "The Rise of Worse Is Better" section that appears in the version published on Gabriel's site but that was simply called "Worse is better" in the version that AI Expert published.

    1. Get them to buy into the fact that bullshit (read: diagnosing and debugging weird things) is a part of life in the world of computers.

      No.

      From Ted Nelson's "Computer Lib Pledge":

      • The purpose of computers is human freedom.
      • I am going to help make people free through computers.
      • I will not help the computer priesthood confuse and bully the public.
      • I will endeavor to explain patiently what computer systems really do.
      • I will not give misleading answers to get people off my back, like "Because that's the way computers work" instead of "Because that's the way I designed it."
      • I will stand firm against the forces of evil.
      • I will speak up against computer systems that are oppressive, insulting, or unkind, and do the best I can to improve or replace them, if I cannot prevent them from being bought or created in the first place.
      • I will fight injustice, complication, and any company that makes things difficult on purpose.
      • I will do all I can to further human understanding, especially through the new visualizing tools of interactive computer graphics.
      • I will do what I can to make systems easy to understand, interactive wherever possible, and fun for the user.
    2. hopefully you recognize the other kinds of “bullshit” a researcher will encounter: weird pseudo-code in a paper with parameters that seem defined by magic that you need to implement, or pieces of code that need to be extracted from something else, or someone’s ill-documented source code that worked yesterday but doesn’t today. If you live at the edge, you need to learn how to deal with bullshit. The general coping skills that let you deal with the bullshit, as well as specific computer-science coping skills, are hard to come by. If these aren’t taught early, in relatively low-stakes, easy-to-fix environments, it will only be worse later on.

      Guo's addendum (mentioned in an earlier annotation) addresses this: so many readers seem to have come away from Guo's piece with weird assumptions that he's not in the process of trying to teach his student any of this stuff and that he is instead just trying to set things up for them. That's not what he's doing, and that's not what upsets him.

    3. Scientific progress is made by building on the hard work of others and that, unfortunately, requires a certain perseverance.

      What of the lack of perseverance of "the others"—to contribute their work in a way that can be integrated? Wouldn't it be a lot more productive if everyone aimed their perseverance at dealing with the problems that fall out of their own contributions—work that they are intimately familiar with—rather than trying to grapple with problems in N of the M components that they are building on and don't have the same level of familiarity—and for this effort to be duplicated by everyone else trying to build on top of it, too?

    4. research is about the long game

      Indeed—which should really cast the norms around scientific computing in stark light as being clearly the wrong way to go about doing things. (Which itself isn't to say that there's anything special about scientific computing here—there are plenty of programmers (working on open source or otherwise) that get things just as wrong. Most of them, even. It's virtually everyone.)

    5. It is inevitable because we work with other people who are not software developers who have been trained in the best possible procedures.

      This doesn't nail it either. There are lots of people who are software developers who have been trained (or at least have been told, or have convinced themselves, that they have been trained) in the best possible procedures. Adar obliquely acknowledges "open-source developers", but in a strange way that seemingly implies software endeavors that aren't FOSS are better "incentivized to keep documentation up to date or the software running".

    6. The problem isn’t that there is some magic invocation that makes this work, the problem is that the next thing you download will need (what looks like) a completely new magic invocation.

      My initial reaction was to the "This is inevitable" part that follows, before backing up and focusing on this sentence, to which my response was originally going to be, "Is it? No, it isn't." That is, until I skimmed Guo's addendum/update (which I'd never seen before) in one of the two copies I kept of the piece which this piece by Adar is a response to. And indeed, it seemed, based on a surface-level reading of that addendum that Guo largely agreed. But then a close reread of that proves that there isn't any explicit or implicit agreement on Guo's part about the diagnosis offered here. And I don't think the diagnosis is correct.

      And now to focus on the next part:

      This is inevitable. It is inevitable because

      I don't think it is. (If nothing else, it hasn't been proven.) Even from the perspective of the author, this falls squarely in the "accidental complexity" versus "essential complexity" bucket.

      (And to pick up a thread of mine from earlier this year, I wonder if we have perhaps made a mistake in focusing too much on the notion of incidental (or "accidental") versus inherent (or "essential") complexity, and whether or not we should be trying to address the fact of incidental versus intentional complexity—which surely exists. And that isn't to say that's the culprit everywhere, but to reiterate: it surely exists and surely accounts for some of what's going on.)

    7. If you look at most learning objective frameworks you’ll find that there’s a spectrum from the factual (type “git clone”), to procedural (if you see this error message, make the following fixes; if you see this other error message, make these other fixes)

      I don't see how the examples offered are helpful in conveying the distinction. The instruction to "type git clone" seems procedural.

      Based only on my inference of what is actually intended by "factual" versus "procedural" (without first checking the link here), I'd guess that, "The command for copying a Git repository's contents to the local machine is git clone" would be a real example of a factual (versus a procedural) lesson style.

    1. My initial reaction was to the "This is inevitable" part that follows, before backing up and focusing on this sentence, to which my response was originally going to be, "Is it? No, it isn't." That is, until I skimmed Guo's addendum/update (which I'd never seen before) in one of the two copies I kept of the piece which this piece by Adar is a response to. And indeed, it seemed, based on a surface-level reading of that addendum that Guo largely agreed. But then a close reread of that proves that there isn't any explicit or implicit agreement on Guo's part about the diagnosis offered here. And I don't think the diagnosis is correct. And now to focus on the next part: This is inevitable. It is inevitable because I don't think it is. (If nothing else, it hasn't been proven.) Even from the perspective of the author, this falls squarely in the "accidental complexity" versus "essential complexity" bucket. (And to pick up a thread of mine from earlier this year, I wonder if we have perhaps made a mistake in focusing too much on the notion of incidental (or "accidental") versus inherent (or "essential") complexity, and whether or not we should be trying to address the fact of incidental versus intentional complexity—which surely exists. And that isn't to say that's the culprit everywhere, but to reiterate: it surely exists and surely accounts for some of what's going on.)

      Too meta, too messy, too personal, and too much detail.

    1. LLMs can write a large fraction of all the tedious code you’ll ever need to write. And most code on most projects is tedious. LLMs drastically reduce the number of things you’ll ever need to Google. They look things up themselves. Most importantly, they don’t get tired

      Does this mean arguments against verbose "boilerplate" languages are going to be given less credence?

  6. May 2025
    1. But algospeak isn’t just a communications issue: It’s a labor issue. The people who truly live and die by algorithmic ranking choices are the people whose ability to put groceries on the table is directly tied to whether a social media platform suppresses their videos or text.

      Pfft. Lame.

    1. without welfare

      The earlier breakdown involved being dependent on the bus for transportation into the city and relying on the library for entertainment. Is the bus subsidized? Is the library?

    2. there are ways to purchase bulk food at cost through their channels that can lower one’s food bill to laughably low levels — my wife and I presently spend perhaps $300 per month on food

      Before I got to the end of this sentence I was expecting to see something like $70 per person.

      $300 per month on food is "laughably low"? You can eat cheaper in the city!

    3. They’d merely need to content themselves with a manner of living that would be more in line with that of their own great-grandfathers than the life so often depicted on reality television, TikTok, Instagram, and whatever else.

      Quintessential boomer shit.

    1. For example, the syntactic analysis stage builds a conventional expression tree, but this tree is expressed as objects—instances of various classes relevant to compilation: Class, Method, Message, Selector, Argument, etc. […] The code for these operations is encapsulated inside each object, and is not spread throughout the system so changes are easily made. And all these specialized classes rely heavily on reusable code inherited from more primitive classes like Object, HashTable, Symbol, Token, and LinkedList.

      This is an interesting claim, given that one of the common criticisms of the inheritance-heavy style of classic OOP is precisely that it leads to code that is indeed "spread throughout". Adele Goldberg, commenting on the legacy of Smalltalk, made a well-known quip: "In Smalltalk, everything happens somewhere else."

      It's notable that although there are many common word sequences between this paper and the later, similarly titled one that appeared in IEEE Software (1984), this passage doesn't appear to be present (based on a quick skim—I could be missing something), and the closest corresponding statements are significantly reworded.

    1. Programs Are Models That Run: Programs have much in common with models, in particular they are abstractions of a system that make certain properties explicit and hide, or abstract away, other properties. But programs have a special property that most kinds of models do not – they can automatically produce the actual computation they model.
    1. detailed functional descriptions of hardware to the extent that the instruction set of the computer can be emulated

      This presupposes that you'd settle on opaque sequential blobs of said instructions as the preferred archive format—a consequence of the cultural milieu of the writer, and not the product of careful thinking and logical decision-making.

    1. A common measurement of code bulk is to count the number of lines of code, and a common measure for software productivity is to count the number of lines of code produced per unit time. These numbers are relatively easy to collect, and have actually demonstrated remarkable success in getting a quick reading of where things stand. But of course the code bulk metric pleases almost no one. Surely more than code bulk is involved in software productivity. If productivity can really be measured as the rate at which lines of code are produced, why not just use a tight loop to spew code as fast as possible, and send the programmers home?

      The modern incarnation of the fallacy of bulk-is-better is judging the quality of a project (esp. a component) by being overly concerned with whether it is more or less "active" than some other project.

      The question posed here is an obvious response for when you encounter instances of the bulk-is-better mindset (although it's not as easy to automate as Cox suggests here).

      I've often mused about the thought of pulling a Reddit* and automating some level of churn within a codebase for a project that's hosted on e.g. GitHub for the sole purpose of making sure that it appears "active".

      * A story about the early days of Reddit has become well-known after the creators volunteered some information about the early days when they made use of a bunch of "fake" accounts to submit links that the creators had aggregated/curated for the purpose of seeding the site.

    2. Most would agree that software productivity is low; lower than we'd like certainly, and probably lower than it needs to be. It is harder to agree what productivity is, at least in a quantifiable sense that would allow controlled experiments to be defined and measured scientifically. What is software productivity? Or much the same thing, How do you measure it? In measuring something, one hopes to understand the factors that influence it, and from these to discover ways to control it to advantage. This is the goal of Software Metrics, the study of factors influencing productivity by measuring real or artificial software development projects.
    3. Software does not need dusting, waxing, or cleaning.  It often does have faults that do need attention, but this is not maintenance, but repair.  Repair is fixing something that has been broken by tinkering with it, or something that has been broken all along.  Conversely, as the environment around software changes, energy must be expended to keep it current.  This is not maintenance; holding steady to prevent decline.
    4. The way you make code last a long time is you minimize dependencies that are likely to change and, to the extent you must take such dependencies, you minimize the contact surface between your program and those dependencies.

      It's strange that this basic truth is something that has to be taught/explained. It indicates a failure in analytical thinking, esp. regarding those parts of the question which the quoted passage is supposed to be a response to.

    5. That dependency gets checked into your source tree, a copy of exactly the version you use. Ten years later you can pull down that source and recompile, and it works

      In other words: actually practicing version control.

    1. Zulip is just snappy. It’s not bulky like Slack or Matrix.

      I'm baffled how much mindshare Matrix was evidently able to gain (and maintain) given how crummy it is as a product and frustrating to use.

    1. Mary Shaw's Towards and Engineering Discipline of Software

      The actual name of the paper referenced here appears to be "Prospects for an Engineering Discipline of Software". "Towards an Engineering Discipline of Software" is the last chapter/section.

    1. weird limitations of the web

      Claims like these need supporting evidence. Are they really "weird limitations"? Or are they actually reasonable limitations—on developers' unending attempt to exercise what they feel as their manifest destiny to control the user's device and experience?

    2. In practice, we can't even get web apps to work on desktop and mobile, to the point where even sites like Wikipedia, the simplest and most obvious place where HTML should Just Work across devices, has a whole fricking separate subdomain for mobile.

      This is a terrible example.

      That is 100% a failing of the folks making those decisions at Wikipedia, not HTML.

    3. I don't want to have to read documentation on how my SSG works (either my own docs or docs on some website) to remember the script to generate the updates, or worry about deploying changes, or fiddling with updates that break my scripts, or anything like that.

      This flavor of fatigue (cf "JavaScript fatigue") is neither particular to nor an intrinsic consequence of static site generators. It's orthogonal and a product of the milieu.

    4. being "followed" isn't always a good thing, it can create pressure to pander to your audience, the same thing that makes social media bad.

      It can also create pressure to hold your tongue on things that you might otherwise be comfortable expressing were it not for the fact that there was available a meticulously indexed, reverse chronological stream of every sequence of words that you've authored and made ambiently available to the world.

    1. Figure 5.

      The screenshot here (p. 33) reads:

      Introduction

      The intent of this paper is to describe a number of studies that we have conducted on the use of a personal computing system, one of which was located in a public junior high school in Palo Alto, California. In general, the purpose of these studies has been to test out our ideas on the design of a computer-based environment for storing, viewing, and changing information in the form of text, pictures, and sound. In particular, we have been looking at the basic information-related methods that should be immediately available to the user of such a system, and at the form of the user interface for accessing these methods. There will undoubtedly be users of a personal computer who will want to do something that can be, but not already done; so we have also been studying methods that allow the user to build his own tools, that is, to write computer programs.

      We have adopted the viewpoint that each user is a learner and have approached our user studies with an emphasis on providing introductory materials for users at different levels of expertise. Our initial focus has been on educational applications in which young children are taught how to program and to use personalizable tools for developing ideas in art and music, as well as in science and mathematics.

      It is titled "Smalltalk in the Classroom" and attributed to "Adele Goldberg and Alan Kay, Xerox Palo Alto Research Center, Learning Research Group". Searching around indicates that this title was re-used in SSL-77-2, but neither introduction in that report matches the text shown here.

      Searching around for distinct phrases doesn't turn anything up. Is this a lost paper? Is there some Smalltalk image around from which this text can be extracted?

    1. We all play the game we think we can do better at.

      Is that actually true? Surely there are examples where people play the game that they're less suited for—where the decision is driven by desire?

    1. There are a number of ways to become G, but usually you do it by adopting a complainer mindset. You wake up in a bad mood. You find little flaws in everything you see in the world, and focus on those.

      I don't think that's right—

      My life experiences during the (now bygone) Shirky era and the loose school associated with it really inculcated (in me, at least) the value of small, cumulative improvements contributed by folks on a wide scale. See something wrong? Fix it. Can't fix it (like, say, because you don't have the appropriate authorization)? File a bug so the people who can fix it know that it should be fixed. This matches exactly the description of seeing the "little flaws in everything you see in the world, and focus[ing] on those".

      Looking at those flaws and thinking "this thing isn't as good as it could be" is a necessary first step for optimism. That belief and the belief in the possibility of getting it fixed is the optimist approach.

      When I think of miserable people (and the ones who make me miserable), it's the ones who take the attitude of resignation that everything is shit and you shouldn't bother trying to change it because of how futile it is.

    1. This is a new technology, people (read: businesses) want to take advantage of it. They are often ignorant, or simply too busy to learn to harness it themselves. Many of them will pay you to weave their way on the web. html programming is one of the most lucrative, and most facile consulting jobs in the computing industry. Setting up basic web sites is not at all hard to do, panicked businesses looking to build their toll lane on the infoway will pay you unprecedented piles of cash for no more than a day's labour.
    1. You probably spew bullshit too. It’s just that you’re not famous, and so it may even be harder to be held accountable. Nobody writes essays poking the holes in some private belief you have.

      The two things mentioned here sit oddly together, and that's hard to skip over: spewing bullshit is an act of speech, while a private belief that someone could poke holes in isn't speech at all.

    2. most “experts”, in every industry, say some amount of bullshit

      This is my experience working with other people, generally. Another way to put it besides people not "prioritizing the truth" is that they just don't care—about many things, the truth being one of them.

    3. Philosopher Harry Frankfurt described “bullshit” as “speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care whether what they say is true or false.”

      I don't think this is a good definition of a liar. The bullshitter described here would be a liar if what they're saying is a lie.

    1. SMALLTALK (Kay 1973) and PLANNER73 (Hewitt 1973), both embody an interesting idea which extends the SIMULA (Ichbiah 1971) notion of classes, items that have both internal data and procedures. A user program can obtain an "answer" from an instance of a class by giving it the name of its request without knowing whether the requested information is data or is procedurally determined. Alan Kay has extended the idea by making such classes be the basis for a distributed interpreter in SMALLTALK, where each symbol interrogates its environment and context to determine how to respond. Hewitt has called a version of such extended classes actors, and has studied some of the theoretical implications of using actors (with built-in properties such as intention and resource allocation) as the basis for a programming formalism and language based on his earlier PLANNER work. Although SMALLTALK and PLANNER73 are both not yet publicly available, the ideas may provide an interesting basis for thinking about programs. The major danger seems to be that too much may have been collapsed into one concept, with a resultant loss of clarity.

      An early mention of Smalltalk.

    1. message-oriented programming language

      That's interesting. This paper is Copyright 1977 (dated June of that year) and is using the term "message-oriented" rather than "object-oriented".

      Rochus maintains that Liskov and Jones are the originators of the term "object-oriented [programming] language".

      Here's Goldberg in an early paper (that nonetheless postdates the 1976 paper by Liskov and Jones) writing at length about objects but calling Smalltalk "message-oriented" (in line with what Kay later said OO was really about).

    1. We're at the point in humanity's development of computing infrastructure where the source code for program texts (e.g. a module definition) should be rich text and not just ASCII/UTF-8.

      Forget that I said "rich text" for a moment and pretend that I was just narrowly talking about the inclusion of, say, graphical diagrams in source code comments.

      Could we do this today? Answer: yes.

      "Sure, you could define a format, but what should it look like? You're going to have to deal with lots of competing proposals for how to actually encode those documents, right?" Answer: no, not really. We have a ubiquitous, widely supported format that is capable of encoding this and more: HTML.

      Now consider what else we could do with that power. Consider a TypeScript alternative that works not by inserting inline type annotations into the program text, but instead by encoding the type of a given identifier via the HTML class attribute.
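
      To make that concrete, here's a minimal sketch of one way it could look. Everything in it (the "type:" class convention, the choice of elements) is invented for illustration; no real tool works this way today:

      ```html
      <!-- Hypothetical: a module definition stored as HTML. The class attribute
           carries each identifier's type, so the visible program text stays free
           of inline annotations, and a checker can read the types out of the DOM. -->
      <pre class="module" id="circle-area">
      function area(<var class="type:number">r</var>) {
        return Math.PI * <var class="type:number">r</var> ** 2;
      }
      </pre>
      ```

      A checker could walk that markup the way TypeScript walks its syntax tree today, while a plain browser still renders the program text legibly (and a comment could just as well carry an inline SVG diagram).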

      Now consider program parametrization, where a module includes multiple options for the way that you use it. You, as the programmer, configure it by opening the module definition in your program editor, gesturing at the thing you want to concretely specify, selecting one of those options, and having the program text for the module react accordingly—without erasing or severing the mechanism for configuration. That way, if another programmer wants to change the module parameters to satisfy some future need—or lift the module from your source tree and use it in another one for a completely different program—they can reconfigure it with the same mechanism that you used.
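
      Continuing the same invented convention, here's what that could look like: the module keeps every option in the document and records the current selection in an attribute, so reconfiguring never destroys the mechanism. All names here are hypothetical:

      ```html
      <!-- Hypothetical: data-options preserves every valid choice, while
           data-selected records the one currently in effect. An editor could
           present the options as a menu, and any later programmer can re-open
           that same menu to pick differently. -->
      <pre class="module" id="cache">
      const EVICTION = <span class="param"
                             data-options="lru fifo random"
                             data-selected="lru">"lru"</span>;
      </pre>
      ```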

    1. I think that most of the complexity in software development is accidental.

      I feel no strong urge to disagree, but on one matter I do want to raise a question: Is "accidental" even the right term? (To contrast against "essential complexity", that is.)

      A substantial portion of the non-essential complexity in modern software development (granted: something that Brooks definitely wasn't and couldn't have been thinking about when he first came up with his turn of phrase in the 1980s) doesn't seem to be "accidental". It very much seems to be intentional.

      So should we therefore* highlight the contrast between "incidental" vs "intentional" complexity?

      * i.e. would it better serve us if we did?

    1. One of the problems with building a jargon is that terms are vulnerable to losing their meaning, in a process of semantic diffusion - to use yet another potential addition to our jargon. Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition.
    1. Unless techniques to create software increase dramatically in productivity, the future of computing will be very large software systems barely being able to use a fraction of the computing power of extremely large computers.

      In hindsight, this would be a better outcome than what we ended up with instead: very large software systems that, relative to their size—esp. when contrasted with the size of their (often more capable and featureful) forebears—accomplish very little, but demand the resources of extremely powerful computers.

    1. Rust was initially a personal project by Graydon Hoare, which was adopted by Mozilla, his employer, and eventually became supported by the Rust Foundation

      Considering this post is about Swift, describes Lattner's departure, etc., it would have been opportune to mention Graydon Hoare's own departure from Mozilla and Rust, and his subsequent work at Apple on Swift.

    1. I made it obscenely clear that there was not going to be an RFC for the work I was talking about (“Pre-RFC” is the exact wording I used when talking to individuals involved with Rust Project Leadership)

      "Pre-RFC" doesn't sound like there's "not going to be an RFC" for it. It rather sounds like the opposite.

    1. I have never seen any form of create generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience - if there’s no experience to share, why bother? If it’s not worth writing, it’s not worth reading.

      I haven't been using LLMs much—my ChatGPT history is scant and filled with the evidence of months going by between the experiments I perform—but I recently had an excellent experience with ChatGPT helping to work out a response that I was incredibly pleased with.

      It went like this: one commenter, let's label them Person A, posted some opinion, and then I, Person B, followed up with a concurrence. Person C then appeared and either totally misunderstood the entire premise and logical throughline in a way that left me at a loss for how to respond, or was intentionally subverting the fundamental, basic (though not unwritten) rules of conversation. This sort of thing happens every so often, and it breaks my brain, so I sought help from ChatGPT.

      First I prompted ChatGPT to tell me what was wrong with Person C's comment. Without my saying so, it discerned exactly what the issue was, and described it correctly: "[…] the issue lies in missing the point of the second commenter's critique […]". I was impressed but still felt like pushing for more detail; the specific issue I was looking for ChatGPT to help explain was how Person C was breaking the conversation (and my brain) by violating Grice's maxims.

      It didn't end up getting there on its own within the first few volleys, even with me pressing for more, so I eventually just outright said that "The third comment violates Grice's maxims". ChatGPT responded with a level of sycophancy (and emoji-punctuated bulleted statements) that was a little too high, but it also went on to prove itself a useful tool: it helped craft a response about the matter at hand that I would not have been able to produce on my own, one that came across as a lot more substantive than the prompts alone.

    1. Is my data kept private? Yes. Your chosen files will not leave your computer. The files will not be accessible by anyone else, the provided link only works for you, and nothing is transmitted over a network while loading it.

      This is a perfect use for a mostly-single-file browser-based literate program.
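
      The machinery needed to honor that promise is small. Here's a minimal sketch (untested, ids invented) of a page that reads a user-chosen file entirely client-side; nothing in it ever touches the network:

      ```html
      <input type="file" id="picker">
      <pre id="out"></pre>
      <script>
        // FileReader hands the chosen file's bytes straight to the page;
        // no request is made, so the data cannot leave the computer.
        document.getElementById("picker").addEventListener("change", (event) => {
          const [file] = event.target.files;
          if (!file) return;
          const reader = new FileReader();
          reader.onload = () => {
            document.getElementById("out").textContent = reader.result;
          };
          reader.readAsText(file);
        });
      </script>
      ```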

  7. Apr 2025
    1. Around the start of ski season this year, we talked about my plans to go skiing that weekend, and later that day he started seeing skiing-related ads. He thinks it's because his phone listened into the conversation, but it could just as easily have been that it was spending more time near my phone

      Or—get this—it was because despite the fact that he "hasn't been for several years", he used to "ski a lot", and it was the start of ski season.

      You don't have to assume any sophisticated conspiracy of ad companies listening in through your device's microphone or location-awareness and user correlation. This is an outcome that could be effected by even the dumbest targeted advertising endeavor with shoddy not-even-up-to-date user data. (Indeed, those would be more likely to produce this outcome.)

    1. I had the Android Emulator in my laptop, which I used to install the application, add our insurance information and figure out where to go. Just to be safe, I also ordered an Android phone to be delivered to me while I went to the hospital, where I used my iPhone's hotspot to set it up and show all the insurance information to the hospital staff.

      If only there were some sort of highly accessible information system designed to make resources available from anywhere in the world, with no requirement beyond relatively simple and ubiquitous client software. Developers might then not be compelled to churn out bespoke programs that effectively lock up the data (and prevent it from being referenced, viewed, and kept within arm's reach of the people who are its intended consumers).

    1. Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing?

      This feels like a sharp left turn, given the way the post started out.

      Overspecialization is the root of the problem.

      Companies want generalists. This is actually reasonable and good. What's bad is that companies also believe that they need specialists (for things that don't actually require specialization).

    2. “But I can keep doing things the way that I’ve been doing them. It worked fine. I don’t need React”. Of course you can. You can absolutely deviate from the way things are done everywhere in your fast-moving, money-burning startup. Just tell your boss that you’re available to teach the new hires

      I keep seeing people make this move recently—this (implicit, in this case) claim that choosing React or not is a matter of what you could call costliness, and that React is the better option under those circumstances—that it's less costly than, say, vanilla JS.

      No one has ever substantiated it, though. That's one thing.

      The other thing is that, intuitively, I actually know* that the opposite is true—that React and the whole ecosystem around it is more costly. And indeed, perversely, the entire post in which the quoted passage is embedded is (seemingly unknowingly, i.e. without self-awareness) making the case against the position of React-as-the-low-cost-option.

      * or "know", if the naked assertion raises your hackles otherwise, esp re a comment that immediately follows a complaint about others' lack of evidence

    1. The one recent exception is “Why Can’t We Screenshot Frames From DRM-Protected Video on Apple Devices?”, which somehow escaped the shitlist and garnered 208 comments. These occasional exceptions to DF’s general shitlisting at HN have always made the whole thing more mysterious to me.

      Geez. How clueless can a person be? (Or is feigned cluelessness also a deliberate part of the strategy for increasing engagement?)

    1. The JSON Resource Descriptor (JRD) is a simple JSON object that describes a "resource" on the Internet, where a "resource" is any entity on the Internet that is identified via a URI or IRI. For example, a person's account URI (e.g., acct:bob@example.com) is a resource.

      Wait, the account URI is a resource? Not the account itself (identified by URI)?
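
      For concreteness, here's roughly what a minimal JRD for that account looks like (patterned after the examples in RFC 7033; the rel/href values are illustrative):

      ```json
      {
        "subject": "acct:bob@example.com",
        "links": [
          {
            "rel": "http://webfinger.net/rel/profile-page",
            "href": "https://www.example.com/~bob/"
          }
        ]
      }
      ```

      Elsewhere, RFC 7033 is more careful, describing "subject" as a URI that identifies the entity the JRD describes, which is exactly the distinction the quoted passage elides.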