Dear wanderer:
You are looking for https://support.pkware.com/pkzip/application-note-archives.
there is no de facto standard
This is simply not true. The de facto standard is "the behavior of PKZIP, circa 30 years ago". Conveniently, PKZIP happens to be shareware (at least the versions of interest here). One confounding factor, as documented by Jason Summers's blog, is the difficulty of getting your hands on bonafide PKZIP originals. https://entropymine.wordpress.com/2019/07/27/will-the-real-pkz110-exe-please-stand-up/
The ZIP64 EOCDR can be located based on the ZIP64 EOCDL, or by searching for its signature
The ZIP64 EOCDL can be located by a fixed offset from the regular EOCDR, or by searching for its signature
some parsers assume that the EOCDR immediately follows the central directory with no gap between them, and thus use the central directory size to determine its position
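For reference, a sketch (in Node, using a plain Buffer; illustrative, not production code) of the robust way to locate the regular EOCDR: scan backwards for the signature and validate the candidate against its own comment-length field:

```javascript
// Minimal sketch: locate the EOCDR by scanning backwards for its
// signature and checking that the comment-length field makes the
// record end exactly at end-of-file.
const EOCDR_SIG = 0x06054b50; // "PK\x05\x06", per APPNOTE
const EOCDR_MIN = 22;         // fixed portion of the record
const MAX_COMMENT = 0xffff;   // comment length is a 16-bit field

function findEOCDR(buf) {
  const stop = Math.max(0, buf.length - EOCDR_MIN - MAX_COMMENT);
  for (let i = buf.length - EOCDR_MIN; i >= stop; i--) {
    if (buf.readUInt32LE(i) === EOCDR_SIG) {
      const commentLen = buf.readUInt16LE(i + 20);
      if (i + EOCDR_MIN + commentLen === buf.length) return i;
    }
  }
  return -1; // not a ZIP (or a truncated one)
}
```

The comment-length check is what rules out most false positives (a stray signature inside compressed data).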
ZIP is just not a very well defined format
The purported ambiguity in the implementor's notes is overstated. With exactly one exception, all of the concrete criticisms I've ever come across (including, unfortunately, some of the specific ones woodruffw echoed in the Astral blog post earlier this month) come down to implementors just flat-out doing the wrong thing (e.g. disregarding the central directory entirely), or, when presented with the choice of:
doing the obviously correct thing, versus
taking a really, really, really obtuse position that a particularly strained reading justifies an implementor's decision to do certain things (like trying to infer the position of the central directory instead of just using the field that is explicitly labeled for that purpose—bonus points if accompanied by complaints that the field is "redundant")
... they go for the latter, out of a seeming inability to say, "Whoops, how silly of me," and just fix their damn (mental model and corresponding) implementation.
These design considerations meant that the ZIP standard is complicated to implement, and in many ways is ambiguous in what the "result" of extracting a valid ZIP file should be.
Not really. The correct behavior is, well, correct. There are just a lot of people who either succumb to taking (frankly bizarre) shortcuts or stubbornly insist on reading the spec the wrong way.
multiple End of Central Directory headers
This surely means "multiple End of Central Directory records". To reject archives with multiple headers would mean accepting only archives that contain a single file.
The correct method to unpack a ZIP is to first check the Central Directory of files before extracting entries
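A minimal sketch of what central-directory-first reading looks like in Node (the field offsets are from APPNOTE sections 4.3.12/4.3.16; `listEntries` and the assumption that `eocdrPos` has already been found, e.g. by signature scan, are mine; no ZIP64 or multi-disk handling):

```javascript
// Walk the central directory, trusting the EOCDR's offset field
// rather than inferring the CD's position. Illustrative only.
const CDFH_SIG = 0x02014b50; // central directory file header signature

function listEntries(buf, eocdrPos) {
  const count = buf.readUInt16LE(eocdrPos + 10); // total entry count
  let pos = buf.readUInt32LE(eocdrPos + 16);     // CD offset field -- use it!
  const entries = [];
  for (let i = 0; i < count; i++) {
    if (buf.readUInt32LE(pos) !== CDFH_SIG) throw new Error("bad CD entry");
    const nameLen = buf.readUInt16LE(pos + 28);
    const extraLen = buf.readUInt16LE(pos + 30);
    const commentLen = buf.readUInt16LE(pos + 32);
    entries.push({
      name: buf.toString("utf8", pos + 46, pos + 46 + nameLen),
      localHeaderOffset: buf.readUInt32LE(pos + 42),
    });
    pos += 46 + nameLen + extraLen + commentLen;
  }
  return entries;
}
```

Extraction then follows each entry's `localHeaderOffset` to the local header; local headers are never the source of truth for what's in the archive.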
I commented on earlier HN threads (re: the removal of XSLT support from the HTML spec and browsers) that IBM owns a high-performance XSLT implementation.
It's here: https://news.ycombinator.com/item?id=44955771
(NB: there is no more detail in the referenced comment than what is stated here.)
WE DO NOT BREAK USERSPACE
If only full-stack developers actually cared about the user who is sitting in front of a Web browser (i.e. end users) as much as they cared about the "users" of their APIs who are sitting in front of a text editor (i.e., programmers).
a server can send a header as a continuation of the previous header by adding leading whitespace
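This is the obsolete "obs-fold" rule (deprecated by RFC 7230). A toy unfolding pass, assuming the headers arrive as one CRLF-delimited string (the function name and shape are mine, for illustration):

```javascript
// Join obs-fold continuation lines: a line beginning with SP or HTAB
// continues the previous header's value.
function unfoldHeaders(raw) {
  const lines = raw.split("\r\n");
  const out = [];
  for (const line of lines) {
    if ((line.startsWith(" ") || line.startsWith("\t")) && out.length) {
      out[out.length - 1] += " " + line.trim(); // continuation line
    } else if (line) {
      out.push(line); // start of a new header
    }
  }
  return out;
}
```

Disagreements between parsers about exactly this rule are a classic source of request-smuggling bugs, which is why RFC 7230 tells senders not to produce it.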
Go’s ZIP archive extraction starts looking from the start of the compressed data.
What a waste of total bandwidth and disk space everywhere.
Wait until you're motivated to articulate what you thought the value proposition of NPM-style build-time package fetchers ever actually was, and then eventually settle into the realization of how little disk space has been saved in the whole scheme, how much bandwidth has been wasted on thousands of redundant package fetches, and the immeasurable amount of human toil (and the resulting hit to the world's creative output) spent on overlay version control systems instead of just leaving version control to the DVCS that you're already using.
There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
Of course, this is another example where people say "JavaScript" and mean "NPM".
Git can be configured to automatically convert LF to CRLF on checkout and CRLF breaks bash scripts.
The last time I looked, around 8 years ago, of the multiple settings that Git offers wrt line endings, there was no combination of them that would do the most reasonable thing, and all of the advice and documentation around the use of these settings was wrong. The best course of action was to use the defaults and just deal with it.
Image dimensions in EXIF metadata are cursed
The dimensions in EXIF metadata can be different from the actual dimensions of the image, causing issues with cropping and resizing.
It's strange that EXIF even includes such properties—what's wrong with the image's existing dimension fields?
APPNOTE section 1.4.
This section doesn't appear in all revisions of the APPNOTE. Notably, it doesn't appear in revision 1.0, nor does it appear in 6.2.0, which is the basis for the ISO standardized ZIP spec.
Python's zipfile interprets it as relative to the EOCDR
Huh? For the reasons above, I would be very, very surprised if this were true and Woodruff isn't just confused. This would make it incompatible with the vast majority (virtually all) ZIP-producing software. (Is there any implementation which actually works this way?)
Unfortunately, the ZIP specification is ambiguous about the nature of this offset: it's not described as either absolute (i.e. from the start of the ZIP) or relative (i.e. from the EOCDR's own offset).
It would take a very strained reading to end up with a misunderstanding aligning with the latter behavior. (It's also very unfortunate that the "relative"/"absolute" terminology was chosen here: where "relative" appears in the spec, it means almost the exact opposite of how it's used in this post. The "relative offsets" mentioned in the spec are explicitly described as being relative to the start of the "disk", which in almost all cases is the only "disk" comprising the archive; i.e., the offset is from the start of the file.)
I don't think I've seen a single person bring up the classism inherent in dictating gentlemanly manners.
Here, or in general?
I do think about this a lot. This is a nice, succinct way to put it. (Critique, though: "classism" is not the best way to put it. For better or worse, "privilege" is probably one of the best words we have for this. Separately: since "privilege" became a staple of common rhetoric, I've mused a lot about trying to convince people to minimize the focus on "privilege" (to avoid the familiar kneejerk reactions from those who have come to associate it with overuse), the intent being to sway people by speaking about privilege without actually using the word "privilege", speaking exclusively in terms of affordances* instead.)
See: https://hypothes.is/a/TCB5zClKEeyrIOu9mp-5TA and tag:"privilege vs affordance". (NB: Hypothes.is doesn't linkify the tag in the preceding annotation correctly.)
find out that I didn't have the whole picture, the problem was messier than it first appeared, and there were perfectly valid reasons for the code being that way
I've tried using a hiking metaphor to describe a similar phenomenon (specifically, and perversely, as a preface when trying to explain second panel syndrome).
When you own your domain name this is trivial.
It's not. It assumes that if you have your own domain, you're also able to configure/patch the server to be an active participant in the Webfinger protocol—you can't do anything to add Webfinger support to a static site, for example. This is one of the reasons Webfinger is a bad protocol.
Hence the difficulty in people seeing the point of getting Solid for just one app.
I explained in a post to Mastodon somewhere that the issue lies in trying to sell, to users, the infrastructure.
For Solid to succeed, it needs to get into the hands of people, and to get into the hands of people, it needs an app that incidentally uses Solid—you're never going to convince the majority of end users to adopt Solid on its merits. They need to experience it—something that TBL should be acutely aware of, since this is what drove adoption of the WWW.
Students read each sentence out loud and then interpreted the meaning in their own words—a process Ericsson and Simon (220) called the “think-aloud” or “talk-aloud” method. In this 1980 article, the writers defend this strategy as a valid way to gather evidence on cognitive processing.
Speaking (or typing, as in the case of transcription) has a substantially negative effect on my ability to process the same amount of information as silent reading.
Later, the authors of this paper state that subjects who were "uncomfortable" with reading out loud had the option to read silently.
I don't doubt the conclusions of the paper, but I suspect that reading aloud actually has a deleterious effect, especially for those who are performing the act of reading without showing signs of having achieved the comprehension desired.
I use a jekyll/CI/static hosting workflow, and even though I make a zillion git commits a day, somehow branching, editing, PRing, and merging one to my website seems like friction.
This is at the root of the infamous "Blogging vs. blog setups" comic https://rakhim.org/honestly-undefined/19/.
The fact that this is true is also the entire basis for wikis. It is reasonable to find it irksome that people, perversely, refer to Git repos full of Markdown documentation as "wikis"—which they aren't. They are fully the opposite.
I don't think the current affinity for third-party dependencies is particularly harmful
You should. It is.
I feel betrayed that hype did not lead me to Java sooner. Java is fun to write, productive, and gets an unfair reputation among new developers as a dinosaur.
See also lkesteloot's Java for Everything.
However, the high-quality, widely-available free software that is most likely to get beginners hooked on programming – to turn users into programmers – are almost always written in a UNIX style. That's why my article focuses on this culture.
Note that this isn't true now in 2025, and it wasn't exactly true in 2013 when this post was written, either. Eclipse, IntelliJ, and NetBeans were all popular IDEs that were free (in both senses) and widely available at the time this post was written.
This is:
Guo, Phillip. 2013. “The Two Cultures of Computing.” Blog. December. http://pgbovine.net/two-cultures-of-computing.htm.
How about we use Python to process real-world data and then draw a few charts? Okay sure, let's fire up our trusty 1960s-era text editor (not Microsoft Word) and write some code. Wait, first we need to install the proper add-on libraries such as NumPy and Matplotlib. [an hour of troubleshooting later, especially for Windows users ...] Okay, let's write some code. [type, type, type] Yeah, isn't this fun and intuitive? Python makes it all so easy ...
Note how the Select declares the property id="lines". That makes lines a reactive variable.
That's a strange way to put it. Surely this makes lines refer to the Select element—and the properties (such as lines.value below) are just natural, first-class properties of that element?
If not, and there's something else going on here, that's an unforgivable overloading of the "id" attribute, which is already intrinsically significant in XML for orthogonal purposes. This needs to go.
You shouldn’t need a React-savvy front-end developer to help you make routine changes to your site.
Yeah. Web authoring is really just desktop publishing—except in its current state, foiled by the JavaScript industrial complex.
the JavaScript industrial complex
Great turn of phrase, but it's weakened by the choice to shoehorn React in as a dependency.
These annotations come from the use of https://github.com/dwhly-proj/DailyJournal which sends you an email, waits for your reply, then posts it as a top-level annotation targeting the dummy URL http://my.journal and tagged with DailyJournal.
Our video annotation capability works for public YouTube videos that have either human or machine-generated transcripts. The video you’ve selected does not have one or is a private video or is not a YouTube video.
@dwhly docdrop.org is broken.
I declare that IMGUI performance is pretty gosh darn good! Some readers might have predicted IMGUI to be significantly worse than RMGUI. Instead we see numbers that are in the same ballpark.
Maybe this is true, but it's not a conclusion we can draw from these tests. In choosing programs to benchmark, he selected notorious behemoths to pit against the IMGUI set—which, it should be noted, contains stuff like Dear ImGui and EGUI, which aren't even apps.
And later, in the Windows benchmarks, he shows that the two apps—the only apps—in the IMGUI set actually perform worse than the others. The fact that RAD Debugger has a heavier power draw than clunky Electron apps like VSCode, of all things, should be considered conclusive evidence in the other direction.
@58:18
I'm very anti- this whole, like, NPM thing that happened where all these web people like— not only is their program like not a mathematical object, it's like defined by things not even on their computer—out in the world that might change at some point or go away. That's the worse possible reality, and so I think you want in source control, ideally, everything that you're using.
Software interfaces are much too rigid for that. I vaguely remember Alan Kay speaking about more lenient interface mechanisms - if anyone has a reference to share, please leave a comment!
Kay has certainly spoken about such things (albeit not to my knowledge in any way that illuminates a generalizable approach that solves the problem). I'll elide a reference to anything specific (and the prerequisite side quest to track it down), and instead include a reference to Sussman at Strange Loop: We Really Don't Know How To Compute
the few we have impose unpleasant restrictions
NB: Unpleasant restrictions consist of things like, "You cannot read files from the user's local disk outside the file(s)/folder(s) that the user has given the software permission to access, and the person who authored it can't write to arbitrary locations on the user's disk"—both of which are eminently reasonable (and concern things you would hope would go on to be addressed, the quicker the better, if it were the case that the environment didn't already provide protection against these sorts of things).
This is more like an earthquake destroying a house than like fungi or bacteria transforming food
Relatedly, Brad Cox quibbles about the inappropriate application of the word "maintenance" to these situations.
What I call software collapse is what is more commonly referred to as software rot: the fact that software stops working eventually if it is not actively maintained. The rot/maintenance metaphor is not appropriate in my opinion because it blames the phenomenon on the wrong part. Software does not disintegrate with time. It stops working because the foundations on which it was built start to move.
both sides
Third option: acknowledge that you chose the wrong foundations to build upon, and that just because the present structures have proven to be (what I'd have argued was a predictable) failure, it doesn't mean that there is no foundation upon which these things can be built while delivering the sort of stability desired.
What is also sorely missing is a straightforward way to package an application program with all its dependencies in such a way that it can be installed with reasonable effort on all common platforms.
Assuming the "common platform" is something reasonable (i.e. depends only on a runtime that can be expected to be present on all machines) this is as straightforward as zip -r ./package.zip research/.
(The problem isn't figuring out how to do it. It's getting people to stop sleepwalking along with all the "best practices" that are outright inimical to the reproducibility/replicability goals. Almost everyone—including to an extent the author of this post—is unwilling to cast aside their attachments.)
function Shop() { this.construct = function
Even before class declaration statements appeared in ES6, this was the wrong way to define methods.
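For contrast, a sketch of the pre-ES6 idiom (the `Shop` name is carried over from the quoted snippet; the constructor argument and `describe` method are invented for illustration): methods go on the prototype, shared by all instances, instead of being re-created per instance inside the constructor.

```javascript
// Constructor function holds per-instance state...
function Shop(name) {
  this.name = name;
}

// ...while methods live on the prototype, so every instance shares
// one function object instead of each instance allocating its own.
Shop.prototype.describe = function () {
  return "Shop: " + this.name;
};
```

Defining methods inside the constructor (as the quoted code does) costs a fresh closure per instance and defeats `instanceof`-style introspection of shared behavior.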
AbstractBuilder is not used because JavaScript does not support abstract classes
Sure it does.
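For instance, one common pattern (ES6+) that gives you abstract-class behavior via `new.target` (`AbstractBuilder` is the name from the quoted text; `HouseBuilder` is invented for illustration):

```javascript
// An "abstract" base class: refuses direct instantiation and
// requires subclasses to override build().
class AbstractBuilder {
  constructor() {
    if (new.target === AbstractBuilder) {
      throw new Error("AbstractBuilder is abstract");
    }
  }
  build() {
    throw new Error("build() must be overridden");
  }
}

class HouseBuilder extends AbstractBuilder {
  build() {
    return "house";
  }
}
```

`new AbstractBuilder()` throws, while `new HouseBuilder()` works, which is exactly the contract "abstract class" names in other languages.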
More work is clearly required. But it will only happen if larger parts of the scientific community agree that it is worth doing
Yeah. It's a major social problem. Not so much a technical one.
Outside of science, approximately nobody cares about reproducibility.
Now let's consider two popular recipes for reproducibility
Imagine if there were a readily accessible World Wide Wruntime that anyone with any commodity computing device could use to run arbitrary code (and its corresponding documentation) written by other people...
Alice: I couldn't compile your code. Look at this error message! Bob: It works for me! You use Debian 12? I still run Debian 9. That's surely what makes the difference. But I also have good news: I managed to run your code on my machine. The only problem is that... I get 0.8 nm. Alice: I use libode version 3.4. The documentation says it must be compiled with gcc 10 or later. You probably have an older gcc. Bob: Uhhh... Well... I will have to install a virtual machine with Debian 12, and you with Debian 9. Shall we meet again in a week?
Proposal for a much more interesting piece of writing: "The Sacrifices We Choose Not To Make"; cf:
Everything I write in these posts will be a normal, 64-bit, Windows program. We'll be using Windows because that is the OS I'm running on all of my non-work machines
Terrible pedagogy.
It's trivial for someone who has only a single device to get their hands on a Linux image and run it in a lightweight VM if they're coming from Mac or Windows. The preceding sentence doesn't hold true for any other permutation of { Linux, Mac, Windows }.
paying a hundred bucks a month
Yeesh.
Many JavaScript websites will advise you to never use the “==” and “!=” JavaScript operators, because when they compare variables containing different data types, JavaScript will coerce one of the operands to a matching type, sometimes in unexpected ways. We can thank the early days of JavaScript for this feature, when it was trying to be extraordinarily forgiving of sloppy code. I’m not going to list all the odd results that can arise from JavaScript’s operand coercion, because there are more than enough examples on the web already. To avoid unexpected type coercion, and thus unexpected matches and/or mismatches, the usual advice is to always use strict equality operators (“===” and “!==”). I disagree.
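A few of the coercion results being alluded to, easy to verify in any JS console:

```javascript
// Loose equality coerces operands; some of the classic results:
0 == "";            // true  ("" coerces to 0)
0 == "0";           // true
"" == "0";          // false (so == is not even transitive)
null == undefined;  // true  (the one loose check many find genuinely useful)
null === undefined; // false
```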
This is part of a deeper instinct in modern life, I think, to explain everything. Psychologically, scientifically, evolutionarily. Everything about us is caused, categorised
We have a word, "pathologize", for this, but the author doesn't use it anywhere in this piece. The general thrust behind the word's existence is an implicit understanding that to pathologize X is generally the wrong approach to explaining or addressing X. It doesn't really explain anything—you just sort of bottom out (or hit a wall, pick your metaphor). It operates on the same principle behind the degenerate behavior that Feynman observed and criticized: people think that knowing the name for something is a substitute for understanding that thing. (Or, when asked to explain magnets, Feynman asked the questioner what they thought they were asking, and explained that there's really not an answer of the sort they wanted, any more than there is for the things they think they understand better than magnets but didn't ask about.)
Apps reacting
Let's check how different applications react to this file.
It's notable that the author left out the canonical implementation (PKZIP).
I'd also expect them to have at least tried to test the ZIP support in Mac OS Finder and the Windows shell.
technically both the offset of the Central Directory and the size of the Central Directory are redundant when you consider that the End of Central Directory Record should be, well, at the end of the Central Directory. In a proper ZIP file the following equation should be true: position_of_EoCDR - size_of_CD == offset_of_CD
Not true—they're not redundant; it is not correct to assume that this relation holds.
The only correct way to compute the offset is to use the field with information about the offset. It is not correct to do otherwise.
Our next step is to open source the codebase (Q3 2024).
The drive-through-only design gets rid of spaces that don't directly generate revenue.
The URL for this page (https://www.beet.tv/2009/10/webs-inventor-sir-tim-bernerslee-double-backslashes-were-unnecessary.html) betrays the author's (editor's?) inability to grasp what TBL says even while they are the ones trying to piggyback off his comments for attention.
[…] People on the radio are calling it "backslash backslash"[…]
I’ve been unable to obtain a .txt form of the source code
Apart from the formatting characters, this should be easy to do—it would be the simplest OCR task you could ever take on.
I’d like to share the process of porting the original codebase from ~67,000 lines of C code to ~81,000 lines of Rust
Interesting outcome. Rarely (basically never) do you see "it's more verbose to write in $LANG2 what would be written in fewer lines if you were using $LANG1" touted as a positive.
for example
missing comma here
URI (Uniform Resource Identifier)
Ibid.
that people use as Web addresses
"Web address" is so awkward as a term. Better to just make it concrete, anyway, by providing a familiar example like "The URLs that appear in the address bar in a Web browser […]".
<Bob> <is a> <person>. <Bob> <is a friend of> <Alice>. <Bob> <is born on> <the 4th of July 1990>. <Bob> <is interested in> <the Mona Lisa>. <the Mona Lisa> <was created by> <Leonardo da Vinci>. <the video 'La Joconde à Washington'> <is about> <the Mona Lisa>.
The syntax highlighting here is very strange. It seems to follow the rule that the first atom inside a set of angle brackets is bold, and the subsequent atoms (if any) are not.
This ability to have the same resource be in the subject position of one triple and the object position of another makes it possible to find connections between triples
There's probably a better way to put this.
the Mona Lisa is the subject of one and the object of two triples
The natural way to phrase this would be "the subject of one triple and the object of two". (There should probably be a "both" thrown in there, too, to make the sentence structure even more predictable.)
Allows for code to become “portable” between files since the code can carry most of its external dependencies inside of itself, making refactoring a bit easier.
Making it easy for newcomers to understand where certain functions are coming from.
This is an underrated (actually undervalued) affordance in any codebase.
The two following snippets are completely equivalent in function
Do they emit the same code?
three
* 1.5504
regular changes in the ReadTheDocs build setup also required regular time investment on our part to make things work
D’s static reflection and code generation capabilities make it an ideal candidate to implement a codebase that needs to be called from several different languages and environments (e.g. Python, Excel, R, …). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL) then translating that to the supported languages, with a wire protocol to go along with it. With D, none of that is necessary.
including 10s to download the crates every time
What are overlay SCMs supposed to actually achieve again?
When I try to imagine what an optimistic far future will be like, say 100 or 200 years from now, my best answer so far is, “like today but more evenly distributed.”
This is a better, but more verbose, way to state what I summarized as "evenly distributing yesterday's future".
they’re scrappy personal tools
This belies the inevitable would-be justifications for the technology choices behind these projects.
Upgrading tools would frequently break compatibility with existing data. And it was difficult to make different tools interoperate if they couldn’t agree exactly on the format of their underlying JSON data.
Why should the app even know let alone care about which version of the serialization format is in play?
if the user knows a little bit of JavaScript, they can tweak user scripts written in Tampermonkey directly in the browser
I'm confused about why this namechecks a proprietary clone of Greasemonkey instead of just, you know, mentioning Greasemonkey.
Malleable software does not imply everybody creating all of their own tools from scratch.
This is a superfluous clarification.
(Maybe if it appeared nearer to the earlier discussion of situated software?)
"Malleable software" implies the opposite, right on its face.
Modifying a serious open source codebase usually requires significant expertise and effort. This applies even for making a tiny change, like changing the color of a button . Even for a skilled programmer, setting up a development environment and getting acquainted with a codebase represents enough of a hurdle that it’s not casually pursued in the moment.
every layer of the modern computing landscape has been built upon the assumption that users are passive recipients rather than active co-creators. What we need instead are computing systems that invite every user to gradually become a creator.
Is "creator" the right word here? There's lots of software outside the subject of this paper that enables creators. Indeed, it has been a common refrain in criticisms of the FOSS movement that for the types of software creatives need and/or simply desire to use, the proprietary apps tend to have no equals.
The requirements were driven by the needs of their specific department, not the needs of every doctor in the country.
See situated software.
To do our best work and live our best lives, we need spaces that let us each express our unique potential.
live our best lives
it("basic reviver with multiple features"
At this point, when you're writing things like this, it's worth re-examining why in the hell you've got all this stuff underneath you that you don't even understand and that you're certainly not using correctly.
"It basic reviver with multiple features"? That makes no sense.
model recovery
Great term of art for e.g. scraping.
All languages considered fully object-oriented feature encapsulation, polymorphism, and inheritance.
Inheritance seems to be the odd item out in this list—it is not essential for OO.
It is easy to see how interchangeable parts could help in manufacturing. But manufacturing involves replicating a standard product, while programming does not. Programming is not an assembly-line business but a build-to-order one, more akin to plumbing than gun manufacturing.
It's strange that there is a ready metaphor that isn't used—
The practice not of manufacturing parts (which is readily automated), but the labor that goes into designing the machines that enable that automation—design work which there is no known way to automate.
, “No Silver Bullet Revisited,” American Programmer Journal (November 1995), http://virtualschool.edu/cox/pub/NoSilverBulletRevisted/
As I mentioned in another annotation, Cox's article in American Programmer was actually published with the title '"No Silver Bullet" Reconsidered'.
It appears that we have few specific environments (factory facilities) for the economical production of programs. I contend that the production costs are affected far more adversely by the absence of such an environment than by the absence of any tools in the environment… A factory supplies power, work space, shipping and receiving, labor distribution, and financial controls, etc. Thus a software factory should be a programming environment residing upon and controlled by a computer. Program construction, checkout and usage should be done entirely within this environment. Ideally it should be impossible to produce programs exterior to this environment… Economical products of high quality […] are not possible (in most instances) when one instructs the programmer in good practice and merely hopes that he will make his invisible product according to those rules and standards. This just does not happen under human supervision. A factory, however, has more than human supervision. It has measures and controls for productivity and quality.18
Hsu again cites only Mahoney for this, and the passage here is presented as one quote, but it's actually a quote within a quote: first Bemer and then Mahoney. The original Bemer quote ends with the second sentence ("I contend that the production costs are affected far more adversely by the absence of such an environment than by the absence of any tools in the environment", which is cut short here but in the original ends with the parenthetical "e.g. writing a program in PL/1 is using a tool"); the remainder is Mahoney's commentary.
The Bemer source is:
R.W. Bemer, "Position Paper for Panel Discussion [on] the Economics of Program Production", Information Processing 68, North-Holland Publishing Company, 1969, vol. II, p. 1626.
yet in spite of, but possibly precisely because of this, becoming the most widely used object-oriented programming language in the industry,8 thus making it the most likely OO language of choice for managers
This is such a strange thing to see in a paper from 2009.
The source cited as support is a book from 1995.
Some might find this example hard to believe. This really occurred in some code I’ve seen:

(defun make-matrix (n m)
  (let ((matrix ()))
    (dotimes (i n matrix)
      (push (make-list m) matrix))))

(defun add-matrix (m1 m2)
  (let ((l1 (length m1))
        (l2 (length m2)))
    (let ((matrix (make-matrix l1 l2)))
      (dotimes (i l1 matrix)
        (dotimes (j l2)
          (setf (nth i (nth j matrix))
                (+ (nth i (nth j m1))
                   (nth i (nth j m2)))))))))

What’s worse is that in the particular application, the matrices were all fixed size, and matrix arithmetic would have been just as fast in Lisp as in FORTRAN. This example is bitterly sad: The code is absolutely beautiful, but it adds matrices slowly. Therefore it is excellent prototype code and lousy production code.
Strong Python vibes.
This article was originally published in 1991.
Specifically, it's:
Gabriel, Richard P. “LISP: Good News, Bad News, How to Win Big.” AI Expert 6, no. 6 (1991): 30–39.
all too late I read a paper that explains why the Web beat Xanadu. Its an article by Richard Gabriel and written in 1987 and called 'Good News, Bad News, and How to Win Big, and Why the Thing that Does the First 50% Flashilly Wins'
Wikipedia says it was written in 1989. Gabriel says it was published in 1991. Presumably the latter refers narrowly to the actual publication, when it was cut down to 9 pages for AI Expert in June 1991.
Ted is referring to the "The Rise of Worse Is Better" section that appears in the version published on Gabriel's site but that was simply called "Worse is better" in the version that AI Expert published.
The header image here doesn't load. But I made sure that it was archived. If and when Medium stops syndicating this article, you can find a copy of the image that was used here:
See this note for the same essay, only qualified by Guo's UCSD subdomain instead of his personal one:
This is:
Gabriel, Richard P. “LISP: Good News, Bad News, How to Win Big.” AI Expert 6, no. 6 (1991): 30–39.
... and a copy (in HTML) can be found at https://www.dreamsongs.com/WIB.html
No Silver Bullet Revisted American Programmer Journal
Note that the actual title as it appeared in the American Programmer Journal is «"No Silver Bullet" Reconsidered».
“because I said so”
Irony (cf https://hypothes.is/a/BeEXTkk3EfCERutnCr7MPw)
Get them to buy into the fact that bullshit (read: diagnosing and debugging weird things) is a part of life in the world of computers.
No.
From Ted Nelson's "Computer Lib Pledge":
- The purpose of computers is human freedom.
- I am going to help make people free through computers.
- I will not help the computer priesthood confuse and bully the public.
- I will endeavor to explain patiently what computer systems really do.
- I will not give misleading answers to get people off my back, like "Because that's the way computers work" instead of "Because that's the way I designed it."
- I will stand firm against the forces of evil.
- I will speak up against computer systems that are oppressive, insulting, or unkind, and do the best I can to improve or replace them, if I cannot prevent them from being bought or created in the first place.
- I will fight injustice, complication, and any company that makes things difficult on purpose.
- I will do all I can to further human understanding, especially through the new visualizing tools of interactive computer graphics.
- I will do what I can to make systems easy to understand, interactive wherever possible, and fun for the user.
hopefully you recognize the other kinds of “bullshit” a researcher will encounter: weird pseudo-code in a paper with parameters that seem defined by magic that you need to implement, or pieces of code that need to be extracted from something else, or someone’s ill-documented source code that worked yesterday but doesn’t today. If you live at the edge, you need to learn how to deal with bullshit. The general coping skills that let you deal with the bullshit, as well as specific computer-science coping skills, are hard to come by. If these aren’t taught early, in relatively low-stakes, easy-to-fix environments, it will only be worse later on.
Guo's addendum (mentioned in an earlier annotation) addresses this: so many readers seem to have come away from Guo's piece with weird assumptions that he's not in the process of trying to teach his student any of this stuff and that he is instead just trying to set things up for them. That's not what he's doing, and that's not what upsets him.
Scientific progress is made by building on the hard work of others and that, unfortunately, requires a certain perseverance.
What of the lack of perseverance of "the others"—to contribute their work in a way that can be integrated? Wouldn't it be a lot more productive if everyone aimed their perseverance at dealing with the problems that fall out of their own contributions—work that they are intimately familiar with—rather than trying to grapple with problems in N of the M components that they are building on and don't have the same level of familiarity—and for this effort to be duplicated by everyone else trying to build on top of it, too?
research is about the long game
Indeed—which should really cast the norms around scientific computing in stark light as being clearly the wrong way to go about doing things. (Which itself isn't to say that there's anything special about scientific computing here—there are plenty of programmers (working on open source or otherwise) that get things just as wrong. Most of them, even. It's virtually everyone.)
It is inevitable because we work with other people who are not software developers who have been trained in the best possible procedures.
This doesn't nail it either. There are lots of people who are software developers who believe (or at least have been told, or have convinced themselves) that they have been trained in the best possible procedures. Adar obliquely acknowledges "open-source developers", but in a strange way that seemingly implies software endeavors that aren't FOSS are better "incentivized to keep documentation up to date or the software running".
The problem isn’t that there is some magic invocation that makes this work, the problem is that the next thing you download will need (what looks like) a completely new magic invocation.
My initial reaction was to the "This is inevitable" part that follows, before backing up and focusing on this sentence, to which my response was originally going to be, "Is it? No, it isn't." That is, until I skimmed Guo's addendum/update (which I'd never seen before) in one of the two copies I kept of the piece which this piece by Adar is a response to. And indeed, it seemed, based on a surface-level reading of that addendum that Guo largely agreed. But then a close reread of that proves that there isn't any explicit or implicit agreement on Guo's part about the diagnosis offered here. And I don't think the diagnosis is correct.
And now to focus on the next part:
This is inevitable. It is inevitable because
I don't think it is. (If nothing else, it hasn't been proven.) Even from the perspective of the author, this falls squarely in the "accidental complexity" versus "essential complexity" bucket.
(And to pick up a thread of mine from earlier this year, I wonder if we have perhaps made a mistake in focusing too much on the notion of incidental (or "accidental") versus inherent (or "essential") complexity, and whether or not we should be trying to address the fact of incidental versus intentional complexity—which surely exists. And that isn't to say that's the culprit everywhere, but to reiterate: it surely exists and surely accounts for some of what's going on.)
If you look at most learning objective frameworks you’ll find that there’s a spectrum from the factual (type “git clone”), to procedural (if you see this error message, make the following fixes; if you see this other error message, make these other fixes)
I don't see how the examples offered are helpful in conveying the distinction. The instruction to "type git clone" seems procedural.
Based only on my inference of what is actually intended by "factual" versus "procedural" (without first checking the link here), I'd guess that, "The command for copying a Git repository's contents to the local machine is git clone" would be a real example of a factual (versus a procedural) lesson style.
My initial reaction was to the "This is inevitable" part that follows, before backing up and focusing on this sentence, to which my response was originally going to be, "Is it? No, it isn't." That is, until I skimmed Guo's addendum/update (which I'd never seen before) in one of the two copies I kept of the piece which this piece by Adar is a response to. And indeed, it seemed, based on a surface-level reading of that addendum that Guo largely agreed. But then a close reread of that proves that there isn't any explicit or implicit agreement on Guo's part about the diagnosis offered here. And I don't think the diagnosis is correct. And now to focus on the next part: This is inevitable. It is inevitable because I don't think it is. (If nothing else, it hasn't been proven.) Even from the perspective of the author, this falls squarely in the "accidental complexity" versus "essential complexity" bucket. (And to pick up a thread of mine from earlier this year, I wonder if we have perhaps made a mistake in focusing too much on the notion of incidental (or "accidental") versus inherent (or "essential") complexity, and whether or not we should be trying to address the fact of incidental versus intentional complexity—which surely exists. And that isn't to say that's the culprit everywhere, but to reiterate: it surely exists and surely accounts for some of what's going on.)
Too meta, too messy, too personal, and too much detail.
(NB: this actually comes from export of my annotation of Eytan Adar's piece "On the Value of Command-Line “Bullshittery”" originally published on Medium. See https://hypothes.is/a/zmhLwkkqEfCyyo_A6dOGrw for the original annotation. I used the aforementioned export to annotate the annotation—i.e., to add a true, first-class (though perhaps second-rate) annotation, rather than a reply.)
A major purpose of this style is Malleable Software, or software that is more supportive of change, reusability, and enhancement.
Is this the first occurrence of this term?
LLMs can write a large fraction of all the tedious code you’ll ever need to write. And most code on most projects is tedious. LLMs drastically reduce the number of things you’ll ever need to Google. They look things up themselves. Most importantly, they don’t get tired
Does this mean arguments against verbose "boilerplate" languages are going to be given less credence?
Previously, however, the GUID (being the URL) changed too
So why not just stop doing that? None of the churn was necessary.
“The goal for all the reverse logistics stuff,” says Roberson, “is to keep things out of the landfill.”
But algospeak isn’t just a communications issue: It’s a labor issue. The people who truly live and die by algorithmic ranking choices are the people whose ability to put groceries on the table is directly tied to whether a social media platform suppresses their videos or text.
Pfft. Lame.
without welfare
The earlier breakdown involved being dependent on the bus for transportation into the city and relying on the library for entertainment. Is the bus subsidized? Is the library?
there are ways to purchase bulk food at cost through their channels that can lower one’s food bill to laughably low levels — my wife and I presently spend perhaps $300 per month on food
Before I got to the end of this sentence I was expecting to see something like $70 per person.
$300 per month on food is "laughably low"? You can eat cheaper in the city!
They’d merely need to content themselves with a manner of living that would be more in line with that of their own great-grandfathers than the life so often depicted on reality television, TikTok, Instagram, and whatever else.
Quintessential boomer shit.
both conventional solitary tools to enhance their personal productivity, and a new class of facility, coordination tools, to address the coordination problems which develop when large numbers of individuals need to cooperate on a common task
Interesting problem decomposition with shades of Engelbart re https://dougengelbart.org/pubs/papers/scanned-original/1960-augment-133181-Special-Considerations-Individual-re-Information.pdf.
Hey, traveler. You're looking for https://dl.acm.org/doi/pdf/10.1145/948093.948095.
For example, the syntactic analysis stage builds a conventional expression tree, but this tree is expressed as objects—instances of various classes relevant to compilation: Class, Method, Message, Selector, Argument, etc. […] The code for these operations is encapsulated inside each object, and is not spread throughout the system so changes are easily made. And all these specialized classes rely heavily on reusable code inherited from more primitive classes like Object, HashTable, Symbol, Token, and LinkedList.
This is an interesting claim, given that one of the common criticisms of the inheritance-heavy style of classic OOP is precisely that it leads to code that is indeed "spread throughout". Adele Goldberg, commenting on the legacy of Smalltalk, made a well-known quip: "In Smalltalk, everything happens somewhere else."
It's notable that although there are many common word sequences between this paper and the later, similarly titled one that appeared in IEEE Software (1984), this passage doesn't appear to be present (based on a quick skim—I could be missing something), and the closest corresponding statements are significantly reworded.
This is:
Cox, Brad J. “The Message/Object Programming Model.” In Proceedings of Softfair: A Conference on Software Development Tools, Techniques, and Alternatives, 51–60. Arlington, VA: IEEE Computer Society Press, 1983.
In Hints and Principles for Computer System Design, 2020, Lampson describes his original 1983 paper (https://dl.acm.org/doi/10.1145/800217.806614) as being "[r]eprinted with some changes in IEEE Software".
This is:
Lampson, B.W. “Hints for Computer System Design.” IEEE Software 1, no. 1 (January 1984): 11–28. https://doi.org/10.1109/MS.1984.233391.
Programs Are Models That Run

Programs have much in common with models, in particular they are abstractions of a system that make certain properties explicit and hide, or abstract away, other properties. But programs have a special property that most kinds of models do not – they can automatically produce the actual computation they model.
detailed functional descriptions of hardware to the extent that the instruction set of the computer can be emulated
This presupposes that you'd settle on opaque sequential blobs of said instructions as the preferred archive format—a consequence of the cultural milieu of the writer, and not the product of careful thinking and logical decisionmaking.
A common measurement of code bulk is to count the number of lines of code, and a common measure for software productivity is to count the number of lines of code produced per unit time. These numbers are relatively easy to collect, and have actually demonstrated remarkable success in getting a quick reading of where things stand. But of course the code bulk metric pleases almost no one. Surely more than code bulk is involved in software productivity. If productivity can really be measured as the rate at which lines of code are produced, why not just use a tight loop to spew code as fast as possible, and send the programmers home?
The modern incarnation of the fallacy of bulk-is-better is judging the quality of a project (esp. a component) by being overly concerned with whether it is more or less "active" than some other project.
The question posed here is an obvious response for when you encounter instances of the bulk-is-better mindset (although it's not as easy to automate as Cox suggests here).
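Cox's rhetorical question is easy to make literal. A throwaway sketch (mine, not Cox's) of the "spew loop" that would top any lines-of-code leaderboard:

```python
# If productivity really were lines of code per unit time, this
# "programmer" would be unbeatable: every emitted line is valid Python.
def spew(n):
    return [f"x{i} = {i}  # busywork" for i in range(n)]

lines = spew(1_000_000)
print(len(lines))  # one million "productive" lines, near-instantly
```

Which is exactly the point: any metric a trivial loop can saturate is measuring volume, not value.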
I've often mused about the thought of pulling a Reddit* and automating some level of churn within a codebase for a project that's hosted on e.g. GitHub for the sole purpose of making sure that it appears "active".
* A story about the early days of Reddit has become well-known after the creators volunteered some information about the early days when they made use of a bunch of "fake" accounts to submit links that the creators had aggregated/curated for the purpose of seeding the site.
Although many of the hypotheses in this field are unproved
Are they even well-formed hypotheses? That is, are they falsifiable?
Most would agree that software productivity is low; lower than we'd like certainly, and probably lower than it needs to be. It is harder to agree what productivity is, at least in a quantifiable sense that would allow controlled experiments to be defined and measured scientifically. What is software productivity? Or much the same thing, How do you measure it? In measuring something, one hopes to understand the factors that influence it, and from these to discover ways to control it to advantage. This is the goal of Software Metrics, the study of factors influencing productivity by measuring real or artificial software development projects.
Software does not need dusting, waxing, or cleaning. It often does have faults that do need attention, but this is not maintenance, but repair. Repair is fixing something that has been broken by tinkering with it, or something that has been broken all along. Conversely, as the environment around software changes, energy must be expended to keep it current. This is not maintenance; holding steady to prevent decline.
The way you make code last a long time is you minimize dependencies that are likely to change and, to the extent you must take such dependencies, you minimize the contact surface between your program and those dependencies.
It's strange that this basic truth is something that has to be taught/explained. It indicates a failure in analytical thinking, esp. regarding those parts of the question which the quoted passage is supposed to be a response to.
That dependency gets checked into your source tree, a copy of exactly the version you use. Ten years later you can pull down that source and recompile, and it works
In other words: actually practicing version control.
Zulip is just snappy. It’s not bulky like Slack or Matrix.
I'm baffled how much mindshare Matrix was evidently able to gain (and maintain) given how crummy it is as a product and frustrating to use.
This is:
Cox, Brad J. “Message/Object Programming: An Evolutionary Change in Programming Technology.” IEEE Software 1, no. 1 (January 1984): 50–61. https://doi.org/10.1109/MS.1984.233398.
Mary Shaw's Towards an Engineering Discipline of Software
The actual name of the paper referenced here appears to be "Prospects for an Engineering Discipline of Software". "Towards an Engineering Discipline of Software" is the last chapter/section.
Consider a derivative: Slim Source Code—
… in the context of the problem described in "The Cost of Selective Recompilation and Environment Processing" (Adams, Rolf, Walter Tichy, and Annette Weinert. ACM Transactions on Software Engineering and Methodology 3, no. 1 (January 2, 1994): 3–28. https://doi.org/10.1145/174634.174637.)
This is:
Franz, Michael, and Thomas Kistler. “Slim Binaries.” Communications of the ACM 40, no. 12 (December 1, 1997): 87–94. https://doi.org/10.1145/265563.265576.
*hugs*
Infuriatingly, obnoxiously dismissive.
weird limitations of the web
Claims like these need supporting evidence. Are they really "weird limitations"? Or are they actually reasonable limitations—on developers' unending attempt to exercise what they feel as their manifest destiny to control the user's device and experience?
developers are hamstrung
So? To reiterate:
Fuck "developers".
In practice, we can't even get web apps to work on desktop and mobile, to the point where even sites like Wikipedia, the simplest and most obvious place where HTML should Just Work across devices, has a whole fricking separate subdomain for mobile.
This is a terrible example.
That is 100% a failing of the folks making those decisions at Wikipedia, not HTML.
I don't want to have to read documentation on how my SSG works (either my own docs or docs on some website) to remember the script to generate the updates, or worry about deploying changes, or fiddling with updates that break my scripts, or anything like that.
This flavor of fatigue (cf "JavaScript fatigue") is neither particular to nor an intrinsic consequence of static site generators. It's orthogonal and a product of the milieu.
being "followed" isn't always a good thing, it can create pressure to pander to your audience, the same thing that makes social media bad.
It can also create pressure to hold your tongue on things that you might otherwise be comfortable expressing were it not for the fact that there was available a meticulously indexed, reverse chronological stream of every sequence of words that you've authored and made ambiently available to the world.
This could be an episode of the sci-fi series Black Mirror, but
Mmm… half of what's described here is an episode of Black Mirror…
Figure 5.
The screenshot here (p. 33) reads:
Introduction
The intent of this paper is to describe a number of studies that we have conducted on the use of a personal computing system, one of which was located in a public junior high school in Palo Alto, California. In general, the purpose of these studies has been to test out our ideas on the design of a computer-based environment for storing, viewing, and changing information in the form of text, pictures, and sound. In particular, we have been looking at the basic information-related methods that should be immediately available to the user of such a system, and at the form of the user interface for accessing these methods. There will undoubtedly be users of a personal computer who will want to do something that can be, but not already done; so we have also been studying methods that allow the user to build his own tools, that is, to write computer programs.
We have adopted the viewpoint that each user is a learner and have approached our user studies with an emphasis on providing introductory materials for users at different levels of expertise. Our initial focus has been on educational applications in which young children are taught how to program and to use personalizable tools for developing ideas in art and music, as well as in science and mathematics.
It is titled "Smalltalk in the Classroom" and attributed to "Adele Goldberg and Alan Kay, Xerox Palo Alto Research Center, Learning Research Group". Searching around indicates that this title was re-used in SSL-77-2, but neither introduction in that report matches the text shown here.
Searching around for distinct phrases doesn't turn anything up. Is this a lost paper? Is there some Smalltalk image around from which this text can be extracted?
Nobody just says to do what you want. They always first demonstrate that they understand the standard arguments for tilting or not tilting, and then say to disregard them. That demonstration shows that they are above the other groups, not below them.
We all play the game we think we can do better at.
Is that actually true? Surely there are examples where people play the game that they're less suited for—where the decision is driven by desire?
There are a number of ways to become G, but usually you do it by adopting a complainer mindset. You wake up in a bad mood. You find little flaws in everything you see in the world, and focus on those.
I don't think that's right—
My life experiences during the (now bygone) Shirky era and the loose school associated with it really inculcated (in me, at least) the value of small, cumulative improvements contributed by folks on a wide scale. See something wrong? Fix it. Can't fix it (like, say, because you don't have the appropriate authorization)? File a bug so the people who can fix it know that it should be fixed. This matches exactly the description of seeing the "little flaws in everything you see in the world, and focus[ing] on those".
Looking at those flaws and thinking "this thing isn't as good as it could be" is a necessary first step for optimism. That belief and the belief in the possibility of getting it fixed is the optimist approach.
When I think of miserable people (and the ones who make me miserable), it's the ones who take the attitude of resignation that everything is shit and you shouldn't bother trying to change it because of how futile it is.
This is a new technology, people (read: businesses) want to take advantage of it. They are often ignorant, or simply too busy to learn to harness it themselves. Many of them will pay you to weave their way on the web. html programming is one of the most lucrative, and most facile consulting jobs in the computing industry. Setting up basic web sites is not at all hard to do, panicked businesses looking to build their toll lane on the infoway will pay you unprecedented piles of cash for no more than a day's labour.
You probably spew bullshit too. It’s just that you’re not famous, and so it may even be harder to be held accountable. Nobody writes essays poking the holes in some private belief you have.
The at-oddsness of the two things mentioned here—spewing bullshit and private beliefs that someone could correct you about—is hard to skip over.
most “experts”, in every industry, say some amount of bullshit
This is my experience working with other people, generally. Another way to put it besides people not "priortizing the truth" is that they just don't care—about many things, the truth being one of them.
Philosopher Harry Frankfurt described “bullshit” as “speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care whether what they say is true or false.”
I don't think this is a good definition of a liar. The bullshitter described here would be a liar if what they're saying is a lie.
context
Check out https://tinlizzie.org/VPRIPapers/m2004001_power.pdf
Alan Kay coined the term “Object-Oriented Programming”. This is absolutely true.
Are we sure?
This is:
Bobrow, Daniel G, and Bertram Raphael. “New Programming Languages for AI Research.” Palo Alto, California, August 20, 1973.
SMALLTALK (Kay 1973) and PLANNER73 (Hewitt 1973), both embody an interesting idea which extends the SIMULA (Ichbiah 1971) notion of classes, items that have both internal data and procedures. A user program can obtain an "answer" from an instance of a class by giving it the name of its request without knowing whether the requested information is data or is procedurally determined. Alan Kay has extended the idea by making such classes be the basis for a distributed interpreter in SMALLTALK, where each symbol interrogates its environment and context to determine how to respond. Hewitt has called a version of such extended classes actors, and has studied some of the theoretical implications of using actors (with built-in properties such as intention and resource allocation) as the basis for a programming formalism and language based on his earlier PLANNER work. While SMALLTALK and PLANNER73 are both not yet publicly available, the ideas may provide an interesting basis for thinking about programs. The major danger seems to be that too much may have been collapsed into one concept, with a resultant loss of clarity.
An early mention of Smalltalk.
message-oriented programming language
That's interesting. This paper is Copyright 1977 (dated June of that year) and is using the term "message-oriented" rather than "object-oriented".
Rochus maintains that Liskov and Jones are the originators of the term "object-oriented [programming] language".
Here's Goldberg in an early paper (that nonetheless postdates the 1976 paper by Liskov and Jones) writing at length about objects but calling Smalltalk "message-oriented" (in line with what Kay later said OO was really about).
We're at the point in humanity's development of computing infrastructure where the source code for program texts (e.g. a module definition) should be rich text and not just ASCII/UTF-8.
Forget that I said "rich text" for a moment and pretend that I was just narrowly talking about the inclusion of, say, graphical diagrams in source code comments.
Could we do this today? Answer: yes.
"Sure, you could define a format, but what should it look like? You're going to have to deal with lots of competing proposals for how to actually encode those documents, right?" Answer: no, not really. We have a ubiquitous, widely supported format that is capable of encoding this and more: HTML.
Now consider what else we could do with that power. Consider a TypeScript alternative that works not by inserting inline type annotations into the program text, but instead by encoding the type of a given identifier via the HTML class attribute.
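To make the class-attribute idea concrete, here is a minimal, entirely hypothetical sketch: a fragment of a module definition stored as HTML, with an invented type: convention in the class attribute, plus a trivial extractor that recovers both the plain program text and a type table. The convention and all the names here are made up for illustration, not a proposal for an actual format.

```python
# Hypothetical sketch: source code stored as HTML, with type
# information carried in the `class` attribute instead of inline
# annotations.  A small extractor recovers the plain program text
# (markup stripped) and the identifier->type table.
from html.parser import HTMLParser

SOURCE = '<code><span class="type:number">count</span> = 0</code>'

class TypedSource(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []          # plain program text, markup stripped
        self.types = {}         # identifier -> declared type
        self._pending = None    # type waiting for its identifier text

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "class" and value.startswith("type:"):
                self._pending = value[len("type:"):]

    def handle_data(self, data):
        self.text.append(data)
        if self._pending is not None:
            self.types[data.strip()] = self._pending
            self._pending = None

parser = TypedSource()
parser.feed(SOURCE)
print("".join(parser.text))   # the untyped program text: count = 0
print(parser.types)           # the recovered type table: {'count': 'number'}
```

The point of the sketch is that the type layer is additive: strip the markup and you still have a working program, which is exactly the property inline annotations lack.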
Now consider program parametrization where a module includes multiple options for the way that you use it, and you configure it as the programmer by opening up the module definition in your program editor, gesturing at the thing it is that you want to concretely specify, selecting one of those options, and have the program text for the module react accordingly—without erasing or severing the mechanism for configuration, so if another programmer wants to change the module parameters to satisfy some future need—or lift that module from your source tree and use it in another one for a completely different program—then they can reconfigure it with the same mechanism that you used.
I think that most of the complexity in software development is accidental.
I feel no strong urge to disagree, but on one matter I do want to raise a question: Is "accidental" even the right term? (To contrast against "essential complexity", that is.)
A substantial portion of the non-essential complexity in modern software development (granted: something that Brooks definitely wasn't and couldn't have been thinking about when he first came up with his turn of phrase in the 1980s) doesn't seem to be "accidental". It very much seems to be intentional.
So should we therefore* highlight the contrast between "incidental" vs "intentional" complexity?
* i.e. would it better serve us if we did?
One of the problems with building a jargon is that terms are vulnerable to losing their meaning, in a process of semantic diffusion - to use yet another potential addition to our jargon. Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition.
If a tool is developed to aid the production of software, its impact depends on the importance of the lifecycle phases it affects. Thus, a coding tool has the least impact while an evolution tool has the most impact.
Unless techniques to create software increase dramatically in productivity, the future of computing will be very large software systems barely being able to use a fraction of the computing power of extremely large computers.
In hindsight, this would be a better outcome than what we ended up with instead: very large software systems that, relative to their size—esp. when contrasted with the size of their (often more capable and featureful) forebears—accomplish very little, but demand the resources of extremely powerful computers.
Rust was initially a personal project by Graydon Hoare, which was adopted by Mozilla, his employer, and eventually became supported by the Rust Foundation
Considering this post is about Swift, describes Lattner's departure, etc., it would have been opportune to mention Graydon Hoare's departure re Mozilla/Rust and his subsequent work at Apple on Swift.
I made it obscenely clear that there was not going to be an RFC for the work I was talking about (“Pre-RFC” is the exact wording I used when talking to individuals involved with Rust Project Leadership)
"Pre-RFC" doesn't sound like there's "not going to be an RFC" for it. It rather sounds like the opposite.
I have never seen any form of creative generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience - if there’s no experience to share, why bother? If it’s not worth writing, it’s not worth reading.
I haven't been using LLMs much—my ChatGPT history is scant and filled with the evidence of months going by between the experiments I perform—but I recently had an excellent experience with ChatGPT helping to work out a response that I was incredibly pleased with.
It went like this: one commenter, let's label them Person A, posted some opinion, and then I, Person B, followed up with a concurrence. Person C then appeared and either totally misunderstood the entire premise and logical throughline in a way that left me at a loss for how to respond, or was intentionally subverting the fundamental, basic (though not unwritten) rules of conversation. This sort of thing happens every so often, and it breaks my brain, so I sought help from ChatGPT.
First I prompted ChatGPT to tell me what was wrong with Person C's comment. Without my saying so, it discerned exactly what the issue was, and described it correctly: "[…] the issue lies in missing the point of the second commenter's critique […]". I was impressed but still felt like pushing for more detail; the specific issue I was looking for ChatGPT to help explain was how Person C was breaking the conversation (and my brain) by violating Grice's maxims.
It didn't end up getting there on its own within the first few volleys, even with me pressing for more, so I eventually just outright said that "The third comment violates Grice's maxims". ChatGPT's response had the sycophancy (and emoji-punctuated bulleted statements) dialed a little too high, but it also went on to prove itself capable of being a useful tool in assisting with crafting a response about the matter at hand that I would not have been able to produce on my own and that came across as a lot more substantive than the prompts alone.
all sanitization is done client-side which means Cloudflare never sees the full contents of the session token
Then don't use the word "upload".
This URL is referenced from https://ignore.pl/2025/04/a_pair_of_examples_of_simple_interfaces.html
Traveller:
You are looking for https://ignore.pl/2022/01/playing_around_with_simple_interfaces.html.
Is my data kept private? Yes. Your chosen files will not leave your computer. The files will not be accessible by anyone else, the provided link only works for you, and nothing is transmitted over a network while loading it.
This is a perfect use for a mostly-single-file browser-based literate program.
Around the start of ski season this year, we talked about my plans to go skiing that weekend, and later that day he started seeing skiing-related ads. He thinks it's because his phone listened in on the conversation, but it could just as easily have been that it was spending more time near my phone
Or—get this—it was because despite the fact that he "hasn't been for several years", he used to "ski a lot", and it was the start of ski season.
You don't have to assume any sophisticated conspiracy of ad companies listening in through your device's microphone or location-awareness and user correlation. This is an outcome that could be effected by even the dumbest targeted advertising endeavor with shoddy not-even-up-to-date user data. (Indeed, those would be more likely to produce this outcome.)
Fucking stupid.
It is a tragicomic fact that our proper upbringing has become an ally of the secret police.
This is:
Engelbart, Douglas C. “Special Considerations of the Individual as User, Generator, and Retriever of Information.” American Documentation 12, no. 2 (1961): 121–25. https://dougengelbart.org/pubs/papers/scanned-original/1960-augment-133181-Special-Considerations-Individual-re-Information.pdf
Calvin Mooers
I had the Android Emulator on my laptop, which I used to install the application, add our insurance information, and figure out where to go. Just to be safe, I also ordered an Android phone to be delivered to me while I went to the hospital, where I used my iPhone's hotspot to set it up and show all the insurance information to the hospital staff.
If only there were some sort of highly accessible information system that was designed to make resources available from anywhere in the world without any higher requirement besides relatively simple and ubiquitous client software. Developers might then not be compelled to churn out bespoke programs that effectively lock up the data (and prevent it from being referenced, viewed, and kept within arm's reach of the people that are its intended consumers).
Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing?
This feels like a sharp left turn, given the way the post started out.
Overspecialization is the root of the problem.
Companies want generalists. This is actually reasonable and good. What's bad is that companies also believe that they need specialists (for things that don't actually require specialization).
“But I can keep doing things the way that I’ve been doing them. It worked fine. I don’t need React”. Of course you can. You can absolutely deviate from the way things are done everywhere in your fast-moving, money-burning startup. Just tell your boss that you’re available to teach the new hires
I keep seeing people make this move recently—this (implicit, in this case) claim that choosing React or not is a matter of what you could call costliness, and that React is the better option under those circumstances—that it's less costly than, say, vanilla JS.
No one has ever substantiated it, though. That's one thing.
The other thing is that, intuitively, I actually know* that the opposite is true—that React and the whole ecosystem around it is more costly. And indeed, perversely, the entire post in which the quoted passage is embedded is (seemingly unknowingly, i.e. unselfawarely) making the case against the position of React-as-the-low-cost-option.
* or "know", if the naked assertion raises your hackles otherwise, esp re a comment that immediately follows a complaint about others' lack of evidence
copywritten
The terms are "copyright" and "copyrighted".
I mean… come on
The one recent exception is “Why Can’t We Screenshot Frames From DRM-Protected Video on Apple Devices?”, which somehow escaped the shitlist and garnered 208 comments. These occasional exceptions to DF’s general shitlisting at HN have always made the whole thing more mysterious to me.
Geez. How clueless can a person be? (Or is feigned cluelessness also a deliberate part of the strategy for increasing engagement?)
The JSON Resource Descriptor (JRD) is a simple JSON object that describes a "resource" on the Internet, where a "resource" is any entity on the Internet that is identified via a URI or IRI. For example, a person's account URI (e.g., acct:bob@example.com) is a resource.
Wait, the account URI is a resource? Not the account itself (identified by URI)?
WebFinger (RFC 7033) is a REST-based web service
Nope.
This is one of those instances where you can do a search-and-replace on REST to substitute "HTTP" instead.
Given the constitutional underpinnings for copyright law as it exists in the US, this commenter's sense of the laws' spiritual intent isn't just off, it's actually in opposition to what the US is trying to effect.
The very small start-up effort is designed to allow small contributions
parse them out into smaller pieces
It's "parcel". Not "parse".
do better
Variable links are often found in online newspapers, for example, where links to top stories change from day to day. The click command can be used to click on a variable link by describing its location in the web page with a LAPIS text constraint pattern. For example:
http://www.salon.com/  # Start at Salon
click {Link after Image in Column3}
This is how RSS/Atom should be implemented in every feed reader, but, to my knowledge, never is. That is—
If the URL to the feed for alice.example.net can be found at <https://alice.example.net/feed.xml> and is available via feed autodiscovery, the user should not have to navigate to alice.example.net, obtain the URL to the feed, and then paste that into the feed reader. Rather, when I'm looking at my feed reader, I should be able to merely indicate that I want the feed for alice.example.net. The feed reader's job, then, should be to do BOTH of the following:
- resolve the actual feed URL via autodiscovery (here, /feed.xml), AND
- store in the user's data the site itself—"alice.example.net"—NOT merely the bare URL <https://alice.example.net/feed.xml>

(The URL obtained via autodiscovery can be cached.)
If ever the feed location changes, it should not require the user to then backtrack and go through the process again to obtain the feed URL. With the site alice.example.net itself being kept in the user data store, the feed reader can automate this process. A website could thereby update the URL for its feed whenever it wants—as often as every day, for example, or even multiple times a day—without it being a source of problems for the user.
To make it even better, the Atom and RSS specs can be extended so that if the user actually pastes in the feed URL <https://alice.example.net/feed.xml>, then that's fine—the specs should permit metadata to advertise that it is the alice.example.net feed (which can be verified by any reader), and the client can store that fact and need not bother the user about the overly-specific URL.
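The discovery step a reader would run is small. Here's a hedged sketch of it: `discoverFeed` is a hypothetical function name, and a real implementation should use an actual HTML parser rather than this illustrative regex over the site's front-page HTML.

```javascript
// Sketch of feed autodiscovery for a reader that stores the *site* as the
// canonical identifier and treats the feed URL as a cacheable derived value.
// NOTE: the regex is a stand-in for real HTML parsing; it only handles the
// common single-line <link> form.
function discoverFeed(site, html) {
  // Look for <link rel="alternate" type="application/atom+xml" href="…">
  // (or the rss+xml variant).
  const m = html.match(
    /<link[^>]*rel=["']alternate["'][^>]*type=["']application\/(?:atom|rss)\+xml["'][^>]*href=["']([^"']+)["']/i
  );
  if (!m) return null;
  // Resolve relative hrefs against the site root. The caller caches this
  // result, but keeps `site` itself in the user's data store.
  return new URL(m[1], `https://${site}/`).href;
}

console.log(discoverFeed(
  'alice.example.net',
  '<link rel="alternate" type="application/atom+xml" href="/feed.xml">'
));
// → https://alice.example.net/feed.xml
```

If the feed later moves, the reader just reruns this against the stored site and the stale cached URL is replaced without the user ever noticing.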
Neat. It's basically a browser with deep integration for Fastmail.
(Not that Fastmail is neat—it's evidently got a bunch of jerks in charge—but I wholeheartedly approve of user agents operating in their users' best interests, and this qualifies. The fact that this is a site-specific browser made with no involvement of the site operator? A++++)
Please note that if you plan on importing this file back into Hypothesis you will need to export to a JSON file.
I'm kind of bummed that HTML exports don't contain sufficient information to act as a substitute for the JSON.
To import annotations from a JSON file saved to your computer, select the “Import” button under the Sharing menu.
That's neat, too.
You can export any notes to which you have access in the Hypothesis apps.
Neat.
to build xz from sources, we need autoconf to generate the configure script
Should you, though?
(The author approaches things from a different direction, but it's worth considering this one, too.)
while git repositories can be altered
Huh? I think I know what this is supposed to mean, but the problem with it is that it's a different kind of nonsense.
The most challenging part of gathering evidence is organising it.
non-sequitur?
For others, many of whom are voracious readers, the compulsion to read manifests as an exercise in confirmation bias: collecting fragments of ideas that validate existing worldview.
This is always how I've valued songs and their lyrics.
I'm aware of it.
the algorithm
please, stop
If something has value, users should pay for it.
Forget "value". That's a lateral shift in focus, and therefore a subtle change in subject. This is about costs. So rework the thought:
If a "user"* is the cause of costs, then we should consider getting the user to pay for it—especially when the costs are big enough.
(The answer to this is probably resource-proportionate billing for the accumulated expenses.)
* or "consumer"
Every time you run your bundler, the JavaScript VM is seeing your bundler's code for the first time
And of course this fares poorly if the input ("your bundler's code") is low-quality.
It's important to make an apples-to-apples comparison, so you don't end up with the wrong takeaway, like, "Stuff written in JS is always going to be inherently as bad as e.g. Webpack," which is more or less the idea that this paragraph wants you to get behind.
It shouldn't be surprising that if you reject the way of a bad example, then you avoid the problems that would have naturally followed if you'd have gone ahead and done things the bad way.
Write JS that has the look of Java and the broad shape of Objective-C, and that feels as boring as Go (i.e. JS as it was intended; I'm never not surprised by NPM programmers' preference for writing their programs as if willingly engaged in a continual wrestling match against a language they clearly hate...)
Every time you run your bundler, the JavaScript VM is seeing your bundler's code for the first time without any optimization hints
This is really a failure of NodeJS & co. and is not something inherent to the essence of JS or any other programming language.
It makes sense for Web browsers to throw away everything and re-JIT every time, since the inputs have just streamed in over the network microseconds ago. It doesn't make sense for a runtime that exclusively runs programs that are installed locally and read from disk instead of over the network; the runtime could do one (or both) of the following:
- accept AOT-compiled inputs
- cache the results of the JITting
"with a text editor, with a web browser, standard stuff"