(Yes, my handwriting is atrocious, yes I can read it, yes I apologize to all my grade school teachers who gave me Cs in Penmanship. You tried.)
So crazy seeing this from an art school person.
throw new Error("panic!"); // XXX
This could be reduced to throw Error("panic!");. And nowadays I prefer to order the check for document ahead of the one for window, just because.
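For concreteness, here is a sketch of the kind of environment check being described, with the document check ordered ahead of the window one. This is illustrative only, not the original code:
// Minimal sketch: bail out early when not running in a browser-like environment,
// checking for document before window, per the preference described above.
if (typeof document == "undefined" || typeof window == "undefined") {
  throw Error("panic!"); // no DOM available; nothing to inject into
}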
Code for injecting that button and piggy-backing off the behavior of the BrowserSystem module follows.
Need to explain how this IIFE works, incl. the logic around events and readyState, etc.
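A minimal sketch of how such an IIFE typically works. The injectButton helper and its use of the BrowserSystem module are hypothetical stand-ins here, not the actual code:
// Immediately-invoked function expression: runs once, keeps its names out of global scope.
(function () {
  function init() {
    // piggy-back off BrowserSystem's behavior and inject the button (hypothetical helper)
    injectButton();
  }
  if (document.readyState == "loading") {
    // the DOM hasn't finished parsing yet; wait for it
    document.addEventListener("DOMContentLoaded", init);
  } else {
    // "interactive" or "complete": the DOM is already available
    init();
  }
})();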
Other elements used in this document include code, dfn, em, and p for denoting inline text comprising a snippet of code, a defined term that is not an abbreviation, inline text that should be emphasized, and a paragraph, respectively.
I failed to cover the use of ul and li tags.
as of this writing in 2021
As of today (and for some time before this), and at least as I recall, the status quo with Firefox has changed so monospace text uses the same size as other text, like in Chrome. I may be mistaken, though.
Note that the use of the text/plain+css media type here
NB: this should be "Note the use [...]"
between style tags and not in a script element
Note that I bungled the rule in the code block that precedes it, so it looks like it's in a hybrid style/script block. Spot the error:
body script[type="text/plain+css"]::before {
content: '\3Cstyle type="text/plain+css"\3E';
}
Additionally, with the old wiki, only registered users could edit the wiki. With the new docs, because it's in a repo on GitHub, anyone can contribute to the documentation
This is such a weird fuckin' sentence. It's framed as if it's going from narrow to wide-open, but it's actually the opposite.
wat
it asks for the street address of the lot. I have never seen this information printed on any parking lot in my life. it suggests several "nearby" options; they are actually half a mile away. unable to figure this conundrum out even for myself, i sigh and walk her through installing Park Mobile
Instead of opening Google Maps...?
I know a lot of people (mostly young men with minimal parental guidance) who were total idiots up to about 25
This reminds me of abfan1127's "I paid off $240,000 in 3 years by not being an idiot with my money. I got on a budget with my wife. We cut up our credit cards because we always double or triple spent our money."
This is a double whammy: at the time, it gets dismissed almost outright for the reason that, essentially, "everyone has an opinion", and then months or years later when it's evident that it did know better and that the official tack was flawed, it doesn't even get the acknowledgment that, yes, in fact that's the mindset that should have gotten buy-in.
I get frustrated whenever I have knowledge (specifically Web Platform knowledge) to solve a problem, but the abstraction prevents me from using my knowledge.
IF sym = ORS.ident THEN ORS.CopyId(modid); ORS.Get(sym); Texts.WriteString(W, modid); Texts.Append(Oberon.Log, W.buf) ELSE ORS.Mark("identifier expected") END ;
This "IF...ELSE Mark, END" region could be reduced by replacing the three lines corresponding to those control flow keywords with a single call to Check:
Check(ORS.ident, "identifier expected");
We really f'ed up the web didn't we?
I think I get what you're saying but I have some difficulty moving past the fact that you're claiming it doesn't need to be a website because it would be sufficient if it was a bunch of hosted markup documents that link to each other.
With Go, I can download any random code from at least 2018, and do this: go build and it just works. all the needed packages are automatically downloaded and built, and fast. same process for Rust and even Python to an extent. my understanding is C++ has never had a process like this, and its up to each developer to streamline this process on their own. if thats no longer the case, I am happy to hear it. I worked on C/C++ code for years, and at least 1/3 of my development time was wasted on tooling and build issues.
@1:14:37:
when you have a Dynabook and you have simulations and multidimensional things as your major way of storing knowledge, the last thing you want to do is print any of it out. Because you are destroying the multidimensionality and you can't run them
It is not unrealistic to foresee the costs of computation and memory plummeting by orders of magnitude, while the cost of human programmers increases. It will be cost effective to use large systems like ~. for every kind of programming, as long as they can provide significant increases in programmer power. Just as compilers have found their way into every application over the past twenty years, intelligent program-understanding systems may become a part of every reasonable computational environment in the next twenty.
A close-up photograph taken by DART just two seconds before the collision shows a similar number of boulders sitting on the asteroid’s surface — and of similar sizes and shapes
Where's that photograph available, and why isn't it either included or linked here?
This is probably a good place to comment on the difference between what we thought of as OOP-style and the superficial encapsulation called "abstract data types" that was just starting to be investigated in academic circles. Our early "LISP-pair" definition is an example of an abstract data type because it preserves the "field access" and "field rebinding" that is the hallmark of a data structure. Considerable work in the 60s was concerned with generalizing such structures [DSP *]. The "official" computer science world started to regard Simula as a possible vehicle for defining abstract data types (even by one of its inventors [Dahl 1970]), and it formed much of the later backbone of ADA. This led to the ubiquitous stack data-type example in hundreds of papers. To put it mildly, we were quite amazed at this, since to us, what Simula had whispered was something much stronger than simply reimplementing a weak and ad hoc idea. What I got from Simula was that you could now replace bindings and assignment with goals. The last thing you wanted any programmer to do is mess with internal state even if presented figuratively. Instead, the objects should be presented as sites of higher level behaviors more appropriate for use as dynamic components.
I struggle to say with confidence that I understand what Kay is talking about here.
What I glean from the last bit about goals—if I understand correctly—is something I've thought a lot about and struggled to articulate, but I wouldn't characterize it as "object-oriented"...
Computers can be effective tools for participating in the affairs of the world. They can also be used by the "experts" to erect barriers to participation.
Foreword
Preface
1 3 4 5 7 9 8 6 4 2
Why does the number 4 appear twice in the printer's key? Mistake?
Because this PDF does not include outline metadata, I have inserted jump points by highlighting the name of each chapter on the page where that chapter begins. These can be filtered by the "chapter heading" tag.
A Personal Narrative: Stanford
A Personal Narrative: Journey to Stanford
Writing Broadside
What We Do
Productivity: Is There a Silver Bullet?
The End of History and the Last Programming Language
Language Size
The Bead Game, Rugs, and Beauty
The Quality Without a Name
The Failure of Pattern Languages
Pattern Languages
Abstraction Descant
Habitability and Piecemeal Growth
Reuse Versus Compression
This is:
Gabriel, Richard P. Patterns of Software: Tales from the Software Community. New York: Oxford University Press, 1996. https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf
a 1985 broadcast of Computer Chronicles (13:50) on UNIX: As for the future of UNIX, he [Bill Joy] says its Open Source Code
That's not what she says (but of course you're already aware of this).
Compare:
The claim here is that she's using the latter meaning. She is not. It's the former.
A Note on the Confinement Problem by B.W. Lampson. In Communications of the ACM 16(10), October 1973
(This paragraph replaced a more complex one based on a helpful comment from stellalo on HN!)
@1:26:22
I wasn’t really thinking about this until sometime in the ’90s when I got an email from someone who said, “Can you tell me if this is the correct meaning of the Liskov substitution principle?” So that was the first time I had any idea that there was such a thing, that this name had developed.[...] I discovered there were lots and lots of people on the Internet having arguments about what the Liskov substitution principle meant.
@41:15
I used to feel kind of jealous of the electrical engineers because I thought, “At least they have these components and they connect them by wires, and that forces you to really focus on what those wires are.” Whereas software was so plastic that people used to design without thinking much about those wires, and so then they would end up with this huge mess of interconnections between the pieces of the program, and that was a big problem.
Our goal is not to argue about proper nouns
And yet you are arguing (instead of just fixing your mistake). Why?
You even went out of your way to change the post: it used to say "Codecov is now Open Source"[1]. In the time since, you have changed it so "Code is Now Open Source"[2].
This is notable for two reasons: it means that it's not outside the bounds of reasonableness to ask why the post hasn't changed since you've been confronted about the discontent, but it also raises questions about why you made that particular change in the first place. By a reasonable guess, I'd bet it has something to do with the fact that writing it as "Open Source" (rather than merely "open source") does real damage to any argument that the latter is generic and doesn't have any particular significance, thus allowing you to repudiate the OSI and the OSD. Which, of course, means that you guys are total fuckin' slimeballs, since you are now actively taking steps to cover your tracks.
$100,000 at Stanford
I'm pretty sure it's $125,000:
- https://web.archive.org/web/20150329000137/http://news.stanford.edu/news/2015/march/new-admits-finaid-032715.html
- https://125.stanford.edu/making-dreams-possible/
JS's birth and (slightly delayed) ascent began roughly contemporaneously with its namesake—Java. Java, too, has managed to go many places. In the HN comments responding to a recent look back at a 2009 article in IEEE Spectrum titled "Java’s Forgotten Forebear", user tapanjk writes: "Java is popular [because] it was the easiest language to start with" (https://news.ycombinator.com/item?id=18691584).

In the early 2000s in particular, this meant that you could expect to find tons of budding programmers adopting Java on university campuses, owing to Sun's intense campaign to market the language as a fixture in many schools' CS programs. Also around this time, you could expect its runtime—the JRE—to be already installed on upwards of 90% of prospective users' machines. This was true even when the systems running those machines were diverse.

There was a (not widely acknowledged) snag to this, though: as a programmer, you still had to download the authoring tools necessary for doing the development itself. So while the JRE's prevalence meant that it was probably already present on your own machine (in addition to those of your users), its SDK was not. The result is that Java had a non-zero initial setup cost for authoring even the most trivial program before you could get it up and running and put its results on display. Sidestepping this problem is where JS succeeded.
Fielding actually has a whole section in his dissertation about this (6.5.4.3 "Java versus JavaScript").
A JSON engineer attempting to meet Level 3 of the Richardson Maturity Model
I love the epithet used here: a "JSON engineer".
It can be amusing to see authors taking pains to describe recommended paths through their books, sometimes with the help of sophisticated traversal charts — as if readers ever paid any attention, and were not smart enough to map their own course. An author is permitted, however, to say in what spirit he has scheduled the different chapters, and what path he had in mind for what Umberto Eco calls the Model Reader — not to be confused with the real reader, also known as “you”, made of flesh, blood and tastes. The answer here is the simplest possible one. This book tells a story, and assumes that the Model Reader will follow that story from beginning to end, being however invited to avoid the more specialized sections marked as “skippable on first reading” and, if not mathematically inclined, to ignore a few mathematical developments also labeled explicitly.
Great attitude.
let's keep a universally understood specific compiler reference language so that all of us, no matter what our computer, can share what work we have done
You don’t necessarily know who or what server B blocks or doesn’t block. You may not even know that server C exists or who Adolf is. But all it takes is someone to put a post in server C’s eyeline and they can take it and keep it, and then ignore any and all requests to delete it. Meanwhile, Adolf and his friends Rudolf and Hermann can have a lovely little laugh at your expense in your replies… on their server.
Yes, and people can also get together at the nearest bar, bookstore, coffee shop, library, etc. and snicker while making fun of you over drinks... and there is absolutely no mechanism to stop them or to get around this.
They can also start a small club where they perform skits about how dumb they think you are and then start inviting other people to their twice-monthly Bloonface Is So Stupid get-togethers that are open to the public and write plays and put on stage productions about it. And there is absolutely no mechanism to stop them or to get around this.
it can start making API requests to your server, anonymously, to get your account information and any other public posts you have
If you're giving stuff out to anyone who asks for it, then you're giving stuff out to anyone who asks for it.
what is expected of them
Obnoxious application of this turn of phrase.
If your server closes down, and does not run the “self-destruct” command that tells all servers it has ever federated with to delete all record of it, its users and its posts, then they will stay on those servers indefinitely with no simple means of deleting them, or even knowing that they are there. And that’s assuming that the other servers would have honoured that deletion request anyway. Again, a bad actor doesn’t have to.
The fact that this strikes the writer as being notable means there's something crazy afoot wrt expectations.
If you send me an email to delete all record of your conversations, I can choose to honor it or not. If you send it to my email service provider, you'll have to (a) somehow convince them to do it, and (b) hope that I am relying on them to store my copies, so that in the event they do honor your asinine request, my access is actually severed because I haven't (read: my client hasn't) already e.g. fetched the material in question.
If you have any objection at all to your posts and profile information being potentially sucked up by Meta, Google, or literally any other bad actor you can think of, do not use the fediverse. Period. Even if your personal chosen bogey-man does not presently suck down every single thing you and your contacts post, absolutely nothing prevents them from doing so in the future, and they may well already be doing so, and there’s next to nothing you can do about it.
Compare: if you have any objection at all to your GeoCities pages being sucked up by Yahoo!, AltaVista, or literally any other bad actor you can think of, do not publish to the Web. Period. Absolutely nothing stops your personal chosen bogey-man from sucking down every single thing you post. They may well already be doing so, and there's next to nothing you can do about it.
a bad actor simply has to behave not in good faith and there is absolutely no mechanism to stop them or to get around this
So "bad actor" here means someone who asks for a copy of your stuff, and you send it to them, and then when you decide you don't want to have given it to them and so you tell them to please get rid of it, they say "no thanks, I'll keep it"?
So, immediately I was prevented from just doing the bare minimum I hope to expect from JS applications:
npm install
npm run
Good case study in the class of JS/Node category errors.
I tried precompiling the JavaScript code to QuickJS bytecode (to avoid parsing overhead), but that only saved about 40 milliseconds (I guess parsing is really fast!).
Alternative approach to consider: don't rebuild older posts. Transform them from Markdown into HTML in situ and then never worry about recompiling that post again. You could do this by either keeping both the Markdown source and the output document around, or by crafting the output in such a way that the compilation process is reversible—so you could delete the Markdown source after it's compiled and derive it from the output in the event that you ever wanted to recover it.
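A rough sketch of that "compile once, keep the output" idea, assuming a posts/ directory of Markdown sources, an out/ directory of generated HTML, and a render function standing in for the Markdown compiler; all of these names are hypothetical:
// Skip any post whose HTML output already exists; old posts are never rebuilt.
const fs = require("fs");
const path = require("path");

for (const name of fs.readdirSync("posts")) {
  if (!name.endsWith(".md")) continue;
  const outFile = path.join("out", name.replace(/\.md$/, ".html"));
  if (fs.existsSync(outFile)) continue; // already compiled in situ; leave it alone
  const markdown = fs.readFileSync(path.join("posts", name), "utf8");
  fs.writeFileSync(outFile, render(markdown)); // render() is a stand-in for the Markdown compiler
}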
The problem with this approach is that output files never specify their dependencies. Looking at it from the other direction, if I modify this post's Markdown file, the only change to Goldsmith's initial data model is the content of this Markdown file. The problem is that this one input file could impact numerous output files: the post itself, the Atom feed, any category/keyword index pages (especially if keywords are added or removed), the home page, and, of course, the archive page.
This is a good summary of the problems affecting static site generators (and program compilers) generally.
Has tons of native packages... but is it portable to Windows?
Note that jart has been doing a bunch of interesting stuff with truly cross-platform binaries in the form of Actually Portable Executables and has settled on embedding Lua.
Python for tools and scripts
JavaScript on the web
Nah. Use JS for your scripts, too. Python is far from "ubiquitous".
it's a shame because I really like C# and the .NET standard library
You can, by the way, target the design of the .NET APIs in your non-C# program and then just fill in your own re-implementation that works just well enough to service your application's needs. This strategy is way too undervalued.
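A small sketch of what that looks like in practice, assuming the application in question only ever touches, say, Path.Combine and Path.GetExtension from the .NET surface (names taken from System.IO.Path; the implementations below are only good enough for this imaginary app, not general-purpose):
// Hypothetical, minimal stand-ins for the two .NET Path APIs this imaginary app uses.
const Path = {
  Combine(...parts) {
    return parts.join("/").replace(/\/+/g, "/");
  },
  GetExtension(p) {
    const i = p.lastIndexOf(".");
    return i === -1 ? "" : p.slice(i);
  },
};

Path.Combine("assets", "img", "logo.png"); // "assets/img/logo.png"
Path.GetExtension("notes.md");             // ".md"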
C/C++ support cross-compiling through a painful process of setting up an entire compilation environment for the target.
That leaves C#
There's also Java which supports AOT compilation to native via Graal. I haven't done a comparison, but I would think it's similar in bloat to the experiments with C#.
Obviously, it's not fair to compare the compile-to-native languages to scripting languages
Sure it is! Particularly since those "scripting" languages also contain compile-to-native code generators of their own—they just execute at runtime rather than ahead of time. (Although, in the case of QuickJS, which was covered in an earlier episode of a related series[1], it permits you to use it just like ahead-of-time compilers.)
the Rust SDK is disappointingly heavy
Yes. The Rust team had the opportunity to fix more than one thing wrong with C++, one of them being very important to making hacking on Gecko more approachable—the heft of the build requirements—and they went the opposite direction. Major fumble which should have resulted in a vote of no confidence to anyone thinking clearly (i.e. not intoxicated by hype).
To put it succinctly: Rust blew it.
What about an old laptop?
Almost certainly, I'd bet.
Would a newer Raspberry Pi hit the sweet spot between performance and cheapness?
Almost certainly, I'd bet.
though I worry what will happen as the language continues to evolve beyond QuickJS's implementation
Just because the technical committee adds more stuff to the language spec, it doesn't mean you have to use those things. Whatever exists now will exist as a subset of the future version, so just stick with that. (You probably don't need the difference, anyway.)
I'll admit that md2blog hasn't been optimized. It always regenerates the entire site, and it's written in JavaScript. On my desktop computer, I can build my site in maybe 1 second. On the Raspberry Pi (admittedly using a much simpler JavaScript runtime), it took over 6 minutes!
This shouldn't be the case, "written in JavaScript" or not. I suspect it's rather a consequence of the dependencies. Large parts of Firefox circa 15 years ago were written in JS and ran on similar-ish hardware and executed in an interpreter rather than a JITting VM that should, when measured, work out to be less performant than QuickJS.
On the other hand, I am aware of the (original?) Pi's deficient floating point handling, which involves emulating floating point operations with software routines—and JS numbers are all, in theory, IEEE 754 doubles, but QuickJS is a capable runtime that should be smoothing over that wrinkle behind the scenes—and I have no idea if the Pi used here has the same limitation in the first place.
Navigating large C files in Vim was slow (frequently, I could see the screen re-drawing individual lines of text)
This seems odd, especially so since the experience using w3m was described as "pretty snappy". What's Vim doing wrong?
Installing packages and opening web pages in w3m was pretty snappy, but compiling QuickJS was a rude awakening. I typed make and then left to do something else when it became apparent that it might be a while. Later, I came back, and it was still compiling! With link-time optimization, it took almost half an hour to build QuickJS.
I wonder what the experience is like compiling QuickJS using TCC instead of GCC.
a factor of 10 did go into faster responses to the user’s actions
We've seen the opposite trend in the last 10 years or so.
a reusable component costs 3 to 5 times as much as a good module. The extra money pays for:
· Generality: A reusable module must meet the needs of a fairly wide range of ‘foreign’ clients, not just of people working on the same project. Figuring out what those needs are is hard, and designing an implementation that can meet them efficiently enough is often hard as well.
· Simplicity: Foreign clients must be able to understand the interface to a module fairly easily, or it’s no use to them. If it only needs to work in a single system, a complicated interface is all right, because the client has much more context.
· Customization: To make the module general enough, it probably must be customizable, either with some well-chosen parameters or with some kind of programmability, which often takes the form of a special-purpose programming language.
· Testing: Foreign clients have higher expectations for the quality of a module, and they use it in more different ways. The generality and customization must be tested as well.
· Documentation: Foreign clients need more documentation, since they can’t come over to your office.
· Stability: Foreign clients are not tied to the release cycle of a system. For them, a module’s behaviour must remain unchanged (or upward compatible) for years, probably for the lifetime of their system.
Regardless of whether a reusable component is a good investment, it’s nearly impossible to fund this kind of development.
a reusable component costs 3 to 5 times as much as a good module. The extra money pays for: Generality[...] Simplicity[...] Customization[...] Testing[...] Documentation[...] Stability[...] ¶ Regardless of whether a reusable component is a good investment, it’s nearly impossible to fund this kind of development.
In a true document-centered system, you start a spreadsheet by just putting in columns (e.g. with tabs)
This is part of a collection "Smalltalk-related technical papers and reports" https://www.computerhistory.org/collections/catalog/102739344
Linked to from here https://news.ycombinator.com/item?id=36833755
Some applications, like Microsoft Word, come with a sophisticated customization subsystem that allows users to change the menus and keyboard accelerators for commands. However, most applications still do not have such a facility because it is difficult to build.
Maybe part of the problem is that I'm grossly under-estimating the amount of work involved in "post an interesting technical article to it once or twice a year" for people who don't already spend a lot of their time writing.
Writing is one thing. Writing for a public audience (read: writing persuasively) is another thing. Case in point: "how much push-back this one [blog post] gets", which comes as a surprise to Simon, he says.
my advice is very much focused on "working for ambitious technology companies"
Good start at an ontology for different kinds of work? Samsung Austin Semiconductor, for example, does not fall within the class that Simon calls "ambitious technology companies", despite nominally being a "tech" company and ostensibly "ambitious".
This post presumes that a given candidate is looking for a career in show business. There's no good reason to make that logical leap.
Bank managers (or HR folks at tech companies for that matter..) don't seem to be getting told to curate the equivalent of a GitHub profile. (LinkedIn notwithstanding—Microsoft, stop trying to make "fetch" happen.) Why should a software engineer?
This is so weird; why isn't the Etherpad project wiki an Etherpad instance?
This is:
Wright, Alex. Cataloging the World: Paul Otlet and the Birth of the Information Age. New York, NY: Oxford University Press, 2014.
the conference ultimately resulted in a series of platitudinous proclamations (as conferences tend to do)
Nice turn of phrase—"platitudinous proclamations" (p. 88)
pulling some code that could be inline into a function allows someone to more easily replace that function
There are shades of OO ideology in this, unintentionally.
As anyone familiar with software development knows, the difficulty of adding new features or modifying existing ones grows very quickly, much faster than linearly, with the total number of features. They interfere with one another. By reducing the number of shipped features, we reduce the difficulty of modification. Anybody can do it (or have somebody do it for them). The more users we try to appease out of the box, the harder things become for those we haven’t served yet. A more rigorous analysis would attempt to model costs and benefits, do the math, etc. I’ll leave it at noting that the combination of the 80/20 rule and superlinear complexity growth means we probably aren’t amortizing as much effort as we would hope by adding every feature to a single code base.
Simple but non-obvious truth.
“It’s just not [sic] gonna work.”
I'm confused about the use of "[sic]" here.
Sure but if the job listings are saying “College Degree in something” applicants without a degree are likely to get rejected well before interviews because it is an easy filter for HR.
Why do we never try to address how obviously inadequate most who are hired into HR are? Filter those.
tech’s focus on prioritizing output over credentials
Is this even real? It feels to me, as someone outside the Bay Area, like something that is either a result of Bay Area parochialism (for lack of a better word) or a mistake: tech does prioritize "credentials"—they're just credentials in the form of résumé-driven data points related to tech stacks (e.g. React, Kubernetes, Docker...) and employment history, rather than academic credentials.
Crow and Dabars explain that most universities aspire towards offering a Michelin star experience to students, but what we actually need is a ‘fast casual’, Cheesecake Factory-like option that can provide an affordable, quality education to millions
I'm conflicted by this analogy, because I think that in a certain sense a fast-casual Cheesecake Factory for degrees is exactly what universities have turned themselves into. NB: that's "for degrees", not for "quality education" as nayafia relates here.
that does a disservice to what tech culture increasingly stands for today: a beacon for people who care about finding and spreading great ideas
Is this what the tech world achieves in practice?
you might remember ASU’s reputation as a party school for hot people. In 2002, Crow was appointed ASU’s 16th president, and he set about both developing and implementing a vision for reform. Today, ASU ranks as a top 100 research university worldwide [1], and has managed to do so while increasing affordable access to higher education. ASU is an example of what President Crow and his colleague, William Dabars, call the “New American University”, a model that they hope other public research universities will emulate. There are a bunch of interesting aspects to this model, but the most striking, in my view, has been to throw away the Ivy League playbook, rejecting the idea that a university’s prestige is defined by whom they exclude. Instead, ASU has significantly improved their rankings while accepting and graduating more students. Given the widespread critique of academia, especially within tech, I was surprised that, after asking around, none of my peers had come across ASU as a case study for reform, despite its reputation among university administrators. So I’m summarizing what I’ve learned about ASU here in hopes that it might help others learn
This raises an (un?)interesting issue: if ASU's ranking improves, but its reputation does not—because the news never reaches those who have internalized its status as a party school—what then? You can imagine someone returning to school now, picking ASU, and then showing up to a job interview or landing within a pool of coworkers or an after-work social event where attendance is dominated by B players who remain uninformed. It's not hard to imagine someone "losing" so many arguments on the basis that that person went to ASU, ASU is for dumb people, and ipso facto they are wrong.
So exclusion is the answer: we should seek to exclude the types of people for whom this strain of intellectually lazy argument is an attractive weapon, and ideally also exclude those who respond positively to its use.
This is not a well-written blog post.
(That isn't to say it contains no insights or that the insights are not true. Only that it is not well-written.)
a paper called the Open Data Model in 1998
Myers, Brad A. “The Case for an Open Data Model,” August 1998. http://reports-archive.adm.cs.cmu.edu/anon/1998/CMU-CS-98-153.pdf
If you look at your home, did you hire an interior designer to put everything in the right place in your house? Some people do but most people don't.
Important to note that what's also left out of this picture of software design being left up to designers is the staggering amount of software with positively bad UI and UX that people have forced on them all day in the form of e.g. awful enterprise software. The notion that the masses somehow need designers' (proper designers) involvement in software is just flatly contradicted by reality.
And on that note there's also the matter of choice—if you were to hire a designer who did something you didn't like, you'd get rid of them. Among all the software that people interact with every day, whether it's terrible enterprise junk or an iOS app designed by a self-anointed designer, there isn't one in which the user actually hired anyone to do the UI. The publisher/whomever just makes the thing and says, essentially, "here you go; take it or leave it". That's what's really on offer in a world made too favorable to designers and not malleable enough to users.
go sample from A gallery of interesting Jupyter Notebooks. Pick five. Try to run them. Try to install the stuff needed to run them. Weep in despair.
throwing the JavaScript world into temporary insanity
No. Throwing the NPM world into insanity. The author of the Guardian article got this right. (The slug is literally npm_left_pad_chaos.) There's no excuse for this sort of equivocation.
Of all the stuff in the world, JS is the one thing that's closest to having the robustness values desired in this blog post. TC-39's motto is literally "Don't break the Web".
our systems do not really support reproducibility, mostly due to software collapse issues as defined by Hinsen
So stop building within those systems (i.e. on crummy foundations).
The bedrock choice is possible, as demonstrated by the military and NASA, but it also dramatically limits innovation, so it’s probably not appropriate for research projects
I think this is too curt of a dismissal. The browser environment, while not suitable for computationally expensive, long-run experiments, is certainly adequate for a great many things that could be "solved" by targeting it, even though they currently are not. Things like converting a simple assembly listing into a 512-byte program.
(Compare: https://www.colbyrussell.com/LP/debut/plain.txt.htm.)
our culture and our institutions do not reward reproducibility
This is a real problem/the real problem.
Unless all systems are to collect their own data directly from each citizen (an appalling prospect),
This is an interesting remark from the vantage point of 50 years in the future.
I tried writing a serious-looking research paper about the bug and my proposed fix, but I lost a series of pitched battles against Pytorch and biblatex
you start by (for example) importing Tomcat
Worth pointing out that this (Tomcat) comprises a component in and of itself. It is because this component was available that this step can be dashed off as "[just] importing Tomcat"—rather than "write a program that binds to port 80 and returns a 404 for any resource requested".
It costs between ½ and 2 times as much to build a module with a clean interface that is well-designed for your system as to just write some code
I believe it, but I would have liked to have seen a reference for this claim.
For the most part, component libraries have been a failure, in spite of much talk and a number of attempts.
It would be nice to hear a 20-year retrospective from Lampson on this, in light of the creation and rise of e.g. npm and other npm-like "language package managers". (He doesn't really acknowledge this in his 2020 omnibus report Hints and Principles for Computer System Design.)
only a small fraction of the features of each component, and your program con-sumes 10 or 100 times the hardware resources of a fully custom program, butyou write 10% or 1% of the code you would have written 30 years ago.
You use only a small fraction of the features of each component, and your program consumes 10 or 100 times the hardware resources of a fully custom program, but you write 10% or 1% of the code you would have written 30 years ago.
This is:
Lampson, Butler W. “Software Components: Only the Giants Survive.” In Computer Systems: Theory, Technology, and Applications, edited by Andrew Herbert and Karen Spärck Jones, 137–45. Monographs in Computer Science. New York, NY: Springer, 2004. <doi:10.1007/0-387-21821-1_21>.
Broadly speaking, there are two kinds of software, precise and approximate, with the contrasting goals “Get it right” and “Get it soon and make it cool.”
A different take on the "pick two" quip ({on time, within budget, bug-free})
This is:
Lampson, Butler. “Hints and Principles for Computer System Design,” November 2020. https://www.microsoft.com/en-us/research/publication/hints-and-principles-for-computer-system-design-3/.
The trend here wouldn't be so bad if, in practice, public libraries hadn't become such hostile environments for the two classic use cases for libraries—that is, "places to: a) read quietly; b) study". It's unfortunate, then, that they have (at least in my experience).
We need to legally regulate remote attestation.
Hear, hear! This is a great response to anyone calling for regulation aligned with Google's Web Environment Integrity proposal (and similar forms of DRM): "Regulation? Okay, let's regulate you—so you cannot do this thing you're trying for."
It's like how copyleft was invented to allow the GPL to use copyright to work against the interests of those normally wielding it.
Google's new effort to DRM the web
Very concerning. "DRM the Web" is a very apt descriptor for what's at the end of that link. I seem to have missed the news.
It really looks like this isn't related to libxml2's strlen implementation but that you're hitting a quadratic performance problem caused by naive string concatenation.
Yes, it does look like that, which is a little frustrating/baffling already, but the bug is "xmlStrlen is 2–30 times slower than glibc strlen". That's a well-defined issue. Respond to that, not the context (which is out of scope). The bug is not, "help us speed up our application".
The Web needs to be studied and understood as a phenomenon but also as something to be engineered for future growth and capabilities.
I'd rather we focus for now on maximizing it to its current potential as a more convenient digital equivalent for pre-Web physical resources. Just getting people to embrace URLs—truly embrace them—is a big enough task itself. It's been 30+ years, and not even Web professionals and other (ostensibly smart) people on the periphery (e.g. techies) reliably get this stuff right. Indeed, they're often the biggest offenders...
However, in many of these courses, the Web itself is treated as a specific instantiation of more general principals. In other cases, the Web is treated primarily as a dynamic content mechanism that supports the social interactions among multiple browser users. Whether in CS studies or in information-school courses, the Web is often studied exclusively as the delivery vehicle for content, technical or social, rather than as an object of study in its own right.
I'd argue that this is a good thing. I think the tech industry's navelgazing does perhaps some of the worst harm wrt the problems articulated earlier.
If you look at CS curricula in most universities worldwide you will find “Web design” is taught as a service course, along with, perhaps, a course on Web scripting languages. You are unlikely to find a course that teaches Web architecture or protocols
Pretty fucked up, actually! It's true. And it's probably why software engineers (incl. e.g. CACM folks...) do such a poor job of managing their digital document depositories and generally fulfilling the potential of the Web as it was meant to be.
This is:
Hendler, James, Nigel Shadbolt, Wendy Hall, Tim Berners-Lee, and Daniel Weitzner. “Web Science: An Interdisciplinary Approach to Understanding the Web.” Communications of the ACM 51, no. 7 (July 1, 2008): 60–69. https://doi.org/10.1145/1364782.1364798.
Amazon.com, for example, grew into a huge online bookstore, then music store, then store for all kinds of goods because it had open, free access to the technical standards on which the Web operates. Amazon, like any other Web user, could use HTML, URI and HTTP without asking anyone’s permission and without having to pay.
Amazon doesn't, however, as TBL has already noted, make its A/V media available as first-class Web resources. (What's the URL for a recent blockbuster available for streaming through Amazon's Prime Video program?)
I originally called the naming scheme URI, for universal resource identifier
It was really "UDI" (Universal Document Identifier) even before that. See:
Berners-Lee, Tim, Jean-François Groff, and Robert Cailliau. “Universal Document Identifiers on the Network,” 1992.
Yet people seem to think the Web is some sort of piece of nature, and if it starts to wither, well, that’s just one of those unfortunate things we can’t help.
Also: computing generally.
my thesis on the form of the web browser and its social effects
This one:
Marco, Matthew Tangco. “The Form of the Web Browser and Its Social Effects.” PhD Thesis, Georgetown University, 2011. http://hdl.handle.net/10822/552929
Not to be confused with https://www.w3.org/History/1989/proposal.html
These indexes are maintained using existing FIND* software.
See also: https://www.w3.org/History/1990/WWW/FIND/Hyperizing_FIND.wn/
Sorry, Insufficient Access Privileges
Traveller:
You may benefit from this link, instead https://www.w3.org/History/1991/HTRejected.wn/WNDocument.wn
(Not sure if it's actually the same as what's supposed to be available here.)
It appears to be a "MacWrite II document" according to http://mark0.net/onlinetrid.aspx
Note that as of this writing, the site at that link now identifies the same file as "Word for the Macintosh document (v4.0)". I think the MacWrite II diagnosis is more plausible. No idea what changed.
Also, for those who for some reason prefer curly brackets over Python-style indenting, it is also possible to write:
Good and sensible.
This could* pretty trivially be converted into a triple script.
* and should be
it's like all right someone built an app that's pretty cool now let me go set up my ide let me go download all these packages what's the stack they're using
A good(?) case study in why these "papers" should be... papers (and not six-pagers* with directories of source code that exist "over there").
cf https://www.colbyrussell.com/LP/debut/
* 19 in this case
Chris (incl. his intuition) isn't wrong—static sites really do exist and are special. Consider: - files "served" from the local filesystem, i.e. what you get if you access a collection of resources saved to disk and opened in your browser - how much can be captured by a crawler (e.g. Wayback Machine) - bounded limits of computation: serving an HTML artifact (or some other static resource) is guaranteed to terminate
This would probably be less of an issue if making Hugo config changes was something I did more than a couple of times per year.
See also:
So, time to update the website, but the first wall I hit was that I:
- Forgot how my over-engineered SaaS was supposed to be used (no documentation because I built it myself and was lazy)
- Forgot how to follow the esoteric Hugo conventions (has documentation, but it's not easy to parse at a glance)
I was pretty annoyed with myself for having fallen for the trap of not documenting my own systems, but not sure how I could have remembered all of the Hugo-isms, especially since I don't update this site very often and don't do static site generator work outside of this.
https://web.archive.org/web/20210331182731/https://corytheboyd.com/posts/2020-03-09
(Previously: https://hypothes.is/a/U742AodQEeu5T2dEN4YdWQ)
Browser-based interfaces are slow, clumsy, and require you to be online just to use them
Nope (re: offline). You're confusing "browser-based" and "Web-based" (sort of the way people confuse "statically typed" versus "strongly typed"). They're different. You can have a fully offline browser-based interface. Most common browsers are every bit as amenable to being used as simple document viewers as Preview.app or Microsoft Word is. The browser's native file format is just a different sort—not DOCX, and not PDF (although lots of browsers can do PDF, too; you can't write apps in PDF, though—at least not in the subset supported by typical browsers). Instead of Office macros written in VBA, browsers support scripting offline documents in JS just like online documents. You can run your offline programs using the browser runtime, unless they're written to expect otherwise.
This is:
Wang, April Yi, Andrew Head, Ashley Ge Zhang, Steve Oney, and Christopher Brooks. “Colaroid: A Literate Programming Approach for Authoring Explorable Multi-Stage Tutorials.” In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–22. Hamburg Germany: ACM, 2023. https://doi.org/10.1145/3544548.3581525.
the order in which tutorial authors may want to explaintheir code often does not match the order of the code itself
This is an error/defect on the part of the author.
notebook-like interface
Colaroid’s unique approach to literate programming is to bring together the rich text editing affordances of notebooks together with automated creation of contextualized code snippets showing code differences, and close integration of the literate document into an IDE where code can be tinkered with.
This misses the point of LP—the true "fundamental theorem of LP" is basically that the compiler should be made to accept the preferred form.
their are a few configurations
Should be "there"
"<a href="http://vorlonjs.com/documentation/""
404 Web Site not found.
The server isn't redirecting requests at the apex domain to the www subdomain.
"Fixing" the RSS-vs-Atom problem: brand Atom as "RSS3" (sexy)—and make sure there are accommodations (read: plenty of RFC 2119-style MUSTs, MUST NOTs, etc) for podcasts while you're at it.
Set up the Standard Ebooks toolset
The "Standard Ebooks toolset" should just be... an ebook. That your browser can execute. (Because it's a ubiquitous runtime target that requires no setup...)
JavaScript developers tell me that disk space and network bandwidth are cheap these days. While they're mostly saying that to try and justify extremely bad architectural decisions (what's a couple thousand NPM dependencies between friends?)
Question: what even is the "NPM hypothesis"? I.e. what is the value proposition, stated in the form of a falsifiable claim?
cranies
Should be "crannies"
What if console services (i.e. services offered by the system software for hardware game consoles) offered an asset cache?
NEWS 2023-06-21: The GMP servers has been under serious load due to a barrage of clone requests from Microsoft/Github. Github's setup encourages "forks" of their projects, and such forks then by default pull in parent project changes. Around 2023-06-15, a project known as FFmpeg decided that it would be a great idea to clone GMP in their CI scripts, meaning that every one of their commits requested a compressed clone from the GMP servers. But, by Github's design, hundreds of FFmpeg forks automatically followed suit, themselves cloning the GMP repo. In effect, Microsoft's computer cloud performed a DDoS attack on the GMP servers. After bringing up the issue here and on the GMP mailing lists, some Github brass replied, minimizing the issue and blaming our servers for the denial-of-service attack. They did not do anything to stop the attack! In fact, it is still ongoing a week later. Our servers are fully available again, but that's the result of us adding all participating Microsoft network ranges to our firewall. We understand that we are far from the first project to take such measures against Github.
Notice the total lack of addressability of this piece of content (in re the submission "The GMP library's repository is under attack by a single GitHub user" at <https://news.ycombinator.com/item?id=36380325>)
picolog.blue is missing
I started to learn PHP. I am very happy with the language so far. I enjoy a lot how easy it is to deploy and update a page. If I want to update the text on a page, I can just edit the file, save it, and that's it. If I want to push the latest version of a site, I just copy the files to the server.
... because the server is configured like that.
(This is not a property of the language.)
In dynamic languages you can write programs that take programs and output other programs
Dynamic languages are not a prerequisite to this.
Good example of how when people are talking about a thing, they're often not really talking about it—they're saying something about themselves, their environment, how they do things, etc.
See also xkcd 1172: Workflow
only
This looks to be imagined.
The problem is that the Copyright Office, under color of authority ostensibly assigned to it by statute, requires libraries to misinform patrons about their rights.
This is the basis of the article. It would be nice if, in all the words here, it were actually substantiated.
why is a library required tell you otherwise? Why must libraries actively misinform their patrons about their actual rights under the law?
Okay, now this is the problem statement(s). I don't see that the notice is doing these things (telling someone otherwise or "actively misinform[ing] their patrons"). It looks like it's totally in step with what is actually true.
is it really the case that when you make a copy of an in-copyright document at Kinko’s, you have the full spectrum of fair use rights – but if you copy (or receive a copy of) the same document in a library your fair use rights are significantly more restricted?
Who—outside of this article—is saying anything like that?
the notice that libraries are required by law to provide you
I missed the part where the writer established that this was the case. What law are they referring to?
And if you’re someone who is fairly familiar with U.S. copyright law, and especially with the fair use doctrine, that notice may have led you to ask yourself the following question: “Why are my rights more constrained with regard to a copy made in the library than they would be if the copy were made anywhere else?”
wat
I always wonder how infinite money would upend my system of values.
You upend your values before attaining the near-infinite money (in order to attain it).
A detailed description in html format is included in the Tutorial directory.
This is exemplary and deserves to be more fashionable than it is in 2023.
Disroot: Requires nonfree JS to sign up
Remember: forms are just forms.
If they don't provide a form that is sufficiently "inert", then anyone (including FSF) can create their own.
Which of these two messages do you think is more effective? ChatGPT will lie to you Or ChatGPT doesn’t lie, lying is too human and implies intent. It hallucinates. Actually no, hallucination still implies human-like thought. It confabulates. That’s a term used in psychiatry to describe when someone replaces a gap in one’s memory by a falsification that one believes to be true—though of course these things don’t have human minds so even confabulation is unnecessarily anthropomorphic. I hope you’ve enjoyed this linguistic detour!
"ChatGPT will say things that aren't true" seems perfectly adequate. I don't understand why this dumb debate—over the definition of the word lie—is even happening (to the extent, if any, that it actually is...)
Every time this is mentioned, the person is heavily downvoted. But if you were here in the 2000, that's exactly what happened.
Not even close.
JS succeeded because it was well-designed, period. There were many, many contenders in the early days—despite popular historical revisionism to the contrary. All of them failures.
This is:
Hilse, Hans-Werner, and Jochen Kothe. Implementing Persistent Identifiers: Overview of Concepts, Guidelines and Recommendations. London: Consortium of European Research Libraries, 2006.
It would be even worse if no such error message appeared: a URL may also be unstable in that it now points to a different resource that has replaced the earlier one.
Since this is yet another piece of journalism that covers a set of cases before the courts without citing the cases in question by name, here they are:
- Students for Fair Admissions v. Harvard
- Students for Fair Admissions v. University of North Carolina
Seth pointed out this is useful if you are on Windows and don't have the gzip utility installed.
Language runtimes that don't ship by default on Windows have historically been harder to install than utilities like gzip in my experience.
I question whether lawyers should be looking for new technology, as opposed to learning how to use more effectively the technology already available to them.
You can swap "lawyers" here for just about anyone—including (or "especially"?) software developers.
I don't want "new". I want established, reliable, with plenty of support, plenty of good documentation. (Okay, so I also want Free/open source, and self-hostable, but I appreciate that I'm probably a significant outlier here.) "New" is a con, not a pro.
See also: cars.
The scoping of the this keyword
My last comment here: the this hate, along with the fetishization of === over ==, is one of the most overblown complaints (and arguably an outright incorrect stance) about language design when it comes to JS.
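One small illustration of why the == hate in particular is overblown (my example, not anything from the quoted post): the loose comparison against null is a deliberate, widely used idiom, because it matches both null and undefined in a single check:
function label(x) {
  if (x == null) return "(none)"; // true for both null and undefined, and nothing else
  return String(x);
}
label(undefined); // "(none)"
label(null);      // "(none)"
label(0);         // "0" -- 0 is not swallowed the way a bare falsiness check would swallow it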
different versions in different browsers and will never be upgraded
This has nothing to do with JS. It's not a language-level concern; the same would be true with any other language. Python, Ruby, Scheme, C#... pick literally any other language. It's a totally orthogonal issue.
JeagerMonkey, TraceMonkey
Weird. TraceMonkey and JägerMonkey are just codenames for different approaches to fast execution/evaluation of JS in SpiderMonkey. They're not holistic JS language implementations unto themselves in the vein of, say, CPython vs IronPython.
Fragmented runtimes While pretty much every single one of these problems could be fixed by either extending the specific runtime you’re using or by ECMA releasing a new standard, it’s just not that simple. Assuming you extend the runtime you’re using to allow for things like native/fast arrays and you define a module/namespace solution like node.js has done, now any code you write will only run on that specific runtime with your extensions which punches a big hole in the “same language everywhere”-argument that you hear every day on twitter (and it really is server-side JavaScript’s only claim to fame).
In other words, the "problem" is the opposite of "fragmented runtimes"—it's that JS runtimes are so strictly committed to compatibility at the language level (to a degree not seen in any other widely deployed tech that I can think of). There are clear downsides (such as the sentiment expressed here), but there are also massive upsides. On net, the alignment is a huge win.
In case you didn’t know, these two functions are not identical:
(function bar() { })
function foo() { }
This is true, but also pretty huge non-issue. The only way it could possibly be an issue is for someone who just objects to the notion of function-valued expressions generally (fans of Pascal, maybe? I dunno) or someone who against all good sense believes that a pair of parens should result in the introduction of a new scope (weird).
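For anyone who hasn't seen the difference spelled out, here is my own quick illustration (not the article's) of what actually distinguishes the two forms:
foo(); // works: a function *declaration* is hoisted and binds foo in the enclosing scope
function foo() { }

(function bar() { }); // a named function *expression*: bar is visible only inside the function itself
typeof bar; // "undefined" -- no binding named bar was created out here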
why would you even have two different types of ‘nil’ to begin with?
You already know why: the semipredicate problem. (In other words, for a similar reason why you might want a number type to be nullable even though you already have 0.)
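A small sketch of the semipredicate point, using Map.get as my own example: undefined signals "no entry at all", while null stays available as an ordinary stored value.
const cache = new Map([["found", null]]); // null was deliberately stored as the value
cache.get("found");   // null      -> an entry exists, and its value is null
cache.get("missing"); // undefined -> no entry exists at all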
Being able to concatenate two comma-seperated strings without a comma in the middle is such a common use case. I'm glad the JS develpoers thought to optimise for the most common use case.
This criticism is not just ignorant. It's stupid.
A passive failure to make a special case ("optimise") for array operands in binary expressions with + is not the same thing as actively optimizing for "two comma-seperated[sic] strings without a comma in the middle".
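To spell out what actually happens (rather than the "optimised for" framing): + has no special case for arrays, so each operand falls back to string conversion via Array.prototype.toString, and the two strings are concatenated. A quick illustration:
[1, 2] + [3, 4];                  // "1,23,4"
String([1, 2]) + String([3, 4]);  // same thing: "1,2" + "3,4"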
the web is always stuck in the present
Mmm... no. It often is—to the world's peril—but it's not the architecture that makes this so. It is a choice of node operators to return some resource X when a given identifier is referenced on day d[0] and to return resource Y on day d[1].
linking to the site virtually impossible
On the contrary. It is the lack of stability that makes (reliably) linking virtually impossible. How do stable links make linking more difficult...?
daily news sites, will generate many new resources
They already are.
The former option will preserve all content, but will lead to an explosion of new resources, each with its own distinct identifier.
It doesn't. It leads to no more (or fewer) resources than otherwise. It does lead to an "explosion" of identifiers.
Think about the silliness of casting aspersion on the notion of "[many] resources, each with its own distinct identifier". The fundamental premise of the Web is that this is a good thing; if you have a slew of resources, you should be able to (unambiguously) refer to them by identifier.
Referential integrity can ensure that links always reference the same resource, but what happens if the content contained within the resource changes?
Response: the question is malformed. The problem lies in the belief that it's possible for content to change. It can't. If it appears to, then it is only because the server is returning a different resource than whatever was previously available. Don't reuse identifiers for different resources. Mint a new identifier.
pointing at the correct IP address, but having the server not recognize the Host header
not an example of something gone wrong with DNS
They use canonical text. The metadata of the note is embedded in the content of the note, meaning things like tags and links have to be extracted out. But they’re structured data and should be modeled as distinct from the content. I’d want my notetaking app to use a relational database, not a bag of textfiles.
I don't understand this criticism at all.
There are a lot of papers cited in this presentation that aren't cited explicitly and in detail in the associated "transcript": https://web.archive.org/web/19980215203333/http://www.parc.xerox.com/spl/projects/oi/towards-talk/transcript.html
(E.g. Wirth "On the Design of Programming Languages" @25:00.)
It would be nice/useful to go through the video, grab the moments where these are referenced (incl. screenshots of the slides), and collect them into an orderly works cited rider doc.
Hey, traveler. Try this instead:
Not sure why this is detached from the thread it belongs to, but it appears to be this one:
nobody can even tell what skills will be needed in the job market a few decades into the future
By a reasonable guess, no matter what the actual (vs stated) requirements are, they'll be overstated.
and people can work only so many hours a day (and many prefer to work less if they can)
Do they? (This is almost certainly a coordination problem, right?)
Current American foundations for the most part are not really geared up to be supporting or encouraging 13- to 19-year-olds. That is where a lot of the low-hanging fruit is.
the best policy for science is just more immigration into the United States. High skilled maybe, but just flat out more immigration.
Isn't that just drawing from another exhaustible resource? Think end game. Don't we converge on an equilibrium where we are no longer able to exploit the outsized gains?
Of course, the personal computer and its cousin, the smartphone, have brought about some big changes. And many goods and services are now more plentiful and of better quality. But compared with what my grandmother witnessed, the basic accoutrements of life have remained broadly the same.
Related: see the "web of documents" tag on Hypothesis.
you can inspect as source and learn something or understand how it is constructed
Again: we're way past that—and it's not broadly accessible simulated word processors that got us there. (It's actually Web developers themselves.)
an increasing abandonment of creating web content as the kind of web content I know and love
I think that's a good thing, for the reasons described here: https://hypothes.is/a/TF6N6AxHEe6qM_dG0VlZzw. We should stop treating the Web as a sui generis medium and elevate it to the same status as material that appears as written word made for print.
Anything radical published on Google Docs is published in spite of Google Docs.
I think the key here is "[...] published on Google Docs". Consider instead: "Something radical can be prepared with Google Docs". There's nothing stopping anyone from using Google Docs in the same way that traditional desktop word processors are used—by working on your stuff and then saving a copy to your own disk (using Google Docs's "File" menu). Pick your own format—DOCX, ODT, etc., but note that HTML mostly works quite well...
In fact, it being on your own disk is beside the point; you can put the thing anywhere—it doesn't have to stay "on" (read: in) Google Docs. Put it online if you want. (I refer again to https://crussell.ichi.city/food/inauguration/.)
Google Doc's main goal is to make money off of companies whose main goal is to make money.
I'm not entirely sure that's true. I'd wager that's incidental; the main goal seems to be to give people access to a word processor in the form of a Web app, and the fact that they've figured out a way to make money on it is... not coincidental, but certainly "serendipitous" or "fortunate" (in the way that it's fortunate to be able to make money off any kind of software created for some other reason than to make money).
Axiom 5: A doc is a distinct, shareable object
I dislike the use of "doc" as nomenclature here. "Document" works perfectly fine.
Axiom 4: If you build a tool with the ability to publish, so help them god, people will publish
See also: Zawinski's Law.
The writing here is super obscure. I don't like it.
The soft power of Google Doc publishing
See also:
Google Docs is one of the best ways to make content to put on the Web.