3,138 Matching Annotations
  1. Apr 2023
    1. Ondaatje, Michael 1992. The English Patient. Vintage International, New York
    2. Mockapetris, P.V. 1987b. "Domain Names: Concepts and Facilities," RFC 1035
    3. Mockapetris, P.V. 1987a. "Domain Names: Concepts and Facilities," RFC 1034
    4. Malone, Thomas W., Grant, Kenneth R., Lai, Kum-Yew, Rao, Ramana, and Rosenblitt, David 1987. "Semistructured Messages are Surprisingly Useful for Computer-Supported Coordination." ACM Transactions on Office Information Systems, 5, 2, pp. 115-131.
    5. Malone, Thomas W., Yu, Keh-Chaing, Lee, Jintae 1989. What Good are Semistructured Objects? Adding Semiformal Structure to Hypertext. Center for Coordination Science Technical Report #102. M.I.T. Sloan School of Management, Cambridge, MA
    6. Gilbert, Martin 1991. Churchill: A Life. Henry Holt & Company, New York, page 595
    7. There are a few obvious objections to this mechanism. The most serious objection is that duplicate information must be maintained consistently in two places. For example, if the conference organizers decide to change the abstracts deadline from 10 August to 15 August, they'll have to make that change both in the META element in the HEAD and in some human-readable area of the BODY.

      Microdata addresses this.

    1. With the Richer Install UI web developers have a new opportunity to give their users specific context about their app at install time. This UI is available for mobile from Chrome 94 and for desktop from Chrome 108. While Chrome will continue to offer the simple install dialogs for installable apps, this bigger UI gives developers space to highlight their web app.

      K, but the installation comes from a context that the app vendor already controls—the entire surface of their own site—so...

      Even so, do what you want with your browser UI, but like... there are crazy levels of navel-gazing here.

    1. not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage.

      This turn of phrase always struck me as confusing. (Still does, actually.) Maybe I lack the cultural context where "free [as in] beer" is illuminating rather than confounding. It raises questions, though—what cultural context is the one where this is a logical and widely understood sequence of words?

      Complimentary beer nuts, sure, but the beer isn't free—that's why the nuts are: because they're trying to sell more beer.

      When and where was "free [as in] beer" ever a thing?

    1. This is:

      Caplan, Priscilla. Support for Digital Formats. Library Technology Reports 44, 19–21 (2008). https://journals.ala.org/index.php/ltr/article/view/4227

    2. Adrian White

      Did Adrian White become Adrian Brown? That's what the byline in DPTP-01 actually says.

    1. The present leadership, particularly from RMS, creates an exclusionary environment in a place where inclusion and representation are important for the success of the movement.

      Does this mean that Drew is going to step back, then? He is after all yet another white guy himself who has few (or none) of the characteristics described, so...

      The term "virtue signalling" was probably played out 5 years ago, but geez—it's hard to see this as anything except an opportunistic version of exactly that. Like greenwashing being wielded by people with ulterior motives, this seems like a straightforward case of an intellectually bankrupt attempt to conspicuously dress up one's argument in the most politically unimpeachable cause and claim that it comes from a place of wanting the best for society when ultimately a self-serving motive underlies the thing. This is very cheap and very tacky.

    2. His polemic rhetoric rivals even my own, and the demographics he represents – to the exclusion of all others – is becoming a minority within the free software movement. We need more leaders of color, women, LGBTQ representation, and others besides. The present leadership, particularly from RMS, creates an exclusionary environment in a place where inclusion and representation are important for the success of the movement.

      I'm not a vanguard for the FSF per se, but when I think about the community norms and attitudes that are most exclusionary and turn people away, it's the sort of stuff that Drew and his fans are most often associated with. Stallman, at least, views Emacs as something that "secretaries" can be taught. Drew's circle tends to come across as having superiority complexes and holding strong opinions about computing that stop them just short of calling you a little bitch for not being as hardcore as they are...

    3. hip new software isn’t using copyleft: over 1 million npm packages use a permissive license while fewer than 20,000 use the GPL

      I didn't realize Mr. DeVault was such an admirer of NPM.

    4. Many people assume that the MIT license is not free software because it’s not viral.

      Their fault, really. What does the FSF have to do, and how much and how often do they have to do it, to make clear that this isn't their position? The culprit is right there in the sentence: the word "assume". It's not unforgivable to not be certain, but the number of people I've interacted with who insist that this is the FSF's position are exasperating.

    1. As far as I can tell, Google Takeout lists every Google service that stores data of some kind

      Not Google Podcasts.

    1. const EACH$ = ((x) => (this.each(x))); const SAFE$ = ((x) => (this.escape(x))); const HTML$ = ((x) => (x));

      In my port of judell's slideshow tool, I made these built-ins. (They're bindings that are created in the ContentStereotype implementation.)

      In that app, the stereotype body is just a return statement. Perhaps the ContentStereotype implementation should introspect on the source parameter and check whether it's an expression or a statement sequence. The rule should be that iff the first character is an open paren, then it's an expression—so there is no need for an explicit return, nor the escaped backtick...

      This still gives the flexibility of introducing other bindings—like the ones for _CSSH_ and _CSSI_ here—but doesn't penalize people who don't need it.
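      A minimal sketch of that introspection rule, assuming a hypothetical compileStereotypeBody helper (the name, the binding parameters, and the use of the Function constructor are all illustrative, not the real ContentStereotype API):

      ```javascript
      // If the body's first non-whitespace character is an open paren, treat it
      // as a single expression and synthesize the return; otherwise treat it as
      // a statement sequence and leave it alone.
      function compileStereotypeBody(source) {
        const trimmed = source.trimStart();
        const body = trimmed.startsWith("(")
          ? `return ${trimmed};` // expression form: implicit return
          : trimmed;             // statement form: author writes their own return
        // EACH$/SAFE$/HTML$ stand in for the built-in bindings mentioned above.
        return new Function("EACH$", "SAFE$", "HTML$", body);
      }
      ```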

    1. Identifiers are an area where the needs of libraries and publishing are not well supported by the commercial development
    2. Handles have one serious disadvantage. Effective use requires the user's Web browser to incorporate special software. CNRI provides this software, but digital libraries have been reluctant to require their users to install it.
    1. amd [sic.]

      I'm having trouble determining the source of this purported error. This PDF appears to have copied the content from the version published on kurzweilai.net, which includes the same "erratum". Meanwhile, however, this document which looks like it could plausibly be a scan of the original contains no such error: https://documents.theblackvault.com/documents/dod/readingroom/16a/977.pdf

      I wonder if someone transcribed the memo with this "amd" error and that copy was widely distributed (e.g. during the BBS era?) and then someone came across that copy and inserted the "[sic]" adornments.

    1. The system also includes a searchable online database that will give a buyer instant information. "DOI will also provide a national directory of who owns what online," said Burns. "The system will give permissions, list rights fees and provide other articles by the same author and instantly put the buyer in contact with the publisher."

      A subset of Ted Nelson's envisioned transcopyright system

    1. we reported evidence indicating that static type sys-tems acted as a form of implicit documentation

      wat

      There's nothing implicit about it. Type annotations are explicit.

    1. Would I want to keep URLs of such draft/work-in-progress files stable, shall they be first-class citizens of the site, should they be indexed, how would I indicate freshness/state etc.?
    2. I've started thinking in the direction of serving on-going writing in a separate folder as raw plain text. That would be quite frictionless
    1. The end-user thinks, "Ah, it was only a dollar, I got my money's worth," but the publisher has basically paid nothing for the work, adds a few hours of digital typesetting, and then makes 100% profit on the sale.

      I have real trouble seeing that as saddening.

      It isn't as if anyone is going around making the free version arbitrarily defective. The reseller is putting in work to add value and getting paid a buck for it (literally).

      It would perhaps be upsetting, too, if they were going after folks somehow. But I don't see this in the rendition above.

      (In reality, a buyer would probably be fine if they took the Project Gutenberg version, bought the reseller's digitally reformatted one, extracted the TOC data and error corrections, and then slapped that onto the free version and sent it back upstream to Project Gutenberg or someone else who is distributing free copies. They would be legally in the clear, and the reseller then stands to make as little as $1 for their investment in that work; in that case it seems eminently fairly priced.)

    1. Some filesystems (like ext2 specifically) complain if you have more than ~65k subdirectories in a directory, so my original plan of having tweets live at /{username}/status/{id}/index.html (and resolved to /{username}/status/{id}/) doesn't work on those filesystems. Instead all the files live at /{username}/status/{id}.html

      I'm not sure how this solves the problem specifically, since there will still be thousands of entries (one for each tweet) in the status/ directory...

      (Unless I'm not grasping something and the problem truly is the matter of having 2^16 subdirectories in particular—without similar concerns for ordinary files.)

      That does raise questions about how someone would run into the original problem in the first place; vanilla ZIP has a fundamental limitation of 2^16 - 1 total files. Is Twitter using the ZIP64 extension?
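      For reference, the 2^16 - 1 ceiling comes from the 16-bit "total entries" field in the end of central directory record; an archive that needs more stores 0xFFFF there and puts the real count in a ZIP64 record. A sketch of reading that field (offsets per APPNOTE.TXT section 4.3.16; the function names are mine):

      ```javascript
      // eocd is a Uint8Array positioned at the EOCD signature (0x50 0x4b 0x05 0x06).
      // Bytes 10-11 hold the total entry count, little-endian.
      function totalEntries(eocd) {
        return eocd[10] | (eocd[11] << 8);
      }

      // 0xFFFF is the sentinel meaning "see the ZIP64 end of central directory".
      function needsZip64(eocd) {
        return totalEntries(eocd) === 0xffff;
      }
      ```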

    2. This won't work if your archive is "too big". This varies by browser, but if your zip file is over 2GB it might not work. If you are having trouble with this (it gets stuck on the "starting..." message), you could consider: unzipping locally, moving the /data/tweets_media directory somewhere else, rezipping (making sure that /data directory is on the zip root), putting the new, smaller zip into this thing, getting the resulting zip, and then re-adding the /data/tweets_media directory (it needs to live at "[username]/tweets_media" in the resulting archive). Unfortunately, this will include media for your retweets (but nothing private) so it'll take up a ton of disk space. I am sorry this isn't easier, it's a browser limitation on file sizes.

      Contra [1], the ZIP format was brilliantly designed and natively supports a solution to this; ZIP was conceived with the goal of operating under the constraint that an archive might need to span multiple volumes. So just use that.

      1. https://games.greggman.com/game/zip-rant/
    1. a printed book containing the 10000 best internet URLs

      The book is: Der große Report - Die besten Internetadressen. 2000. Data Becker.

    2. A few of the entries are pretty straightforward because I'm sure they'll be around for a long time and they're obviously important: Wikipedia and the Internet Archive.

      The context is 10,000 URLs, not "sites". The URL for "Wikipedia" leads to a document that is on its own not entirely interesting. It would be the URLs for individual articles that should make the cut, unless "URL" is being used as a euphemism here.

    3. something so ephemeral as a URL

      Well, they're not supposed to be ephemeral. They're supposed to be as durable as the title of whatever book you're talking about.

    1. The homepage is the most recent post which means you don't have to figure out if I posted something new since the last time you visited and I truly believe that is how a personal blog is supposed to be.

      That goes against the design of URLs and also confused/annoyed me when I first landed on this blog, so...

    1. I am extremely gentle by nature. In high school, a teacher didn’t believe I’d read a book because it looked so new. The binding was still tight.

      I see this a lot—and it seems like it's a lot more prevalent than it used to be—reasoning from a proxy. Like trying to suss out how competent someone is in your shared field by looking at their GitHub profile, instead of just asking them questions about it (e.g. the JVM). If X is the thing you want to know about, then don't look at Y and draw conclusions that way. (See also: the X/Y problem.) There's no need to approach things in a roundabout, inefficient, error-prone manner, so don't bother trying unless you have to.

    1. Apple pointed out that this is apparently allowed by the spec, and that it was faulty feature detection on our part. Looking at the relevant spec, I still can't say, as a web developer rather than a browser maker, that it's obvious that it's allowed.

      C'mon. It's right there:

      Follow the instructions given in the WebGL specifications' Context Creation sections to obtain a WebGLRenderingContext, WebGL2RenderingContext, or null; if the returned value is null, then return null;

      (Not that it should even be necessary to resort to checking the spec—relying on an assumption of a non-null return value here should raise the commonsense suspicions of anyone.)
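      A minimal sketch of the defensive check (the function name is mine; getContext is allowed to return null here, per the spec text quoted above):

      ```javascript
      // Feature-detect WebGL-capable OffscreenCanvas rather than assuming that
      // the constructor's existence implies a usable context.
      function supportsOffscreenWebGL() {
        if (typeof OffscreenCanvas === "undefined") return false;
        const canvas = new OffscreenCanvas(1, 1);
        const gl = canvas.getContext("webgl");
        return gl !== null; // null is a legal return value, so handle it
      }
      ```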

    2. In the end, they added a special browser quirk that detects our engine and disables OffscreenCanvas. This does avoid the compatibility disaster for us. But tough luck to anyone else

      I agree that this approach is bad. I hate that this exists. The difference between doctype-triggered standards and quirks modes was bad enough. This is so much worse—and impacts you even when you're in ostensible standards mode.

    3. I tried my best to persuade Apple to delay it, but I only got still-fairly-vague wording around it being likely to ship as it was.

      Huh? Why? Why even waste the time? Just go fix your code.

    4. preserves web compatibility

      "... you keep using that word"

    5. Safari is shipping OffscreenCanvas 4 years and 6 months after Chrome shipped full support for it, but in a way that breaks lots of content

      I don't think that has been shown here? The zip.js stuff breaking is one thing, but the poor error detection regarding off-screen canvas doesn't ipso facto look like part of a larger pattern.

    6. doesn't Apple care about web compatibility? Why not delay OffscreenCanvas

      Answer: because they care about Web compatibility. If they delay X because Y is not ready, then that's ΔT where their browser remains incompatible with the rest of the world, even though it doesn't have to be.

    7. Firstly my understanding of the purpose of specs was to preserve web compatibility - indeed the HTML Design Principles say Support Existing Content. For example when the new Array flatten method name was found to break websites, the spec was changed to rename it to flat so it didn't break things. That demonstrates how the spec reflects the reality of the web, rather than being a justification to break it. So my preferred solution here would be to update the spec to state that HTML canvas and OffscreenCanvas should support the same contexts. It avoids the web compatibility problem we faced (and possibly faced by others), and also seems more consistent anyway. Safari should then delay shipping OffscreenCanvas until it supported WebGL, and then all the affected web content keeps working.

      This is a huge reach.

      Although it's debatable whether having mismatched support is a good idea for a vendor, arguing that it breaks the commitment to compatibility is off. Construct broke not because something was removed, but because something was added and your code did not handle that well.

    8. MDN documentation mentioned nothing about inconsistent availability of contexts

      Two things:

      * Why would it have mentioned anything? It wouldn't have. It hadn't shipped yet.
      * MDN is not prescriptive; it's written by volunteers.

    9. typeof OffscreenCanvas !== "undefined"

      The second = sign is superfluous here. typeof always evaluates to a string, so a loose != comparison against "undefined" behaves identically; only one = is necessary.
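      A quick demonstration (nothing here depends on OffscreenCanvas actually existing):

      ```javascript
      // typeof yields a string for any operand, even an undeclared identifier,
      // so loose and strict comparison against "undefined" always agree.
      const loose = typeof OffscreenCanvas != "undefined";
      const strict = typeof OffscreenCanvas !== "undefined";
      // loose === strict whether or not OffscreenCanvas is defined.
      ```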

    10. Construct requires WebGL for rendering. So it was seeing OffscreenCanvas apparently supported, creating a worker, creating OffscreenCanvas, then getting null for a WebGL context, at which point it fails and the user is left with a blank screen. This is in fact the biggest problem of all.

      Well, the biggest problem is that anything can ever lead to a blank screen because Construct isn't doing simple error detection.

    1. In a resume-first hiring process, your resume is at best a raffle ticket that might pay off and grant you admission to the actual hiring process. That’s it. That’s all.

      This is why I don't get people who bristle at the thought of writing a cover letter. What the fuck. Just write a paragraph or two saying why you want this job, specifically—why you think you'll find it rewarding and how it fits with your professional interests. Is it that hard? I'll take this a thousand times over vs pruning, rearranging, and emphasizing line-item crap from my employment history, awards I was given 20 years ago, etc.

      We should start with the cover letter, hand it over to someone who's competent to review it (not a generic, know-nothing human pattern matcher), and then move on to hardcore testing for aptitude in an environment that matches as closely as possible the actual work environment where you're going to be expected to get things done day-to-day.

    2. I’m as convinced as a person can be that the resume-first hiring processes are just marginally worse than doing nothing at all. I spent 15 years tweaking resumes, writing cover letters, and generally taking all the very good advice I got only to have it never turn a cent of profit for me. What finally got me out of that pattern was a really odd situation where one of my articles got just enough heat on it that I was allowed to circumvent the middle part of the interview process and go straight to hiring manager interviews. And it was a whole different ballgame because I was now talking to someone who had both the power and desire to hire someone for a position, as opposed to someone whose biggest goal was keeping sufficient people away from that stage to keep them out of trouble.
    3. this isn’t supposed to be me calling out hiring managers and bosses everywhere

      Why not? Do it. It is literally their job.

    4. until someone invents an alternative, what’s to be done?

      The alternative: "smart" resumes that are something like contact cards, plus an agreement from employers to put way less stock into resumes and to devote less organizational infrastructure to keeping classic HR-droid positions filled with people who ultimately don't do very much for the company.

      So from the applicant's perspective, you don't worry about creating a resume for this job. It feels more like handing out a business card with your contact info to someone who needs it, except in this case instead of it being contact info (or rather, in addition to the contact info), it contains other stuff, too.

    1. try to apply study into day-to-day, try to set a high personal bar so that even "easy" tasks are challenging

      Ugh.

    1. GIS files can be huge. Travis County's parcel file is 187M

      Surely that's meant to say "GB"?

    1. finding a way to do a "git pull" without having to write a commit message (does --rebase do that?) would help in a huge way

      It might "help" but it defeats the entire purpose of the recordkeeping endeavor.

      If you don't care about the recordkeeping aspect and are just using Git to sync stuff between machines, then you're not really using Git and should stop trying to use it and use something else. (A better option, of course, is to think about it long enough to understand why recordkeeping is good and then take the time to write commit messages that don't suck and not treat it as an arbitrary and pointless hurdle. It's not pointless; there's a reason it was put there, after all.)

    2. I tried adding some stuff to ".gitignore", but it did no good

      This is why git add is an explicit, separate step. The students should have been told not to add anything to the repo except for the source files they're actually changing. A good rule of thumb: if the change was made by a human, and the human was you, then you can commit it; if the change was made by a machine, then don't.

    3. Git made it easy to move students to a different computer, because their code was already there, but the git config for name and email remained that of the computer's previous resident.

      This is only a problem if they were doing git config --global. Considering these were shared machines, then they shouldn't have been.

    4. The class was sharp and realized there had to be a better way. I said git worked better if everyone took their turn and did check-ins one-at-a-time.

      Except, of course, branching and merging mean that this hurdle isn't a necessary one. Git was designed from the beginning so that this would be a non-issue (or at least not as bad as what this class experienced); that's where the D in DVCS comes from, after all...

      (And I thought that's where this was going—! Rather than just giving people the solution—in this case branches/remotes—and telling them to use it, you let them experience the problem firsthand so they can appreciate the solution and why it's there. Really surprised that's not where this ended up.)

    5. The college we were at had locked down the networks crazy tight. Machines could not communicate with each other.
    6. I had a fever dream* in 2020 or 2021 that involved an epiphany for a clear way to integrate Git's data model into mass market computing systems (a la Mac OS and its Finder) in a way that was digestible to normal people. I've basically forgotten it. I think it was something like:

      1. use heuristics to figure out when someone is using the "[...]_draft", "[...]_final", etc, ad hoc versioning antipattern
      2. offer to make the directory a versioned one

      On systems like Mac OS where everything is tightly integrated, you wouldn't need to limit yourself to offering this in the Finder. Any time someone used the system-wide standard file save dialog in a way that exhibits the thing described in #1, the system could use the desktop notification subsystem to get the user's attention and offer to upgrade their experience. No interaction with the Git porcelain (as we know it) necessary.

      I fear that MS might do this first but bungle it (i.e. unthoughtfully) and also promote/upsell GitHub to you during the ride.

      * not really
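      A hedged sketch of step 1, assuming a simple filename heuristic (the suffix patterns and threshold are guesses; a real implementation would be tuned against actual user directories):

      ```javascript
      // Flag a directory when several files share a stem and differ only by an
      // ad hoc version suffix like "_draft", "_final", "_final2", or "_v3".
      const AD_HOC_SUFFIX = /(_draft\d*|_final\d*|_v\d+)(\.\w+)?$/i;

      function looksAdHocVersioned(filenames) {
        const stems = new Map();
        for (const name of filenames) {
          const stem = name.replace(AD_HOC_SUFFIX, "$2"); // keep the extension
          stems.set(stem, (stems.get(stem) || 0) + 1);
        }
        // Two or more files collapsing to one stem suggests the antipattern.
        return [...stems.values()].some((count) => count > 1);
      }
      ```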

    1. That means there can be any random data between records

      Yes, of course. That's another intentional feature.

    2. If you want to support reading from the front it seems required to state that the self extracting portion can't appear to have any records.

      Well, since you don't want to support it, then you aren't required to do that. (And good thing, because that would limit the format severely.)

    3. Does it mean the first time you see that scanning from the back you don't try to find a second match?

      It means you don't need to! (And why would you try? You have already found one, and you know there is only one, so to try to find more is to try to do something that you know is impossible.)

    4. But what does that mean?

      It means if you have a ZIP file (something that you know is a valid ZIP file) and you have found more than one end of central directory record, then there's something wrong with the method you used to find them (because there can by definition be only one).

    5. A forward scanner might fail to read these.

      Okay, fine. Don't use them (don't use broken software, generally—unless you're comfortable getting broken results).

    6. that contradicts 4.1.9 that says zip files maybe be streamed

      I don't take the spec to mean that you can reliably stream any arbitrary ZIP bytestream. If you are the producer and the consumer, though, then you can bend the format to your will to enable streaming.

      See Firefox's JAR handling for an example.

    7. Justine Tunney covers the genius of the ZIP format in her Redbean talk (@55:31) https://youtu.be/1ZTRb-2DZGs?t=3331

    1. If the data stream encodes values with byte order B, then the algorithm to decode the value on computer with byte order C should be about B, not about the relationship between B and C.

      See also: the brokenness of most schemes to cross-compile applications (including producing cross compilers).

      Rob's clear thinking here definitely had an influence on why Go's compiler is one of the few to have a sane cross-compilation story.

    1. This is:

      S. Mirhosseini and C. Parnin. “Docable: Evaluating the Executability of Software Tutorials”. 2020. https://chrisparnin.me/pdf/docable_FSE_20.pdf

    2. software decay

      See Hinsen, "software rot".

    3. Pimentel et al. found that only 24% of Jupyter notebooks could be executed

      This is the second time this appears in this paper.

      Previously: https://hypothes.is/a/Mm9whNQFEe2J6Y97btVQBQ

    4. The ambiguity (i.e. non-machine-readability) of tutorials described in this paper is a good example to demonstrate both what it means for something to be an algorithm and what it means to "code" something.

    5. Once I was *attempting* (I gave up) to install an application and the first tutorial allowed me a choice of 6 ways to install something and none worked.
    6. Our informants recognized this as a general problem with tutorials: “There’s an implicit assumption about the environment” (I5) and “many tutorials assume you have things like a working database” (I4). If tutorials “were all written with *less* assumptions and were more comprehensive that would be great
    7. Pimentel et al. [28] found that only 24% of Jupyter notebooks could be executed without exceptions
    1. it's better than RSS but RSS just seems a better brand-name

      Isn't that pretty interesting? You'd think it would be the other way around.

      In fact, what if it is the other way around? What if the failure of classic/legacy Web feeds has to do with power users' insistence on calling it "RSS"?

    1. this post reminds me of the initial comments to "Show HN: Dropbox"

      What? That's an insane comparison. This is like the total opposite of that comment; ActivityPub is super complicated.

    1. But for better or worse, ActivityPub requires a live server running custom software.

      This is bad protocol design. It violates (a variation of) the argument for the Principle of Least Power.

    1. This type of complexity has nothing to do with complexity theory

      Also not to be confused with Kolmogorov complexity, from algorithmic information theory. (At least not directly—but that isn't to say there is no relation there.)

    1. Pretty nuts that Safari isn't open source. I thought for sure that Edge was going to be fully open source, both before and after the Blink conversion. Why even build closed source browsers in 2023?

    1. My father owed this man money, and this was his revenge.

      If you are allowed by someone to owe them money, then what are you getting revenge for...?

  2. Mar 2023
    1. the expert blind spot effect [25], when tutorial creators do not anticipate steps where novice tutorial takers

      the expert blind spot effect, when tutorial creators do not anticipate steps where novice tutorial takers may have difficulty

      I call this "developer tunnel vision".

    1. After 10 years of industry and teaching nearly 1000 students various software engineering courses, including a specialized course on DevOps, I saw one common problem that has not gotten better. It was amazing to see how difficult and widespread the simple problem of installing and configuring software tools and dependencies was for everyone.
    1. Even notebooks still are problematic, for example, this study found that only 25% of Jupyter notebooks could be executed, and of those, only 4% actually reproduced the same results.
    1. This will also take the stress away from the developers in maintaining the SublimeText core, which will be supported by the community while they can focus on pro features for the text editor.
    2. I feel that open sourcing SublimeText is the only way for SublimeText to be relevant and compete against VSCode.

      The purpose of SublimeText is not to be "relevant" or "compete" against VSCode in the social media influencer sense of relevance and competition. It is to be a text editor that makes the author money both directly and indirectly (i.e. by selling licenses and being the kind of text editor that the author themselves uses to make software).

    3. Is It Time to Open Source SublimeText?

      This is such a bizarre article and headline. It's almost clickbait.

    1. Glenn, a seasoned pilot and astronaut, had just purchased an Ansco Autoset camera for a mere $40 from a drugstore

      Whether it was from a drugstore or not, that $40 in 1962 was like $400 today...

    1. On the one hand, it's a drag to do two different implementations, but on the other hand, it's a drag to have one of the two problems be solved badly because of baggage from the other problem; or to have the all-singing-all-dancing solution take longer than two independent solutions together.

      Premature generalization is the root of all evil?

    2. Well really the requirement is "small changes should be fast", right?

      Calling out the X/Y Problem.

    1. differentiating between using a database for indexing and as a canonical data store

      Most people who think they need a database really just need a cache? See jwz on the Netscape mail client:

      So, we have these ``summary files,'' one summary file for each folder, which provide the necessary info to generate that message list quickly.

      Each summary file is a true cache, in that if it is deleted (accidentally or on purpose) it will be regenerated when it is needed again

      https://www.jwz.org/doc/mailsum.html
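      The defining property of a true cache, sketched (the names are illustrative, not jwz's code): the derived data can be dropped at any time and rebuilt from the canonical store.

      ```javascript
      // A derived index that is safe to delete: everything in it can be
      // regenerated from the canonical data store on demand.
      class SummaryCache {
        constructor(buildFromCanonical) {
          this.build = buildFromCanonical; // pure function over the canonical store
          this.summary = null;
        }
        get() {
          if (this.summary === null) this.summary = this.build(); // lazy rebuild
          return this.summary;
        }
        drop() {
          this.summary = null; // no information lives only in the cache
        }
      }
      ```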

    1. “For example, I personally believe that Visual Basic did more for programming than Object-Oriented Languages did,” Torvalds wrote, “yet people laugh at VB and say it's a bad language, and they've been talking about OO languages for decades. And no, Visual Basic wasn't a great language, but I think the easy DB interfaces in VB were fundamentally more important than object orientation is, for example.”
    2. never once reasoning about physical locations, hardware, operating systems, runtimes, or servers

      ... but instead redirecting all the cognitive load that would have gone to that task into reasoning about AWS infrastructure...

    3. Coincidentally or not, the demise of Visual Basic lined up perfectly with the rise of the web as the dominant platform for business applications.

      ... which, as it turns out, is exactly what Yegge said he thought was going to happen in his response to the question that Linus was answering.

    4. Almost all Visual Basic 6 programmers were content with what Visual Basic 6 did. They were happy to be bus drivers: to leave the office at 5 p.m. (or 4:30 p.m. on a really nice day) instead of working until midnight; to play with their families on weekends instead of trudging back to the office. They didn't lament the lack of operator overloading or polymorphism in Visual Basic 6, so they didn't say much. The voices that Microsoft heard, however, came from the 3 percent of Visual Basic 6 bus drivers who actively wished to become fighter pilots. These guys took the time to attend conferences, to post questions on CompuServe forums, to respond to articles. Not content to merely fantasize about shooting a Sidewinder missile up the tailpipe of the car that had just cut them off in traffic, they demanded that Microsoft install afterburners on their buses, along with missiles, countermeasures and a head-up display. And Microsoft did.
    5. It gave me the start in understanding how functions work, how sub-procedures work, and how objects work. More importantly though, Visual Basic gave me the excitement and possibility that I could make this thing on my family's desk do pretty much whatever I wanted
    6. “The prevailing method of writing Windows programs in 1990 was the raw Win32 API. That meant the 'C' Language WndProc(), giant switch case statements to handle WM_PAINT messages. Basically, all the stuff taught in the thick Charles Petzold book. This was a very tedious and complex type of programming. It was not friendly to a corporate ‘enterprise apps' type of programming,”
    1. Yes, that is true. There's nothing you can do about that without breaking basic web expectations of URLs staying the same. The new endpoints can serve up the old content or Announce references to them, but the old URLs do need to continue resolving and at a minimum serve up a redirect if you want maximum availability. It would be a nice improvement to have a URL scheme that allowed referencing posts relative to a webfinger lookup to reduce the impact of that.

      Consider also a change to the conventions of UGC, where service operators give control of the objects (URLs) "owned" by a given user over to them, the owner. You should be able to connect your account with a request servicer like Cloudflare. You upload a document specifying how a worker should handle the request to your servicer of choice, inform the website operator that you'd like to route your requests through the servicer, and you're good.

    1. Browser-based interfaces are slow, clumsy, and require you to be online just to use them.

      No they don't.

      This conflates the runs-in-a-browser? property with the depends-on-mobile-code? property.

    1. NOTE: Cyren URL Lookup API has 1,000 free queries per month per user. COMPLETELY SEPARATE NOTE: You can use services like temp-mail to create temporary email addresses.

      Just be honest about your scumminess, instead of trying to be cute.

    1. Substack is growing fast: they now have 1M+ paid subscriptions but apparently generate no revenue. Which is already worrying to me. Because it means that yes, they can keep running like this if they keep getting investments but at some point something has to change.

      What if someone exploited this for a conversion strategy—and it was planned that way all along?

      Startup A receives VC investment. They spend it wooing creators to their platform, and those creators are able in turn to make a profit. The startup doesn't take a cut.

      The signs that they are going to crash and burn appear. Suddenly, 6 months before even the most pessimistic critics would have guessed, the startup announces that it is the end of their incredible journey. They warn everyone that in two weeks it will flip to read-only, and then 4 weeks after that it will go dark. Everything goes nuts. The creators were relying on the platform themselves to make money. No platform means no money.

      Suddenly a solution appears: Startup B. They offer more or less a drop-in replacement for Startup A's highest revenue-generating users. The only catch is that Startup B's plans cost money. Startup C also appears, along with Startup D, each catering to a different segment of stalwarts who haven't signed a deal with B. In fact, B then announces that they're investing in C and D, in order to promote a healthy ecosystem. Somehow A appears and says that they're investing in C and D, too.

      Meanwhile, A's sunset never happened—a month after the site went read-only, it's still in read-only mode. Then A announces low-cost paid plans, flips back to read-write, and opens for new signups, having successfully converted the most lucrative clients to the more expensive plans with B since they knew it was in their best interests to maintain continuity of revenue no matter the costs.

    1. I could port it to Hugo or Jekyll but I think the end result would make it harder to use, not simpler.
    2. Could this same design—or a similar one—be made available in a simpler form?

      Yes.

    1. I dislike the concept of editing old content on personal sites.

      Does that dislike extend to the reformatting of old publications e.g. when you pick a new template for your site? I'd guess not, but I'd argue that you should at least consider not doing that, either.

    1. haha. let’s not do that.

      ... unless...?

    2. perhaps subconsciously i have carried over those principles here

      Better analogy: in real life you can't actually unpublish something. At best you can go around trying to snatch up copies and then burn them. More realistically, you can publish a new edition with corrections to errata incorporated, but—notably—the "bad" version will always be *Blah Blah Blah*, first ed. The existence of the second edition doesn't erase the first one from people's bookshelves.

      It's a consequence of how the Web works, and of how poorly orgs carry out their information architecture, that foo.example.com/bar can be one thing one day and then another thing entirely the next day. If we used URLs like <https://mitpress.mit.edu/Zachary, G. Pascal. Endless Frontier: Vannevar Bush, Engineer of the American Century. 1st ed. MIT Press, 1999.> would this be as big of a problem? Would doing so nudge born-digital documents in the same direction?

    3. that’s not to say things should NEVER be edited

      Edit it, sure. But don't clobber the ability to unambiguously refer to the previous version (by the same name it was given initially) as a way to distinguish it from later versions.

    4. i don’t think it was designed to replicate the file and file folder model

      In my experience the file-and-folder model is the reason behind so much URL breakage. People don't seem to realize that having foo/bar/ and foo/baz/ doesn't mean you need a view in which bar/ and baz/ appear among the other things contributing to the "clutter" perceived when gazing upon something called foo/ (with foo/, in turn, cluttering up whatever it's "inside").

      It's this perceived clutter and the compulsion to declutter that leads to people moving stuff around in the pursuit of a more legible model.

    5. in my experience, google docs documents are very rarely if ever deleted. every organization i’ve ever done work for that uses google workspace has a problem with document bloat where google drive is just a mess of disorganized files, and document management is a job in itself

      There's a lot wrong with Google Docs, but the fact that documents stick around is not one of them. Documents should stick around.

    1. Use GitHub issues for blog comments, wiki pages and more!

      No.

    1. The only exception is a page which is deliberately a "latest" page

      Nah. The latest URI should be a (temporary) redirect to the canonical URI of whatever the latest version is.

    2. There are no reasons at all in theory for people to change URIs (or stop maintaining documents)

      "Don't change your URIs" and "don't stop maintaining your documents" is contradictory.

      If Kartik has a published document at /about in 2022 and then when I visit /about in 2023 what it said before is no longer there and it says something else instead (because he has been "maintaining" "it"), then that's a URI change. These are two separate documents, and the older one used to be called /about but now the newer one is.

    3. During 1999, http://www.whdh.com/stormforce/closings.shtml was a page I found documenting school closings due to snow. An alternative to waiting for them to scroll past the bottom of the TV screen! I put a pointer to it from my home page.

      It's actually the expectation that /stormforce/closings.shtml should mutate to reflect recency that is anathema to the project here...

    1. By reducing the duration of operations, he increased the chances of patient survival, saved thousands of lives, and pissed off the surgeons after they found out that he used the same methods for bricklayers. Surely those holier-than-thou doctors deserved better than to be compared to a bricklayer. 😉

      The tone immediately makes me question the credibility/reliability of the information in this piece. (Perhaps, though, I made a mistake in not realizing not to expect too much from a site calling itself "allaboutlean.com"...)

    2. He also optimized surgeons’ work, establishing the now-common method of a nurse handing the instruments to the surgeon rather than the surgeon turning around and looking for the right tool.

      Not unlike shopping carts and the modern grocery store (Piggly Wiggly), footnotes, and page numbers, this is something that had to be invented.

      Notably, though, it is not a market product.

    1. likely the new people learning to code and yelling about the new shiny libraries they found

      Bart asked me about what it is that I think causes NPM to be so bad, generally (or something like that), and I responded with the one-word answer "insecurity".

      I think "striving for acceptance" is a better, more diplomatic way to put it.

    1. If you truly want to understand NLS, you have to forget today. Forget everything you think you know about computers. Forget that you think you know what a computer is. Go back to 1962. And then read his intent.

      Alternatively, try cajoling yourself to invert the "[kind of like] the present, but cruder" thinking and frame the present in terms of the past—with present systems being "Engelbart, implemented poorly".

    1. I should be able to edit after I publish.

      'k, but we should also be able to see (a) that it has been edited, (b) what was edited, and (c) how to unambiguously refer to a particular revision. To not offer the ability to do so is to take advantage of something that is technically achievable given the architecture of the Web but violates the original intent (i.e. giving someone a copy that looks one way at one point, and then, when they or someone else asks for that thing at a later date, lying and saying that it really looks another way).

    1. One of the reasons this is so complicated is that there’s no simple or fast way to pay out musicians or labels for songs that are streamed in podcasts over RSS.

      WTF? This has fuck-all to do with RSS.

  3. podcasting20.substack.com
    1. The burden of resubscribing on a per-podcast basis every 7-15 days goes up exponentially as the podcasts being monitored grows into 6 or 7 digits.

      Mmm... how? It's just linear, unless I'm missing something.

    2. If you want to know within 1 minute if a podcast has a new episode

      You don't need to do that.

      Also: this problem is not specific to podcasting. It affects everything to do with RSS, generally.

    3. contains

      read: links to

    1. when you try to simulate it on the screen it not only becomes silly but it slows you down
    1. We've come up with a rule that helps us here: a change that updates node_modules may not touch any other code in the codebase.

      This makes it sound like a hack/workaround, but to want to do otherwise is to want to do something that is already on its face wrong. So there's no issue.

    2. Yes, this can be managed by a package-lock.json

      This shouldn't even be an argument. package-lock.json isn't free. It's like cutting all foods with Vitamin C out of your diet and then saying, "but you can just take vitamin supplements." The recommended solution utterly fails to account for the problem in the first place, and therefore fails to justify itself as a solution.

    1. It's unlikely to matter from a performance perspective. If you're only going to load, it doesn't really matter.

      Uh... what? This is a total shifting of the goalposts.

    1. “Why don’t you just” is not an appropriate way to talk to another adult. It’s shorthand for, “I have instantaneously solved your problem because I am The Solution Giver.

      Excellent summation.

    1. It isn't a good long term solution unless you really don't care at all about disk space or bandwidth (which you may or may not).

      Give this one another go and think it through more carefully.

    1. This web site is maintained by Tim Kindberg and Sandro Hawke as a place for authoritative information about the "tag" URI scheme. It is expected to stay small and simple.

      Emphasis: last sentence

    1. The common perception of the Web as a sui generis medium is also harmful. Conceptually, the most applicable standards for Web content are just the classic standards of written works, generally. But because it's embodied in a computer, people end up applying the standards they have in mind for e.g. apps.

      You check out a book from the library. You read it and have a conversation about it. Your conversation partner later asks you to tell them the name of the book, so you do. Then they go to the library and try to check it out, but the book they find under that name has completely different content from what you read.

    1. He would say, “To be early is to be on time.  To be on time is to be late.”

      I abhor tardiness and agree that being early is good advice, but this is a fucking stupid saying.

      Being on time is on time.

      (I mean this generally—not specific to the setting of this article.)

    2. required us to show up for concerts at least 30 minutes early.  If we were not 45 minutes early, we were marked as tardy

      "30 minutes", immediately followed by "45 minutes"... What?

    1. As for committing node_modules, there are pros and cons. Google famously does this at scale and my understanding is that they had to invest in custom tooling because upgrades and auditing were a nightmare otherwise. We briefly considered it at some point at work too but the version control noise was too much.

      If you don't want version control, then that's your choice, but admit (ideally out loud for others to hear, but failing that then at least to yourself) that that's what you're about.

    1. What problem does this try to solve?

      Funny (and ironic) that you should ask...

      I myself have been asking lately, what problem does the now-standard "Run npm install after you clone the repo" approach solve? Can you state the NPM hypothesis?

      See also: builds and burdens

    1. I'm not going to make the same defenses that folks on HN prefer.

      But the problem with Casey's worldview is that it provides no accommodations for the notion of zero-cost abstractions.

    1. absolute gem of a book, I use it for my compilers class:https://grugbrain.dev/#grug-on-parsing

      I didn't realize recursive descent was part of the standard grugbrain catechism, too, but it makes sense. Grugbrain gets it right again.

      Not unrelated—I always liked Bob's justification for using Java:

      I won't do anything revolutionary[...] I'll be coding in Java, the vulgar Latin of programming languages. I figure if you can write it in Java, you can write it in anything.

      https://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/

  4. Feb 2023
    1. As is evident, this is a structured document. The structure is specified as HTML using tags to denote HTML elements, and a styling language called CSS is used to specify rules that use selectors to match elements. The desired styles can be applied to matched elements by specifying the properties which should take effect for each rule.

      This goes on to provide a bunch more info with the express purpose of making it possible to 1. Print out this page, and then 2. Recreate the whole thing by hand if you wanted to, using only the printed page as reference.

      Could easily add a section that describes a bookmarklet that you could use to transform the "live" (in-browser) document into something formatted like the one at "This page is a truly naked, brutalist html quine" https://secretgeek.github.io/html_wysiwyg/html.html.
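
      A minimal sketch of what such a bookmarklet could look like (the helper names here are my own inventions, and the exact markup the target page expects is an assumption):

      ```javascript
      // Hypothetical sketch of a "view this page as its own source" bookmarklet,
      // in the spirit of the html_wysiwyg quine linked above.
      // escapeHtml is the testable core; makeBookmarklet wraps the transformation
      // into a javascript: URL you can drag to a bookmarks toolbar.
      function escapeHtml(src) {
        return src
          .replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;");
      }

      function makeBookmarklet() {
        // Replaces the page body with a <pre> containing the page's own markup.
        const body =
          "var s=document.documentElement.outerHTML;" +
          "document.body.innerHTML='<pre>'+s.replace(/&/g,'&amp;').replace(/</g,'&lt;')+'</'+'pre>';";
        return "javascript:" + encodeURIComponent(body);
      }
      ```

      This only shows the page's current serialized DOM, not the bytes as served, so it's a sketch of the idea rather than a faithful quine.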

    2. Note that if you add any highlights using Hypothes.is to any of the CSS code blocks here, it will break them.

    1. If you're looking for Stavros's "no-bullshit image host", that's https://imgz.org/

    1. After running code to load all of the outages by loading zoomed-in parts of the map, we verify that the number of outages we found matches the summary’s number of total outages. If it doesn’t, we don’t save the data, and we log an error.

      NB: there may be a race condition here? In which case, running into errors should be (one) expected outcome.
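
      If that race is real, retrying before giving up would distinguish a transient mismatch from a persistent one. A rough sketch of the check described above (the function names are placeholders, not the scraper's actual API):

      ```javascript
      // Hypothetical retry wrapper around the count-consistency check.
      // fetchSummaryCount / fetchOutagesByTiles stand in for the real scraper calls.
      async function scrapeConsistently(fetchSummaryCount, fetchOutagesByTiles, maxAttempts = 3) {
        for (let attempt = 1; attempt <= maxAttempts; attempt++) {
          const expected = await fetchSummaryCount();
          const outages = await fetchOutagesByTiles();
          if (outages.length === expected) {
            return outages; // counts agree: safe to save
          }
          console.error(`attempt ${attempt}: found ${outages.length}, summary says ${expected}`);
        }
        return null; // persistently inconsistent: don't save, per the authors' policy
      }
      ```

      A transient race resolves on a later attempt; a real scraping bug keeps failing and still gets logged and dropped.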

    1. If HTML was all the things we wanted it to be, we designed it to be, if reality actually matched the fantasies we tell ourselves in working group meetings, then mobile apps wouldn't be written once for iOS and once for Android and then once again for desktop web, they'd be written once, in HTML, and that's what everyone would use. You wouldn't have an app store, you'd have the web.

      This is stated like unanimous agreement is a foregone conclusion.

      The Web is for content. Just because people do build in-browser facsimiles of mobile-style app UIs doesn't mean that the flattening of content and controls into a single stream is something that everyone agrees is a good thing and what should be happening. They should be doing the opposite—curbing/reining it in.

    2. for all of the work that we've put into HTML, and CSS, and the DOM, it has fundamentally utterly failed to deliver on its promise

      You mean your promise—the position of the Web Hypertext Application Technology Working Group.

      Have you considered that the problem might have been you and what you were trying to do? You're already conceding failure at what you tried. Would it be so much worse to say that it was the wrong thing to have even been trying for?

    3. we will only gain as we unleash the kinds of amazing interfaces that developers can build when you give them the raw bedrock APIs that other platforms already give their developers

      You mean developers will gain.

    4. they're holding developers back

      Fuck developers.

    5. Jesus fucking Christ. Fuck this shit.

    6. Developers are scrambling to get out of the web and into the mobile app stores.

      This isn't new. Also: good—application developers shouldn't be the only ones holding the keys to the kingdom when it comes to making stuff available on the Web. Authors* and content editors should have those keys.

      * in the classic sense; not the post-millennium dilution/corruption where "authors" is synonymous with the tautologically defined "developers" that are spoken of when this topic is at the fore

    1. Checking your own repos on a new computer is one thing… inheriting someone else’s project and running it on your machine in the node ecosystem is very rough.


    1. On HN, the user bitwize (without knowing he or she is doing so) summarizes (the first half, at least) of the situation described here:

      The appeal of JavaScript when it was invented was its immediacy. No longer did you have to go through an edit-compile-debug loop, as with Java, or even an edit-upload-debug loop as with a Perl script, to see the changes you made to your web-based application. You could just mash reload on your browser and Bob's your uncle!

      The JavaScript community, in all its wisdom, reinvented edit-compile-debug loops for its immediate, dynamic language and I'm still assmad about it. So assmad that I, too, forgo all that shit when working on personal projects.

      https://news.ycombinator.com/item?id=34827569

    1. more tips for no-build-system javascript

      Basically, ignore almost everything that Modern JS practitioners tell you that you need to be doing. You're halfway there with this experiment.

      One of the most interesting and high-quality JS codebases that has ever existed is all the JS that powers/-ed Firefox, beginning from its conception through to the first release that was ever called "Firefox", the Firefox 3 milestone release, and beyond. To the extent that there was any build system involved (for all intents and purposes, there basically wasn't), the work it performed was very light. Basically a bunch of script elements, and later Components.utils.import calls for JSMs (NB: not to be confused with NodeJS's embarrassing .mjs debacle). No idea what things are like today, but in the event that there's a lot of heavy, NodeJS-style build work at play, it would be wrong to conclude that it has anything to do with necessity e.g. after finally reaching the limits of what no-build/low-build can give you (rather than just the general degradation of standards across Mozilla as a whole).

    2. But my experience with build systems (not just Javascript build systems!), is that if you have a 5-year-old site, often it’s a huge pain to get the site built again.
    1. Together we seek the best outcome for all people who use the web across many devices.

      The best possible outcome for everyone likely includes a world where MS open sourced (at least as much as they could have of) Trident/EdgeHTML—even if the plan still involved abandoning it.

    1. The compiler recognizes the syntax version by the MODULE keyword; if it is written in small caps, the new syntax is assumed.

      Ooh... this might benefit from improvement. I mentioned to Rochus the benefits of direct (no build step) runnability in the vein of Python or JS, and he responded that he has already done this for Oberon+ on the CLI/CLR.

      For the same reasons that direct runnability is attractive, so too might be the ability to copy and paste to mix both syntaxes. (Note that this is entirely a different matter from whether or not it is a good idea to commit files that mix upper and lower case; I'm talking about friction.) Then again, maybe not—how much legacy Oberon is being copied and pasted?

    1. I agree, of course, with the criticism of the price point. As I often say, $9.99/month (or even $4.99/month) is more expensive than premium email—and no matter how cool you think your thing is, it's way less important than email. You should always offer something for ~$20, especially if you already have a free tier. (When I say "for $20" here, I'm talking about a one time payment, or on a subscription basis that maxes out at $20/yr.)

      The following musings are highly specific to the market for what's being sold here.

      Paying $20 should get you something that you aren't bothered about again for the next year. Maybe to make it even easier, enable anyone to request a refund of their $20 for any reason within the first 7 days. This gives a similar feel to a free trial, but it curbs abuse and helps target serious buyers in the first place. In the event that 7 days is not enough time even for people to convince themselves that they need it, maybe keep open the ability to use a severely limited version of the service for the remainder of the year. E.g. you can continue to log in and simulate what you'd get with the full version, but it's only accessible to you because you can't publish them and/or share links with anyone who doesn't have access to your account.

    1. Despite being a Rust apologist and the fact that this paper makes Rust look better than its competitors, Steve Klabnik says this paper is quite bad and that he wishes people would stop referencing it.

    2. We have CSV files as a kind of open standard

      The W3C actually chartered a CSV working group for CSV on the Web. Their recommendation resolves ambiguities of the various CSV variants, and they even went on to supercharge the format in various well-defined, nice-to-have ways.
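
      For a sense of what that looks like: the CSVW recommendation lets you publish a JSON metadata file alongside the CSV that pins down the dialect and column types. A minimal illustrative example (the file name and columns here are invented):

      ```json
      {
        "@context": "http://www.w3.org/ns/csvw",
        "url": "data.csv",
        "dialect": { "delimiter": ",", "header": true },
        "tableSchema": {
          "columns": [
            { "name": "date", "titles": "Date", "datatype": "date" },
            { "name": "value", "titles": "Value", "datatype": "integer" }
          ]
        }
      }
      ```

      A CSVW-aware consumer can then parse the file unambiguously instead of guessing at delimiters and types.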

    1. Here is a larger example. In this case, the directory structure of the modules corresponds to the import paths.

      Huh? This sounds like it's saying the opposite of what was said two paragraphs earlier.

    1. Another point made by Wirth is that complexity promotes customer dependence on the vendor. So there is an incentive to make things complex in order to create a dependency in the customer, generating a more stable stream of income.
    1. Title

      In order to make it way easier to keep track of things in bookmarklet workspaces, there needs to be an option that adds an incrementing counter (timestamp?) to the bookmarklet title, so when dragging and dropping into your bookmarks library, you don't lose track of what's going on with a sea of bookmarklets all named the same thing.

    2. window.history.pushState(null, "unescaped bookmarklet contents");

      This has been giving me problems in the latest Firefox releases. It ends up throwing an error.

    1. @34:00

      In theory? RDF, it's awesome. I like it a lot. This is why I'm working on this. But in practice [...] the developer experience it's not great, and when people when they see Turtle and RDF and all these things, they don't like it, you know? My opinion is that it's because of lack of learning materials and the developer experience to get started.

    1. Usability and accessibility can impact where a technology falls on the spectrum: not paying attention to these dimensions makes it harder to move to higher levels of agency, staying more exclusive as "Look at what I/you/we can do, as the capable ones"
    1. Miguel de Icaza Jun 17, 2022 @migueldeicaza Replying to @migueldeicaza @markrendle and 2 others The foundation should fund, promote and advance a fully open source stack. And the foundation should remove every proprietary bit from the code in http://dotnet.org.

      Microsoft can and should compete on the open marketplace on their own. [...] And we should start with the debugger: we should integrate the Samsung one, it should be the default for OmniSharp and this is how we get contributions and improvements- not by ceding terrain to someone that can change the rules to their advantage at will.

      I tried (although perhaps not valiantly, but as an outsider) to convince Miguel and the then-Director of the .NET Foundation in 2015 that this state of affairs was probably coming and that he/they should reach out to the FSF/GNU to get RMS to lift the .NET fatwa, become a stakeholder/tastemaker in the .NET ecosystem, and encourage free software groupies to take charge so that FSF/GNU would be around as a failsafe for the community and would inevitably benefit greatly esp. from any of MS's future failure on this front. I tried the same in reverse, too. They seemed to expect me to be a liaison, and I couldn't get them to talk to each other directly, even though that's what needed to happen.

    1. Nobody but Eric would have thought of that shortcut to solve the problem.

      I find it odd that this is framed here as an example of an "unusual thinker". The solution seems natural, if underappreciated, for a domain where any tool's output target is one that was specifically crafted to intersect with what is ordinarily a (or in this case, the) "preferred form for modification".

      You can (and we probably all more often should) do the same thing with e.g. HTML+CSS build pipelines that sit untouched for years and in that course become unfashionable and undesirable...

    1. you can't hang useful language features off static types.For example, TypeScript explicitly declares as a design goal that they don't use the type system to generate different code

      This is a good thing. https://bracha.org/pluggableTypesPosition.pdf

      Refer to §6 #3.

    1. It would even use eye contact correction software to make it feel like you were actually looking at each other.

      If this were using professionally installed videoconferencing hardware, there would be no need for "eye contact correcting software" if done right. The use of such software would be an indicator of failure elsewhere.

    1. the NABC model from Stanford. The model starts with defining the Need, followed by Approach, Benefits, and lastly, Competitors. Separating the Need from the Approach is very smart. While writing the need, the authors have to understand it very well. The approach and benefits sections are pretty straightforward, where authors define their strategy and list the advantages. Since most people focus on them when they talk about ideas, it's also easy to write. Then the competition section comes. It is the part where the authors have to consider competitors of their proposal. Thinking about an alternative solution instead of their suggestion requires people to focus on the problem instead of blindly loving and defending their solutions. With these four parts, the NABC is a pretty good model. But it's not the only one.
  5. Jan 2023
    1. Publish content to your website using Indiekit’s own content management system or any application that supports the Micropub API

      "... assuming you rebase your site on top of Indiekit beforehand" (which is a big leap).

    2. I’m formally launching Indiekit, the little Node.js server with all the parts needed to publish content to your personal website and share it on social networks. Think of Indiekit as the missing link between a statically generated website and the social web protocols developed by the IndieWeb community and recommended by the W3C.

      Now just get rid of the server part.

      The real missing link between (conventional) static sites and the W3C's social protocols is still static. This post itself already acknowledges the reasons why.

      See also https://news.ycombinator.com/item?id=30862612

    3. Still, installing Indiekit on a web server can be a bit painful.
    1. Publishing them to the modern web is too hard and there are few purpose-built tools that help
    2. It’s too hard to build these kinds of experiences

      "... with current recommended practices", that is.

    1. On the other hand, it means that you now need to trust that Apple isn’t going to fuck with the podcasts you listen to.

      There really is no substantial increase in trust. You were already trusting their player to do the right thing.

    1. the author complains that an Apple Podcast user has to go through the app (and all its restrictions), but again, not that different from Instagram posts. As a user, you must go through Instagram to see photos.

      And cyclists need to make sure they have wheels attached before riding a bicycle.

      This is one of those things that superficially seems like a relevant retort, but as a reply it's actually total nonsense.

      Or, if you wanted to put it more abrasively: Instagram photos are not podcasts, dumbass.