2,977 Matching Annotations
  1. May 2023
    1. Mastodon, currently seeing a large influx of users in the wake of Musk’s Twitter takeover, creates a different set of challenges because different users can select the same handle on different instances.

      Except not, because the domain after the @ sign is part of your handle, and it's not "creating" any challenges that weren't already a thing with email...

    1. the dreaded PDF favored by academics

      This definitely needs to be corrected.

    2. The ephemeral and non-standardized way that individuals operate their own blogs and social media means that not only might something move or cease to exist (a findability problem) but there is also an honesty problem when contents change or update without record

      Ibid. There is nothing about HTTP which makes your URIs unstable. It is your organization.

    3. While blogging and tweeting is cheap and fast and encourages ideas to be shared, these aren’t trustworthy archives.

      There's nothing in principle that makes blogging untrustworthy. It comes down to, as Elavsky says just a bit later, "higher [or lower] standards" for longevity. But there's a sleight of hand here. The people who are receptive to the proposal in this paper are already almost by definition selected for those who have high standards. So this is not a robust argument that there's anything superior to this approach vs "Just put it on your blog and take the same amount of care that you would when following the archival advice in this paper."

    1. results of database queries using POST (rather than GET) are not addressable

      This is just a misuse of HTTP.
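
      A GET query, by contrast, puts the whole query into the URL, which makes the result a first-class, linkable resource. A minimal sketch (the endpoint and parameter names here are made up for illustration):

      ```javascript
      // Encoding a database query in the URL itself (GET) makes the result
      // addressable: the URL below can be bookmarked, linked, and cached.
      const base = "https://example.org/search";
      const params = new URLSearchParams({ q: "hypertext", page: "2" });
      const url = `${base}?${params}`;

      console.log(url); // https://example.org/search?q=hypertext&page=2
      ```

      The same query tunneled through a POST body has no URL of its own, so nothing can link to its result.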

    2. idea: a link type where a document says "I call myself X". In combination with a back-link service, it's a nifty URN idea.

      Prior art to the (surprisingly late entry) rel=canonical link introduced by Google.

    1. Annotate

      See also: Linked Data Notifications

    2. In a drag-and drop world, every window should have an icon for the document it holds which can be dragged to make a link. (Later versions of NeXTStep had this with alt/click on the titlebar).

      It's embarrassing that this isn't supported by freedesktop.org-affiliated environments.

    3. Somewhere near the "draft" end of the scale is its use a hypertext communal or personal notebook which is very close to a major original use of the Web in 1990. In this mode I can browse over notes made by people in my group, and rapidly contribute new ideas.

      Related: w2g/graph.global

    4. If you have had to switch to edit mode, and think of a local filename in which to save the file, then you have lost doubly, If you have had to answer lots of difficult questions about where to save absolute or relative links, you have lost yet again and probably messed up the file already! You should not have to think about "where" things are.

      And if you have to look up API docs to write a plugin to publish to a given service, then you have lost, too.

      Things like Neocities's out-of-band WebDAV gateway are some of the most pointless things in the world. WebDAV is HTTP! Just allow a PUT to the intended URL!
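
      If a PUT to the page's own URL were honored, a deploy step would need nothing beyond HTTP itself. A sketch of what that could look like with fetch (the URL, token, and helper name are all hypothetical; no real endpoint is assumed):

      ```javascript
      // Build the arguments for publishing a page via plain HTTP PUT.
      function buildPublishRequest(url, html, token) {
        const init = {
          method: "PUT",
          headers: {
            "Content-Type": "text/html; charset=utf-8",
            "Authorization": `Bearer ${token}`,
          },
          body: html,
        };
        return [url, init];
      }

      const [target, init] = buildPublishRequest(
        "https://example.neocities.org/notes/2023-05.html",
        "<!doctype html><title>May notes</title>",
        "SECRET-TOKEN",
      );
      // `await fetch(target, init)` would perform the actual publish.
      console.log(init.method, target);
      ```

      No out-of-band gateway, no plugin, no API docs: the URL you read from is the URL you write to.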

    1. Most of the installed base of web client software allows users to view link address. But ironically, that option is not available for printing in many cases. It would be a straightforward enhancement, well within the bounds of the existing architecture.

      There's more going on in the quoted requirements than these comments suggest.

    2. View control can be achieved on the Web by custom processing by the information provider. A number of views can be provided, and the consumer can express their choice via links or HTML forms. For example, gateways to database query systems and fulltext search systems are commonplace. Another technique is to provide multiple views of an SGML document repository[dynatext, I4I]. Another approach to view control is client-side processing

      Server-side processing for view control is gross—it's a conceptual abomination.

    1. each manageable in isolation

      Consider the phenomenon of people screwing others over as a matter of course in what they consider ways small enough to be acceptable—on the assumption that the other person seems to be positioned in such a way that they can absorb the shock.

    1. This is: Berners-Lee, Tim. “World-Wide Computer.” Communications of the ACM 40, no. 2 (February 1997): 57–58. https://doi.org/10.1145/253671.253704

    2. World-Wide Brain

      See also: the brains in Futurama's Infosphere.


    1. The Web does not yet meet its design goal as being a pool of knowledge that is as easy to update as to read. That level of immediacy of knowledge sharing waits for easy-to-use hypertext editors to be generally available on most platforms. Most information has in fact passed through publishers or system managers of one sort or another.

    2. Apart from being a place of communication and learning, and a new market place, the Web is a show ground for new developments in information technology.

      The eternal tyranny of the milieu of the Web

    1. a really good Windows no-code is still very important, though, because three-quarters of all PCs still run Windows

      It is important that it work on Windows, but three-quarters of all PCs do not run Windows. Three-quarters of all traditional laptops and desktops? Sure, but most personal computers these days are mobile phones.

    1. @17:11

      The idea of portability is that you take a fully running system that is compliant with the expectations of that host system, pick it up, put it on the other platform, and it takes care of all the problems associated with living on that new platform and just works.

      Presages containerization in the 2010s.

    2. @17:03

      The idea of portability is not that you take your C code and recompile it and hope it compiles and hope the compilers have the same bugs in them.

    1. A contrasting experience was to learn how to use the tools to turn my programs into executable. It was a painfully slow and deeply unpleasant process where knowledge was gathered here and there after trial, errors, and a lot of time spent on search engines.
    1. Shepard writes to Boring (yes, Boring again) at this point that his “only real source of anxiety now is the realization that much of my life would be lost if I don’t get my maze results published.”

      Echoes of Darwin.

    1. @54:06:

      Host: So in a way it's a regulation to drive change, or...

      Anderson: Or, regulation to stop change that would upset existing safety standards, social expectations, social norms.

    1. almost all beginners to RDF go through a sort of "identity crisis" phase, where they confuse people with their names, and documents with their titles. For example, it is common to see statements such as:- <http://example.org/> dc:creator "Bob" . However, Bob is just a literal string, so how can a literal string write a document?

      This could be trivially solved by extending the syntax to include some notation that has the semantics of a well-defined reference but the ergonomics of a quoted string. So if the notation used the sigil ~ (for example), then ~"Bob" could denote an implicitly defined entity that is, through some type-/class-specific mechanism, associated with the string "Bob".
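
      A minimal sketch of the idea, with the ~ sigil modeled as an interning function (all names here are hypothetical):

      ```javascript
      // Sketch of the ~"Bob" idea: a notation with the ergonomics of a quoted
      // string but the semantics of a reference. entityFor plays the role of
      // the ~ sigil: the same (type, label) pair always yields the same
      // entity object, so "Bob" the person is distinct from "Bob" the string.
      const interned = new Map();

      function entityFor(type, label) {
        const key = `${type}\u0000${label}`;
        if (!interned.has(key)) {
          interned.set(key, { type, label });
        }
        return interned.get(key);
      }

      const creator = entityFor("Person", "Bob"); // ~"Bob" in the imagined syntax
      const sameBob = entityFor("Person", "Bob");
      console.log(creator === sameBob); // true: a stable referent, not a literal string
      ```

      The dc:creator statement could then point at the interned entity rather than at the bare string, sidestepping the identity crisis.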

    1. configuring xterm

      Ugh. This is the real problem, I'd wager. Nobody wants to derp around configuring the Xresources to make xterm behave acceptably—i.e. just to reach parity with Gnome Terminal—if they can literally just open up Gnome Terminal and use that.

      I say this as a Vim user. (Who doesn't do any of the souping/ricing that is commonly associated with Vim.)

      It is worth considering an interesting idea, though: what if someone wrote a separate xterm configuration utility? Suppose its defaults produced the settings that most closely match the vanilla Gnome Terminal (or some other contemporary desktop default) experience, but it showed you the exact same set of knobs that xterm's modern counterpart gives you (by way of its settings dialog) to tweak this behavior? And then beyond that you could fiddle with the "advanced" settings to exercise the full breadth of the sort of control that xterm gives you? Think Firefox preferences/settings/options vs. dropping down to about:config for your odd idiosyncrasy.

      Since this is just an Xresources file, it would be straightforward to build this sort of frontend as an in-browser utility... (or a triple script, even).
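
      The core of such a utility is tiny. A sketch, assuming a settings object whose keys are real xterm resource names (the particular defaults chosen here are only a guess at Gnome Terminal parity):

      ```javascript
      // Turn a bag of friendly settings into an Xresources fragment.
      function toXresources(settings) {
        return Object.entries(settings)
          .map(([resource, value]) => `XTerm*${resource}: ${value}`)
          .join("\n");
      }

      const parityDefaults = {
        faceName: "Monospace",
        faceSize: 11,
        scrollBar: "false",
        selectToClipboard: "true",
      };

      console.log(toXresources(parityDefaults));
      ```

      Everything else is frontend: sliders and checkboxes over the settings object, with an "advanced" panel exposing the rest of xterm's resource vocabulary.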

    1. @~8:00 one quote says:

      With web articles, I think the biggest problem is that it is hard to create interactive elements in it. Some people are able to make very fancy web articles. But it takes efforts you know.

      Related to ho-hoism? Or at least the Torvaldsian sentiment about not being "big and professional like gnu[sic]".

    1. This is:

      Torvalds, Linus torvalds@klaava.helsinki.fi. Reply to "What would you like to see most in minix?"; Google Groups 2005 November edition. Message-ID 1991Aug26.110602.19446@klaava.Helsinki.FI. comp.os.minix, Usenet. 1991 August 26.

    2. (just a hobby, won't be big and professional like gnu)

      Is this (self-directed/-inflicted) ho-hoism? (Maybe even a consequence of Ra?)

  2. Apr 2023
    1. The word “entered” should not be used on the line above the Judge’s signature to show the date on which a judgment or order is signed.

      Huh? Hard to understand the wording here.

    1. Wow, this is me. A friend once analogized it to being like a light source. I am a laser, deeply penetrating a narrow spot, but leaving the larger field in the dark while I do so. Other people are like a floodlight, illuminating a large area, but not deeply penetrating any particular portion of it.

      This way of thinking should be treated with care (caution, even), lest it end up undergirding a belief in a false dichotomy.

      That can be a sort of "attractive people are shallow and dumb and unattractive people are intelligent and deep"-style mindtrap.

    2. I can't seem to code and engage in an ongoing human interaction at the same time. It has to be one or the other. I also really hate having someone looking over my shoulder while I'm typing.

      This doesn't sound to me like they have actually been doing pair programming as I have always understood it. Neither participant needs to "engage" in those (admittedly distracting) things—least of all the person at the keyboard.

      In pair programming as I have had it laid out—and not as a consequence of hearing "pair programming" and extrapolating or assuming what it involves—one person is writing the code just like when they're alone, except they're not actually controlling the computer. That's the other person's job. The first person is controlling the person who is controlling the computer. Part of the job of the second person involves shutting the fuck up and just following what the other person is saying to do. This pattern only ever breaks when the pair decides to switch places or the person dictating runs into an issue, at which point the person at the keyboard (who has been thinking all the while as an observer of what the two have been producing and is expected to know what the problem is, having already recognized the problem the first time around) should speak up. When switching roles or after reaching milestones, the two can confer about high-level concerns, immediate and distant plans to deal with things overlooked or set aside in the last round, etc.

      I am aware that "two people working at a single computer" is how most people understand pair programming (and that there seems to be academic work covering the topic which lays it out in a way that contradicts what I've described here), but I regard that as wrong—for all the obvious reasons, including and especially the ones described by the commenter here...

    1. One of them has a meeting? Sorry can't do any work.

      One of them has a meeting? They both have a meeting—this one.

      To put it another way: what if you received an email informing you of meeting A and then were later informed of meeting B? How would you ordinarily handle this conflict?

    2. forcing two people to do a specific thing at the same time

      Being able to show up to work on time is a basic requirement for people who are, culturally, widely viewed as being not very responsible—grocery store workers, gas station attendants, etc. Being able to satisfy a similar expectation should not be difficult, then, for well-educated and well-paid folks on a regular basis.

    1. It sounds like the non-enthusiast “reimplement everything in my favorite language” answer is that Go’s FFI is a pain, even for C.

      Relative to the experience that Golang developers are used to, yes, it's a pain.

      But that isn't to say it's any more or less painful on an absolute scale, esp. wrt what comprises typical experiences in other ecosystems.

    1. consumes more CPU and memory to simplify the logic and improve reliability.

      Candid! I propose that this interpretation of "Modern" receive widespread recognition.

    1. These systems provide quite powerful tools for automatic reasoning, but encoding many kinds of knowledge using their rigid formal representations requires significant—and often completely infeasible—amounts of effort.
    2. This is:

      Malone, Thomas W., Keh-Chiang Yu, and Jintae Lee. 1989. “What Good Are Semistructured Objects? : Adding Semiformal Structure to Hypertext.” Working Paper. Cambridge, Mass. : Sloan School of Management, Massachusetts Institute of Technology. https://dspace.mit.edu/handle/1721.1/49393


    1. Stevens, W. Richard 1994. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley, Reading, Massachusetts
    2. Ondaatje, Michael 1992. The English Patient. Vintage International, New York
    3. Mockapetris, P.V. 1987b. "Domain Names: Implementation and Specification," RFC 1035
    4. Mockapetris, P.V. 1987a. "Domain Names: Concepts and Facilities," RFC 1034
    5. Malone, Thomas W., Grant, Kenneth R., Lai, Kum-Yew, Rao, Ramana, and Rosenblitt, David 1987. "Semistructured Messages are Surprisingly Useful for Computer-Supported Coordination." ACM Transactions on Office Information Systems, 5, 2, pp. 115-131.
    6. Malone, Thomas W., Yu, Keh-Chiang, and Lee, Jintae 1989. What Good are Semistructured Objects? Adding Semiformal Structure to Hypertext. Center for Coordination Science Technical Report #102. M.I.T. Sloan School of Management, Cambridge, MA
    7. Gilbert, Martin 1991. Churchill: A Life. Henry Holt & Company, New York, page 595
    8. There are a few obvious objections to this mechanism. The most serious objection is that duplicate information must be maintained consistently in two places. For example, if the conference organizers decide to change the abstracts deadline from 10 August to 15 August, they'll have to make that change both in the META element in the HEAD and in some human-readable area of the BODY.

      Microdata addresses this.

    1. With the Richer Install UI web developers have a new opportunity to give their users specific context about their app at install time. This UI is available for mobile from Chrome 94 and for desktop from Chrome 108. While Chrome will continue to offer the simple install dialogs for installable apps, this bigger UI gives developers space to highlight their web app.

      K, but the installation comes from a context that the app vendor already controls—the entire surface of their own site—so...

      Even so, do what you want with your browser UI, but like... there are crazy levels of navel-gazing here.

    1. not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage.

      This turn of phrase always struck me as confusing. (Still does, actually.) Maybe I lack the cultural context where "free [as in] beer" is illuminating rather than confounding. It raises questions, though—what cultural context is the one where this is a logical and widely understood sequence of words?

      Complimentary beer nuts, sure, but the beer isn't free—that's why the nuts are: because they're trying to sell more beer.

      When and where was "free [as in] beer" ever a thing?

    1. This is:

      Caplan, Priscilla. Support for Digital Formats. Library Technology Reports 44, 19–21 (2008). https://journals.ala.org/index.php/ltr/article/view/4227

    2. Adrian White

      Did Adrian White become Adrian Brown? That's what the byline in DPTP-01 actually says.


    1. The present leadership, particularly from RMS, creates an exclusionary environment in a place where inclusion and representation are important for the success of the movement.

      Does this mean that Drew is going to step back, then? He is after all yet another white guy himself who has few (or none) of the characteristics described, so...

      The term "virtue signalling" was probably played out 5 years ago, but geez—it's hard to see this as anything except an opportunistic version of exactly that. Like greenwashing being wielded by people with ulterior motives, this seems like a straightforward case of an intellectually bankrupt attempt to conspicuously dress up one's argument in the most politically unimpeachable cause and claim that it comes from a place of wanting the best for society when ultimately a self-serving motive underlies the thing. This is very cheap and very tacky.

    2. His polemeic rhetoric rivals even my own, and the demographics he represents – to the exclusion of all others – is becoming a minority within the free software movement. We need more leaders of color, women, LGBTQ representation, and others besides. The present leadership, particularly from RMS, creates an exclusionary environment in a place where inclusion and representation are important for the success of the movement.

      I'm not a vanguard for the FSF per se, but when I think about the community norms and attitudes that are most exclusionary and turn people away, it's the sort of stuff that Drew and his fans are most often associated with. Stallman at least e.g. views Emacs as something that "secretaries" can be taught. Drew's circle tends to come across as having superiority complexes and holding strong opinions about computing that stop just short of calling you a little bitch for not being as hardcore as they are...

    3. hip new software isn’t using copyleft: over 1 million npm packages use a permissive license while fewer than 20,000 use the GPL

      I didn't realize Mr. DeVault was such an admirer of NPM.

    4. Many people assume that the MIT license is not free software because it’s not viral.

      Their fault, really. What does the FSF have to do, and how much and how often do they have to do it, to make clear that this isn't their position? The culprit is right there in the sentence: the word "assume". It's not unforgivable to not be certain, but the number of people I've interacted with who insist that this is the FSF's position are exasperating.

    1. As far as I can tell, Google Takeout lists every Google service that stores data of some kind

      Not Google Podcasts.

    1. const EACH$ = ((x) => (this.each(x))); const SAFE$ = ((x) => (this.escape(x))); const HTML$ = ((x) => (x));

      In my port of judell's slideshow tool, I made these built-ins. (They're bindings that are created in the ContentStereotype implementation.)

      In that app, the stereotype body is just a return statement. Perhaps the ContentStereotype implementation should introspect on the source parameter and check if it's an expression or a statement sequence. The rule should be that iff the first character is an open paren, then it's an expression—so there is no need for an explicit return, nor the escaped backtick...

      This still gives the flexibility of introducing other bindings—like the ones for _CSSH_ and _CSSI_ here—but doesn't penalize people who don't need it.
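
      A sketch of that dispatch rule (ContentStereotype itself is assumed context; compileStereotypeBody is a hypothetical name):

      ```javascript
      // Iff the body's first non-whitespace character is an open paren, treat
      // it as an expression and synthesize the return; otherwise compile it
      // as-is as a statement sequence.
      function compileStereotypeBody(source) {
        const isExpression = source.trimStart().startsWith("(");
        const body = isExpression ? `return ${source};` : source;
        return new Function("x", body);
      }

      const expr = compileStereotypeBody("(x + 1)");       // expression form
      const stmts = compileStereotypeBody("return x * 2;"); // statement form
      console.log(expr(41), stmts(21)); // 42 42
      ```

      The single-character check keeps the common case (a bare expression) free of boilerplate without taking anything away from bodies that genuinely need multiple statements.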


    1. Identifiers are an area where the needs of libraries and publishing are not well supported by the commercial development
    2. Handles have one serious disadvantage. Effective use requires the user’s Web browser to incorporate special software. CNRI provides this software, but digital libraries have been reluctant to require their users to install it.
    1. amd [sic.]

      I'm having trouble determining the source of this purported error. This PDF appears to have copied the content from the version published on kurzweilai.net, which includes the same "erratum". Meanwhile, however, this document which looks like it could plausibly be a scan of the original contains no such error: https://documents.theblackvault.com/documents/dod/readingroom/16a/977.pdf

      I wonder if someone transcribed the memo with this "amd" error and that copy was widely distributed (e.g. during the BBS era?) and then someone came across that copy and inserted the "[sic]" adornments.

    1. The system also includes a searchable online database that will give a buyer instant information. "DOI will also provide a national directory of who owns what online," said Burns. "The system will give permissions, list rights fees and provide other articles by the same author and instantly put the buyer in contact with the publisher."

      A subset of Ted Nelson's envisioned transcopyright system

    1. we reported evidence indicating that static type systems acted as a form of implicit documentation

      wat

      There's nothing implicit about it. Type annotations are explicit.

    1. Would I want to keep URLs of such draft/work-in-progress files stable, shall they be first-class citizens of the site, should they be indexed, how would I indicate freshness/state etc.?
    2. I've started thinking in the direction of serving on-going writing in a separate folder as raw plain text. That would be quite frictionless
    1. The end-user thinks, "Ah, it was only a dollar, I got my money's worth," but the publisher has basically paid nothing for the work, adds a few hours of digital typesetting, and then makes 100% profit on the sale.

      I have real trouble seeing that as saddening.

      It isn't as if anyone is going around making the free version arbitrarily defective. The reseller is putting in work to add value and getting paid a buck for it (literally).

      It would perhaps be upsetting, too, if they were going after folks somehow. But I don't see this in the rendition above.

      (In reality, a buyer would probably be fine if they took the Project Gutenberg version, bought the reseller's digitally reformatted one, extracted the TOC data and error corrections, and then slapped that onto the free version and sent it back upstream to Project Gutenberg or someone else who is distributing free copies. They would be legally in the clear, so the reseller stands to make as little as $1 for their investment in that work; in that case it seems eminently fairly priced.)

    1. Some filesystems (like ext2 specifically) complain if you have more than ~65k subdirectories in a directory, so my original plan of having tweets live at /{username}/status/{id}/index.html (and resolved to /{username}/status/{id}/) doesn't work on those filesystems. Instead all the files live at /{username}/status/{id}.html

      I'm not sure how this solves the problem specifically, since there will still be thousands of entries (one for each tweet) in the status/ directory...

      (Unless I'm not grasping something and the problem truly is the matter of having 2^16 subdirectories in particular—without similar concerns for ordinary files.)

      That does raise questions about how someone would run into the original problem in the first place; vanilla ZIP has a fundamental limitation of 2^16 - 1 total files. Is Twitter using the ZIP64 extension?

    2. This won't work if your archive is "too big". This varies by browser, but if your zip file is over 2GB it might not work. If you are having trouble with this (it gets stuck on the "starting..." message), you could consider: unzipping locally, moving the /data/tweets_media directory somewhere else, rezipping (making sure that /data directory is on the zip root), putting the new, smaller zip into this thing, getting the resulting zip, and then re-adding the /data/tweets_media directory (it needs to live at "[username]/tweets_media" in the resulting archive). Unfortunately, this will include media for your retweets (but nothing private) so it'll take up a ton of disk space. I am sorry this isn't easier, it's a browser limitation on file sizes.

      Contra [1], the ZIP format was brilliantly designed and natively supports a solution to this; ZIP was conceived with the goal of operating under the constraint that an archive might need to span multiple volumes. So just use that.

      1. https://games.greggman.com/game/zip-rant/
    1. a printed book containing the 10000 best internet URLs

      The book is: Der große Report - Die besten Internetadressen. 2000. Data Becker.

    2. A few of the entries are pretty straightforward because I'm sure they'll be around for a long time and they're obviously important: Wikipedia and the Internet Archive.

      The context is 10,000 URLs, not "sites". The URL for "Wikipedia" leads to a document that is on its own not entirely interesting. It would be the URLs for individual articles that should make the cut, unless "URL" is being used as a euphemism here.

    3. something so ephemeral as a URL

      Well, they're not supposed to be ephemeral. They're supposed to be as durable as the title of whatever book you're talking about.

    1. The homepage is the most recent post which means you don't have to figure out if I posted something new since the last time you visited and I truly believe that is how a personal blog is supposed to be.

      That goes against the design of URLs and also confused/annoyed me when I first landed on this blog, so...

    1. I am extremely gentle by nature. In high school, a teacher didn’t believe I’d read a book because it looked so new. The binding was still tight.

      I see this a lot—and it seems like it's a lot more prevalent than it used to be—reasoning from a proxy. Like trying to suss out how competent someone is in your shared field by looking at their GitHub profile, instead of just asking them questions about it (e.g. the JVM). If X is the thing you want to know about, then don't look at Y and draw conclusions that way. (See also: the X/Y problem.) There's no need to approach things in a roundabout, inefficient, error-prone manner, so don't bother trying unless you have to.

    1. Apple pointed out that this is apparently allowed by the spec, and that it was faulty feature detection on our part. Looking at the relevant spec, I still can't say, as a web developer rather than a browser maker, that it's obvious that it's allowed.

      C'mon. It's right there:

      Follow the instructions given in the WebGL specifications' Context Creation sections to obtain a WebGLRenderingContext, WebGL2RenderingContext, or null; if the returned value is null, then return null;

      (Not that it should even be necessary to resort to checking the spec—relying on an assumption of a non-null return value here should raise the commonsense suspicions of anyone.)

    2. In the end, they added a special browser quirk that detects our engine and disables OffscreenCanvas. This does avoid the compatibility disaster for us. But tough luck to anyone else

      I agree that this approach is bad. I hate that this exists. The differences between doctype-triggered standards and quirks mode was bad enough. This is so much worse—and impacts you even when you're in ostensible standards mode.

    3. I tried my best to persuade Apple to delay it, but I only got still-fairly-vague wording around it being likely to ship as it was.

      Huh? Why? Why even waste the time? Just go fix your code.

    4. preserves web compatibility

      "... you keep using that word"

    5. Safari is shipping OffscreenCanvas 4 years and 6 months after Chrome shipped full support for it, but in a way that breaks lots of content

      I don't think that has been shown here? The zip.js stuff breaking is one thing, but the poor error detection regarding off-screen canvas doesn't ipso facto look like part of a larger pattern.

    6. doesn't Apple care about web compatibility? Why not delay OffscreenCanvas

      Answer: because they care about Web compatibility. If they delay X because Y is not ready, then that's ΔT where their browser remains incompatible with the rest of the world, even though it doesn't have to be.

    7. Firstly my understanding of the purpose of specs was to preserve web compatibility - indeed the HTML Design Principles say Support Existing Content. For example when the new Array flatten method name was found to break websites, the spec was changed to rename it to flat so it didn't break things. That demonstrates how the spec reflects the reality of the web, rather than being a justification to break it. So my preferred solution here would be to update the spec to state that HTML canvas and OffscreenCanvas should support the same contexts. It avoids the web compatibility problem we faced (and possibly faced by others), and also seems more consistent anyway. Safari should then delay shipping OffscreenCanvas until it supported WebGL, and then all the affected web content keeps working.

      This is a huge reach.

      Although it's debatable whether having mismatched support is a good idea for a vendor, arguing that it breaks the commitment to compatibility is off. Construct broke not because something was removed, but because something was added and your code did not handle that well.

    8. MDN documentation mentioned nothing about inconsistent availability of contexts

      Two things:

      * Why would it have mentioned anything? It wouldn't have. It hadn't shipped yet.
      * MDN is not prescriptive; it's written by volunteers.

    9. typeof OffscreenCanvas !== "undefined"

      The second = sign is completely superfluous here. Only one is necessary.

    10. Construct requires WebGL for rendering. So it was seeing OffscreenCanvas apparently supported, creating a worker, creating OffscreenCanvas, then getting null for a WebGL context, at which point it fails and the user is left with a blank screen. This is in fact the biggest problem of all.

      Well, the biggest problem is that anything can ever lead to a blank screen because Construct isn't doing simple error detection.
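
      That detection doesn't need to be anything fancy: ask for the context and check for null, rather than inferring WebGL support from the mere existence of the OffscreenCanvas constructor. A sketch (function name is mine):

      ```javascript
      // Feature-detect WebGL-on-OffscreenCanvas the way the spec invites you
      // to: getContext is allowed to return null, so check for exactly that.
      function detectWorkerWebGL() {
        if (typeof OffscreenCanvas === "undefined") return false;
        try {
          const canvas = new OffscreenCanvas(1, 1);
          return canvas.getContext("webgl") !== null;
        } catch (e) {
          return false;
        }
      }
      ```

      If this returns false, fall back to a main-thread canvas or show a real error message—anything but a blank screen.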

    1. In a resume-first hiring process, your resume is at best a raffle ticket that might pay off and grant you admission to the actual hiring process. That’s it. That’s all.

      This is why I don't get people who bristle at the thought of writing a cover letter. What the fuck. Just write a paragraph or two saying why you want this job, specifically—why you think you'll find it rewarding and how it fits with your professional interests. Is it that hard? I'll take this a thousand times over vs pruning, rearranging, and emphasizing line-item crap from my employment history, awards I was given 20 years ago, etc.

      We should be starting with the cover letter and handing that over to someone who's competent to review it (not a generic, know-nothing human pattern matcher) and then move on to hardcore testing for aptitude in a testing environment that matches as closely as possible the actual work environment where you're going to be expected to get things done on a day-to-day basis.

    2. I’m as convinced as a person can be that the resume-first hiring processes are just marginally worse than doing nothing at all. I spent 15 years tweaking resumes, writing cover letters, and generally taking all the very good advice I got only to have it never turn a cent of profit for me. What finally got me out of that pattern was a really odd situation where one of my articles got just enough heat on it that I was allowed to circumvent the middle part of the interview process and go straight to hiring manager interviews. And it was a whole different ballgame because I was now talking to someone who had both the power and desire to hire someone for a position, as opposed to someone whose biggest goal was keeping sufficient people away from that stage to keep them out of trouble.
    3. this isn’t supposed to be me calling out hiring managers and bosses everywhere

      Why not? Do it. It is literally their job.

    4. until someone invents an alternative, what’s to be done?

      The alternative: "smart" resumes that are something like contact cards, plus an agreement from employers to put way less stock in resumes and less organizational infrastructure toward keeping classic HR droid positions filled with people who ultimately don't do very much for the company themselves.

      So from the applicant's perspective, you don't worry about creating a resume for this job. It feels more like handing out a business card with your contact info to someone who needs it, except in this case instead of it being contact info (or rather, in addition to the contact info), it contains other stuff, too.

    1. try to apply study into day-to-day, try to set a high personal bar so that even "easy" tasks are challenging

      Ugh.

    1. GIS files can be huge. Travis County's parcel file is 187M

      Surely that's meant to say "GB"?

    1. finding a way to do a "git pull" without having to write a commit message (does --rebase do that?) would help in a huge way

      It might "help" but it defeats the entire purpose of the recordkeeping endeavor.

      If you don't care about the recordkeeping aspect and are just using Git to sync stuff between machines, then you're not really using Git and should stop trying to use it and use something else. (A better option, of course, is to think about it long enough to understand why recordkeeping is good and then take the time to write commit messages that don't suck and not treat it as an arbitrary and pointless hurdle. It's not pointless; there's a reason it was put there, after all.)

    2. I tried adding some stuff to ".gitignore", but it did no good

      This is why git add is git add. The students should have been told not to add anything to the repo except for the source files they're actually changing. A good rule of thumb: if the change was made by a human, and the human was you, then you can commit it; if the change was made by a machine, then don't.

    3. Git made it easy to move students to a different computer, because their code was already there, but the git config for name and email remained that of the computer's previous resident.

      This is only a problem if they were doing git config --global. Considering these were shared machines, then they shouldn't have been.

    4. The class was sharp and realized there had to be a better way. I said git worked better if everyone took their turn and did check-ins one-at-a-time.

      Except, of course, branching and merging mean that this hurdle isn't a necessary one. Git was designed from the beginning so that this would be a non-issue (or at least not as bad as what this class experienced); that's where the D in DVCS comes from, after all...

      (And I thought that's where this was going—! Rather than just giving people the solution—in this case branches/remotes—and telling them to use it, then what you do is you let them experience the problem firsthand and then can appreciate the solution and why it's there. Really surprised that's not where this ended up.)

    5. The college we were at had locked down the networks crazy tight. Machines could not communicate with each other.
    6. I had a fever dream* in 2020 or 2021 that involved an epiphany for a clear way to integrate Git's data model into mass market computing systems (a la Mac OS and its Finder) in a way that was digestible to normal people. I've basically forgotten it. I think it was something like:

      1. use heuristics to figure out when someone is using the "[...]_draft", "[...]_final", etc, ad hoc versioning antipattern
      2. offer to make the directory a versioned one

      On systems like Mac OS where everything is tightly integrated, you wouldn't need to limit yourself to offering this in the Finder. Any time someone used the system-wide standard file save dialog in a way that exhibits the thing described in #1, the system could use the desktop notification subsystem to get the user's attention and offer to upgrade their experience. No interaction with the Git porcelain (as we know it) necessary.

      I fear that MS might do this first but bungle it (i.e. unthoughtfully) and also promote/upsell GitHub to you during the ride.

      * not really
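A minimal sketch of what the first step might look like (the suffix list and the two-variant threshold are my own guesses at the heuristic, not anything recovered from the dream):

```python
import re

# Hypothetical heuristic: filenames that share a stem but differ only in
# an ad hoc version suffix like "_draft", "_final", "_v2", or "_copy".
AD_HOC_SUFFIX = re.compile(r"(.+?)[_ -](draft|final|v\d+|copy)\.(\w+)$", re.I)

def looks_like_ad_hoc_versioning(filenames):
    # Group files by (stem, extension); two or more suffixed variants of
    # the same stem is the signal to offer making the directory versioned.
    stems = {}
    for name in filenames:
        m = AD_HOC_SUFFIX.match(name)
        if m:
            stems.setdefault((m.group(1), m.group(3)), []).append(name)
    return [names for names in stems.values() if len(names) >= 2]
```

A Finder (or save-dialog) integration would run something like this over the directory listing and, on a hit, raise the "upgrade to a versioned folder?" notification.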

    1. That means there can be any random data between records

      Yes, of course. That's another intentional feature.

    2. If you want to support reading from the front it seems required to state that the self extracting portion can't appear to have any records.

      Well, since you don't want to support it, then you aren't required to do that. (And good thing, because that would limit the format severely.)

    3. Does it mean the first time you see that scanning from the back you don't try to find a second match?

      It means you don't need to! (And why would you try? You have already found one, and you know there is only one, so to try to find more is to try to do something that you know is impossible.)

    4. But what does that mean?

      It means if you have a ZIP file (something that you know is a valid ZIP file) and you have found more than one end of central directory record, then there's something wrong with the method you used to find them (because there can by definition be only one).

    5. A forward scanner might fail to read these.

      Okay, fine. Don't use them (don't use broken software, generally—unless you're comfortable getting broken results).

    6. that contradicts 4.1.9 that says zip files may be streamed

      I don't take the spec to mean that you can reliably stream any arbitrary ZIP bytestream. If you are the producer and the consumer, though, then you can bend the format to your will to enable streaming.

      See Firefox's JAR handling for an example.

    7. Justine Tunney covers the genius of the ZIP format in her Redbean talk (@55:31) https://youtu.be/1ZTRb-2DZGs?t=3331
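For reference, the scan-from-the-back logic discussed in points 3 and 4 can be sketched like this. It is a minimal illustration, not production code; a real implementation should also validate the comment-length field, since the signature bytes could in principle occur inside the trailing comment:

```python
EOCD_SIG = b"PK\x05\x06"  # end of central directory record signature
EOCD_MIN = 22             # fixed portion of the record, in bytes
MAX_COMMENT = 0xFFFF      # the comment length field is 16 bits

def find_eocd(data):
    # The record must start within the last EOCD_MIN + MAX_COMMENT bytes.
    # Scan backwards and stop at the first hit: a valid ZIP file has
    # exactly one end of central directory record, so once it's found
    # there is nothing more to look for.
    tail = data[-(EOCD_MIN + MAX_COMMENT):]
    pos = tail.rfind(EOCD_SIG)
    if pos == -1:
        return None  # not a ZIP file (or a severely truncated one)
    return len(data) - len(tail) + pos
```

Note that the "any random data between records" property falls out naturally: the scan never looks at anything before the record it is searching for.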

    1. If the data stream encodes values with byte order B, then the algorithm to decode the value on computer with byte order C should be about B, not about the relationship between B and C.

      See also: the brokenness of most schemes to cross-compile applications (including producing cross compilers).

      Rob's clear thinking here definitely had an influence on why Go's compiler is one of the few to have a sane cross-compilation story.
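Pike's point, in a few lines: the decode below is written entirely in terms of the stream's byte order (little-endian here), and the host's byte order never appears:

```python
def decode_le32(b):
    # Decode an unsigned 32-bit little-endian value from 4 bytes.
    # Only the *stream's* byte order (B) is mentioned; whether the host
    # (C) is big- or little-endian is irrelevant and never consulted.
    return b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24)
```

The same code runs unchanged on any host, which is exactly why an algorithm "about B, not about the relationship between B and C" is the right framing.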

    1. This is:

      S. Mirhosseini and C. Parnin. “Docable: Evaluating the Executability of Software Tutorials”. 2020. https://chrisparnin.me/pdf/docable_FSE_20.pdf

    2. software decay

      See Hinsen, "software rot".

    3. Pimentel et al. found that only 24% of Jupyter notebooks could be executed

      This is the second time this appears in this paper.

      Previously: https://hypothes.is/a/Mm9whNQFEe2J6Y97btVQBQ

    4. The ambiguity (i.e. non-machine-readability) of tutorials described in this paper is a good example to demonstrate both what it means for something to be an algorithm and what it means to "code" something.

    5. Once I was *attempting* (I gave up) to install an application and the first tutorial allowed me a choice of 6 ways to install something and none worked.
    6. Our informants recognized this as a general problem with tutorials: “There’s an implicit assumption about the environment” (I5) and “many tutorials assume you have things like a working database” (I4). If tutorials “were all written with *less* assumptions and were more comprehensive that would be great
    7. Pimentel et al. [28] found that only 24% of Jupyter notebooks could be executed without exceptions
    1. it's better than RSS but RSS just seems a better brand-name

      Isn't that pretty interesting? You'd think it would be the other way around.

      In fact, what if it is the other way around? What if the failure of classic/legacy Web feeds has to do with power users' insistence on calling it "RSS"?

    1. this post reminds me of the initial comments to "Show HN: Dropbox"

      What? That's an insane comparison. This is like the total opposite of that comment; ActivityPub is super complicated.

    1. But for better or worse, ActivityPub requires a live server running custom software.

      This is bad protocol design. It violates (a variation of) the argument for the Principle of Least Power.

    1. This type of complexity has nothing to do with complexity theory

      Also not to be confused with the notion from the area of information theory of Kolmogorov complexity. (At least not directly—but that isn't to say there is no relation there.)

    1. Pretty nuts that Safari isn't open source. I thought for sure that Edge was going to be fully open source, both before and after the Blink conversion. Why even build closed source browsers in 2023?

    1. My father owed this man money, and this was his revenge.

      If you are allowed by someone to owe them money, then what are you getting revenge for...?

  3. Mar 2023
    1. the expert blind spot effect [25], when tutorial creators do not anticipate steps where novice tutorial takers

      the expert blind spot effect, when tutorial creators do not anticipate steps where novice tutorial takers may have difficulty

      I call this "developer tunnel vision".


    1. After 10 years of industry and teaching nearly 1000 students various software engineering courses, including a specialized course on DevOps, I saw one common problem that has not gotten better. It was amazing to see how difficult and widespread the simple problem of installing and configuring software tools and dependencies was for everyone.
    1. Even notebooks still are problematic, for example, this study found that only 25% of Jupyter notebooks could be executed, and of those, only 4% actually reproduced the same results.
    1. This will also take the stress away from the developers in maintaining the SublimeText core, which will be supported by the community while they can focus on pro features for the text editor.
    2. I feel that open sourcing SublimeText is the only way for SubmlineText to be relevant and compete against VSCode.

      The purpose of SublimeText is not to be "relevant" or "compete" against VSCode in the social media influencer sense of relevance and competition. It is to be a text editor that makes the author money both directly and indirectly (i.e. by selling licenses and being the kind of text editor that the author themselves uses to make software).

    3. Is It Time to Open Source SublimeText?

      This is such a bizarre article and headline. It's almost clickbait.

    1. Glenn, a seasoned pilot and astronaut, had just purchased an Ansco Autoset camera for a mere $40 from a drugstore

      Whether it was from a drugstore or not, that $40 in 1962 was like $400 today...

    1. On the one hand, it's a drag to do two different implementations, but on the other hand, it's a drag to have one of the two problems be solved badly because of baggage from the other problem; or to have the all-singing-all-dancing solution take longer than two independent solutions together.

      Premature generalization is the root of all evil?

    2. Well really the requirement is "small changes should be fast", right?

      Calling out the X/Y Problem.

    1. differentiating between using a database for indexing and as a canonical data store

      Most people who think they need a database really just need a cache? See jwz on the Netscape mail client:

      So, we have these ``summary files,'' one summary file for each folder, which provide the necessary info to generate that message list quickly.

      Each summary file is a true cache, in that if it is deleted (accidentally or on purpose) it will be regenerated when it is needed again

      https://www.jwz.org/doc/mailsum.html
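The summary-file idea in miniature (hypothetical names; `build_summary` stands in for whatever slow scan, like parsing message headers, produces the real index):

```python
import json
import os

def build_summary(folder):
    # The slow path: scan everything. Stand-in for e.g. parsing the
    # headers out of every message in a mail folder.
    return sorted(os.listdir(folder))

def load_summary(folder, summary_path):
    # A true cache: if the summary file is missing (deleted by accident
    # or on purpose), it is simply regenerated on demand. Nothing is
    # lost, because it was never the canonical store.
    if os.path.exists(summary_path):
        with open(summary_path) as f:
            return json.load(f)
    summary = build_summary(folder)
    with open(summary_path, "w") as f:
        json.dump(summary, f)
    return summary
```

The distinguishing property is in the second function: deleting the "database" costs you a rescan, not your data.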

    1. “For example, I personally believe that Visual Basic did more for programming than Object-Oriented Languages did,” Torvalds wrote, “yet people laugh at VB and say it's a bad language, and they've been talking about OO languages for decades. And no, Visual Basic wasn't a great language, but I think the easy DB interfaces in VB were fundamentally more important than object orientation is, for example.”
    2. never once reasoning about physical locations, hardware, operating systems, runtimes, or servers

      ... but instead replacing all the cognitive load that would have gone to that task instead to reasoning about AWS infrastructure...

    3. Coincidentally or not, the demise of Visual Basic lined up perfectly with the rise of the web as the dominant platform for business applications.

      ... which, as it turns out, is exactly what Yegge said he thought was going to happen in his response to the question that Linus was answering.

    4. Almost all Visual Basic 6 programmers were content with what Visual Basic 6 did. They were happy to be bus drivers: to leave the office at 5 p.m. (or 4:30 p.m. on a really nice day) instead of working until midnight; to play with their families on weekends instead of trudging back to the office. They didn't lament the lack of operator overloading or polymorphism in Visual Basic 6, so they didn't say much.The voices that Microsoft heard, however, came from the 3 percent of Visual Basic 6 bus drivers who actively wished to become fighter pilots. These guys took the time to attend conferences, to post questions on CompuServe forums, to respond to articles. Not content to merely fantasize about shooting a Sidewinder missile up the tailpipe of the car that had just cut them off in traffic, they demanded that Microsoft install afterburners on their buses, along with missiles, countermeasures and a head-up display. And Microsoft did.
    5. It gave me the start in understanding how functions work, how sub-procedures work, and how objects work. More importantly though, Visual Basic gave me the excitement and possibility that I could make this thing on my family's desk do pretty much whatever I wanted
    6. “The prevailing method of writing Windows programs in 1990 was the raw Win32 API. That meant the 'C' Language WndProc(), giant switch case statements to handle WM_PAINT messages. Basically, all the stuff taught in the thick Charles Petzold book. This was a very tedious and complex type of programming. It was not friendly to a corporate ‘enterprise apps' type of programming,”
    1. Yes, that is true. There's nothing you can do about that without breaking basic web expectations of URLs staying the same. The new endpoints can serve up the old content or Announce references to them, but the old URLs do need to continue resolving and at a minimum serve up a redirect if you want maximum availability.It would be a nice improvement to have a URL scheme that allowed referencing posts relative to a webfinger lookup to reduce the impact of that.

      Consider also a change to the conventions of UGC, where service operators give control of the objects (URLs) "owned" by a given user over to them, the owner. You should be able to connect your account with a request servicer like Cloudflare. You upload a document specifying how a worker should handle the request to your servicer of choice, inform the website operator that you'd like to route your requests through the servicer, and you're good.

    1. Browser-based interfaces are slow, clumsy, and require you to be online just to use them.

      No they don't.

      This conflates the runs-in-a-browser? property with the depends-on-mobile-code? property.

    1. NOTE: Cyren URL Lookup API has 1,000 free queries per month per user. COMPLETELY SEPARATE NOTE: You can use services like temp-mail to create temporary email addresses.

      Just be honest about your scumminess, instead of trying to be cute.

    1. Substack is growing fast: they now have 1M+ paid subscriptions but apparently generate no revenue. Which is already worrying to me. Because it means that yes, they can keep running like this if they keep getting investments but at some point something has to change.

      What if someone exploited this for a conversion strategy—and it was planned that way all along?

      Startup A receives VC investment. They spend it wooing creators to their platform, and those creators are able in turn to make a profit. The startup doesn't take a cut. The signs that they are going to crash and burn appear. Suddenly, 6 months before even the most pessimistic critics would have guessed, the startup announces that it is the end of their incredible journey. They warn everyone that in two weeks it will flip to read-only, and then 4 weeks after that it will go dark. Everything goes nuts. The creators were relying on the platform themselves to make money. No platform means no money. Suddenly a solution appears: Startup B. They offer more or less a drop-in replacement for Startup A's highest revenue-generating users. The only catch is that Startup B's plans cost money. Startup C also appears, along with Startup D, each catering to a different segment of stalwarts who haven't signed a deal with B. In fact, B then announces that they're investing in C and D, in order to promote a healthy ecosystem. Somehow A appears and says that they're investing in C and D, too. Meanwhile, A's sunset never happened—a month after the site went read-only, it's still in read-only mode. Then A announces low-cost paid plans, flips back to read-write, and opens for new signups, having successfully converted the most lucrative clients to the more expensive plans with B since they knew it was in their best interests to maintain continuity of revenue no matter the cost.

    1. I could port it to Hugo or Jekyll but I think the end result would make it harder to use, not simpler.
    2. Could this same design—or a similar one—be made available in a simpler form?

      Yes.

    1. I dislike the concept of editing old content on personal sites.

      Does that dislike extend to the reformatting of old publications e.g. when you pick a new template for your site? I'd guess not, but I'd argue that you should at least consider not doing that, either.

    1. haha. let’s not do that.

      ... unless...?

    2. perhaps subconsciously i have carried over those principles here

      Better analogy: in real life you can't actually unpublish something. At best you can go around trying to snatch up copies and then burn them. More realistically, you can publish a new edition with corrections to errata incorporated, but—notably—the "bad" version will always be *Blah Blah Blah*, first ed. The existence of the second edition doesn't erase the first one from people's bookshelves.

      It's on the Web and how poorly orgs' information architecture is carried out that foo.example.com/bar can be one thing one day and then another thing entirely the next day. If we used URLs like <https://mitpress.mit.edu/Zachary, G. Pascal. Endless Frontier: Vannevar Bush, Engineer of the American Century. 1st ed. MIT Press, 1999.> would this be as big of a problem? Would doing so nudge born-digital documents in the same direction?

    3. that’s not to say things should NEVER be edited

      Edit it, sure. But don't clutter the ability to unambiguously refer to the previous version (by the same name it was given initially) as a way to distinguish it from later versions.

    4. i don’t think it was designed to replicate the file and file folder model

      In my experience the file-and-folder model is the reason behind so much URL breakage. People don't seem to realize that even if you have foo/bar/ and foo/baz/, then that doesn't mean that you need to have a view that involves bar/ and baz/ among other things contributing to the "clutter" perceived when gazing upon something called foo/ (and that foo/, in turn, cluttering up whatever it's "inside").

      It's this perceived clutter and the compulsion to declutter that leads to people moving stuff around in the pursuit of a more legible model.

    5. in my experience, google docs documents are very rarely if ever deleted. every organization i’ve ever done work for that uses google workspace has a problem with document bloat where google drive is just a mess of disorganized files, and document management is a job in itself

      There's a lot wrong with Google Docs, but the fact that documents stick around is not one of them. Documents should stick around.

    1. Use GitHub issues for blog comments, wiki pages and more!

      No.

    1. The only exception is a page which is deliberately a "latest" page

      Nah. The latest URI should be a (temporary) redirect to the canonical URI of whatever the latest version is.

    2. There are no reasons at all in theory for people to change URIs (or stop maintaining documents)

      "Don't change your URIs" and "don't stop maintaining your documents" are contradictory.

      If Kartik has a published document at /about in 2022 and then when I visit /about in 2023 what it said before is no longer there and it says something else instead (because he has been "maintaining" "it"), then that's a URI change. These are two separate documents, and the older one used to be called /about but now the newer one is.

    3. During 1999, http://www.whdh.com/stormforce/closings.shtml was a page I found documenting school closings due to snow. An alternative to waiting for them to scroll past the bottom of the TV screen! I put a pointer to it from my home page.

      It's actually the expectation that /stormforce/closings.shtml should mutate, reflecting recency that is anathema to the project here...
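The convention suggested in the first point can be sketched as a tiny request-routing function (the paths are hypothetical; the point is the temporary redirect):

```python
def resolve(path, latest_uri):
    # /latest never serves content of its own. It answers with a
    # *temporary* redirect (302) to the canonical URI of whatever the
    # newest version happens to be, so every version keeps a stable
    # name and /latest stays honest as the versions roll forward.
    if path == "/latest":
        return 302, {"Location": latest_uri}
    return 200, {}
```

Clients that want "whatever is newest" follow /latest; clients that want to cite a specific version link to the canonical URI the redirect handed them.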

    1. By reducing the duration of operations, he increased the chances of patient survival, saved thousands of lives, and pissed off the surgeons after they found out that he used the same methods for bricklayers. Surely those holier-than-thou doctors deserved better than to be compared to a bricklayer. 😉

      The tone immediately makes me question the credibility/reliability of the information in this piece. (Perhaps, though, I made a mistake in not realizing not to expect too much from a site calling itself "allaboutlean.com"...)

    2. He also optimized surgeons’ work, establishing the now-common method of a nurse handing the instruments to the surgeon rather than the surgeon turning around and looking for the right tool.

      Not unlike shopping carts and the modern grocery store (Piggly Wiggly), footnotes, and page numbers, this is something that had to be invented.

      Notably, though, it is not a market product.

    1. likely the new people learning to code and yelling about the new shiny libraries they found

      Bart asked me about what it is that I think causes NPM to be so bad, generally (or something like that), and I responded with the one-word answer "insecurity".

      I think "striving for acceptance" is a better, more diplomatic way to put it.

    1. If you truly want to understand NLS, you have to forget today. Forget everything you think you know about computers. Forget that you think you know what a computer is. Go back to 1962. And then read his intent.

      Alternatively, try cajoling yourself to invert the "[kind of like] the present, but cruder" thinking and frame the present in terms of the past—with present systems being "Engelbart, implemented poorly".

    1. I should be able to edit after I publish.

      'k, but we should also be able to see (a) that it has been edited, (b) what was edited, (c) how to unambiguously refer to a particular revision. To not offer the ability to do so is to take advantage of something that is technically achievable given the architecture of the Web but violates the original intent (i.e. to give someone a copy that looks like this at one point and then when they or someone else asks for that thing at a later date to then lie and say that it really looks like that).

    1. One of the reasons this is so complicated is that there’s no simple or fast way to pay out musicians or labels for songs that are streamed in podcasts over RSS.

      WTF? This has fuck-all to do with RSS.

  4. podcasting20.substack.com
    1. The burden of resubscribing on a per-podcast basis every 7-15 days goes up exponentially as the podcasts being monitored grows into 6 or 7 digits.

      Mmm... how? It's just linear, unless I'm missing something.

    2. If you want to know within 1 minute if a podcast has a new episode

      You don't need to do that.

      Also: this problem is not specific to podcasting. It affects everything to do with RSS, generally.

    3. contains

      read: links to

    1. when you try to simulate it on the screen it not only becomes silly but it slows you down
    1. We've come up with a rule that helps us here: a change that updates node_modules may not touch any other code in the codebase.

      This makes it sound like a hack/workaround, but to want to do otherwise is to want to do something that is already on its face wrong. So there's no issue.

    2. Yes, this can be managed by a package-lock.json

      This shouldn't even be an argument. package-lock.json isn't free. It's like cutting all foods with Vitamin C out of your diet and then saying, "but you can just take vitamin supplements." The recommended solution utterly fails to account for the problem in the first place, and therefore fails to justify itself as a solution.

    1. It's unlikely to matter from a performance perspective. If you're only going to load, it doesn't really matter.

      Uh... what? This is a total shifting of the goalposts.

    1. “Why don’t you just” is not an appropriate way to talk to another adult. It’s shorthand for, “I have instantaneously solved your problem because I am The Solution Giver.

      Excellent summation.

    1. It isn't a good long term solution unless you really don't care at all about disk space or bandwidth (which you may or may not).

      Give this one another go and think it through more carefully.

    1. This web site is maintained by Tim Kindberg and Sandro Hawke as a place for authoritative information about the "tag" URI scheme. It is expected to stay small and simple.

      Emphasis: last sentence

    1. The common perception of the Web as a sui generis medium is also harmful. Conceptually, the most applicable standards for Web content are just the classic standards of written works, generally. But because it's embodied in a computer, people end up applying the standards they have in mind for e.g. apps.

      You check out a book from the library. You read it and have a conversation about it. Your conversation partner later asks you to tell them the name of the book, so you do. Then they go to the library and try to check it out, but the book they find under that name has completely different content from what you read.

    1. He would say, “To be early is to be on time.  To be on time is to be late.”

      I abhor tardiness and agree that being early is good advice, but this is a fucking stupid saying.

      Being on time is on time.

      (I mean this generally—not specific to the setting of this article.)

    2. required us to show up for concerts at least 30 minutes early.  If we were not 45 minutes early, we were marked as tardy

      "30 minutes", immediately followed by "45 minutes"... What?

    1. As for committing node_modules, there are pros and cons. Google famously does this at scale and my understanding is that they had to invest in custom tooling because upgrades and auditing were a nightmare otherwise. We briefly considered it at some point at work too but the version control noise was too much.

      If you don't want version control, then that's your choice, but admit (ideally out loud for others to hear, but failing that then at least to yourself) that that's what you're about.

    1. What problem does this try to solve?

      Funny (and ironic) that you should ask...

      I myself have been asking lately, what problem does the now-standard "Run npm install after you clone the repo" approach solve? Can you state the NPM hypothesis?

      See also: builds and burdens

    1. I'm not going to make the same defenses that folks on HN prefer.

      But the problem with Casey's worldview is that it provides no accommodations for the notion of zero-cost abstractions.

    1. absolute gem of a book, I use it for my compilers class:https://grugbrain.dev/#grug-on-parsing

      I didn't realize recursive descent was part of the standard grugbrain catechism, too, but it makes sense. Grugbrain gets it right again.

      Not unrelated—I always liked Bob's justification for using Java:

      I won't do anything revolutionary[...] I'll be coding in Java, the vulgar Latin of programming languages. I figure if you can write it in Java, you can write it in anything.

      https://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/

  5. Feb 2023
    1. As is evident, this is a structured document. The structure is specified as HTML using tags to denote HTML elements, and a styling language called CSS is used to specify rules that use selectors to match elements. The desired styles can be applied to matched elements by specifying the properties which should take effect for each rule.

      This goes on to provide a bunch more info with the express purpose of making it possible to 1. Print out this page, and then 2. Recreate the whole thing by hand if you wanted to, using only the printed page as reference.

      Could easily add a section that describes a bookmarklet that you could use to transform the "live" (in-browser) document into something formatted like the one at "This page is a truly naked, brutalist html quine" https://secretgeek.github.io/html_wysiwyg/html.html.

    2. Note that if you add any highlights using Hypothes.is to any of the CSS code blocks here, it will break them.

    1. If you're looking for Stavros's "no-bullshit image host", that's https://imgz.org/