3,525 Matching Annotations
  1. Nov 2024
  2. ben-mini.github.io
    1. It clocks in at 94 pages and has 30 ratings on Amazon! Go IMG_0416! I don’t care what you’re creating- I’m just a fan of creators.

      This is one of those weird positions like, "It doesn't matter who you vote for. Just vote!" that people are sure to regret when faced with the right unforeseen counterexample.

    1. In many of the above examples, once an organizing principle for the system is identified, the details of the solution are quite simple.

      This principle is behind good documentation, too. Too often programmers describe what their solution does and how it does it, but not why. Part of the why is just describing the problem that the solution is meant to address.

    1. I found this really hard to read on archive.is (https://archive.is/YkIyW).

      I used this snippet to reformat the article to manually float the "annotations" (pull-outs) to the margins:

      ```` javascript
      document.getElementById("CONTENT").style.width = "1720px";

      [ ...document.querySelectorAll("[id^=annotation]") ].forEach((x, i) => {
        if (i % 2) {
          x.style.left = "";
          x.style.right = "-44ch";
        } else {
          x.style.left = "-44ch";
          x.style.right = "";
        }
      });
      ````

    1. I'm amazed at the lack of thoughtfulness in the original post that this change of heart refers to. From http://rachelbythebay.com/w/2011/11/16/onfire/:

      I assigned arbitrary "point values" to actions taken in the ticketing system. The exact details are lost to the sands of time, but this is an approximate idea. You'd get 16 points for logging into the ticketing system at the beginning of your shift, 8 for a public comment to the customer, 4 for an internal private comment, and 2 for changing status on a ticket. [...] The whole point of writing this was to see who was actually working and who was just being a lazy slacker. This tool made it painfully obvious [...]

      This is, uh, amazingly bad. It goes on, and in a way that makes it sound like self-aware irony, but it's clear by the end that it's not parody.

      The worst support experiences I've had were where it felt like this sort of pressure to conspicuously "perform" was going on behind the scenes, which was directly responsible for the shoddy experience—perfect case studies for Goodhart's Law.

      The author says they've had a change of heart, so surely they've realized this, right? That's what led to the change of heart? Nope. Reading this post, it's this: "my new position on that sort of thing is: fuck them." As in, fuck them for not appreciating the value of this work and for needing it to be done for them in the first place. The latter is described at length, where the author says it's the managers' job to already know these things—that is, the stuff that these metrics would say, if the data were being crunched. "Make them do their own damn jobs", the author says.

      (I often see this blog appear on HN, and I've read plenty of the posts that were submitted to HN but have never exactly grokked what was so appealing about any of it. I think with this series of posts, it's a good signal that I can write it off and stop trying to "get" it, because there's nothing to get—just middling observations and, occasionally, bad conclusions.)

  3. Oct 2024
    1. To get a list of all the public domain scans, as of this writing:

      ```` javascript
      [ ...document.querySelectorAll("table.auto-style21 a") ]
        .filter((x) => (
          x.textContent.includes("19") &&
          !x.textContent.includes("1929") &&
          !x.textContent.includes("193") &&
          !x.textContent.includes("194")))
        .map((x) => {
          let when = x.textContent;
          if (!when.includes(",")) when = when.split(" ").reverse().join(" ") + " 01";

          try {
            var result = (new Date(when)).toISOString().substr(0, ("1928-10-29T...").indexOf("T"));
          } catch (ex) {
            console.log(x.textContent, when, ex);
          }

          if (when != x.textContent) {
            result = result.substr(0, ("1987-12-09").length - ("-09").length);
          }
          return result;
        })
      ````

    1. It's far more performant than using getter-setters, on top of being more performant than generating getter-setters. Further, it's type safe: ESLint or TypeScript can warn you about non-existent properties and possible type mismatches.

      It's also, you know, way more grokkable.
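      To make the contrast concrete, here's a minimal sketch (the property names are invented for illustration): a plain data object versus the same shape wrapped in runtime-generated getter/setters.

      ```javascript
      // Plain data properties: fast, and statically analyzable.
      const plain = { x: 1, y: 2 };

      // The runtime-generated getter/setter equivalent.
      const wrapped = {};
      for (const [key, value] of Object.entries({ x: 1, y: 2 })) {
        let backing = value;
        Object.defineProperty(wrapped, key, {
          get() { return backing; },
          set(v) { backing = v; },
        });
      }

      // Both behave the same at runtime...
      console.log(plain.x + plain.y);     // 3
      console.log(wrapped.x + wrapped.y); // 3

      // ...but only the plain object's shape is visible to static analysis:
      // TypeScript or ESLint can flag a typo like `plain.z`, while properties
      // defined dynamically on `wrapped` are opaque to that kind of check.
      ```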

    1. Filter all lists to include just the ones with partial serials:

      ```` javascript
      [ ...document.querySelectorAll("li") ]
        .filter((x) => (!!x.querySelector("img.info")))
        .filter((x) => (!x.textContent.trim().endsWith("(partial serial archives)")))
        .forEach((x) => (x.parentNode.removeChild(x)))
      ````

    1. by porting ffmpeg to the zig build system, it becomes possible to compile ffmpeg on any supported system for any supported system using only a 50 MiB download of zig. For open source projects, this streamlined ability to build from source - and even cross-compile - can be the difference between gaining or losing valuable contributors.
    1. New products are often incongruent with consumer expectations. Researchers have shown that consumers prefer moderately incongruent products, while being adverse to extremely incongruent products.
  4. Sep 2024
    1. You can't become the I HAVE NO IDEA WHAT I'M DOING dog as a professional identity. Don't embrace being a copy-pasta programmer whose chief skill is looking up shit on the internet.

      Similarly, a few years ago I was running into a bunch of people saying stuff like, "Every programmer uses Stack Overflow. Everyone." Which is weird because it definitely had the feel of a sort of proactive defensiveness every time it came up, plus there's the fact that it's not true that every programmer uses Stack Overflow. At the time I kept running into this kind of thing, I had basically never used it, but not for lack of trying or any sense of superiority. Every time I'd landed there the only thing I encountered was low-quality answers and a realization that Stack Overflow just doesn't specialize in the kind of stuff that's useful to me. (In the years since, I've landed there quite a bit more than before, and I have found it useful—but almost never for actual programming...)

    2. Especially if there are people within your profession who use their diplomas as a logical fallacy to prove why they're right and you're wrong.

      I don't think I've ever seen this in a technical discussion. Credentialism in the form of "X years experience with Y" or someone trying to flex other parts of their résumé (e.g. previous employers)? Definitely.

      Most often, though, I just run into Ra + a fuckton of ho-hoism. This is never tinged by academic credentials, even a little bit.

    1. a total of 26 volumes

      I'm curious where this comes from. As the archives below indicate, Hathitrust only has volumes up to volume 24 (1898). UT PCL also stops at volume 24. Everything available seems to stop there.

      I did chase down the Mott reference from the Wikipedia article that says it ran until 1900, but I don't remember seeing a volume count. I wonder if 1900 is substantiated anywhere else and whether the volume count is an independent claim or a derivative of the claim of cessation in 1900.

      (Maybe the copyright records indicate two more volumes?)

    1. We do not currently know of free online issues of The New Monthly Magazine. If you know of any, please let us know.

      This is listed at https://onlinebooks.library.upenn.edu/new.html, so I'm not sure why there are no volumes listed.

      Hathitrust has part of the magazine under the title The New Monthly Magazine and Universal Register (1814–1820; missing vols 3, 5, 7, 9, and 10), but under the title The New Monthly Magazine, all these volumes are represented. There are at least some under the name The New Monthly Magazine and Literary Journal (1821–1836), and also some under the name The New Monthly Magazine and Humorist (1837–1852). It has the final volume in 1882 under the name The New Monthly.

    1. These numbers may range from 1 to 9999

      So that means we won't see a classification/subclass with a range like AZ57482. But is there a limit on the number of digits after the decimal?

      (Also, the literal reading of the statement here means that the range in EG9999.293 is invalid—because 9999.293 is greater than 9999.)

    1. Nelson said that he would be describing a lot of these ideas in an anthology to be released later in 1989 called Replacing the Printed Word

      It doesn't look like this happened.

    1. Turco argued in 2016 that the problem was of supply more than it was of demand; while it was certainly the case that the sometimes-bewildering multiplicity of potential user interfaces deployed for different digital editions was one factor putting humanities scholars off using them, more significant was that the coding skillsets (or the resources needed to buy these in) was so alien to those same scholars that it was discouraging them from producing them in the first place
    1. Avoid using a 1 unless specifically instructed to do so in the schedules or in the CSM. If you find that it is absolutely necessary, never use it as the final digit of a cutter because you might have to use a zero in the cutter for the next resource. Instead, add another digit. And finally, avoid using a 2 if at all possible. Using a 2 can force the use of a 1, which can force the use of a 0.
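      The reasoning here can be made concrete: cutters sort like decimal fractions, so to shelve a new resource between two existing cutters you extend with another digit rather than renumber. A small sketch of my own (an illustration, not CSM procedure):

      ```javascript
      // Cutters sort like decimal fractions: ".B4" < ".B45" < ".B5". If an
      // existing cutter ends in 1 (e.g. ".B41"), the only room below it before
      // ".B4" itself is ".B40...", which forces the zero the quoted rule warns
      // about; ending in 2 leaves only 1 below, which in turn forces the 0.
      function asFraction(cutter) {
        // ".B45" -> the digits "45" read as the fraction 0.45 (initial letters
        // stripped for this illustration).
        return parseFloat("0." + cutter.replace(/^\.[A-Z]+/, ""));
      }

      console.log(asFraction(".B4") < asFraction(".B45"));  // true
      console.log(asFraction(".B45") < asFraction(".B5"));  // true
      ```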
    1. From reading the book, I learned that Cutler had the same mentality for his OS and, in fact, the system wasn’t ported to x86 until late in its development. He wanted developers to target non-x86 machines first to prevent sloppy non-portable coding, because x86 machines were already the primary desktops that developers had. In fact, even as x86 increasingly became the primary target platform for NT, the team maintained a MIPS build at all times, and having most tests passing in this platform was a requirement for launching.

      I'm reminded of the time when it was revealed about 10–15 years or so ago that when Apple switched to x86 from PowerPC, it wasn't the result of a big porting effort. They'd been maintaining portability all along—doing private builds internally that just never saw the light of day. When this came to light, the reaction was huge. People were awed.

      A few years ago, when this piece of trivia was brought back to the forefront of my mind again after having not thought about it for years, I was struck by how silly that reaction was. Of course it makes sense that they'd been maintaining portability. There was nothing stupendous about this.

      I think this is the one time when I saw and felt the effects of Apple's legendary reality distortion field firsthand. (In every other instance, I hadn't been close enough and so only perceived it from afar and only had other sources to trust that it was a real phenomenon.)

    2. This sounds ridiculous, right? Why wasn’t there a CI/CD pipeline to build the system every few hours and publish the resulting image to a place where engineers could download it? Ha, ha. Things weren’t this rosy back then. Having automated tests, reproducible builds, a CI/CD system, unattended nightly builds, or even version control… these are all a pretty new “inventions”. Quality control had to be done by hand, as did integrating the various pieces that formed the system.

      This still describes the way the semiconductor manufacturing world works.

    3. Now, more than 30 years later, NT is behind almost all desktop and laptop computers in the world.

      This is sort of an odd remark. Even excluding servers and focusing only on traditional desktop and laptop computers, Windows' dominance is as weak now as it has been at any other point in the last 30 years.

      Most desktop and laptop computers? Yeah, probably.

      "Almost all"? Surely not.

    1. You have also noticed the blue bar by this time. The bar indicates the number that is selected, and it can be moved around by double-clicking. Moving it changes the data in the hierarchy pane. Watch the hierarchy pane as I double-click to move the bar around the screen.

      This characterization is a good case study in the odd (off) conceptualization of computer UI...

    1. number that may be a whole number or a whole number with a decimal, such as 2301, 111, 756.5,

      So the decimal does not indicate the presence (introduction) of a cutter.

      Is the rule, then, that if the character following the decimal is a digit, then it's a decimalized classification, and if it's a non-digit (alpha?) then it's a cutter?
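      A sketch of that hypothesized rule (the rule itself is my conjecture, not something the source states):

      ```javascript
      // Conjectural classification of what follows the first decimal point in
      // a call number: a digit would mean a decimal extension of the class
      // number; a letter would mean the start of a cutter.
      function classifyAfterDecimal(callNumber) {
        const i = callNumber.indexOf(".");
        if (i === -1) return "no decimal";
        const next = callNumber[i + 1];
        if (/[0-9]/.test(next)) return "decimal classification";
        if (/[A-Za-z]/.test(next)) return "cutter";
        return "unknown";
      }

      console.log(classifyAfterDecimal("756.5"));      // "decimal classification"
      console.log(classifyAfterDecimal("PS3515.A94")); // "cutter"
      ```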

    1. The March 1964 issue has a 1964 copyright notice, but the CCE states that its actual copyright date was December 23, 1963.

      Sean Dudley pointed out something similar to me—even though a given Black Mask issue might be the January 19XX issue, it was probably actually published and distributed sometime in December. This means that if you have a serialization that began in 1928 that you want to use public excerpts from, and it ended in the January 1929 issue, then there's a good chance that the whole thing is actually public domain.

    1. I referred (indirectly) to this in an annotation on https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ as "the PDF". As the first page indicates, this is, rather, a PDF—specifically, someone's PDF of the ACM's reprint from 1996 (which can be found hanging off this DOI: https://dl.acm.org/doi/10.1145/227181.227186).

      The Atlantic's PDF can be found here https://cdn.theatlantic.com/media/archives/1945/07/176-1/132407932.pdf (at least for now).

    1. Handles, in contrast, can be homed on any handle server and transferred at will -- so another organization can take over handles for a merging or dying organization. Handle namespace names are not related in any way to the hostname of their home server except perhaps by coincidence.

      This is a superficially attractive argument, but it doesn't hold up.

      In practice, links to handles are tied to the domain of the organization operating the handle resolver service. The argument that the handles themselves are durable and can survive the demise of the domain and the org that controls it is not a good one; for everything that makes this true for handles, it's also true for names based on conventional/traditional URLs—

    1. In order to guarantee persistence, the DOI Foundation has built a social infrastructure on top of the technical infrastructure of the Handle System. Persistence is a function of organizations, not of technology; a persistent identifier system requires a persistent organization, agreed policies and defined processes.
    1. I might point out that the definite and formal techniques and procedures provided us by social heritage mostly involve specialized and idealized aspects of the workload and needs of the individual. There apparently never has been an over-all or “system” approach to the problem of assisting the individual in being effective in his over-all problem-solving role.

      I find this extremely hard to parse.

  5. Aug 2024
    1. the retailer response is to send me an individual email every time they notice one

      It's almost as if link rot is a problem that publishers should, you know, do something about...

    2. this is a problem for print books as well as for the ebooks of course, but I think we’re more content to let the URLs in print books function essentially as decoration—as signs that there is scholarship underlying their claims

      baffling

    1. Nor were we using the pieces in waysinappropriate to their advertised scope of applicability.

      Kiczales is fond of the metaphor of implementing a spreadsheet by making each cell its own window under the native platform's windowing system.


    1. My side projects from 2012-2017 cannot be built or ran because of dependencies. My jsbin repo with lots of experiments cannot be ran anymore. But I have the sqlite database. I forgot to pin dependencies when I was working. It would take a lot of trial and error and effort to get back to where I was.
    1. I have written down all these thoughts as ‘remarks’, short paragraphs, of which there is sometimes a fairly long chain about the same subject, while I sometimes make a sudden change, jumping from one topic to another, – it was my intention at first to bring all this together in a book whose form I pictured differently at different times. But the essential thing was that the thoughts should proceed from one subject to another in a natural order and without breaks. After several unsuccessful attempts to weld my results together into such a whole, I realised that I should never succeed. The best that I could write would never be more than philosophical remarks; my thoughts were soon crippled if I tried to force them on in any single direction against their natural inclination. – And this was, of course, connected with the very nature of the investigation. For this compels us to travel over a wide field of thought criss-cross in every direction.

      This precedes Nelson on hypertext.

    1. Nash’s Magazine:   (about) Nash’s Magazine—UK; Apr. 1909-Sep. 1937 (532 issues); merged with The Pall Mall Magazine, Oct. 1914, as Nash’s and Pall Mall Magazine, separated again May 1927-Sep. 1929, re-merged, Oct. 1929 as Nash’s—Pall Mall Magazine; Eveleigh Nash, London (1909-1911), Hearst’s National Magazine Company (1911-1937); monthly; standard format, on pulp paper until Feb. 1910, when better-quality coated stock introduced, with more illustrations; became a large-format slick in 1923; mostly fiction, including Algernon Blackwood, William Hope Hodgson, Oliver Onions, Marie Belloc Lowndes (“The Lodger” Jan. 1911).

      I can't seem to locate these issues. If I search Hathitrust or lib.utexas.edu, it just gives me Pall Mall. We know from a 1913 issue of Hearst's in which Chesterton's "The Treason of a Jingo" was (re-)published that there is some 1912 issue of Nash's (apparently September) in which the writer Sydney Brooks published "The Conquering English"; Chesterton's piece is a response to Brooks's. At the time, Nash's and Pall Mall were still separate—according to Wikipedia, they didn't merge until 1914. And indeed, it looks like there are independent issues for September 1912 of both Nash's and Pall Mall. Viz:

    1. WE are thinking and talking a great deal now-a-days about placing the right man in the right job, about putting round pegs in round holes and square pegs in square holes, and this subject is one of the most vital problems that confronts us all, whether we work for others or employ men to work for us.

      Is this a reference to Taylor and the sort of work that the Gilbreths were doing?

    1. As seen in the table above, namespace URIs tend to be long and cryptic, with lots of punctuation and case-sensitive text. In this instance the W3C has compounded the problem by adding dates to ensure that the namespace URIs are unique, as if it were likely that the W3C would create another "XSL/Transform" or "xhtml" namespace in the future. While namespace URIs may be guaranteed to be unique, they are also guaranteed to be impossible to remember. Quick, without checking, can you remember if the namespace URI for W3C XML Schema ends with "xmlschema", "XML/Schema", or "XMLSchema"? Was the namespace URI for SVG allocated in 1999, 2000, or 2001?

      It's odd that this is considered an issue; I take it to be a consequence of the times.

      Does anybody worry about being able to remember the URLs of, say, their Golang imports?

    1. If HTML had been precisely defined as having to have an SGML DTD, it may not have become as popular as fast, but it would have been a lot architecturally stronger.

      Alternative take: if the HTML5 parsing algorithm (and its error handling) had been precisely defined, then HTML would have become as popular as fast (maybe even faster?) while being a lot more cross-compatible.

    1. There is a set of formats which every client must be able to handle. These include 80-column text and basic hypertext ( HTML ).

      TBL says that browsers must be able to handle plain text (and not just that, but 80-column text). I wonder if this mandate appears anywhere else in modern standards (rather than just implemented by convention). It should.

      (I am genuinely concerned about the possibility that browsers could/would remove support for plain text.)

    1. I’m gonna go out on a limb and suggest Pratt would not have fared well in the #MeToo era.

      #MeToo was about rape, other sexual assault, and harassment. Pratt may very well have been guilty of some or all of these things, but it's not suggested by anything in the preceding passage. (And it's pretty reductive/diminutive of the actual crimes and other transgressions relevant to the #MeToo label to point to that passage and have a response that is essentially, "lol #MeToo amiright?")

    1. Written inPython, Cython

      Is this accurate? I don't have a lot of firsthand experience with data science stuff, but usually when looking just past surface-level you find that some Python package is really a shell around some native binary core implemented in e.g. C (or Fortran?).

      When I look at the repos for spaCy and its assistant Thinc, GitHub's language analysis shows that it's pretty much Python. Is there something lurking in the shadows that I'm not seeing? Or does this mean that if someone cloned spaCy and Thinc and wrote them in JS, then the subset of data scientists whose work can be done with those two packages (and whatever datavis generators they use) would benefit from the faster runtime and the elimination of figging and other setup?

  6. Jul 2024
    1. Eventually, there will be different ways of paying for different levels of quality. But today there some things we can do to make better use of the bandwidth we have, such as using compression and enabling many overlapping asynchronous requests. There is also the ability to guess ahead and push out what a user may want next, so that the user does not have to request and then wait. Taken to one extreme, this becomes subscription-based distribution, which works more like email or newsgroups. One crazy thing is that the user has to decide whether to use mailing lists, newsgroups, or the Web to publish something. The best choice depends on the demand and the readership pattern. A mistake can be costly. Today, it is not always easy for a person to anticipate the demand for a page. For example, the pictures of the Schoemaker-Levy comet hitting Jupiter taken on a mountain top and just put on the nearest Mac server or the decision Judge Zobel put onto the Web - both these generated so much demand that their servers were swamped, and in fact, these items would have been better delivered as messages via newsgroups. It would be better if the ‘system’, the collaborating servers and clients together, could adapt to differing demands, and use pre-emptive or reactive retrieval as necessary.

      It's hard to make sense of these comments in light of TBL's frequent claims that the Web is foremost about URLs. (Indeed, he starts out this piece describing the Web as a universal information space.) It can really only be reconciled if you ignore that and understand "the Web" here to mean HTML over HTTP.

      (In any case, the remarks and specific examples are now pretty stale and out of date.)

    1. This foreword is described in the book as being "written as an article in 1997". There's a brief introduction (8 paragraphs dated December 2002), and then what follows is purportedly that same article, which begins, "The Web was designed to be a universal space of information[...]". The acknowledgements of the foreword, too, says that it "is based on a talk presented at the W3C meeting, London, December 3, 1997".

      The same material, including acknowledgement, but sans the 8-paragraph introduction, is available on a webpage titled "Realising the Full Potential of the Web" on the W3C site. https://www.w3.org/1998/02/Potential.html

    1. Programming models, user interfaces, and foundational hardware can, and must, be shallow and composable. We must, as a profession, give agency to the users of the tools we produce. Relying on towering, monolithic structures sprayed with endless coats of paint cannot last. We cannot move or reconfigure them without tearing them down.

      Counterpoint: the judicious use of abstraction is/can be, in some instances, the solution to giving users agency and reconfigurability.

      Software that has to be torn down is the result of software built upon bad abstractions. Abstractions are not ipso facto bad. They just need to be chosen on the criterion of whether or not they solve a problem.

    1. We decided that we would like to see better documented code included within web pages for convenient browsing. The motivation behind this peculiar aim is to be able to include high quality documentation alongside working code, hopefully making it easier for programmers to produce more maintainable, readable programs.
    1. It would be useful to track down the misleading statement that Mozilla PR released that suggested that neither party was receiving kickbacks with the new Pocket integration. The reality is that there was money changing hands related to the decision to integrate Pocket (NB: this was pre-Pocket acquisition by Mozilla), but the statement was worded just so to merely suggest that no money was changing hands without ever explicitly stating so—the idea being no doubt that they could claim plausible deniability wrt any false statements and blame the reader/listener for misunderstanding. The problem with this is that it backfired because it was so successful that Mozilla programmers who weren't in-the-know themselves took the statement to mean exactly the thing implied, and then they took to all sorts of public fora and "refuted" people using the PR piece, only these duped employees were explicitly claiming that there wasn't anything untoward going on, rather than the way the PR statement merely implied it. Plausible deniability moot.

      (I was hoping after stumbling upon this old piece that I'd see the statement here, which would allow me to trace the contamination to e.g. HN comment threads around the same time, but this isn't the statement. It's a good clue as to when, precisely, it might have been issued.)

  7. Jun 2024
    1. They never end up in the terminal, because that is a huge jump in complexity, usability, and frustration

      I've been saying for years that if, say, the Gnome project wrote a new terminal emulator and replaced the default desktop terminal emulator with it and they made sure that the new one had a scrollback buffer that made your call-and-response session with the machine look more like iMessage/SMS bubbles that people are more comfortable with, people would be a lot less reluctant to use it. Once you've done that, replace Bash with an even better shell (in the same vein as the overhaul I just mentioned—not merely with something as conservative as fish or zsh), and you could magnify that effect 10x.

      The real problem is:

      1. public perception upon showing someone a terminal window—that they immediately adopt the thought, "Uh-oh. I don't know about this computer stuff, and I shouldn't be here"

      2. the fact that the default terminal emulators do evoke that feeling

      3. and the fact that tech folks are okay with this (and defensive of it, even)

    2. the right knowledge to make a full-stack app

      Worth considering Brooks's distinction of essential versus incidental complexity. It's especially worth considering the instances where the "incidental" part persists because, if it were easier, a lot of people would be unhappy, for reasons that I call "the consultant effect".

    3. He painted a vision of applications that could be used by dozens of users rather than thousands or millions. This is an absurd target population both then and now.

      It's interesting, because there's tons of software out there that has exactly one user, and then it drops off sharply for x>1 and then goes back up when you get into, what, I dunno, the hundreds?

    1. some non-technical people are just scared of any monospaced writing with syntax highlight

      An obvious question arises: why not just not format it like that?

      How many people could you trick into using a conventional CLI if the text entry and output weren't in a window that looks like a traditional terminal emulator? What if the commands were more humane (like a step up from PowerShell) and the screen looked like you were interfacing with a not-explicitly-human agent on the other end of a messaging-like app?

    1. there is also an honesty problem when contents change or update without record

      To underscore this, I've also settled on characterizing this as a problem of honesty. I put it in terms of lying—i.e. people lying about the identity (URL) of their work (Web resources).

      URLs should not be considered reusable/recyclable—at least for the duration of the original publishing authority's continued renewal and control of the domain where it appeared (and even then...)

    1. because there is no compilation step

      That (a compilation step) is not why you get advance warning of the sort the interviewer means when you're programming in C++ and Java. It's static typechecking that's responsible for that. You can have static typechecking without also requiring compilation.
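      A sketch of that last point: plain JavaScript with `// @ts-check` and JSDoc annotations gets statically typechecked (e.g. by `tsc --noEmit --allowJs --checkJs`, or automatically in editors like VS Code) even though nothing is ever compiled.

      ```javascript
      // @ts-check
      // Ordinary JavaScript, run directly with no compilation step. A static
      // checker reads the JSDoc annotations and reports type errors before the
      // code ever runs.

      /**
       * @param {number} a
       * @param {number} b
       * @returns {number}
       */
      function add(a, b) {
        return a + b;
      }

      console.log(add(2, 3)); // 5
      // add("2", 3); // <- flagged statically by the checker, no compile step needed
      ```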

    2. I went back to Dmitry and asked him if my understanding of “everything is a table” was correct and, if so, why Lua was designed this way. Dmitry told me that Lua was created at the Pontifical Catholic University of Rio de Janeiro and that it was acceptable for Pontifical Catholic Universities to design programming languages this way.
  8. May 2024
    1. Take three different translations of any book – say, The Brothers Karamazov or Madame Bovary – and compare them to see how many times they have entire sentences exactly the same.  Never, or almost never.

      I'm curious if Bart has ever actually tried doing this exercise.

  9. Apr 2024
  10. Mar 2024
    1. WACZ - Web Archive Collection Zipped - is used by the WebRecorder project, which seems to be an active effort to create an open standard for web archiving, though you wouldn't realize it by the design of their website. I almost thought it was also a dead 1990s effort until I saw the August 2022 update.

      What? Neither of those links is particularly sparse, and both have the marks of modernity.

    2. a CSP header on the index page to prevent malicious scripts from running

      Browsers don't need the author to put CSP headers on the page [sic] or the server response to prevent scripts from running. They can just not run the scripts.

    3. just think how happy TBL will be to finally have Phase 2 completed after 30 years

      Not mentioned explicitly by this author, and he does say "completed" here, but it's not the case that TBL never got around to Phase 2. The original WorldWideWeb.app did do document editing.

    4. Using a standard set of semantic HTML and CSS as the underlying markup for documents solves these problems

      It doesn't solve anything re using Git for version control. Any problems that exist with docx are going to exist with this proposal, too.

    5. Rich text, such as bold and italic, among other examples, shouldn't be optional or considered extraneous to language. The fact that computing technology has gone so long ignoring these essential parts of communication is bewildering. You can send a text message from your phone including a variety of customizable emojis in various skin tones, but basic text formats used for literally hundreds of years are either impossible to enter, or lost in transmission. There's more than subtle a difference between, "You really should do something," and "You really should do something". Having to write out ideas using plain text with weird symbols such as _ this _ or * this * is truly a loss, and in the 21st century, completely inexcusable.

      I disagree.

    6. You can whip up cover letters in no time using ChatGPT! Just paste in your resume text, position title and company name and ask it to write a cover letter for you. It summarizes your skills really well in context of the position and company. Such a time saver. Like everything else AI does lately, it's absurdly good and in Ryan Reynold's words, "mildly terrifying." I have no idea who actually reads cover letters
    1. most of the coding trouble I’ve ever gotten myself into was mainly a result of thinking I was smart

      See also: the rise of orthogonal version control systems, aka "language package managers".

  11. gitlet.maryrosecook.com gitlet.maryrosecook.com
    1. we write functions as functionName rather than functionName(); the latter is more common, but people don’t use objectName{} for objects or arrayName[] for arrays, and the empty parentheses makes it hard to tell whether we’re talking about “the function itself” or “a call to the function with no parameters”

  12. Feb 2024
    1. So What Would a Static Site Generator for the Rest of Us Like Like?

      Not like a static site generator, that's for sure. Normal people don't want a step in between input source code and the output. They don't want a difference between input and output at all. Programmers want a compilation step, because they're programmers.

  13. Jan 2024
    1. Wirth himself realized the problems of Pascal and his later languages are basically improved versions of Pascal -- Modula, Modula-2, and Oberon. But these languages didn't even really displace Pascal itself let alone C -- but maybe if he had named them in a way that made it clear to outsiders that these were Pascal improvements they would have had more uptake.

      Modula and Oberon should have been codenames rather than independent projects.

    1. Pascal largely lost to its design opposite, C, the epitome of permissiveness, where you can (for example) add anything to almost anything

      C programmers balk and cry, "JavaScript!"

    1. in Java, the vulgar Latin of programming languages. I figure if you can write it in Java, you can write it in anything

      One of my favorite turns of phrase about programming. I come back to it multiple times a year.

    2. You can do this with recursive descent, but it’s a chore.

      Jonathan Blow recently revisited this topic with Casey Muratori. (They last talked about this 3 years ago.)

      What's a little absurd is that (a) the original discussion is something like 3–6 hours long and doesn't use recursive descent—instead they descended into some madness about trying to work out from first principles how to special-case operator precedence—and (b) they start out in this video poo-pooing people who speak about "recursive descent", saying that it's just a really obnoxious way to say writing ordinary code—again, all this after they went out of their way three years ago to not "just" write "normal" code—and (c) they do this while launching into yet another 3+ hour discussion about how to do it right—in a better, less confusing way this time, with Jon explaining that he spent "6 or 7 hours" working through this "like 5 days ago". Another really perverse thing is that when he talks about Bob's other post (Parsing Expressions) that ended up in the Crafting Interpreters book, he calls it stupid because it's doing "a lot" for something so simple. Again: this is to justify spending 12 hours to work out the vagaries of precedence levels and reviewing a bunch of papers instead of just spending, I dunno, 5 or 10 minutes or so doing it with recursive descent (the cost of which mostly comes down to just typing it in).

      So which one is the real chore? Doing it the straightforward, fast way, or going off and attending to one's unrestrained impulse that you for some reason need to special-case arithmetic expressions (and a handful of other types of operations) like someone is going to throw you off a building if you don't treat them differently from all your other ("normal") code?

      Major blind spots all over.
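
      For concreteness, here is roughly what "just typing it in" amounts to: a minimal recursive-descent sketch in Python with one function per precedence level. The grammar and helper names are hypothetical, chosen for illustration; this is not Jon's or Bob's actual code.

```python
# A minimal sketch of "just write normal code" recursive descent:
# one function per precedence level, highest-binding at the bottom.
# Hypothetical four-rule arithmetic grammar, for illustration only.

import re

def tokenize(src):
    # Numbers, operators, and parentheses; whitespace is skipped.
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.peek()
        self.pos += 1
        return tok

    # expr := term (('+' | '-') term)*   -- lowest precedence
    def expr(self):
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.next()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    # term := factor (('*' | '/') factor)*
    def term(self):
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.next()
            rhs = self.factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    # factor := NUMBER | '(' expr ')'   -- highest precedence
    def factor(self):
        tok = self.next()
        if tok == "(":
            value = self.expr()
            self.next()  # consume ')'
            return value
        return int(tok)

print(Parser(tokenize("1 + 2 * (3 + 4)")).expr())  # → 15
```

      Precedence falls out of the call structure for free: `expr` calls `term`, which calls `factor`, so `*` binds tighter than `+` without any special-casing. Adding a level (say, exponentiation) is one more function in the same pattern.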

    1. There’s not much of a market for what I’m describing.

      There is, actually. Look at Google Docs, Office 365, etc. Those are all an end-run around the fact that webdevs are self-serving and haven't made desktop publishing for casual users a priority.

      The webdev industry subverts users' ability to publish to the Web natively, and Google, MS et al subvert native Web features in order to capture users.

      The users are there.

    1. "I've been thinking about the problem with division of labor for 7 years now, and I think I've boiled it down to two sentences. Why division of labor is disempowering: 1. (the setup) Power = capability - supervision. 2. Division of labor tends to discourage supervision."

      I think this is too pithy. It's hard to make out what applies to which actors and what's supposed to be good or bad; in order for me to understand this, I have to know a priori Kartik's position on division of labor (it's bad), then work backwards to see what the equations are saying and try to reconstruct his thinking. That's the opposite of what you want! The equations are supposed to be themselves the explanatory aide—not the thing needing explanation.

    1. (Sounds a little antisocial, sure, but you can imagine good reasons.)

      Geez. What?

      I'm not even sure Brent actually believes this so much as that he felt the need to post a defense. Or maybe he really does believe it. But it needs no defense.