3,465 Matching Annotations
  1. Jul 2021
    1. The world could benefit from a curated set of bookmarklets in the style of Smalltalk ("doIt", "printIt", etc buttons) that you can place in your bookmarks bar (or copy into a bookmarks document and open it in your browser), where the purpose would be to allow you to:

      1. access a new scratch area (about:blank) for experimentation
      2. make it editable, or make any given element on a page editable
      3. let you evaluate any code written into the scratch space

      scratch.js aims for something similar, and though laudable it falls short of what I actually crave (and what I imagine would be most beneficial/appreciated by the public).
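      To make the idea concrete, here's a rough sketch of what such bookmarklets might look like (the names and exact behavior here are my own invention, not drawn from scratch.js or any existing collection):

      ```javascript
      // Hypothetical Smalltalk-flavored bookmarklets, written out as
      // javascript: URLs ready to be saved to a bookmarks bar.
      const bookmarklets = {
        // 1. Open a fresh scratch area (about:blank) for experimentation.
        scratchIt: "javascript:void(window.open('about:blank'))",
        // 2. Toggle editability for the whole current page.
        editIt: "javascript:document.designMode=document.designMode==='on'?'off':'on';void 0;",
        // 3. Evaluate the currently selected text, printIt-style.
        printIt: "javascript:void(alert(eval(window.getSelection().toString())))"
      };
      ```

      Save each value as a bookmark's URL and clicking it runs against the current page. (Evaluating page-selected text with `eval` is as dangerous as it sounds; a real curated set would want something more careful.)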

    1. we have discovered a game-changing way of structuring cyberspaces: the Social Web, where content orbits the author like planets orbit a star

      I've also articulated this point, but in a negative context. This piece speaks of actor-indexed content in a positive light. I maintain that actor-indexed content is on the whole negative. It's a direct and indirect cause of the familiar social media ills and has wide-reaching privacy implications, which is bad in and of itself, but especially so when you realize that this in turn leads to a chilling effect where people simply opt not to play because it's the only remedy.

      We'd do well socially to abandon the "game-changing" practice of indexing based on actor and return to topic-based indexing, which better matches the physical world.

    1. they don't get counted towards the total amount of friction in the system

      You can say the same for any number of things that GitHub natives usually put their thumb on the scale for in order to not count them among the costs of using GitHub. This type of "blindness" is a recurring issue that has come up every time I've tried to have a discussion about the costs of GitHub.

    1. "I don't want to interact with anyone who uses GitHub" is developer-hostile

      In fact, the reverse ("I won't interact with anyone who isn't using GitHub") is the default for many (most?) projects hosted on GitHub.

    1. being provably terminating is a problem dealing with the full body of C programs written in the world. The OP is dealing with their own self-published content. That's a different problem

      Far too few programmers understand this, which creates a conversational nuisance. I'm not sure why, though. If charged with explicitly writing out the explanation for why it's true, you might come away thinking it's so because the issue is deceptively nuanced.

      ... but it's not that nuanced.

      This should really not pose as big of a problem as it does, and yet I see the "cache miss" occur way too often.

    1. I mean, over 40M devs from over 41 countries on GitHub? Pretty amazing.

      Is it, though? From where I'm sitting, GitHub has been good for exactly two things. Getting the uninteresting one out of the way first: free hosting. Secondly, convincing the long tail of programmers who would not otherwise have been engaged with the FOSS scene to free their code, by presenting a shimmering city on the horizon where cool things are happening that you'd be missing out on if you were to remain stuck in the old ways that predate the GitHub era's collective mindset of free-and-open by default. That is, GitHub strictly increases the amount of free software by both luring in new makers (who wouldn't have created software otherwise, free or not) and rooting out curmudgeons (who would have produced software of the non-free variety) with social pressure.

      I'm less convinced of the positive effects on "the choir"—those who are or were already convinced of the virtues of FOSS and would be producing it anyway. In fact, I'm convinced of the opposite. I grant that it has "changed the way [people] collaborate", but not for the better; the "standard way of working" referred to here by my measures looks like a productivity hit every time I've scrutinized it. The chief issue is that undertaking that level of scrutiny isn't something that people seem to be interested in...

      Then you have GitHub's utter disregard for the concerns of people interested in maintaining their privacy (that is, in not having a top-level index of their comings and goings exposed to anyone who asks for it, and even to those who don't, in the way that GitHub does, whether they asked for it or not).

    1. Let the browser vendors keep developing forever more.

      Odd choice of a pairing between context and link destination. Again, this seems to come down to a misconception (or, less charitably, a misrepresentation) of how Web standards progress.

      CADT most aptly describes the NPMers on GitHub, not the rudiments of the Web platform. (Or if anything, the types of folks pushing such misguided efforts as Gemini, ironically enough...)

    2. set up a website

      Something which should be standardized, by the way. Signing up for an account on Neocities or Netlify should be just as readily available over a neutral, non-HATEOAS client as their bespoke APIs for updating content. (Their APIs, for that matter, should be deprioritized where vanilla HTTP would suffice.)

      Furthermore, it's nice that reading from DNS is standardized, but proprietary control panels are anathema to the general accessibility of this aspect of Internet infrastructure (that is, its accessibility to the general public). The mechanisms for writing/editing DNS records should be just as standardized as the ones for doing lookups.

    3. But stable standards are incredibly important.

      Right. Which is why the folks working on Web standards have endeavored to make stability a goal up to this point and beyond; the Web is one of the most stable pieces (if not the most stable piece) of widely adopted computing infrastructure that exists. The author's conception of Web standards is at odds with reality.

    4. it is impossible to build a new web browser

      Perhaps it's not possible. (Probably not, even.) On the other hand, it would be very much possible to build a web browser capable of handling this page, and to do so in a way that produces an appreciable result in 10 minutes of hacking around with the lowliest of programming facilities: text editor macros—that is, if only it had actually been published as a webpage. Is it possible to do the same not just for this PDF but for others, too? No.

    5. Taking my own advice, this document was written in the world’s greatest web authoring tool: LibreOffice Writer.

      Great. This is something that I advocate for technical people to put forth as a "serious" solution more often than I see today (which is essentially never). But next time, save it as HTML. (And ditch the stylistic "rubbish"; don't abuse "the sanctity of the written word by coercing it to serve the vanity of a graphic artist incapable of discharging his duty as a mere lieutenant".)

    6. Eh, they look alright to me.

      I have a rule that any response that begins with someone having typed out "Eh" is empty and/or junk. (The response here is no proof by contradiction.) In other words, one is free—or perhaps obligated—to meet the zero-effort dismissal with a similarly dismissive response.

    7. we should use PDF/A instead, which forbids interactive content

      (The author purports to address the following, but just uses some rhetorical flourishes and misdirection. In an effort to not let that go unnoticed and to hold his or her feet to the fire...)

      How does this type of "we should" differ at all from saying "we should use HTML 4 with no JS" or "we should use EPUB"?

    8. Overall, I'm pretty happy with the level of scrutiny the claims here are being subjected to over on HN. https://news.ycombinator.com/item?id=27880905

      (One of the rare times on HN in recent memory where a potentially attractive position on what could have been a contentious issue involving techno-pessimism related to the Web seems to thankfully be overwhelmingly opposed, in recognition that the arguments are not sound.)

    9. PDFs used to be large, and although they are still larger than equivalent HTML, they are still an order of magnitude smaller than the torrent of JavaScript sewage pumped down to your browser by most sites

      It was only 6 days ago that an effective takedown of this type of argument was hoisted to the top of the discussion on HN:

      This latter error leads people into thinking the characteristics of an individual group member are reflective of the group as a whole, so "When someone in my tribe does something bad, they're not really in my tribe (No True Scotsman / it's a false flag)" whereas "When someone in the other tribe does something bad, that proves that everyone in that tribe deserves to be punished".

      and:

      I'm pretty sure the combination of those two is 90% of the "cyclists don't obey the law" meme. When a cyclist breaks the law they make the whole out-group look bad, but a driver who breaks the law is just "one bad driver."¶ The other 10% is confirmation bias.

      https://news.ycombinator.com/item?id=27816612

    1. I deliver PDFs daily as an art director; not ideal, but they work in most cases. There's certainly nothing rebellious or non-commercial about them

      It reminds me of The Chronicle's exhorting ordinary people to support the then-underway cause intended to banish Uber and Lyft from Austin, on ostensibly populist grounds, when in reality the cause was aimed at preserving the commercial interests of an entrenched, unworthy industry. I saw a similar thing with the popular sentiment against Facebook's PATENTS.txt—a clever hack on par with copyleft which had the prospect of making patent trolls' weapons ineffective, subverted by a meme that ended with people convinced to work against their own interests and in favor of the trolls.

      Maybe it's worth coining a new term ("anti-rebellion"?) for this sort of thing. See also: useful idiot

    1. It's great to enhance the Internet Archive, but you can bet I'm keeping my local copy too.

      Like the parent comment by derefr, my actual, non-hypothetical practice is saving to the Wayback Machine. Right now I'm probably saving things at a rate of half a dozen a day. For those who are paranoid and/or need offline availability, there's Zotero https://www.zotero.org. Zotero uses Gildas's SingleFile for taking snapshots of web pages, not PDF. As it turns out, Zotero is pretty useful for stowing and tracking any PDFs that you need to file away, too, for documents that are originally produced in that format. But there's no need to (clumsily) shoehorn webpages into that paradigm.

      If you do the print-to-PDF workflow outlined earlier in the thread, you'll realize it doesn't scale well, requiring too much manual intervention and discipline (including taking care to make sure it's filed correctly; hopefully you remember the ad hoc system you thought up last time you saved something), that it's destructive, and that it ultimately gives you an opaque blob. SingleFile-powered Zotero mostly solves all of this, and it does it in a way that's accessible in one or two clicks, depending on your setup. If you ever actually need a PDF, you can of course go back to your saved copy and produce a PDF on-demand, but it doesn't follow that you should archive the original source material in that format.

      My only reservation is that there is no inverse to the SingleFile mangling function, AFAIK. For archival reasons, it would be nice to be able to perfectly reconstruct the original, pre-mangled resources, perhaps by storing some metadata in the file that details the exact transformations that are applied.

    1. it seems that most of these links are rehash of ES6 spec, which is pretty technical

      Yes. The problem also with relying on programmers' blogged opinions for advice and understanding is that a lot of the material is the result of people trying to work things out for themselves in public—hoping to solidify their own understanding by blogging—and it's not expert advice. Aspiring programmers further run the risk of mistaking any given blogger's opinion for deep and/or widely accepted truths. (And JS in particular certainly has lots of widely accepted "truths" that aren't actually true. Something about intermediate JS programmers has led to an abundance of bad conventional folk wisdom.) Indeed, spot-checking just a few of the links collected in the list here reveals plenty of issues—enough to outright recommend against pointing anyone in its direction.

      On the other hand, the problem with the ECMAScript spec is that it has gotten incredibly complicated (in comparison to the relative simplicity of ed. 3). There is a real need for something that is as rigorously correct as the spec, but more approachable. This was true even in the time of the third edition. In the early days of developer.mozilla.org, the "Core JavaScript Reference" filled this hole, but unfortunately editorial standards have dropped so low in the meantime that this is no longer true. Nowadays, there is not even any distinction between what was originally the language reference versus the separate, looser companion for learners that was billed as the JavaScript guide. The effect that it has had is the elevation of some of the bad folk wisdom to the point of providing it with a veneer of respectability, perhaps even a "seal of approval"—since it lives on MDN, so it's gotta be right, right?

    1. something called federated wiki which was by ward cunningham if anyone knows the details behind that or how we got these sliding panes in the first place i'm always interested

      it looks like my comment got moderated out, and I didn't save a copy. Not going to retype it here, but the gist is that:

      • Ward invented the wiki, not just the sliding panes concept.
      • Sliding panes are a riff on Miller columns, named for Mark S. Miller
      • Miller columns are like a visual analog of UNIX pipes
      • One obvious use case for Miller columns is in web development tools, but (surprisingly) none of the teams working on browsers' built-in devtools at this point have managed to get this right!

      Some screenshots of a prototype inspector that I was working on once upon a time which allowed you to infinitely drill down on any arbitrary data structures:

      https://web.archive.org/web/20170929175241/https://addons.cdn.mozilla.net/user-media/previews/full/157/157212.png?modified=1429612633

      https://web.archive.org/web/20170929175242/https://addons.cdn.mozilla.net/user-media/previews/full/157/157210.png?modified=1429612633

      Addendum (not mentioned in my original comment): the closest "production-quality" system we have that does permit this sort of thing is Glamorous Toolkit https://gtoolkit.com/.

  2. www.dreamsongs.com
    1. as a more experienced user I know one can navigate much more quickly using a terminal than using the hunt and peck style of most file system GUIs

      As an experienced user, this claim strikes me as false.

      I often start in a graphical file manager (nothing special, Nautilus on my system, or any conventional file explorer elsewhere), then use "Open in Terminal" from the context menu, precisely because of how much more efficient desktop file browsers are for navigating directory hierarchies in comparison.

      NB: use of a graphical file browser doesn't automatically preclude keyboard-based navigation.

    1. object-orientation offers a more effective way to let asystem make good use of parallelism, with each objectrepresenting its own behavior in the form of a privateprocess

      something, something, Erlang

    2. Functional programming implies much more than avoiding goto statements, however. It also implies restriction to local variables, perhaps with the exception of a few global state variables. It probably also considers the nesting of procedures as undesirable.
    1. You can use LibreOffice's Draw

      Nevermind LibreOffice Draw, you can use LibreOffice Writer to author the actual content. That this is never seriously pushed as an option (even, to my knowledge, by the LibreOffice folks themselves) is an indictment of the computing industry.

      Having said that, I guess there is some need to curate a set of templates for small and medium size businesses who want their stuff to "pop".

    1. it is also clear that there would be no need for copyleft licences to govern the exercise of copyright in software code by third-party developers at all if copyright did not guarantee rightsholders such a high degree of exclusive control over intellectual creations in the first place

      This is simply not true. The unique character of software under the conventions by which most software is published (effectively obfuscated, albeit not for the purpose of obfuscation itself, but for the purpose of producing an executable binary) means that reciprocal licenses like the GPL are very much reliant on the existing copyright regime. Ubiquitous and pervasive non-destructive compilation would be a prerequisite for a world where copyright's role in free software were nil.

    1. The array prototype is syntax sugar. You can make your own Array type in pure JavaScript by leveraging objects.

      At the risk of saying something that might not now be correct due to recent changes in the language spec, this has historically not been true; Array objects are more than syntax sugar, with the spec carving out special exceptions for arrays' [[Put]] (now [[DefineOwnProperty]]) internal method.
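      A quick way to see the difference in any JS engine (this is plain, longstanding standard behavior, not tied to any recent edition):

      ```javascript
      // Arrays carry "exotic" length semantics that plain objects
      // cannot replicate just by leveraging ordinary properties.
      const real = [];
      real[2] = 'x';
      console.log(real.length); // 3: length is updated automatically

      const fake = {};
      fake[2] = 'x';
      console.log(fake.length); // undefined: no such bookkeeping

      // Assigning to length also truncates a real array:
      real.length = 0;
      console.log(real[2]); // undefined
      ```

      Getting that length/index coupling in "pure JavaScript by leveraging objects" would, at minimum, require the much later additions of accessors or Proxy, and the length-assignment behavior still wouldn't come for free.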

    1. Of course not. Reading some copyrighted code can have you entirely excluded from some jobs

      There is a classic mistake being committed here. Private policy does not the law make.

      The Wine project wants to exclude you if you've laid eyes on Windows sources? That's fine; that's their right. But the Wine devs are neither writing legislation nor issuing binding opinion.

      We see this everywhere. Insurance company wants to have their adjusters follow some guideline when investigating/settling claims? Again, that's fine, but their stance may or may not have anything to do with actual law. Shop proprietor wants to exclude you from the store if you are (or are not) wearing a mask? Okay, just don't infer that this necessarily has any bearing on what is law in the eyes of the courts. Imposing keto on yourself except one cheat meal on Sundays? Fine again, but not law.

    1. it could make sense for it to start a completely new browser based on WebKit

      Indeed. I've argued for a while that WebKit is an excellent basis for a new, community-developed browser, both strategically and technically. That is, it makes sense to start a browser project based on WebKit even now, whether Hachette exists or not.

    2. The repository software has to be released under a libre license.

      An alternative: don't create a service. Let the Web be the platform.

      If I'm a contributor to the project, then the extension's about.html (or whatever) should both credit me and at my option include a link to the place where I publish or link to useful scripts. This goes for all contributors. The way to share would be to (a) publish their scripts somewhere online and (b) contribute to the project and add their entry to about.html. Alternatively, for people not interested in contributing (and to maintain code quality and avoid poorly aligned incentive structures), the project itself can keep a simple static page up to date, linking to various others' pages as the requests to do so come in.

  3. Jun 2021
    1. They are artifacts of a very particular circumstance, and it’s unlikely that in an alternate timeline they would have been designed the same way.

      I've mentioned before that the era we're currently living in is incredibly different from the era of just 10–15 years ago. I've called the era of yesterdecade (where the author of this piece appeared on Colbert a week or so after Firefox 3 was released and implored the audience to go download it and start using it) the "Shirky era", since Shirky's Here Comes Everybody really captures the spirit of the times.

      The current era of Twitter-and-GitHub has a distinct feel. At least, I can certainly feel it, as someone who's opted to remain an outsider to the T and G spheres. There's some evidence that those who haven't aren't really able to see the distinction, being too close to the problem. Young people, of course, who don't really have any memories of the era to draw upon, probably aren't able to perceive the distinction as a rule.

      I've also been listening to a lot of "old" podcasts—those of the Shirky era. If ever there were a question of whether the perceived distinction is real or imagined, these podcasts—particularly shows Jon Udell was involved with, which I have been enjoying immensely—eliminate any doubts about its existence. There's an identifiable feel when I go back and listen to these shows or watch technical talks from the same time period. We're definitely experiencing a low point in technical visions. As I alluded to earlier, I think this has to do with a technofetishistic focus on certain development practices and software stacks that are popular right now—"the way" that you do things. Wikis have largely fallen by the wayside, bugtrackers are disused, and people are pursuing busywork on GitHub and self-promoting on social media to the detriment of the things envisioned in the Shirky era.

    1. If they did I think there would actually be some quality of discussion, and it might be useful

      I used to think this. (That isn't to say I've changed my mind. I'm just not convinced one way or the other.)

      Another foreseeable outcome, relative to the time when the friend here was making the comment, is that it would lead to people being nastier in real life. Whether that's true or not (and I think that it might be), Twitter has turned out to be a cesspool, and it has shown us that people are willing to engage in all sorts of nastiness under their real name.

    1. I tried all the different static site generators, and I was annoyed with how everything was really complicated. I also came to the realization that I was never going to need a content management system with the amount of blogging I was doing, so I should stop overanalyzing the problem and just do the minimum thing that leads to more writing.

      Great way to put it. One thing that I keep trying to hammer is that the "minimum thing" here looks more like "open up a word processor, use the default settings, focus on capturing the content—i.e. writing things out just as you would if you were dumping these thoughts into a plain text file or keeping it to, say, the subset of Markdown that allows for paragraph breaks, headings, and maybe ordered and unordered lists—and then use your word processor's export-to-HTML support to recast it into the format that lets readers use their browser to read it, and then FTP/scp/rsync that to a server somewhere".

      This sounds like I'm being hyperbolic, and I kind of am, but I'm also kind of not. The process described is still more reasonable than the craziness that people (HN- and GitHub-type people) end up leaping into when they think of blogging on a personal website. Think about that. Literally uploading Microsoft Word-generated posts to a server* is better than the purpose-built workflows that people are otherwise coming up with (and pushing way too hard).

      (*Although, just please, if you are going to do this, then do at least export to HTML and don't dump them online as PDFs, a la berkshirehathaway.com.)

    2. I could write something to do that for me, but then I'd just end up with a static site generator anyway.

      I'm not sure this is true! The intended use has a lot more influence over how people do things than people think!

    3. I've looked at GitHub Pages/Jekyll, all the cool kids seem to be doing that? It seemed kind of complicated

      I can't wait for the people pushing GitHub Pages and building products/integrations on top of it to realize this, but I'm not sure they ever will.

    1. Avoid 'global magic' or things that are defined outside of scope where they are not visible.

      From the commentary in the video "Workflow: Universal project folder structure"

      "I can intuit that this has something to do with[...]"

      "I look at this folder[...] and I get some sense[...]"

      "It's got this package dot bin thing, oh okay, so that means there's also a special command that I can run with this[...] you understand there is a command here"

  4. May 2021
    1. many people have attached sensors

      This differs from LDN, where the annotation service is squarely under the control of the document author. This is also using sensor attachment in a different sense than the way it first appears above. The application is more akin to RSS. With RSS, the links exist in some other "document" (or something like it; generally it can be modeled as OPML, even if it's really, say, an sqlite store).

    2. So she writes an explanatory note for Jack, links the note to the Parallel Compiling report, and then links the note to Jack's mailbox: in this open hypertext system, a mailbox is simply a publicly readable document to which the owner has attached a sensor.

      Okay, so this is back to looking like LDN, except for the (novel?) idea that after sending the annotation to the annotation service responsible for annotations to the report, her final annotation gets sent to the annotation service corresponding to a different document—Jack's mailbox. Interesting!

      (Maybe this is explicitly laid out as a possibility in one of the several pieces associated with LDN and I just never noticed?)

    1. that involves looking up where to find Guix's source code, `git clone`ing it, finding the Guix revision I'm currently on with `guix describe` so I can check out the same one for consistency's sake, `make`ing it, `guix environment guix`, using `pre-inst-env`, etc

      This is a direct response to the question, so it makes sense to write it out, but Spitz's piece (linked earlier), "Open source is not enough", describes the problem adequately.

    1. A w2g <cite> tag identifies/distinguishes itself in css by have a w2gid propert

      If you look at the incomplete graph.js implementation, you can see that it's actually using a class instead. This is good! However, I advocate for class names that happen to also be a hostname + resource (i.e., like a URL, but omitting the http:// part).

      (I suppose that, alternatively, if someone wanted to use a shorter class name, such as "w2g-tag", then graph.js could be configured to allow for that, but it would need to be explicit configuration on the author/publisher's part.)

    1. _parseDefinition

      Revisiting this some months later while thinking about programming in general, I realize that what I want from a nextgen magic development environment is live comments.

      Take this _parseDefinition method, for example, and now compare it to the RSCAssembler's getRR method. The latter has a comment (with an interesting diagram), and the former does not. Suppose even that there were a comment for the _parseDefinition method—it might contain a snippet that's meant to be sample input to help visualize what the method is doing—there would still be the matter of needing to simulate the program in your head. Really, what I want is to be able to write out a snippet and then while reading the code here, anyone can mouse over each statement in the method definition and see what effect it has—say a caret moves along, pointing at the scanner "tip", and visually chunking the tokens.

      One should also be able to dynamically modify that snippet—let's say they wanted to "link" the live, scrubbable execution to a particular point in Fig. 1 (the fixT entry instead of fixP, for example)—in this case that snippet should "take over" the live comment, so we can concretely visualize execution of it instead. We should go further than that, even, and not even require that the reader know which part of Fig. 1 is actually handled by this section of the code. We should be able to dynamically discover, based on the Ctrl+Space activation, that _parseDefinition is used to handle the fixP, fixD, and fixT lines, and then the reader can point to them (like the difference between a command-line interface where you have to know what you're allowed to do and how to say it, compared to a graphical interface that visually communicates the program's affordances).

      The closest I've seen to this are the Moonchild editor: https://harc.github.io/moonchild/ and REPLugger: https://github.com/Glench/REPLugger.

    1. Here's a novel(?) way to explain it that should appeal to those who claim that the host revealing the goat behind door #3 doesn't "matter":

      You are given the choice of 3 doors to pick from. Someone else is given the compulsory opportunity to "bet" against your pick. (Stop here and think of your odds of getting it right compared to their odds of beating you.)

      Now, the host reveals door #3. Your adversary originally had 2/3 odds of winning, and indeed this is where the door #3 revelation doesn't "matter": your adversary's odds that you were wrong are still 2/3. Let's mix up the game, though: suppose the host gives you the opportunity to switch places with the person betting against you—so if you switch then your adversary gets stuck with whatever your original guess was, and in the event that your original guess was wrong, then you actually win the game. This is exactly the balance of odds presented to you in the original formulation.
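      If the reframing still doesn't land, a quick simulation settles it empirically (a throwaway sketch; the function names and round count are arbitrary):

      ```javascript
      // Simulate the Monty Hall game to check the claim that
      // switching wins with probability 2/3.
      function playRound(switchDoors) {
        const prize = Math.floor(Math.random() * 3);
        const pick = Math.floor(Math.random() * 3);
        // The host always opens a goat door that is neither the pick
        // nor the prize, so switching wins exactly when the first
        // pick was wrong; staying wins exactly when it was right.
        return switchDoors ? pick !== prize : pick === prize;
      }

      function winRate(switchDoors, rounds = 100000) {
        let wins = 0;
        for (let i = 0; i < rounds; i++) if (playRound(switchDoors)) wins++;
        return wins / rounds;
      }

      console.log(winRate(true));  // ≈ 0.67
      console.log(winRate(false)); // ≈ 0.33
      ```

      Note that the host's reveal never needs to be modeled explicitly: it only serves to make "switch" equivalent to "win iff the original pick was wrong", which is the same equivalence the betting-adversary framing above exposes.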

    1. What do we know that no-one else does? What’s a non-obvious controversial true fact? How does our system exploit this?

      Widely relevant questions beyond even the selected topic.

    1. It’s more productive to work with fewer but larger documents that bundle many bits and pieces together. If I send you a link to a section called out in the TOC, it’s as if I sent you a link to an individual document. But you land in a context that enables you to find related stuff by scanning the TOC.
  5. Apr 2021
    1. it is so because nobody owns it, we all own it. It can't go bankrupt, it can't pivot to be something else, it can't lose all its best features when a new product manager is promoted and wants to make a statement. Things that every proprietary system will do and eventually disappear.
    1. Documents should offer the same granularity.

      That neither content creators nor browser vendors are particularly concerned with the production and consumption of documents, as such, is precisely the issue. This is evident in the banner that the majority of the work has occurred under over the last 10+ years: they're the Web Hypertext Application Technology Working Group.

      No one, not even the most well-intentioned (such as the folks at Automattic who are responsible for the blogging software that made Christina's post here possible), see documents when they think of the Web. No, everything is an app—take this page, for example; even the "pages" that WordPress produces are facets of an application. Granted, it's an application meant for reading the written word (and meant for occasionally writing it), but make no mistake, it's an application first, and a "document" only by happenstance (i.e. the absence of any realistic alternative to HTML & co for application delivery).

    1. Though its format can be copied and manipulated, HTML doesn’t make that easy.

      In fact, HTML makes it very easy (true for the reasons that lead Mark to write that it can be copied and manipulated). It's contemporary authoring systems, the typical author-as-publisher, and the choices they make that make this difficult.

      The future of rich media lies in striving to be more like dead media (or at least mining it for insights by better understanding it through thoughtful study), rather than the misguided attempts we've been living inside.

      (This is something that I've done a 180 on in the last year or so.)

    1. Ideally, GitHub would understand rich formats

      I've advocated for a different approach.

      Most of these "rich formats" are, let's just be honest, Microsoft Office file formats that people aren't willing to give up. But these aren't binary formats through-and-through; the OOXML formats are ZIP archives (following Microsoft's "Open Packaging Conventions") that when extracted are still almost entirely simple "files containing lines of text".

      So rather than committing your "final-draft.docx", "for-print.oxps", and what-have-you to the repo, run them through a ZIP extractor and commit the result. Then, just like any other source code repo, include a "build script" for these—one that just zips them back up and gives them the appropriate file extension.

      (I have found through experimentation that some of these packages do include a few binary files (I can't recall which offhand), but they tend to be small, and you can always come up with a text-based serialization for them and rework your build script so it can go from that serialization back to the correct binary before zipping everything up.)
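      A minimal sketch of what such a "build script" could look like, in Python. The directory and file names here are hypothetical; the point is just that re-packing an extracted OOXML package is an ordinary zip operation.

```python
# Hypothetical build script for a repo that stores an extracted OOXML
# package (e.g. the contents of final-draft.docx) as plain files.
# All names below are made up for illustration.
import os
import tempfile
import zipfile

def pack_ooxml(src_dir, out_file):
    """Zip the contents of src_dir (not the directory itself) into out_file."""
    with zipfile.ZipFile(out_file, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in sorted(files):
                path = os.path.join(root, name)
                # Archive paths must be relative to the package root.
                zf.write(path, os.path.relpath(path, src_dir))

# Demo on a throwaway directory standing in for the extracted package:
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "word"))
with open(os.path.join(src, "[Content_Types].xml"), "w") as f:
    f.write("<Types/>")
with open(os.path.join(src, "word", "document.xml"), "w") as f:
    f.write("<w:document/>")
out = src + ".docx"
pack_ooxml(src, out)
```

      As far as I can tell, OPC packages (unlike, say, EPUB with its specially stored mimetype entry) don't require any particular first entry or compression method, so a plain zip of the extracted tree suffices.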

    1. Only you may read this blog (while you're logged in).

      Change the "mood" (?) here for unselected options so that it's quasi-subjunctive (?), so we'd say e.g. "Only you will be able to read this blog [...]". (Should probably also eliminate the ambiguity around the "while you're logged in" qualifier.)

    1. I'm very interested in this, but it's a lot of work — at least if you want to get it "right" (and I do). That would involve:

      • parallel implementations with perfect feature parity, one each in Go (for the backend) and JS (for ProseMirror, which may itself need to be hacked to support our alternate impl? don't know—I haven't dived deep on ProseMirror at all, but I have some reasonable suspicions, based on some familiarity with marijnh's past work)

      Having looked into this before for my own reasons, I would probably not use any existing library from the Go or JS ecosystems. I'm pretty sure that what the PowerShell folks settled on was the best option at the time. It's written in C# and would need to be ported. I expect this to be at least a couple months' work, full time.

      (Not a good first bug, or even a good undertaking right now—which is why I hadn't already done it on my own.)

    1. Your data on herp is always free.

      Is this the kind of claim we want to be making at the WriteFreely level? Ideally, this statement is true on all instances, but I fear a cavalcade of hacked-together versions powering instances where various promises like this get broken.

    1. this document contains minimal commentary about the structure of the object file--allowing the pseudocode speak for itself

      I have to admit that I'm totally making excuses here. Really, at the time of publication, I was just eager to get something out there, and I was (and am) not entirely motivated or sure of how to fully document the subject in the "literate" style. (Although it has to be said [see Kartik on Knuth] that it's possible that no one else, even LP practitioners, really groks how to use LP, either.)

    2. Our aim is to describe an "assembler" for the RSC object file format

      Okay, so the real goal of this publication is supposed to be to highlight the application of a new method of literate programming. I think the obscurity of the chosen subject matter (and, unintentionally, the length of the text) compromises this goal.

      Better, probably, would be something like the ZIP file format (sans compression), or maybe the under-appreciated Unix ar file format? The latter also has some nice properties that would make it tangentially relevant, too. (An archive containing only plain text files in Unix ar format is itself plain text.)
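      To illustrate that last claim, here's a sketch (in Python) of the classic ar layout: fixed-width, space-padded ASCII header fields, so an archive of text members is itself text. Member-name handling varies between ar variants; this uses the simple space-padded form, and the field values are illustrative.

```python
# Sketch of the classic Unix ar format. An archive whose members are
# plain text is itself plain text, because every header field is
# fixed-width, space-padded ASCII.
def ar_archive(members):
    """members: list of (name, text) pairs; returns the archive as a str."""
    out = "!<arch>\n"                  # global magic (8 bytes)
    for name, text in members:
        out += (
            f"{name:<16}"              # member name
            f"{0:<12}"                 # modification time
            f"{0:<6}"                  # owner id
            f"{0:<6}"                  # group id
            f"{'100644':<8}"           # mode (octal)
            f"{len(text):<10}"         # size in bytes
            "`\n"                      # end-of-header magic
        )
        out += text
        if len(text) % 2:              # members are padded to even offsets
            out += "\n"
    return out

archive = ar_archive([("hello.txt", "hi\n")])
```

      Everything in the result is printable ASCII plus newlines, which is the property being pointed at above.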

    3. // p q u v ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?
       // +-----+ +-----+ +-----+ +-----+ +---------------------+ +-----+
       // | (4) | |a : 4| |b : 4| |op: 4| |     unused : 12     | |id: 4|
       // +-----+ +-----+ +-----+ +-----+ +---------------------+ +-----+
       //  31 28   27 24   23 20   19 16   15                 4   3   0
       //
       // p = 0, q = 0

      I'm pretty satisfied with these bit-level diagrams.

    4. let units = parseInt(glider.content.charAt(glider.position), 16);
       value = ((value << 4) >>> 0) + units;

      I think these two lines give away the "surprise". The bit-shifting could at the very least benefit from an explanation.
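      For the record, what the shifting is doing: JavaScript's << produces a signed 32-bit result, and the >>> 0 reinterprets it as unsigned. A sketch of the same nibble-accumulating step in Python (whose ints are arbitrary-precision), where masking to 32 bits plays the role of >>> 0:

```python
# JS's "<<" yields a signed 32-bit value; ">>> 0" reinterprets it as
# unsigned. In Python, masking to 32 bits does the equivalent job for
# this hex-digit accumulator step.
def push_nibble(value, hex_digit):
    units = int(hex_digit, 16)
    return ((value << 4) & 0xFFFFFFFF) + units

value = 0
for digit in "F0000000":   # a leading F makes the sign bit (bit 31) matter
    value = push_nibble(value, digit)
# Without the unsigned reinterpretation, the JS version would read back
# as a negative number once bit 31 is set.
```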

    1. I made it look like a page on Github

      Everyone should "take control" of their project like this. I've said it before. The tools are already there, with GitHub Pages. It should be the norm to link to e.g. ghost.github.io/foo instead of github.com/ghost/foo, and for the former to give the viewer everything the latter does (except for anything that would require auth).

    1. @29:39

      In the Smalltalk community, I always had the reputation as the guy who was trying to make Smalltalk more static so it was more applicable for industrial applications. In the Microsoft environment, which is a very static world, you know, I go around pushing people to make everything more dynamic.

    1. Should be able to use this to keep track where/when a piece was actually published

      This requires some expertise (which I don't have) and/or some market research. How are writers actually writing? Do they use Microsoft Word and then email it or...?

      How about a Twitter thread? A poll, similar in spirit to usesthis.com?

    1. @7:40:

      We're aware that some students might actually revel in the gymnastics of a sophisticated writing and retrieval system like this. Now, we don't want to subordinate the material to the system, nor is the system merely being used to provide an alternative to a classroom experience. What we are striving for is to make a flexible system with lots of interesting material so that we may serve the needs of a genuinely contemporary student.

    1. Sun’s API (to our knowledge) will not call up the task of determining which great Arabic scholar decided to use Arabic numerals (rather than Roman numerals) to perform that “larger integer” task.

      Not sure of the significance of this sentence.

  6. Mar 2021
    1. Here is what McCarthy said about it later in an interview

      This telling is also relayed in the Dragon Book (subsection "Bootstrapping", 11.2 Approaches to Compiler Development), which ends up citing, in a roundabout way, McCarthy's paper on LISP for HOPL I (1981).