113 Matching Annotations
  1. Jan 2024
    1. There’s not much of a market for what I’m describing.

      There is, actually. Look at Google Docs, Office 365, etc. Those are all an end-run around the fact that webdevs are self-serving and haven't made desktop publishing for casual users a priority.

      The webdev industry subverts users' ability to publish to the Web natively, and Google, MS et al subvert native Web features in order to capture users.

      The users are there.

  2. Nov 2023
    1. This conference is imitating the old ways: providing papers for this conference is a choice between LaTeX (which is a pre-web technology) or Word! There's a page limit! There's a style guide on how references should be visually displayed! IT'S ALL ABOUT PAPER!
    1. that for the element <head> the TEI does not provide the attribute @place, with which this positioning (analogous to the position of marginal notes; see the corresponding examples below) could be indicated. In order not to have to leave the scope of the TEI guidelines while still being able to encode the phenomenon, it was specified within the DTABf-M that the possibly deviating position of headings is recorded with the @type attribute available on the <head> element instead of with @place. In this way, a compromise between general TEI conformance and individual expressivity could be found here. Cf. also Haaf/Thomas 2017, para. 49 and note 39.

      This fallback is no longer necessary (as of 2017, with the release of TEI P5 version 3.2.0, dated 2017-07-10); on that, see also https://github.com/cthomasdta/diss-avhkv/issues/5.

    1. 2. Can a violation of the principle of political neutrality justify refusing to distribute a document? No. The CDPE of the Vosges obtained from the Tribunal administratif de Nancy the annulment of a refusal to distribute a leaflet it had written calling for a week of national mobilization against the elimination of teaching posts. In its judgment of 2 October 2012, the tribunal notes that the administration based its refusal on an alleged violation of the principle of laïcité (a criterion provided for by law but in no way violated in this case) and on an alleged violation of a principle of "political neutrality" which is not provided for by article D.111-9 of the code de l'éducation and which, as the tribunal points out, "applies only to agents of the service". "Political neutrality" therefore cannot justify a refusal of distribution by the school.
  3. Oct 2023
    1. HTML had blown open document publishing on the internet

      ... which may have really happened, per se, but it didn't wholly incorporate (subsume/cannibalize) conventional desktop publishing, which is still in 2023 dominated by office suites (a la MS Word) or (perversely) browser-based facsimiles like Google Docs. That's because the Web as it came to be used turned out to be a sui generis medium, not exactly what TBL was aiming for, which was giving everything (everything—including every existing thing) its own URL.

    1. The important part, as is so often the case with technology, isn’t coming up with a solution to the post portability problem, but coming up with a solution together so that there is mutual buy-in and sustainability in the approach.

      The solution is to stop creating these fucking problems in the first place.

  4. Sep 2023
    1. A big problem with what's in this paper is that its logical paths reflect the déformation professionnelle of its author and the technologists' milieu.

      Links are Works Cited entries. Works Cited entries don't "break"; the works at the other end don't "change".

  5. Aug 2023
    1. Purple is a small suite of quickly hacked tools inspired by Doug Engelbart's attempt to bootstrap the addressing features of his Augment system onto HTML pages. Its purpose is simple: produce HTML documents that can be addressed at the paragraph level. It does this by automatically creating name anchors with static and hierarchical addresses at the beginning of each text node, and by displaying these addresses as links at the end of each text node.    1A  (02)

      Purple is a suite of tools from 2001 that allow one to create numbered addresses/anchors at the paragraph level of a digital document.


      Link: Dave Winer's site still has support for purple numbers.

    1. on Aug. 12, the National Museum of American History is giving the artifact pristine treatment. "Have You Heard the One . . . ? The Phyllis Diller Gag File" is an exhibition of the beige cabinet in the quiet Albert H. Small Documents Gallery.

      The National Museum of American History debuted Phyllis Diller's gag file on August 12, 2011 in the Albert H. Small Documents Gallery, in an exhibition entitled "Have You Heard the One...? The Phyllis Diller Gag File."

      see also: press release https://www.si.edu/newsdesk/releases/national-museum-american-history-showcases-life-and-laughs-phyllis-diller

    1. I think I get what you're saying but I have some difficulty moving past the fact that you're claiming it doesn't need to be a website because it would be sufficient if it was a bunch of hosted markup documents that link to each other.
  6. Jun 2023
    1. The soft power of Google Doc publishing

      See also:

      Google Docs is one of the best ways to make content to put on the Web.

      https://crussell.ichi.city/food/inauguration/

    1. Lost history – the web is designed for society, but crucially it neglects one key area: its history. Information on the web is today's information. Yesterday's information is deleted or overwritten

      It's my contention that this is a matter of people misusing the URL (and the Web, generally); Web pages should not be expected to "update" any more than you expect the pages of a book or magazine or a journal article to be self-updating.

      We have taken the original vision of the Web -- an elaborately cross-referenced information space whose references can be mechanically dereferenced -- and rather than treating the material as imbued with a more convenient digital access method and keeping in place the well-understood practices surrounding printed copies, we compromised the entire project by treating it as a sui generis medium. This was a huge mistake.

      This can be solved by re-centering our conception of what URLs really are: citations. The resources on the other sides of a list of citations should not change. To the extent that anything ever does appear to change, it happens in the form of new editions. When new editions come out, nobody goes around snapping up the old copies and replacing them for free with the most recent one, while holding the older copies hostage for a price (or making them completely inaccessible no matter the price).

    1. A resource can map to the empty set, which allows references to be made to a concept before any realization of that concept exist

      This is a very useful but underutilized property. It allows you to e.g. announce in advance that a resource will exist at some point in the future, and thereby effectively receive "updates" to the linking document without requiring changes to the document itself.
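
      A tiny sketch of how that can be exploited (the URL here is hypothetical): publish the link now and let the target 404 until its realization exists.

      ```html
      <!-- Published today, even though the target does not exist yet. Once the
           report is written and placed at that address, every document that
           already links to it is effectively "updated" without being touched. -->
      <p>
        The 2025 annual report will appear at
        <a href="https://example.org/reports/2025-annual.html">this address</a>.
      </p>
      ```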

    2. how do we ensure that its introduction does not adversely impact, or even destroy, the architectural properties that have enabled the Web to succeed?

      Another good jumping off point for why document mutability should be considered harmful. https://hypothes.is/a/N_gPAAmQEe6kvXNEm10s7w

    1. where the raw source could be directly modified and potentially read. This was not widely implemented, and was subsequently removed. This effectively limits WebDAV remote authoring to situations where there is a nearly direct correspondence

      I'll go further and say that, wrt the original goals of the Web—and the principles we should continue to strive for today, despite widespread practice otherwise—Modification (should be) Considered Harmful.

      Git has the right idea. Hypermedia collections should also be append-only (not different resources masquerading under the same name).

  7. May 2023
    1. If you doubt my claim that internet is broad but not deep, try this experiment. Pick any firm with a presence on the web. Measure the depth of the web at that point by simply counting the bytes in their web. Contrast this measurement with a back of the envelope estimate of the depth of information in the real firm. Include the information in their products, manuals, file cabinets, address books, notepads, databases, and in each employee's head.
    1. The Web does not yet meet its design goal as being a pool of knowledge that is as easy to update as to read. That level of immediacy of knowledge sharing waits for easy-to-use hypertext editors to be generally available on most platforms. Most information has in fact passed through publishers or system managers of one sort or another.

  8. Apr 2023
  9. Mar 2023
    1. Over the past few years, many “efficient Transformer” approaches have been proposed that reduce the cost of the attention mechanism over long inputs (Child et al., 2019; Ainslie et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020; Wang et al., 2020; Tay et al., 2021; Guo et al., 2022). However, especially for larger models, the feedforward and projection layers actually make up the majority of the computational burden and can render processing long inputs intractable

      Recent improvements in transformers for long documents have focused on efficiencies in the attention mechanism but the feed-forward and projection layers are still expensive for long docs
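
      A back-of-the-envelope sketch of why (assuming a standard transformer block whose feed-forward width is 4×d, and counting a multiply-accumulate as 2 FLOPs; the constants are approximate):

      ```latex
      % Per layer, for sequence length n and model width d:
      \text{QKV/output projections} \approx 8\,n d^{2}, \qquad
      \text{attention interactions} \approx 4\,n^{2} d, \qquad
      \text{feed-forward} \approx 16\,n d^{2}
      % The quadratic attention term only dominates once
      % 4 n^{2} d > 24 n d^{2}, i.e. once n > 6d
      % (e.g. n > 6144 tokens at d = 1024).
      ```

      So shrinking the attention-interaction term alone leaves the (linear in n, but large) feed-forward and projection cost untouched, which is the quoted paper's point.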

    1. The common perception of the Web as a sui generis medium is also harmful. Conceptually, the most applicable standards for Web content are just the classic standards of written works, generally. But because it's embodied in a computer, people end up applying the standards they have in mind for e.g. apps.

      You check out a book from the library. You read it and have a conversation about it. Your conversation partner later asks you to tell them the name of the book, so you do. Then they go to the library and try to check it out, but the book they find under that name has completely different content from what you read.

    1. Washington, George. “From George Washington to The States, 8 June 1783.” University of Virginia Press, Founders Online, National Archives http://founders.archives.gov/documents/Washington/99-01-02-11404.

      See also copy at: https://americainclass.org/sources/makingrevolution/independence/text1/washingtoncircularstates.pdf


      Referenced by Chapter: Founding Myths by Akhil Reed Amar in Kruse, Kevin M., and Julian E. Zelizer. Myth America: Historians Take On the Biggest Legends and Lies About Our Past. Basic Books, 2023. Location 538-539

      Washington had emphasized the need for such an indivisible union—most dramatically in his initial farewell address, a world-famous circular letter to America’s governors in 1783.

  10. Feb 2023
    1. The head of the school sets the agenda, dates, and times of the board of directors' meetings, taking into account, under miscellaneous business, the requests for agenda items addressed to him by the members of the board. He sends out the convocations, accompanied by the agenda and the preparatory documents, at least eight days in advance; this period may be reduced to one day in cases of urgency.
  11. Dec 2022
    1. This brings interesting questions back up like what happens to your online "presence" after you die (for lack of a better turn of phrase)?

      Aaron Swartz famously left instructions, predating (by years, IIRC) the decision that ended his life, for how unpublished and in-progress works should be licensed and who should become stewards/executors of the personal infrastructure he managed.

      The chrisseaton.com landing page has three social networking CTAs ("Email me", etc.). Eventually, the chrisseaton.com domain will lapse, I imagine, and the registrar or someone else will snap it up to squat on it, as is their wont. And while in theory chrisseaton.github.io will retain all the same potential it had last week for much longer, no one will be able to effect any changes in the absence of an overseer empowered to act.

  12. Nov 2022
    1. layers of what are essentially hacks to build something resembling a UI toolkit on top of a document markup language

      So make your application document-driven (i.e. actually RESTful).

      It's interesting that we have Web forms and that we call them that and yet very few people seem to have grokked the significance of the term and connected it to, you know, actual forms—that you fill out on paper and hand over to someone to process, etc. The "application" lies in that latter part—the process; it is not the visual representation of any on-screen controls. So start with something like that, and then build a specialized user agent for it if you can (and if you want to). If you find that you can't? No big deal! It's not what the Web was meant for.
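
      As a loose sketch of what "document-driven" means here (the endpoint and field names are made up), the document is the form; the "application" is whatever process receives the submission on the other end:

      ```html
      <!-- A paper-form analogue: the page just describes what to fill in;
           submitting it hands the data over to a separate process. -->
      <form action="/apply" method="post">
        <label>Full name <input name="name" required></label>
        <label>Date of birth <input name="dob" type="date"></label>
        <button>Submit application</button>
      </form>
      ```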

    2. There is no good way to develop a UI in HTML/CSS/JS

      So don't.

  13. Oct 2022
    1. @1:10:20

      With HTML you have, broadly speaking, an experience and you have content and CSS and a browser and a server and it all comes together at a particular moment in time, and the end user sitting at a desktop or holding their phone they get to see something. That includes dynamic content, or an ad was served, or whatever it is—it's an experience. PDF on the other hand is a record. It persists, and I can share it with you. I can deliver it to you [...]

      NB: I agree with the distinction being made here, but I disagree that the former description is inherent to HTML. It's not inherent to anything, really, so much as it is emergent—the result of people acting as if they're dealing in live systems when they shouldn't.

    2. @48:20

      I should actually add that the PDF specification only specifies the file format and very few what we call process requirements on software, so a lot of those sort of experiential things are actually not defined in the PDF spec.

  14. Sep 2022
    1. Ever tried to look up some news from 12 years ago? Back in library days you were able to do that. On news portals, most articles are deleted after a year, and on newspaper web sites you hardly ever get access to the archives – even with a subscription.

      This is a massive failure of infrastructure (and education/"professionalism"—by and large, most people whose careers are in operating or maintaining Web infrastructure haven't been inculcated into or adopted the sort of "code of ethics" that sees this as a failure).

      The fix might just be for something like the Internet Archive to get into training or selling professional services for handling companies' "Web presence, done the right way". (This would def. take some organizational restructuring, however.) I'd like to see, for example, IA-certified partner organizations that uphold the principles described here and the original vision for the Web, and professional associations that work hard at making sure the status quo improves a lot over what's common today (and doesn't slide back).

    1. We need not interpret in the Jewish or etymological sense the dictum of Renan: "I do not think it possible for any one to acquire a clear notion of history, its limits, and the amount of confidence to be placed in the different categories of historical investigation, unless he is in the habit of handling original documents."

      Renan, Essais de morale et de critique, p. 36

  15. Aug 2022
    1. I can't get behind the call to anger here, even if I don't approve of Apple's stance on being the gatekeeper for the software that runs on your phone.

      Elsewhere (in the comments by the author on HN), he or she writes:

      The biggest problem I try to convey is that you have no way of knowing you'll get the rejection

      No, I think there were pretty good odds that before even submitting the first iteration it would have been rejected, based purely on the concept alone. This is not an app. It's a set of pages—only implemented with the iOS SDK (and without any of the affordances, therefore, that you'd get if you were visiting it in a Web browser). For whatever reason, the author thought this was a good idea, didn't review the App Store guidelines, and decided to proceed anyway.

      Then comes the part where Apple sends the rejection and tells the author that it's no different from a Web page and doesn't belong on the App Store.

      Here's where the problem lies: at the point where you're

      - getting rejections, and then
      - trying to add arbitrary complexity to the non-app for no reason other than to try to get around the rejection

      ... that's the point where you know you're wasting your time, if it wasn't already clear before—and, once again, it should have been. This is a series of Web pages. It belongs on the Web. (Or dumped into a ZIP and passed around via email.) It is not an app.

      The author in the same HN comment says to another user:

      So you, like me, wasted probably days (if not weeks) to create a fully functional app, spent much of that time on user-facing functions that you would have probably not needed

      In other words, the author is solely responsible for wasting his or her own time.

      To top it off, they finish their HN comment with this lament:

      It's not like on Android where you can just share an APK with your friends.

      Yeah. Know what else allows you to "just" share your work...? (No APK required, even!)

      Suppose you were taking classes and wanted to know the rubric and midterm schedule. Only rather than pointing you to the appropriate course Web page or sharing a PDF or Word document with that information, the professor tells you to download an executable which you are expected to run on your computer and which will paint that information on the screen. You (and everyone else) would hate them—and you wouldn't be wrong to.

      I'm actually baffled why an experienced iOS developer is surprised by any of the events that unfolded here.

    1. The “work around” was to detect users in an IAB and display a message on first navigation attempt to prompt them to click the “open in browser” button early.

      That's a pretty deficient workaround, given the obvious downsides. A more robust workaround would be to make the cart stateless, as far as the server is concerned, for non-logged-in users; don't depend on cookies. A page request instead amounts to a request for the form that has this and this and this pre-selected ("in the cart"). Like with paper.
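
      A rough sketch of that idea (the parameter name and item IDs are invented): the cart travels in the URL itself, so any user agent, in-app browser or not, gets the same pre-filled form back without a cookie.

      ```html
      <!-- Request: GET /order?item=sku123&item=sku456 -->
      <!-- Nothing about the "cart" is stored server-side for anonymous users;
           the page simply re-selects whatever the URL says is in it. -->
      <form action="/checkout" method="get">
        <label><input type="checkbox" name="item" value="sku123"> Widget</label>
        <label><input type="checkbox" name="item" value="sku456"> Gadget</label>
        <button>Check out</button>
      </form>
      <script>
        const chosen = new URLSearchParams(location.search).getAll("item");
        for (const box of document.querySelectorAll('input[name="item"]')) {
          box.checked = chosen.includes(box.value);
        }
      </script>
      ```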

    1. I feel those implications are very intellectually liberating and are far more cognitively organic than the print paradigm which currently shapes our intellectual mindset

      There's a lot to be gained by reflecting on pre-Web paradigms and hewing to similar approaches—electing to be bound by the constraints. Print is a discipline, and the practices surrounding the print-based publishing industry comprise a form of technology in themselves.

      Related: linearity is a virtue.

  16. Jul 2022
    1. resulting HTML

      Imagine if this were just the format that the source document itself used...

    1. I recently started building a website that lives at wesleyac.com, and one of the things that made me procrastinate for years on putting it up was not being sure if I was ready to commit to it. I solved that conundrum with a page outlining my thoughts on its stability and permanence:

      It's worth introspecting on why any given person might hesitate to feel that they can commit. This almost always comes down to "maintainability"—websites are, like many computer-based endeavors, thought of as projects that have to be maintained. This is a failure of the native Web formats to appreciably make inroads as a viable alternative to traditional document formats like PDF and Word's .doc/.docx (or even the ODF black sheep). Many people involved with Web tech themselves have difficulty conceptualizing Web documents in these terms, which is unfortunate.

      If you can be confident that you can, today, bang out something in LibreOffice, optionally export to PDF, and then dump the result at a stable URL, then you should feel similarly confident about HTML. Too many people have mental guardrails preventing them from grappling with the relevant tech in this way.

  17. May 2022
    1. Building and sharing an app should be as easy as creating and sharing a video.

      This is where I think Glitch goes wrong. Why such a focus on apps (and esp. pushing the same practices and overcomplicated architecture as people on GitHub trying to emulate the trendiest devops shovelware)?

      "Web" is a red herring here. Make the Web more accessible for app creation, sure, but what about making it more accessible (and therefore simpler) for sharing simple stuff (like documents comprising the written word), too? Glitch doesn't do well at this at all. It feels less like a place for the uninitiated and more like a place for the cool kids who are already slinging/pushing Modern Best Practices hang out—not unlike societal elites who feign to tether themself to the mast of helping the downtrodden but really use the whole charade as machine for converting attention into prestige and personal wealth. Their prices, for example, reflect that. Where's the "give us, like 20 bucks a year and we'll give you better alternative to emailing Microsoft Office documents around (that isn't Google Sheets)" plan?

    1. On the other hand, the notion of the “document” that is intrinsic to web development today is overdetermined by the legacy of print media.

      I dunno. I think all the things about dynamism and liveness that follow this claim are true in the minds of most people. The rarity is for people to conceive of content on the Web (or elsewhere rendered to a computer screen) as capable of being imbued with the fixity of print. Everything feels transient and rests on a shaky, fleeting foundation.

    1. it's not as simple as copying homepage.php to homepagenew.php, hacking on it until it works, and then renaming homepagenew.php to homepage.php

      It actually can be easier than that if the only reason PHP is involved is for templating and you don't want CGI. (NB: This is admittedly contra "the mildly dynamic website" described in the article).

      If you're Alice of alice.example.com, then you open up your local copy of e.g. https://alice.example.com/sop/pub.app.htm, proceed to start "hacking on it until it works", then rsync the output.

    1. Are you limited to PHP?

      No, but further: the question (about being "limited") presupposes something that isn't true.

      If you're doing PHP here, you're doing it wrong—unless the PHP application is written with great care (i.e. unidiomatically) and has some way to reveal its own program text (as first-class content). Otherwise, that's a complete failure to avoid the "elsewhere"-ness that we're trying to eradicate.

    2. who hosts that?

      Answer: it's hosted under the same auspices as the main content. The "editor" is first-class content (in the vein of ANPD); it's really just another document describing detailed procedures for how the site gets updated.

    1. Perhaps each page load shows a different, randomly chosen header image.

      That makes them constitute separate editions. It makes things messy.

    1. My argument for the use of the Web as a medium for publishing the procedures by which the documents from a given authority are themselves published shares something in common with the argument for exploiting Lisp's homoiconicity to represent a program as a data structure that is expressed like any other list.

      There are traces here as well from the influence of the von Neumann computational model, where programs and data are not "typed" such that they belong to different "classes" of storage—they are one and the same.

    1. Updating the script

      This is less than ideal. Besides non-technical people needing to wade into the middle of (what very well might appear to them to be a blob of) JS to update their site, here are some things that Zonelets depends on JS for:

      1. The entire contents of the post archives page
      2. The footer
      3. The page title

      This has real consequences for e.g. the archivability of a Zonelets site.

      The JS-editing problem itself could be partially ameliorated with something like the polyglot trick used on flems.io and/or the way triple scripts do runtime feature detection using shunting (see the sketch below). When the script is sourced via script element from another page, it behaves as JS, but when visited directly as the browser destination it is treated like HTML and gets its own DOM tree, which makes the necessary modifications easier for the script itself. Instead of requiring the user to edit it as freeform text, provide a structured editing interface, so e.g. adding a new post is as simple as clicking the synthesized "+" button in the list of posts, copying the URL of the post in question, and then pasting it in—to a form field. The Zonelets script itself should take care of munging it into the appropriate format upon form "submission". It can also, right there, take care of the escaping issue described in the FAQ—allow the user to preview the generated post title and fix it up if need be.

      Additionally, the archives page need not be dynamically generated by the client—or rather, it can be dynamically filled in exactly once per update—on the author's machine, and then be reified into static HTML, with the user being instructed to save it and overwrite the served version. This gets too unwieldy for next/prev links in the footer, but (a) those are non-essential, and don't even get displayed for no-JS users right now, anyway; and (b) it can be seen to violate the entire "UNPROFESSIONAL" ethos.

      Alternatively, the entire editing experience can be complemented with bookmarklets.
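
      A loose sketch of the detection half of that idea (not Zonelets' actual code; it assumes the polyglot arranges for its code to run inline when opened directly, and renderSiteChrome and appendPostEntry are hypothetical stand-ins for the existing rendering logic and the structured editor):

      ```js
      // When this file is pulled in via <script src="zonelets.js"> from a post
      // page, document.currentScript points at that external script element;
      // when the polyglot file is opened directly, its code runs inline.
      const loadedFromAnotherPage =
        document.currentScript && document.currentScript.src !== "";

      if (loadedFromAnotherPage) {
        renderSiteChrome(); // normal duty: header, footer, archives, prev/next
      } else {
        // Standalone mode: offer structured editing instead of freeform JS edits.
        const addButton = document.createElement("button");
        addButton.textContent = "+ Add post";
        addButton.onclick = () => {
          const url = prompt("URL of the new post:");
          if (url) appendPostEntry(url); // munges the URL into the post-list format
        };
        document.body.append(addButton);
      }
      ```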

    1. it’s hard to look at recent subscription newsletter darling, Substack, without thinking about the increasingly unpredictable paywalls of yesteryear’s blogging darling, Medium. In theory you can simply replatform every five or six years, but cool URIs don’t change and replatforming significantly harms content discovery and distribution.
  18. Apr 2022
    1. NB: This piece has been through many revisions (or, rather, many different pieces have been published with this identifier).

      Check it out with the Wayback Machine: https://web.archive.org/web/*/http://akkartik.name/about

      This appeal would have a greater effect if it weren't itself published in a format that exhibits so much of what was less desirable about the pre-modern Web—fixed layouts that show no concern for how I'm viewing this page and cause horizontal scrollbars, overly stylized MySpace-ish presentation, and a general imposition of the author's preferences and affinity for kitsch above all else—all things that we don't want.

      I say this as someone who is not a fan of the trends in the modern Web. Responsive layouts and legible typography are not casualties of the modern Web, however. Rather, they exhibit the best parts of its maturation. If we can move the Web out of adolescence and get rid of the troublesome aspects, we'd be doing pretty good.

    1. By the early 1970s, Barthes’ long-standing use of index cards was revealed through reproduction of sample cards in Roland Barthes by Roland Barthes (see Barthes, 1977b: 75). These reproductions, Hollier (2005: 43) argues, have little to do with their content and are included primarily for reality-effect value, as evidence of an expanding taste for historical documents

      While the three index cards of Barthes that were reproduced in the 1977 edition of Roland Barthes by Roland Barthes may have been "primarily for reality-effect value as evidence of an expanding taste for historical documents" as argued by Hollier, it does indicate the value of the collection to Barthes himself as part of an autobiographical work.

      I've noticed that one of the cards is very visibly about homosexuality, at a time when public discussion of the term was still largely taboo. It would be interesting to have a full translation of the three cards to verify Hollier's claim, as at least this one does indicate public consumption of the beginnings of changing attitudes toward previously taboo subject matter, even for a primarily English-speaking audience which may not have been able to read or understand the cards but would at least have been able to guess the topic.

      At least some small subset of the general public might have grown up with an index-card-based note-taking practice and guessed at what their value may have been, though by that point index-card note systems were largely on their way out.

    1. There's way too much excuse-making in this post.

      They're books. If there's any defensible* reason for making the technical decision to go with "inert" media, then a bunch of books has to be it.

      * Even this framing is wrong. There's a clear and obvious impedance mismatch between the Web platform as designed and the junk that people squirt down the tubes at people. If there's anyone who should be coming up with excuses to justify what they're doing, that burden should rest upon the people perverting the vision of the Web and treating it unlike the way it's supposed to be used—not folks like acabal and amitp who are doing the right thing...

  19. Mar 2022
    1. Many of the items in the docuverse are not static, run-of-the-mill materials, i.e. unformatted text, graphics, database files, or whatever. They are, in fact, executable programs, materials that from a docuverse perspective can be viewed as Executable Documents (EDs). Such programs run the gamut from the simplest COBOL or C program to massive expert systems and FORTRAN programs. Since the docuverse address scheme allows us to link documents at will, we can link together compiled code, source code, and descriptive material in hypertext fashion. Now, if, in addition, we can prepare and link to an executable document an Input-Output Document (IOD), a document specifying a program's input and output requirements and behavior, and an RWI describing the IOD, we can entertain the notion of integrating data and programs that were not originally designed to work together.
  20. citeseerx.ist.psu.edu citeseerx.ist.psu.edu
    1. The complete overlapping of readers’ and authors’ roles are important evolution steps towards a fully writable web, as is the ability of deriving personal versions of other authors’ pages.
  21. Feb 2022
    1. What is a document in this world? A list of block addresses.

      Documents with paragraph structures are similar to albums with songs or perhaps digital playlists with individual songs that can be in a variety of different orders.

    1. and keep your site consistent

      But maybe you don't need to do that. Maybe it would be instructive to take lessons from the traditional (pre-digital) publishing industry and consider how things like "print runs" and "reissues" factored in.

      If you write a blog post in 2017 and the page is acceptable*, and then five years later you're publishing a new post today in 2022 under new design norms and trying to retrofit that design onto your content from 2017, then that sounds a lot like a reprint. If that makes sense and you want to go ahead and do that, then fair enough, but perhaps first consider whether it does make sense. Again, that's what you're doing—every time you go for a visual refresh, it's akin to doing a new run for your entire corpus. In the print industry (even the glossy ones where striking a chord visually was and is something considered to merit a lot of attention), publishers didn't run around snapping up old copies and replacing them with new ones. "The Web is different", you might say, but is it? Perhaps the friction experienced here—involved with the purported need to pick up a static site generator and set your content consistently with templates—is actually the result of fighting against the natural state of even the digital medium?

      * ... and if you wrote a blog post in 2017 and the page is not acceptable now in 2022, maybe it's worth considering whether it was ever really acceptable—and whether the design decisions you're making in 2022 will prove to be similarly unacceptable in time (and whether you should just figure out the thing where that wouldn't be the case, and instead start doing that immediately).

    1. I used Publii for my blog, but it was very constraining in terms of its styling

      This is a common enough feeling (not about Publii, specifically; just the general concern for flexibility and control in static site generators), but when you pull back and think in terms of normalcy and import, it's another example of how most of what you read on the internet is written by insane people.

      Almost no one submitting a paper for an assignment or to a conference cares about styling the way that the users of static site generators (or other web content publishing pipelines) do. Almost no one sending an email worries about that sort of thing, either. (The people sending emails who do care a lot about it are usually doing email campaigns, and not normal people carrying out normal correspondence.) No one publishing a comment in the thread here—or a comment or post to Reddit—cares about these things like this, nor does anyone care as much when they're posting on Facebook.

      Somehow, though, when it comes to personal Web sites, including blogs, it's MySpace all over again. Visual accoutrement gets pushed to the foreground, with emphasis on the form of expression itself, often with little relative care for the actual content (e.g. whether they're actually expressing anything interesting, or whether they're being held back from expressing something worthwhile by these meta-concerns that wouldn't even register if happening over a different medium).

      When it comes to the Web, most instances of concern for the visual aesthetic of one's own work are distractions. It might even be prudent to consider those concerns to be a trap.

    1. keeping your website's look 'up to date' requires changes

      Yeah, but...

      Keeping your website's look 'up to date' requires changes, but keeping your website up does not require "keeping its look 'up to date'".

    2. The problem almost certainly starts with the conception of what we're doing as "building websites".

      When we do so, we adopt the mindset of working on systems

      If your systems work compromises the artifacts then it's not good work

      This is part of a broader phenomenon, which is that when computers are involved with absolutely anything, people seem to lose their minds; good sensibilities just go out the window

      low expectations from everyone; everyone is so used to excusing bad work

      sui generis medium

      violates the principle of least power

      what we should be doing when grappling with the online publishing problem—which is what this is; that's all it is—is, instead of thinking in terms of working on systems, thinking about this stuff in such a way that we never lose sight of the basics; the thing that we aspire to do when we want to put together a website is to deal in

      documents and their issuing authority

      That is, a piece of content and its name (the name is a qualified name that we recognize as valid only when the publisher has the relevant authority for that name, determined by its prefix; URLs)

      that's it that's all a Web site is

      anything else is auxiliary

      really not a lot different from what goes on when you publish a book: take a manuscript through final revisions for publication and then get an ISBN issued for it

      so the problem comes from the industry

      people "building websites" are like politicians doing bad work and their constituents not holding them accountable, because that's not how politics works; you don't get held accountable for doing bad work

      so the thing to do is to recognize that if we're thinking about "websites" from any other position (the things technical people try to steer us toward, like selecting a particular system and then propping it up, and how to interact with a given system to convince it to do the thing we want it to do), then we're doing it wrong

      we're creating content and then giving it a name

  22. Dec 2021
    1. Desired workflow:

      1. I navigate to the APL login page https://austin.bibliocommons.com/user/login
      2. I invoke a bookmarklet on the login page that opens a new browser window/tab
      3. In the second tab, I navigate here—to a locally saved copy of (a facsimile of) my library card
      4. I invoke a bookmarklet on my library card to send the relevant details to the APL login page using window.postMessage
      5. The bookmarklet set up in step 2 receives the details, fills in the login form, and automatically "garbage collects" the second tab

      Some other thoughts: We can maintain a personal watchlist/readlist similarly. This document (patron ID "page") itself is probably not a good place for this. It is, however, a good place to reproduce a convenient copy of the necessary bookmarklets. (With this design, only one browser-managed bookmarklet would be necessary; with both bookmarklets being part of the document contents, the second bookmarklet used for step 4 can just be invoked directly from the page itself—no need to follow through on actually bookmarking it.)
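
      A rough sketch of the two pieces described above (the selectors, field names, and the card copy's location are all hypothetical; each snippet would be wrapped in a javascript: URL wherever an actual bookmarklet is wanted):

      ```js
      // Invoked on the APL login page (steps 2 and 5): open the card copy,
      // wait for it to post the credentials back, fill the form, close the tab.
      (() => {
        const cardTab = window.open("https://example.local/library-card.html");
        window.addEventListener("message", (event) => {
          if (event.source !== cardTab) return; // ignore everything else
          document.querySelector("#user_id").value = event.data.barcode;
          document.querySelector("#user_pin").value = event.data.pin;
          cardTab.close(); // "garbage collect" the second tab
        });
      })();

      // Invoked on (or embedded in) the library-card page (step 4): send the
      // details to the window that opened this tab.
      (() => {
        window.opener.postMessage(
          {
            barcode: document.querySelector("[data-barcode]").textContent.trim(),
            pin: document.querySelector("[data-pin]").textContent.trim(),
          },
          "*" // in practice, pass the login page's origin instead of "*"
        );
      })();
      ```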

    1. I buy domains on a regular basis and often from more than one registrar because of a better deal or TLD availability. As a result, I tend to forget I have some domains! True story, I once ran a WHOIS search on a domain I own.

      The subtext here is, "that's why i created BeachfrontDigital". But this shows how "apps" (and systems) have poisoned how we conceptualize problems and their solutions.

      The simplest solution to the problem described is a document, not a never-finished/never-production-ready app. Bespoke apps have lots of cost overhead. Documents, on the other hand—even documents with rich structure—are cheap.

    1. If you try to export the document in an internet-compatible format like HTML, you get a mess.

      I've noted elsewhere that, despite WYSIWYG editors' reputation for how they handle HTML, modern mainstream Web development practices are so bad today that just typing a bunch of junk into LibreOffice and saving as HTML results in one of the most economical ways to do painless authoring of Web content...

  23. Nov 2021
  24. Oct 2021
    1. b) Distribution arrangements: Documents handed over by the associations are distributed to the pupils, to be given to their parents, as and when they are handed in.
  25. Sep 2021
    1. Around 1:48:00

      What if every library that you use had, like, some interactive documentation or interactive representation? [...] The author could maybe add annotations.

    1. playing house

      This is how I feel about most people's personal websites. Few people have homepages these days, but even for people who do, even fewer of those homes have anyone really living there. All their interesting stuff is going on on Twitter, GitHub, comments on message boards...

      Really weird when this manifests as a bunch of people having really strong opinions about static site tech stacks and justifications for frontend tech that in practice they never use, because the content from any one of their profiles on the mainstream social networks outstrips their "home" page 100x to 1.

  26. Aug 2021
    1. I looked at workflows that were similar to GitHub Pages. I realized that what I was craving was very simple: Write text. Put on internet. Repeat.
    1. Funnily enough, I've been on an intellectual bent in the other direction: that we've poisoned our thinking in terms of systems, for the worse. This shows up when trying to communicate about the Web, for example.

      It's surprisingly difficult to get anyone to conceive of the Web as a medium suited for anything except the "live" behavior exhibited by the systems typically encountered today. (Essentially, thin clients in the form of single-page apps that are useless without a host on the other end for servicing data and computation requests.) The belief/expectation that content providers should be given a pass for producing brittle collections of content that should be considered merely transitory in nature just leads to even more abuse of the medium.

      Even actual programs get put into a ruddy state by this sort of thinking. Often, I don't even care about the program itself, so much as I care about the process it's applying, but maintainers make this effectively inextricable from the implementation details of the program itself (which OS version by which vendor it targets, etc.)

    1. the Web has gradually evolved from the original static linked document model whose language was HTML, to a model of interconnected programming environments whose language is JavaScript
  27. Jul 2021
    1. real or virtual

      interesting taxonomy; useful for communicating about a concerted effort towards a more document-oriented correction to the modern Web?

    1. “But how can I automate updates to my site’s look and feel?!”

      Perversely, the author starts off getting this part wrong!

      The correct answer here is to adopt the same mindset used for print, which is to say, "just don't worry about it; the value of doing so is oversold". If a print org changed their layout sometime between 1995 and 2005, did they issue a recall for all extant copies and then run around trying to replace them with ones consistent with the new "visual refresh"? If an error is noticed in print, it's handled by correcting it and issuing another edition.

      As Tschichold says of the form of the book (in The Form of the Book):

      The work of a book designer differs essentially from that of a graphic artist. While the latter is constantly searching for new means of expression, driven at the very least by his desire for a "personal style", a book designer has to be the loyal and tactful servant of the written word. It is his job to create a manner of presentation whose form neither overshadows nor patronizes the content [... whereas] work of the graphic artist must correspond to the needs of the day

      The fact that people publishing to the web regularly do otherwise—and are expected to do otherwise—is a social problem that has nothing to do with the Web standards themselves. In fact, it has been widely lamented for a long time that with the figurative death of HTML frames, you can no longer update something in one place and have it spread to the entire experience using plain ol' HTML without resorting to a templating engine. It's only recently (with Web Components, etc.) that this has begun to change. (You can update the style and achieve consistency on a static site without the use of a static site generator—where every asset can be handcrafted, without a templating engine.) But it shouldn't need to change; the fixity is a strength.
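
      For what it's worth, a minimal sketch of the Web Components route mentioned above (the element name and markup are invented): every hand-authored page includes one script and one custom tag, and the shared chrome can then be changed in a single place without a templating engine.

      ```js
      // site-chrome.js: a custom element that stamps out the shared footer.
      class SiteFooter extends HTMLElement {
        connectedCallback() {
          this.innerHTML =
            '<footer><a href="/">Home</a> | <a href="/archive.html">Archive</a></footer>';
        }
      }
      customElements.define("site-footer", SiteFooter);
      // Each page just contains:
      //   <script src="/site-chrome.js" defer></script>
      //   <site-footer></site-footer>
      ```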

      As Tschichold goes on to say of the "perfect" design of the book, "methods and rules upon which it is impossible to improve have been developed over centuries". Creators publishing on the web would do well to observe, understand, and work similarly.

    2. it is impossible to build a new web browser

      Perhaps it's not possible. (Probably not, even.) On the other hand, it would be very much possible to build a web browser capable of handling this page, and to do so in a way that produces an appreciable result in 10 minutes of hacking around with the lowliest of programming facilities: text editor macros—that is, if only it had actually been published as a webpage. Is it possible to do the same for this PDF, let alone others? No.

    1. It's great to enhance the Internet Archive, but you can bet I'm keeping my local copy too.

      Like the parent comment by derefr, my actual, non-hypothetical practice is saving to the Wayback Machine. Right now I'm probably saving things at a rate of half a dozen a day. For those who are paranoid and/or need offline availability, there's Zotero https://www.zotero.org. Zotero uses Gildas's SingleFile for taking snapshots of web pages, not PDF. As it turns out, Zotero is pretty useful for stowing and tracking any PDFs that you need to file away, too, for documents that are originally produced in that format. But there's no need to (clumsily) shoehorn webpages into that paradigm.

      If you do the print-to-PDF workflow outlined earlier in the thread, you'll realize that it doesn't scale well (it requires too much manual intervention and discipline, including taking care to make sure everything is filed correctly; hopefully you remember the ad hoc system you thought up last time you saved something), that it's destructive, and that it ultimately gives you an opaque blob. SingleFile-powered Zotero mostly solves all of this, and it does it in a way that's accessible in one or two clicks, depending on your setup. If you ever actually need a PDF, you can of course go back to your saved copy and produce a PDF on-demand, but it doesn't follow that you should archive the original source material in that format.

      My only reservation is that there is no inverse to the SingleFile mangling function, AFAIK. For archival reasons, it would be nice to be able to perfectly reconstruct the original, pre-mangled resources, perhaps by storing some metadata in the file that details the exact transformations that are applied.

    1. You can use LibreOffice's Draw

      Nevermind LibreOffice Draw, you can use LibreOffice Writer to author the actual content. That this is never seriously pushed as an option (even, to my knowledge, by the LibreOffice folks themselves) is an indictment of the computing industry.

      Having said that, I guess there is some need to curate a set of templates for small and medium size businesses who want their stuff to "pop".

    1. I’m not confident I’ll be able to keep a server running to serve up my notes, so I bundled them up into an archive of pregenerated HTML, which anyone who has a copy can unpack and read, without requiring any online resources.
  28. Jun 2021
    1. I tried all the different static site generators, and I was annoyed with how everything was really complicated. I also came to the realization that I was never going to need a content management system with the amount of blogging I was doing, so I should stop overanalyzing the problem and just do the minimum thing that leads to more writing.

      Great way to put it. One thing that I keep trying to hammer is that the "minimum thing" here looks more like "open up a word processor, use the default settings, focus on capturing the content—i.e. writing things out just as you would if you were dumping these thoughts into a plain text file or keeping it to, say, the subset of Markdown that allows for paragraph breaks, headings, and maybe ordered and unordered lists—and then use your word processor's export-to-HTML support to recast it into the format that lets people use their browser to read it, and then FTP/scp/rsync that to a server somewhere".

      This sounds like I'm being hyperbolic, and I kind of am, but I'm also kind of not. The process described is still more reasonable than the craziness that people (HN- and GitHub-type people) end up leaping into when they think of blogging on a personal website. Think about that. Literally uploading Microsoft Word-generated posts to a server* is better than the purpose-built workflows that people are otherwise coming up with (and pushing way too hard).

      (*Although, just please, if you are going to do this, then do at least export to HTML and don't dump them online as PDFs, a la berkshirehathaway.com.)

  29. May 2021
  30. Apr 2021
    1. Documents should offer the same granularity.

      That neither content creators nor browser vendors are particularly concerned with the production and consumption of documents, as such, is precisely the issue. This is evident in the banner that the majority of the work has occurred under over the last 10+ years: they're the Web Hypertext Application Technology Working Group.

      No one, not even the most well-intentioned (such as the folks at Automattic who are responsible for the blogging software that made Christina's post here possible), sees documents when they think of the Web. No, everything is an app—take this page, for example; even the "pages" that WordPress produces are facets of an application. Granted, it's an application meant for reading the written word (and meant for occasionally writing it), but make no mistake, it's an application first, and a "document" only by happenstance (i.e. the absence of any realistic alternative to HTML & co for application delivery).

  31. Mar 2021
    1. A copy of the file containing copies of the following documents will be sent to the Rectorat – DASAEE as soon as the appeal becomes effective

      list

    1. I was pretty annoyed with myself for having fallen for the trap of not documenting my own systems, but not sure how I could have remembered all of the Hugo-isms

      I've explained such a system, and promised Andy Chu an example that I've yet to be able to complete, but it comes down to this:

      A website is fundamentally a document repository. One of the first documents that you should store in that repository is one which explains, in detail, the procedures for provisioning the host powering the site and how content gets published. (Even better if it's so detailed that the procedures exhibit a degree of rigor such that a machine can carry them out, rather than requiring manual labor by a human.)

  32. Feb 2021
  33. Dec 2020
    1. Brebeuf commenced his letter when he described the conversion, baptism, and happy death of some Hurons. At a council of the Huron chiefs, Brébeuf produces letters from Champlain and Duplessis-Bochart, who exhort the tribesmen to follow the teaching of the missionaries. The Hurons are in constant dread of hostile incursions from the Iroquois. In August, Mercier and Pijart arrive from Quebec. Brébeuf recounts the many perils of the journey hither, and the annoyances and dangers to which apostles of the faith are continually exposed among the savages. But he offers much encouragement. Brébeuf closes his account with an expression of much hope for the future success of their labors. Mingled, however, with fear lest these savage neophytes may grow restive when placed under greater restrictions on their moral and social conduct, than have thus far seemed advisable to the cautious missionaries.

    1. Summary

      This design of the Cambria-Automerge integration has several useful properties:

      - Multi-schema: The document can be read and written in any schema at any time. A document has no single “canonical” schema—just a log of writes from many schemas.
      - Self-evolving: The logic for evolving formats is contained within the document itself. An older client can read newer versions without upgrading its code, since it can retrieve new lenses from the document itself.
      - Future-proof: Edits to the document are resilient to the addition of future lenses. Since no evolution is done on write, old changes can be evolved using lenses that didn’t even exist at the time of the original change.

      One area for future research is further exploring the interaction between the guarantees provided by lenses and CRDTs. For example, Automerge has different logic for handling concurrent edits on a scalar value and an array. But what happens if the concurrent edits are applied to an array in one schema, and a scalar value in another?
  34. Oct 2020
  35. Sep 2020
  36. Aug 2020
  37. icla2020.jonreeve.com icla2020.jonreeve.com
    1. The Union Jack, Pluck and The Halfpenny Marvel

      Cool link I found for anyone interested. Apparently these stories were really popular among kids who lived in Dublin due to their narratives of the Wild West, although attached to these stories were (not surprisingly) nationalist views.

      The original purpose of the stories was to put the "Penny Dreadfuls" (sensational stories that cost a penny) out of business. Apparently, these magazines were sold more cheaply and boasted quality content that in reality wasn't that good.

  38. Apr 2020
    1. the cost of reading consent formats or privacy notices is still too high.
    2. Third, the focus should be centered on improving transparency rather than requesting systematic consents. Lack of transparency and clarity doesn’t allow informed and unambiguous consent (in particular, where privacy policies are lengthy, complex, vague and difficult to navigate). This ambiguity creates a risk of invalidating the consent.

      systematic consents

    3. the authority found that each digital platform’s privacy policies, which include the consent format, were between 2,500 and 4,500 words and would take an average reader between 10 and 20 minutes to read.
    1. the Web has become many things, but documents and, by extension, publications, have remained close to the heart of the Web.
  39. Mar 2020
    1. Because humans hate being bored or confused and there are countless ways to make decisions look off-puttingly boring or complex — be it presenting reams of impenetrable legalese in tiny greyscale lettering so no-one will bother reading
    1. "I have read and agree to the terms and conditions” may well be the most common lie in the history of civilization. How many times do you scroll and click accept without a second thought? You’re not alone. Not only they go unread, but they also include a self-updating clause requiring you to go back and review those documents for changes. You’re agreeing to any changes, and their consequences, indefinitely. 
    1. And, frankly, we’re abetting this behavior. Most users just click or tap “okay” to clear the pop-up and get where they’re going. They rarely opt to learn more about what they’re agreeing to. Research shows that the vast majority of internet users don’t read terms of service or privacy policies — so they’re probably not reading cookie policies, either. They’re many pages long, and they’re not written in language that’s simple enough for the average person to understand.
    2. But in the end, they’re not doing much: Most of us just tediously click “yes” and move on.
    3. The site invites you to read its “cookie policy,” (which, let’s be honest, you’re not going to do), and it may tell you the tracking is to “enhance” your experience — even though it feels like it’s doing the opposite.
  40. Apr 2019
  41. Feb 2019
  42. Jul 2018
  43. course-computational-literary-analysis.netlify.com course-computational-literary-analysis.netlify.com
    1. My diary informs me

      This is an interesting reversal of typical subject-object relations. The diary, which is an object, is grammatically positioned as an informative agent, while Miss Clack, a person, becomes an object that is acted upon. Some part-of-speech tagging in scenes that feature document evidence would help us to better understand when and why this happens, and why it might be significant.

  44. Jul 2017
  45. Jul 2016
    1. Page 41

      discussions of digital scholarship tend to distinguish implicitly or explicitly between data and documents. Some view data and documents as a continuum rather than a dichotomy; in this sense, data such as numbers, images, and observations are the initial products of research, and publications are the final products that set research findings in context.

  46. Jan 2016
    1. Set Semantics¶ This tool is used to set semantics in EPUB files. Semantics are simply, links in the OPF file that identify certain locations in the book as having special meaning. You can use them to identify the foreword, dedication, cover, table of contents, etc. Simply choose the type of semantic information you want to specify and then select the location in the book the link should point to. This tool can be accessed via Tools->Set semantics.

      Though it’s described in such a simple way, there might be hidden power in adding these tags, especially when we bring eBooks to the Semantic Web. Though books are the prime example of a “Web of Documents”, they can also contribute to the “Web of Data”, if we enable them. It might take a while, but it could happen.
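
      For reference, a sketch of what those semantic links look like in an EPUB 2 OPF <guide> (the hrefs and titles are just examples; EPUB 3 expresses the same thing with epub:type landmarks in the nav document):

      ```xml
      <guide>
        <reference type="cover"      title="Cover"             href="cover.xhtml"/>
        <reference type="toc"        title="Table of Contents" href="toc.xhtml"/>
        <reference type="dedication" title="Dedication"        href="dedication.xhtml"/>
        <reference type="foreword"   title="Foreword"          href="foreword.xhtml"/>
      </guide>
      ```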