10,000 Matching Annotations
  1. Last 7 days
    1. There is no way to link PRs; you have to manually note changes in commit messages. If changes are made to the parent, you have to rebase all of the child branches.
    2. Assess the team’s trunk-based development maturity. Ensure developers are comfortable with concepts like rebasing over merging, using short-lived branches, and rapid integration before layering on stacking.
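      The "rebase all of the child branches" pain is eased on recent Git. A minimal sketch, assuming Git 2.38+ and hypothetical branch names feature-1/feature-2 stacked on main: `git rebase --update-refs` replays the whole stack and moves every branch ref it passes over.

      ```shell
      # Demo (hypothetical names): feature-1 and feature-2 are stacked on main.
      # --update-refs (Git 2.38+) moves feature-1's ref while rebasing feature-2.
      dir=$(mktemp -d); cd "$dir"
      git init -q -b main                       # -b needs Git 2.28+
      git config user.email demo@example.com && git config user.name demo
      echo base > file; git add file; git commit -qm base
      git checkout -qb feature-1; echo one >> file; git commit -aqm one
      git checkout -qb feature-2; echo two > other; git add other; git commit -qm two
      git checkout -q main; echo fix > hotfix; git add hotfix; git commit -qm hotfix
      git checkout -q feature-2
      git rebase -q --update-refs main          # replays both branches in one pass
      git merge-base --is-ancestor main feature-1 && echo "feature-1 moved too"
      ```

      Without `--update-refs`, only feature-2 would move and feature-1 would still need a manual rebase.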
    1. Vitest makes writing tests directly within your source code easy, eliminating the need for separate test files. This approach, known as in-source testing, is useful when you want to quickly test individual functions without the overhead of creating and managing separate test files.
  2. Jun 2025
    1. When shallow routing, you may want to render another +page.svelte inside the current page.

      Good to know that this is possible!

      Use it just like any other component... import PhotoPage from './[id]/+page.svelte';

  3. May 2025
    1. For larger projects with multiple interconnected components, monorepos can be a game-changer, providing efficient dependency management, atomic commits, simplified code sharing, and an improved developer experience.
    2. Node_modules is the heaviest object in the universe for a reason.
    3. Interact with live systems whenever feasible instead of mocking components to uncover potential integration issues.
    1. There is growing support across tooling for a shared .ignore file as well. Precedence should likely be .prettierignore > .ignore > .gitignore.
    1. It is more clean to have all those ignore files in package.json rather than: .gitignore .eslintignore .stylelintignore .prettierignore .whateverignore ...
    1. To dig deep on this though, .gitignore isn't a standard. It's a well documented and familiar syntax from a specific, widely adopted, tool. Maybe we can even pretend the git implementation is a reference implementation too. There's no spec though and, importantly, it isn't considered a standard by the git maintainers themselves. That's why I kept calling it a quasi-standard in my original post.
    1. There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0, for success), to conform with the C/C++ standard.

      It sounds like he's proposing aligning with the sysexits.h standard?

      But I'm not clear why he refers to "exit codes to the range 64 - 113 (in addition to 0, for success)" as user-defined. To me, these seem the complete opposite: those are reserved for pre-defined, standard exit codes — with 0 for success being the most standard (and least user-defined) of all!

      Why use exit codes from 1-63 for user-defined errors??
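      Whatever the right reading of "user-defined", using the sysexits.h values from a script is straightforward. A minimal sketch (the numeric codes are the standard sysexits.h values; check_file is a hypothetical helper):

      ```shell
      # sysexits.h-style exit codes, by name.
      EX_OK=0        # successful termination
      EX_USAGE=64    # command line usage error
      EX_NOINPUT=66  # cannot open input

      check_file() {
        [ "$#" -eq 1 ] || { echo "usage: check_file <file>" >&2; return "$EX_USAGE"; }
        [ -r "$1" ] || { echo "cannot read: $1" >&2; return "$EX_NOINPUT"; }
        return "$EX_OK"
      }
      ```

      A caller can then branch on `$?` against these names instead of bare magic numbers.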

    2. An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78. It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.

      Eh, 0 and 64 - 78 are the only codes it defines. So if it had different codes defined before, what on earth were those codes before? Was only 0 "used"/defined here before? Nothing defined from 1-128? Or were the codes defined there different ones, like 20-42 and then they arbitrarily shifted these up to 64-78 one day? This is very unclear to me.

      Also unclear whether this is saying it won't update for any future changes after this, or if he hasn't even updated to align with this supposed "change". (Unclear because I can't figure out whether his "proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0, for success), to conform with the C/C++ standard" statement is actually conforming or rejecting the sysexits.h standard.)

      It seems that he's overreacting a bit here. It's hard to imagine there has been or will be any major changes to the sysexits.h. I would only imagine there being additions to, but not changes to because backwards compatibility would be of utmost concern.

    3. This seems awfully incomplete! What about errors like "The command was used incorrectly, e.g., with the wrong number of arguments, a bad flag, a bad syntax in a parameter, or whatever."?

      This is where a standard like

      https://man.freebsd.org/cgi/man.cgi?query=sysexits&sektion=3

      steps in and is useful to have!

    1. This interface has been deprecated and is retained only for compatibility. Its use is discouraged.

      This is great!!

      So... Why is this deprecated and what should be used instead?? Standardizing this stuff would be good, and this de facto standard seems as good as any!!

    1. BSD-derived OS's have defined an extensive set of preferred interpretations: Meanings for 15 status codes 64 through 78 are defined in sysexits.h.[15] These historically derive from sendmail and other message transfer agents, but they have since found use in many other programs.[16] It has been deprecated and its use is discouraged.

      [duplicate of https://hyp.is/12j9KjELEfCQc79IbTwQnQ/man.freebsd.org/cgi/man.cgi?query=sysexits&sektion=3 ]

      Why is this deprecated and what should be used instead?? Standardizing this stuff would be good, and this de facto standard seems as good as any!!

    1. @GwynethLlewelyn, this answer describes the de facto standard, which in my experience is far more widely adopted (and hence should take precedence) over the proposal in sysexits.h.
    1. For any new environments and databases, you can use just drizzle-kit migrate, and all the migrations together with init will be applied
    2. When you run migrate on a database that already has all the tables from your schema, you need to run it with the drizzle-kit migrate --no-init flag, which will skip the init step. If you run it without this flag and get an error that such tables already exist, drizzle-kit will detect it and suggest you add this flag.
    3. When you introspect the database, you will receive an initial migration without comments. Instead of commenting it out, we will add a flag to journal entity with the init flag, indicating that this migration was generated by introspect action
    1. root@51a758d136a2:~/test/test-project# npx prisma migrate diff --from-empty --to-schema-datamodel prisma/schema.prisma --script > migration.sql
       root@51a758d136a2:~/test/test-project# cat migration.sql
       -- CreateTable
       CREATE TABLE "test" (
           "id" SERIAL NOT NULL,
           "val" INTEGER,
           CONSTRAINT "test_pkey" PRIMARY KEY ("id")
       );
       root@51a758d136a2:~/test/test-project# mkdir -p prisma/migrations/initial
       root@51a758d136a2:~/test/test-project# mv migration.sql prisma/migrations/initial/
    1. While that change fixes the issue, there’s a production outage waiting to happen. When the schema change is applied, the existing GetUserActions query will begin to fail. The correct way to fix this is to deploy the updated query before applying the schema migration. sqlc verify was designed to catch these types of problems. It ensures migrations are safe to deploy by sending your current schema and queries to sqlc cloud. There, we run your existing queries against your new schema changes to find any issues.
    1. There is no shortage of command runners! Some more or less similar alternatives to just include:
    2. Recipes that start with #! are called shebang recipes, and are executed by saving the recipe body to a file and running it. This lets you write recipes in different languages:
    3. It isn't strictly necessary, but set -euxo pipefail turns on a few useful features that make bash shebang recipes behave more like normal, linewise just recipes:

       set -e makes bash exit if a command fails.
       set -u makes bash exit if a variable is undefined.
       set -x makes bash print each script line before it's run.
       set -o pipefail makes bash exit if a command in a pipeline fails. This is bash-specific, so isn't turned on in normal linewise just recipes.
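      The flag behavior described above is easy to see outside just, in plain bash (a minimal demo; nothing just-specific is assumed):

      ```shell
      # pipefail + -e: a failure anywhere in a pipeline aborts the script,
      # so "unreachable" is never printed.
      bash -c 'set -euo pipefail; false | true; echo unreachable' \
        || echo "pipefail caught the failing pipeline"

      # -u: referencing an undefined variable is a hard error.
      bash -c 'set -u; echo "$not_defined_anywhere"' 2>/dev/null \
        || echo "-u caught the undefined variable"
      ```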
    4. Just 1 file? How would you get it to load all of .env, .env.local, etc. like Vite does? I guess wrap every command with a dotenvx command as needed?

    1. A universal load function can return an object containing any values, including things like custom classes and component constructors. A server load function must return data that can be serialized with devalue
    1. So what I've been doing is using buildx to build images for multiple architectures, then you can pull those images with docker compose.

       # docker-bake.hcl
       variable "platforms" {
         default = ["linux/amd64", "linux/arm64"]
       }

       group "default" {
         targets = ["my_image"]
       }

       target "my_image" {
         dockerfile = "myimage.Dockerfile"
         tags      = ["myrepo/myimage:latest"]
         platforms = platforms
       }

       # Command
       docker buildx bake --push
    1. #!/usr/bin/env npx ts-node
       // TypeScript code

       Whether this always works in macOS is unknown. There could be some magic with node installing a shell command shim (thanks to @DaMaxContext for commenting about this). This doesn't work in Linux because Linux distros treat all the characters after env as the command, instead of considering spaces as delimiting separate arguments. Or it doesn't work in Linux if the node command shim isn't present (not confirmed that's how it works, but in any case, in my testing, it doesn't work in Linux Docker containers). This means that npx ts-node will be treated as a single executable name that has a space in it, which obviously won't work, as that's not an executable.
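      A common workaround for the multi-word-shebang problem described above is env's -S option (GNU coreutils 8.30+ and FreeBSD; its availability is the assumption here), which splits the remainder of the line into separate arguments:

      ```shell
      # With -S, env splits the string on spaces, so a shebang like
      #   #!/usr/bin/env -S npx ts-node
      # works on Linux too. Demonstrating the splitting behavior directly:
      env -S 'echo hello world'
      ```

      Without -S, env would look for a single executable literally named "echo hello world".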
    1. In order to fix #2554, I added a --script option to vite-node so it can be used in shebang! 🎊
    2. Another limitation in this implementation is that passing other options in the shebang to vite-node won't work, as I expect exact indexes in order to figure out what are the arguments to forward. It's not perfect but I think it's good enough to unblock people like me as a start. 👍
    1. It looks like ts-node achieved this by adding a --script-mode and a new bin script. TypeStrong/ts-node@1ad44bf
    2. As I understand it, vite-node would need two changes when running scripts this way: The -- separator shouldn't be required, and all args should be considered args of the calling script. The tsconfig.json for the script should be used.
    3. Using vite-node with a shebang enables no-build workflows for utilities in monorepos, which can reduce complexity and improves DX.
    1. Some ISPs are blocking or throttling SMTP port 25. Using port 587 is recommended.

      How is this better? Then everyone (spammers) will just use that port and that becomes the new de facto port. How does that solve anything?

    1. If this seems arbitrary and confusing, we understand! It’s a convention, just like how most programmers need to learn zero-based array indexing and then at some point it becomes second nature.
    1. obj.self = obj; stringified = devalue.stringify(obj); // '[{"message":1,"self":0},"hello"]'

      Genius. A super simple way (array indexes) to encode circular references in objects.

    1. There is a recent update that enables such functionality - https://github.com/moby/buildkit/releases/tag/dockerfile/1.7.0-labs To work with it, add a comment at the beginning of the Dockerfile: # syntax=docker.io/docker/dockerfile:1.7-labs
    1. Available in docker/dockerfile-upstream:master-labs. Will be included in docker/dockerfile:1.7-labs.
    2. Use the syntax parser directive to declare the Dockerfile syntax version to use for the build. If unspecified, BuildKit uses a bundled version of the Dockerfile frontend. Declaring a syntax version lets you automatically use the latest Dockerfile version without having to upgrade BuildKit or Docker Engine, or even use a custom Dockerfile implementation.
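      A minimal sketch of the directive in use (base image and paths are placeholders):

      ```dockerfile
      # syntax=docker.io/docker/dockerfile:1.7-labs
      # The directive must be the first line of the Dockerfile; BuildKit then
      # fetches that frontend version instead of using its bundled one.
      FROM alpine:3.19
      COPY . /app
      ```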
  4. Apr 2025
    1. This seems similar to what svelte-kit gives us, like import { POSTMARK_KEY } from '$env/static/private';

    1. FP: The functional programming submodule provides a better alternative to chaining: composition, which makes your code clean and safe and doesn't bloat your build.
    1. annotated tags point to a tag object in the object database.

       git tag -as -m msg annot
       cat .git/refs/tags/annot

       contains the SHA of the annotated tag object: c1d7720e99f9dd1d1c8aee625fd6ce09b3a81fef

       and then we can get its content with:

       git cat-file -p c1d7720e99f9dd1d1c8aee625fd6ce09b3a81fef
    1. Why file history can be important you ask In a commit ( or a series of commits ) there can be a lot of information that can explain decisions that were taken and why the code has evolved as it is right now. This information can be as valuable as the code itself so you can understand why I find --follow useful.
    1. I would argue that "whole tree" thinking is enhanced by --follow being the default. What I mean is when I want to see the history of the code within a file, I really don't usually care whether the file was renamed or not, I just want to see the history of the code, regardless of renames. So in my opinion it makes sense for --follow to be the default because I don't care about individual files; --follow helps me to ignore individual file renames, which are usually pretty inconsequential.
    1. You (and your collaborators) need to re-generate hooks every time there’s a change in .huskyrc.js. Re-generation could be bound to some events, but there’s no reliable way to cover all possible cases and unexpected behaviors would appear.

      Seems like you could just use a git hook (or several) to trigger the sync from js to .git/hooks?

    2. Since your hooks definition is not in one place anymore but in two (.huskyrc.js and .git/hooks/), suddenly you need boilerplate to keep JS world in sync with Git world.
    1. Similarly, if you use register / sign in you avoid confusion, but you also fit common usage.
    2. Google: Sign Out, Sign In, "Create an account"
    3. If you use "register/log in", there is no chance of confusion, and you lighten the cognitive load.
    4. I would be very careful with the "common usage" argument. For example: the use of sign up and sign in has a very pleasant symmetry which doubtless appeals to many people. Unfortunately, this symmetry reduces the difference by which the user recognizes the button she needs to just two letters. It's very easy to click sign up when you meant sign in.
    5. "Log in" is a valid verb where "Login" is a valid noun. "Signin", however, isn't a valid noun. On the other hand, "Signup" and "Sign up" have the same relationship, and if you use "Log in", you'll probably use "Register" as opposed to "Sign up". Then there's also "Log on" and "Logon", and of course "Log off" or "Log out".
    1. Not the third though - "Login" is a noun (if it is really a word at all): "What is your login?" The other two are verbs "to sign in", or "to log in".
    1. In the case of email, it can be argued that the widespread use of the unhyphenated spelling has made this compound noun an exception to the rule. It might also be said that closed (unhyphenated) spelling is simply the direction English is evolving, but good luck arguing that “tshirt” is a good way to write “t-shirt.”
    1. Vite's job is to get your source modules into a form that can run in the browser as fast as possible. To that end, we recommend separating static analysis checks from Vite's transform pipeline.
    2. Vite uses esbuild to transpile TypeScript into JavaScript which is about 20~30x faster than vanilla tsc, and HMR updates can reflect in the browser in under 50ms.
    1. One important thing to remember is that in regular strings the \ character needs to be escaped while in the regex literal (usually) the / character needs to be escaped. So /\w+\//i becomes new RegExp("\\w+/", "i")

      Easier/prettier in Ruby than JS

    1. Your design should depend strongly on your purpose.
    2. Ask yourself what is the main purpose of storing this data? Do you intend to actually send mail to the person at the address? Track demographics, populations? Be able to ask callers for their correct address as part of some basic authentication/verification? All of the above? None of the above? Depending on your actual need, you will determine either a) it doesn't really matter, and you can go for a free-text approach, or b) structured/specific fields for all countries, or c) country specific architecture.
    3. However, don't force users to supply postcode or region, they may not be used locally.
    4. Locality can be unclear, particularly the distinction between map locality and postal-locality. The postal locality is the one deemed by a postal authority which may sometimes be a nearby large town. However, the postcode will usually resolve any problems or discrepancies there, to allow correct delivery even if the official post-locality is not used.
    5. naming the settlement (city/town/village), which can be generically referred to as a locality
    1. In general, a locality is a particular place or location. More specifically, a locality should be defined as a distinct population cluster. Localities are commonly recognized as cities, towns, and villages; but they may also include other areas such as fishing hamlets, mining camps, ranches, farms and market towns. Localities are often lower-level administrative areas and they may consist of sub-localities, which are segments of a single locality. Sub-localities should not be confused for being the lowest level administrative area of a country, nor should they be confused as being separate localities.
    2. The regions into which a country is divided. Each region typically has a defined boundary with an administration that performs some level of government functions. These areas are commonly expected to manage themselves with a certain level of autonomy. Various administrative levels exist that can range from “first-level” administrative to “fifth-level” administrative. The higher the level number is the lower its rank will be on the administrative level hierarchy. For example, the US is made up of states (first-level), which are divided into counties (second-level) that consist of municipalities (third-level). For comparison, the United Kingdom (GB) is comprised of the four countries England, Scotland, Wales and Northern Ireland (first-level). These countries are made up of counties, districts and shires (second-level), which in turn are made up of cities and towns (third-level) and small villages and parishes (fourth-level). Other common terms for an administrative area are administrative division, administrative region, administrative unit, administrative entity and subdivision.
    3. In the United States of America, (US), we commonly use the terms city, state and zip code when referring to addresses. While that may mostly work for a country like Mexico (MX), it is not appropriate for other countries like Japan (JP) where the country is divided into prefectures instead of states. Not all countries call their sub-region divisions the same thing and many countries have several levels of sub-divisions. To further complicate the matter, not all sub-division levels are necessarily interchangeable from one country to another. For example, a first level sub-region in the US is a state, such as California (US-CA), but a first level sub-region for the United Kingdom of Great Britain and Northern Ireland (GB) is a country, such as England (GB-ENG).
    4. Every country can have its own particular set of terms and definitions; to try to go over them all would be too complicated and inefficient. Instead, let’s go over some commonly used terms that are helpful when talking about international addresses.
    1. Thank you for an eye-opening answer – I had no idea native speakers don't really make the distinction! As for using "town" about cities, I was thinking more of the fact that dictionaries explain the meaning of "city" in terms of "large town", which to me indicates that "town" would be a hypernym of "town" and "city" in much the same way as "dog" is a hypernym of "dog" and "bitch", but I guess I've drawn the wrong conclusion here.
    1. Some developers have the intuition that the file extension could be used to determine the module type, as it is in many existing non-standard module systems. However, it's a deep web architectural principle that the suffix of the URL (which you might think of as the "file extension" outside of the web) does not lead to semantics of how the page is interpreted. In practice, on the web, there is a widespread mismatch between file extension and the HTTP Content Type header. All of this sums up to it being infeasible to depend on file extensions/suffixes included in the module specifier to be the basis for this checking.

    1. I use dashes -, for the reasons you mention above. I avoid underscores because they require using the shift key, so take at least twice as long to type (also, I think they're ugly)
    2. With so many characters that you might not think should be special, in fact being special, I just use the special characters anyway. This also puts me in the good habits of using bash completion, where it will auto-escape all the special characters in a filename. But it also puts me in the good habits of escaping/quoting EVERYTHING in scripts and multi-part 1-liners in bash. For example, in just a simple 1-liner:

       for file in *.txt; do something.sh "$file"; done

       That way, even if one of the files has a space, or some other character, the do part of the loop will still act on it, and not miss on 2 or more file-name-parts, possibly causing unintended side-effects. Since I cannot control the space/not-space naming of EVERY file I encounter, and if I tried, it would probably break some symlinks somewhere, causing yet more unintended consequences, I just expect that all filename/directoryname could have spaces in it, and just quote/escape all variables to compensate. So, then I just use whatever characters I want (often spaces) in filenames. I even use spaces in ZFS dataset names, which I have to admit has caused a fair amount of head-scratching among the developers that write the software for the NAS I use.

       Sum-up: Spaces are not an invalid character, so there's no reason not to use them.
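      The quoting habit described above, in a self-contained form (the temporary directory and file names are made up for the demo):

      ```shell
      # Filenames with spaces survive the loop only because "$file" is quoted.
      dir=$(mktemp -d)
      touch "$dir/plain.txt" "$dir/with space.txt"
      count=0
      for file in "$dir"/*.txt; do
        [ -f "$file" ] && count=$((count + 1))  # unquoted $file would word-split
      done
      echo "$count files seen"                  # both files counted, space and all
      ```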
    1. I've noticed some things, particularly ADB commands, use spaces to issue the next part of the command. Thus having a space would say that there's a new command. And since hyphens are actually used in some spelling, such as my last name, it's better to use underscores.
    1. I didn't see this mentioned, but lots of software doesn't treat the underscore as a word separator (also called a "break character") these days. In other words, you can't jump to next word with Ctrl+Right-arrow or select individual words by double-clicking on them. The argument is that variable names in some programming languages like Python use snake_case, so highlighting them might require an extra Ctrl+Right-arrow. I do not necessarily like that decision, because, while being (marginally) useful in those limited domains, it makes file renaming and any other word manipulations or typesetting very cumbersome.
    2. But one thing to remember is that if you are primarily doing python coding - and your code tree has files/directories with hyphen in them and you intend to use them as modules (do an import filename in your code), then this will cause errors as python modules cannot have hyphen in them.
    3. Underscores are usually the convention that people use when replacing spaces, although hyphens are fine too I'd say. But since hyphens might show up in other ways such as hyphenated words, you'll have more success in preserving a name value by using underscores. For instance, if you have a file called "A picture taken in Winston-Salem, NC.jpg" and you want to convert the spaces to underscores, then you can preserve the hyphen in the name and retain its meaning.
    1. I must be the exception, because I use both spaces and underscores, depending on circumstances.

       The practical/obsessive-compulsive side of me saves all my documents using spaces. They're cleaner to read than underscores, and they look far more professional.

       The programmer side of me still uses underscores in files that will be accessible via the web or that need to be parsed in a program.

       And to complicate matters worse, I use camel case to name all my programming files. So in actuality I use 3 standards interchangeably.

       Both have their uses, I just choose one for clarity and one for ease of use.
    1. Separating each of these entities with a hyphen allows you to double-click and highlight only that entity. With underscores-only, you need to enlist the painstaking process of precisely positioning your cursor at the beginning of the entity, then dragging your blue selector to the end of the entity.
    1. Unfortunately, we don't have control over updates to Debian and Alpine distributions or the upstream postgres image. Because of this, there might be some issues that we cannot fix right away. On the positive side, the postgis/postgis images are regenerated every Monday. This process is to ensure they include the latest changes and improvements. As a result, these images are consistently kept up-to-date.
    1. This is the Git repo of the Docker "Official Image" for postgres (not to be confused with any official postgres image provided by postgres upstream).

      That is confusing! They both sound official.

    1. In simple words, the database client and the server prove and convince each other that they know the password without exchanging the password or the password hash. Yes, it is possible by doing a Salted Challenge and Responses, SCRAM-SHA-256, as specified by RFC 7677. This way of storing, communicating, and verifying passwords makes it very hard to break a password.

      Interesting!
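      On the server side, opting into SCRAM is configuration; the setting names below are the standard PostgreSQL ones, while the pg_hba.conf scope shown is purely illustrative:

      ```
      # postgresql.conf: hash newly set passwords with SCRAM-SHA-256
      password_encryption = scram-sha-256

      # pg_hba.conf: require the SCRAM exchange for TCP connections
      host  all  all  0.0.0.0/0  scram-sha-256
      ```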

    1. Treat object-name columns in the information_schema views as being of type name, not varchar (Tom Lane). Per the SQL standard, object-name columns in the information_schema views are declared as being of domain type sql_identifier. In PostgreSQL, the underlying catalog columns are really of type name. This change makes sql_identifier be a domain over name, rather than varchar as before. This eliminates a semantic mismatch in comparison and sorting behavior, which can greatly improve the performance of queries on information_schema views that restrict an object-name column. Note however that inequality restrictions, for example
    1. It's fairly trivial to write functionality in plpgsql that more than covers what timetravel did.
    2. The extension depended on old types which are about to be removed. As the code additionally was pretty crufty and didn't provide much in the way of functionality, removing the extension seems to be the best way forward.
    1. I have included code from others trusting that it would work, and that they would fix reported problems. And often that is true, there are quite a few faithful contributors. But sometimes someone just wants to get his feature in, and as soon as the things he uses are working, he disappears. And then I end up having to fix problems. These days I’m a lot more careful about including new features. Especially when it’s complex and interferes with several existing parts of the code. I’m insisting more often on writing tests and documentation before including anything.
    2. A lot of it feels like someone who doesn’t like the old code and wants to do it “right.” I can agree that the old code is ugly. But it will take an awful lot of effort to make a new implementation. It’s a lot like what happened to Elvis: A rewrite was going to make it much better, but it took so long, during which Vim added more features, that eventually there are not so many Elvis users. And the rewritten Elvis may have nice code, but users don’t notice that.
    1. Why not a library? We've found it extremely hard to develop a library that: Supports the many database libraries, ORMs, frameworks, runtimes, and deployment options available in the ecosystem. Provides enough flexibility for the majority of use cases. Does not add significant complexity to projects.
    2. Lucia is now a learning resource on implementing auth from scratch.
    1. Lucia, the authentication library that we are using, is deprecated (Q1/2025). However, the author of Lucia decided to make it a learning resource, because Lucia is just a thin wrapper around cryptographic libraries like Oslo. So we are following the migration path on their website and will also use Oslo instead of Lucia.
    1. If I follow the new examples and implement them in my code (e.g. Passkeys), how will I know if a security issue is found in the examples in the future? Currently, libraries get updated and I pull in the new version. Unless I remember to check back occasionally, I'll never know if the example code is updated or fixed.
    1. A search for .editorconfig files will stop if the root filepath is reached or an EditorConfig file with root=true is found.
    2. EditorConfig helps maintain consistent coding styles for multiple developers working on the same project across various editors and IDEs.
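      A small example of the format (the globs and values here are illustrative, not a recommendation):

      ```ini
      # .editorconfig at the project root; root=true stops the upward search here
      root = true

      [*]
      charset = utf-8
      indent_style = space
      indent_size = 2

      [Makefile]
      indent_style = tab   # per-glob override
      ```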
    1. Overrides let you have different configuration for certain file extensions, folders and specific files. Prettier borrows ESLint’s override format.
    2. Prettier intentionally doesn’t support any kind of global configuration. This is to make sure that when a project is copied to another computer, Prettier’s behavior stays the same. Otherwise, Prettier wouldn’t be able to guarantee that everybody in a team gets the same consistent results.
    1. They do provide one, via CSS tab-size (which sites and user styles can both set). also, GitHub will obey .editorconfig in a repo and display tabs at the resulting width (via that CSS), so nobody has had to use spaces on GitHub.com for many many years.
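      The CSS in question is a one-liner (the selector choice is up to the site or user style):

      ```css
      /* Render tab characters 4 columns wide in code blocks */
      pre, code {
        tab-size: 4;
      }
      ```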
  5. Mar 2025
    1. The Web, sadly, defaults to 8 spaces, which is an abomination for every snippet of code that would like to be instantly readable on mobile phones too; browsers don't provide a tab size setting anywhere (last time I've checked) to override that horrifying 8-spaces legacy nobody wants or needs since tabs were invented

      a later comment shows this is incorrect; we have CSS tab-size

    2. because not enough code and content is written with tabs, nobody cares about these details while I am pretty sure that if tabs were the default, every tool would surely have a setting/preference for users consuming tabs based content
    3. I was pretty anti-tabs for the longest time, until I heard the best argument for them, accessibility. Tabs exist for indentation customization, and this is exactly what is needed for people with impaired sight. IMO, this is a pretty good argument for moving towards tabs.
    1. Auth shouldn't be a paid service!

      I assume this is referring to services like Auth0 where people out-source authentication instead of keeping it directly part of your own code base

    2. The goal of Lucia v3 was to be the easiest and cleanest way to implement database-backed sessions in your projects. It didn't have to be a library. I just assumed that a library will be the answer. But I ultimately came to conclusion that my assumption was wrong. I don't see this change as me abandoning the project. In fact, I think it's a step forward. If implementing sessions wasn't easy, I wouldn't be deprecating the package. But why wouldn't a library be the answer? It seems like a such an obvious answer. One word - database. I talked about how database adapters were a significant complexity tax to the library. I think a lot of people interpreted that as maintenance burden on myself. That's not wrong, but the bigger issue is how the adapters limit the API. Adapters always felt like a black box to me as both an end user and a maintainer. It's very hard to design something clean around it and makes everything clunky and fragile, especially when you need to deal with TypeScript shenanigans.
    1. the only part I agree with is that it could be annoying to change behaviour based on this variable, but any library worth its salt will use this to set sensible defaults and allow explicit overrides for all of the settings.
    1. The biggest mistake I see is thinking of a Job to be Done as an activity or task. Examples include store and retrieve music or listen to music. These are not Jobs; rather, they are tasks and activities — which means they describe how you use a product or what you do with it.
    1. Jobs to be done theory, also called jobs theory, posits that people don’t buy products; they “hire” them to do jobs, such as solving a problem or fulfilling a desire.
    1. The Markdown syntax is not supported, but you can add bold styling with single asterisks, which is the standard Markdown syntax for italic. Very confusing!
    1. I’m not a believer in languages designed by a committee and I have faith in Matz making reasonable decisions at the end of the day.
    2. By the way, while .: doesn’t really get any awards for code elegance, at least it’s aligned with another existing pattern in Ruby. Victor Shepelev echoes my sentiment almost precisely here.