10,000 Matching Annotations
  1. Nov 2022
    1. The btoa() function takes a JavaScript string as a parameter. In JavaScript, strings are represented using the UTF-16 character encoding: a sequence of 16-bit (2-byte) units. Every ASCII character fits into the first byte of one of these units, but many other characters don't. Base64, by design, expects binary data as its input. In terms of JavaScript strings, this means strings in which each character occupies only one byte. So if you pass a string into btoa() containing characters that occupy more than one byte, you will get an error, because this is not considered binary data:
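      A minimal sketch of the failure and one common workaround (the UTF-8 round-trip below is an assumption about what the caller wants, not part of btoa() itself):

      ```ts
      btoa("hello"); // "aGVsbG8=": every character fits in one byte

      // btoa("✓"); // throws InvalidCharacterError: U+2713 needs more than one byte

      // Workaround: encode the string as UTF-8 bytes first, then map each
      // byte to a one-byte character that btoa() will accept as "binary".
      const bytes = new TextEncoder().encode("✓"); // Uint8Array [226, 156, 147]
      const binary = String.fromCharCode(...bytes); // every char code <= 0xFF
      console.log(btoa(binary)); // "4pyT"
      ```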
    1. Honestly, at this point, I don't even know what tools I'm using, and which is responsible for what feature. Diving into the code of capybara and cucumber yields hundreds of lines of metaprogramming magic that somehow accretes into a testing framework. It's really making me loathe TDD despite my previous youthful enthusiasm.

      opinion: too much metaprogramming magic

      I'm not so sure it's "too much" though... Any framework or large software project is going to feel that way to a newcomer looking at the code, due to the number of layers of abstractions, etc. that eventually were added/needed by the maintainers to make it maintainable, decoupled, etc.

    1. So much for the obligatory warning. I get the point, I even agree with the argument, but I still want to send a POST request. Maybe you are testing an API without a user interface, or you are writing router tests? Is it really impossible to simulate a POST request with Capybara? Nah, of course not!
    2. The Capybara Ruby gem doesn’t support POST requests; the built-in visit method always uses GET. This is by design and with good reason: Capybara is built for acceptance testing, and a user would never ask to ‘post’ parameters X and Y to the application. There will always be some kind of interface, a form for example. It makes more sense to simulate what the visitor would really do.
    1. There IS a super nice Visual Studio Code plugin for it, but it still comes with the environment pre-reqs as well as the cumbersome pressing of Alt-D (or was it Ctrl-D??) to get the preview. For other IDEs I bet there are similar plugins, leaving the same nasty taste of dissatisfaction… and the pollution of your environment.
    1. You may want to change the controllers to your custom controllers with:

      ```ruby
      Rails.application.routes.draw do
        use_doorkeeper do
          # it accepts :authorizations, :tokens, :token_info, :applications and :authorized_applications
          controllers :applications => 'custom_applications'
        end
      end
      ```
    1. A workspace layout with global settings and per-folder local overrides:

      ```
      git_workspace/
      ├── .vscode
      │   └── settings.json       # global settings, my preferred ones
      ├── my-personal-projects/
      │   └── project1/
      │       └── .git/
      └── company-projects/
          ├── .vscode
          │   └── settings.json   # local settings, overrides some of my personal ones
          ├── project2/
          │   └── .git/
          └── project3/
              └── .git/
      ```
    1. As you note, Activity diagrams inherently can include concurrency and timing. If you look at this example cribbed from Wikipedia, shown below, you can observe the section with two heavy horizontal bars, and two parallel activities of "present idea" and "record idea". That is read as "start these activities in parallel, and continue only when both are complete." Flowcharts can't express this within the notation. Practically, using activity diagrams lets you think clearly about concurrent processes. I think you'll find that anyone who can read a flowchart will quickly adapt.
    2. It might seem like a matter of preference, but if we have a standardized language for describing software systems, why use something else? This can lead to a bad habit of overusing flowcharts. Activity diagrams are really simple. But if you decide to describe a more complicated aspect of the system, or try to change the part you are describing, you might have to switch anyway. So just use UML and prevent confusion in the future.
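      The fork/join bars from the answer above are easy to write down in a diagrams-as-code tool. A minimal sketch in PlantUML (the choice of PlantUML is an assumption; any tool that renders UML activity diagrams works):

      ```plantuml
      @startuml
      start
      ' The first heavy bar: both activities begin in parallel
      fork
        :Present idea;
      fork again
        :Record idea;
      end fork
      ' The second heavy bar: continue only when both are complete
      stop
      @enduml
      ```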
    1. If the Client is a Single-Page App (SPA), an application running in a browser using a scripting language like JavaScript, there are two grant options: the Authorization Code Flow with Proof Key for Code Exchange (PKCE) and the Implicit Flow with Form Post. For most cases, we recommend using the Authorization Code Flow with PKCE because the Access Token is not exposed on the client side, and this flow can return Refresh Tokens.
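      A minimal sketch of the PKCE half of that recommendation, assuming a browser environment with the Web Crypto API (the helper name base64UrlEncode is illustrative, not from any SDK):

      ```ts
      function base64UrlEncode(bytes: Uint8Array): string {
        return btoa(String.fromCharCode(...bytes))
          .replace(/\+/g, "-")
          .replace(/\//g, "_")
          .replace(/=+$/, "");
      }

      async function createPkcePair() {
        // 1. Generate a random code_verifier and keep it on the client.
        const verifier = base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));

        // 2. Derive code_challenge = BASE64URL(SHA-256(code_verifier)); only
        //    the challenge is sent with the authorization request.
        const digest = await crypto.subtle.digest(
          "SHA-256",
          new TextEncoder().encode(verifier),
        );
        const challenge = base64UrlEncode(new Uint8Array(digest));

        // 3. The later token request includes the original verifier, so an
        //    intercepted authorization code alone cannot be redeemed and the
        //    Access Token is never exposed in the front channel.
        return { verifier, challenge };
      }
      ```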
    1. It is handy to manually generate the diagram from time to time using the previously created command: npm run db:diagram:generate. Though, getting the diagram to update automatically, without developer interaction, would ensure that the diagram is never obsolete. There are several ways of doing this. You could use a pre-commit git hook or, even better, simply configure your CI/CD pipeline(s) to run the npm script whenever something gets merged into the main branch 🙂
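      A minimal sketch of the pipeline option, assuming GitHub Actions and that the npm script writes the diagram file into the repository (the workflow file name and bot identity are made up):

      ```yaml
      # .github/workflows/db-diagram.yml
      name: Regenerate DB diagram
      on:
        push:
          branches: [main]
      jobs:
        diagram:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - uses: actions/setup-node@v4
            - run: npm ci
            - run: npm run db:diagram:generate
            # Commit the regenerated diagram only if it actually changed
            - run: |
                git config user.name "diagram-bot"
                git config user.email "diagram-bot@example.com"
                git add -A
                git diff --cached --quiet || (git commit -m "chore: update DB diagram" && git push)
      ```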
    1. It would be nice if we could get some official word on whether this repository is affected by the catastrophic CVE-2021-44228 that is currently affecting a considerable percentage of software around the globe. From my limited understanding, and looking at the refreshingly concise list of dependencies in the pom.xml, I would think this project is not affected, but I, and probably others who are not familiar with the project's internals, would appreciate an official word.
    2. I understand that typically, it wouldn't make much sense to comment on every CVE that doesn't affect a product, but considering the severity and pervasiveness of this particular issue, maybe an exception is warranted.
    1. Docker suffers from the Xerox problem. Like it or not the industry refers to them as Dockerfiles.

      But the industry can change what they call it... just like it's already changed - from "master" to "main" - from "blacklist" to "blocklist" - and so on

    2. They are 100% identical; just different names. From podman-build: “Builds an image using instructions from one or more Containerfiles or Dockerfiles and a specified build context directory. A Containerfile uses the same syntax as a Dockerfile internally. For this document, a file referred to as a Containerfile can be a file named either ‘Containerfile’ or ‘Dockerfile’.”
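      As a small illustration of that equivalence (the image tag and build context are placeholders), podman accepts either file name through the same flag:

      ```sh
      # The two invocations behave identically; only the file name differs.
      podman build -f Dockerfile    -t myimage .
      podman build -f Containerfile -t myimage .
      ```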
    1. Changing the second line to `foo.txt text !diff` would restore the default unset-ness for diff, while `foo.txt text diff` will force diff to be set (both will presumably result in a diff, since Git has presumably not previously been detecting foo.txt as binary).

      comments for tag: undefined vs. null: Technically this is undefined (unset, !diff) vs. true (diff), but it's similar enough that we don't need a separate tag just for that.

      annotation meta: may need new tag: undefined/unset vs. null/set

    1. For dependent packages installed by apt, there is usually no need to pin them to a version.

      Just install the specific initial version, but don't pin it. This allows users to easily update to later versions...

    2. Because the official images are intended to be learning tools for those new to Docker as well as the base images for advanced users to build their production releases, we review each proposed Dockerfile to ensure that it meets a minimum standard for quality and maintainability. While some of that standard is hard to define (due to subjectivity), as much as possible is defined here, while also adhering to the "Best Practices" where appropriate.
    1. Second, if Jenkins runs as PID 1, then it may not receive the signals you send it! That's a subtlety in PID 1. Unlike other processes, PID 1 does not have default signal handlers, which means that if Jenkins hasn't explicitly installed a signal handler for SIGTERM, then that signal is going to be discarded when it's sent (whereas the default behavior would have been to terminate the process). See the sketch after this list.
    2. First, if Jenkins runs as PID 1, then it's difficult to differentiate between processes that were re-parented to Jenkins (which should be reaped) and processes that were spawned by Jenkins (which shouldn't be, because other code is already expecting to wait on them).
    3. A second problem is that once your process has exited, Bash will proceed to exit as well. If you're not being careful, Bash might exit with exit code 0, whereas your process actually crashed (0 means "all fine"; this would cause Docker restart policies to not do what you expect). What you actually want is for Bash to return the same exit code your process had.
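      A minimal sketch of the explicit-handler fix, assuming the application is a Node.js process that may end up as PID 1 (the exit code just follows the usual 128 + signal-number convention):

      ```ts
      // PID 1 gets no default signal handlers, so handle SIGTERM explicitly;
      // otherwise a `docker stop` would be silently discarded.
      process.on("SIGTERM", () => {
        // ...graceful shutdown: stop accepting work, flush state...
        process.exit(143); // 128 + 15 (SIGTERM), so callers can see why we exited
      });
      ```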
    1. Zombie processes should not be confused with orphan processes: an orphan process is a process that is still executing, but whose parent has died. When the parent dies, the orphaned child process is adopted by init (process ID 1). When orphan processes die, they do not remain as zombie processes; instead, they are waited on by init.
    1. An init system does not have to be heavyweight. You may be thinking of Upstart, Systemd, SysV init, etc., with all the implications that come with them. You may be thinking that a full system needs to be booted inside the container. None of this is true. A "full init system", as we may call it, is neither necessary nor desirable.
    2. Let's look at a concrete example. Suppose that your container contains a web server that runs a CGI script that's written in bash. The CGI script calls grep. Then the web server decides that the CGI script is taking too long and kills the script, but grep is not affected and keeps running. When grep finishes, it becomes a zombie and is adopted by the PID 1 (the web server). The web server doesn't know about grep, so it doesn't reap it, and the grep zombie stays in the system.
    3. In everyday language, people consider "zombie processes" to be simply runaway processes that cause havoc. But formally speaking -- from a Unix operating system point of view -- zombie processes have a very specific definition. They are processes that have terminated but have not (yet) been waited for by their parent processes.
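      One minimal way to get such a lightweight init in practice is Docker's own --init flag, which runs a tiny init as PID 1 to reap zombies and forward signals (the image name is a placeholder):

      ```sh
      # PID 1 is now a minimal init; the application runs as its child,
      # and leftover zombies like the grep example above get reaped.
      docker run --init myimage
      ```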
    1. You need .notdef, unicode value undefined: microsoft.com/typography/otspec/recom.htm Characters are assigned in blocks of the same kind. Most blocks have some unassigned points at the end to start the next block on a round number. These points allow the Unicode Consortium to add new glyphs to a block. New glyphs don't come into existence often. See typophile.com/node/102205. Maybe you can ask your question in the Typophile forum. They can tell you more about how exactly this works and how to render .notdef
    2. I wasn't aware of the 'missing character glyph'; some Googling suggests that U+0000 can/should be used for missing characters in the font. However, in at least one font I've tested, U+0000 is rendered as whitespace while missing characters are rendered as squares (similar to U+25A1).
    1. Consider a text file containing the German word für (meaning 'for') in the ISO-8859-1 encoding (0x66 0xFC 0x72). This file is now opened with a text editor that assumes the input is UTF-8. The first and last byte are valid UTF-8 encodings of ASCII, but the middle byte (0xFC) is not a valid byte in UTF-8. Therefore, a text editor could replace this byte with the replacement character symbol to produce a valid string of Unicode code points. The whole string now displays like this: "f�r".
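      A minimal sketch of that exact byte sequence, assuming a runtime with the TextDecoder API (browsers, Node):

      ```ts
      const bytes = new Uint8Array([0x66, 0xfc, 0x72]); // "für" in ISO-8859-1

      // Read as UTF-8: 0xFC is invalid there, so the decoder substitutes
      // U+FFFD, the replacement character.
      console.log(new TextDecoder("utf-8").decode(bytes)); // "f�r"

      // Read with the encoding it was actually written in, the text is intact.
      console.log(new TextDecoder("iso-8859-1").decode(bytes)); // "für"
      ```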