10,000 Matching Annotations
  1. Nov 2022
    1. Mash duplicates any sub-Hashes that you add to it and wraps them in a Mash. This allows for infinite chaining of nested Hashes within a Mash without modifying the object(s) that are passed into the Mash. When you subclass Mash, the subclass wraps any sub-Hashes in its own class. This preserves any extensions that you mixed into the Mash subclass and allows them to work within the sub-Hashes, in addition to the main containing Mash.
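
      A minimal sketch of this wrapping behavior (assumes the hashie gem; MyMash is a hypothetical subclass):

      ```
      require 'hashie'

      # Hypothetical Mash subclass; extensions mixed in here would also
      # apply to any wrapped sub-Hashes
      class MyMash < Hashie::Mash
      end

      m = MyMash.new(user: { address: { city: 'Oslo' } })
      m.user.class          #=> MyMash (the sub-Hash was wrapped in the subclass)
      m.user.address.class  #=> MyMash
      m.user.address.city   #=> "Oslo"
      ```
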
    2. ```
      foo = Foo.new(bar: 'baz')       #=> {:bar=>"baz"}
      qux = { **foo, quux: 'corge' }  #=> {:bar=>"baz", :quux=>"corge"}
      qux.is_a?(Foo)                  #=> true
      ```

      This surprised me.

      I would have expected — since Hash literal notation { } was used — that the resulting type would be Hash, not the type of foo. Strange.

      Is this a good thing... or?


      Also, in my quick test, I didn't find this to be true, so...?

      ```
      main > symbol_mash.class
      => SymbolizedMash

      main > { **symbol_mash }.class
      => Hash
      ```

    3. by using symbols as keys, you will be able to use the implicit conversion of a Mash via the #to_hash method to destructure (or splat) the contents of a Mash out to a block

      Eh? The example below:

      ```
      symbol_mash = SymbolizedMash.new(id: 123, name: 'Rey')
      symbol_mash.each do |key, value|
        # key is :id, then :name
        # value is 123, then 'Rey'
      end
      ```

      seems to imply that this is possible (and does an implicit conversion) because it defines to_hash. But that's simply not true, as these 2 examples below prove:

      ```
      main > symbol_mash.singleton_class.class_eval { undef :to_hash }
      => nil

      main > symbol_mash.each { |k, v| p [k, v] }
      [:id, 123]
      [:name, "Rey"]
      => {:id=>123, :name=>"Rey"}
      ```

      ```
      main > s = 'a'
      => "a"

      main > s.singleton_class.class_eval do
               def to_hash
                 chars.zip(chars).to_h
               end
             end
      => :to_hash

      main > s.to_hash
      => {"a"=>"a"}

      main > s.each
      Traceback (most recent call last) (filtered by backtrace_cleaner; set Pry.config.clean_backtrace = false to see all frames):
        1: (pry):85:in `__pry__'
      NoMethodError: undefined method `each' for "a":String
      ```

    4. by using symbols as keys, you will be able to use the implicit conversion of a Mash via the #to_hash method to destructure (or splat) the contents of a Mash out to a block

      This doesn't actually seem to be an example of destructure/splat. (When it said "destructure the contents ... out to a block", I was surprised and confused, because splatting is when you splat it into an argument or another hash — never a block.)

      An example of destructure/splat would be more like

      ```
      method_that_takes_kwargs(**symbol_mash)
      ```
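
      Fleshed out into a runnable sketch (method_that_takes_kwargs is hypothetical; assumes a Mash subclass using Hashie::Extensions::Mash::SymbolizeKeys so the splatted keys come out as symbols):

      ```
      require 'hashie'

      class SymbolizedMash < Hashie::Mash
        include Hashie::Extensions::Mash::SymbolizeKeys
      end

      # Hypothetical method taking keyword arguments
      def method_that_takes_kwargs(id:, name:)
        puts "#{id}: #{name}"
      end

      symbol_mash = SymbolizedMash.new(id: 123, name: 'Rey')
      method_that_takes_kwargs(**symbol_mash)  # prints "123: Rey"
      ```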

    1. In our system, events are generated by physical hosts and follow different routes to the event storage. Therefore, the order in which they appear in the storage and become retrievable - via the events API - does not always correspond to the order in which they occur. Consequently, this system behavior makes a straightforward implementation of event polling miss some events. The page of most recent events returned by the events API may not contain all the events that occurred at that time because some of them could still be on their way to the storage engine. When the events arrive and are eventually indexed, they are inserted into the already retrieved pages, which could result in an event being missed if the pages are accessed too early (i.e. before all events for the page are available). To ensure that all your events are retrieved and accounted for, please implement polling the following way:
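
      The recommended steps are cut off in this excerpt. As a rough illustration only (client, list_events, and process are hypothetical names, not this vendor's API), the usual way to tolerate late-arriving events is to poll with an overlapping window and de-duplicate by event id:

      ```
      require 'set'

      seen = Set.new
      loop do
        # Re-fetch a window that overlaps the previous poll, so events that
        # were indexed late still show up on a later pass.
        events = client.list_events(from: Time.now - 120, to: Time.now)
        events.each do |event|
          next if seen.include?(event.id)  # already processed
          seen << event.id
          process(event)
        end
        sleep 10
      end
      ```
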
    1. Bash maintains an internal hash of previously found executables in your path. In this case, it has details that at one time there was an executable at /usr/bin/siege, and reuses that path to avoid having to search again. You need to tell bash to manually rehash the path for siege like so:

      ```
      hash siege
      ```

      You can also clear all hashed locations:

      ```
      hash -r
      ```
    1. Remember there are two kinds of variable: Internal Variables and Environment Variables. PATH should be an environment variable.

      In my case, I was trying to debug `which asdf` not finding asdf, in a minimal shell.

      I had checked:

      ```
      bash-5.1$ echo $PATH | grep asdf
      /home/tyler/.asdf/bin
      ```

      but

      ```
      # The PATH environment variable
      env | /bin/grep PATH
      ```

      being empty was the key discovery here. Must have forgotten the `export`.

    1. In v3, svelte-preprocess was able to type-check Svelte components. However, given the specifics of the structure of a Svelte component and how the script and markup contents are related, type-checking was sub-optimal. In v4, your TypeScript code will only be transpiled into JavaScript, with no type-checking whatsoever. We're moving the responsibility of type-checking to tools better fit to handle it, such as svelte-check, for CLI and CI usage, and the VS Code extension, for type-checking while developing.
    1. You're likely not using "type": "module" in your package.json, so import statements don't work in svelte.config.js. You have three ways to fix this:

      - Use require() instead (also see https://github.com/sveltejs/language-tools/blob/master/docs/preprocessors/in-general.md#generic-setup)
      - Rename svelte.config.js to svelte.config.mjs
      - Set "type": "module" in your package.json (may break other scripts)
    1. Warning: This ignores the user's keyboard layout, so that if the user presses the key at the "Y" position in a QWERTY keyboard layout (near the middle of the row above the home row), this will always return "KeyY", even if the user has a QWERTZ keyboard (which would mean the user expects a "Z" and all the other properties would indicate a "Z") or a Dvorak keyboard layout (where the user would expect an "F"). If you want to display the correct keystrokes to the user, you can use Keyboard.getLayoutMap().

      Wow, that's quite a caveat!

    1. It would probably be worth mentioning this explicitly in the README: "Configuration of the language server happens over the LSP protocol by passing a configuration object; your LSP client should have a way of setting the configuration object for a server. Here is a link to the spec for the configuration that is supported [...]"
    1. So when configuring Capybara, I'm using ignore_default_browser_options, re-using DEFAULT_OPTIONS, and excluding the keys I don't want:

      ```
      Capybara::Cuprite::Driver.new(
        app,
        {
          ignore_default_browser_options: true,
          window_size: [1200, 800],
          browser_options: { 'no-sandbox': nil }.merge(
            Ferrum::Browser::Options::Chrome::DEFAULT_OPTIONS.except(
              "disable-features", "disable-translate", "headless"
            )
          ),
          headless: false,
        }
      )
      ```
    1. You can definitely set the Return-Path header as a sender. But yes, some receivers might rewrite it (but not always), or, depending on who you're sending through, it might be re-written by them. For instance, when using MailGun to send bulk email you have to do things just right in order to set a Return-Path that will be preserved. I know this contradicts the RFC you cite, but in practice it's true.
    1. I've developed additional perspective on this issue - I have DNS settings in my hosts file that resolve the visits to localhost, but also preserve the subdomain in the request (this latter point is important because Rails path helpers care which subdomain is being requested). To sum up the scope of the problem as it stands now: I need a way, within Heroku/Capybara system tests, to both route requests to localhost and maintain the subdomain information of the request. I've been able to accomplish one or the other, but haven't found a configuration that provides both yet.
    1. first we're looking for the "main" object. The word "main" is used in lots of places in Ruby, so that will be hard to track down. How else can we search? Luckily, we know that if you print out that object, it says "main". Which means we should be able to find the string "main", quotes and all, in C.
    1. You might notice that the “expires_in” property refers to the access token, not the refresh token. The expiration time of the refresh token is intentionally never communicated to the client. This is because the client has no actionable steps it can take even if it were able to know when the refresh token would expire.
    1. But what about a Refresh Token flow? When using a refresh token, confidential clients also have to authenticate. Public clients, such as browser-based applications, do not authenticate during the Refresh Token flow. So in a typical frontend application, refresh tokens issued to frontend web applications are bearer tokens.   In practice, this means that if an attacker manages to steal a refresh token from a frontend application, they can use that token in a Refresh Token flow. To counter such attacks, the OAuth 2.0 specifications mandate that browser-based applications apply a security measure known as refresh token rotation.
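
      A rough server-side sketch of the rotation idea (hypothetical names, not any specific library's API): every use of a refresh token revokes it and issues a replacement, so a stolen token stops working as soon as either the attacker or the legitimate client refreshes.

      ```
      require 'securerandom'

      # RefreshToken is a hypothetical ActiveRecord-style model with
      # token, user_id, and revoked columns
      def rotate_refresh_token(presented_token)
        record = RefreshToken.find_by(token: presented_token)
        # A revoked token being presented again is a signal of possible theft
        raise 'invalid or already-used refresh token' if record.nil? || record.revoked?

        record.update!(revoked: true)  # the old token can never be used again
        RefreshToken.create!(token: SecureRandom.urlsafe_base64(32),
                             user_id: record.user_id)
      end
      ```
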
    1. For example, if I make an application (Client) that allows a user (Resource Owner) to make notes and save them as a repo in their GitHub account (Resource Server), then my application will need to access their GitHub data. It's not secure for the user to directly supply their GitHub username and password to my application and grant full access to the entire account. Instead, using OAuth 2.0, they can go through an authorization flow that will grant limited access to some resources based on a scope, and I will never have access to any other data or their password.
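
      As a concrete illustration of the "limited access based on a scope" step, the authorization request the notes app would send the user to could look roughly like this (GitHub's authorize endpoint; client_id and redirect_uri are placeholders):

      ```
      require 'uri'

      params = {
        client_id: 'my-notes-app-id',                    # placeholder
        redirect_uri: 'https://notes.example/callback',  # placeholder
        scope: 'repo'  # access to repos only, not the whole account
      }
      authorize_url = "https://github.com/login/oauth/authorize?#{URI.encode_www_form(params)}"
      # The user approves on GitHub; the app never sees their password.
      ```
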
    1. This document defines how a JWT Bearer Token can be used to request an access token when a client wishes to utilize an existing trust relationship, expressed through the semantics of the JWT, without a direct user-approval step at the authorization server.

      [transfer of trust/credentials]
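
      On the wire, the grant looks roughly like this (a sketch: the grant_type URN comes from RFC 7523; the endpoint and JWT are placeholders):

      ```
      require 'net/http'
      require 'uri'

      # Hypothetical: a JWT signed under the pre-established trust relationship
      signed_jwt = File.read('assertion.jwt')

      response = Net::HTTP.post_form(
        URI('https://auth.example.com/oauth/token'),  # placeholder authorization server
        'grant_type' => 'urn:ietf:params:oauth:grant-type:jwt-bearer',
        'assertion'  => signed_jwt
      )
      # response.body carries the access token; no user-approval step occurred
      ```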

    1. If the Client is a Single-Page App (SPA), an application running in a browser using a scripting language like JavaScript, there are two grant options: the Authorization Code Flow with Proof Key for Code Exchange (PKCE) and the Implicit Flow with Form Post. For most cases, we recommend using the Authorization Code Flow with PKCE because the Access Token is not exposed on the client side, and this flow can return Refresh Tokens.
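
      The PKCE part itself is small. A sketch of generating the verifier/challenge pair (S256 method, per RFC 7636):

      ```
      require 'securerandom'
      require 'digest'
      require 'base64'

      code_verifier  = SecureRandom.urlsafe_base64(64)  # kept secret by the client
      code_challenge = Base64.urlsafe_encode64(
        Digest::SHA256.digest(code_verifier), padding: false
      )  # sent with the authorization request

      # The later token request includes code_verifier, letting the server check
      # SHA256(code_verifier) == code_challenge and bind the code to this client.
      ```
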
    1. The Console now supports redeclaration of const variables across separate REPL scripts (such as when you run a statement in the Console), in addition to the existing let and class redeclarations. This support allows you to experiment with different declarations for const variables without refreshing the page. Previously, DevTools threw a syntax error if you redeclared a const binding.

      Edge version of this matching release note from the matching Chrome feature:

      https://hyp.is/d9XEKGfOEe2a27vFWUjjSA/developer.chrome.com/blog/new-in-devtools-92/

      Interesting, they're copying some content, but not all of it verbatim.

    1. The btoa() function takes a JavaScript string as a parameter. In JavaScript strings are represented using the UTF-16 character encoding: in this encoding, strings are represented as a sequence of 16-bit (2 byte) units. Every ASCII character fits into the first byte of one of these units, but many other characters don't. Base64, by design, expects binary data as its input. In terms of JavaScript strings, this means strings in which each character occupies only one byte. So if you pass a string into btoa() containing characters that occupy more than one byte, you will get an error, because this is not considered binary data:
    1. Honestly, at this point, I don't even know what tools I'm using, and which is responsible for what feature. Diving into the code of capybara and cucumber yields hundreds of lines of metaprogramming magic that somehow accretes into a testing framework. It's really making me loathe TDD despite my previous youthful enthusiasm.

      opinion: too much metaprogramming magic

      I'm not so sure it's "too much" though... Any framework or large software project is going to feel that way to a newcomer looking at the code, due to the number of layers of abstractions, etc. that eventually were added/needed by the maintainers to make it maintainable, decoupled, etc.

    1. So much for the obligatory warning. I get the point, I even agree with the argument, but I still want to send a POST request. Maybe you are testing an API without a user interface, or you are writing router tests? Is it really impossible to simulate a POST request with Capybara? Nah, of course not!
    2. The Capybara Ruby gem doesn't support POST requests; the built-in visit method always uses GET. This is by design and with good reason: Capybara is built for acceptance testing, and a user would never ask to 'post' parameters X and Y to the application. There will always be some kind of interface, a form for example. It makes more sense to simulate what the visitor would really do.
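
      One commonly cited way to do it, assuming the rack-test driver (the exact driver API can vary by Capybara version; '/api/widgets' is a placeholder path):

      ```
      # Drop below Capybara's user-facing DSL to Rack::Test's HTTP verbs
      page.driver.post '/api/widgets', name: 'Foo'
      page.status_code  #=> 201, for example
      ```
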
    1. There IS a super nice Visual Studio Code plugin for it but it still comes with the environment pre-reqs as well as the cumbersome pressing of Alt-D (or was it Ctrl-D??) to get the preview. For other IDEs I bet there are similar plugins leaving the same nasty taste of dissatisfaction… and the pollution of your environment.
    1. You may want to change the controllers to your custom controllers with:

      ```
      Rails.application.routes.draw do
        use_doorkeeper do
          # it accepts :authorizations, :tokens, :token_info, :applications and :authorized_applications
          controllers :applications => 'custom_applications'
        end
      end
      ```
    1. ```
      git_workspace/
      ├── .vscode
      │   └── settings.json        # global settings, my preferred ones
      ├── my-personal-projects/
      │   └── project1/
      │       └── .git/
      └── company-projects/
          ├── .vscode
          │   └── settings.json    # local settings, overrides some of my personal ones
          ├── project2/
          │   └── .git/
          └── project3/
              └── .git/
      ```
    1. As you note, Activity diagrams inherently can include concurrency and timing. If you look at this example cribbed from Wikipedia, shown below, you can observe the section with two heavy horizontal bars, and two parallel activities of "present idea" and "record idea". That is read as "start these activities in parallel, and continue only when both are complete." Flowcharts can't express this within the notation. Practically, using activity diagrams lets you think clearly about concurrent processes. I think you'll find that anyone who can read a flowchart will quickly adapt.
    2. It might seem like a preference, but if we have a standardized language for describing software systems, why do we use something else? This can lead to a bad habit of overusing flowcharts. Activity diagrams are really simple. But if you decide to describe a more complicated aspect of the system or try to change the part you are describing, you might have to switch anyway. So just use UML and prevent confusion in the future.
    1. It is handy to manually generate the diagram from time to time using the previously created command: npm run db:diagram:generate. Though, getting the diagram to update itself automatically, without developer interaction, would ensure that the diagram is never obsolete. There are several ways of doing this. You could use a pre-commit git hook or, even better, simply configure your CI/CD pipeline(s) to run the npm script whenever something gets merged into the main branch 🙂
    1. It would be nice if we could get some official word on whether this repository is affected by the catastrophic CVE-2021-44228 that is currently affecting a considerable percentage of software around the globe. From my limited understanding and looking at the refreshingly concise list of dependencies in the pom.xml, I would think this project is not affected, but I and probably others who are not familiar with the project's internals would appreciate an official word.
    2. I understand that typically, it wouldn't make much sense to comment on every CVE that doesn't affect a product, but considering the severity and pervasiveness of this particular issue, maybe an exception is warranted.
    1. Docker suffers from the Xerox problem. Like it or not the industry refers to them as Dockerfiles.

      But the industry can change what they call it... just like it's already changed:
      - from "master" to "main"
      - from "blacklist" to "blocklist"
      - and so on

    2. They are 100% identical; just different names. From podman-build: “Builds an image using instructions from one or more Containerfiles or Dockerfiles and a specified build context directory. A Containerfile uses the same syntax as a Dockerfile internally. For this document, a file referred to as a Containerfile can be a file named either ‘Containerfile’ or ‘Dockerfile’.”
    1. Changing the second line to:

      ```
      foo.txt text !diff
      ```

      would restore the default unset-ness for diff, while:

      ```
      foo.txt text diff
      ```

      will force diff to be set (both will presumably result in a diff, since Git has presumably not previously been detecting foo.txt as binary).

      comments for tag: undefined vs. null: Technically this is undefined (unset, !diff) vs. true (diff), but it's similar enough that we don't need a separate tag just for that.

      annotation meta: may need new tag: undefined/unset vs. null/set

    1. For dependent packages installed by apt there is not usually a need to pin them to a version.

      Just install the specific initial version, but don't pin it. This allows users to easily update to later versions down the road...