19,785 Matching Annotations
  1. Aug 2021
    1. Sanity check = Does this even make sense? For example, if your application only outputs integers, sqrt(-1) and log(-1) are undefined.
    2. Regardless of the differences between the words, what you'd use to describe a test depends on the semantics (meaning) of the test, not the exact code
    1. Alternately, set out a few bowls of white vinegar, which also neutralize odor molecules.
    2. To remove any lingering musty smell, try the old-fashioned yet effective remedy of setting out a few small bowls of baking soda around the room; baking soda absorbs and neutralizes odor molecules well.
    1. Treat the carpet with a white vinegar spray. One part vinegar to two parts warm water. A simple spray over the carpet will remove any light surface residue – definitely better suited for a lesser spill or odor.
    1. Once a card has been played, any player can say “Mischief!” and call you on your potential lie

      veggie alternative word

    1. You can add event modifiers with the on:click$preventDefault$capture={handler} syntax. If you use Svelte's native on:click|preventDefault={handler} syntax, it will not compile. You have to use "$" instead of "|". (The extra S inside the | stands for SMUI.)

      How does it do that? I didn't think components could introspect to see which event handlers were added by the calling component?!

      Does it actually somehow generate an event named something like click$preventDefault$capture? I still don't get how that would work.

    1. hover and resize window support #965. jnicklas (Collaborator) commented on Feb 25, 2013: "Go, go @twalpole!"
    2. I have a rule that I won't allow Capybara to be monkey-patched in Poltergeist. This gives some indication to users about whether something is non-standard. So basically all non-standard stuff must be on page.driver rather than page (or a node).
    3. What seems more problematic is divergence between drivers. For example, capybara-webkit and poltergeist support several of the same things. Let's take resizing the window as an example. In capybara-webkit this is page.driver.resize_window(x, y) and in poltergeist it's page.driver.resize(x, y). This means that if a user wants to switch from one to the other they have to change their code. Now I don't know if selenium does or doesn't support resizing the window, but supposing it doesn't I think there's still a lot of value in the capybara project deciding what the blessed API is, because then all the drivers that support that feature can implement it using the same API, increasing portability.
    1. Here is one of the most confusing cases:

       def foo(x, **kwargs)
         p [x, kwargs]
       end

       def bar(x=1, **kwargs)
         p [x, kwargs]
       end

       foo({})        #=> [{}, {}]
       bar({})        #=> [1, {}]
       bar({}, **{})  #=> expected: [{}, {}], actual: [1, {}]

    2. The automatic conversion not only confuses people but also makes the method less extensible. See [Feature #14183] for more details about the reasons for the change in behavior, and why certain implementation choices were made.
    3. 3. The no-keyword-arguments syntax (**nil) is introduced You can use **nil in a method definition to explicitly mark the method accepts no keyword arguments. Calling such methods with keyword arguments will result in an ArgumentError. (This is actually a new feature, not an incompatibility)
    4. This is useful to make it explicit that the method does not accept keyword arguments. Otherwise, the keywords are absorbed in the rest argument in the above example.
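
      A minimal sketch of the **nil marker described above (the method name is made up):

          def log(message, **nil)    # explicitly accepts no keyword arguments
            puts message
          end

          log("hi")                  # fine
          log("hi", level: :warn)    # raises ArgumentError (no keywords accepted)
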
    5. If you extend a method to accept keyword arguments, the method may have incompatibility as follows:

       # If a method accepts rest argument and no `**nil`
       def foo(*args)
         p args
       end

       # Passing keywords are converted to a Hash object (even in Ruby 3.0)
       foo(k: 1) #=> [{:k=>1}]

       # If the method is extended to accept a keyword
       def foo(*args, mode: false)
         p args
       end

       # The existing call may break
       foo(k: 1) #=> ArgumentError: unknown keyword k

    6. There are three minor changes about keyword arguments in Ruby 2.7.
    7. If your code doesn’t have to run on Ruby 2.6 or older, you may try the new style in Ruby 2.7. In almost all cases, it works. Note that, however, there are unfortunate corner cases as follows:
    8. Ruby 2.6 or before themselves have tons of corner cases in keyword arguments.
    9. You need to explicitly delegate keyword arguments:

       def foo(*args, **kwargs, &block)
         target(*args, **kwargs, &block)
       end

    10. ruby2_keywords might be removed in the future after Ruby 2.6 reaches end-of-life. At that point, we recommend to explicitly delegate keyword arguments
    11. foo({}, **{}) #=> Ruby 2.7: [{}] (You can pass {} by explicitly passing "no" keywords)
    12. you can use the new delegation syntax (...) that is introduced in Ruby 2.7:

        def foo(...)
          target(...)
        end

    13. ruby2_keywords allows you to run the old style even in Ruby 2.7 and 3.0.
    14. In Ruby 2, you can write a delegation method by accepting a *rest argument and a &block argument, and passing the two to the target method. In this behavior, the keyword arguments are also implicitly handled by the automatic conversion between positional and keyword arguments.
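
      A sketch of the ruby2_keywords approach mentioned above (`target` is a placeholder for whatever method you delegate to):

          ruby2_keywords def foo(*args, &block)
            # keywords ride along inside *args in the old Ruby 2 style,
            # and are passed through to target correctly on 2.7 and 3.x
            target(*args, &block)
          end
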
    15. Will my code break on Ruby 2.7? A short answer is “maybe not”. The changes in Ruby 2.7 are designed as a migration path towards 3.0. While in principle, Ruby 2.7 only warns against behaviors that will change in Ruby 3, it includes some incompatible changes we consider to be minor. See the “Other minor changes” section for details. Except for the warnings and minor changes, Ruby 2.7 attempts to keep the compatibility with Ruby 2.6. So, your code will probably work on Ruby 2.7, though it may emit warnings. And by running it on Ruby 2.7, you can check if your code is ready for Ruby 3.0.
    16. If you want to disable the deprecation warnings, please use a command-line argument -W:no-deprecated or add Warning[:deprecated] = false to your code.
    17. However, this style is not recommended in new code, unless you are often passing a Hash as a positional argument, and are also using keyword arguments
    1. In line with dishonest asset flippers, this has a number of fake positive reviews from compromised accounts, all in the same broken English.
    2. Dungless is another GameMaker Studio asset flip from serial copy+paste infringers, Imperium Game. All these guys do is rip off game templates and projects from the Yoyogames/GameMaker Studio store and try to scam people into paying for someone else's work on Steam. This time they've ripped off a basic template for a 2D retro pixel platformer/brawler. In line with this asset flipping behaviour, the game was dumped immediately at launch into DailyIndieGame shovelware bundles. In line with dishonest asset flippers, this has a number of fake positive reviews from compromised accounts, all in the same broken English. Even if this wasn't just an asset flip, it would be garbage. Impossible to recommend.
    1. Developers leave in glaring issues that should have been resolved in the base game. For example, the original development had fuel, then scrapped it, then 3 or 4 years later they realized that logistics is actually important. The monetization scheme is inherently predatory: charging your customers for a game that, without DLC, is without a backbone.
    2. it shows history well, along with a good display of alternate history!
    3. This is somehow the only game where you can play as an anti-fascist faction in Nazi Germany, drive the Nazis into the ocean, kill Hitler and reclaim Germany as a democracy, and that's somehow the most boring possible outcome.
    1. Before you go like “Wow!!!”, understand that the packages highlighted above take a lot into consideration when detecting timezones. This makes them slightly more accurate than Intl API alone.

      What exactly does moment do for us, then, that

      Intl.DateTimeFormat().resolvedOptions().timeZone;

      doesn't do? Name one example where it is more accurate.

    2. Intl.DateTimeFormat().resolvedOptions().timeZone
    3. Don’t hate me, but let’s face it, if I showed you this, you may have ignored the rest of this article lol.

      .

  2. Jul 2021
    1. Some applications mentioned here are not open source. They are listed here because they are available on Linux and the article’s focus is on Linux. Such applications are duly marked non-FOSS so that you can make a decision yourself.
    1. function omit(obj, ...props) {
         const result = { ...obj };
         props.forEach(function(prop) {
           delete result[prop];
         });
         return result;
       }

    1. If you prefer monkeypatching (around 70) linguistics methods directly onto core classes, you can do that by adding a 'monkeypatch' option to ::use:
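
      Presumably something along these lines (the exact option value is an assumption, not taken from the docs):

          require 'linguistics'
          Linguistics.use(:en, monkeypatch: true)

          # with monkeypatching, the ~70 methods land directly on core classes, so roughly:
          "goose".plural
          # instead of the non-monkeypatched form:
          "goose".en.plural
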
    1. that's why I bolded "same column" with the or query. I can delete the comment altogether, but thought it would be helpful for people perusing "or" query SO questions.
    2. Arel is a public API, or more precisely, it exposes one. Active Record just provides convenience methods that use it under the hood. It's completely valid to use it on its own. It follows semantic versioning, so unless you are changing major versions (3.x.x => 4.x.x), there is no need to worry about breaking changes.
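
      For example, a same-column OR built directly on the Arel table (a rough sketch, assuming a User model with a status column):

          users = User.arel_table
          User.where(users[:status].eq("active").or(users[:status].eq("pending")))
          # => SELECT "users".* FROM "users"
          #    WHERE ("users"."status" = 'active' OR "users"."status" = 'pending')
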
    1. A semantic command dispatch intended for loading external dependencies into the environment.
    1. direnv is not loading the .envrc into the current shell. It’s creating a new bash sub-process to load the stdlib, direnvrc and .envrc, and only exports the environment diff back to the original shell. This allows direnv to record the environment changes accurately and also work with all sorts of shells. It also means that aliases and functions are not exportable right now.
    1. Add this to ~/.direnvrc:

      This is almost the same as used by

      https://github.com/apollographql/apollo-server/blob/main/.envrc

      except they (like me) prefer to just put it in the local .envrc instead of requesting that all developers put something in a file outside of the project root.

      Also, the version at https://github.com/apollographql/apollo-server/blob/main/.envrc is slightly better.

    1. watch_file .nvmrc
       local NVM_PATH="$HOME/.nvm/nvm.sh"
       if ! [ -f "$NVM_PATH" ]; then
         echo "Installing NVM" >&2
         curl -o- https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
       fi
       . "${NVM_PATH}"
       # `nvm use` will set up the environment to use some version matching what we have
       # in .nvmrc without talking to the network, assuming that there is any matching
       # version already downloaded. If there isn't (eg, you're just getting started, or
       # we just did a major version upgrade) then it will fail and `nvm install` will
       # download a matching version.
       nvm use || nvm install
       # Let you run npm-installed binaries without npx.
       layout node

    1. # This is a configuration file for direnv (https://direnv.net/), a tool that
       # allows you to automatically set up environment variables based on the current
       # directory. If you install and enable direnv, then this file will ensure that
       # `nvm` is installed in your home directory and that the version of Node in
       # .nvmrc is selected.

  3. datatracker.ietf.org
    1. Relationship to TCP and HTTP _This section is non-normative._ The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request. By default, the WebSocket Protocol uses port 80 for regular WebSocket connections and port 443 for WebSocket connections tunneled over Transport Layer Security (TLS) [RFC2818].
    2. It is similarly intended to fail to establish a connection when data from other protocols, especially HTTP, is sent to a WebSocket server, for example, as might happen if an HTML "form" were submitted to a WebSocket server. This is primarily achieved by requiring that the server prove that it read the handshake, which it can only do if the handshake contains the appropriate parts, which can only be sent by a WebSocket client. In particular, at the time of writing of this specification, fields starting with |Sec-| cannot be set by an attacker from a web browser using only HTML and JavaScript APIs such as XMLHttpRequest [XMLHttpRequest].
    3. The protocol is intended to be extensible; future versions will likely introduce additional concepts such as multiplexing.
    4. The WebSocket Protocol is designed on the principle that there should be minimal framing (the only framing that exists is to make the protocol frame-based instead of stream-based and to support a distinction between Unicode text and binary frames). It is expected that metadata would be layered on top of WebSocket by the application layer, in the same way that metadata is layered on top of TCP by the application layer (e.g., HTTP). Conceptually, WebSocket is really just a layer on top of TCP that does the following:

       o adds a web origin-based security model for browsers
       o adds an addressing and protocol naming mechanism to support multiple services on one port and multiple host names on one IP address
       o layers a framing mechanism on top of TCP to get back to the IP packet mechanism that TCP is built on, but without length limits
       o includes an additional closing handshake in-band that is designed to work in the presence of proxies and other intermediaries

       Other than that, WebSocket adds nothing. Basically it is intended to be as close to just exposing raw TCP to script as possible given the constraints of the Web. It's also designed in such a way that its servers can share a port with HTTP servers, by having its handshake be a valid HTTP Upgrade request. One could conceptually use other protocols to establish client-server messaging, but the intent of WebSockets is to provide a relatively simple protocol that can coexist with HTTP and deployed HTTP infrastructure (such as proxies) and that is as close to TCP as is safe for use with such infrastructure given security considerations, with targeted additions to simplify usage and keep simple things simple (such as the addition of message semantics).

    5. When an endpoint is to interpret a byte stream as UTF-8 but finds that the byte stream is not, in fact, a valid UTF-8 stream, that endpoint MUST _Fail the WebSocket Connection_. This rule applies both during the opening handshake and during subsequent data exchange.
    6. The goal of this technology is to provide a mechanism for browser-based applications that need two-way communication with servers that does not rely on opening multiple HTTP connections (e.g., using XMLHttpRequest or <iframe>s and long polling).
    1. Our members are technically self-pay; however, 100 percent of our members pay their bills. All we—and they—ask is that healthcare providers not penalize them for this technical designation. Please give our members the same consideration in terms of discounts that insurance companies receive for negotiated contracts.
    1. SvelteKit gives you complete freedom with respect to all its features. There’s always a way to exclude a feature if you prefer to.

      .

    2. SvelteKit offers a very elegant solution for this — the load function. The load function can run both on the client and on the server side and in both cases will be executed before the component renders.
    3. There are two ways SvelteKit does this: prerendering and server-side rendering
    1. The fetch is re-executed on the client, but fetch is overloaded and never hits the network - a cached HTTP blob is shipped in the HTML and the load function uses it on the client as a kind of simple cache.

      .

    2. If so, I think this is a very sad omission and makes SvelteKit much harder to work with than it has to be, since it's very common for many components to not want to hydrate data on the frontend after it's been loaded. This is not React. There is no hydration problem.

      .

    1. If the API call fails due to a stale token, the endpoint can get a new token and send it to the browser via Set-Cookie in the response headers. Note for this to work, you must ensure the endpoint is being called by the browser (not the server.) SvelteKit templates are executed first on the server, then again in the browser. If the endpoint is called from the server, the browser cookie will not be set.

      .

    2. Although not well-documented, fetch is available from all of these hooks. Simply add node-fetch as a dependency in package.json (not a devDependency!).

      .

    3. getSession() is probably not a good choice. The main purpose of this hook is to create a sanitized version of the server context for the browser (like removing sensitive information such as passwords/API keys). It is called after the handle() hook, so it would be too late for setting any headers in the response.

      .

    1. That's because in CSS, any value at all -- even 'disabled="false"' or whatever -- is exactly the same as 'disabled' or 'disabled="disabled"'. The opposite of 'disabled="anything"' is to not have a disabled keyword in there at all.
    1. Guiding Principles¶ Some guiding principles Nokogiri tries to follow: be secure-by-default by treating all documents as untrusted by default be a thin-as-reasonable layer on top of the underlying parsers, and don't attempt to fix behavioral differences between the parsers
    1. HTTP Caching: We run a website called UNPKG. It serves over 70 billion requests per month without making a dent on our credit card bill. It's possible because of HTTP caching and CDNs

      .

    1. e.preventDefault(); // because we're progressively enhancing the 'add to cart' <form>

      progressively enhancing

      this.action this.method

    1. Do you prefer a different email validation gem? If so, open an issue with a brief explanation of how it differs from this gem. I'll add a link to it in this README.
    1. It's now possible to move terminals between windows by detaching via Terminal: Detach Session in one window and attaching to another with Terminal: Attach to Session. In the future, this should help enable cross-window drag and drop!
    1. Please note that the strategy: :build option must be passed to an explicit call to association, and cannot be used with implicit associations:
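
      A sketch of the difference (the factory names here are made up):

          FactoryBot.define do
            factory :post do
              # explicit call to `association`: strategy: :build is honored
              association :author, factory: :user, strategy: :build

              # implicit association: there is nowhere to pass strategy: :build
              # author
            end
          end
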
    1. Are you more comfortable with the multiple inheritance that modules provide, or do you prefer composition?
    2. I guess it is more of a feeling whether to use them or not.
    3. But having an experienced team that knows the codebase well as an argument for using them is weird and not strong.
    4. the interesting thing here is that there are comments that say which concern depends on which.
    5. Putting comments like these can be helpful, but it’s still set up for doing something sketchy, especially if you are new to the codebase. Being new and not being aware of all the “gotchas” a code has can certainly send you down the concern downward spiral.
    6. By looking at the code screenshot, you are either opening your mouth in awe or you are appalled. I feel there is no in-between here.
    7. What is risky here is that the concern (mixin) knows a lot about the model it gets included in. It is what is called a circular dependency. Song and Album depend on Trashable for trashing, Trashable depends on both of them for featured_authors definition. The same can be said for the fact that a trashed field needs to exist in both models in order to have the Trashable concern working.
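
      A rough reconstruction of the shape being described (names follow the article, but the details are guesses, not its actual code):

          module Trashable
            extend ActiveSupport::Concern

            included do
              # assumes a `trashed` column on every including model (Song, Album)
              scope :trashed, -> { where(trashed: true) }
            end

            def trash
              update(trashed: true)
            end

            # the circular part: the concern reaches back into the including
            # model's associations to define something like featured_authors
            def featured_authors
              authors.where(featured: true)
            end
          end
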
    8. This works nicely wherever we show authors, but after we deploy to production, the folks from other parts of the world won’t get notified anymore about their songs. Mistakes like these are easy to make when using concerns.
    1. was due to a form that was submitted (ALWAYS SET THAT TYPE-PROPERTY ON YOUR BUTTONS!) after my onclick-event fired, but before my request had any chance to be completed.
    2. in my case, the browser clearing its network tab and the next request being attributed to "Initiator: document" should have been a clue. (Meaning: it's not done by some JS, but by some HTML functionality.)
    1. Following "NetworkError when attempting to fetch resource." only on Firefox I found the problem. It seems that Firefox' onclick event propagation interferes here with the fetch() call. As soon as I added event.preventDefault() in the onclick-handler before doing the actual fetch(), everything started to work again.
    1. Rails' inability to automatically route my link_to and form_for in STI subclasses to the superclass is a constant source of frustration to me. +1 for fixing this bug.

      I've had to work around this by doing record.as(BaseClass)

    1. You can do this elegantly with throw/catch, like this:
    2. In most languages, there is no clean equivalent for breaking out of a recursive algorithm that uses a recursive function. In Ruby, though, there is!
    3. it's much faster—the stack frame does not have to be carried along the "thrown symbol", and no object is created. Lightweight nonlinear flow control.
    4. Throw is a more elegant way to use an exception-like system as control flow.
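
      A small sketch of the pattern (the tree and node names are made up):

          Node = Struct.new(:value, :children)

          # stop the whole recursive search the moment a match is found,
          # unwinding every stack frame in one jump
          def find(root, target)
            catch(:found) do
              search(root, target)
              nil                    # only reached if nothing matched
            end
          end

          def search(node, target)
            throw(:found, node) if node.value == target
            node.children.each { |child| search(child, target) }
          end

          tree = Node.new(1, [Node.new(2, []), Node.new(3, [Node.new(4, [])])])
          find(tree, 4)   # => #<struct Node value=4, children=[]>
          find(tree, 9)   # => nil
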
    1. The most important part of this query is the with block. It's a powerful resource, but for this example, you can think of it as a "way to store a variable" that is the path of the contact you need to update, which will be dynamic depending on the record.
    2. It just builds the path as '{1, value}', but we need to convert it to text[] because that’s the type expected by the path argument of jsonb_set.
    1. For user-contributed data that's freeform and unstructured, use jsonb. It should perform as well as hstore, but it's more flexible and easier to work with.
    1. This is one of the more-satisfying ruby expressions I've seen in a long time. I can't say that it also has prosaic transparency, but I think seeing it teaches important things.
    2. x = -3
       "++-"[x <=> 0] # => "-"
       x = 0
       "++-"[x <=> 0] # => "+"
       x = 3
       "++-"[x <=> 0] # => "+"

    3. I think that it's nonsense not to have a method that just gives -1 or +1. Even BASIC has such a function, SGN(n). Why should we have to deal with Strings when it's numbers we want to work with? But that's just MHO.
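
      For what it's worth, the spaceship comparison alone already yields the -1 / 0 / +1 the commenter is asking for; the string in the snippet above is only there to map that result onto characters:

          -3 <=> 0   # => -1
           0 <=> 0   # => 0
           3 <=> 0   # => 1
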
    1. As for why - a GET can be cached and in a browser, refreshed. Over and over and over. This means that if you make the same GET again, you will insert into your database again. Consider what this may mean if the GET becomes a link and it gets crawled by a search engine. You will have your database full of duplicate data.
    2. This is not advice. A GET is defined in this way in the HTTP protocol. It is supposed to be idempotent and safe.
    1. has the same effect (that is no side effect)
    2. The difference between PUT and POST is that PUT is idempotent: calling it once or several times successively has the same effect (that is no side effect), whereas successive identical POST requests may have additional effects, akin to placing an order several times.
  4. store.steampowered.com
    1. Why is there a reservation fee?The main reason for reservations is to ensure an orderly and fair ordering process for customers when Steam Deck inventory becomes available. The additional fee gives us a clearer signal of intent to purchase, which gives us better data to balance supply chain, inventory, and regional distribution leading up to launch.
    2. Steam Deck is a PC so you can install third party software and operating systems.
    1. Induction does not pander, but gives you the satisfaction of mastering an imaginary yet honest set of physical laws.
    2. Across more than 50 meticulously designed puzzles

      hard-crafted

    3. you must explore the counter-intuitive possibilities time travel permits. You will learn to choreograph your actions across multiple timelines, and to construct seemingly impossible solutions, such as paradoxical time loops, where the future depends on the past and the past depends on the future.
    1. whereas now, they know that user@domain.com was subscribed to xyz.net at some point and is unsubscribing. Information is gold. Replace user@domain with abcd@senate and xyz.net with warezxxx.net and you've got tabloid gold.
    1. While Microsoft is entirely in the right by reminding people of the terms they agreed to, many users are taking issue with the fact that they hadn’t been warned about the limit in the eight years it’s been in place, and many people are now being told they are over the limit after years of being over.
    1. Sending body/payload in a GET request may cause some existing implementations to reject the request — while not prohibited by the specification, the semantics are undefined. It is better to just avoid sending payloads in GET requests.
    2. Requests using GET should only be used to request data (they shouldn't include data).
    1. So long as the filters are only using GET requests to pull down links, there’s nothing fundamentally wrong with them. It’s a basic (though oft-ignored) tenet of web development that GET requests should be idempotent; that is, they shouldn’t somehow change anything important on the server. That’s what POST is for. A lot of people ignore this for convenience’s sake, but this is just one way that you can get bitten. Anyone remember the Google Web Accelerator that came out a while ago, then promptly disappeared? It’d pre-fetch links on a page to speed up things if you clicked them later on. And if one of those links happened to delete something from a blog, or log you out… well, then you begin to see why GET shouldn’t change things. So yes, the perfect solution to this is a 2-step unsubscribe link: the first step takes to you a page with a form on it, and that form then POSTs something back that finalizes the unsubscribe request.
    2. Two step unsubscribe, where the link in the email goes to a webpage with a prominent “click here to unsubscribe” button is often a good thing for unsubscription. It also gives people an option to not unsubscribe, when they click on the wrong link, or hit “return” with the wrong link focused, in a mail inadvertently, which isn’t that unusual in link-laden emails.
    3. Idempotent just means that following a link twice has exactly the same effect on persistent state as clicking it once. It does not mean that following the link must not change state, just that after following it once, following it again must not change state further. There are good reasons to avoid GET requests for changing state, but that’s not what idempotent means.
    1. Arguably any link that performs such an action via GET is fundamentally broken. A proper unsubscribe should direct to a page with a form that requires a POST submission. (Of course, in the real world, few things are proper.)
    2. Assuming that people trust your site, abusing redirections like this can help avoid spam filters or other automated filtering on forums/comment forms/etc. by appearing to link to pages on your site. Very few people will click on a link to https://evilphishingsite.example.com, but they might click on https://catphotos.example.com?redirect=https://evilphishingsite.example.com, especially if it was formatted as https://catphotos.example.com to hide the redirection from casual inspection - even if you look in the status bar while hovering over that, it starts with a reasonable looking string.
    1. Each of them implements a different semantic, but some common features are shared by a group of them: e.g. a request method can be safe, idempotent, or cacheable.

      Which ones are in each group?

      Never mind. The answer is in the pages that are being linked to.

    2. request method can be safe, idempotent, or cacheable.
    1. Testing at GitLab is a first class citizen, not an afterthought. It’s important we consider the design of our tests as we do the design of our features.
    1. A big gotcha needs to be mentioned: When testing transaction, you need to turn off transactional_fixtures. This is because the test framework (e.g Rspec) wraps the test case in transaction block. The after_commit is never called because nothing is really committed. Expecting rollback inside transaction doesn't work either even if you use :requires_new => true. Instead, transaction gets rolled back after the test runs.
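
      A sketch of the RSpec side of this (assuming rspec-rails; the model and callback are made up):

          # spec/rails_helper.rb
          RSpec.configure do |config|
            # without this, every example runs inside a transaction that is
            # rolled back, so after_commit callbacks never fire
            # (you'll then typically want something like DatabaseCleaner with a
            # truncation strategy to clean up between examples)
            config.use_transactional_fixtures = false
          end

          # spec/models/order_spec.rb
          RSpec.describe Order do
            it "runs after_commit callbacks" do
              order = Order.create!(total: 100)
              expect(order.receipt_sent).to be(true)  # assumes an after_commit sets this flag
            end
          end
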
    1. urql stays true to server data and doesn’t provide functions to manage local state like Apollo Client does. In my opinion, this is perfectly fine as full-on libraries to manage local state in React are becoming less needed. Mixing server-side state and local state seems ideal at first (one place for all states) but can lead to problems when you need to figure out which data is fresh versus which is stale and when to update it.
    2. Looking deeper, you can see a large amount of issues open, bugs taking months to fix, and pull requests never seem to be merged from outside contributors. Apollo seems unfocused on building the great client package the community wants.
    3. This sort of behaviour indicates to me that Apollo is using open-source merely for marketing and not to make their product better. The company wants you to get familiar with Apollo Client and then buy into their products, not truly open-source software in my opinion. This is one of the negatives of the open-core business model.
    4. This “bloat,” along with recently seeing how mismanaged the open-source community is, finally broke the camel’s back for me. I realized that I needed to look elsewhere for a GraphQL client library.
    5. Sometimes libraries can be too opinionated and offer too much “magic”. I’ve been using Apollo Client for quite some time and have become frustrated with its caching and local state mechanisms.
    6. Because GraphQL is an opinionated API spec where both the server and client buy into a schema format and querying format. Based on this, they can provide multiple advanced features, such as utilities for caching data, auto-generation of React Hooks based on operations, and optimistic mutations.
    1. In the examples above, the number 42 on the left hand side isn’t a variable that is being assigned. It is a value to check that the same element in that particular index matches that of the right hand side.
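
      If this is about Ruby's case/in array patterns, a minimal illustration of the point (the values are made up):

          case [42, "hello"]
          in [42, msg]     # 42 is a value to match against index 0, not a variable being assigned
            msg            # => "hello" -- msg *is* bound, because it's a bare identifier
          in [_, _]
            "first element was not 42"
          end
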
    1. The proposed syntax is much harder to implement than it looks. It conflicts with Hash literals. As a result, humans can be confused as well.

      harder than it looks

    2. You'll note that it doesn't give the possibility to map the key to a different variable. Indeed, I don't think that it would be useful and I would rather encourage rubyists to use meaningful option and variable names

      .

    1. It’s fun but when would we ever use things like this in actual code? When it's well tested, commented, documented, and becomes an understood idiom of your code base. We focus so much on black magic and avoiding it that we rarely have a chance to enjoy any of the benefits. When used responsibly and when necessary, it gives a lot of power and expressiveness.