19,825 Matching Annotations
  1. Dec 2022
    1. Also, make sure a user does not need to enter credentials or select a checkbox to complete the unsubscription process.
    2. With the HTTP method, the list-unsubscribe header contains a URL leading to a web page for opting out. When a user clicks the unsubscribe link, a landing page opens, and the user is asked to confirm unsubscription.
    3. Imagine what happens when subscribers change activities, interests, or focus. As a result, they may no longer be interested in the products and services you offer. The emails they receive from you are now either ‘marked as read’ in their inbox or simply ignored. They neither click the spam reporting button nor attempt to find the unsubscribe link in the text. They are no longer your customers, but you don’t know it.
    4. Let’s say the recipient is considering unsubscribing. He or she may be too busy to search through the email to find the unsubscribe link, so he or she just clicks “Report as SPAM” to stop the emails from coming. This is the last thing any marketer wants to see happen. It negatively impacts sender reputation, requiring extra work to improve email deliverability. With the list-unsubscribe header, you will avoid getting into this kind of trouble in the first place.
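
      A minimal sketch of how that header pair might be added from a Rails mailer; the unsubscribe URL helper and token attribute are hypothetical stand-ins for whatever your app provides, and only the two header names come from the List-Unsubscribe / List-Unsubscribe-Post specs:

      ```
      class NewsletterMailer < ApplicationMailer
        def weekly_digest(subscriber)
          # unsubscribe_url and unsubscribe_token are hypothetical app-specific helpers
          unsubscribe = unsubscribe_url(token: subscriber.unsubscribe_token)

          headers['List-Unsubscribe']      = "<#{unsubscribe}>, <mailto:unsubscribe@example.com>"
          # RFC 8058 one-click unsubscribe: the mailbox provider POSTs to the URL,
          # so the user never has to log in or tick a checkbox to opt out.
          headers['List-Unsubscribe-Post'] = 'List-Unsubscribe=One-Click'

          mail(to: subscriber.email, subject: 'Weekly digest')
        end
      end
      ```
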
    1. Can't annotate on https://feedback.mailgun.com/forums/156243-feature-requests/suggestions/39905227-provide-meaningful-delivery-status-description-rat so posting here instead.

      Anonymous commented · May 26, 2021 4:36 AM

      Without your comment I'd never have found the real issue, because I was only looking at permanent failures. That error message is really misleading; hope they can fix this.

      Kelly commented · December 30, 2020 2:35 AM

      Yes we desperately need this too. Half of our recipients were soft bounced due to "Too old" but we could still send to them previously on other ESPs.

    2. Certain email servers, Yahoo especially, throttle deliveries when multiple inbound connections are detected from the same IP. When this happens, Mailgun sends a "temporary" severity bounce. Mailgun will continue to retry over a period of time. If it can't deliver after 8 hours, the email will permanently fail with severity: permanent and reason: old.
    3. Just to add that there is also reason: old. This happens when an email cannot be delivered after 8 hours. It should still be treated as a non-permanent bounce though.
    4. Adding a condition to ensure that a contact is marked as bounced only if the reason is one of the above, should hopefully resolve this issue.
    5. I did some further digging and found that there is a reason property that can be used to determine whether Mailgun added an email address to its bounce suppression list:
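
      A sketch of that kind of guard in a webhook handler; the severity/reason field names follow the annotations here, and the list of hard-bounce reasons is a placeholder to be filled in from Mailgun's docs:

      ```
      # Hypothetical Rails controller action receiving Mailgun's permanent-failure webhook.
      HARD_BOUNCE_REASONS = %w[bounce suppress-bounce].freeze # placeholder list

      def permanent_fail
        event = params.require('event-data')

        # Only suppress the contact for genuine hard bounces; reason "old" just means
        # this particular message could not be delivered within the retry window.
        if event['severity'] == 'permanent' && HARD_BOUNCE_REASONS.include?(event['reason'])
          Contact.find_by(email: event['recipient'])&.update(bounced: true) # Contact is illustrative
        end

        head :ok
      end
      ```
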
    6. ...but even repeated soft bounces are a message-level event, not one that means there will never be an opportunity to deliver to this address again. Hence Mailgun itself not adding this to their permanent suppression list... but that implies, right, that they will send to the permanent failure hook in this case?

      That could be a problem, if it actually sends to the permanent failure hook in this case. Then you would have to hit their bounces API to check whether it's actually a permanent failure / hard bounce for the recipient as opposed to just for this message.

    7. From that quote above, it is clear Mailgun recognise this issue themselves (the possibility of one-off soft bounces for a variety of reasons) and therefore do not add these contacts to their permanent bounce list - unless it's a true hard bounce. But they are rightly still alerting that the message in question has permanently failed to be delivered on this occasion.
    8. Mailgun, with its permanent failure webhook, is sending a message about a permanent failure of that specific message - it is Campaign that is then making a decision to translate this message, about just that one message, into a permanently bounced (suppressed) contact, and blocking all future emails to that contact - based on what is quite possibly just a temporary failure. It's really the distinction between a single message-level (temporary) problem and a (permanent) contact-level problem that is being lost with Campaign's current approach.
    9. but that before marking the contact as bounced, Campaign should double-check it was really a hard bounce that would affect future deliverability.
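
      One way to implement that double check is to ask Mailgun's Bounces API whether the address is actually on the domain's suppression list before flagging the contact. A sketch, assuming the GET /v3/<domain>/bounces/<address> endpoint returns 404 when the address is not suppressed:

      ```
      require 'cgi'
      require 'net/http'

      def hard_bounced?(address, domain:, api_key:)
        uri = URI("https://api.mailgun.net/v3/#{domain}/bounces/#{CGI.escape(address)}")
        req = Net::HTTP::Get.new(uri)
        req.basic_auth('api', api_key)

        res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
        res.is_a?(Net::HTTPSuccess) # 200 => on the bounce list; 404 => not a suppressed address
      end
      ```
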
    10. some are legit bounces (people who typed emails wrongly etc) - but some are 'too old' which is a generic deliverability type message (according to Mailgun)

      too old error

    11. This becomes, then, a thorny problem - these perfectly valid emails that are affected by this temporary issue are marked as permanently bounced in Campaign... when they really shouldn't be, given this bigger picture.
    12. But these are not permanent failures - they ARE permanent for that message of course

      Exactly. I arrived at the same conclusion.

    13. This is not wonderfully clear/great from Mailgun's end (as they are effectively translating a temporary delivery issue into a message about a permanent contact failure) - but the net effect is pretty broken handling of temporary bounces against contacts, which just creates inaccuracies and a bit of a mess.
    14. but really anyone using Mailgun is going to want this, aren't they?
    15. Mailgun, like all of these services at the more affordable levels, uses shared IPs for sending the mail. Unfortunately, as I have found over the last 3 or 4 years with them, it is not uncommon that one of their IPs gets blacklisted by SpamCop and similar services due to some other user of that IP being 'noisy' as Mailgun put it.
    16. Yes you could disable the Mailgun webhooks altogether as Mailgun will never allow sending to hard bounced email addresses.
  2. Nov 2022
    1. Mash duplicates any sub-Hashes that you add to it and wraps them in a Mash. This allows for infinite chaining of nested Hashes within a Mash without modifying the object(s) that are passed into the Mash. When you subclass Mash, the subclass wraps any sub-Hashes in its own class. This preserves any extensions that you mixed into the Mash subclass and allows them to work within the sub-Hashes, in addition to the main containing Mash.
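
      A small sketch of that behaviour (only the hashie gem is assumed; the Shouting mixin is made up for illustration):

      ```
      require 'hashie'

      module Shouting
        def shout(key)
          self[key].to_s.upcase
        end
      end

      class LoudMash < Hashie::Mash
        include Shouting
      end

      mash = LoudMash.new(user: { name: 'Rey' })
      mash.user.class        #=> LoudMash - the nested Hash was wrapped in the subclass
      mash.user.shout(:name) #=> "REY"    - so the mixed-in extension works on sub-hashes too
      ```
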
    2. eierlegende Wollmilchsau (German idiom, literally an "egg-laying wool-milk-sow": a single tool expected to do everything)
    3. foo = Foo.new(bar: 'baz')      #=> {:bar=>"baz"}
       qux = { **foo, quux: 'corge' } #=> {:bar=>"baz", :quux=>"corge"}
       qux.is_a?(Foo)                 #=> true

      This surprised me.

      I would have expected — since Hash literal notation { } was used — that the resulting type would be Hash, not the type of foo. Strange.

      Is this a good thing... or?


      Also, in my quick test, I didn't find this to be true, so...?

      ```
      main > symbol_mash.class
      => SymbolizedMash

      main > { **symbol_mash }.class
      => Hash
      ```

    4. by using symbols as keys, you will be able to use the implicit conversion of a Mash via the #to_hash method to destructure (or splat) the contents of a Mash out to a block

      Eh? The example below:

      symbol_mash = SymbolizedMash.new(id: 123, name: 'Rey')
      symbol_mash.each do |key, value|
        # key is :id, then :name
        # value is 123, then 'Rey'
      end

      seems to imply that this is possible (and does an implicit conversion) because it defines to_hash. But that's simply not true, as these 2 examples below prove:

      ```
      main > symbol_mash.class_eval { undef :to_hash }
      => nil

      main > symbol_mash.each { |k, v| p [k, v] }
      [:id, 123]
      [:name, "Rey"]
      => {:id=>123, :name=>"Rey"}
      ```

      ```
      main > s = 'a'
      => "a"

      main > s.class_eval do
        def to_hash
          chars.zip(chars).to_h
        end
      end
      => :to_hash

      main > s.to_hash
      => {"a"=>"a"}

      main > s.each
      Traceback (most recent call last) (filtered by backtrace_cleaner; set Pry.config.clean_backtrace = false to see all frames):
      1: (pry):85:in `__pry__'
      NoMethodError: undefined method `each' for "a":String
      ```

    5. by using symbols as keys, you will be able to use the implicit conversion of a Mash via the #to_hash method to destructure (or splat) the contents of a Mash out to a block

      This doesn't actually seem to be an example of destructure/splat. (When it said "destructure the contents ... out to a block", I was surprised and confused, because splatting is when you splat it into an argument or another hash — never a block.)

      An example of destructure/splat would be more like

      method_that_takes_kwargs(**symbol_mash)

    6. It can be useful with keyword arguments, which require symbol keys.
    7. This extension still allows indifferent access, but keeps the form of the keys to eliminate confusion when you're not expecting the keys to change.
    8. Since Mash is conceived to provide pseudo-object functionality, handling keys which cannot represent a method call falls outside its scope of value.
    9. Hashie does not have built-in support for coercing boolean values, since Ruby does not have a built-in boolean type or standard method for coercing to a boolean. You can coerce to booleans using a custom proc.

      I use: ActiveRecord::Type::Boolean.new.cast(value)
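
      A sketch of the custom-proc approach combined with the ActiveRecord cast mentioned above (assumes Rails/ActiveRecord is available; the Settings class and :verified key are made up):

      ```
      require 'hashie'

      class Settings < Hash
        include Hashie::Extensions::Coercion

        # Hashie has no boolean coercion of its own, so delegate to ActiveRecord's cast.
        coerce_key :verified, ->(v) { !!ActiveRecord::Type::Boolean.new.cast(v) }
      end

      settings = Settings.new
      settings[:verified] = 'true'
      settings[:verified] #=> true
      settings[:verified] = '0'
      settings[:verified] #=> false
      ```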

    10. Note: The ? method will return false if a key has been set to false or nil. In order to check if a key has been set at all, use the mash.key?('some_key') method instead.
    11. Mash is an extended Hash that gives simple pseudo-object functionality that can be built from hashes and easily extended. It is intended to give the user easier access to the objects within the Mash through a property-like syntax, while still retaining all Hash functionality.
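
      A tiny illustration of the property-style access, including the ?-method caveat from the previous note:

      ```
      require 'hashie'

      mash = Hashie::Mash.new(name: 'Rey', admin: false)

      mash.name          #=> "Rey"  (property-style reader)
      mash.admin?        #=> false  (false because the stored value is false...)
      mash.key?('admin') #=> true   (...even though the key is definitely set)
      mash.email?        #=> false  (an unset key also reads as false)
      mash.email = 'rey@example.com' # property-style writer
      ```
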
    12. user.deep_fetch :groups, 1, :name

      similar to: dig
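
      Side by side, the practical difference shows up on a miss: #dig returns nil, while #deep_fetch raises (or falls back to a default block), like Hash#fetch:

      ```
      require 'hashie'

      user = { groups: [{ name: 'a' }, { name: 'b' }] }
      user.extend Hashie::Extensions::DeepFetch

      user.deep_fetch(:groups, 1, :name)              #=> "b"
      user.dig(:groups, 1, :name)                     #=> "b" (plain Ruby equivalent)

      user.dig(:groups, 5, :name)                     #=> nil
      user.deep_fetch(:groups, 5, :name) { :missing } #=> :missing (would raise without the block)
      ```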

    13. Though this is a hash extension, it conveniently allows for arrays to be present in the nested structure. This feature makes the extension particularly useful for working with JSON API responses.
    14. Value coercions, on the other hand, will coerce values based on the type of the value being inserted. This is useful if you are trying to build a Hash-like class that is self-propagating.
    1. In our system, events are generated by physical hosts and follow different routes to the event storage. Therefore, the order in which they appear in the storage and become retrievable - via the events API - does not always correspond to the order in which they occur. Consequently, this system behavior makes a straightforward implementation of event polling miss some events. The page of most recent events returned by the events API may not contain all the events that occurred at that time because some of them could still be on their way to the storage engine. When the events arrive and are eventually indexed, they are inserted into the already retrieved pages, which could result in the event being missed if the pages are accessed too early (i.e. before all events for the page are available). To ensure that all your events are retrieved and accounted for, please implement polling the following way:
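
      The recommended steps themselves are not quoted here; below is a rough sketch of a threshold-based polling loop along those lines. The half-hour settling window is illustrative, and the parameter names (begin, end, ascending, paging.next) follow Mailgun's events API docs and should be double-checked:

      ```
      require 'json'
      require 'net/http'

      SETTLE_WINDOW = 30 * 60 # only ask for events old enough to have been indexed

      def fetch_events(domain:, api_key:, since:)
        uri = URI("https://api.mailgun.net/v3/#{domain}/events")
        uri.query = URI.encode_www_form(
          begin: since.to_i, end: (Time.now - SETTLE_WINDOW).to_i, ascending: 'yes', limit: 300
        )

        events = []
        loop do
          req = Net::HTTP::Get.new(uri)
          req.basic_auth('api', api_key)
          page = JSON.parse(Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }.body)

          items = page.fetch('items', [])
          break if items.empty?

          events.concat(items)
          uri = URI(page.dig('paging', 'next')) # keep following next-page URLs until a page comes back empty
        end
        events
      end
      ```
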
    1. A SimpleDelegator instance can take advantage of the fact that SimpleDelegator is a subclass of Delegator to call super to have methods called on the object being delegated to.

       class SuperArray < SimpleDelegator
         def [](*args)
           super + 1
         end
       end

       SuperArray.new([1])[0] #=> 2
    1. Bash maintains an internal hash of previously found executables in your path. In this case, it has details that at one time there was an executable at /usr/bin/siege, and reuses that path to avoid having to search again. You need to tell bash to manually rehash the path for siege like so:

       hash siege

       You can also clear all hashed locations:

       hash -r
    1. Remember there are two kinds of variable. Internal Variables and Environment Variables. PATH should be an environment variable.

      In my case, I was trying to debug `which asdf` not finding asdf, in a minimal shell.

      I had checked

      bash-5.1$ echo $PATH|grep asdf
      /home/tyler/.asdf/bin

      but

      ```
      # The PATH environment variable
      env | /bin/grep PATH
      ```

      being empty was the key discovery here. Must have forgotten the `export`.

    2. All shells should tell you that your path is the same thing with BOTH of the two commands:

       # The PATH variable
       echo "$PATH"

       # The PATH environment variable
       env | /bin/grep PATH
    1. In v3, svelte-preprocess was able to type-check Svelte components. However, given the specifics of the structure of a Svelte component and how the script and markup contents are related, type-checking was sub-optimal. In v4, your TypeScript code will only be transpiled into JavaScript, with no type-checking whatsoever. We're moving the responsibility of type-checking to tools better fit to handle it, such as svelte-check, for CLI and CI usage, and the VS Code extension, for type-checking while developing.
    1. Undecided yet what bundler to use? We suggest using SvelteKit, or Vite with vite-plugin-svelte.

      Undecided?

    1. You're likely not using "type": "module" in your package.json, so import statements don't work in svelte.config.js. You have three ways to fix this:

       - Use require() instead (also see https://github.com/sveltejs/language-tools/blob/master/docs/preprocessors/in-general.md#generic-setup)
       - Rename svelte.config.js to svelte.config.mjs
       - Set "type": "module" in your package.json (may break other scripts)
    1. Warning: This ignores the user's keyboard layout, so that if the user presses the key at the "Y" position in a QWERTY keyboard layout (near the middle of the row above the home row), this will always return "KeyY", even if the user has a QWERTZ keyboard (which would mean the user expects a "Z" and all the other properties would indicate a "Z") or a Dvorak keyboard layout (where the user would expect an "F"). If you want to display the correct keystrokes to the user, you can use Keyboard.getLayoutMap().

      Wow, that's quite a caveat!

    1. It would probably be worth mentioning this explicitly in the README: "Configuration of the language server happens over the LSP protocol by passing a configuration object; your LSP client should have a way of setting the configuration object for a server. Here is a link to the spec for the configuration that is supported [...]"
    1. So when configuring Capybara, I'm using ignore_default_browser_options, and only re-use this DEFAULT_OPTIONS and exclude the key I don't want

       Capybara::Cuprite::Driver.new(
         app,
         {
           ignore_default_browser_options: true,
           window_size: [1200, 800],
           browser_options: { 'no-sandbox': nil }.merge(
             Ferrum::Browser::Options::Chrome::DEFAULT_OPTIONS.except(
               "disable-features", "disable-translate", "headless"
             )
           ),
           headless: false,
         }
       )
    1. You can definitely set the Return-Path header as a sender. But yes, some receivers might rewrite it (but not always), or depending on who you're sending through, it might be re-written by them. For instance, when using MailGun to send bulk email you have to do things just right in order to set a Return-Path that will be preserved. I know this contradicts the RFC you cite, but it's in practice true.
    2. Return-Path header is written by the receiving server, not by the sending server. And as per the RFC 5321, it is the same as the address supplied in MAIL FROM command.
    1. Do you know about lacolhost.com? as in, do something like blerg.lacolhost.com:3000/ as your url and it'll resolve to localhost:3000, which is where your tests are running.
    2. I have DNS settings in my hosts file that are what resolve the visits to localhost, but also preserve the subdomain in the request (this latter point is important because Rails path helpers care which subdomain is being requested)
    3. I've developed additional perspective on this issue - I have DNS settings in my hosts file that are what resolve the visits to localhost, but also preserve the subdomain in the request (this latter point is important because Rails path helpers care which subdomain is being requested) To sum up the scope of the problem as it stands now - I need a way within Heroku/Capybara system tests to both route requests to localhost, but also maintain the subdomain information of the request. I've been able to accomplish one or the other, but haven't found a configuration that provides both yet.
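
      For reference, a sketch of what the Capybara side of that setup can look like, assuming *.lacolhost.com (or equivalent hosts-file entries) resolves to 127.0.0.1; the 'app' subdomain is illustrative:

      ```
      # spec/support/capybara.rb
      Capybara.server_host = '127.0.0.1'
      Capybara.app_host = 'http://app.lacolhost.com' # requests hit localhost but keep the subdomain
      Capybara.always_include_port = true

      # Path/url helpers in a test can then target other subdomains, e.g.:
      #   visit root_url(host: 'tenant1.lacolhost.com')
      ```
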
    1. Benoit Daloze of TruffleRuby points out that this is all much easier to read if you define your Ruby internals in Ruby, like they do. He's not wrong.
    2. first we're looking for the "main" object. The word "main" is used in lots of places in Ruby, so that will be hard to track down. How else can we search? Luckily, we know that if you print out that object, it says "main". Which means we should be able to find the string "main", quotes and all, in C.
    1. Refresh tokens are bearer tokens. It's impossible for the authorization server to know who is legitimate or malicious when receiving a new access token request. We could then treat all users as potentially malicious.
    2. How could we handle a situation where there is a race condition between a legitimate user and a malicious one?
    1. There are also many reasons refresh tokens may expire prior to any expected lifetime of them as well.

      such as...?

    2. You might notice that the “expires_in” property refers to the access token, not the refresh token. The expiration time of the refresh token is intentionally never communicated to the client. This is because the client has no actionable steps it can take even if it were able to know when the refresh token would expire.
    1. But what about a Refresh Token flow? When using a refresh token, confidential clients also have to authenticate. Public clients, such as browser-based applications, do not authenticate during the Refresh Token flow. So in a typical frontend application, refresh tokens issued to frontend web applications are bearer tokens.   In practice, this means that if an attacker manages to steal a refresh token from a frontend application, they can use that token in a Refresh Token flow. To counter such attacks, the OAuth 2.0 specifications mandate that browser-based applications apply a security measure known as refresh token rotation.
    1. you can use a Backend for Frontend (BFF)

      first sighting: Backend for Frontend

    2. Grant Types
    3. For example, if I make an application (Client) that allows a user (Resource Owner) to make notes and save them as a repo in their GitHub account (Resource Server), then my application will need to access their GitHub data. It's not secure for the user to directly supply their GitHub username and password to my application and grant full access to the entire account. Instead, using OAuth 2.0, they can go through an authorization flow that will grant limited access to some resources based on a scope, and I will never have access to any other data or their password.
    1. Until now, we had a lot of code. Although we were using a plugin to help with boilerplate code, ready endpoints, and webpages for sign in/sign up management, a lot of adaptations were necessary. This is when Doorkeeper comes to the rescue. It is not only an OAuth 2 provider for Rails but also a full OAuth 2 suite for Ruby and related frameworks (Sinatra, Devise, MongoDB, support for JWT, and more).
    2. The process used to create an OAuth wrapper client is very simple.
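
      A minimal sketch of such a wrapper against a Doorkeeper provider, using the oauth2 gem (the provider URL, credentials, and redirect URI are placeholders):

      ```
      require 'oauth2'

      client = OAuth2::Client.new(
        ENV['CLIENT_ID'],
        ENV['CLIENT_SECRET'],
        site: 'https://provider.example.com'
      )

      # 1. Send the user to the provider's authorization endpoint.
      client.auth_code.authorize_url(redirect_uri: 'https://app.example.com/callback')

      # 2. In the OAuth callback action, exchange the returned code for an access token...
      token = client.auth_code.get_token(params[:code], redirect_uri: 'https://app.example.com/callback')

      # 3. ...and call the provider's API with it.
      token.get('/api/v1/me').parsed
      ```
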
    1. This document defines how a JWT Bearer Token can be used to request an access token when a client wishes to utilize an existing trust relationship, expressed through the semantics of the JWT, without a direct user-approval step at the authorization server.

      [transfer fo trust/credentials]

    1. Can I try the endpoints before I implement my application?
    2. If the Client is a Single-Page App (SPA), an application running in a browser using a scripting language like JavaScript, there are two grant options: the Authorization Code Flow with Proof Key for Code Exchange (PKCE) and the Implicit Flow with Form Post. For most cases, we recommend using the Authorization Code Flow with PKCE because the Access Token is not exposed on the client side, and this flow can return Refresh Tokens.
    3. Is the Client absolutely trusted with user credentials?
    4. Which OAuth 2.0 Flow Should I Use?
    5. If the Client is a regular web app executing on a server, then the Authorization Code Flow is the flow you should use. Using this the Client can retrieve an Access Token and, optionally, a Refresh Token.
    6. The first decision point is about whether the party that requires access to resources is a machine. In the case of machine-to-machine authorization, the Client is also the Resource Owner, so no end-user authorization is needed.
    1. you will need to add a few more braces

      If "brace" means {}, then this is incorrect. A few more parentheses, (), is correct.

    1. The Console now supports redeclaration of const variables across separate REPL scripts (such as when you run a statement in the Console), in addition to the existing let and class redeclarations. This support allows you to experiment with different declarations for const variables without refreshing the page. Previously, DevTools threw a syntax error if you redeclared a const binding.

      Edge version of this matching release note from the matching Chrome feature:

      https://hyp.is/d9XEKGfOEe2a27vFWUjjSA/developer.chrome.com/blog/new-in-devtools-92/

      Interesting, they're copying some content, but not all of it verbatim.

    1. The Console now supports redeclaration of const statement, in addition to the existing let and class redeclarations. The inability to redeclare was a common annoyance for web developers who use the Console to experiment with new JavaScript code.
    1. Note that strings here are encoded as UTF-8, unlike the usual JavaScript UTF-16 strings.
    1. binary string (i.e., a string in which each character in the string is treated as a byte of binary data)
    2. convert the string such that each 16-bit unit occupies only one byte

      What is a 16-bit "unit"?

      How can a 16-bit unit fit in 8 bits (1 byte)?

    3. The btoa() function takes a JavaScript string as a parameter. In JavaScript strings are represented using the UTF-16 character encoding: in this encoding, strings are represented as a sequence of 16-bit (2 byte) units. Every ASCII character fits into the first byte of one of these units, but many other characters don't. Base64, by design, expects binary data as its input. In terms of JavaScript strings, this means strings in which each character occupies only one byte. So if you pass a string into btoa() containing characters that occupy more than one byte, you will get an error, because this is not considered binary data:
    4. If you need to encode Unicode text as ASCII using btoa(), one option is to convert the string such that each 16-bit unit occupies only one byte.
    1. Honestly, at this point, I don't even know what tools I'm using, and which is responsible for what feature. Diving into the code of capybara and cucumber yields hundreds of lines of metaprogramming magic that somehow accretes into a testing framework. It's really making me loathe TDD despite my previous youthful enthusiasm.

      opinion: too much metaprogramming magic

      I'm not so sure it's "too much" though... Any framework or large software project is going to feel that way to a newcomer looking at the code, due to the number of layers of abstractions, etc. that eventually were added/needed by the maintainers to make it maintainable, decoupled, etc.

    2. Wow, the man himself.
    1. Please refer to the help center for possible explanations why a question might be removed.

      Why not just show the page and let people see the content and decide for themselves if it's helpful? (Could also show the moderation outcome there, with the reason.)

    2. This question was removed from Stack Overflow for reasons of moderation.
    1. So far for the obligatory warning. I get the point, I even agree with the argument, but I still want to send a POST request. Maybe you are testing an API without a user interface or you are writing router tests? Is it really impossible to simulate a POST request with Capybara? Nah, of course not!
    2. The Capybara Ruby gem doesn’t support POST requests, the built-in visit method always uses GET. This is by design and with good reason: Capybara is built for acceptance testing and a user would never ask to ‘post’ parameter X and Y to the application. There will always be some kind of interface, a form for example. It makes more sense to simulate what the visitor would really do
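
      One commonly used workaround, assuming the default Rack::Test driver (JS drivers have no equivalent, and the exact entry point can differ between Capybara versions, so treat this as a sketch):

      ```
      it 'accepts a direct POST' do
        # Drop down to the underlying Rack::Test session behind Capybara.
        page.driver.browser.post '/api/widgets', { name: 'sprocket' }

        expect(page.status_code).to eq(201)
      end
      ```
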
    1. module InjectSession
         include Warden::Test::Helpers

         def inject_session(hash)
           Warden.on_next_request do |proxy|
             hash.each do |key, value|
               proxy.raw_session[key] = value
             end
           end
         end
       end
    1. This site is making use of some basic analytics cookies so I can hopefully learn something from the metrics. By using the site you are totally fine with that.
    2. And with diagrams as text close to the code, chances are they will be kept up to date (and created to begin with…).
    3. There IS a super nice Visual Studio Code plugin for it, but it still comes with the environment pre-reqs as well as the cumbersome pressing of Alt-D (or was it Ctrl-D??) to get the preview. For other IDEs I bet there are similar plugins leaving the same nasty taste of dissatisfaction… and the pollution of your environment.
    1. You may want to change the controllers to your custom controllers with:

       Rails.application.routes.draw do
         use_doorkeeper do
           # it accepts :authorizations, :tokens, :token_info, :applications and :authorized_applications
           controllers :applications => 'custom_applications'
         end
       end
    2. If you want to extend the default behaviour, just inherit from Doorkeeper::ApplicationsController. For example:

       class CustomApplicationsController < Doorkeeper::ApplicationsController
       end
    1. This is ugly by design, as an inducement to test properties instead of specifics.
    2. So transcriptor aims to do less, and impose the bare minimum of cognitive load needed to convert a REPL interaction into a test. The entire API is four functions:
    3. Testing frameworks often introduce their own abstractions for e.g. evaluation order, data validation, reporting, scope, code reuse, state, and lifecycle. In my experience, these abstractions are always needlessly different from (and inferior to) related abstractions provided by the language itself.
    1. Check the "Auto-open DevTools for popups".

      Without this feature, when a pop-up opens without DevTools open, if it redirects, it will be too late to open DevTools and see the redirect logged...

      There is still a problem though: If the pop-up window closes, so does its DevTools. So you can't see logs or network logs (redirects) that happened right before it closed...

    1. I just assumed that nesting/inheriting settings would be a thing because of course it would
    2. git_workspace/
       ├── .vscode
       │   └── settings.json   # global settings, my preferred ones
       ├── my-personal-projects/
       │   └── project1/
       │       └── .git/
       └── company-projects/
           ├── .vscode
           │   └── settings.json   # local settings, overrides some of my personal ones
           ├── project2/
           │   └── .git/
           └── project3/
               └── .git/
    1. As you note, Activity diagrams inherently can include concurrency and timing. If you look at this example cribbed from Wikipedia, shown below, you can observe the section with two heavy horizontal bars, and two parallel activities of "present idea" and "record idea". That is read as "start these activities in parallel, and continue only when both are complete." Flowcharts can't express this within the notation. Practically, using activity diagrams lets you think clearly about concurrent processes. I think you'll find that anyone who can read a flowchart will quickly adapt.
    2. Activity diagrams spread confusion by their very name; there must be a reason why nobody understands them and people keep asking similar questions.
    3. It might seem like a preference, but if we have a standardized language for describing software systems, why do we use something else? This can lead to a bad habit of overusing flowcharts. Activity diagrams are really simple. But if you decide to describe a more complicated aspect of the system or try to change the part you are describing, you might have to switch anyway. So just use UML and prevent confusion in the future.
    4. Assuming a standard is better just because the standard says so is like that old while(1) infinite loop: it is better not to enter it.
    1. to set up
    2. It is handy to manually generate the diagram from time to time using the previously created command: npm run db:diagram:generate. Though, getting the diagram to update itself automatically without developer interaction would ensure that the diagram is never obsolete. There are several ways of doing this. You could use a pre-commit git hook or, even better, simply configure your CI/CD pipeline(s) to run the npm script whenever something gets merged into the main branch 🙂
    3. When it comes to including a diagram describing your SQL database somewhere in your documentation, you often end up with a recurring problem: after a few days / weeks / months, the diagram you made becomes obsolete.
    1. It would be nice if we could get some official word on whether this repository is affected by the catastrophic CVE-2021-44228 that is currently affecting a considerable percentage of software around the globe. From my limited understanding and looking at the refreshingly concise list of dependencies in the pom.xml, I would think this project is not affected, but I and probably others who are not familiar with the project's internals would appreciate an official word.
    2. I understand that typically, it wouldn't make much sense to comment on every CVE that doesn't affect a product, but considering the severity and pervasiveness of this particular issue, maybe an exception is warranted.
    1. I agree that these fields should be whitelisted by ActiveAdmin automatically as it generates them via the form helpers. Regardless of whether you use :raise or :log, you wouldn't usually want these causing unnecessary noise.