16 Matching Annotations
  1. May 2020
  2. Apr 2020
    1. While risking false positives, Bloom filters have a substantial space advantage over other data structures for representing sets
    2. More generally, fewer than 10 bits per element are required for a 1% false positive probability, independent of the size or number of elements in the set.
    1. There's a tradeoff to be made between the false positive rate, the number of passwords checked, and the amount of disk/network bandwidth used.
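The "fewer than 10 bits per element" figure follows from the standard Bloom filter sizing formula m/n = -ln(p) / (ln 2)^2, with optimal hash count k = (m/n) ln 2. A minimal sketch of the arithmetic (my own illustration, not code from the annotated page):

```python
import math

def bloom_parameters(n, p):
    """Optimal Bloom filter size m (in bits) and number of hash
    functions k, for n elements at false positive probability p."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = round((m / n) * math.log(2))
    return m, k

# At a 1% false positive rate, bits-per-element stays ~9.6
# regardless of how many elements the set holds:
for n in (1_000, 1_000_000):
    m, k = bloom_parameters(n, 0.01)
    print(f"n={n}: {m / n:.2f} bits/element, k={k}")
```

Note that the bits-per-element ratio depends only on p, which is why the quote can say it is independent of the size of the set.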
  3. Mar 2020
  4. tonydye.typepad.com
    1. The absolutely worst thing that can happen in your anti-spam solution is to block a good email and not let anybody know about it! Anti-spam solutions should always generate an NDR so that a legitimate sender can know their message didn't get through. (Of course, we know many legitimate users don't read or understand NDRs, so there's still an issue.) A really good anti-spam solution should not only generate an NDR, but that NDR should have an "escape clause" in it that gives that legitimate user a special way to get through the anti-spam solution, if they take some reasonable steps.
    1. The more data you send with each comment check, the better chance Akismet has of avoiding missed spam and false positives.

      They avoid saying "false negatives" and call it "missed spam" instead.... okay.

  5. Jan 2020
  6. Nov 2019
    1. we believe that search companies may need to go to considerable trouble to separate user-generated from TMN-generated searches. Further, any such filtering efforts are likely to contain some number of false positives.
    1. Because they're more integrated and try to serialize an incomplete system (e.g. one with some kind of side effects: from browser/library/runtime versions to environment to database/API changes), they will tend to have a high rate of false negatives (failing tests for which the production code is actually fine and the test just needs to be changed). False negatives quickly erode the team's trust in a test's ability to actually find bugs, and the tests instead come to be seen as a chore on a checklist to satisfy before moving on to the next thing.
    1. I could rename toggle to handleButtonClick (and update the corresponding onClick reference). My test breaks despite this being a refactor.
    2. So finally I'm coming out with it and explaining why I never use shallow rendering and why I think nobody else should either. Here's my main assertion: With shallow rendering, I can refactor my component's implementation and my tests break. With shallow rendering, I can break my application and my tests say everything's still working. This is highly concerning to me because not only does it make testing frustrating, but it also lulls you into a false sense of security. The reason I write tests is to be confident that my application works, and there are far better ways to do that than shallow rendering.
    1. Can break when you refactor application code. False negatives
    2. This is what's called a false negative. It means that we got a test failure, but it was because of a broken test, not broken app code.

      Actually, this is a false positive (also known as a false alarm): it indicates the presence of a condition (something is wrong with the behavior of the code), when in fact it is not present (nothing is wrong with the behavior).

      Unless you define the condition as "everything is fine", but that is not usually how these terms are used.

      Read https://en.wikipedia.org/wiki/False_positives_and_false_negatives.

    3. Why is testing implementation details bad? There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
      1. Can break when you refactor application code. False negatives
      2. May not fail when you break application code. False positives
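To make the terminology from the note above concrete: if the condition a test detects is "the code is broken", then a failing test over working code is a false positive (a false alarm), and a passing test over broken code is a false negative (a missed bug). A tiny illustrative sketch (the function name and labels are mine, not from any of the quoted articles):

```python
def classify(code_is_broken, test_failed):
    """Classify one test run, treating 'the code is broken'
    as the condition the test is meant to detect."""
    if test_failed and code_is_broken:
        return "true positive"    # real bug caught
    if test_failed and not code_is_broken:
        return "false positive"   # false alarm: broken test, working code
    if not test_failed and code_is_broken:
        return "false negative"   # missed bug
    return "true negative"        # passing test, working code

# A refactor that breaks only the test, not the behavior,
# is a false alarm under this convention:
print(classify(code_is_broken=False, test_failed=True))
```

Under the convention the quoted articles use (condition = "the tests pass"), the same labels come out swapped, which is the confusion the note points out.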