23 Matching Annotations
- Dec 2022
-
learn.microsoft.com
-
Submit false positives (good email that was blocked or sent to junk folder) and false negatives (unwanted email or phish that was delivered to the inbox)
They got it correct. Good!
-
- Nov 2022
-
unix.stackexchange.com
-
Grepping has the problem of "false positives": the output of pip list | grep NAME would match any module whose name contains "NAME", e.g. it would also match "some_other_NAME", while pip3 show MODULENAME only matches on complete names.
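A minimal Python sketch of the difference described here, using hypothetical package names:

```python
# Substring matching (a la `pip list | grep NAME`) yields false positives;
# an exact lookup (a la `pip show NAME`) does not. Names are hypothetical.
installed = ["requests", "requests-oauthlib", "types-requests"]

query = "requests"

grep_style = [name for name in installed if query in name]   # substring match
exact_style = [name for name in installed if name == query]  # complete match

print(grep_style)   # ['requests', 'requests-oauthlib', 'types-requests']
print(exact_style)  # ['requests']
```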
-
-
github.com
-
I don't see why this is a false positive
-
-
github.com
-
Expected behavior: this code should be ignored. Actual behavior: this code is flagged.
This issue is a correct usage of "false positive".
-
- Oct 2022
-
github.com
-
That's actually a false negative if it doesn't trigger the cop but should; a false positive is when it does trigger the cop/test/fire alarm/etc. but should not.
-
- Jun 2021
-
docs.gitlab.com
-
Controller specs should not be used to write N+1 tests as the controller is only initialized once per example. This could lead to false successes where subsequent “requests” could have queries reduced (e.g. because of memoization).
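A hedged sketch in plain Python (not GitLab's actual RSpec setup) of how memoization can hide an N+1 pattern from a second "request" against the same, already-initialized object:

```python
from functools import lru_cache

query_count = 0

@lru_cache(maxsize=None)
def load_author(post_id):
    global query_count
    query_count += 1                 # stands in for one SQL query
    return f"author-of-{post_id}"

def handle_request(post_ids):
    return [load_author(pid) for pid in post_ids]

handle_request([1, 2, 3])            # first request: 3 queries (N+1 pattern)
first_pass = query_count

handle_request([1, 2, 3])            # second request: fully memoized
second_pass = query_count - first_pass

print(first_pass, second_pass)       # 3 0 -> asserting on the second pass
                                     # would report a false success
```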
-
- May 2021
-
htmlpreview.github.io
-
These checks can have both false positives and false negatives.
-
- May 2020
- Apr 2020
-
en.wikipedia.org
-
While risking false positives, Bloom filters have a substantial space advantage over other data structures for representing sets
-
More generally, fewer than 10 bits per element are required for a 1% false positive probability, independent of the size or number of elements in the set.
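A quick check of the standard sizing formulas behind that claim:

```python
import math

# For a target false positive probability p, an optimally sized Bloom
# filter uses -ln(p) / (ln 2)^2 bits per element, independent of set size.
p = 0.01
bits_per_element = -math.log(p) / math.log(2) ** 2
optimal_hash_count = bits_per_element * math.log(2)

print(f"{bits_per_element:.2f} bits per element")   # ~9.59, i.e. under 10
print(f"{optimal_hash_count:.2f} hash functions")   # ~6.64, round to 7
```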
-
-
github.com
-
There's a tradeoff to be made between the false positive rate, the number of passwords checked, and the amount of disk/network bandwidth used.
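An illustrative sketch of that tradeoff, assuming a hypothetical set of 500 million breached passwords (not a figure from the thread):

```python
import math

# Bloom filter size on disk at several false positive rates: lowering the
# rate costs more bits per password, i.e. more disk/network bandwidth.
n_passwords = 500_000_000

for p in (0.1, 0.01, 0.001):
    total_bits = n_passwords * -math.log(p) / math.log(2) ** 2
    print(f"p = {p:<5} -> {total_bits / 8 / 1024**2:,.0f} MiB on disk")
```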
-
-
github.com
-
flag a false positive ("ham" is not-spam)
-
- Mar 2020
-
tonydye.typepad.com
-
The absolutely worst thing that can happen in your anti-spam solution is to block a good email and not let anybody know about it! Anti-spam solutions should always generate an NDR such that a legitimate sender can know their message didn't get through. (Of course, we know many legitimate users don't read nor understand NDRs, so there's still an issue) A really good anti-spam solution should not only generate an NDR, but that NDR should have an "escape clause" in it that gives that legitimate user a special way to get through the anti-spam solution, if they take some reasonable steps.
-
-
-
The more data you send with each comment check, the better chance Akismet has of avoiding missed spam and false positives.
They avoid saying "false negatives" and call it "missed spam" instead... okay.
-
- Jan 2020
-
github.com
-
Hypothesis considers a "close enough match" to be a success and does NOT place the corresponding annotation in the Orphan category, thus creating a false positive.
-
- Nov 2019
-
trackmenot.io
-
we believe that search companies may need to go to considerable trouble to separate user-generated from TMN-generated searches. Further, any such filtering efforts are likely to contain some number of false positives.
-
-
kentcdodds.com
-
Because they're more integrated and try to serialize an incomplete system (e.g. one with some kind of side effects: from browser/library/runtime versions to environment to database/API changes), they will tend to have high false-negatives (failing test for which the production code is actually fine and the test just needs to be changed). False negatives quickly erode the team's trust in a test to actually find bugs and instead come to be seen as a chore on a checklist they need to satisfy before they can move on to the next thing.
-
-
kentcdodds.com
-
I could rename toggle to handleButtonClick (and update the corresponding onClick reference). My test breaks despite this being a refactor.
-
So finally I'm coming out with it and explaining why I never use shallow rendering and why I think nobody else should either. Here's my main assertion: With shallow rendering, I can refactor my component's implementation and my tests break. With shallow rendering, I can break my application and my tests say everything's still working. This is highly concerning to me because not only does it make testing frustrating, but it also lulls you into a false sense of security. The reason I write tests is to be confident that my application works and there are far better ways to do that than shallow rendering.
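A hedged, plain-Python analogue of that assertion (the article's own examples are React components):

```python
# A test pinned to an implementation detail breaks under a pure refactor,
# while a behavior test exercising the public interface survives it.
class ToggleButton:
    def __init__(self):
        self.on = False

    def click(self):             # public interface: what a user triggers
        self._toggle()

    def _toggle(self):           # internal detail; renaming it to
        self.on = not self.on    # _handle_button_click is a pure refactor

# implementation-detail test: breaks on the rename even though nothing
# user-visible changed
assert hasattr(ToggleButton(), "_toggle")

# behavior test: still passes after the rename, because it only goes
# through the public interface
button = ToggleButton()
button.click()
assert button.on is True
```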
-
-
github.com
-
kentcdodds.com
-
Can break when you refactor application code. False negatives
-
This is what's called a false negative. It means that we got a test failure, but it was because of a broken test, not broken app code.
Actually, this is a false positive (also known as a false alarm): it indicates the presence of a condition (something is wrong with the behavior of the code), when in fact it is not present (nothing is wrong with the behavior).
Unless you define the condition as "everything is fine", but that is not usually how these terms are used.
Read https://en.wikipedia.org/wiki/False_positives_and_false_negatives.
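A small sketch of the standard terms, taking the detected condition to be "the code's behavior is broken" (per the Wikipedia article above):

```python
# Confusion-matrix terms for a test suite, under the standard convention
# that a "positive" result means the test fired (i.e. it failed).
outcomes = {
    ("code broken", "test fails"):  "true positive",
    ("code broken", "test passes"): "false negative (missed bug)",
    ("code fine",   "test fails"):  "false positive (false alarm)",
    ("code fine",   "test passes"): "true negative",
}

for (reality, result), term in outcomes.items():
    print(f"{reality:11} + {result:11} -> {term}")
```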
-
Why is testing implementation details bad? There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives
-