162 Matching Annotations
  1. Last 7 days
  2. Jun 2021
  3. May 2021
  4. Mar 2021
    1. Your validation functions should also treat undefined and '' as the same. This is not too difficult since both undefined and '' are falsy in javascript. So a "required" validation rule would just be error = value ? undefined : 'Required'.
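
      A minimal sketch of that style of rule (field-level validators, as in Formik or React Final Form; composeValidators is an illustrative helper). Note that 0 and false are also falsy, so number or checkbox fields would need a different rule:

      // Treats undefined, null and '' the same, because all of them are falsy.
      const required = value => (value ? undefined : 'Required');

      // Rules compose by returning the first error, or undefined when all pass.
      const composeValidators = (...validators) => value =>
        validators.reduce((error, validator) => error || validator(value), undefined);
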
    1. Using these attributes will show validation errors, or limit what the user can enter into an <input>.
    2. The HTML5 form validation techniques in this post only work on the front end. Someone could turn off JavaScript and still submit jank data to a form with the tightest JS form validation. To be clear, you should still do validation on the server.
    3. With these JavaScript techniques, the display of server validation errors could be a lot simpler if you expect most of your users to have JS enabled. For example, Rails still encourages you to dump all validation errors at the top of a form, which is lulzy in this age of touchy UX. But you could do that minimal thing with server errors, then rely on HTML5 validation to provide a good user experience for the vast majority of your users.
    1. Website: <input type="url" name="website" required pattern="https?://.+"> Now our input box will only accept text starting with http:// or https://, followed by at least one additional character.
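
      The same constraints are exposed to scripts through the Constraint Validation API, so you can surface the browser's own error text however you like (a small sketch reusing the website input above):

      const input = document.querySelector('input[name="website"]');

      input.addEventListener('blur', () => {
        if (!input.checkValidity()) {
          // validationMessage holds the browser's built-in text for the failed constraint
          console.log(input.validationMessage);
        }
      });
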
    1. Therefore client side validation should always be treated as a progressive enhancement to the user experience; all forms should be usable even if client side validation is not present.
    2. It's important to remember that even with these new APIs, client side validation does not remove the need for server side validation. Malicious users can easily work around any client side constraints, and HTTP requests don't have to originate from a browser.
    3. Since you have to have server side validation anyway, if you simply have your server side code return reasonable error messages and display them to the end user, you have a built-in fallback for browsers that don't support any form of client side validation.
    1. Responders don't use valid? to check for errors in models to figure out if the request was successful or not, and rely on your controllers to call save or create to trigger the validations.
  5. Feb 2021
    1. It is based on the idea that each validation is encapsulated by a simple, stateless predicate that receives some input and returns either true or false.
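
      The same idea in a short JavaScript sketch (names are illustrative): each predicate is a pure function from input to boolean, and a rule is just a list of predicates.

      // Stateless predicates: input in, true/false out, no side effects.
      const filled  = value => value !== undefined && value !== null && value !== '';
      const minSize = min => value => String(value).length >= min;

      // A rule is just a list of predicates that must all pass.
      const passwordRule = [filled, minSize(8)];
      const isValidPassword = value => passwordRule.every(predicate => predicate(value));
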
    2. URI::MailTo::EMAIL_REGEXP

      First time I've seen someone create a validator by simply matching against URI::MailTo::EMAIL_REGEXP from std lib. More often you see people copying and pasting some really long regex that they don't understand and is probably not loose enough. It's much better, though, to simply reuse a standard one from a library — by reference, rather than copying and pasting!!

    1. ActiveInteraction type checks your inputs. Often you'll want more than that. For instance, you may want an input to be a string with at least one non-whitespace character. Instead of writing your own validation for that, you can use validations from ActiveModel. These validations aren't provided by ActiveInteraction. They're from ActiveModel. You can also use any custom validations you wrote yourself in your interactions.
    2. Note that it's perfectly fine to add errors during execution. Not all errors have to come from type checking or validation.
    1. with ActiveForm-Rails, validation is the responsibility of the form and not of the models. There is no need to synchronize errors from the form to the models and vice versa.

      But if you intend to save to a model after the form validates, then you can't escape the models' validations:

      either you check that the models pass their own validations ahead of time (like I want to do, and I think @mattheworiordan was wanting to do), or you have to accept that one of the following outcomes is possible/inevitable if the models' own validations fail:

      1. if you use object.save then it may silently fail to save
      2. if you use object.save then it will fail to save and raise an error

      Are either of those outcomes acceptable to you? To me, they seem not to be. Hence we must also check for / handle the models' validations. Hence we need a way to aggregate errors from both the form object (context-specific validations) and from the models (unconditional/invariant validations that should always be checked by the model), and present them to the user.

      What do you guys find to be the best way to accomplish that?

      I am interested to know what best practices you use / still use today after all these years. I keep finding myself running into this same problem/need, which is how I ended up looking for what the current options are for form objects today...

    2. I agree with your concern. I really prefer to do this:

      form.assign_attributes(hash)
      if form.valid?
        my_service.update(form)
        # render something
      else
        # render something else
      end

      It looks more like a normal controller.
    3. My only concern with this approach is that if someone calls #valid? on the form object afterwards, it would currently, under the hood, delete the existing errors on the form object and revalidate. This could have unexpected side effects, where the errors added by the models passed in, or by the service called, would be lost.
    4. My concern with this approach is still that it's somewhat brittle with the current implementation of valid?: whilst valid? appears to be a predicate and should have no side effects, this is not the case, and it could remove the errors applied by one of the steps above.
    1. Any attribute in the list will be allowed, and any defined as attr_{accessor,reader,writer} will not be populated when passed in as params. This means we no longer need to use strong_params in the controllers because the form has a clear definition of what it expects and protects us by design.

      strong params not needed since form object handles that responsibility.

      That's the same opinion Nick took in Reform...

    1. If you include ActiveModel::Validations you can write the same validators as you would with ActiveRecord. However, in this case, our form is just a collection of Contact objects, which are ActiveRecord models and have their own validations. When I save the ContactListForm, it attempts to save all the contacts. In doing so, each contact has its error_messages available.
  6. Jan 2021
    1. Finally, through its reference to “the accumulated evidence,” the definition in the Standards emphasizes that obtaining validity evidence is a process rather than a single study from which a dichotomous “valid/not valid” decision is made.


  7. Oct 2020
    1. we update the validation schema on the fly (we had a similar case with a validation that needs to be included whenever some fetch operation was completed)
    2. Final Form makes the assumption that your validation functions are "pure" or "idempotent", i.e. will always return the same result when given the same values. This is why it doesn't run the synchronous validation again (just to double check) before allowing the submission: because it's already stored the results of the last time it ran it.
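
      In practice that means a validator must depend only on the values it receives; anything time- or request-dependent breaks the assumption (a sketch, with Date.now() as the counter-example):

      // Pure: same values in, same result out, so the cached result stays valid.
      const validate = values => (values.email ? {} : { email: 'Required' });

      // Not pure: the result can differ between calls for identical values,
      // so a cached "no errors" result may no longer hold at submit time.
      const impureValidate = values =>
        Date.now() % 2 === 0 ? {} : { email: 'Try again' };
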
    1. export const validationSchema = {
        field: {
          account: [Validators.required.validator, iban.validator, ibanBlackList],
          name: [Validators.required.validator],
          integerAmount: [
      

      Able to update this schema on the fly, with:

        React.useEffect(() => {
          getDisabledCountryIBANCollection().then(countries => {
            const newValidationSchema = {
              ...validationSchema,
              field: {
                ...validationSchema.field,
                account: [
                  ...validationSchema.field.account,
                  {
                    validator: countryBlackList,
                    customArgs: {
                      countries,
                    },
                  },
                ],
              },
            };
      
            formValidation.updateValidationSchema(newValidationSchema);
          });
        }, []);
      
    2. Meat:

      validate={values => formValidation.validateForm(values)}
      
    1. Form Validation
    2. return { type: "COUNTRY_BLACK_LIST", succeeded, message: succeeded ? "" : "This country is not available" }
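
      A sketch of the custom validator this return statement presumably belongs to, assuming Fonk's custom-validator contract (the validator receives { value, customArgs } and returns a { type, succeeded, message } result); the country-prefix check is an illustrative assumption:

      export const countryBlackList = ({ value, customArgs }) => {
        // Assumption: the blacklist holds two-letter country codes and the value is an IBAN.
        const countryCode = String(value || '').slice(0, 2);
        const succeeded = !customArgs.countries.includes(countryCode);
        return {
          type: "COUNTRY_BLACK_LIST",
          succeeded,
          message: succeeded ? "" : "This country is not available",
        };
      };
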
    3. Validation Schema: A Form Validation Schema allows you to synthesize all the form validations (a list of validators per form field) into a single object definition. Using this approach you can easily check which validations apply to a given form without having to dig into the UI code.
    4. It is easily extensible (already implemented Final Form and Formik plugin extensions).
    5. Form validation can get complex (synchronous validations, asynchronous validations, record validations, field validations, internationalization, schema definitions...). To cope with these challenges we will leverage Fonk and the Fonk Final Form adaptor for seamless integration with React Final Form.
    6. Just let the user fill in some fields, submit it to the server, and if there are any errors, notify them and let the user start over again. Is that a good approach? The answer is no: you don't want users to get frustrated waiting for a server round trip to get some form validation result.
    1. Add new plugin Recaptcha3Token that sends the reCaptcha v3 token to the back-end when the form is valid
    2. All validators can be used independently. Inspired by the functional programming paradigm, all built-in validators are just functions.

      I'm glad you can use it independently like:

      FormValidation.validators.creditCard().validate({
      

      because sometimes you don't have a formElement available like in their "main" (?) API examples:

      FormValidation.formValidation(formElement
      
    1. Knight, S. R., Ho, A., Pius, R., Buchan, I., Carson, G., Drake, T. M., Dunning, J., Fairfield, C. J., Gamble, C., Green, C. A., Gupta, R., Halpin, S., Hardwick, H. E., Holden, K. A., Horby, P. W., Jackson, C., Mclean, K. A., Merson, L., Nguyen-Van-Tam, J. S., … Harrison, E. M. (2020). Risk stratification of patients admitted to hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: Development and validation of the 4C Mortality Score. BMJ, 370. https://doi.org/10.1136/bmj.m3339

  8. Sep 2020
    1. We must always return at least some validation rule. So first off, if value !== undefined then we'll return our previous validation schema. If it is undefined then we'll use yup.mixed().notRequired(), which will just inform yup that nothing is required at the optionalObject level.

      optionalObject: yup.lazy(value => {
        if (value !== undefined) {
          return yup.object().shape({
            otherData: yup.string().required(),
          });
        }
        return yup.mixed().notRequired();
      }),

    1. Mark the schema as required. All field values apart from undefined and null meet this requirement.
    2. The same as the mixed() schema required, except that empty strings are also considered 'missing' values.
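
      A quick illustration of the difference with yup's synchronous check:

      import * as yup from 'yup';

      yup.mixed().required().isValidSync('');   // true:  '' is a value, just not undefined/null
      yup.string().required().isValidSync('');  // false: for strings, '' also counts as missing
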
    1. Form validation is hard. That's why there are so many different form handling libraries for the popular web frameworks. It's usually not something that is built in, because everyone has a different need and there is no one-size-fits-all solution.
  9. Aug 2020
    1. Triggers error messages to render after a field is touched and blurred (focused out of). This is useful for text fields which might start out erroneous but end up valid in the end (e.g. email or zipcode). In these cases you don't want to rush to show the user a validation error message when they haven't had a chance to finish their entry.
    2. Triggers error messages to show up as soon as the value of a field changes. Useful for when the user needs instant feedback from the form validation (e.g. password creation rules, non-text-based inputs like selects, switches, etc.)
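
      Either policy reduces to a small decision about when to surface an existing error, e.g. with Final-Form-style field meta (touched and dirty are the usual flags; this is only a sketch):

      // "Blur" policy: only show the error once the user has left the field.
      const showErrorOnBlur = meta => meta.touched && Boolean(meta.error);

      // "Change" policy: show the error as soon as the value has been edited.
      const showErrorOnChange = meta => meta.dirty && Boolean(meta.error);
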
    1. The bindings are two-way because any HTML5 constraint validation errors will be added to the Final Form state, and any field-level validation errors from Final Form will be set into the HTML5 validity.customError state.
    1. So when we ask users to answer questions that deal with the future, we have to keep in mind the context in which they’re answering. They can tell us about a feature they think will make their lives better, but user validation will always be necessary to make sure that the past user’s beliefs about the future user are accurate.


    1. It's worth pointing out that filenames can contain a newline character on many *nix systems. You're unlikely to ever run into this in the wild, but if you're running shell commands on untrusted input, this could be a concern.
  10. Jul 2020
    1. To verify that your structured data is correct, many platforms provide validation tools. In this tutorial, we'll validate our structured data with the Google Structured Data Validation Tool.
    1. Meyer, B., Torriani, G., Yerly, S., Mazza, L., Calame, A., Arm-Vernez, I., Zimmer, G., Agoritsas, T., Stirnemann, J., Spechbach, H., Guessous, I., Stringhini, S., Pugin, J., Roux-Lombard, P., Fontao, L., Siegrist, C.-A., Eckerle, I., Vuilleumier, N., & Kaiser, L. (2020). Validation of a commercially available SARS-CoV-2 serological immunoassay. Clinical Microbiology and Infection, 0(0). https://doi.org/10.1016/j.cmi.2020.06.024

    1. There's not a way to do this. What you could do instead is use Cloud Functions HTTP triggers as an API for writing data. It could check the conditions you want, then return a response that indicates what's wrong with the data the client is trying to write. I understand this is far from ideal, but it might be the best option you have right now

      it's definitely far from ideal :(

  11. May 2020
    1. (Thus, for these curves, the cofactor is always h = 1.)

      This means there is no need to check if the point is in the correct subgroup.

  12. Apr 2020
    1. As mentioned in StateMachines::Machine#state, you can define behaviors, like validations, that only execute for certain states. One important caveat here is that, due to a constraint in ActiveRecord's validation framework, custom validators will not work as expected when defined to run in multiple states.
    1. 1- Validation: you “validate”, i.e. deem valid or invalid, data at input time. For instance, if asked for a zipcode and the user enters “zzz43”, that’s invalid. At this point, you can reject or… sanitize. 2- Sanitization: you make data “sane” before storing it. For instance, if you want a zipcode, you can remove any character that’s not [0-9]. 3- Escaping: at output time, you ensure data printed will never corrupt display and/or be used in an evil way (escaping HTML, etc.).
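
      The three steps for the zipcode example, as a short JavaScript sketch (the 5-digit format and the simplified escapeHtml are illustrative assumptions):

      // 1. Validation: accept or reject at input time.
      const isValidZip = value => /^[0-9]{5}$/.test(value);

      // 2. Sanitization: make the data "sane" before storing it.
      const sanitizeZip = value => value.replace(/[^0-9]/g, '');

      // 3. Escaping: at output time, keep the data from corrupting the page.
      const escapeHtml = value =>
        String(value)
          .replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;');
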
    2. This style of validation most closely follows WordPress’ whitelist philosophy: only allow the user to input what you’re expecting.
    1. What Is Input Validation and Sanitization? Validation checks if the input meets a set of criteria (such as a string contains no standalone single quotation marks). Sanitization modifies the input to ensure that it is valid (such as doubling single quotes).
    1. Having visibility to the prevalence means, for example, you might outright block every password that's appeared 100 times or more and force the user to choose another one (there are 1,858,690 of those in the data set), strongly recommend they choose a different password where it's appeared between 20 and 99 times (there's a further 9,985,150 of those), and merely flag the record if it's in the source data less than 20 times.
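
      The policy being described, as a tiny sketch (the prevalence count would come from the breached-password data set or the Pwned Passwords API):

      // Decide what to do with a password based on how often it appears in the breach data.
      const passwordPolicy = prevalence => {
        if (prevalence >= 100) return 'block';      // force the user to choose another password
        if (prevalence >= 20)  return 'discourage'; // strongly recommend a different one
        if (prevalence > 0)    return 'flag';       // merely flag the record
        return 'allow';
      };
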
    1. Validators, like all attribute extensions, are only called by normal userland code; they are not issued when the ORM is populating the object
  13. Mar 2020
    1. Designers using these curves should be aware that for each public key, there are several publicly computable public keys that are equivalent to it, i.e., they produce the same shared secrets. Thus using a public key as an identifier and knowledge of a shared secret as proof of ownership (without including the public keys in the key derivation) might lead to subtle vulnerabilities.
    2. Protocol designers using Diffie-Hellman over the curves defined in this document must not assume "contributory behaviour". Specifically, contributory behaviour means that both parties' private keys contribute to the resulting shared key. Since curve25519 and curve448 have cofactors of 8 and 4 (respectively), an input point of small order will eliminate any contribution from the other party's private key. This situation can be detected by checking for the all-zero output, which implementations MAY do, as specified in Section 6. However, a large number of existing implementations do not do this.
    3. The check for the all-zero value results from the fact that the X25519 function produces that value if it operates on an input corresponding to a point with small order, where the order divides the cofactor of the curve (see Section 7).
    4. Both MAY check, without leaking extra information about the value of K, whether K is the all-zero value and abort if so (see below).
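
      A sketch of that optional check, assuming a hypothetical x25519(privateKey, peerPublicKey) function that returns the raw 32-byte shared secret as a Uint8Array; a real implementation should also do the comparison in constant time, per the "without leaking extra information about the value of K" wording:

      const isAllZero = bytes => bytes.every(b => b === 0); // not constant time; illustration only

      function sharedSecretOrAbort(privateKey, peerPublicKey) {
        const k = x25519(privateKey, peerPublicKey); // hypothetical X25519 scalar multiplication
        if (isAllZero(k)) {
          // A small-order input point forces the all-zero output (RFC 7748, Sections 6 and 7).
          throw new Error('small-order point: aborting key agreement');
        }
        return k;
      }
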
    1. n

      n is the order of the subgroup and n is prime

    2. an ECC key-establishment scheme requires the use of public keys that are affine elliptic-curve points chosen from a specific cyclic subgroup with prime order n

      n is the order of the subgroup and n is prime

    3. 5.6.2.3.3 ECC Full Public-Key Validation Routine
    4. The recipient performs a successful full public-key validation of the received public key (see Section 5.6.2.3.1 for FFC domain parameters and Section 5.6.2.3.3 for ECC domain parameters).
    5. Assurance of public-key validity – assurance that the public key of the other party (i.e., the claimed owner of the public key) has the (unique) correct representation for a non-identity element of the correct cryptographic subgroup, as determined by the
    1. 5.6.2.3.2 ECC Full Public-Key Validation Routine
    2. The recipient performs a successful full public-key validation of the received public key (see Sections 5.6.2.3.1 and 5.6.2.3.2).
    3. Assurance of public-key validity – assurance that the public key of the other party (i.e., the claimed owner of the public key) has the (unique) correct representation for a non-identity element of the correct cryptographic subgroup, as determined by the domain parameters (see Sections 5.6.2.2.1 and 5.6.2.2.2). This assurance is required for both static and ephemeral public keys.
    1. Misusing public keys as secrets: It might be tempting to use a pattern with a pre-message public key and assume that a successful handshake implies the other party's knowledge of the public key. Unfortunately, this is not the case, since setting public keys to invalid values might cause predictable DH output. For example, a Noise_NK_25519 initiator might send an invalid ephemeral public key to cause a known DH output of all zeros, despite not knowing the responder's static public key. If the parties want to authenticate with a shared secret, it should be used as a PSK.
    2. Channel binding: Depending on the DH functions, it might be possible for a malicious party to engage in multiple sessions that derive the same shared secret key by setting public keys to invalid values that cause predictable DH output (as in the previous bullet). It might also be possible to set public keys to equivalent values that cause the same DH output for different inputs. This is why a higher-level protocol should use the handshake hash (h) for a unique channel binding, instead of ck, as explained in Section 11.2.
    3. The public_key either encodes some value which is a generator in a large prime-order group (which value may have multiple equivalent encodings), or is an invalid value. Implementations must handle invalid public keys either by returning some output which is purely a function of the public key and does not depend on the private key, or by signaling an error to the caller. The DH function may define more specific rules for handling invalid values.
    1. WireGuard excludes zero Diffie-Hellman shared secrets to avoid points of small order, while Noise recommends not to perform this check.
    1. This check strikes a delicate balance: It checks Y sufficiently to prevent forgery of a (Y, Y^x) pair without knowledge of X, but the rejected values for X are unlikely to be hit by an attacker flipping ciphertext bits in the least-significant portion of X. Stricter checking could easily *WEAKEN* security, e.g. the NIST-mandated subgroup check would provide an oracle on whether a tampered X was square or nonsquare.
    2. X25519 is very close to this ideal, with the exception that public keys have easily-computed equivalent values. (Preventing equivalent values would require a different and more costly check. Instead, protocols should "bind" the exact public keys by MAC'ing them or hashing them into the session key.)
    3. Curve25519 key generation uses scalar multiplication with a private key "clamped" so that it will always produce a valid public key, regardless of RNG behavior.
    4. * Valid points have equivalent "invalid" representations, due to the cofactor, masking of the high bit, and (in a few cases) unreduced coordinates.
    5. With all the talk of "validation", the reader of JP's essay is likely to think this check is equivalent to "full validation" (e.g. [SP80056A]), where only valid public keys are accepted (i.e. public keys which uniquely encode a generator of the correct subgroup).
    6. (1) The proposed check has the goal of blacklisting a few input values. It's nowhere near full validation, does not match existing standards for ECDH input validation, and is not even applied to the input.
    1. If Alice generates all-zero prekeys and identity key, and pushes them to the Signal’s servers, then all the peers who initiate a new session with Alice will encrypt their first message with the same key, derived from all-zero shared secrets—essentially, the first message will be in the clear for an eavesdropper.