80 Matching Annotations
  1. Last 7 days
  2. May 2020
    1. (Thus, for these curves, the cofactor is always h = 1.)

      This means there is no need to check if the point is in the correct subgroup.

  3. Apr 2020
    1. Note that by definition of the corruption query, C_i ⊆ C_{i+1}.

      I think this means that no notion of “post-compromise security” is modeled.

    1. find u′ ≤ n suchthat defined(y[u′], u1[u′], ..., um[u′]) ∧ u1[u′] = u1 ∧ ... ∧ um[u′] = um then c〈y[u′]

      If the same indices are queried again, return the same y.
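
      A minimal sketch (Python, not CryptoVerif syntax) of what this models: the oracle behaves like a lazily sampled random function, returning a memoized value whenever the same index tuple has been defined before.

      ```python
      import secrets

      # Memoized lazy sampling, in the role of `find ... suchthat defined(...)`.
      # `table` plays the role of the arrays y[u'], u1[u'], ..., um[u'].
      table: dict[tuple, bytes] = {}

      def oracle(*indices: int) -> bytes:
          if indices in table:            # same indices queried before: reuse the stored y
              return table[indices]
          y = secrets.token_bytes(32)     # otherwise sample a fresh random y
          table[indices] = y
          return y
      ```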

    2. Q | R_x ≈ Q | R′_x

      When proving a “query secret k” for Q, CryptoVerif proves the indistinguishability of Q|R_x and Q|R'_x. One could imagine that CryptoVerif appends the oracle R_x to Q by parallel composition.

    1. Finally, the four main operators must necessarily be involved in the tracing process.

      How? By providing free internet access? For the tracing itself, I do not think the help of network operators is needed.

    2. could then be notified by SMS

      SMS, not a push notification via the app? The app uses an internet connection anyway, to download and/or upload ephemeral IDs.

    3. artificial intelligence (AI) applications,

      It is new to me that StopCovid uses AI.

    1. The probability that the authenticated encryption function ever will be invoked with the same IV and the same key on two (or more) distinct sets of input data shall be no greater than 2^-32.
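
      This 2^-32 budget is what yields the familiar limit of about 2^32 invocations per key when 96-bit IVs are generated at random (cf. the GCM guidance in NIST SP 800-38D); a rough birthday-bound check, with q denoting the number of invocations:

      ```python
      # Rough birthday bound: the probability of an IV collision among q random
      # 96-bit IVs under one key is about q*(q-1)/2^97.
      q = 2**32
      p_collision = q * (q - 1) / 2**97
      assert p_collision < 2**-32       # q = 2^32 stays within the 2^-32 budget
      ```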
    1. learn

      typo: learns

    2. s

      low priority: bytes vs byte is not consistent throughout the document

    3. The health authority needs to know who is at risk so that they can notify them.

      Given the description of the system so far, at-risk individuals are informed by their app; no interaction with the health authority seems to be necessary for this notification. In contrast, the app needs to notify the health authority. Maybe this could be clarified?

    4. PRG( PRF(SK_t, “broadcast key”) )

      Could this also be HKDF-Expand? Its maximum output length is 255·HashLen; with SHA-256 that is 8160 bytes, enough for 510 16-byte chunks (one changing roughly every 2.8 minutes during one day).

      PRG(PRF()) is fine too; the construction with “broadcast key” just reminded me of the context (“info”) argument of HKDF-Expand. HKDF-Expand could reduce the needed cryptographic primitives to just a hash function.
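
      A minimal sketch of this alternative, assuming the pyca/cryptography package; names like SK_t, EPHID_LEN and the epoch count are illustrative choices, not from the document:

      ```python
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.kdf.hkdf import HKDFExpand

      EPHID_LEN = 16
      N_EPOCHS = 510                      # 255 * HashLen / EPHID_LEN for SHA-256

      def ephids_for_day(sk_t: bytes) -> list[bytes]:
          # "broadcast key" plays the role of the HKDF-Expand info/context field
          stream = HKDFExpand(
              algorithm=hashes.SHA256(),
              length=N_EPOCHS * EPHID_LEN,
              info=b"broadcast key",
          ).derive(sk_t)
          return [stream[i:i + EPHID_LEN] for i in range(0, len(stream), EPHID_LEN)]
      ```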

    5. It suffices with adding the target EphIDs to the list of observed events prior to uploading it to the backend.

      As a mitigation, maybe a potentially at-risk user could confirm that their app saw the infected person's EphID?

    6. In the decentralized system

      Why is this only possible in the decentralized system?

    7. The smartphone then relays this at-risk status to the health authority so that the health authority can contact the phone’s user.

      OK, here this becomes clearer.

    8. as if it was not roaming

      I found this hard to understand. Does this mean “as if it was not abroad”?

    9. After reporting their current SK_t, the smartphone of the infected patient picks a new completely random key.

      Nice, I was about to ask about that. This probably only makes sense from an epidemiological point of view if the app user is then sent to quarantine/confinement, has to wear a mask, etc. Admittedly, this is the choice of the health authority, but if it is an assumption of the system, it might be worth mentioning.

    10. be able learn the

      “be able to learn”

    1. HMAC is used with all hash functions instead of allowing hashes to use a more specialized function (e.g. keyed BLAKE2), because: HKDF requires the use of HMAC

      This does not comment on the choice of HKDF over specialized hash-function modes that are designed to be a KDF (as BLAKE3 seems to offer). The comment “HMAC applies nested hashing to process each input. This "extra" hashing might mitigate the impact of hash function weakness.” applies at the level of HKDF, too.
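
      For reference, the HKDF/HMAC layering mentioned here, written out as a minimal RFC 5869 sketch built from nothing but HMAC-SHA-256 (the nested hashing happens inside each hmac.new call):

      ```python
      import hashlib, hmac

      def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int) -> bytes:
          # Extract: PRK = HMAC-Hash(salt, IKM); an empty salt defaults to HashLen zero bytes
          prk = hmac.new(salt or bytes(32), ikm, hashlib.sha256).digest()
          # Expand: T(i) = HMAC-Hash(PRK, T(i-1) || info || i)
          okm, block = b"", b""
          for counter in range(1, -(-length // 32) + 1):
              block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
              okm += block
          return okm[:length]
      ```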

    2. SHA3 candidates such as Keccak and BLAKE were required to be suitable with HMAC
  4. Mar 2020
    1. In the case of password-based KDFs, a main goal is to slow down dictionary attacks using two ingredients: a salt value, and the intentional slowing of the key derivation computation. HKDF naturally accommodates the use of salt; however, a slowing down mechanism is not part of this specification.
    2. Ideally, the salt value is a random (or pseudorandom) string of the length HashLen. Yet, even a salt value of less quality (shorter in size or with limited entropy) may still make a significant contribution to the security of the output keying material;
    3. (and adding 'info' as an input to the extract step is not advisable -- see [HKDF-paper]).
    1. Definition 2.1 (Generic PRF-ODH assumption)
    2. The strong DH assumption (StDH) demands that the adversary solves the computational problem of computing g^(uv) from g^u, g^v, but having access to a decisional oracle DDH(g^u, ·, ·) checking for DH tuples.
    3. the strong Diffie–Hellman (StDH), or the even more general Gap-Diffie–Hellman (GapDH) problem
    4. While we show that even the strongest variant is achievable in the random oracle model under the strong Diffie–Hellman assumption, we provide a negative result showing that it is implausible to instantiate even the weaker variants in the standard model via algebraic black-box reductions to common cryptographic problems.
    1. This is acceptable because the standard security levels are primarily driven by much simpler, symmetric primitives where the security level naturally falls on a power of two. For asymmetric primitives, rigidly adhering to a power-of-two security level would require compromises in other parts of the design, which we reject.
    2. Designers using these curves should be aware that for each public key, there are several publicly computable public keys that are equivalent to it, i.e., they produce the same shared secrets. Thus using a public key as an identifier and knowledge of a shared secret as proof of ownership (without including the public keys in the key derivation) might lead to subtle vulnerabilities.
    3. Protocol designers using Diffie-Hellman over the curves defined in this document must not assume "contributory behaviour". Specially, contributory behaviour means that both parties' private keys contribute to the resulting shared key. Since curve25519 and curve448 have cofactors of 8 and 4 (respectively), an input point of small order will eliminate any contribution from the other party's private key. This situation can be detected by checking for the all-zero output, which implementations MAY do, as specified in Section 6. However, a large number of existing implementations do not do this.
    4. The check for the all-zero value results from the fact that the X25519 function produces that value if it operates on an input corresponding to a point with small order, where the order divides the cofactor of the curve (see Section 7).
    5. Both MAY check, without leaking extra information about the value of K, whether K is the all-zero value and abort if so (see below).
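
      A minimal sketch of that optional check, assuming the pyca/cryptography X25519 API (which, depending on the backend, may already reject all-zero outputs on its own); the constant-time comparison avoids leaking anything beyond the abort itself:

      ```python
      import hmac
      from cryptography.hazmat.primitives.asymmetric.x25519 import (
          X25519PrivateKey, X25519PublicKey,
      )

      def exchange_checked(sk: X25519PrivateKey, peer: X25519PublicKey) -> bytes:
          k = sk.exchange(peer)
          if hmac.compare_digest(k, bytes(32)):   # all-zero output => small-order peer point
              raise ValueError("small-order peer public key, aborting")
          return k

      # usage: with honestly generated keys the check never triggers
      alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
      shared = exchange_checked(alice, bob.public_key())
      ```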
    1. n

      n is the order of the subgroup and n is prime

    2. an ECC key-establishment scheme requires the use of public keys that are affine elliptic-curve points chosen from a specific cyclic subgroup with prime order n

      n is the order of the subgroup and n is prime

    3. 5.6.2.3.3 ECC Full Public-Key Validation Routine
    4. The recipient performs a successful full public-key validation of the received public key (see Sections 5.6.2.3.1 for FFC domain parameters and Section 5.6.2.3.3 for ECC domain parameters).
    5. Assurance of public-key validity –assurance that the public key of the other party (i.e., the claimed owner of the public key) has the (unique) correct representation for a non-identity element of the correct cryptographic subgroup, as determined by the
    1. 5.6.2.3.2 ECC Full Public-Key Validation Routine
    2. The recipient performs a successful full public-key validation of the received public key (see Sections 5.6.2.3.1 and 5.6.2.3.2).
    3. Assurance of public-key validity – assurance that the public key of the other party (i.e., the claimed owner of the public key) has the (unique) correct representation for a non-identity element of the correct cryptographic subgroup, as determined by the domain parameters (see Sections 5.6.2.2.1 and 5.6.2.2.2). This assurance is required for both static and ephemeral public keys.
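
      For concreteness, a compact sketch of the four steps of such a full public-key validation routine on a short-Weierstrass curve, shown with the NIST P-256 domain parameters (not constant-time, not a hardened implementation); the final order check is the step that partial validation omits:

      ```python
      # P-256 domain parameters
      p = 2**256 - 2**224 + 2**192 + 2**96 - 1
      a = p - 3
      b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
      n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
      G = (0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296,
           0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5)

      def _add(P1, P2):
          """Affine point addition; None represents the point at infinity."""
          if P1 is None:
              return P2
          if P2 is None:
              return P1
          (x1, y1), (x2, y2) = P1, P2
          if x1 == x2 and (y1 + y2) % p == 0:
              return None
          if P1 == P2:
              lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
          x3 = (lam * lam - x1 - x2) % p
          return (x3, (lam * (x1 - x3) - y1) % p)

      def _mul(k, P1):
          """Double-and-add scalar multiplication."""
          R = None
          while k:
              if k & 1:
                  R = _add(R, P1)
              P1 = _add(P1, P1)
              k >>= 1
          return R

      def full_validate(Q):
          if Q is None:                                   # 1. Q is not the identity
              return False
          x, y = Q
          if not (0 <= x < p and 0 <= y < p):             # 2. coordinates are in range
              return False
          if (y * y - (x * x * x + a * x + b)) % p != 0:  # 3. Q lies on the curve
              return False
          return _mul(n, Q) is None                       # 4. Q has order n (n*Q = identity)

      assert full_validate(G)
      ```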
    1. Misusing public keys as secrets: It might be tempting to use a pattern with a pre-message public key and assume that a successful handshake implies the other party's knowledge of the public key. Unfortunately, this is not the case, since setting public keys to invalid values might cause predictable DH output. For example, a Noise_NK_25519 initiator might send an invalid ephemeral public key to cause a known DH output of all zeros, despite not knowing the responder's static public key. If the parties want to authenticate with a shared secret, it should be used as a PSK.
    2. Channel binding: Depending on the DH functions, it might be possible for a malicious party to engage in multiple sessions that derive the same shared secret key by setting public keys to invalid values that cause predictable DH output (as in the previous bullet). It might also be possible to set public keys to equivalent values that cause the same DH output for different inputs. This is why a higher-level protocol should use the handshake hash (h) for a unique channel binding, instead of ck, as explained in Section 11.2.
    3. The public_key either encodes some value which is a generator in a large prime-order group (which value may have multiple equivalent encodings), or is an invalid value. Implementations must handle invalid public keys either by returning some output which is purely a function of the public key and does not depend on the private key, or by signaling an error to the caller. The DH function may define more specific rules for handling invalid values.
    1. This check strikes a delicate balance: It checks Y sufficiently to prevent forgery of a (Y, Y^x) pair without knowledge of X, but the rejected values for X are unlikely to be hit by an attacker flipping ciphertext bits in the least-significant portion of X. Stricter checking could easily *WEAKEN* security, e.g. the NIST-mandated subgroup check would provide an oracle on whether a tampered X was square or nonsquare.
    2. The client is relying on the server's unauthenticated DH public key Y to somehow authenticate the server's knowledge of X. Obviously, this is making an assumption about a DH that could be bad, thus is an unsafe protocol. This is Tor's (older) TAP circuit handshake (using regular DH, not ECDH). The original deployment was easily attacked by a fake server sending a public key Y = 0, 1, or -1, thus allowing the fake server to calculate Y^x without seeing X [TAP].
    3. Thus it's an incomplete fix, and the correct solution is binding the transcript.
    4. It's well-understood nowadays that channel binding must cover the session transcript.
    5. A safe DH protocol is easy to instantiate with a wide range of DH algorithms, and in many cases non-DH key agreements (e.g. post-quantum algorithms, and encryption like RSA).
    6. X25519 is very close to this ideal, with the exception that public keys have easily-computed equivalent values. (Preventing equivalent values would require a different and more costly check. Instead, protocols should "bind" the exact public keys by MAC'ing them or hashing them into the session key.)
    7. Curve25519 key generation uses scalar multiplication with a private key "clamped" so that it will always produce a valid public key, regardless of RNG behavior.
    8. * Valid points have equivalent "invalid" representations, due to the cofactor, masking of the high bit, and (in a few cases) unreduced coordinates.
    9. With all the talk of "validation", the reader of JP's essay is likely to think this check is equivalent to "full validation" (e.g. [SP80056A]), where only valid public keys are accepted (i.e. public keys which uniquely encode a generator of the correct subgroup).
    10. (1) The proposed check has the goal of blacklisting a few input values. It's nowhere near full validation, does not match existing standards for ECDH input validation, and is not even applied to the input.
    1. If Alice generates all-zero prekeys and identity key, and pushes them to the Signal’s servers, then all the peers who initiate a new session with Alice will encrypt their first message with the same key, derived from all-zero shared secrets—essentially, the first message will be in the clear for an eavesdropper.
    2. arguing that a zero check “adds complexity (const-time code, error-handling, and implementation variance), and is not needed in good protocols.”
    1. Figure 7: One-phase experiment

      I use this security notion for DHKEM

    2. We then consider the same question for key-encapsulation mechanisms (KEMs) and show that in this case the four notions are all equivalent.
    1. SHAKE256(96, μ ‖ SHAKE256(32, pk))

      This is the same G(m||H(pk)) construction as in Kyber.

    1. A similar statement holds for additionally hashing the ciphertext into the final key. Several protocols need to ensure that the key depends on the complete view of exchanged protocol messages. This is the case, for example, for the authenticated-key-exchange protocols described in the Kyber paper [22, Sec. 5]. Hashing the full protocol view (public key and ciphertext) into the final key already as part of the KEM makes it unnecessary (although of course still safe) to take care of these hashes on the higher protocol layer

      Here is their reasoning about why the public key and the ciphertext are used in the “context” of the key derivation.

    2. In an earlier version of Kyber we instantiated H, G, and PRF all with SHAKE-256. We decided to change this to different functions from the FIPS-202 family to avoid any domain-separation discussion.

      Random oracle cloning by using different hash functions.

    3. KDF(K̄ ‖ H(c))

      Here they put the ciphertext into the “context” as well, just like FrodoKEM.

    4. G(m ‖ H(pk))
      • this makes the shared secret dependent on the exact public key; good in case there would be equivalent public keys
      • this construction looks a bit similar to HMAC, but not exactly; how do they model it in the security proof? SHA-2 is vulnerable to length extension attacks!
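
      A minimal sketch of the symmetric part of these two constructions, assuming the round-2 instantiations (H = SHA3-256, G = SHA3-512, KDF = SHAKE-256) and omitting the actual lattice encryption of m into the ciphertext c:

      ```python
      import hashlib

      def kyber_style_key_schedule(m: bytes, pk: bytes, c: bytes, klen: int = 32) -> bytes:
          h_pk = hashlib.sha3_256(pk).digest()                 # H(pk)
          g_out = hashlib.sha3_512(m + h_pk).digest()          # G(m || H(pk))
          k_bar, r = g_out[:32], g_out[32:]                    # pre-key and encryption randomness
          h_c = hashlib.sha3_256(c).digest()                   # H(c)
          return hashlib.shake_256(k_bar + h_c).digest(klen)   # KDF(K̄ || H(c))
      ```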
    5. we instantiate KDF with SHAKE-256

      Interesting that the last derivation step is done with Keccak even in the 90s variant. Probably for security reasons. Todo: look at the proof to see if they assume RO for KDF.

    6. As an additional update for round 2, we also present a variant of Kyber that instead of relying on Keccak for all symmetric primitives, relies on AES and SHA-2.

      It could be interesting to compare this variant to DHKEM.

    1. These domain separators have bit patterns (0x5F = 01011111, 0x96 = 10010110) that were chosen to make it hard to use individual or consecutive bit flipping attacks to turn one into the other

      Interesting point when choosing prefixes. Sounds like this is meant to harden against fault injection.
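
      A quick check of the quoted claim: the two bytes differ in four scattered bit positions, so neither a single flip nor flipping one consecutive run of bits turns one into the other.

      ```python
      a, b = 0x5F, 0x96
      diff = a ^ b                                        # 0b11001001
      positions = [i for i in range(8) if diff >> i & 1]
      print(positions)                                    # [0, 3, 6, 7] -- four bits, not one run
      ```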

    2. FrodoKEM.Encaps: By the chain of highlighted variables, the shared secret seems to depend on the complete recipient public key, not only on parts of it such as seed_A or b.

    1. Cryptology ePrint Archive, Report 2001/108, 2001. <http://eprint.iacr.org>
    2. Set K = KDF(Z ‖ PEH, KeyLen)

      If not in SingleHashMode, the "context" includes the ephemeral public key. ECIES does not include the recipient's public key in the context, though.

    3. 8.1.1 Prefix-freeness property: Additionally, a key encapsulation mechanism must satisfy the following property. The set of all possible ciphertext outputs of the encryption algorithm should be a subset of a candidate set of octet strings (that may depend on the public key), such that the candidate set is prefix free and elements of the candidate set are easy to recognize (given either the public key or the private key).

      Is there a security implication here? Why don't we require this for HPKE? (I suspect this has been replaced by a more modern or just different definition by now.) (Edited to add: this might be related to the fact that the notion of AEAD seems to have appeared only later.)

    4. DEM.Encrypt(K, L, M)

      Here, in contrast to the DEM used in the Tag-KEM paper, we have a label (additional data).

    1. For instance, schemes that follow the CCA KEM/DEM framework are better suitable for streaming applications where the receiver does not need to buffer the entire ciphertext

      Would HPKE be used for streaming? I'd say rather export a key from HPKE and use it externally.

    2. Theorem 3.1 [Tag-KEM/DEM Composition Theorem] If the Tag-KEM is CCA secure and the DEM is one-time secure then the Hybrid PKE scheme in Section 3 is CCA secure. In particular, ε_pke ≤ 2·ε_tkem + ε_dem.

      Could we reuse this theorem? Or at least replicate this proof in CryptoVerif?

    3. Note that, in the above syntactic definition, τ is not included in ψ and explicitly given to TKEM.Dec

      If we change DHKEM to use a context when deriving zz, does this make DHKEM a Tag-KEM?
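
      A rough sketch of what this question seems to hint at, with X25519 from pyca/cryptography standing in for the group operation and SHAKE-256 standing in for the KDF (both are illustrative choices): the tag τ enters the key derivation but is not part of the encapsulation ψ, and it has to be handed explicitly to decapsulation.

      ```python
      import hashlib
      from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
      from cryptography.hazmat.primitives.asymmetric.x25519 import (
          X25519PrivateKey, X25519PublicKey,
      )

      def tkem_encap(pk_r: X25519PublicKey, tag: bytes) -> tuple[bytes, bytes]:
          sk_e = X25519PrivateKey.generate()
          enc = sk_e.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
          zz = sk_e.exchange(pk_r)
          key = hashlib.shake_256(zz + enc + tag).digest(32)
          return key, enc                          # ψ = enc; τ is not included in ψ

      def tkem_decap(sk_r: X25519PrivateKey, enc: bytes, tag: bytes) -> bytes:
          zz = sk_r.exchange(X25519PublicKey.from_public_bytes(enc))
          return hashlib.shake_256(zz + enc + tag).digest(32)
      ```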