…d chunk counter (0.6.4). Previously a single counter tracked both the rate position and the absolute chunk number; after a permutation it was reset to 0, so the last-chunk check `chunk_num + 1 == num_elements` could misfire. The fix splits the counter into `rate_idx` and `chunk_num`, and uses `rate_idx > 0` to absorb outstanding elements. Regression: add/keep the 85-byte test case to prevent regressions. Refs: rp64_256 last-chunk padding bug.
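A minimal sketch of the two-counter absorption described above, under stated assumptions: the state width, rate, padding rule, and the `permute` stand-in are all illustrative placeholders, not the actual rp64_256 implementation.

```rust
// Toy sponge absorption illustrating the split-counter fix.
// RATE, the state width, and the permutation are placeholders.
const RATE: usize = 8;
const STATE_WIDTH: usize = 12;

fn permute(state: &mut [u64; STATE_WIDTH]) {
    // Stand-in for the real Rescue permutation.
    for s in state.iter_mut() {
        *s = s.wrapping_mul(3).wrapping_add(1);
    }
}

fn absorb(elements: &[u64]) -> [u64; STATE_WIDTH] {
    let mut state = [0u64; STATE_WIDTH];
    let mut rate_idx = 0; // position within the rate; reset after each permutation
    let mut chunk_num = 0; // absolute chunk counter; never reset

    for &e in elements {
        state[rate_idx] = state[rate_idx].wrapping_add(e); // toy "absorb"
        rate_idx += 1;
        if rate_idx == RATE {
            permute(&mut state);
            rate_idx = 0;
            chunk_num += 1;
        }
    }
    debug_assert!(chunk_num <= elements.len());

    // With one shared counter, the post-permutation reset to 0 broke the
    // final-chunk check; `rate_idx > 0` reliably detects outstanding
    // elements that still need padding and one more permutation.
    if rate_idx > 0 {
        state[rate_idx] = state[rate_idx].wrapping_add(1); // toy padding marker
        permute(&mut state);
    }
    state
}

fn main() {
    // An input that is not a multiple of RATE exercises the `rate_idx > 0` path.
    let out = absorb(&[1, 2, 3]);
    assert_ne!(out, [0u64; STATE_WIDTH]);
    println!("{:?}", &out[..4]);
}
```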
Repository: facebook/winterfell. Description: A STARK prover and verifier for arbitrary computations Stars: 885, Forks: 223. Primary language: Rust. Languages: Rust (99.7%), HTML (0.1%), Makefile (0.1%), Shell (0%). License: MIT. Latest release: v0.13.1 (7mo ago). Open PRs: 16, open issues: 47. Last activity: 7mo ago. Community health: 75%. Top contributors: irakliyk, Nashtare, Al-Kindi-0, 0xkanekiken, plafer, hackaugusto, grjte, andrewmilson, Jasleen1, Fumuran and others.
Addresses #9. Based on the recent work. The most noticeable features are:

1. We take into account the degree of the extension field when randomizing.
2. We add randomization of the quotient segment polynomials.
3. We add a random codeword to the DEEP composition polynomial, as done in the Aurora paper.
4. In addition to salting the vector commitment, we also salt the Fiat-Shamir transcript, as done in the specification of the BCS transform.

This is ready for full review, but I am putting it in draft mode because the current solution for generating randomness for zero-knowledge is not clean. More specifically:

1. For salted Merkle trees, should we also use a PRNG?
2. For the PRNG used in the prover, I am not happy with how it is currently implemented. I think having it should be somewhat optional, but I couldn't come up with a good way to do that.

There are also some unnecessary allocations, but I can remove those once we agree on the general structure.
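A minimal sketch of the salted Fiat-Shamir idea from point 4, assuming a toy transcript: the `Transcript` type, its methods, and the use of `DefaultHasher` are all hypothetical stand-ins, not the winterfell prover's actual channel or hash function.

```rust
// Toy salted Fiat-Shamir transcript. A real implementation would use a
// cryptographic hash; DefaultHasher is only for illustration.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Transcript {
    state: u64,
}

impl Transcript {
    fn new() -> Self {
        Transcript { state: 0 }
    }

    // Absorb a message together with a fresh salt, so the derived challenge
    // does not depend solely on the committed data (the zero-knowledge aim
    // of salting the Fiat-Shamir transform in the BCS transform).
    fn absorb_salted(&mut self, message: &[u8], salt: &[u8]) {
        let mut h = DefaultHasher::new();
        self.state.hash(&mut h);
        message.hash(&mut h);
        salt.hash(&mut h); // the salt distinguishes this from plain Fiat-Shamir
        self.state = h.finish();
    }

    fn challenge(&self) -> u64 {
        self.state
    }
}

fn main() {
    let mut t1 = Transcript::new();
    t1.absorb_salted(b"commitment", b"salt-a");
    let mut t2 = Transcript::new();
    t2.absorb_salted(b"commitment", b"salt-b");
    // The same message with different salts yields different challenges.
    assert_ne!(t1.challenge(), t2.challenge());
    println!("ok");
}
```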
Problem

The current digest trait has a hardcoded 32-byte output constraint. This makes it impossible to properly implement hashers with different output sizes. For example, SHA-512 produces a 64-byte digest, but the trait forces truncation to 32 bytes. Truncated SHA-512 (the first 32 bytes of its output) is not the same as SHA-512/256, which uses different initialization vectors, as specified in FIPS 180-4. This creates a confusing situation where implementers must choose between:

1. Implementing with incorrect/truncated output
2. Not implementing at all for 64-byte hashers

Proposed Solution

Add a const generic parameter to the trait. This would allow:

- 32 bytes as the default for existing 32-byte hashers (backward compatible)
- 64 bytes for SHA-512 and other 64-byte hashers
- Future flexibility for other digest sizes

Context

This issue was raised while implementing SHA-512 support in 0xMiden/crypto#692. cc @huitseeker
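A minimal sketch of what the const-generic version could look like. The trait shape and the digest type names below are illustrative assumptions, not the actual winter-crypto API.

```rust
// Hypothetical const-generic digest trait; the default parameter keeps
// existing 32-byte implementations source-compatible.
pub trait Digest<const N: usize = 32> {
    fn as_bytes(&self) -> [u8; N];
}

// An existing 32-byte hasher keeps working via the default parameter.
struct Blake3Digest([u8; 32]);
impl Digest for Blake3Digest {
    fn as_bytes(&self) -> [u8; 32] {
        self.0
    }
}

// A 64-byte hasher can now expose its full output instead of truncating.
struct Sha512Digest([u8; 64]);
impl Digest<64> for Sha512Digest {
    fn as_bytes(&self) -> [u8; 64] {
        self.0
    }
}

fn main() {
    let d = Sha512Digest([7u8; 64]);
    assert_eq!(d.as_bytes().len(), 64);
    let b = Blake3Digest([1u8; 32]);
    assert_eq!(b.as_bytes().len(), 32);
    println!("ok");
}
```

One design note: because `N` defaults to 32, downstream code that writes `impl Digest for MyDigest` continues to compile unchanged, which is what makes the proposal backward compatible.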