Previously, we assumed the existence of an incomplete block at the
end of the input. However, the input may be an exact multiple of
the block size. In that case, the first argument of
process_final_incomplete_block() would point one past the last
element and the second argument would be zero. This is an
ill-defined call, and it triggers an assertion failure in std::vector:
Assertion '__builtin_expect(__n < this->size(), true)' failed.
This commit introduces a check: if the length of the final
incomplete block is zero, we call
process_final_incomplete_block(NULL, 0);
which immediately finalizes CubeHash without hashing any additional
data.
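The dispatch described above can be sketched roughly as follows. This is a hedged, self-contained illustration, not the project's actual code: finish_hashing is a hypothetical name, and process_final_incomplete_block is replaced with a stub that only records its arguments.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for the real finalizer, recording its
// arguments so the dispatch logic below can be demonstrated.
static const unsigned char *last_ptr;
static size_t last_n;
static void process_final_incomplete_block (const unsigned char *p, size_t n)
{
	last_ptr = p;
	last_n = n;
}

// Sketch of the fixed dispatch: when the input length is an exact
// multiple of the block size, finalize with (NULL, 0) instead of
// passing a pointer one past the last element.
static void finish_hashing (const unsigned char *data, size_t len,
                            size_t blocksize)
{
	size_t rem = len % blocksize;
	/* ...the len - rem complete blocks would be processed here... */
	if (rem == 0)
		process_final_incomplete_block (NULL, 0);
	else
		process_final_incomplete_block (data + len - rem, rem);
}
```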
Signed-off-by: Tom Li <tomli@tomli.me>
Currently, process_final_incomplete_block() performs the round R
calculation on the remaining data, then finalizes CubeHash. It is
not possible to finalize CubeHash when there is no incomplete block.
Here, we define the call process_final_incomplete_block(NULL, 0)
as a way to finalize CubeHash directly when the input is a multiple
of the block size and no data remains for round R.
Consequently, any call of process_final_incomplete_block() with
only one degenerate argument -- a NULL pointer with n != 0, or
n == 0 with a non-NULL pointer -- indicates a bug. We assert that
both arguments are zero or both are nonzero.
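The consistency check described above amounts to asserting that NULL-ness of the pointer matches zero-ness of the length. A minimal sketch, with a hypothetical helper name:

```cpp
#include <cassert>
#include <cstddef>

// Hedged sketch of the consistency check: a NULL pointer must come
// with n == 0 and vice versa; a mixed call (NULL with n > 0, or a
// valid pointer with n == 0) indicates a bug in the caller.
static bool args_consistent (const unsigned char *p, size_t n)
{
	return (p == NULL) == (n == 0);
}
```

In the real function this would simply be `assert ((p == NULL) == (n == 0));` at the top of process_final_incomplete_block().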
Signed-off-by: Tom Li <tomli@tomli.me>
In load_key_vector(), the program passes a std::vector<byte> to
a C-style function, load_key (const byte *begin, const byte *end),
by taking the addresses of its elements:
load_key (& (K[0]), & (K[K.size()]));
However, accessing the one-past-the-last element of a std::vector
via operator[] is not allowed in C++; it triggers an assertion failure:
Assertion '__builtin_expect(__n < this->size(), true)' failed.
In this commit, we use K.data() and K.data() + K.size() to obtain
the underlying pointers and pass them to the C function.
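The fix can be sketched as follows. This is a simplified illustration, assuming a stand-in load_key that just measures the range; the real loader does actual work, but the pointer arithmetic is the point here:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

typedef unsigned char byte;

// Stand-in for the real C-style loader; returns the range length.
static size_t load_key (const byte *begin, const byte *end)
{
	return end - begin;
}

// Sketch of the fix: expose the vector's storage with data()
// instead of taking the address of the out-of-bounds element
// K[K.size()]. Computing data() + size() is always well-defined,
// even for an empty vector, because nothing is dereferenced.
static size_t load_key_vector (const std::vector<byte> &K)
{
	return load_key (K.data(), K.data() + K.size());
}
```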
Signed-off-by: Tom Li <tomli@tomli.me>
Well, there is a reason that test vectors are published on Wikipedia.
Although this looks scary (like writing past array bounds), the cubehash B
parameter is in all cases smaller than 63 (the first B value for which
this would write past the array), so no harm is done. For a similar reason,
the "misimplemented" cubehash was cryptographically sound (i.e. without
cryptographic weakness), only implemented differently, producing results
different from those prescribed by the standard.
Practical implications of changing the hash functions are:
- everyone gets a new KeyID
- FMTSeq keys that used cubehash are now invalid; users are forced to
  generate new ones
The `online' modification of the unsatisfied-equation counts caused an
increased rate of decoding failures (verified experimentally). Use the
variant that does not modify the counts until the next round.
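The difference between the two variants can be sketched as follows. This is a hedged illustration of the round structure only (bitflip_round is a hypothetical name; threshold selection and count maintenance are elided): with counts frozen during the scan, we first decide which bits to flip, then apply all flips at once, instead of updating the counts as each bit is flipped.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the "delayed" bit-flip variant: scan with the
// unsatisfied-equation counts frozen, remember which bits exceed
// the threshold, and flip them only after the scan finishes.
static void bitflip_round (std::vector<int> &bits,
                           const std::vector<int> &unsat_counts,
                           int threshold)
{
	std::vector<size_t> to_flip;
	for (size_t i = 0; i < bits.size(); ++i)
		if (unsat_counts[i] >= threshold)
			to_flip.push_back (i);

	// counts stay untouched until the whole round is decided
	for (size_t j = 0; j < to_flip.size(); ++j)
		bits[to_flip[j]] ^= 1;

	/* ...counts would be recomputed for the next round here... */
}
```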
This replaces the periodic recalculation of the error correlations and
the syndrome with in-place modification. Each bit flip is therefore
slightly slower, but overall decoding of the 256-bit secure variant fits
in 200ms, and the 128-bit variant decodes in under 20ms.
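The in-place syndrome update rests on linearity over GF(2): since s = H*e, flipping error bit j is equivalent to XORing column j of H into s, so no full recomputation is needed. A minimal sketch with a dense column representation (the real code presumably uses a sparser one; flip_bit and H_cols are hypothetical names):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the in-place update: flipping error bit j XORs
// column j of the parity-check matrix H into the syndrome,
// because the syndrome is linear in the error vector over GF(2).
static void flip_bit (std::vector<int> &e,
                      std::vector<int> &syndrome,
                      const std::vector<std::vector<int> > &H_cols,
                      size_t j)
{
	e[j] ^= 1;
	const std::vector<int> &col = H_cols[j];
	for (size_t r = 0; r < col.size(); ++r)
		syndrome[r] ^= col[r];
}
```

Flipping the same bit twice restores both the error vector and the syndrome, which is a convenient sanity check on the update.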
There could still be some (blatantly nondeterministic) method to do this
using FFT; research is underway.