abrown commented on issue #4778:
There are a couple of issues to resolve before merging this:
- someone (@alexcrichton?) had mentioned previously that it would be good to expand the number of V128 known values so that we can find bugs faster. There are a lot of known values that could be interesting, though: 6 kinds of V128 (i8x16, i16x8, ..., f64x2) x up to 8 values (NaN, Inf, -Inf, ..., 1, MAX). Any suggestions on a nice way to generate all of these? (One possible enumeration is sketched below.)
- when run on this branch, the `differential` fuzz target will eventually find NaN canonicalization issues for the V128 values--e.g., the spec interpreter will canonicalize them but the single-instruction generator does not know how to do this. I'm wondering how best to tackle this:
  a) try to add canonicalization to single-inst modules to match wasm-smith; I haven't looked into this much... does wasm-smith even canonicalize V128s?
  b) try to teach the `impl PartialEq for DiffValue` to ignore NaNs for V128s; this already happens for F32 and F64, but for V128 this is trickier because we don't know how to split up the lanes (is it an F32x4? No, an I8x16?) and there is actually no easy way to know this from the WebAssembly function signature itself--the information is lost. But perhaps we could first try `==`, then `F32x4` NaNs, then `F64x2` NaNs, then fail (something like that).

Thoughts on these two questions?
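One rough way to enumerate these values is the cross product of lane shape x interesting lane values. This is only a sketch, not the PR's code; the `splat` helper and the particular value lists are placeholders:

```rust
// Sketch: build "known" V128 values by splatting interesting lane values
// across every lane shape (i8x16 ... f64x2).
fn splat(lane_bits: u32, lane: u128) -> u128 {
    let mask = (1u128 << lane_bits) - 1;
    (0..128 / lane_bits).fold(0u128, |acc, i| acc | ((lane & mask) << (i * lane_bits)))
}

fn known_v128_values() -> Vec<u128> {
    let mut values = Vec::new();
    // Integer lane shapes: i8x16, i16x8, i32x4, i64x2.
    for bits in [8u32, 16, 32, 64] {
        let unsigned_max = (1u128 << bits) - 1;
        let signed_max = unsigned_max >> 1;
        let signed_min = signed_max + 1; // interpreted as unsigned bits
        for lane in [0, 1, unsigned_max, signed_max, signed_min] {
            values.push(splat(bits, lane));
        }
    }
    // Float lane shapes: f32x4 and f64x2 with NaN, +/-Inf, 0, 1, MAX lanes.
    for f in [f32::NAN, f32::INFINITY, f32::NEG_INFINITY, 0.0, 1.0, f32::MAX] {
        values.push(splat(32, f.to_bits() as u128));
    }
    for f in [f64::NAN, f64::INFINITY, f64::NEG_INFINITY, 0.0, 1.0, f64::MAX] {
        values.push(splat(64, f.to_bits() as u128));
    }
    values
}

fn main() {
    for v in known_v128_values() {
        println!("{v:#034x}");
    }
}
```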
jameysharp commented on issue #4778:
> - someone (@alexcrichton?) had mentioned previously that it would be good to expand the number of V128 known values so that we can find bugs faster. There are a lot of known values that could be interesting, though: 6 kinds of V128 (i8x16, i16x8, ..., f64x2) x up to 8 values (NaN, Inf, -Inf, ..., 1, MAX). Any suggestions on a nice way to generate all of these?

To make sure I'm looking at the right thing: you're talking about `DiffValue::arbitrary_of_type` in `crates/fuzzing/src/generators/value.rs`, right?

I'm thinking that `arbitrary_of_type` could first pick a vector kind (f32x4, etc.), then call itself once for each lane, requesting the right lane type. For example, 4 calls to `DiffValue::arbitrary_of_type(u, F32)`, then pack the bits from each call into a `u128`. Is that too simplistic? I guess it doesn't quite work for vectors with lanes narrower than 32 bits.
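A minimal sketch of that idea, using simplified stand-ins for the real `DiffValue` type and with the lane-generation logic stubbed out (only the pack-lanes-into-a-`u128` shape is the point here):

```rust
use arbitrary::{Result, Unstructured};

// Simplified stand-in for the real DiffValue in crates/fuzzing/src/generators/value.rs;
// the real enum has more variants (I32, I64, F32, F64, ...).
#[derive(Debug)]
enum DiffValue {
    V128(u128),
}

// Hypothetical lane shapes the generator could pick from.
enum VecKind {
    F32x4,
    F64x2,
}

// Pick a vector shape, then draw each lane independently (the way scalar F32/F64
// values are drawn today) and pack the lane bits into a u128.
fn arbitrary_v128(u: &mut Unstructured) -> Result<DiffValue> {
    let kind = if u.arbitrary::<bool>()? { VecKind::F32x4 } else { VecKind::F64x2 };
    let mut bits: u128 = 0;
    match kind {
        VecKind::F32x4 => {
            for lane in 0..4u32 {
                // Placeholder for the "interesting f32" selection logic.
                let f: u32 = u.arbitrary()?;
                bits |= (f as u128) << (lane * 32);
            }
        }
        VecKind::F64x2 => {
            for lane in 0..2u32 {
                let f: u64 = u.arbitrary()?;
                bits |= (f as u128) << (lane * 64);
            }
        }
    }
    Ok(DiffValue::V128(bits))
}

fn main() -> Result<()> {
    let raw = [0u8; 64];
    let mut u = Unstructured::new(&raw);
    println!("{:?}", arbitrary_v128(&mut u)?);
    Ok(())
}
```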
> - when run on this branch, the `differential` fuzz target will eventually find NaN canonicalization issues for the V128 values--e.g., the spec interpreter will canonicalize them but the single-instruction generator does not know how to do this. I'm wondering how best to tackle this:

The cranelift-fuzzgen target turns on a "nan_canonicalization" pass inside Cranelift. I see there's a Wasmtime config option, `config.cranelift_nan_canonicalization(true)`, that should do the same thing. Does that help here?
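For reference, a minimal sketch of turning that option on in an embedding, assuming a recent wasmtime API where `Config::cranelift_nan_canonicalization` and `Engine::new` look like this:

```rust
use anyhow::Result;
use wasmtime::{Config, Engine};

// Ask Wasmtime/Cranelift to canonicalize NaN results. Note this only affects
// modules compiled by Wasmtime itself, not other engines under differential test.
fn engine_with_canonical_nans() -> Result<Engine> {
    let mut config = Config::new();
    config.cranelift_nan_canonicalization(true);
    Engine::new(&config)
}

fn main() -> Result<()> {
    let _engine = engine_with_canonical_nans()?;
    Ok(())
}
```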
afonso360 commented on issue #4778:
> The cranelift-fuzzgen target turns on a "nan_canonicalization" pass inside Cranelift. I see there's a Wasmtime config option, `config.cranelift_nan_canonicalization(true)`, that should do the same thing. Does that help here?

If I understand this correctly, this fuzz target can fuzz against engines other than Wasmtime, right? They might not have an equivalent NaN canonicalization pass, so we would have to run an equivalent pass in wasm before sending the code to the engines.
fitzgen commented on issue #4778:
`wasm-smith` has an option to canonicalize NaNs today; not sure if it is complete for SIMD stuff or not, though: https://github.com/bytecodealliance/wasm-tools/blob/de47b042d93a6e28208e9a83c10304b3d64db625/crates/wasm-smith/src/core/code_builder.rs#L867
abrown commented on issue #4778:
Yeah, maybe we could update which instructions can have their outputs canonicalized in wasm-smith (thanks @fitzgen for the link, I was wondering about that), but the other big issue is that we would have to build similar canonicalization infrastructure in the single-instruction module generator for all the same instructions.

I was actually thinking through option (b) above, which is to implement fail-overs in `PartialEq for DiffValue`--why? The first N bits of an FP NaN are set: for f32, it's the first 10 bits (`0b0_11111111_1`... sign, exponent, first bit of mantissa)<sup>1</sup>. If we compare two V128s and they do not match because a single F32 lane contains a NaN with a differing payload, then the bits of those V128s can differ by up to 22 bits out of 128, or ~17%. If in fact the difference in the V128s was not due to NaN canonicalization, would we really want to take that 17% risk of ignoring a bug? So now I'm leaning towards canonicalizing certain instructions in the single-instruction generator, as done here.

<a name="nan">1</a>: as a reminder, after the first `1` bit in the mantissa, canonical NaNs are filled with 0s whereas arithmetic NaNs (which are the problem here) can contain any arbitrary bits. Also see the Wasm spec.
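To put numbers on that, here is a small standalone illustration (not code from this PR) of the canonical vs. arithmetic f32 NaN encodings and the 22-bit payload budget inside a 128-bit vector:

```rust
fn main() {
    // Canonical f32 NaN: sign 0, exponent all ones, only the top mantissa bit set.
    let canonical: u32 = 0b0_11111111_10000000000000000000000;
    // An arithmetic NaN: same 10-bit prefix, but the remaining 22 mantissa bits are arbitrary.
    let arithmetic: u32 = 0b0_11111111_10110101010110101011101;

    assert!(f32::from_bits(canonical).is_nan());
    assert!(f32::from_bits(arithmetic).is_nan());

    // Only the low 22 bits can differ between the two forms...
    let differing_bits = (canonical ^ arithmetic).count_ones();
    assert!(differing_bits <= 22);

    // ...which, for a single f32 lane inside a v128, is up to 22 / 128 ~= 17% of the value.
    println!("{differing_bits} of up to 22 payload bits differ ({:.1}% of a v128)", 22.0 / 128.0 * 100.0);
}
```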
jameysharp commented on issue #4778:
Let me see if I have this right: a wasm-level NaN canonicalization pass would append a type-specific fixed sequence of instructions immediately after every instruction that pushes a newly-computed floating-point value on the stack, and you can tell which sequence to use by looking only at the instruction, without any surrounding context.
That does sound better than ignoring possible-NaN differences at the end, after all type information is lost.
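For illustration only, the effect that such a per-instruction fixup needs to have, written as host-side Rust rather than the actual wasm instruction sequence (the quiet-NaN bit patterns are standard IEEE 754; everything else here is just a sketch):

```rust
// The real pass would emit an equivalent fixed wasm instruction sequence right
// after each float-producing instruction; this shows the intended semantics.
fn canonicalize_f32(x: f32) -> f32 {
    if x.is_nan() { f32::from_bits(0x7fc0_0000) } else { x }
}

fn canonicalize_f64(x: f64) -> f64 {
    if x.is_nan() { f64::from_bits(0x7ff8_0000_0000_0000) } else { x }
}

// For a v128 produced by an f32x4 instruction, canonicalize each lane.
fn canonicalize_f32x4(v: u128) -> u128 {
    let mut out = 0u128;
    for lane in 0..4u32 {
        let bits = ((v >> (lane * 32)) & 0xffff_ffff) as u32;
        let fixed = canonicalize_f32(f32::from_bits(bits)).to_bits();
        out |= (fixed as u128) << (lane * 32);
    }
    out
}

fn main() {
    let nan_lane = f32::from_bits(0x7fc0_1234); // arithmetic NaN with a junk payload
    let v = (nan_lane.to_bits() as u128) << 32 | 1.0f32.to_bits() as u128;
    println!("{:#034x}", canonicalize_f32x4(v));
    let _ = canonicalize_f64(f64::NAN);
}
```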
abrown commented on issue #4778:
Yeah, that's right.
alexcrichton commented on issue #4778:
I think it's ok to expand the set of interesting v128 values later, and I think it's ok to be a bit verbose and just list them all out rather than procedurally generating them (basically I don't have a better idea).
Otherwise, though, with the NaN issues I think the best solution is to add code to `wasm-smith` to do the same canonicalization that happens for f32 and f64. I think the sequences of instructions are basically going to be the same, just for vectors instead.
abrown commented on issue #4778:
> Seems like a good start to me! I personally think that the nan canonicalization code would go better in wasm-smith itself along the lines of f32/f64 canonicalization that's already in there. I'm ok landing this first though and updating that afterwards.

I don't get what you mean. How would that work?
alexcrichton commented on issue #4778:
Wasm-smith-based canonicalization is currently here. Actually, reading over that, it looks like there are already affordances for simd-based float operations. If your differential fuzzing here found issues with NaNs, though, then it may actually just be a case of extending the opcode listings there to canonicalize on a few more operations.
abrown commented on issue #4778:
Ah, no, the NaN canonicalization problem I observed was with a `SingleInstModule`, which `wasm-smith` does not generate. So my solution was to bring that same NaN canonicalization logic over from `wasm-smith` to `SingleInstModule`. For now, I have not observed a scalar failure, though it seems possible that one could happen some day (which is why I ported the scalar canonicalization as well); when it happens we can enable the same scalar instructions that you pointed to. So only SIMD canonicalization is actually enabled in c6bb6c0.

When I run this locally with V8 disabled (still haven't figured out what is going on with the segfaults there), I have not seen a failure yet:
$ ALLOWED_ENGINES=-v8 cargo +nightly fuzz run -s none differential
...
=== Execution rate (1152302 successes / 2313000 attempted modules): 49.82% ===
wasmi: 8.06%, spec: 8.46%, wasmtime: 83.48%, v8: 0.00%
wasm-smith: 72.32%, single-inst: 27.68%
I guess I'll merge it and we can see if anything is found by oss-fuzz.
alexcrichton commented on issue #4778:
Oh wow sorry I completely forgot that this isn't wasm-smith related at all. Unsure how I missed that for so long... Disregard me and this is definitely good to land :+1: