alexcrichton commented on issue #911:
Another fuzz bug today - testcase0.wasm.gz
alexcrichton labeled issue #911:
In reviewing some fuzz bugs we've accumulated a good number of test cases that unfortunately end up timing out. I believe that cranelift has a number of known issues around the speed of its compilation, particularly around register allocation. I figure it'd be good to collect a few concrete wasm files (discovered from fuzzing) which take an abnormally long time to compile relative to the size of the input.
It's worth noting that the timeout on the fuzzers is relatively high, I think something like 30 or 60 seconds. When fuzzing, though, binaries can be up to 50x slower, which means our time budget for passing the fuzzers is pretty small, generally less than 3 seconds I think (ish). It's also worth noting that the fuzzers are compiled with debug assertions enabled, which enables, well, debug assertions, but also the cranelift verifier pass. I've seen the verifier pass be quite expensive on some of the modules below, but in general even without the verifier pass these modules still take an abnormally long time to compile.
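One way to approximate that fuzzer configuration locally is a release build with debug assertions forced on (a sketch, and an assumption that this matches how the fuzzers end up flipping on the verifier; RUSTFLAGS here is standard rustc, nothing wasmtime-specific):
$ # release optimizations plus debug assertions (and thus the verifier)
$ RUSTFLAGS="-C debug-assertions=on" cargo build --release
$ time ./target/release/wasmtime --disable-cache ./foo2.wasm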
For the files below I'm testing with:
$ cargo build --release && time ./target/release/wasmtime --disable-cache ./foo2.wasm
Most of them fail to instantiate or run, but it's the compilation that largely matters here, all of which happens as part of the time invocation. Which is to say, you can generally ignore the errors from the CLI.
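As a convenience, a loop along these lines times each gathered test case in turn (a sketch; it assumes the attached .gz files have been downloaded into the current directory):
$ for f in ./*.wasm.gz; do
>   gunzip -kf "$f"          # keep the .gz around, overwrite any old output
>   echo "== ${f%.gz} =="
>   time ./target/release/wasmtime --disable-cache "${f%.gz}"
> done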
- file1.wasm[1].gz - takes 1s locally. Profiling shows lots of time in the register allocator (see the perf sketch after this list).
- file2.wasm[1].gz - this takes 300ms locally, but with the verifier and debug assertions enabled it takes about 1s. The time looks to be largely in the verifier/register allocator, as before.
- file3.wasm[1].gz - same as previous
- file4.wasm[1].gz - same as previous
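For the register-allocator attribution mentioned in the list above, a rough profiling recipe looks like this (standard Linux perf usage, nothing wasmtime-specific; --call-graph dwarf keeps the Rust frames readable):
$ perf record --call-graph dwarf ./target/release/wasmtime --disable-cache ./file1.wasm
$ perf report    # regalloc frames show up near the top of the profile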
I'm assuming that there's generally not a huge amount we can do about this. When cranelift is looking to benchmark new register allocator implementations, however, we can perhaps use the files here as test beds to see how things are improving?
In any case I figure it's good to start tracking files as we come across them if we can.
alexcrichton commented on issue #911:
Some more fuzz bugs:
alexcrichton commented on issue #911:
Here's another test case:
$ time RAYON_NUM_THREADS=1 cargo run --release -- compile --wasm-features multi-memory,memory64 testcase70.wasm --fuel 0 --epoch-interruption --interruptable
3.45s user 0.10s system 99% cpu 3.553 total
This one is a fuzz bug where lots of instrumentation was enabled (three separate ways of interrupting a module), which blows up compile time quite a lot. Simply adding fuel is enough to make the compile time somewhat excessive at 2 seconds, though, so there may be other cranelift issues at play here.
perf still shows everything in register allocation right now.
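To isolate what the instrumentation itself costs, one approach is to compile the same module with and without those options and compare (flags copied from the reproduction above; exact deltas will vary by machine):
$ time wasmtime compile --wasm-features multi-memory,memory64 testcase70.wasm
$ time wasmtime compile --wasm-features multi-memory,memory64 testcase70.wasm \
>     --fuel 0 --epoch-interruption --interruptable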
alexcrichton commented on issue #911:
Another fuzz bug: foo.wasm.gz
$ time RAYON_NUM_THREADS=1 perf record ./target/release/wasmtime compile foo.wasm --wasm-features memory64,multi-memory
2.05s user 0.15s system 70% cpu 3.121 total
(all in regalloc again)
alexcrichton commented on issue #911:
Another fuzz bug: testcase0.wasm.gz
$ time RAYON_NUM_THREADS=1 ./target/release/wasmtime compile --wasm-features multi-memory,memory64 testcase0.wasm
0.63s user 0.00s system 99% cpu 0.634 total
alexcrichton commented on issue #911:
Another: testcase0.wasm.gz
$ time RAYON_NUM_THREADS=1 ./target/release/wasmtime compile testcase0.wasm
1.91s user 0.00s system 99% cpu 1.911 total
alexcrichton commented on issue #911:
Another: testcase3.wasm.gz
$ time ./target/release/wasmtime compile testcase3.wasm
3.21s user 0.02s system 100% cpu 3.207 total
alexcrichton commented on issue #911:
Another: testcase0.wasm.gz
$ time RAYON_NUM_THREADS=1 wasmtime compile testcase0.wasm --wasm-features multi-memory,memory64
5.13s user 0.22s system 99% cpu 5.355 total
alexcrichton commented on issue #911:
$ time RAYON_NUM_THREADS=1 cargo run -q --release compile foo.wasm
1.58s user 0.04s system 99% cpu 1.629 total
alexcrichton commented on issue #911:
:warning: :warning: :warning: :warning: :warning:
$ time wasmtime compile testcase0.wasm
100.89s user 0.02s system 100% cpu 1:40.89 total
alexcrichton commented on issue #911:
$ time wasmtime compile testcase0.wasm
4.84s user 0.00s system 100% cpu 4.829 total
alexcrichton commented on issue #911:
I'm going to close this given the discussion on https://github.com/bytecodealliance/wasmtime/issues/4060: these sorts of outliers are expected, and eventually we'll want to tweak the fuzzers to not generate these patterns of code.
alexcrichton closed issue #911