Stream: git-wasmtime

Topic: wasmtime / PR #4543 cranelift-wasm: Only allocate if vect...


view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 20:46):

jameysharp opened PR #4543 from lazy-bitcast to main:

For wasm programs using SIMD vector types, the type known at function
entry or exit may not match the type used within the body of the
function, so we have to bitcast them. We have to check all calls and
returns for this condition, which involves comparing a subset of a
function's signature with the CLIF types we're trying to use. Currently,
this check heap-allocates a short-lived Vec for the appropriate subset
of the signature.

But most of the time none of the values need a bitcast. So this patch
avoids allocating unless at least one bitcast is actually required, and
only saves the pointers to values that need fixing up. I leaned heavily
on iterators to keep space usage constant until we discover allocation
is necessary after all.

I don't think it's possible to eliminate the allocation entirely,
because the function signature we're examining is borrowed from the
FuncBuilder, but we need to mutably borrow the FuncBuilder to insert the
bitcast instructions. Fortunately, the FromIterator implementation for
Vec doesn't reserve any heap space if the iterator is empty, so we can
unconditionally collect into a Vec and still avoid unnecessary
allocations.
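The allocation-avoidance described above relies on a documented property of `Vec`'s `FromIterator` implementation: collecting an iterator that yields nothing performs no heap allocation. A minimal sketch of the pattern (placeholder names and `u32` types standing in for the PR's actual cranelift types):

```rust
// Hypothetical stand-in for comparing a signature's declared types
// against the CLIF types actually in use; not the PR's real code.
fn values_needing_bitcast(declared: &[u32], actual: &[u32]) -> Vec<u32> {
    assert_eq!(declared.len(), actual.len());
    declared
        .iter()
        .zip(actual)
        // keep only values whose actual type differs from the declared one
        .filter(|(d, a)| d != a)
        .map(|(_, a)| *a)
        .collect()
}

fn main() {
    // Common case: nothing to fix up. Vec's FromIterator returns Vec::new()
    // when the iterator is empty, so no heap allocation happens and the
    // capacity stays 0.
    let none = values_needing_bitcast(&[1, 2], &[1, 2]);
    assert_eq!(none.capacity(), 0);

    // Rare case: a mismatch triggers the one allocation actually needed.
    let some = values_needing_bitcast(&[1, 2], &[1, 3]);
    assert_eq!(some, vec![3]);
}
```

The iterator chain stays lazy throughout, so space usage is constant until the first mismatching pair is actually produced.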

Since the relationship between a function signature and a list of CLIF
values is somewhat complicated, I refactored all the uses of
bitcast_arguments and wasm_param_types. Instead there's
bitcast_wasm_params and bitcast_wasm_returns which share a helper
that combines the previous pair of functions into one.

According to DHAT, when compiling the Sightglass Spidermonkey benchmark,
this avoids 52k allocations averaging about 9 bytes each, which are
freed on average within 2k instructions.

Most Sightglass benchmarks, including Spidermonkey, show no performance
difference with this change. Only one has a slowdown, and it's small:

compilation :: nanoseconds :: benchmarks/shootout-matrix/benchmark.wasm

Δ = 689373.34 ± 593720.78 (confidence = 99%)

lazy-bitcast.so is 0.94x to 1.00x faster than main-e121c209f.so!
main-e121c209f.so is 1.00x to 1.06x faster than lazy-bitcast.so!

[19741713 21375844.46 32976047] lazy-bitcast.so
[19345471 20686471.12 30872267] main-e121c209f.so

But several Sightglass benchmarks have notable speed-ups, with smaller
improvements for shootout-ed25519, meshoptimizer, and pulldown-cmark,
and larger ones as follows:

compilation :: nanoseconds :: benchmarks/bz2/benchmark.wasm

Δ = 20071545.47 ± 2950014.92 (confidence = 99%)

lazy-bitcast.so is 1.26x to 1.36x faster than main-e121c209f.so!
main-e121c209f.so is 0.73x to 0.80x faster than lazy-bitcast.so!

[55995164 64849257.68 89083031] lazy-bitcast.so
[79382460 84920803.15 98016185] main-e121c209f.so

compilation :: nanoseconds :: benchmarks/blake3-scalar/benchmark.wasm

Δ = 16620780.61 ± 5395162.13 (confidence = 99%)

lazy-bitcast.so is 1.14x to 1.28x faster than main-e121c209f.so!
main-e121c209f.so is 0.77x to 0.88x faster than lazy-bitcast.so!

[54604352 79877776.35 103666598] lazy-bitcast.so
[94011835 96498556.96 106200091] main-e121c209f.so

compilation :: nanoseconds :: benchmarks/intgemm-simd/benchmark.wasm

Δ = 36891254.34 ± 12403663.50 (confidence = 99%)

lazy-bitcast.so is 1.12x to 1.24x faster than main-e121c209f.so!
main-e121c209f.so is 0.79x to 0.90x faster than lazy-bitcast.so!

[131610845 201289587.88 247341883] lazy-bitcast.so
[232065032 238180842.22 250957563] main-e121c209f.so


view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 21:14):

cfallin created PR review comment:

Can we split up this iterator chain a bit and give names to intermediates to make it easier to follow? I'm thinking perhaps wasm_param_tys for the first part up to grabbing value_types; then another intermediate after zip_eq, with a comment that we're zipping with given wasm arg values and asserting length is equal; then do the last two filters on those, commenting that we're taking all vectors whose arg-type is not equal to the param's value type.
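In generic form, the structure being suggested might look like the following (placeholder names and `u32` types, not the actual cranelift-wasm code; std's `zip` plus an explicit length assertion stands in for itertools' `zip_eq`):

```rust
// Placeholder illustration of splitting one long iterator chain into
// named intermediate steps, as suggested in the review comment above.
fn mismatches(sig_param_tys: &[u32], wasm_arg_tys: &[u32]) -> Vec<(u32, u32)> {
    // first part: the parameter value types pulled from the signature
    let wasm_param_tys = sig_param_tys.iter().copied();

    // zip with the given wasm arg values, asserting the lengths are equal
    assert_eq!(sig_param_tys.len(), wasm_arg_tys.len());
    let params_and_args = wasm_param_tys.zip(wasm_arg_tys.iter().copied());

    // take all pairs whose arg type is not equal to the param's value type
    params_and_args.filter(|(p, a)| p != a).collect()
}

fn main() {
    assert_eq!(mismatches(&[1, 2], &[1, 3]), vec![(2, 3)]);
    assert_eq!(mismatches(&[1, 2], &[1, 2]), vec![]);
}
```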

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 21:14):

cfallin created PR review comment:

Does it help at all to use a SmallVec here, tuned to some size that captures most cases?

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 21:14):

cfallin submitted PR review.


view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 21:27):

jameysharp created PR review comment:

I thought about using SmallVec but figured first I'd try the experiment that doesn't require any tuning. I'm not sure: how would you suggest picking a size? In the past I've usually picked however many elements will fit in two usize, because that's the minimum size the inline part of a SmallVec will consume anyway. But in this case that's 1, which hardly seems worth doing.
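The sizing heuristic above can be illustrated with std's `size_of` (assuming the typical `SmallVec` layout of a capacity word plus a union of the inline array with a pointer/length pair, and a hypothetical two-word element type matching the "that's 1" remark):

```rust
use std::mem::size_of;

// Hypothetical element type standing in for the PR's actual element,
// assumed here to be two words wide.
#[allow(dead_code)]
struct Fixup(usize, usize);

fn main() {
    // A Vec itself is always three words: pointer, length, capacity.
    assert_eq!(size_of::<Vec<Fixup>>(), 3 * size_of::<usize>());

    // A SmallVec's inline union occupies at least the two words needed
    // for its (pointer, length) pair, so the "free" inline capacity for
    // a two-word element is just one slot.
    let inline = (2 * size_of::<usize>()) / size_of::<Fixup>();
    assert_eq!(inline, 1);
}
```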

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 21:27):

jameysharp submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 21:30):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 21:30):

cfallin created PR review comment:

The SmallVec in this case will be on-stack so we can be a bit looser with its size I think (and sub rsp, 128 is almost always faster than malloc(128)). For some cases like this I have just picked a reasonable value but to be more objective about it we could try 1, 4, 16 and see what the difference is...


view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:00):

jameysharp submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:00):

jameysharp created PR review comment:

On the intgemm-simd benchmark, Sightglass says plain Vec is 53-73% faster than a size-16 SmallVec, and 8-22% faster than a size-4 SmallVec. I don't understand that result, do you?

Unless there's a simple explanation I'm not seeing, I'm inclined to not think further about SmallVec right now.

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:01):

jameysharp updated PR #4543 from lazy-bitcast to main.

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:04):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:04):

cfallin created PR review comment:

That's somewhat surprising, but I can hypothesize at least one potential explanation: if the return value is almost always an empty list, then the empty SmallVec takes more stack space than an empty Vec and hence creates a little more spread in the memory usage (so bad cache effects). I'm stretching a bit with that though.

Anyway if it's rare enough then a simple Vec is totally fine here; it's certainly better than what we had before!

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:40):

jameysharp submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:40):

jameysharp created PR review comment:

I decided to give size-1 SmallVec a shot too for completeness: if it's a matter of different stack frame sizes, then that should be no worse than Vec. But Vec was 44-58% faster on intgemm-simd than even size-1 SmallVec. So I remain thoroughly puzzled. :shrug:

view this post on Zulip Wasmtime GitHub notifications bot (Jul 27 2022 at 23:41):

jameysharp merged PR #4543.


Last updated: Oct 23 2024 at 20:03 UTC