Stream: git-wasmtime

Topic: wasmtime / PR #3516 Add a fuzz mode to stress unaligned w...


Wasmtime GitHub notifications bot (Nov 11 2021 at 21:29):

alexcrichton opened PR #3516 from fuzz-unaligned to main:

Alignment on all memory instructions in wasm is currently best-effort
and not actually required, meaning that whatever address wasm actually
uses should work regardless of whether that address is aligned or not.
This is theoretically tested in the fuzzers via wasm-smith-generated
code, but wasm-smith today doesn't have a particularly high chance of
generating an actual successful load/store.

This commit adds a new configuration option to the Config generator
for fuzzing which forces use of a custom linear memory implementation
backed by Rust's Vec<u8>, with the base address of linear memory
off-by-one relative to the base address of the Vec<u8> itself. This
should theoretically force host addresses to almost always be
unaligned, even when wasm addresses are otherwise aligned.
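
A minimal sketch of the idea, not the PR's actual implementation (the type and method names here are hypothetical; the real change plugs into wasmtime's fuzzing Config and memory-creation hooks): allocate one extra byte in a Vec<u8> and hand out a base pointer at offset 1, so addresses that are aligned from wasm's point of view land on odd host addresses.

```rust
// Hypothetical off-by-one, Vec-backed linear memory used only to illustrate
// the trick described above.
struct UnalignedMemory {
    // One extra byte of capacity so that offset 1 still covers `byte_size` bytes.
    buffer: Vec<u8>,
}

impl UnalignedMemory {
    fn new(byte_size: usize) -> Self {
        // Zero-filled, like a freshly created wasm linear memory.
        Self { buffer: vec![0; byte_size + 1] }
    }

    /// Base address handed to generated code: deliberately misaligned by one,
    /// so even 16-byte-aligned wasm addresses become unaligned host addresses.
    fn base_ptr(&mut self) -> *mut u8 {
        unsafe { self.buffer.as_mut_ptr().add(1) }
    }
}
```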

The main interesting fuzz coverage here is likely to be in the existing
differential target, which compares running the same module in wasmtime
with two different Config values to ensure the same results are
produced. This probably won't increase coverage all that much in the
near future because wasm-smith rarely generates successful loads/stores,
but in the meantime hooking this up into Config also means that we'll
be running in comparison against v8 and ensuring that all spec tests
succeed when misalignment is forced at the hardware level.
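
For context, a hedged sketch of what that differential check amounts to, written against a recent wasmtime embedding API and assuming the module exports a `run: () -> i32` function (both of which are assumptions, not the fuzz target's actual code):

```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

// Instantiate the module under the given Config and call its `run` export.
fn run_once(config: &Config, wasm: &[u8]) -> anyhow::Result<i32> {
    let engine = Engine::new(config)?;
    let module = Module::new(&engine, wasm)?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), i32>(&mut store, "run")?;
    let result = run.call(&mut store, ())?;
    Ok(result)
}

// The differential property: two Configs (e.g. one forcing unaligned host
// memory) must produce identical results for the same module.
fn differential_check(lhs: &Config, rhs: &Config, wasm: &[u8]) -> anyhow::Result<()> {
    anyhow::ensure!(run_once(lhs, wasm)? == run_once(rhs, wasm)?);
    Ok(())
}
```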

As a side effect this commit also cleans up the fuzzers slightly.


Wasmtime GitHub notifications bot (Nov 11 2021 at 21:33):

alexcrichton requested fitzgen for a review on PR #3516.

Wasmtime GitHub notifications bot (Nov 11 2021 at 22:35):

fitzgen submitted PR review.

Wasmtime GitHub notifications bot (Nov 11 2021 at 22:35):

fitzgen submitted PR review.

Wasmtime GitHub notifications bot (Nov 11 2021 at 22:35):

fitzgen created PR review comment:

There is an interesting question here: do we need to _always_ pessimize (simd) loads/stores to work with unaligned memory?

Ideally, we would have a fast path and a slow path, emit the fast path when the Wasm says it is correctly aligned, and the slow path otherwise. The fast path would then have a fallback (probably via signal handlers) if the load/store wasn't actually aligned, like the Wasm said it would be.

However, if the memory itself is unaligned, then the "aligned" loads/stores in Wasm are unaligned at the native level, meaning we would always hit the fast path's fallback, which is presumably even worse than just using slow path loads/stores.

So allowing unaligned memories potentially invalidates our speculative fast paths for loads/stores. I guess that is fine, because no one should ever give us unaligned memories in practice?

Wasmtime GitHub notifications bot (Nov 11 2021 at 22:59):

cfallin submitted PR review.

Wasmtime GitHub notifications bot (Nov 11 2021 at 22:59):

cfallin created PR review comment:

This is a good question! In practice, at least on modern x86 cores, unaligned SSE loads have the same performance as aligned ones (zero penalty on Haswell and later, IIRC, which is the 2013 core); and on earlier cores, IIRC, it split into two uops (or two issues of one uop, I don't remember), so at worst it costs "one extra ALU instruction". An alignment check is at least that (AND rAddr, 1/3/7/15) and a conditional branch; the branch doesn't enter the BTB if never taken, but still costs fetch bandwidth. So I think it's always better to use unaligned loads/stores (movups rather than movaps) unconditionally.

I'm not sure about aarch64; maybe @akirilov-arm or @sparker-arm can say more? I suspect at least on some smaller cores it might be an interesting question.
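
To make the trade-off in this thread concrete, here is a hedged Rust illustration of the two lowerings under discussion (this is not Cranelift's actual codegen, just its shape): the checked version is the CFG diamond, while `std::ptr::read_unaligned` is the "emit the unaligned instruction unconditionally" approach.

```rust
use std::ptr;

// CFG-diamond lowering: mask, branch, and pick one of two paths.
unsafe fn checked_load(addr: *const u128) -> u128 {
    if (addr as usize) & 15 == 0 {
        // Fast path: the address is 16-byte aligned.
        ptr::read(addr)
    } else {
        // Slow path: fall back to an unaligned read.
        ptr::read_unaligned(addr)
    }
}

// Unconditional lowering: a single unaligned load, no test and no branch.
// On modern x86 and aarch64 this is as fast as the aligned form when the
// address happens to be aligned.
unsafe fn unconditional_load(addr: *const u128) -> u128 {
    ptr::read_unaligned(addr)
}
```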

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:06):

fitzgen submitted PR review.

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:06):

fitzgen created PR review comment:

Alex mentioned that he saw crashes on the aarch64 machine with this change, so I assume this is a case where we can't rely on the modern x86 behavior.

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:07):

fitzgen submitted PR review.

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:07):

fitzgen created PR review comment:

(I'll let him provide more details)

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:16):

alexcrichton submitted PR review.

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:16):

alexcrichton created PR review comment:

Personally I've always been under the impression that unaligned things are "just as fast" nowadays and the overhead and angst of correctly implementing "fixup the operation in a signal handler" is quite far from being worth it.

To clarify though, I haven't seen any new actual crashes with this fuzzer. I ran locally for a bit with simd enabled on both x64 and aarch64, but saw no crashes that were interesting (just bugs from me writing this new fuzz stuff).

My main goal with this sort of fuzzer is to weed out codegen issues like https://github.com/bytecodealliance/wasmtime/issues/2943 with more regularity.

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:40):

cfallin submitted PR review.

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:40):

cfallin created PR review comment:

I just read a bit more about this and it seems that aarch64 does indeed support unaligned vector loads as well, architecturally (and tested just now with a little asm on my RPi4 doing a LDR qN, [xN] with a pointer ending in ...1). I think for simplicity I'd prefer to avoid the two-path solution here, as (on thinking through implications to codegen a bit more) a CFG diamond at every load/store would (i) significantly increase code size, (ii) slow down any CFG-sensitive analyses, e.g. liveness and branch splitting in regalloc, and (iii) most likely add measurable runtime overhead. We can definitely think about this more if we have to support an architecture that forces us into this, IMHO...

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:54):

fitzgen created PR review comment:

I was imagining we wouldn't actually emit code to do checks (so no cfg diamond), just catch unaligned-access faults in signal handlers and then jump to slow-path stubs from there. But it seems I misunderstood Alex, and we don't actually support any architectures where this is an issue, so I think we can continue ignoring it for now.

Wasmtime GitHub notifications bot (Nov 11 2021 at 23:54):

fitzgen submitted PR review.

Wasmtime GitHub notifications bot (Nov 15 2021 at 14:24):

alexcrichton merged PR #3516.

Wasmtime GitHub notifications bot (Nov 15 2021 at 17:22):

akirilov-arm submitted PR review.

Wasmtime GitHub notifications bot (Nov 15 2021 at 17:22):

akirilov-arm created PR review comment:

As @cfallin said, support for unaligned data accesses (not instruction fetches) to normal memory is required by the 64-bit Arm architecture and can be toggled at runtime by privileged software (e.g. the operating system kernel), but in practice all environments we target enable it; in fact, some functionality in the system libraries like memcpy() may rely on it. Vector operations are not an exception, though we could use the structure load LD1, for instance, which requires alignment on the element size (not the vector size). As for performance, it depends on the particular implementation, but usually there is no penalty unless the access crosses a coarser-grained boundary such as 16 or 64 bytes.

There is one major exception to this rule - atomic and exclusive operations. Would this fuzz mode be applied to code that uses the Wasm threads proposal (which I think is the only case that would result in the generation of those instructions)?

Wasmtime GitHub notifications bot (Nov 15 2021 at 17:46):

alexcrichton submitted PR review.

Wasmtime GitHub notifications bot (Nov 15 2021 at 17:46):

alexcrichton created PR review comment:

Ah yeah my assumption was that we would disable this mode of fuzzing once the threads proposal was implemented. I agree that I don't think we can get away with misaligning the host memory once atomics come into play.

Wasmtime GitHub notifications bot (Nov 15 2021 at 18:40):

fitzgen submitted PR review.

Wasmtime GitHub notifications bot (Nov 15 2021 at 18:40):

fitzgen created PR review comment:

(Or we could just make sure that we never combine threads and unaligned memories in the fuzzer, rather than disabling it entirely.)
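
A minimal sketch of that constraint (the type and field names are hypothetical, not the fuzzer's actual generator):

```rust
// Hypothetical stand-in for the fuzzer's generated configuration.
struct FuzzConfig {
    wasm_threads: bool,
    unaligned_memory: bool,
}

// Applied after a config is generated: atomic accesses must be naturally
// aligned, so never combine the threads proposal with the off-by-one memory.
fn sanitize(config: &mut FuzzConfig) {
    if config.wasm_threads {
        config.unaligned_memory = false;
    }
}
```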

