Stream: git-wasmtime

Topic: wasmtime / PR #2473 Fix lowering instruction-sinking (loa...


view this post on Zulip Wasmtime GitHub notifications bot (Dec 03 2020 at 22:51):

cfallin opened PR #2473 from fix-lowering to main:

This fixes a subtle corner case exposed during fuzzing. If we have a bit
of CLIF like:

    v0 = load.i64 ...
    v1 = iadd.i64 v0, ...
    v2 = do_other_thing v1
    v3 = load.i64 v1

and if this is lowered using a machine backend that can merge loads into
ALU ops, and that has an addressing mode that can look through add
ops, then the following can happen:

  1. We lower the load at v3. This looks backward at the address
    operand tree and finds that v1 is v0 plus other things; it has an
    addressing mode that can add v0's register and the other things
    directly; so it calls put_value_in_reg(v0) and uses its register in
    the amode. At this point, the add producing v1 has no references,
    so it will not (yet) be codegen'd.

  2. We lower do_other_thing, which puts v1 in a register and uses it.
    The iadd now has a reference.

  3. We reach the iadd and, because it has a reference, lower it. Our
    machine has the ability to merge a load into an ALU operation.
    Crucially, we think the load at v0 is mergeable because it has
    only one user, the add at v1 (!). So we merge it.

  4. We reach the load at v0 and because it has been merged into the
    iadd, we do not separately codegen it. The register that holds v0
    is thus never written, and the use of this register by the final load
    (Step 1) will see an undefined value.
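Steps 1 and 3 both hinge on the backend's ability to look through the add when matching an addressing mode. A toy Rust sketch of that kind of matcher (illustrative names only, not Cranelift's actual types or API):

```rust
// Toy model of an addressing-mode matcher that looks through an `iadd`
// (illustrative only; not Cranelift's actual API).

/// How the address value was defined.
#[derive(Clone, Copy)]
enum Def {
    /// `iadd a, b`, where `a` and `b` name registers.
    Iadd(u32, u32),
    /// The value is already in a register.
    Reg(u32),
}

/// A machine addressing mode.
#[derive(Debug, PartialEq)]
enum Amode {
    /// `[base]`
    Base(u32),
    /// `[base + index]`
    BaseIndex(u32, u32),
}

/// Matching looks through the add: its operands feed the amode directly,
/// so no separate add instruction is emitted for the address -- which is
/// exactly why the use of v0's register in step 1 is easy to lose track of.
fn lower_amode(addr: Def) -> Amode {
    match addr {
        Def::Iadd(base, index) => Amode::BaseIndex(base, index),
        Def::Reg(base) => Amode::Base(base),
    }
}

fn main() {
    // The load at v3: its address v1 is an iadd, so both operands go
    // straight into a base+index amode.
    assert_eq!(lower_amode(Def::Iadd(0, 1)), Amode::BaseIndex(0, 1));
    assert_eq!(lower_amode(Def::Reg(7)), Amode::Base(7));
}
```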

The logic error here is that in the presence of pattern matching that
looks through pure ops, we can end up with multiple uses of a value that
originally had a single use (because we allow lookthrough of pure ops in
all cases). In other words, the multiple-use-ness of v1 "passes
through" in some sense to v0. However, the load sinking logic is not
aware of this.
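One way to see the bug is as a use-count check that only knows about CLIF-level users, not the extra register use the amode lookthrough created. A minimal sketch, with invented names:

```rust
// Minimal model of the buggy mergeability check (invented names, not
// Cranelift's API): the load's CLIF-level user count is 1 (the iadd),
// so it looks safe to sink -- but the amode built in step 1 also reads
// v0's register, and that use is invisible to this check.

/// The buggy predicate: "the load has exactly one user, so merge it."
fn buggy_load_is_mergeable(clif_level_users: u32) -> bool {
    clif_level_users == 1
}

fn main() {
    // Step 1 recorded a register use of v0 that the check never sees.
    let hidden_amode_use_of_v0 = true;

    // Step 3: the load at v0 has one CLIF-level user (the iadd), so the
    // buggy check approves the merge...
    let merged = buggy_load_is_mergeable(1);

    // Step 4: ...and a merged load is never codegenned on its own, so
    // v0's register is never written.
    let v0_register_written = !merged;

    // The final load's amode reads a register that was never written.
    assert!(hidden_amode_use_of_v0 && !v0_register_written);
}
```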

The fix, I think, is pretty simple: we disallow an effectful instruction
from sinking/merging if it already has some other use when we look back
at it.

If we disallowed lookthrough of any op that had multiple uses, even
pure ones, then we would avoid this scenario; but earlier experiments
showed that to have a non-negligible performance impact, so (given that
we've worked out the logic above) I think this complexity is worth it.
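Under those constraints, the fixed predicate can be sketched like this (again with invented names, not Cranelift's actual API): an effectful op may sink only if no other use has been recorded by the time we look back at it, while pure ops keep unrestricted lookthrough for performance.

```rust
// Hedged sketch of the fix described above (invented names): an
// effectful instruction such as a load may be sunk/merged only if, at
// the moment a user looks back at it, no *other* use has already been
// recorded. Pure ops keep unrestricted lookthrough, since restricting
// them was measured to cost real performance.

struct InstInfo {
    has_side_effect: bool,
    /// Uses recorded so far during lowering, besides the one currently
    /// asking to merge.
    other_recorded_uses: u32,
}

fn may_sink(inst: &InstInfo) -> bool {
    if inst.has_side_effect {
        inst.other_recorded_uses == 0
    } else {
        true // pure ops: lookthrough allowed regardless of use count
    }
}

fn main() {
    // The load at v0, seen from the iadd in step 3: the amode built in
    // step 1 already counted as a use, so sinking is now refused and the
    // load is codegenned normally.
    let load_v0 = InstInfo { has_side_effect: true, other_recorded_uses: 1 };
    assert!(!may_sink(&load_v0));

    // A load with no prior uses can still be merged, as before.
    let fresh_load = InstInfo { has_side_effect: true, other_recorded_uses: 0 };
    assert!(may_sink(&fresh_load));
}
```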


view this post on Zulip Wasmtime GitHub notifications bot (Dec 03 2020 at 22:51):

cfallin requested fitzgen, bnjbvr and julian-seward1 for a review on PR #2473.


view this post on Zulip Wasmtime GitHub notifications bot (Dec 03 2020 at 22:59):

cfallin updated PR #2473 from fix-lowering to main (description unchanged from above).

view this post on Zulip Wasmtime GitHub notifications bot (Dec 03 2020 at 23:05):

fitzgen submitted PR Review.

view this post on Zulip Wasmtime GitHub notifications bot (Dec 03 2020 at 23:56):

cfallin merged PR #2473.


Last updated: Dec 23 2024 at 12:05 UTC