fitzgen opened PR #8728 from fitzgen:saferpoints to bytecodealliance:main:
Tracking GC references and producing stack maps is a significant amount of complexity in `regalloc2`. At the same time, GC reference value types are pretty annoying to deal with in Cranelift itself. We know our `r64` is "actually" just an `i64` pointer, and we want to do `i64`-y things with it, such as an `iadd` to compute a derived pointer, but `iadd` only takes integer types and not `r64`s. We investigated loosening that restriction, and it was way too painful given the way that CLIF type inference and its controlling type vars work. So to compute those derived pointers, we have to first `bitcast` the `r64` into an `i64`. This is unfortunate in two ways. First, because of arcane interactions between register allocation constraints, stack maps, and ABIs, this involves inserting unnecessary register-to-register moves in our generated code, which hurts binary size and performance ever so slightly. Second, and much worse, this is a serious footgun. If a GC reference isn't an `r64` right now, then it will not appear in stack maps, and failure to record a live GC reference in a stack map means that the collector could reclaim the object while you are still using it, leading to use-after-free bugs! Very bad. And the mid-end needs to know not to GVN these bitcasts, or else we get similar bugs (see https://github.com/bytecodealliance/wasmtime/pull/8317).

Overall, GC references are a painful situation for us today.
This commit is the introduction of an alternative. (Note, though, that we aren't quite ready to remove the old stack maps infrastructure just yet.)
Instead of preserving GC references all the way through the whole pipeline, and computing live GC references and inserting spills at safepoints for stack maps at the very end of that pipeline in register allocation, the CLIF-producing frontend explicitly generates its own stack slots and spills for safepoints. The only thing the rest of the compiler pipeline needs to know is the metadata required to produce the stack map for the associated safepoint. We can completely remove `r32` and `r64` from Cranelift and just use plain `i32` and `i64` values. Or `f64`, if the runtime uses NaN-boxing, which the old stack maps system did not support at all. Or 32-bit GC references on a 64-bit target, which was also not supported by the old system. Furthermore, we cannot get miscompiles due to GVN'ing bitcasts that shouldn't be GVN'd, because there aren't any bitcasts hiding GC references from stack maps anymore. And in the case of a moving GC, we don't need to worry about the mid-end doing illegal code motion across calls that could have triggered a GC that invalidated the moved GC reference, because frontends will reload their GC references from the stack slots after the call, and that loaded value simply isn't a candidate for GVN with the previous version. We don't have to worry about those bugs by construction.

So everything gets a lot easier under this new system.
But this commit doesn't mean we are 100% done and ready to transition to the new system, so what is actually in here?
CLIF producers can mark values as needing to be present in a stack map if they are live across a safepoint in `cranelift-frontend`. This is the `FunctionBuilder::declare_needs_stack_map` method.

When we finalize the function we are building, we do a simple, single-pass liveness analysis to determine the set of GC references that are live at each safepoint, and then we insert spills to explicit stack slots just before the safepoint. We intentionally trade away the precision of a fixed-point liveness analysis for the speed and simplicity of a single-pass implementation.
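As a rough illustration of the single-pass approach (this is not the actual cranelift-frontend code; the toy IR and all names here are invented), one backward sweep over the blocks can collect the tracked values live across each safepoint:

```rust
use std::collections::HashSet;

// Toy IR: values are plain u32 ids; every instruction lists its defs and
// uses, and safepoints (e.g. calls) are flagged explicitly.
struct Inst {
    defs: Vec<u32>,
    uses: Vec<u32>,
    is_safepoint: bool,
}

struct Block {
    insts: Vec<Inst>,
    succs: Vec<usize>,
}

/// One backward pass over the blocks (assumed to be indexed in reverse
/// post-order, so iterating indices in reverse visits forward-edge
/// successors first). For each safepoint, record which `tracked` values are
/// live across it. Back edges are not iterated to a fixed point, so loop
/// liveness is approximated, trading precision for single-pass speed.
fn safepoint_live_sets(blocks: &[Block], tracked: &HashSet<u32>) -> Vec<HashSet<u32>> {
    let mut live_in: Vec<HashSet<u32>> = vec![HashSet::new(); blocks.len()];
    let mut safepoints = Vec::new();
    for b in (0..blocks.len()).rev() {
        // Live-out of the block is the union of its successors' live-ins.
        let mut live: HashSet<u32> = HashSet::new();
        for &s in &blocks[b].succs {
            live.extend(live_in[s].iter().copied());
        }
        for inst in blocks[b].insts.iter().rev() {
            // A value defined here is not live *across* this instruction.
            for d in &inst.defs {
                live.remove(d);
            }
            if inst.is_safepoint {
                // These are the tracked values that need spills to stack
                // slots (and stack map entries) at this safepoint.
                safepoints.push(live.intersection(tracked).copied().collect());
            }
            for u in &inst.uses {
                live.insert(*u);
            }
        }
        live_in[b] = live;
    }
    safepoints
}

fn main() {
    // v1 is a tracked GC reference defined before a call and used after it,
    // so it is live across the safepoint; v2 is only an argument to the call
    // and dies there, so it needs no stack map entry.
    let blocks = vec![Block {
        insts: vec![
            Inst { defs: vec![1], uses: vec![], is_safepoint: false },
            Inst { defs: vec![], uses: vec![2], is_safepoint: true },
            Inst { defs: vec![], uses: vec![1], is_safepoint: false },
        ],
        succs: vec![],
    }];
    let tracked: HashSet<u32> = [1, 2].into_iter().collect();
    let sets = safepoint_live_sets(&blocks, &tracked);
    assert!(sets[0].contains(&1) && !sets[0].contains(&2));
    println!("live across safepoint: {:?}", sets[0]);
}
```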
We annotate the safepoint with the metadata necessary to construct its associated stack map. This is the new `cranelift_codegen::ir::DataFlowGraph::append_user_stack_map_entry` method and all that stuff. These stack map entries are part of the CLIF and can be round-tripped through printing and parsing CLIF.
Each stack map entry describes a GC-managed value that is on the stack and how to locate it: its type, the stack slot it is located within, and the offset within that stack slot where it resides. Different stack map entries for the same safepoint may have different types, and an entry's width may differ from the target's pointer width.
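A stack map entry can be modeled roughly as follows (a hypothetical mirror of the shape described above; the real type lives in cranelift-codegen and its fields may differ):

```rust
/// Hypothetical model of one stack map entry: just enough information to
/// locate a spilled GC reference at a safepoint. Field names are invented.
#[derive(Debug, Clone, PartialEq, Eq)]
struct StackMapEntry {
    /// Size in bytes of the GC reference. This need not match the target's
    /// pointer width: e.g. 4 for a 32-bit reference on a 64-bit target.
    byte_size: u32,
    /// Index of the explicit stack slot the value was spilled into.
    slot: u32,
    /// Byte offset of the value within that stack slot.
    offset: u32,
}

fn main() {
    // Two entries at the same safepoint may have different widths, e.g. a
    // 32-bit GC reference next to a 64-bit (possibly NaN-boxed) one.
    let entries = vec![
        StackMapEntry { byte_size: 4, slot: 0, offset: 0 },
        StackMapEntry { byte_size: 8, slot: 0, offset: 8 },
    ];
    assert_ne!(entries[0].byte_size, entries[1].byte_size);
    println!("{entries:?}");
}
```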
Here is what is not handled yet and left for future follow-up commits:

- Lowering the stack map entries' locations from symbolic stack slot and offset pairs to physical stack frame offsets after register allocation.
- Coalescing and aggregating the safepoints and their raw stack map entries into a compact PC-to-stack-map table during emission.
- Supporting moving GCs. Right now we generate spills into stack slots for live GC references just before safepoints, but we don't reload the GC references from the stack upon their next use after the safepoint. This involves rewriting uses of the old, spilled values, which could be a little finicky, but we think we have a good approach.
- Porting Wasmtime over to this new stack maps system.
- Removing the old stack map system, including `r{32,64}` from Cranelift and GC reference handling from `regalloc2`. (For the time being, the new system generally refers to "user stack maps" to disambiguate from the old system where it might otherwise be confusing.)

If we wanted to remove the old system now, that would require us to also port Wasmtime to the new system now, and we'd end up with a monolithic PR. Better to do this incrementally and temporarily have the old and in-progress new systems overlap for a short period of time.
fitzgen requested wasmtime-compiler-reviewers for a review on PR #8728.
fitzgen requested cfallin for a review on PR #8728.
cfallin submitted PR review:
This is extremely cool work and I'm very happy we'll be able to remove the complexity later in the pipeline eventually!
A few nits and requests for some comments, minor refactorings, etc below but nothing major.
cfallin created PR review comment:
Can we have a doc-comment at the module level here describing the overall abstraction? Basically: call instructions (only?) have sets of `UserStackMapEntry` structs attached; these stack map entries point to CLIF-level stack slots.
cfallin submitted PR review.
cfallin created PR review comment:
This sequence to compute `max_vals_in_stack_map_by_size` is a little dense -- could we break it out into some helpers, e.g. something like:

```rust
fn live_value_count_by_size(
    dfg: &DataFlowGraph,
    live: impl Iterator<Item = Value>,
) -> impl Iterator<Item = (usize, usize)> { ... }

fn val_bucket_from_size(size: usize) -> usize { ... }

for (byte_size, count) in live_value_count_by_size(&self.func.dfg, live.iter()) {
    let bucket = val_bucket_from_size(byte_size);
    max_vals_in_stack_map_by_size[bucket] = core::cmp::max(..., count);
}
```
cfallin created PR review comment:
name it `..._by_log2_size` to clarify?
cfallin created PR review comment:
Reloads are conspicuously absent here -- are we making the assumption that we won't have a moving GC (so we need to root but don't need to allow mutation of references)? Would be good to have that in a module-level doc-comment too :-)
cfallin created PR review comment:
(here too, for the constant)
cfallin created PR review comment:
perhaps to note for more precision here (to guide future thinking around this): would require a fixed-point loop in the presence of backedges, at least?
For example if we (i) do propagate liveness back to branch args from blockparams, and (ii) at a branch, observe that all targets have already been visited (have higher RPO numbers), then we can use the real liveness rather than conservatively assume live.
No real data but my hunch is that this may be important when we have "extraneous blocks" due to Wasm control flow structure that have a single forward out-edge and that we can't yet optimize out (and wouldn't have at this point in the pipeline anyway)...
cfallin created PR review comment:
Can we add a comment here noting that this (implicitly) includes block args as well, in reference to above discussion of blockparams? Was looking for the logic as an explicit thing for branches and missed this (implicit/more general) handling.
cfallin created PR review comment:
Can we make the `5` a constant defined somewhere (and used in the initialization above too)?
fitzgen submitted PR review.
fitzgen created PR review comment:
From the commit message / top level PR comment:
Here is what is not handled yet, and left for future follow up commits:
[...]
- Supporting moving GCs. Right now we generate spills into stack slots for live GC references just before safepoints, but we don't reload the GC references from the stack upon their next use after the safepoint. This involves rewriting uses of the old, spilled values which could be a little finicky, but we think we have a good approach.
Happy to add a comment in the meantime.
cfallin submitted PR review.
cfallin created PR review comment:
Definitely missed/forgot that, sorry! Yep, a doc-comment point saying the same would be great.
fitzgen updated PR #8728.
fitzgen has enabled auto merge for PR #8728.
fitzgen updated PR #8728.
fitzgen updated PR #8728.
fitzgen updated PR #8728.
fitzgen has enabled auto merge for PR #8728.
fitzgen merged PR #8728.
Last updated: Dec 23 2024 at 12:05 UTC