fitzgen opened PR #5300 from heap-load-store to main:
As discussed in https://github.com/bytecodealliance/wasmtime/issues/5283.
I intend to implement option (3) from that issue, but this PR doesn't actually add any legalization or backend lowerings yet, just interpreter support and a suite of tests.
fitzgen requested cfallin for a review on PR #5300.
fitzgen submitted PR review.
fitzgen created PR review comment:
Not sure why this was needed, but rustc started giving me failed-to-infer-types errors for this all of a sudden.
fitzgen submitted PR review.
cfallin submitted PR review.
cfallin created PR review comment:
This is more than just a reference to a heap, right? It's (i) the heap reference, (ii) the access flags, and (iii) a fixed offset. Basically, everything needed for a heap load/store that is not an SSA value, if I understand right. Could we note that here?
Also let's note somewhere that this is a little different than other entities in CLIF: it isn't an explicitly-defined entity in the preamble (like heap0 = ... or gv0 = ...) but rather is logically a piece of an instruction (of HeapLoad or HeapStore format), and one is nominally created for each use-site (each instruction).
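As a rough illustration of the entity cfallin is describing, here is a minimal sketch of a side-table entry bundling those three pieces. The type and field names are stand-ins mirroring the CLIF types mentioned in this thread, not necessarily what the PR actually defines.

```rust
// Hypothetical sketch only: names mirror the CLIF types discussed above,
// not necessarily the PR's real definitions.
struct Heap(u32);    // reference to a heap declared in the function
struct MemFlags(u8); // memory-access flags
struct Uimm32(u32);  // fixed byte offset

/// Everything a HeapLoad/HeapStore needs that is not an SSA value:
/// (i) the heap reference, (ii) the access flags, (iii) a fixed offset.
/// One of these is nominally created per use-site (per instruction).
struct HeapImmData {
    heap: Heap,
    flags: MemFlags,
    offset: Uimm32,
}
```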
cfallin created PR review comment:
Why saturating_add here rather than e.g. checked_add with handling for the None case? I guess because these are i128s and not expected to actually saturate? If so, could we do .checked_add(...).expect("128-bit math should not overflow here") or somesuch?
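For readers following along, a minimal standalone sketch of the two alternatives under discussion; this is plain Rust, not the interpreter's actual code, and base/offset are stand-ins for the widened address components.

```rust
// Plain-Rust sketch of the two alternatives; not the interpreter's code.
fn addr_saturating(base: i128, offset: i128) -> i128 {
    // Silently clamps at i128::MAX / i128::MIN on overflow.
    base.saturating_add(offset)
}

fn addr_checked(base: i128, offset: i128) -> i128 {
    // Panics loudly if the "128-bit math never overflows here"
    // assumption is ever violated.
    base.checked_add(offset)
        .expect("128-bit math should not overflow here")
}

fn main() {
    // In the ranges that actually occur, both compute the same address.
    assert_eq!(addr_saturating(1 << 40, 8), addr_checked(1 << 40, 8));
}
```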
cfallin submitted PR review.
cfallin created PR review comment:
I've seen those come up when unrelated bits fail to typecheck; perplexing to me as well.
Now that things typecheck, though, could you try removing this?
jameysharp submitted PR review.
jameysharp created PR review comment:
So heap_store takes two Values and a Uimm32 (32 bits each), a MemFlags (8 bits), and a Heap, right? And we need two bytes in InstructionData for the opcode plus the enum discriminant. As a somewhat out-there suggestion: can we make Heap be a u8 underneath and avoid the side table? It would be surprising to have more than 256 separate heaps, right?
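A quick back-of-the-envelope check of that arithmetic, using packed stand-in structs with the field sizes stated above and assuming InstructionData's usual 16-byte size; the real InstructionData is an enum, so this only illustrates the byte counts.

```rust
// Stand-in layouts for counting bytes only; not Cranelift's real types.
#[repr(C, packed)]
struct HeapStoreHeapU32 {
    opcode_and_discriminant: [u8; 2],
    args: [u32; 2], // two Values, 4 bytes each
    offset: u32,    // Uimm32
    flags: u8,      // MemFlags (8 bits)
    heap: u32,      // Heap as a full 32-bit entity index
}

#[repr(C, packed)]
struct HeapStoreHeapU8 {
    opcode_and_discriminant: [u8; 2],
    args: [u32; 2],
    offset: u32,
    flags: u8,
    heap: u8, // Heap shrunk to 8 bits
}

fn main() {
    // 19 bytes: over a 16-byte InstructionData budget, hence the side table.
    assert_eq!(std::mem::size_of::<HeapStoreHeapU32>(), 19);
    // 16 bytes: would fit with no side table at all.
    assert_eq!(std::mem::size_of::<HeapStoreHeapU8>(), 16);
}
```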
cfallin submitted PR review.
cfallin created PR review comment:
It's pretty tempting, on the one hand... on the other hand, with component-model work, I suspect that eventually having multiple heaps accessible from one body of code (e.g. adapter functions) may be common, and 256 heaps starts to feel uncomfortably close to what may actually exist (e.g. O(100) heaps for O(100) modules in a large component graph)...
cfallin created PR review comment:
(I guess in such a case one could argue that an adapter function should never need to see more than the from and to heaps, rather than all heaps in the graph, so maybe it's OK. I'm not entirely sure, just trying to avoid creating surprising limits if we can)
cfallin submitted PR review.
jameysharp created PR review comment:
A related question: Heap is an index into the heaps declared on the current function, so this hypothetical 256-heap limit would be per-function; do I have that right? So a core wasm module, or a collection of components, could still have billions of heaps as long as only a few are used in each function. I think that's assumed in your comment about adapter functions, but I don't understand the heap infrastructure well so I wanted to check.

I don't have strong feelings on this either way; it just seems like the instruction is so close to fitting without a side table that we ought to at least think about it a little.
jameysharp submitted PR review.
cfallin submitted PR review.
cfallin created PR review comment:
Indeed, it's per-function. I haven't looked closely at how cranelift-wasm actually defines heaps, but I would suspect that if it doesn't already create descriptors on-demand as heaps are actually referenced, it wouldn't be hard to do so.
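A hedged sketch of the on-demand idea mentioned above, using a hypothetical environment type rather than cranelift-wasm's real API: declare a heap in the current function only the first time that function touches the corresponding wasm memory, so the per-function heap count stays small even when the module has many memories.

```rust
// Hypothetical sketch; not cranelift-wasm's real API.
use std::collections::HashMap;

struct Heap(u32); // per-function heap entity (index into this function's heaps)

#[derive(Default)]
struct FuncEnv {
    // wasm memory index -> heap index already declared in *this* function
    declared: HashMap<u32, u32>,
    next_heap: u32,
}

impl FuncEnv {
    /// Return the heap for `memory_index`, declaring it on first use so the
    /// function only ever declares the heaps it actually references.
    fn heap_for_memory(&mut self, memory_index: u32) -> Heap {
        let next = &mut self.next_heap;
        let idx = *self.declared.entry(memory_index).or_insert_with(|| {
            let h = *next;
            *next += 1;
            h
        });
        Heap(idx)
    }
}

fn main() {
    let mut env = FuncEnv::default();
    assert_eq!(env.heap_for_memory(7).0, 0); // first use declares heap 0
    assert_eq!(env.heap_for_memory(7).0, 0); // reused, nothing new declared
    assert_eq!(env.heap_for_memory(3).0, 1); // a second memory gets heap 1
}
```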
fitzgen created PR review comment:
Ah, this is just a copy-paste of ir::Heap's docs that I forgot to update.
fitzgen submitted PR review.
fitzgen submitted PR review.
fitzgen created PR review comment:
Mostly because error handling in the interpreter is pretty weird and there is no easy way to turn arbitrary impl Error types into traps. But saturating should be correct and seems less wordy and involved than the checked adds, to me.
fitzgen updated PR #5300 from heap-load-store to main.
fitzgen submitted PR review.
fitzgen created PR review comment:
Right now, all of our entities are u32, so this feels like a larger change we would want to think over a bit. As such, I'm not going to block landing this PR on thinking about that.
fitzgen has enabled auto merge for PR #5300.
fitzgen merged PR #5300.
cfallin submitted PR review.
cfallin created PR review comment:
I think I'd actually prefer the checked_add/expect sequence: saturating_add is explicitly incorrect in general for some values allowed by the types (close to the limit), and is a suspicious and unexpected thing to see in an address computation. It's correct here because of the ranges involved, but those invariants are implicit and not at all obvious to the reader. If we're comfortable saying that the saturation should never actually happen (in the current version), then we should be equally happy to say that the expect should never actually fail (in the proposed version), except in the latter case we'll know if we're wrong.
cfallin edited PR review comment.