Stream: git-wasmtime

Topic: wasmtime / PR #5300 Cranelift: Add `heap_load` and `heap_...


view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:02):

fitzgen opened PR #5300 from heap-load-store to main:

As discussed in https://github.com/bytecodealliance/wasmtime/issues/5283.

I intend to implement option (3) from that issue, but this PR doesn't actually add any legalization or backend lowerings yet, just interpreter support and a suite of tests.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:02):

fitzgen requested cfallin for a review on PR #5300.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:05):

fitzgen submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:05):

fitzgen created PR review comment:

Not sure why this was needed, but rustc started giving me failed-to-infer-types errors for this all of a sudden.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:05):

fitzgen submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:25):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:25):

cfallin created PR review comment:

This is more than just a reference to a heap, right? It's (i) the heap reference, (ii) the access flags, and (iii) a fixed offset. Basically, everything needed for a heap load/store that is not an SSA value, if I understand right. Could we note that here?

Also, let's note somewhere that this is a little different from other entities in CLIF: it isn't an explicitly-defined entity in the preamble (like heap0 = ... or gv0 = ...) but rather is logically a piece of an instruction (of HeapLoad or HeapStore format), and one is nominally created for each use-site (each instruction).
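A minimal Rust sketch of the bundle being described here, using placeholder types rather than the PR's actual `ir` definitions (the names are illustrative):

```rust
// Placeholder stand-ins for the corresponding `cranelift_codegen::ir` types.
#[derive(Clone, Copy)]
struct Heap(u32); // reference to a heap declared in the preamble (`heap0 = ...`)

#[derive(Clone, Copy)]
struct MemFlags(u8); // memory access flags (alignment, trap behavior, ...)

/// Everything a heap load/store needs that is not an SSA value. Unlike
/// `heap0 = ...` or `gv0 = ...`, this is not an explicitly declared preamble
/// entity; it is logically a piece of a HeapLoad/HeapStore-format instruction,
/// with one nominally created per use-site.
struct HeapImmData {
    heap: Heap,      // (i) which heap is being accessed
    flags: MemFlags, // (ii) the access flags
    offset: u32,     // (iii) a fixed byte offset (a Uimm32 in CLIF)
}

fn main() {
    // Illustrative only: the non-SSA parts of a load at `heap0 + index + 16`.
    let _imm = HeapImmData { heap: Heap(0), flags: MemFlags(0), offset: 16 };
}
```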

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:25):

cfallin created PR review comment:

Why saturating_add here rather than, e.g., checked_add with handling for the None case? I guess because these are i128s and not expected to actually saturate? If so, could we do .checked_add(...).expect("128-bit math should not overflow here") or some such?
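A small sketch of the two alternatives being discussed, on i128 arithmetic as in the interpreter; the function names and values here are illustrative only:

```rust
fn addr_saturating(base: i128, offset: i128) -> i128 {
    // Silently clamps to i128::MAX on overflow; correctness relies on the
    // unstated invariant that real address math never gets near the limit.
    base.saturating_add(offset)
}

fn addr_checked(base: i128, offset: i128) -> i128 {
    // Same invariant, but stated: if it is ever violated, we find out.
    base.checked_add(offset)
        .expect("128-bit math should not overflow here")
}

fn main() {
    assert_eq!(addr_saturating(0x1000, 8), 0x1008);
    assert_eq!(addr_checked(0x1000, 8), 0x1008);
    // Near the limit the two diverge: saturating clamps, checked panics.
    assert_eq!(addr_saturating(i128::MAX, 1), i128::MAX);
    // addr_checked(i128::MAX, 1) would panic with the message above.
}
```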

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:25):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 22:25):

cfallin created PR review comment:

I've seen those come up when unrelated bits fail to typecheck; perplexing to me as well.

Now that things typecheck, though, could you try removing this?

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 23:41):

jameysharp submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 23:41):

jameysharp created PR review comment:

So heap_store takes two Values and a Uimm32 (32 bits each), a MemFlags (8 bits), and a Heap, right? And we need two bytes in InstructionData for the opcode plus the enum discriminant. As a somewhat out-there suggestion: Can we make Heap be a u8 underneath and avoid the side table? It would be surprising to have more than 256 separate heaps, right?
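A back-of-the-envelope sketch of that packing argument, assuming the usual 16-byte `InstructionData` budget; this is not Cranelift's actual layout, just the size arithmetic:

```rust
// Hypothetical fully-inline heap_store payload with a one-byte heap index.
#[repr(C)]
struct HypotheticalHeapStore {
    args: [u32; 2],      // two SSA `Value` references, 4 bytes each
    offset: u32,         // `Uimm32` immediate byte offset
    opcode_and_tag: u16, // ~2 bytes for the opcode plus the enum discriminant
    flags: u8,           // `MemFlags`
    heap: u8,            // one-byte `Heap` index (at most 256 heaps per function)
}

fn main() {
    // 8 + 4 + 2 + 1 + 1 = 16 bytes, i.e. everything would fit inline,
    // with no side table needed.
    assert_eq!(std::mem::size_of::<HypotheticalHeapStore>(), 16);
}
```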

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 23:44):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 23:44):

cfallin created PR review comment:

It's pretty tempting, on the one hand... on the other hand, with component-model work, I suspect that eventually having multiple heaps accessible from one body of code (e.g. adapter functions) may be common, and 256 heaps starts to feel uncomfortably close to what may actually exist (e.g. O(100) heaps for O(100) modules in a large component graph)...

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 23:45):

cfallin created PR review comment:

(I guess in such a case one could argue that an adapter function should never need to see more than the from and to heaps, rather than all heaps in the graph, so maybe it's OK. I'm not entirely sure; just trying to avoid creating surprising limits if we can.)

view this post on Zulip Wasmtime GitHub notifications bot (Nov 18 2022 at 23:45):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 19 2022 at 00:08):

jameysharp created PR review comment:

A related question: Heap is an index into the heaps declared on the current function, so this hypothetical 256-heap limit would be per-function; do I have that right? So a core wasm module, or a collection of components, could still have billions of heaps as long as only a few are used in each function. I think that's assumed in your comment about adapter functions, but I don't understand the heap infrastructure well, so I wanted to check.

I don't have strong feelings on this either way, it just seems like the instruction is so close to fitting without a side table that we ought to at least think about it a little.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 19 2022 at 00:08):

jameysharp submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 19 2022 at 00:48):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 19 2022 at 00:48):

cfallin created PR review comment:

Indeed, it's per-function. I haven't looked closely at how cranelift-wasm actually defines heaps, but I suspect that if it doesn't already create descriptors on demand as heaps are actually referenced, it wouldn't be hard to make it do so.
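A toy illustration of the per-function indexing (placeholder types, not Cranelift's actual heap-declaration API):

```rust
// Entity references like `Heap` index into a table owned by one function,
// so a 256-entry limit would apply per function, not per module or component.
#[derive(Default)]
struct Function {
    heaps: Vec<&'static str>, // declared heap descriptors, indexed by `Heap`
}

impl Function {
    fn declare_heap(&mut self, descr: &'static str) -> u8 {
        let idx = self.heaps.len();
        assert!(idx < 256, "per-function limit under the hypothetical u8 Heap");
        self.heaps.push(descr);
        idx as u8 // the reference a heap_load/heap_store would carry
    }
}

fn main() {
    let (mut f, mut g) = (Function::default(), Function::default());
    // Both functions get a heap0; the indices don't collide because each
    // function has its own table.
    assert_eq!(f.declare_heap("linear memory of module A"), 0);
    assert_eq!(g.declare_heap("linear memory of module B"), 0);
}
```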

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:17):

fitzgen created PR review comment:

Ah, this is just a copy-paste of ir::Heap's docs that I forgot to update.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:17):

fitzgen submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:22):

fitzgen submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:22):

fitzgen created PR review comment:

Mostly because error handling in the interpreter is pretty weird and there is no easy way to turn arbitrary impl Error types into traps. But saturating should be correct here, and it seems less wordy and involved than the checked adds, to me.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:23):

fitzgen updated PR #5300 from heap-load-store to main.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:24):

fitzgen submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:24):

fitzgen created PR review comment:

Right now, all of our entities are u32s, so this feels like a larger change that we would want to think over a bit. As such, I'm not going to block landing this PR on thinking about that.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 22:24):

fitzgen has enabled auto merge for PR #5300.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 21 2022 at 23:00):

fitzgen merged PR #5300.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 22 2022 at 00:04):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 22 2022 at 00:04):

cfallin created PR review comment:

I think I'd actually prefer the checked_add/expect sequence: saturating_add is explicitly incorrect in general for some values allowed by the types (close to the limit), and is a suspicious and unexpected thing to see in an address computation. It's correct here because of the ranges involved, but those invariants are left implicit and not at all obvious to the reader. If we're comfortable saying that the saturation should never actually happen (in the current version), then we should be equally happy to say that the expect should never actually fail (in the proposed version), except that in the latter case we'll know if we're wrong.

view this post on Zulip Wasmtime GitHub notifications bot (Nov 22 2022 at 00:04):

cfallin edited PR review comment.

