alexcrichton added the performance label to Issue #10102.
alexcrichton added the pulley label to Issue #10102.
alexcrichton opened issue #10102:
I wasnted to open a meta-issue about tracking the performance of Pulley. There's a few items I've identified about improving Pulley's performance which I'm unable to tackle today myself so I'm hoping to track both the meta-topic here of Pulley's performance.
Overall Pulley is in a relatively good spot with respect to performance right now, but Pulley is by no means outstripping other wasm interpreters. Pulley's chief downside, from what I can tell, is that it starts from a much lower level of abstraction than other interpreters: CLIF, instead of wasm. This particularly hurts memory accesses, where Pulley fundamentally uses two opcodes per memory access: one for the bounds check and one for the actual load/store. In terms of interpreter performance it's pretty costly to turn the interpreter loop twice per load/store where other interpreters are likely only turning the loop once.
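To illustrate the dispatch cost, here's a minimal sketch of an interpreter loop where a guest load costs two turns of the loop (one for the bounds check, one for the load) versus a fused macro-op costing one. The opcode names and encoding are illustrative, not Pulley's actual instruction set:

```rust
// Hypothetical opcode set; names are illustrative, not Pulley's real encoding.
#[derive(Clone, Copy)]
enum Op {
    // Trap if regs[addr] + 4 exceeds regs[bound].
    TrapIfOutOfBounds { addr: usize, bound: usize },
    // Load a u32 from linear memory at regs[addr].
    Load32 { dst: usize, addr: usize },
    // Fused macro-op: bounds check + load in a single dispatch.
    BoundsCheckedLoad32 { dst: usize, addr: usize, bound: usize },
    Done,
}

// Runs the program and returns the number of interpreter-loop turns taken.
fn run(ops: &[Op], regs: &mut [u64], mem: &[u8]) -> Result<u64, &'static str> {
    let mut pc = 0;
    let mut dispatches = 0u64;
    loop {
        dispatches += 1;
        match ops[pc] {
            Op::TrapIfOutOfBounds { addr, bound } => {
                if regs[addr] + 4 > regs[bound] {
                    return Err("out of bounds");
                }
            }
            Op::Load32 { dst, addr } => {
                let a = regs[addr] as usize;
                regs[dst] = u32::from_le_bytes(mem[a..a + 4].try_into().unwrap()) as u64;
            }
            Op::BoundsCheckedLoad32 { dst, addr, bound } => {
                if regs[addr] + 4 > regs[bound] {
                    return Err("out of bounds");
                }
                let a = regs[addr] as usize;
                regs[dst] = u32::from_le_bytes(mem[a..a + 4].try_into().unwrap()) as u64;
            }
            Op::Done => return Ok(dispatches),
        }
        pc += 1;
    }
}
```

With the two-opcode form a single guest load costs two dispatches; the fused form costs one, which is the motivation for the macro-op discussion below.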
That's not to say that Pulley is fundamentally less performant than other interpreters, however. Pulley also has the strengths of an optimizing compiler such as Cranelift to perform relatively advanced optimizations such as hoisting loads/stores out of loops. Pulley also can relatively easily add macro-ops as necessary to improve performance as well.
Locally, though, the performance issues I've seen with Pulley are:
- [ ] As mentioned above, loads/stores are two opcodes. This is because one opcode is "rooted" in the CLIF `trapnz` node for the bounds check, and another is rooted in the load/store itself. Lowering in ISLE currently has no way to merge these two instructions into a single trapping instruction. Improving this would need the ability to lower two terms at once (sort of?) or something like a peephole optimization pass after VCode is created.
- [ ] Pulley register moves are not cheap, unlike on native platforms, and regalloc2's allocation decisions don't always minimize register moves. One example is https://github.com/bytecodealliance/wasmtime/issues/9942. Other decisions I've found hard to isolate into reproducible examples, but when benchmarking this binary I've seen a hot loop that both starts and ends with the same `xmov`, so I'm not sure why it's there. I've tried to isolate some small examples in `tests/disas/pulley/coremark-1.wat` and `tests/disas/pulley/fib.wat` in-tree.
- [ ] In general Cranelift isn't the best at optimizing explicitly-bounds-checked code. Pulley can't rely on signals for catching faults, meaning that all loads/stores are explicitly bounds-checked. One example of suboptimal performance here is that the bound of linear memory, stored in `VMContext`, isn't hoisted outside of a loop. The bound is reloaded each iteration of the loop despite the loop not doing anything that can mutate the bound (such as calling another function or writing to `VMContext`). The reason for this is that the base pointer of memory is marked (correctly) as `readonly` but the bound is not `readonly` (also correctly).
- [ ] It's not clear what to do with the interpreter loop in terms of the "match loop" vs. the "tail loop". This is additionally tracked in https://github.com/bytecodealliance/wasmtime/issues/9995, and conventional wisdom is that the "tail loop" should be faster, but that doesn't seem to be true for all benchmarks. This probably needs more investigation as I'm not sure what's going on myself.
- [ ] One possible vector for optimization is opcode decoding in the interpreter. Currently all immediates are loaded one-by-one from memory, but it should be possible to load multiple at once by issuing a pointer-size load at the start of an instruction. This in theory reduces the number of loads that the native CPU has to pipeline while putting more work on shifts/bitops in the ALU. The downside of this approach is that we have to be more careful, as it's possible to read past the end of a function when decoding it. Initial benchmarking additionally showed that this wasn't a speedup on x64. I've got a branch implementing this, but it's not in a clean enough state to land in-tree.
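The last item above can be sketched as follows: decode three one-byte immediates either with three separate loads, or with one wider load plus shifts. The function names are illustrative, and as noted, a real implementation must guard against reading past the end of the function body:

```rust
// Byte-at-a-time decoding: one memory load per immediate.
fn decode_bytewise(code: &[u8], pc: usize) -> (u8, u8, u8) {
    (code[pc], code[pc + 1], code[pc + 2])
}

// Wide decoding: a single 4-byte load, then shifts/masks to extract fields.
// This trades loads for ALU work, and assumes at least 4 readable bytes
// remain at `pc` (the "careful not to read past the end" caveat above).
fn decode_wide(code: &[u8], pc: usize) -> (u8, u8, u8) {
    let word = u32::from_le_bytes(code[pc..pc + 4].try_into().unwrap());
    (
        (word & 0xff) as u8,
        ((word >> 8) & 0xff) as u8,
        ((word >> 16) & 0xff) as u8,
    )
}
```

Both decoders produce the same operands; the open question from the benchmarking above is whether the load-for-ALU trade actually wins on a given CPU.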
I plan on adding more items here over time as I see them, and/or spinning off these to sub-issues as appropriate.
fitzgen commented on issue #10102:
> - As mentioned above, loads/stores are two opcodes. This is because one opcode is "rooted" in the CLIF `trapnz` node for the bounds check. Another node is rooted in the load/store itself. Lowering in ISLE has no way currently to merge these two instructions together into a single trapping instruction. Improving this would need the ability to lower two terms at once (sort of?) or something like a peephole optimization pass after VCode is created.

Alternatively, rather than making instruction selection support lowering two root instructions at once (which is hard), we could try something like this:

- we add to Pulley a `bounds_checked_{load,store} dest, addr, bound` family of instructions (exact details to vary with what we actually need) that are defined to trap when the address is out of bounds
- we enable spectre mitigations, or something new but not "spectre" that uses selects instead of `trap[n]z`s for bounds checks, so that we see the bounds check in the data flow into the memory operation, rather than it being a separate `trap[n]z`, e.g.:

  ```
  v0 = select is_in_bounds, addr, zero
  v1 = load v0
  ```

- we add lowering rules to pattern match on that data flow going into a load/store and emit `bounds_checked_{load,store}` instructions when possible
- bonus: figure out how to avoid re-adding bounds checks to every load/store during lowering when we already deduped them in the mid-end, so that the first memory operation lowers to a `bounds_checked_*` operation but subsequent ones lower to raw loads/stores
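The select-then-load semantics can be modeled in a few lines of Rust: the `select` clamps an out-of-bounds address to the trapping sentinel (zero, as in the snippet above), and the load traps on that sentinel instead of going through a separate `trap[n]z`. This is only a semantic sketch, with address 0 treated as unmapped so valid guest addresses start at 1:

```rust
// Semantic model of `v0 = select is_in_bounds, addr, zero; v1 = load v0`.
// Because the bounds check feeds the load's address operand, a lowering rule
// can match the select+load pair and emit one fused bounds-checked load.
fn select_then_load(mem: &[u8], addr: usize, bound: usize) -> Result<u8, &'static str> {
    // Address 0 is the trapping sentinel in this model, so it is never in bounds.
    let in_bounds = addr != 0 && addr < bound;
    let effective = if in_bounds { addr } else { 0 }; // the `select`
    if effective == 0 {
        // The load itself traps on the sentinel address.
        return Err("trap: out-of-bounds access");
    }
    Ok(mem[effective])
}
```

The key property is that the trap is now reachable only through the load's own data flow, which is what makes the single-root pattern match in lowering possible.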
fitzgen commented on issue #10102:
cc https://github.com/bytecodealliance/wasmtime/issues/10111, since I believe we've seen some pulley benchmarks where a conditional trap was deduplicated within a loop body, but not hoisted up above the loop for a combo of reasons (see that issue for more discussion).
alexcrichton commented on issue #10102:
For the spectre ops, I think that could work, yeah. We currently discard all trap codes in `MemFlags` on loads/stores, but that's mainly because there are none today. That could be used to hook into Pulley where we can semantically have instructions that are "trap if addr==0, otherwise do the load/store". We could then move all macro-ops like `*_g32` addressing to these semantics and fold the spectre-check + trapping-load into the new `*_g32` ops.
I'll see if I can't play around with that this week.
Last updated: Feb 28 2025 at 03:10 UTC