fitzgen opened PR #9341 from fitzgen:funcrefs-in-gc-heap to bytecodealliance:main.
fitzgen requested alexcrichton for a review on PR #9341.
fitzgen requested wasmtime-core-reviewers for a review on PR #9341.
fitzgen updated PR #9341.
fitzgen commented on PR #9341:
Rebased to resolve conflicts.
alexcrichton submitted PR review:
I think this is all reasonable enough, but this is definitely further amplifying my worries about the cost of GC and runtime performance. Every write of a funcref now requires at least a hash table lookup, and reads can require taking a read lock on a global RwLock. On one hand I realize that the goal is to get GC working, but on the other hand I also have a hard time seeing how this is going to be a suitable performance profile for anyone actually using GC. I do realize, though, that improving on this is going to be significantly difficult: for example, exposing the slab to JIT code, or implementing a GC of the slab entries themselves.
In that sense I'm going to go ahead and approve this as-is, since I don't believe there are any correctness issues with it. (One minor worry is that the hash map isn't GC'd right now, but that mirrors how we don't implement instance-level GC within a Store, so they're both O(N) in size, more-or-less.) Do you think it would be useful, though, to build up a list of the possible performance optimizations we can think of, to keep track of things? I do still agree that this is the best route for getting things implemented and to a point where we can measure performance to see what to optimize, but I'm also afraid of losing context on the myriad of possible ways to optimize things as PRs land incrementally.
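For readers skimming the thread, here is a minimal sketch of the shape being discussed, with hypothetical names (FuncRefPtr, FuncRefTable) standing in for Wasmtime's actual types: the GC heap stores small ids, a slab maps ids back to funcrefs, and a hash map deduplicates pointers on the write path.

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for a raw funcref pointer.
type FuncRefPtr = usize;

/// Sketch only; not Wasmtime's actual implementation.
#[derive(Default)]
struct FuncRefTable {
    slab: Vec<FuncRefPtr>,           // id -> funcref
    dedup: HashMap<FuncRefPtr, u32>, // funcref -> id
}

impl FuncRefTable {
    /// Write path: every store of a funcref into the GC heap pays a hash
    /// lookup to find (or allocate) the funcref's slab id.
    fn intern(&mut self, f: FuncRefPtr) -> u32 {
        if let Some(&id) = self.dedup.get(&f) {
            return id;
        }
        let id = self.slab.len() as u32;
        self.slab.push(f);
        self.dedup.insert(f, id);
        id
    }

    /// Read path: id-to-funcref is a bounds-checked index. (The global
    /// RwLock in the real code guards the type registry used for the
    /// accompanying subtype check, which is not modeled here.)
    fn get(&self, id: u32) -> Option<FuncRefPtr> {
        self.slab.get(id as usize).copied()
    }
}
```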
fitzgen commented on PR #9341:
Do you think it would be useful though to build up a list of the possible performance optimizations we can think of to keep track of things?
Yeah, I can write up an issue with a list of these.
fitzgen commented on PR #9341:
I share your concerns. It certainly is not going to be performant as-is. My most immediate thoughts are that we would do roughly the following to speed this up:
- Expose the slab to Wasm, allowing id-to-funcref conversion to happen fully within Wasm code.
- For funcref-to-id conversion, add an LRU/associative cache to the vmctx (or maybe the runtime limits) to cache the results of the libcall and let the happy path stay within Wasm code. The slow path would still fall back to a libcall, however (I do not want to implement hashing in Wasm code and try to keep it in sync with the Rust hashing). See the sketch after this list for one possible shape of such a cache.
My hope is that the above would result in good enough perf for us to not have to revisit this for quite a while.
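A hedged sketch of the cache idea from the list above, with purely illustrative sizes and layout (a direct-mapped cache rather than a true LRU; nothing here is Wasmtime's actual vmctx layout). The point is that a hit stays in straight-line code, while a miss falls back to the libcall and then fills the slot:

```rust
const CACHE_SLOTS: usize = 64; // illustrative size

#[derive(Clone, Copy, Default)]
struct CacheEntry {
    funcref: usize, // 0 doubles as "empty" in this sketch
    id: u32,
}

struct FuncRefIdCache {
    entries: [CacheEntry; CACHE_SLOTS],
}

impl FuncRefIdCache {
    fn new() -> Self {
        FuncRefIdCache { entries: [CacheEntry::default(); CACHE_SLOTS] }
    }

    /// `slow_path` stands in for the libcall that does the hash-map lookup.
    fn funcref_to_id(&mut self, funcref: usize, slow_path: impl FnOnce(usize) -> u32) -> u32 {
        // Direct-mapped probe: JIT code would do the same shift-and-mod
        // arithmetic inline against a cache stored in the vmctx.
        let slot = (funcref >> 4) % CACHE_SLOTS;
        let entry = &mut self.entries[slot];
        if entry.funcref == funcref {
            return entry.id; // hit: never leaves wasm code
        }
        let id = slow_path(funcref); // miss: libcall into the runtime
        *entry = CacheEntry { funcref, id };
        id
    }
}
```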
alexcrichton commented on PR #9341:
Two questions actually:
- On the write side I was hoping we could do something purely in JIT code where it allocates space from the slab itself and the slab slots are managed by the normal GC. Would that be possible? For example, during tracing we'd record which slab slots were in use, and deallocation would be to iterate over the existing slab slots (and compaction may not be too hard either). (A sketch of this scheme follows after these questions.)
- On the read side is there any way to avoid the lock? That seems required currently for our threat model, but I'm basically wondering if there's any way we can get away with just an index comparison as opposed to a full subtype check.
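A sketch of the write-side idea in the first question, under the assumption (not implemented in this PR) that the collector traces ids out of the GC heap and sweeps the slab afterwards; all names here are hypothetical:

```rust
/// Slab whose slots are reclaimed by the normal GC rather than eagerly.
struct GcManagedSlab {
    slots: Vec<Option<usize>>, // Some(funcref) = occupied
    marks: Vec<bool>,          // one mark bit per slot
}

impl GcManagedSlab {
    /// Called during tracing for each funcref id found in the GC heap.
    fn mark(&mut self, id: u32) {
        self.marks[id as usize] = true;
    }

    /// After tracing: iterate the slab and free every unmarked slot, as
    /// suggested above ("deallocation would be to iterate over the
    /// existing slab slots").
    fn sweep(&mut self) {
        for (slot, marked) in self.slots.iter_mut().zip(self.marks.iter_mut()) {
            if !*marked {
                *slot = None; // slot becomes available for reallocation
            }
            *marked = false; // reset for the next collection
        }
    }
}
```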
fitzgen merged PR #9341.
fitzgen commented on PR #9341:
- On the read side is there any way to avoid the lock?
I think we solve this via the approach that we have talked about previously of having a shared, lock-free arena for the supertypes arrays, so that particular subtyping checks don't need to lock anything.
Will write up an issue about this soon.
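For context, the supertypes-array technique referred to here makes a subtype check constant-time: each type records its full chain of ancestors indexed by subtyping depth, so "is `sub` a subtype of `sup`?" is one bounds check plus one comparison. If those arrays live in a shared, append-only arena, readers need no lock. A minimal sketch with hypothetical types:

```rust
#[derive(PartialEq, Clone, Copy)]
struct TypeId(u32);

struct TypeInfo {
    /// supertypes[d] is this type's ancestor at subtyping depth d;
    /// the final entry is the type itself.
    supertypes: Vec<TypeId>,
}

/// Constant-time check: `sub` is a subtype of `sup` iff `sub`'s ancestor
/// at `sup`'s depth is exactly `sup`. No locking is required if the
/// TypeInfos live in an arena that is only ever appended to.
fn is_subtype(sub: &TypeInfo, sup: &TypeInfo, sup_id: TypeId) -> bool {
    let depth = sup.supertypes.len() - 1;
    sub.supertypes.get(depth) == Some(&sup_id)
}
```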
fitzgen commented on PR #9341:
On the write side I was hoping we could do something purely in JIT code where it allocates space from the slab itself and the slab slots are managed by the normal GC. Would that be possible? For example during tracing we'd record which slab slots were in use and deallocation would be to iterate over the existing slab slots (and compaction may not be too hard either).
I think the tracing part is definitely doable. Compaction should also be possible. But the tricky part will be deduplicating funcrefs in the slab (i.e. the reason the hash map is in there now). Without that deduping, I fear that it will be too easy to fill the funcref table, require a GC, and end up thrashing the collector.
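Compaction of the slab, as floated above, could look roughly like this hypothetical sketch: live entries slide toward the front, and a remap table records old-id to new-id so that ids stored in the GC heap can be rewritten during tracing.

```rust
/// Compact a mark-swept slab in place. Returns a remap table where
/// remap[old_id] = Some(new_id) for each surviving entry.
fn compact(slots: &mut Vec<Option<usize>>) -> Vec<Option<u32>> {
    let mut remap = vec![None; slots.len()];
    let mut next = 0;
    for old in 0..slots.len() {
        if let Some(funcref) = slots[old].take() {
            slots[next] = Some(funcref);
            remap[old] = Some(next as u32);
            next += 1;
        }
    }
    slots.truncate(next);
    remap
}
```

Deduplication is the part this sketch doesn't address: without the hash map, repeated writes of the same funcref would each burn a fresh slot, which is exactly the table-filling, collector-thrashing scenario described above.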