Stream: git-wasmtime

Topic: wasmtime / PR #9341 Add support for `funcref`s inside GC ...


view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 00:01):

fitzgen opened PR #9341 from fitzgen:funcrefs-in-gc-heap to bytecodealliance:main.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 00:01):

fitzgen requested alexcrichton for a review on PR #9341.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 00:01):

fitzgen requested wasmtime-core-reviewers for a review on PR #9341.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 02:14):

github-actions[bot] commented on PR #9341:

Subscribe to Label Action

cc @fitzgen

This issue or pull request has been labeled: "wasmtime:api", "wasmtime:ref-types"

Thus the following users have been cc'd because of the following labels:

To subscribe or unsubscribe from this label, edit the `.github/subscribe-to-label.json` configuration file.

Learn more.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 15:47):

fitzgen updated PR #9341.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 15:47):

fitzgen commented on PR #9341:

Rebased to resolve conflicts.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 16:13):

alexcrichton submitted PR review:

I think this is all reasonable enough, but it definitely further amplifies my worries about the cost of GC and runtime performance. Every write of a funcref now requires at least a hash table lookup, and reads can require taking a read lock on a global RwLock. On one hand I realize that the goal is to get GC working, but on the other hand I have a hard time seeing how this is going to be a suitable performance profile for anyone actually using GC. I do realize, though, that improving on this is going to be significantly difficult: it would mean, for example, exposing the slab to JIT code or implementing a GC of the slab entries themselves.

In that sense I'm going to go ahead and approve this as-is, since I don't believe there are any correctness issues with it. (One minor worry is that the hash map isn't GC'd right now, but that mirrors how we don't implement instance-level GC within a Store, so they end up more or less the same size, O(N).) Do you think it would be useful though to build up a list of the possible performance optimizations we can think of to keep track of things? I do still agree that this is the best route for getting things implemented and to a point where we can measure performance to see what to optimize, but I'm also afraid of losing context on the myriad of possible ways to optimize things as PRs land incrementally.
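
To make the tradeoff concrete, here is a minimal sketch of the pattern under discussion, using a hypothetical `FuncRefTable` rather than Wasmtime's actual types: a write deduplicates the funcref through a hash map before storing a small slot index into the GC object, and a read takes the read lock to index the shared slab.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

/// Hypothetical funcref interning table (not Wasmtime's actual code): GC
/// objects store a small slot index instead of a raw pointer, so every
/// write does a hash lookup (for dedup) and every read goes through a lock.
#[derive(Default)]
struct FuncRefTable {
    slab: Vec<usize>,           // stand-in for `*mut VMFuncRef`
    dedup: HashMap<usize, u32>, // funcref -> slot index
}

#[derive(Default)]
struct SharedFuncRefTable(RwLock<FuncRefTable>);

impl SharedFuncRefTable {
    /// Write path: storing a funcref into a GC object interns it first.
    fn intern(&self, funcref: usize) -> u32 {
        let mut table = self.0.write().unwrap();
        if let Some(&id) = table.dedup.get(&funcref) {
            return id; // hash hit: reuse the existing slot
        }
        let id = table.slab.len() as u32;
        table.slab.push(funcref);
        table.dedup.insert(funcref, id);
        id
    }

    /// Read path: loading a funcref back out takes the read lock and indexes.
    fn resolve(&self, id: u32) -> usize {
        self.0.read().unwrap().slab[id as usize]
    }
}
```

The dedup map is what keeps repeated writes of the same funcref from growing the slab without bound, which is also why every write pays for at least one hash lookup.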

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 16:24):

fitzgen commented on PR #9341:

I share your concerns. It certainly is not going to be performant as-is. My most-immediate thoughts are that we would do roughly the following to speed this up:

My hope is that the above would result in good enough perf for us to not have to revisit this for quite a while.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 16:38):

alexcrichton commented on PR #9341:

Two questions actually:

  • On the write side I was hoping we could do something purely in JIT code where it allocates space from the slab itself and the slab slots are managed by the normal GC. Would that be possible? For example during tracing we'd record which slab slots were in use and deallocation would be to iterate over the existing slab slots (and compaction may not be too hard either).
  • On the read side is there any way to avoid the lock?

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 16:51):

fitzgen merged PR #9341.

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 18:46):

fitzgen commented on PR #9341:

  • On the write side I was hoping we could do something purely in JIT code where it allocates space from the slab itself and the slab slots are managed by the normal GC. Would that be possible? For example during tracing we'd record which slab slots were in use and deallocation would be to iterate over the existing slab slots (and compaction may not be too hard either).

I think the tracing part is definitely doable. Compaction should also be possible. But the tricky part will be deduplicating funcrefs in the slab (i.e. the reason the hash map is in there now). Without that deduping, I fear that it will be too easy to fill the funcref table, require a GC, and end up thrashing the collector.
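
As a rough illustration of that direction (hypothetical names, not Wasmtime's implementation): tracing would mark which slab slots are reachable from live GC objects, and a sweep would free the rest while keeping the dedup map consistent so the slab does not fill up with duplicates.

```rust
use std::collections::HashMap;

/// Hypothetical funcref slab whose slots are reclaimed by the normal GC.
struct FuncRefSlab {
    slots: Vec<Option<usize>>,   // stand-in for `*mut VMFuncRef`
    free: Vec<u32>,              // free list of reclaimed slot indices
    dedup: HashMap<usize, u32>,  // funcref -> slot, so repeated writes reuse a slot
}

impl FuncRefSlab {
    /// Sweep, run after tracing: `marked[i]` is true if some live GC object
    /// still holds slot `i`.
    fn sweep(&mut self, marked: &[bool]) {
        for (i, slot) in self.slots.iter_mut().enumerate() {
            if let Some(funcref) = *slot {
                if !marked.get(i).copied().unwrap_or(false) {
                    self.dedup.remove(&funcref); // keep the dedup map in sync
                    *slot = None;
                    self.free.push(i as u32);
                }
            }
        }
    }

    /// Allocation path (the part that could eventually run from JIT code):
    /// reuse a freed slot if one exists, otherwise grow the slab.
    fn insert(&mut self, funcref: usize) -> u32 {
        if let Some(&id) = self.dedup.get(&funcref) {
            return id;
        }
        let id = match self.free.pop() {
            Some(id) => {
                self.slots[id as usize] = Some(funcref);
                id
            }
            None => {
                self.slots.push(Some(funcref));
                (self.slots.len() - 1) as u32
            }
        };
        self.dedup.insert(funcref, id);
        id
    }
}
```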

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 19:32):

fitzgen commented on PR #9341:

  • On the read side is there any way to avoid the lock?

I think we solve this via the approach that we have talked about previously of having a shared, lock-free arena for the supertypes arrays, so that particular subtyping checks don't need to lock anything.

Will write up an issue about this soon.

https://github.com/bytecodealliance/wasmtime/issues/9352
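
A minimal sketch of what that read path could look like, assuming hypothetical `TypeRegistry`/`TypeInfo` types rather than Wasmtime's actual registry: registration still takes a lock in this sketch, but the published supertype arrays are immutable and reference-counted, so the subtyping check on the hot path is just an indexed load and a compare, with no lock taken.

```rust
use std::sync::{Arc, Mutex};

/// Hypothetical type id; stands in for an engine-level type index.
#[derive(Clone, PartialEq, Eq)]
struct TypeId(u32);

/// Published once at registration time and never mutated afterwards.
struct TypeInfo {
    id: TypeId,
    /// supertypes[d] is this type's ancestor at depth d.
    supertypes: Arc<[TypeId]>,
}

/// Only registration locks; the hot read path works exclusively with
/// already-published, immutable `TypeInfo`s.
#[derive(Default)]
struct TypeRegistry {
    types: Mutex<Vec<Arc<TypeInfo>>>,
}

impl TypeRegistry {
    fn register(&self, id: TypeId, supertypes: Vec<TypeId>) -> Arc<TypeInfo> {
        let info = Arc::new(TypeInfo { id, supertypes: supertypes.into() });
        self.types.lock().unwrap().push(info.clone());
        info
    }
}

/// The subtyping check itself: O(1) and lock-free. Index the supertypes
/// array at the expected type's depth and compare ids.
fn is_subtype(actual: &TypeInfo, expected: &TypeInfo, expected_depth: usize) -> bool {
    actual.id == expected.id
        || actual.supertypes.get(expected_depth) == Some(&expected.id)
}
```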

view this post on Zulip Wasmtime GitHub notifications bot (Oct 01 2024 at 19:33):

fitzgen commented on PR #9341:

Do you think it would be useful though to build up a list of the possible performance optimizations we can think of to keep track of things?

Yeah, I can write up an issue with a list of these.

https://github.com/bytecodealliance/wasmtime/issues/9351


Last updated: Dec 23 2024 at 12:05 UTC