fitzgen opened PR #6835 from fitzgen:refactor-pooling-allocator to bytecodealliance:main:
We used to have one index allocator, an index per instance, and give out N tables and M memories to every instance regardless of how many tables and memories they actually need.

Now we have one index allocator for memories and another for tables. An instance isn't associated with a single index; instead, each of its memories and tables has its own index. We allocate exactly as many tables and memories as the instance actually needs.

Ultimately, this gives us better component support, where a component instance might have varying numbers of internal tables and memories.

Additionally, you can now limit the number of tables, memories, and core instances a single component can allocate from the pooling allocator, even if there is capacity available. This gives embedders tools to limit individual component instances and prevent them from hogging too much of the pooling allocator's resources.
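For embedders migrating to the new configuration, the limits described above would be set on the pooling allocator roughly as in the sketch below. This is a hedged illustration: the knob names (`total_memories`, `max_memories_per_component`, etc.) and the counts are assumptions based on the description above, not copied from the PR's final API.

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() {
    // Sketch only: the method names below are assumptions based on the PR
    // description, not necessarily the exact API that landed.
    let mut pool = PoolingAllocationConfig::default();

    // Total pool capacities: exact counts reserved up front, shared by all
    // instances in the engine.
    pool.total_memories(200);
    pool.total_tables(200);
    pool.total_core_instances(100);

    // Per-component limits: a single component instance cannot grab more than
    // this many resources, even if the pool still has spare capacity.
    pool.max_memories_per_component(4);
    pool.max_tables_per_component(4);
    pool.max_core_instances_per_component(8);

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(pool));
    let _engine = Engine::new(&config).expect("failed to create engine");
}
```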
TODO before landing:
- [ ] Update `RELEASES.md` with a heads-up about the config changes and a small guide on how to migrate existing setups
fitzgen requested alexcrichton for a review on PR #6835.
fitzgen requested wasmtime-core-reviewers for a review on PR #6835.
fitzgen submitted PR review.
fitzgen created PR review comment:
Unsure whether this is better as-written, or if moving the `MemoryAllocationIndex` into `wasmtime_runtime::Memory` is better. Feel free to bike shed.
fitzgen requested wasmtime-fuzz-reviewers for a review on PR #6835.
fitzgen updated PR #6835.
fitzgen requested wasmtime-default-reviewers for a review on PR #6835.
fitzgen edited PR #6835:
We used to have one index allocator, an index per instance, and give out N tables and M memories to every instance regardless of how many tables and memories they actually need.

Now we have one index allocator for memories and another for tables. An instance isn't associated with a single index; instead, each of its memories and tables has its own index. We allocate exactly as many tables and memories as the instance actually needs.

Ultimately, this gives us better component support, where a component instance might have varying numbers of internal tables and memories.

Additionally, you can now limit the number of tables, memories, and core instances a single component can allocate from the pooling allocator, even if there is capacity available. This gives embedders tools to limit individual component instances and prevent them from hogging too much of the pooling allocator's resources.
TODO before landing:
- [x] Update `RELEASES.md` with a heads-up about the config changes and a small guide on how to migrate existing setups
fitzgen updated PR #6835.
fitzgen updated PR #6835.
fitzgen requested cfallin for a review on PR #6835.
fitzgen submitted PR review.
fitzgen created PR review comment:
Note from Chris on zoom: can we RAII this stuff?
fitzgen submitted PR review.
fitzgen created PR review comment:
hoist to avoid allocating code objects and things when validation will fail
fitzgen created PR review comment:
note from chris: can we newtype this tuple? `ModuleAndMemory` or something like that
fitzgen created PR review comment:
comment out of date
fitzgen submitted PR review.
fitzgen created PR review comment:
note from Chris: seems like some abstraction here could maybe improve things, but I don't know what it is. For now, can we have a comment about why we can't use associated types because of how this is used as a trait object?
fitzgen created PR review comment:
note from chris: diagram/comment need updates
fitzgen created PR review comment:
leave a comment that we could keep track of how many memories we have per module instead of probing all possible indices if this ever shows up in a profile
fitzgen created PR review comment:
Note: I am going to avoid an RAII type because

- we would need two types, one for core module instantiation and one for component instantiation, and
- only the error paths would be able to use these, not the success paths, because the RAII type would need to wrap `self` here, which isn't feasible once the instance is returned.

Therefore, I'll do the immediately-invoked closure thing instead.
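As a rough illustration of the immediately-invoked closure pattern being described (a standalone sketch with made-up names, not the PR's actual allocator code): the closure lets `?` short-circuit on any partial failure, while the single `is_err()` check afterwards is the one place where already-acquired resources get released.

```rust
// Hypothetical resource pool, used only to illustrate the pattern; none of
// these names come from the PR.
struct Pool {
    free: Vec<u32>,
}

struct Slot(u32);

impl Pool {
    fn grab(&mut self) -> Result<Slot, String> {
        self.free.pop().map(Slot).ok_or_else(|| "pool exhausted".to_string())
    }

    // Deallocation is infallible: if releasing could fail, a failed release of
    // the first resource could leak the rest.
    fn release(&mut self, slot: Slot) {
        self.free.push(slot.0);
    }

    // Allocate two slots. The immediately-invoked closure lets us use `?`
    // internally while keeping a single cleanup point on the error path.
    fn grab_pair(&mut self) -> Result<(Slot, Slot), String> {
        let mut first = None;
        let result = (|| {
            first = Some(self.grab()?);
            let second = self.grab()?;
            Ok((first.take().unwrap(), second))
        })();
        if result.is_err() {
            // Single cleanup path for everything allocated before the failure.
            if let Some(slot) = first.take() {
                self.release(slot);
            }
        }
        result
    }
}

fn main() {
    let mut pool = Pool { free: vec![0] };
    // Only one slot is available, so grabbing a pair fails and the first slot
    // is returned to the pool by the cleanup path.
    assert!(pool.grab_pair().is_err());
    assert_eq!(pool.free.len(), 1);
}
```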
fitzgen submitted PR review.
fitzgen created PR review comment:
request for comment: deallocation needs to be infallible, because if it weren't, we could leak if, say, the first of two memories failed to drop.
fitzgen created PR review comment:
or we can use an immediately-invoked closure so that we can `?` inside it but still have only a single place for cleanup on error
fitzgen created PR review comment:
add comment about how this is needed for validation but does not factor into the pre-allocated pool size and its layout
cfallin submitted PR review:
Together with earlier zoom review and associated comments, this overall looks great to me! High-quality implementation with good attention paid to safety (e.g. index newtypes). A few comments below as well but nothing too major.
cfallin created PR review comment:
likewise here, and for stacks, total core instances, ... below
cfallin created PR review comment:
should this be called `max_...` (in line with the other limits)?
cfallin created PR review comment:
as above, `max_...`?
cfallin created PR review comment:
here we're using the component-instance array just for its length -- could we store a counter rather than actual instance handles?
fitzgen submitted PR review.
fitzgen created PR review comment:
This one isn't really a maximum, it is the exact number of memories we will have capacity for in the pool. So I'll leave it as-is.
fitzgen submitted PR review.
fitzgen created PR review comment:
Ditto.
cfallin submitted PR review.
cfallin created PR review comment:
Hmm, yeah, I guess it's kind of a matter of semantics -- it is the maximum number that we can allocate across the whole pool. ("Capacity" is another way of saying "maximum", I guess.) But I'll defer to your judgment here if you want to keep as-is.
fitzgen updated PR #6835.
fitzgen updated PR #6835.
fitzgen has enabled auto merge for PR #6835.
fitzgen updated PR #6835.
fitzgen updated PR #6835.
fitzgen updated PR #6835.
fitzgen has enabled auto merge for PR #6835.
fitzgen updated PR #6835.
fitzgen has enabled auto merge for PR #6835.
fitzgen updated PR #6835.
fitzgen has enabled auto merge for PR #6835.
fitzgen merged PR #6835.
alexcrichton submitted PR review:
This all looks great to me, thanks again for tackling this!
alexcrichton created PR review comment:
Mind throwing in a debug assert here that the return value is not 0? (i.e. this never goes negative)
alexcrichton created PR review comment:
Would it be possible to keep this as one trait? Inferring why this was split into two, it seems like it wants to guarantee that the default implementations of methods are used, but this is purely internal and it's already an `unsafe trait`, so I think that should be enough to cover the bases? (I don't think we're at risk of duplicating these default trait method implementations anywhere)
alexcrichton created PR review comment:
I mentioned this in person as well, but would it be possible to eschew this index (and the one above) and infer the index from an address in the pooling allocator?
alexcrichton created PR review comment:
(and the other decrement methods too)