cfallin opened PR #3738 from pooling-affinity to main:
(Builds on #3697.)

This policy attempts to reuse the same instance slot for subsequent instantiations of the same module. This is particularly useful when using a pooling backend such as memfd that benefits from this reuse: for example, in the memfd case, instantiating the same module into the same slot allows us to avoid several calls to mmap() because the same mappings can be reused.

The policy tracks a freelist per "compiled module ID", and when allocating a slot for an instance, tries these three options in order:

1. A slot from the freelist for this module (i.e., last used for another instantiation of this particular module);
2. A slot that has never been used before; or
3. A slot that was last used by some other module.

The "victim" module for choice 3 is randomly chosen, to avoid biases due to the hashmap's ordering (which would be persistent across the lifetime of the pooling allocator).

This policy is now the default when the memfd backend is selected via the `memfd-allocator` feature flag.
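As a rough illustration, a minimal sketch of this three-way choice might look like the following. The names (`AffinityPicker`, `pick_slot`) are hypothetical, not the PR's actual API, and the bookkeeping that keeps the lists consistent and the victim choice truly random is elided:

```rust
use std::collections::HashMap;

// Minimal sketch of the three-way choice described above; illustrative only.
struct AffinityPicker {
    /// Free slots last used by each module, keyed by compiled-module id.
    per_module: HashMap<u64, Vec<usize>>,
    /// Free slots never used by any module.
    never_used: Vec<usize>,
}

impl AffinityPicker {
    fn pick_slot(&mut self, module_id: u64) -> Option<usize> {
        // 1. A slot last used by this same module: its mappings can be
        //    reused directly.
        if let Some(slot) = self
            .per_module
            .get_mut(&module_id)
            .and_then(|list| list.pop())
        {
            return Some(slot);
        }
        // 2. A slot that has never been used before.
        if let Some(slot) = self.never_used.pop() {
            return Some(slot);
        }
        // 3. Steal a slot from some other ("victim") module; the real
        //    policy picks the victim randomly to avoid hashmap-order bias.
        let victim = *self.per_module.keys().next()?;
        let slot = self.per_module.get_mut(&victim).and_then(|list| list.pop());
        if self.per_module.get(&victim).map_or(false, |list| list.is_empty()) {
            self.per_module.remove(&victim); // drop drained free lists
        }
        slot
    }
}
```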
cfallin requested alexcrichton for a review on PR #3738.
fitzgen submitted PR review.
fitzgen submitted PR review.
fitzgen created PR review comment:
Why not change `CompiledModuleId` to be a newtype around `NonZeroU64` and then we can use plain `Option` and `None` without pessimizing size, instead of having our own custom sentinel that we can forget to check for?
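For context, a sketch of the niche optimization being relied on here (hypothetical definition; the constant just checks that the optimization applies):

```rust
use std::num::NonZeroU64;

// Zero is the niche, so `Option<CompiledModuleId>` is the same size as a
// bare `u64`, and `None` replaces a custom sentinel value.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct CompiledModuleId(NonZeroU64);

// Compile-time check that the niche optimization applies.
const _: () = assert!(
    std::mem::size_of::<Option<CompiledModuleId>>() == std::mem::size_of::<u64>()
);
```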
fitzgen created PR review comment:
Also, this is an existing issue (introduced in the memfd PR?) that is maybe going to become worse with these changes, but we use a relaxed ordering in the `fetch_add` when allocating new `CompiledModuleId`s below -- it seems like this could be a problem on non-x86?
cfallin created PR review comment:
Re: relaxed ordering, that's fine -- all that means is that the atomic ID-allocation event is not synchronized with respect to any other loads/stores in the system (there are no fences). But the fetch-add itself is still an atomic action, with some linear ordering amongst the other accesses to the "next ID" atomic. That's sufficient for the invariants we need: we just need unique IDs for each module, we don't care what they are.
(Good idea wrt `NonZeroU64`; I'll make that change!)
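A minimal sketch of the pattern under discussion (hypothetical names): the read-modify-write is atomic regardless of the ordering, so uniqueness holds even with `Ordering::Relaxed`:

```rust
use std::num::NonZeroU64;
use std::sync::atomic::{AtomicU64, Ordering};

// `fetch_add` is one atomic read-modify-write, so every caller observes a
// distinct value even with `Ordering::Relaxed`; relaxed only means the
// increment is not a synchronization point for *other* loads/stores,
// which is irrelevant when all we need is unique IDs.
static NEXT_ID: AtomicU64 = AtomicU64::new(1);

fn allocate_module_id() -> NonZeroU64 {
    let id = NEXT_ID.fetch_add(1, Ordering::Relaxed);
    NonZeroU64::new(id).expect("counter starts at 1 and only increments")
}
```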
cfallin submitted PR review.
fitzgen submitted PR review.
fitzgen created PR review comment:
Ah yes, I didn't think about how the `fetch_add` itself was still atomic. Thanks!
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin edited PR #3738 from pooling-affinity to main:
(Builds on #3697.)

This policy attempts to reuse the same instance slot for subsequent instantiations of the same module. This is particularly useful when using a pooling backend such as memfd that benefits from this reuse: for example, in the memfd case, instantiating the same module into the same slot allows us to avoid several calls to mmap() because the same mappings can be reused.

The policy tracks a freelist per "compiled module ID", and when allocating a slot for an instance, tries these two options in order:

1. A slot from the freelist for this module (i.e., last used for another instantiation of this particular module), or
2. A slot that was last used by some other module or never before.

The "victim" slot for choice 2 is randomly chosen.

The data structures are carefully designed so that all updates are O(1), and the random selection involves no retry-loop.

This policy is now the default when the memfd backend is selected via the `memfd-allocator` feature flag.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin edited PR #3738 from pooling-affinity to main:
(Builds on #3697.)

This policy attempts to reuse the same instance slot for subsequent instantiations of the same module. This is particularly useful when using a pooling backend such as memfd that benefits from this reuse: for example, in the memfd case, instantiating the same module into the same slot allows us to avoid several calls to mmap() because the same mappings can be reused.

The policy tracks a freelist per "compiled module ID", and when allocating a slot for an instance, tries these two options in order:

1. A slot from the freelist for this module (i.e., last used for another instantiation of this particular module), or
2. A slot that was last used by some other module or never before.

The "victim" slot for choice 2 is randomly chosen.

The data structures are carefully designed so that all updates are O(1), and the random selection involves no retry-loop.

This policy is now the default when the memfd backend is selected via the `memfd-allocator` feature flag.
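To make the O(1), no-retry claim concrete: one standard way to remove an arbitrary (e.g., randomly chosen) free-list entry in constant time is to track each slot's position in the list, so a removal is a `swap_remove` plus one index fixup. A minimal sketch with hypothetical names (the PR's actual structures differ in detail):

```rust
/// Sketch of a free list supporting O(1) removal of an arbitrary entry.
struct FreeList {
    /// Ids of free slots, in arbitrary order.
    slots: Vec<usize>,
    /// For each slot id, its current position in `slots` (if free).
    index_of: Vec<Option<usize>>,
}

impl FreeList {
    /// Remove `slot` in O(1): swap_remove plus one index fixup, no scan
    /// and no retry loop, even when `slot` was chosen at random.
    fn remove(&mut self, slot: usize) {
        let pos = self.index_of[slot].take().expect("slot must be free");
        self.slots.swap_remove(pos);
        // swap_remove moved the former last element into `pos` (unless
        // `pos` itself was last); fix up that element's recorded index.
        if let Some(&moved) = self.slots.get(pos) {
            self.index_of[moved] = Some(pos);
        }
    }
}
```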
cfallin updated PR #3738 from pooling-affinity to main.
fitzgen submitted PR review.
fitzgen submitted PR review.
fitzgen created PR review comment:
This is all slot indices that are free, right? That is, an invariant of this code is that every free slot index is always in this list, right?
If my understanding is correct, then I think it makes sense to clarify this comment as such.
fitzgen created PR review comment:
All of the methods on `SlotState` panic if the `SlotState` is `Taken`. What do you think of refactoring `SlotState` to look like this:

```rust
enum SlotState {
    Taken,
    Free(FreeSlotState),
}

impl SlotState {
    // Do the expected matching where `Taken` returns `None` here...
    fn as_free_slot(&self) -> Option<&FreeSlotState> { ... }
    fn as_free_slot_mut(&mut self) -> Option<&mut FreeSlotState> { ... }
}

enum FreeSlotState {
    Empty { ... },
    Affinity { ... },
}

impl FreeSlotState {
    // all the methods above, which don't need to have panicking paths anymore
}
```

This way we can consolidate all the is-this-a-taken-slot checking into a single `slot_state[i].as_free_slot().expect("blah blah blah")` call, our types give us guarantees about their data, and potential panics aren't hidden behind method calls.
fitzgen created PR review comment:
Is the idea that as `Module`s get dropped, we don't need to actively empty the `per_module` free lists, because they will eventually get stolen by other instance allocations, and once the dropped module's free list is empty, it is removed from the `per_module` free lists? I think this would be good to test somehow, since it seems like an easy memory leak if we mess this up. Also, it seems like it would be good to document in a comment somewhere.
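For what it's worth, the invariant being asked about can be kept by pruning a module's entry as soon as its free list drains; a minimal hypothetical sketch (helper name and types are illustrative, not the PR's API):

```rust
use std::collections::HashMap;

// Taking the last free slot with affinity for a module also removes the
// module's map entry, so entries for dropped modules disappear as their
// slots are stolen rather than leaking forever.
fn take_from_module(
    per_module: &mut HashMap<u64, Vec<usize>>,
    id: u64,
) -> Option<usize> {
    let list = per_module.get_mut(&id)?;
    let slot = list.pop();
    if list.is_empty() {
        per_module.remove(&id); // no empty lists linger: no leak
    }
    slot
}
```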
fitzgen created PR review comment:
Maybe

```rust
debug_assert!(per_module.contains_key(&id));
debug_assert!(!per_module.get(&id).unwrap().is_empty());
```

at the start of this function?
fitzgen created PR review comment:
Nitpick: maybe name this `remove_global_free_list_item` to clarify the difference between this and `remove_module_free_list_item`?

Ditto about feeling like a free function.
fitzgen created PR review comment:
Nitpick: sort of feels like this should be a free function.
fitzgen created PR review comment:
This is a very helpful comment, thanks for writing this!
fitzgen created PR review comment:
I think the `id` is unnecessary here, and we can recover this from the `slot_state[index]`, if it exists? If so, removing `id` would help move us towards a single source of truth with fewer things that can get misaligned and out of sync, and would allow us to clean up slots that have a module affinity even when callers forgot to pass a module id here.
fitzgen created PR review comment:
It would be nice to have a newtype for slot indices (`SlotId`?) too. `new_id` threw me for a little bit of a loop: I had to double-check that this wasn't a `CompiledModuleId` but actually a slot index. It was what I expected, but the naming made me doubt myself enough to go verify.
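For illustration, the suggested newtype could be as small as this (a sketch; the PR later adds `SlotId` along with other index newtypes):

```rust
/// A distinct type for slot indices so they cannot be confused with a
/// `CompiledModuleId` or a raw integer.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct SlotId(usize);

impl SlotId {
    /// Recover the raw index for indexing into slot arrays.
    fn index(self) -> usize {
        self.0
    }
}
```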
fitzgen created PR review comment:
Nitpick: we know we are going to insert into this new vec so we can do `.or_insert_with(|| Vec::with_capacity(1))` here.
fitzgen created PR review comment:
Yeah so calling `free` without an `id` will create new `SlotState::Empty`s, which goes against that steady-state comment above (ignoring a global analysis of all `free` callers). So yeah, I think removing the `id` param would be nice if we can, since it also helps make it easier to verify statements about the correctness of this system.
fitzgen created PR review comment:
Could also have `SlotState::unwrap_free_slot[_mut]` methods to avoid the `expect` at every call site, but still have the potential panics "visible" to callers.
fitzgen created PR review comment:
and/or we could `s/new_id/slot_index/` here
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin submitted PR review.
cfallin created PR review comment:
Yep, clarified, thanks!
cfallin submitted PR review.
cfallin created PR review comment:
Sure, I refactored along these lines (and took the route of panicking in the as_free accessors). Good idea, thanks!
cfallin submitted PR review.
cfallin created PR review comment:
Yep, exactly; a slot's affinity remains until the slot is overwritten (reused for a different module). Extended the doc-comment, and added some testing for the keys of this hashmap (and that its values are non-empty lists) as well.
cfallin submitted PR review.
cfallin created PR review comment:
Yup, done on both.
cfallin submitted PR review.
cfallin created PR review comment:
I newtyped All The Things (`SlotId`, `GlobalFreeListIndex`, `PerModuleFreeListIndex`); hopefully this is better!
cfallin submitted PR review.
cfallin created PR review comment:
:+1:
cfallin submitted PR review.
cfallin created PR review comment:
Yes, that's a good point; made it a field of the `Taken` state now, to remember it for the free.
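A sketch of what that change might look like (field and type names are illustrative, not the PR's actual definitions): the `Taken` state carries the module affinity, so `free()` can recover it from `slot_state[index]` instead of taking an `id` argument:

```rust
use std::num::NonZeroU64;

// Hypothetical sketch: `Taken` remembers the module affinity so that
// freeing a slot does not require the caller to pass the module id.
struct CompiledModuleId(NonZeroU64);

enum FreeSlotState { /* Empty { .. } or Affinity { .. }, elided */ }

enum SlotState {
    Taken {
        // `None` for slots taken without a known module affinity.
        module_affinity: Option<CompiledModuleId>,
    },
    Free(FreeSlotState),
}
```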
cfallin submitted PR review.
cfallin created PR review comment:
Good point!
cfallin submitted PR review.
cfallin created PR review comment:
Indeed. The intent was to be a little more general (one could in theory choose not to retain the state that makes a slot have affinity), but as you say that complicates things a bit, and we should probably approach that as we need it. (And it need not necessarily touch the affinity state either; there's little harm in a stray affinity for a module that will never exist again.)
cfallin submitted PR review.
cfallin created PR review comment:
The first is covered by the `get_mut()` at the top; I added a `.expect()` rather than `.unwrap()` to make any failure clearer though. Added the second.
cfallin submitted PR review.
cfallin created PR review comment:
Done!
fitzgen submitted PR review.
fitzgen created PR review comment:
> The first is covered by the `get_mut()` at the top

FWIW, I don't mind having "redundant" assertions because, in some sense, debug assertions are always redundant, since we are checking things that we believe should always be true.
cfallin updated PR #3738 from pooling-affinity to main.
fitzgen submitted PR review.
fitzgen submitted PR review.
fitzgen created PR review comment:
I'd personally add `unwrap_` as a prefix to these match-on-a-variant-and-return-the-inner-bits-or-else-panic methods. Typically a plain `as_blah` would return an `Option<&T>` if it is possibly not a `T`, rather than panicking.
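A minimal sketch of that convention, reusing the `SlotState`/`FreeSlotState` shape from the earlier refactor suggestion (contents elided):

```rust
// `as_*` returns an `Option`, while `unwrap_*` panics and says so at the
// call site; enum contents are elided.
enum FreeSlotState { /* Empty { .. } or Affinity { .. }, elided */ }

enum SlotState {
    Taken { /* module affinity, elided */ },
    Free(FreeSlotState),
}

impl SlotState {
    /// Fallible accessor: `None` when the slot is `Taken`.
    fn as_free_slot(&self) -> Option<&FreeSlotState> {
        match self {
            SlotState::Free(f) => Some(f),
            SlotState::Taken { .. } => None,
        }
    }

    /// Panicking accessor: the `unwrap_` prefix makes the panic visible.
    fn unwrap_free_slot(&self) -> &FreeSlotState {
        self.as_free_slot().expect("slot should be free")
    }
}
```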
cfallin updated PR #3738 from pooling-affinity to main.
cfallin submitted PR review.
cfallin created PR review comment:
Done!
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin updated PR #3738 from pooling-affinity to main.
cfallin merged PR #3738.