lann opened PR #11490 from lann:pooling-metrics to bytecodealliance:main:
This exposes some basic runtime metrics derived from the internal state of a `PoolingInstanceAllocator`.

Two new atomics were added to `PoolingInstanceAllocator`: `live_memories` and `live_tables`. While these counts could be derived from existing state, doing so would require acquiring mutexes on some inner state.

Fixes https://github.com/bytecodealliance/wasmtime/issues/11449
lann requested pchickey for a review on PR #11490.
lann requested wasmtime-core-reviewers for a review on PR #11490.
lann updated PR #11490.
lann has marked PR #11490 as ready for review.
lann submitted PR review.
lann created PR review comment:
I copied this approach from `increment_component_instance_count`, but I could easily be convinced that it would be fine to just increment after successful allocation.
lann created PR review comment:
I was on the fence about whether to just do `Ordering::Relaxed` here, but if someone ends up using metrics in actual business logic (like server backpressure) it would be a pretty nasty surprise for these to underflow...
lann submitted PR review.
lann edited PR review comment.
alexcrichton submitted PR review.
alexcrichton created PR review comment:
Similar to above, I'd say that using `Relaxed` here is fine since these aren't actually synchronizing with anything other than themselves.
alexcrichton created PR review comment:
You'll want to switch this to using a `Drop` implementation because otherwise this won't be correctly accounted for if the future is cancelled/dropped.
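The RAII pattern being suggested can be sketched roughly as follows. This is an illustrative standalone example, not Wasmtime's actual code; `Metrics` and `LiveMemoryGuard` are hypothetical names. The idea is that the decrement lives in `Drop`, so the counter is restored even when the owning future is cancelled and dropped mid-allocation:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical metrics holder, standing in for allocator-internal state.
struct Metrics {
    live_memories: AtomicU64,
}

// RAII guard: increments on creation, decrements on Drop, so the count
// stays balanced even if the owning future is dropped partway through.
struct LiveMemoryGuard<'a> {
    metrics: &'a Metrics,
}

impl<'a> LiveMemoryGuard<'a> {
    fn new(metrics: &'a Metrics) -> Self {
        metrics.live_memories.fetch_add(1, Ordering::Relaxed);
        LiveMemoryGuard { metrics }
    }
}

impl Drop for LiveMemoryGuard<'_> {
    fn drop(&mut self) {
        self.metrics.live_memories.fetch_sub(1, Ordering::Relaxed);
    }
}

fn main() {
    let metrics = Metrics { live_memories: AtomicU64::new(0) };
    {
        let _guard = LiveMemoryGuard::new(&metrics);
        assert_eq!(metrics.live_memories.load(Ordering::Relaxed), 1);
    } // guard dropped here, mirroring a cancelled future being dropped
    assert_eq!(metrics.live_memories.load(Ordering::Relaxed), 0);
}
```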
alexcrichton created PR review comment:
Given the need to have a `Drop` to handle the cancelled-the-future case, I'd be OK just putting this in the `Ok` path since that would avoid the need for `Drop` entirely.
alexcrichton created PR review comment:
I'd actually opt towards using `Relaxed` here since there's explicitly no data dependency with anything else in the system (e.g. this isn't looking for memory effects of another operation or publishing its own memory effects). Could you detail more about the underflow point, though? How would atomic ordering affect that?
alexcrichton created PR review comment:
Perhaps `debug_assert!` that the previous value is `> 0`?
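A minimal sketch of that suggestion (the helper name is hypothetical): `fetch_sub` returns the previous value, so the decrement can cheaply assert against underflow in debug builds at no cost in release builds:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Decrement a live-object counter; fetch_sub returns the value *before*
// the subtraction, so we can assert it was nonzero in debug builds.
fn decrement_live_count(count: &AtomicU64) {
    let prev = count.fetch_sub(1, Ordering::Relaxed);
    debug_assert!(prev > 0, "live count underflowed");
}

fn main() {
    let count = AtomicU64::new(2);
    decrement_live_count(&count);
    assert_eq!(count.load(Ordering::Relaxed), 1);
}
```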
lann submitted PR review.
lann created PR review comment:
Ah, good point; I didn't think about this hard enough when I rebased on #11470. I'll just switch to incrementing on `Ok` anyway.
lann submitted PR review.
lann created PR review comment:
Sounds good. I don't really have a good grasp of relaxed atomic semantics so I just pessimistically assume that a concurrent thread would be able to go back in time and kill its own grandparent or whatever.
lann updated PR #11490.
alexcrichton submitted PR review.
alexcrichton created PR review comment:
Heh, nah, makes sense. I'm by no means an expert, but I'm confident enough in my understanding that if atomics are used for things like metrics it's fine to use relaxed. Relaxed ordering, AFAIK, basically means that you get no guarantees about other memory locations in the program when you observe a value. For the memory location in question, though, everything is still just as atomic as you'd expect (e.g. fetch-and-add still does that; it doesn't switch to non-atomics). Because relaxed atomics can be reordered around each other by the compiler, you can't, for example, say "I saw N memories in use, therefore I must see N+1 tables in use next"; that's not valid.
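The per-location guarantee described above can be demonstrated with a small standalone example (not Wasmtime code): even under `Relaxed`, concurrent `fetch_add`s on a single atomic never lose updates; "relaxed" only forgoes ordering relative to *other* memory locations.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Relaxed fetch_add is still a single atomic read-modify-write on its
// location, so concurrent increments are never lost.
fn count_with_relaxed(threads: u32, iters: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..iters {
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    // Every increment is accounted for despite relaxed ordering.
    assert_eq!(count_with_relaxed(4, 1000), 4000);
}
```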
lann updated PR #11490.
lann submitted PR review.
lann created PR review comment:
Yeah, makes sense. I guess I'm not entirely clear on what makes `increment_component_instance_count` different here that it would need `AcqRel`?
lann edited PR review comment.
alexcrichton submitted PR review.
alexcrichton created PR review comment:
Oh, it's not; the main difference is that I saw this here and no one saw it there, most likely. There's no need for that to use `AcqRel` either, and IMO it should also be using `Relaxed`.
alexcrichton has enabled auto merge for PR #11490.
alexcrichton commented on PR #11490:
While the vet error is spurious, the miri failure looks legitimate.
This does indeed fail under miri locally: https://github.com/bytecodealliance/wasmtime/blob/fc3f429d72fabd4fb669f17c977c11752e078313/crates/wasmtime/src/runtime/vm/instance/allocator/pooling/metrics.rs#L76-L77
Presumably I can work around this by overriding some default config, but is it a bug that the defaults don't work with miri?
Oh well I guess that's pretty straightforward: https://github.com/bytecodealliance/wasmtime/blob/fc3f429d72fabd4fb669f17c977c11752e078313/crates/environ/src/tunables.rs#L149-L150
Any preference on how to fix this?
- Add a `cfg!(miri)` for the default `max_memory_size`
- Just set `max_memory_size` in the test
alexcrichton commented on PR #11490:
The main test suite has a helper function for a "small pool config" which could be modeled here as well, but this test isn't too interesting for miri so it's also fine to slap a `#[cfg_attr(miri, ignore)]` on the test.
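For reference, the suggested attribute is a non-invasive fragment like the following (the test name and body are hypothetical): the test still compiles everywhere, but the harness skips it when the suite runs under Miri.

```rust
#[test]
#[cfg_attr(miri, ignore)] // skipped under `cargo miri test`, runs normally otherwise
fn pooling_metrics_smoke_test() {
    // ...exercise the pooling allocator metrics here...
}
```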
lann updated PR #11490.
alexcrichton has enabled auto merge for PR #11490.
alexcrichton merged PR #11490.
Last updated: Dec 06 2025 at 07:03 UTC