alexcrichton opened issue #11869:
At this time the implementation of async libcalls in Wasmtime all use the `block_on` helper internally. This has the property, though, that the store is "locked" while the libcall is waiting on the result, meaning that in a component-model-async world it's not possible to make progress on anything else in the store while this is happening. This notably affects `Store::run_concurrent` as well, where any `select`-ed async computation won't make progress because the wasm is locking up everything.
To fix this, async libcalls will need to be refactored/reimplemented to not close over the store for the duration of their execution. Instead something like `Accessor` will be required, where mutable access to a store can be temporarily granted but otherwise isn't held across `await` points. Implementing this will fix `run_concurrent` to correctly and actually run the computations in the provided closure concurrently. This will also enable other concurrent tasks within the store to make progress while a libcall is blocked.
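To illustrate the shape of the change in a deliberately simplified, non-Wasmtime form: the difference is between holding exclusive access to the store across an entire blocking operation versus re-acquiring it briefly around each mutation. The `Store` struct, the two functions, and the use of `RefCell` below are all hypothetical stand-ins for illustration, not Wasmtime's actual types or API:

```rust
use std::cell::RefCell;

// Hypothetical stand-in for a Wasmtime store (not the real type).
struct Store {
    counter: u32,
}

// block_on-style: exclusive access is held for the entire operation, so
// nothing else touching the store can make progress until it returns.
fn block_on_style(store: &RefCell<Store>) {
    let mut s = store.borrow_mut(); // held across the whole "blocking" wait
    s.counter += 1;
    // ... the long async wait would happen here, store still borrowed ...
    s.counter += 1;
}

// Accessor-style: mutable access is granted only briefly around each
// mutation and dropped before any would-be `await` point.
fn accessor_style(store: &RefCell<Store>) {
    {
        let mut s = store.borrow_mut();
        s.counter += 1;
    } // borrow released; other tasks could access the store here
    {
        let mut s = store.borrow_mut();
        s.counter += 1;
    }
}
```

Both functions perform the same mutations; the difference is purely in how long exclusive access is held, which is what determines whether other tasks in the store can interleave.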
alexcrichton added the wasm-proposal:component-model-async label to Issue #11869.
alexcrichton commented on issue #11869:
I talked with Luke and Joel about this a bit ago and wanted to write down some notes. Async libcalls right now are:
- Things that trigger an async limiter
  - table growth (all kinds)
  - memory growth
- Things that may trigger GC (async nature of GC, async limiter, etc.)
  - table initialization
  - growing the GC heap
  - GC allocation
  - array new/init data/elem
- Fuel running out
- Epochs changing
All of these libcalls fall into the category of "the wasm is stuck between two plain/normal wasm instructions". It would be a violation of runtime semantics if Wasmtime were to allow something else to interleave between instructions. For example during an epoch yield it's not valid for Wasmtime to run some other wasm within the store as that could have visible side effects.
Put another way, Wasmtime will need to enforce a "lock" where, when these async situations are hit, it prevents all wasm from continuing to execute. My rough idea for this is that the store, with concurrency enabled, will grow an async-recursive-mutex-of-sorts (probably not literally). This lock will be "bounced on" whenever wasm is entered via a host call or exited via a hostcall returning. The lock is then held across the async operation of an epoch/fuel/async limiter/gc/etc from above. The rough idea is that this is an `Option<Vec<Waker>>` in the store. If that's `None`, the lock isn't held. If it's `Some`, then the lock is held. When the lock is "dropped" then all the wakers are woken (if any).
This'll likely require refactoring various points within Wasmtime to instrument more entries/exits with `async` in Rust. Ideally the lock acquisition/bounce/etc are all native `async` functions. This'll probably require some finesse.
The main learning from this is that we can't simply start using `run_concurrent` (ish) within these libcalls. If we were to do that then we would accidentally allow the situation we don't want, which is executing wasm instructions between other instructions that don't allow interleaving. This means that to fully avoid blocking the store we'll have to add infrastructure, as opposed to just removing a limitation.
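A minimal sketch of the `Option<Vec<Waker>>` idea described above (the names and structure here are hypothetical; a real implementation would live inside the store and integrate with Wasmtime's async plumbing rather than exposing `try_lock`/`unlock` directly):

```rust
use std::ptr;
use std::task::{RawWaker, RawWakerVTable, Waker};

/// Sketch of the store-wide execution lock: `None` means the lock is free;
/// `Some(wakers)` means it is held, and the vector collects the wakers of
/// tasks waiting to (re)enter wasm.
struct ExecutionLock {
    waiters: Option<Vec<Waker>>,
}

impl ExecutionLock {
    fn new() -> Self {
        ExecutionLock { waiters: None }
    }

    /// Attempt to acquire the lock. On failure, register `waker` so the
    /// caller's task is woken when the lock is released.
    fn try_lock(&mut self, waker: &Waker) -> bool {
        match &mut self.waiters {
            None => {
                self.waiters = Some(Vec::new());
                true
            }
            Some(wakers) => {
                wakers.push(waker.clone());
                false
            }
        }
    }

    /// Release ("drop") the lock, waking every registered waiter so they
    /// can retry acquisition.
    fn unlock(&mut self) {
        if let Some(wakers) = self.waiters.take() {
            for w in wakers {
                w.wake();
            }
        }
    }
}

/// No-op waker built from a raw vtable, purely for demonstrating the API.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}
```

In a real `async fn`, a failed `try_lock` would correspond to returning `Poll::Pending` after registering the current task's waker, which is what makes the "bounce" on entry/exit a native `async` operation.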
alexcrichton commented on issue #11869:
Another thought that occurs to me, borne out of thoughts/discussion from https://github.com/bytecodealliance/wasmtime/pull/12587 -- preventing wasm execution during an async libcall is not enough; we have to prevent mutation of the entire store. This includes executing wasm, but it extends to host-initiated modifications of tables/globals as well. That's a needle I don't know how to thread...
Last updated: Feb 24 2026 at 07:22 UTC