alexcrichton opened issue #13212:
This `*.wast` test:

<details>

```wasm
;;! component_model_async = true
;;! reference_types = true

(component
  (type $FT (future u8))
  (core module $Memory (memory (export "mem") 1))
  (component $C
    (core instance $memory (instantiate $Memory))
    (core module $CM
      (import "" "mem" (memory 1))
      (import "" "waitable.join" (func $waitable.join (param i32 i32)))
      (import "" "waitable-set.new" (func $waitable-set.new (result i32)))
      (import "" "waitable-set.wait" (func $waitable-set.wait (param i32 i32) (result i32)))
      (import "" "future.new" (func $future.new (result i64)))
      (import "" "future.write-sync" (func $future.write-sync (param i32 i32) (result i32)))

      (global $writable-end (mut i32) (i32.const 0))
      (global $ws (mut i32) (i32.const 0))

      ;; create a new future, return the readable end to the caller
      (func $start-future (export "start-future") (result i32)
        (local $ret64 i64)
        (global.set $ws (call $waitable-set.new))
        (local.set $ret64 (call $future.new))
        (global.set $writable-end
          (i32.wrap_i64 (i64.shr_u (local.get $ret64) (i64.const 32))))
        (call $waitable.join (global.get $writable-end) (global.get $ws))
        (i32.wrap_i64 (local.get $ret64))
      )

      (func $future-write-sync (export "future-write-sync") (result i32)
        ;; the caller will assert what they expect the return value to be
        (i32.store (i32.const 16) (i32.const 42))
        (call $future.write-sync (global.get $writable-end) (i32.const 16))
      )

      (func $acknowledge-future-write (export "acknowledge-future-write")
        ;; confirm we got a FUTURE_WRITE $writable-end COMPLETED event
        (local $ret i32)
        (local.set $ret (call $waitable-set.wait (global.get $ws) (i32.const 0)))
        ;; TODO: what should this actually be testing? Right now this hits a
        ;; "BUG" in wasmtime...
        (if (i32.ne (i32.const 5 (; FUTURE_WRITE ;)) (local.get $ret)) (then unreachable))
        (if (i32.ne (global.get $writable-end) (i32.load (i32.const 0))) (then unreachable))
        (if (i32.ne (i32.const 0 (; COMPLETED ;)) (i32.load (i32.const 4))) (then unreachable))
      )
    )

    (canon waitable.join (core func $waitable.join))
    (canon waitable-set.new (core func $waitable-set.new))
    (canon waitable-set.wait (memory $memory "mem") (core func $waitable-set.wait))
    (canon future.new $FT (core func $future.new))
    (canon future.write $FT (memory $memory "mem") (core func $future.write-sync))

    (core instance $cm (instantiate $CM
      (with "" (instance
        (export "mem" (memory $memory "mem"))
        (export "waitable.join" (func $waitable.join))
        (export "waitable-set.new" (func $waitable-set.new))
        (export "waitable-set.wait" (func $waitable-set.wait))
        (export "future.new" (func $future.new))
        (export "future.write-sync" (func $future.write-sync))
      ))
    ))

    (func (export "start-future") (result (future u8))
      (canon lift (core func $cm "start-future")))
    (func (export "future-write-sync") async (result u32)
      (canon lift (core func $cm "future-write-sync")))
    (func (export "acknowledge-future-write") async
      (canon lift (core func $cm "acknowledge-future-write")))
  )

  (component $D
    (import "c" (instance $c
      (export "start-future" (func (result (future u8))))
      (export "future-write-sync" (func async (result u32)))
      (export "acknowledge-future-write" (func async))
    ))
    (core instance $memory (instantiate $Memory))
    (core module $Core
      (import "" "mem" (memory 1))
      (import "" "future.read" (func $future.read (param i32 i32) (result i32)))
      (import "" "start-future" (func $start-future (result i32)))
      (import "" "future-write-sync.async" (func $future-write-sync.async (param i32) (result i32)))
      (import "" "acknowledge-future-write" (func $acknowledge-future-write))

      (func $trap-after-future-async-write (export "trap-after-future-async-write")
        (local $ret i32)
        (local $fr i32)
        (local $subtask i32)
        (local.set $fr (call $start-future))

        ;; calling future.write in $C should block
        (local.set $ret (call $future-write-sync.async (i32.const 4)))
        (if (i32.ne (i32.const 1 (; SUBTASK_STARTED ;))
                    (i32.and (local.get $ret) (i32.const 0xf)))
          (then unreachable))

        ;; our future.read should then succeed eagerly
        (local.set $ret (call $future.read (local.get $fr) (i32.const 16)))
        (if (i32.ne (i32.const 0 (; COMPLETED ;)) (local.get $ret)) (then unreachable))
        (if (i32.ne (i32.const 42) (i32.load8_u (i32.const 16))) (then unreachable))

        ;; try to use a waitable-set to acquire an event...
        (call $acknowledge-future-write)
      )
    )

    (canon future.read $FT async (memory $memory "mem") (core func $future.read))
    (canon lower (func $c "start-future") (core func $start-future'))
    (canon lower (func $c "future-write-sync") async (memory $memory "mem")
      (core func $future-write-sync.async))
    (canon lower (func $c "acknowledge-future-write") (core func $acknowledge-future-write'))

    (core instance $core (instantiate $Core
      (with "" (instance
        (export "mem" (memory $memory "mem"))
        (export "future.read" (func $future.read))
        (export "start-future" (func $start-future'))
        (export "future-write-sync.async" (func $future-write-sync.async))
        (export "acknowledge-future-write" (func $acknowledge-future-write'))
      ))
    ))

    (func (export "trap-after-future-async-write") async
      (canon lift (core func $core "trap-after-future-async-write")))
  )

  (instance $c (instantiate $C))
  (instance $d (instantiate $D (with "c" (instance $c))))
  (func (export "trap-after-future-async-write")
    (alias export $d "trap-after-future-async-write"))
)

(assert_return (invoke "trap-after-future-async-write"))
```

</details>
currently fails with:

```
$ cargo run wast -W component-model-async foo.wast
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.11s
     Running `target/debug/wasmtime wast -W component-model-async foo.wast`
thread 'main' (3404937) panicked at crates/wasmtime/src/runtime/bug.rs:56:13:
BUG: event expected to be present
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

The specific scenario here is:
- A subtask is blocked on a synchronous `future.write`.
- The writable end of the future is in a waitable-set.
- Then the future read is completed in another task.
- The waitable-set is blocked on `waitable-set.wait`.
My hunch is that we're accidentally delivering this event through the waitable-set rather than exclusively sending it to the synchronous operation. No matter what, we shouldn't be hitting a `bail_bug!` here.

Once this specific issue is fixed, however, I think this test should additionally be expanded to encompass various other interleavings. For example, adding the future to a waitable-set while the synchronous task hasn't yet returned but the event is "pending" currently causes this test to pass, but that feels quite fishy to me since it might result in an event being delivered twice or the task never waking up.

cc @lukewagner on this situation as well, since this is a corner of the spec I'd like to double-check with you. Basically, when a task is blocked in a synchronous future/stream operation, it should be impossible for events related to that operation to get routed to a waitable-set, right? Or should adding the future/stream to a waitable-set at all during that blocking operation be a trap? Or something similar?
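To make the suspected mis-routing concrete, here is a toy Python model of the event delivery in question. All names here are invented for illustration; this is not Wasmtime's actual code, just a sketch of the hazard: naive routing sends the completion event to the waitable-set even though a synchronous operation is blocked on the same waitable.

```python
# Toy model of event routing for a waitable that is simultaneously the
# target of a blocked sync copy AND a member of a waitable-set.
# (Hypothetical names throughout; not Wasmtime's implementation.)

class WaitableSet:
    def __init__(self):
        self.ready = []  # waitables with deliverable events

class Waitable:
    def __init__(self):
        self.pending_event = None  # e.g. ("FUTURE_WRITE", "COMPLETED")
        self.set = None            # waitable-set membership, if any
        self.sync_waiter = None    # task blocked in a sync read/write

def deliver(waitable, event):
    waitable.pending_event = event
    # Naive routing: if the waitable is in a set, make it deliverable
    # through waitable-set.wait. This is the suspected bug: the event
    # can then be consumed via the set even though a sync operation is
    # blocked on this exact waitable, which should own the event.
    if waitable.set is not None:
        waitable.set.ready.append(waitable)
    elif waitable.sync_waiter is not None:
        waitable.sync_waiter.append(event)

w = Waitable()
w.set = WaitableSet()      # writable end was joined to a waitable-set...
w.sync_waiter = []         # ...while a sync future.write is blocked on it
deliver(w, ("FUTURE_WRITE", "COMPLETED"))
assert w.set.ready == [w]  # event routed to the set
assert w.sync_waiter == [] # sync waiter never woken: it hangs
```

The final two assertions show the failure mode the issue describes: the set "steals" the event while the synchronous operation is left with nothing to wake it.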
alexcrichton added the wasm-proposal:component-model-async label to Issue #13212.
lukewagner commented on issue #13212:
Wow, that's definitely a hole in the spec; good points!
So there's already a spec-level `SYNC_COPYING`-vs-`ASYNC_COPYING` state distinction on the readable/writable ends of streams/futures, and this distinction is currently used to make `{stream,future}.cancel-{read,write}` trap if not `ASYNC_COPYING` (since otherwise cancellation would leave a sync read/write "dangling"). This situation with waitable-sets "stealing" an event from a sync read/write seems analogous, so I think we could update the spec to say that when an end is `SYNC_COPYING`, events are never delivered to `waitable-set.{wait,poll}`.

Considering the alternative of trapping if you try to add an end to a waitable-set during a sync operation: we would also have to trap in the case of adding the end to the waitable-set and *then* starting the sync operation, and that seems like more trouble and potentially frustrating, so I think the first option is better?
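A minimal sketch of the proposed rule, loosely in the style of the spec's Python definitions (the `SYNC_COPYING`/`ASYNC_COPYING` state names are from the discussion above; everything else here is invented): `waitable-set.{wait,poll}` simply never observe events on an end that is mid-sync-copy.

```python
# Proposed rule (sketch): an end in SYNC_COPYING state is invisible to
# waitable-set.{wait,poll}; its events belong exclusively to the task
# blocked in the synchronous read/write.

SYNC_COPYING, ASYNC_COPYING, IDLE = "sync-copying", "async-copying", "idle"

class End:
    def __init__(self, state):
        self.state = state
        self.pending_event = None

def deliverable_events(waitable_set):
    # Skip SYNC_COPYING ends entirely, even if they have a pending event.
    return [e.pending_event for e in waitable_set
            if e.pending_event is not None and e.state != SYNC_COPYING]

sync_end = End(SYNC_COPYING)
sync_end.pending_event = ("FUTURE_WRITE", "COMPLETED")
async_end = End(ASYNC_COPYING)
async_end.pending_event = ("STREAM_READ", "COMPLETED")

# Only the async end's event is visible through the waitable-set.
assert deliverable_events([sync_end, async_end]) == [("STREAM_READ", "COMPLETED")]
```

Note how this formulation needs no trap: membership in the set is allowed at any time, and the filtering happens at delivery, which is what makes it cheaper than policing both orders of "join then sync-copy" and "sync-copy then join".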
dicej commented on issue #13212:
FWIW, the Wasmtime implementation of sync reads or writes which block involves temporarily moving the waitable to an internal set, waiting on that set synchronously, then putting it back in the original (possibly null) set: https://github.com/bytecodealliance/wasmtime/blob/a0dd8b3a681146c35ef5bee8af941314f00e25c5/crates/wasmtime/src/runtime/component/concurrent.rs#L1922-L1935
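A rough Python sketch of that mechanism (invented names; the real code is the linked Rust in `concurrent.rs`): a blocking sync copy borrows the waitable into a fresh private set, waits on that set, then restores the original membership.

```python
# Sketch: sync read/write that blocks by waiting on a private,
# internal waitable-set, restoring set membership afterwards.
# (Hypothetical model, not the actual Wasmtime code.)

class Waitable:
    def __init__(self):
        self.set = None            # current set (a plain list here), or None
        self.pending_event = None

def fake_wait(s):
    # Stand-in for a synchronous waitable-set.wait on a one-element set:
    # take and return the sole member's event.
    ev, s[0].pending_event = s[0].pending_event, None
    return ev

def sync_copy_blocking(waitable, wait_on_set):
    original_set = waitable.set
    if original_set is not None:
        original_set.remove(waitable)
    internal = [waitable]          # fresh, private waitable-set
    waitable.set = internal
    event = wait_on_set(internal)  # block until the copy's event arrives
    waitable.set = original_set    # re-join the original (possibly None) set
    if original_set is not None:
        original_set.append(waitable)
    return event

w = Waitable()
original = [w]
w.set = original
w.pending_event = ("FUTURE_WRITE", "COMPLETED")
ev = sync_copy_blocking(w, fake_wait)
assert ev == ("FUTURE_WRITE", "COMPLETED")
assert w.set is original and original == [w]  # membership restored
```

This also makes the hazard in the following comments easier to see: the "put it back in the original set" step interacts badly with any task already blocked waiting on that original set.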
lukewagner commented on issue #13212:
Ah, interesting; does the current impl also do something (trap or... something fancier) if attempting to add a currently-sync-reading/writing end to a waitable-set?
dicej commented on issue #13212:
> Ah, interesting; does the current impl also do something (trap or... something fancier) if attempting to add a currently-sync-reading/writing end to a waitable-set?
No, but it probably should. I'll admit these scenarios hadn't occurred to me until now.
alexcrichton commented on issue #13212:
Is this something where it might be reasonable, even at this late hour, to gate this from an 0.3.0 wasip3 release? I'm not aware of any guests requiring sync read/write intrinsics, so gating them behind a separate feature might be a feasible near-term change. Personally I'm basically perpetually worried about having all these sorts of interactions which seem reasonable on the surface but after digging a bit surprising combinations rise and give way to interesting and subtle questions. I'd effectively like to ensure that we are able to dedicate enough time to figuring these things out vs lumping everything together.
lukewagner commented on issue #13212:
Yeah, if there's no users of the sync variants, that sounds reasonable.
alexcrichton commented on issue #13212:
I've opened up https://github.com/WebAssembly/component-model/pull/641 at the spec level to explore gating this functionality by default.
dicej commented on issue #13212:
Okay, I dug into this; the reason we're hitting the `bail_bug` is that the code I referenced above joins the waitable to its original set (after temporarily moving it to the internal set), which queues up an internal job to wake the task blocked on `waitable-set.wait`, and _then_ takes the event from the waitable. By the time the job runs to resume the `waitable-set.wait` task, the event has already been taken, and so the code is confused about why it was woken up.

I can change that code to take the event before joining the waitable to its original set, which will leave the task that's waiting on it to keep waiting (i.e. not queue a job to resume it), which seems consistent with what Luke described above. Does that sound right?
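The proposed ordering fix can be sketched like this (invented names; a model, not the actual patch): take the waitable's event *before* re-joining it to its original set, so the re-join never queues a wakeup for an event that has already been consumed.

```python
# Sketch of the ordering fix: consume the event first, then re-join.
# (Hypothetical names; models the described change, not the real code.)

class Waitable:
    def __init__(self):
        self.pending_event = None
        self.set = None

class WaitableSet:
    def __init__(self):
        self.members = []
        self.queued_wakeups = 0
    def join(self, w):
        self.members.append(w)
        w.set = self
        if w.pending_event is not None:
            # Joining a waitable that has a pending event queues a job
            # to resume any task blocked on waitable-set.wait.
            self.queued_wakeups += 1

def finish_sync_copy_fixed(waitable, original_set):
    event = waitable.pending_event   # 1. take the event first...
    waitable.pending_event = None
    if original_set is not None:
        original_set.join(waitable)  # 2. ...then re-join: no spurious wakeup
    return event

ws = WaitableSet()
w = Waitable()
w.pending_event = ("FUTURE_WRITE", "COMPLETED")
ev = finish_sync_copy_fixed(w, ws)
assert ev == ("FUTURE_WRITE", "COMPLETED")
assert ws.queued_wakeups == 0  # the buggy order (join, then take) would queue 1
```

With the buggy order (join before taking the event), `queued_wakeups` would be 1 and the woken `waitable-set.wait` task would find no event, which is exactly the `bail_bug` path described above.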
alexcrichton commented on issue #13212:
I think in theory internally using a waitable-set for sync operations means that it's basically broken if `waitable.join` is used while the blocking operation is in flight? I also wouldn't be surprised if cancelling a sync operation is allowed right now and is processed as if it were an async operation.
dicej commented on issue #13212:
Yes, good point. Seems like the first thing to do is to clarify the spec to enumerate the various possible combinations of waitable-sets and sync ops and how they should behave; then we can update the implementation to match.
Last updated: May 03 2026 at 22:13 UTC