shumbo opened PR #12899 from shumbo:fix-aarch64-asan-fiber-stack-reuse to bytecodealliance:main:
Overview
When fiber stacks are reused across async calls (via the Store-level `last_fiber_stack` cache), ASAN's shadow memory retains poisoning from the previous fiber execution. On aarch64, when `wasmtime_fiber_init` writes the `InitialStack` struct to reinitialize a reused stack, ASAN reports a false stack-buffer-overflow because it sees a write to memory it still considers out-of-scope.

This PR adds an `__asan_unpoison_memory_region` call (gated behind `#[cfg(asan)]`) to unpoison exactly the `InitialStack`-sized region before writing to it.

Reproducing
Any .wast file with two or more invocations triggers this, since `wasmtime wast` defaults to async mode and the Store caches the fiber stack after the first call:

```wast
(module
  (func (export "f") (result i32)
    i32.const 42))

(assert_return (invoke "f") (i32.const 42))
(assert_return (invoke "f") (i32.const 42))
```

```console
RUSTFLAGS="-Zsanitizer=address" cargo +nightly run --release --target aarch64-unknown-linux-gnu -- wast test.wast
```

The first invoke allocates and runs a fiber, poisoning shadow memory for the stack region. The second invoke reuses the cached stack, and `wasmtime_fiber_init`'s `initial_stack.write(...)` hits the stale poisoning.

```console
vscode@6636972de969:/tmp/wasmtime$ uname -a
Linux 6636972de969 6.17.8-orbstack-00308-g8f9c941121b1 #1 SMP PREEMPT Thu Nov 20 09:34:02 UTC 2025 aarch64 GNU/Linux
vscode@6636972de969:/tmp/wasmtime$ git rev-parse HEAD
63a6358c63617c314f990ec1bdf3ade5ba6b49a7
vscode@6636972de969:/tmp/wasmtime$ RUSTFLAGS="-Zsanitizer=address" cargo +nightly run --release --target aarch64-unknown-linux-gnu -- wast /workspace/test.wast
    Finished `release` profile [optimized] target(s) in 0.13s
     Running `target/aarch64-unknown-linux-gnu/release/wasmtime wast /workspace/test.wast`
=================================================================
==41669==ERROR: AddressSanitizer: stack-buffer-overflow on address 0xfbfda9739f50 at pc 0xaaaacaaa97a4 bp 0xffffcf2530f0 sp 0xffffcf2528e0
WRITE of size 64 at 0xfbfda9739f50 thread T0
    #0 0xaaaacaaa97a0 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xd797a0) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #1 0xaaaacd4a2d00 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x3772d00) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #2 0xaaaacd43ff10 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x370ff10) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #3 0xaaaacd56720c (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x383720c) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #4 0xaaaacd4224f0 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x36f24f0) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #5 0xaaaacd64b1a4 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x391b1a4) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #6 0xaaaacd55a824 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x382a824) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #7 0xaaaacd64fc68 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x391fc68) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #8 0xaaaacd56aa00 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x383aa00) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #9 0xaaaacd6488b8 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x39188b8) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #10 0xaaaacd3ff414 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x36cf414) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #11 0xaaaacd3fa168 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x36ca168) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #12 0xaaaacd543788 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x3813788) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #13 0xaaaacd5563c4 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x38263c4) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #14 0xaaaacd538308 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x3808308) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #15 0xaaaacd403a6c (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x36d3a6c) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #16 0xaaaacd4023d0 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x36d23d0) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #17 0xaaaacabf50d4 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xec50d4) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #18 0xaaaacaade198 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xdae198) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #19 0xaaaacaaeb614 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xdbb614) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #20 0xaaaacaae9548 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xdb9548) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #21 0xaaaad05a7970 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0x6877970) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #22 0xaaaacaae8560 (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xdb8560) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #23 0xaaaacaae0dbc (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xdb0dbc) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
    #24 0xffffb5ef7740 (/lib/aarch64-linux-gnu/libc.so.6+0x27740) (BuildId: 3ff3f95a1642952473d0d5739aaf308b0e24f694)
    #25 0xffffb5ef7814 (/lib/aarch64-linux-gnu/libc.so.6+0x27814) (BuildId: 3ff3f95a1642952473d0d5739aaf308b0e24f694)
    #26 0xaaaacaa1a52c (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xcea52c) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)

Address 0xfbfda9739f50 is a wild pointer inside of access range of size 0x000000000040.
SUMMARY: AddressSanitizer: stack-buffer-overflow (/tmp/wasmtime/target/aarch64-unknown-linux-gnu/release/wasmtime+0xd797a0) (BuildId: 909aee34de8d2e8187618f26ea5563c02997af08)
Shadow bytes around the buggy address:
  0xfbfda9739c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0xfbfda9739d00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0xfbfda9739d80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0xfbfda9739e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0xfbfda9739e80: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 f8 f8 f8 f2
=>0xfbfda9739f00: f2 f2 f2 f2 00 00 00 f3 f3 f3[f3]f3 00 00 00 00
  0xfbfda9739f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0xfbfda973a000: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0xfbfda973a080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0xfbfda973a100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0xfbfda973a180: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==41669==ABORTING
```

With this patch, it works as expected:
```console
vscode@6636972de969:/wasmtime$ RUSTFLAGS="-Zsanitizer=address" cargo +nightly run --release --target aarch64-unknown-linux-gnu -- wast /workspace/test.wast
    Finished `release` profile [optimized] target(s) in 0.14s
     Running `target/aarch64-unknown-linux-gnu/release/wasmtime wast /workspace/test.wast`
```
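The stack-reuse pattern at the heart of the repro can be sketched as a one-slot cache. This is a hypothetical shape for illustration (wasmtime's actual field is `last_fiber_stack` on the Store, and real fiber stacks are mmap'd, not `Vec`s): the second checkout hands back the same allocation, which is why ASAN shadow state from the first fiber is still attached to it.

```rust
// A stand-in fiber stack; the real one is a guarded mmap'd region.
struct FiberStack(Vec<u8>);

// Minimal stand-in for the Store's one-slot fiber-stack cache.
#[derive(Default)]
struct Store {
    last_fiber_stack: Option<FiberStack>,
}

impl Store {
    /// Reuse the cached stack if present, otherwise allocate a fresh one.
    fn take_stack(&mut self, size: usize) -> FiberStack {
        self.last_fiber_stack
            .take()
            .unwrap_or_else(|| FiberStack(vec![0; size]))
    }

    /// Return a finished fiber's stack to the cache for the next call.
    fn recycle_stack(&mut self, stack: FiberStack) {
        self.last_fiber_stack = Some(stack);
    }
}
```

Two consecutive `take_stack` calls with a `recycle_stack` in between return the same underlying memory, mirroring the two `invoke`s in the .wast repro.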
shumbo requested dicej for a review on PR #12899.
shumbo requested wasmtime-core-reviewers for a review on PR #12899.
cfallin commented on PR #12899:
The ASAN behavior here isn't aarch64-specific, right? If not, could you apply the same change to the other architectures as well?
shumbo commented on PR #12899:
> The ASAN behavior here isn't aarch64-specific, right? If not, could you apply the same change to the other architectures as well?
In theory, yes; however, for some reason, it doesn't reproduce on x86 (likely due to an ASAN implementation/memory layout difference).
```console
RUSTFLAGS="-Zsanitizer=address" cargo +nightly run --release --target x86_64-unknown-linux-gnu -- wast ./test.wast
```

Happy to apply the same change defensively to the other architectures if you'd prefer.
cfallin commented on PR #12899:
Yes, unless we have reason to believe (based on documentation) that ASAN only expects annotations on one platform, we should apply the same annotations on all platforms. Thanks!
shumbo updated PR #12899.
shumbo commented on PR #12899:
eb48a43f426e417cb21cb5415db4f4c829038f39 applied the same change to the other architectures, and I can confirm this doesn't break x86_64. I don't have easy access to other architectures to confirm -- I believe this change is low risk (ASAN-only, same pattern across architectures) but I'd appreciate some eyes on this.
alexcrichton commented on PR #12899:
Do you know why ASAN is flaky about this? Locally I can't reproduce at all except for `--release` aarch64, which feels quite odd to me. For example, debug-mode aarch64, debug-mode x86_64, and release-mode x86_64 all pass without this false positive. https://github.com/google/sanitizers/wiki/AddressSanitizerManualPoisoning seems to indicate that this is an "undo" operation for something previously done, but nothing was previously done AFAIK. I don't understand enough about ASAN to know what's happening though.
alexcrichton commented on PR #12899:
I did some digging into this, and I think one answer for my confusion is that there's a small set of fiber functions which never actually return. When a fiber exits we switch off them but never switch back, and this is leaving behind corrupt state I think.
I tried out this diff:
```diff
diff --git a/crates/fiber/src/unix.rs b/crates/fiber/src/unix.rs
index fc202a2e79..f0f43759d1 100644
--- a/crates/fiber/src/unix.rs
+++ b/crates/fiber/src/unix.rs
@@ -389,7 +389,19 @@ mod asan {
         // If this fiber is finishing then NULL is passed to asan to let it know
         // that it can deallocate the "fake stack" that it's tracking for this
         // fiber.
+        //
+        // Additionally inform asan that this function is itself never going to
+        // return because once we switch away we'll never switch back. It seems
+        // like this isn't part of `__sanitizer_start_switch_fiber` upon
+        // switching away, and this helps resolve the issue identified in #12899
+        // for example.
         let private_asan_pointer_ref = if is_finishing {
+            unsafe extern "C" {
+                fn __asan_handle_no_return();
+            }
+            unsafe {
+                __asan_handle_no_return();
+            }
             None
         } else {
             Some(&mut private_asan_pointer)
```

and it looks to fix the issue for me locally (similar to this). Does this fix the issue locally for you too @shumbo? If so, I think I'd personally prefer to go this route, as I feel it strikes at the heart of the issue a bit more: cleaning up stale state proactively instead of lazily.
shumbo commented on PR #12899:
@alexcrichton, your patch solves the problem and is a better fix than this PR. It seems like (1) `InitialStack` must be big enough and (2) the redzones must be close enough to the top of the stack for this to reproduce; x86_64 doesn't meet (1), and aarch64 release optimization makes (2) happen. Please feel free to close this PR in favor of your fix.
shumbo edited a comment on PR #12899:
@alexcrichton, your patch solves the problem and is a better fix than this PR. It seems like (1) `InitialStack` must be big enough and (2) the ASAN redzones must be close enough to the top of the stack for this to reproduce; x86_64 doesn't meet (1), and aarch64 release optimization makes (2) happen. Please feel free to close this PR in favor of your fix.
alexcrichton closed without merge PR #12899.
alexcrichton commented on PR #12899:
OK, I ended up changing things a bit more as well, and it culminated in https://github.com/bytecodealliance/wasmtime/pull/12928, which should fix the issue here as well (confirmed locally at least). Closing in favor of that, but thanks regardless @shumbo!
Last updated: Apr 12 2026 at 23:10 UTC