Stream: git-wasmtime

Topic: wasmtime / issue #13134 potential gc fuzzbug: ASAN hits o...


Wasmtime GitHub notifications bot (Apr 17 2026 at 19:43):

khagankhan added the fuzz-bug label to Issue #13134.

Wasmtime GitHub notifications bot (Apr 17 2026 at 19:43):

khagankhan opened issue #13134:

Summary

I hit an ASAN SEGV while fuzzing Wasmtime locally with the gc_ops target. A
guest Wasm program that allocates in a tight struct.new loop crashes the host
process on a memory access outside any mapped module.

Note: Triggered in PR khagankhan/wasmtime#13101
against the struct-fields branch in
khagankhan/wasmtime.
The PR is up-to-date with main; the only additions are to gc_ops
fuzzing, so the runtime/GC/codegen code is identical to upstream, which
suggests this may be an upstream issue that the new fuzzer payloads expose.

Environment

Field             Value
----------------  -------------------------------------------------------------------
Wasmtime version  45.0.0
Fork / branch     khagankhan/wasmtime @ struct-fields
Commit            07ef78ad5 ("Merge branch 'bytecodealliance:main' into struct-fields")
Target PR         #13101
Rustc             1.95.0 (59807616e 2026-04-14)
Platform          Linux 5.15.0-168-generic, x86_64-unknown-linux-gnu
Sanitizer         AddressSanitizer (nightly cargo fuzz build)
Fuzz target       gc_ops

Reproducer

Minimal WAT

(module
  (type (;0;) (func (result externref externref externref)))
  (type (;1;) (func))
  (type (;2;) (func (param externref externref externref)))
  (type (;3;) (func (result externref externref externref)))
  (type (;4;) (func (param structref)))
  (rec
    (type (;5;) (sub (struct)))
  )
  (type (;6;) (func (param (ref null 5))))
  (import "" "gc" (func (;0;) (type 0)))
  (import "" "take_refs" (func (;1;) (type 2)))
  (import "" "make_refs" (func (;2;) (type 3)))
  (import "" "take_struct" (func (;3;) (type 4)))
  (import "" "take_struct_5" (func (;4;) (type 6)))
  (table (;0;) 14 externref)
  (table (;1;) 14 structref)
  (table (;2;) 14 (ref null 5))
  (global (;0;) (mut structref) ref.null struct)
  (global (;1;) (mut (ref null 5)) ref.null 5)
  (export "run" (func 5))
  (func (;5;) (type 1)
    (local externref structref (ref null 5))
    loop
      struct.new 5
      global.set 1
      br 0
    end
  )
)

Hot loop:

loop
  struct.new 5   ;; allocate empty struct
  global.set 1   ;; overwrite prior ref
  br 0
end

Steps

cd ~/wasmtime
cargo +nightly fuzz build gc_ops --no-default-features
./target/x86_64-unknown-linux-gnu/release/gc_ops ~/minimized_artifact

Expected

Run forever (GC reclaims the now-unreachable previous value of global 1) or
trap cleanly with a GC-OOM wasmtime::Trap.
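
A minimal embedding-API sketch of this expectation (illustrative only, not the fuzz target; it assumes the simplified, import-free test.wat shown in the comments below, bounds the otherwise-infinite loop with fuel, and elides the GC-heap Config knobs):

use wasmtime::{Config, Engine, Instance, Module, Store, Trap};

fn main() -> wasmtime::Result<()> {
    let mut config = Config::new();
    config.wasm_gc(true);
    config.wasm_function_references(true);
    config.consume_fuel(true); // bound the infinite loop
    let engine = Engine::new(&config)?;
    let module = Module::from_file(&engine, "test.wat")?;

    let mut store = Store::new(&engine, ());
    store.set_fuel(1_000_000)?;
    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;

    match run.call(&mut store, ()) {
        Ok(()) => println!("returned normally"),
        // A clean guest trap (out-of-fuel here, GC OOM in general) is the
        // acceptable failure mode; a host-side SEGV never is.
        Err(e) if e.downcast_ref::<Trap>().is_some() => println!("guest trap: {e}"),
        Err(e) => return Err(e),
    }
    Ok(())
}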

Actual

Host-side SEGV in JIT code (<unknown module>), not a guest trap.

Likely Cause (what I think is happening, based on the trace and existing fixed issues)

With RUST_LOG=trace, the failing sequence is:

FreeList::new(0)                                         # heap starts at 0 capacity
gc_alloc_raw(kind=StructRef, size=24, align=8)
Got GC heap OOM: no capacity for allocation of 24 bytes
Attempting to grow the GC heap by 24 bytes
FreeList::add_capacity(0x10000): 0x0 -> 0x10000          # grown to 64 KiB
<SEGV>                                                   # faults on the FIRST alloc

So this is not a bump allocator walking off the end of an exhausted heap
(the loop never iterates past the first struct.new). It faults immediately
after a successful grow_gc_heap.

Register state at the minimized crash:

rax = 0x10                       # VMGcRef returned by gc_alloc_raw
r15 = 0x00007b86184a0000         # looks like the new GC heap base
fault = 0x7b86184a0018 = r15 + 0x18

Leading hypothesis: the JIT is computing obj_addr = gc_heap_base + gc_ref
using a stale gc_heap_base. The base load in
crates/cranelift/src/func_environ/gc/enabled.rs#L1470
is marked readonly/can_move whenever
!gc_heap_memory_type().memory_may_move(..), which allows CLIF to CSE/hoist
the load above the gc_alloc_raw libcall. But gc_alloc_raw can call
grow_gc_heap,
which updates VMStoreContext.gc_heap.base, so the cached pre-grow base
(0/null on the very first allocation) is used to compute the object address,
landing in unmapped memory.
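
To make the stale-base failure mode concrete, here is a tiny Rust analogy (illustrative only, not Wasmtime internals): caching a base pointer across an operation that may allocate is exactly the pattern a hoisted base load produces.

// Vec growth stands in for grow_gc_heap giving the heap its first real
// base address; the saved pointer stands in for the hoisted base load.
fn main() {
    let mut heap: Vec<u8> = Vec::new(); // capacity 0, like FreeList::new(0)
    let stale_base = heap.as_ptr();     // "hoisted" base load: dangling, nothing mapped

    heap.resize(0x10000, 0);            // like FreeList::add_capacity(0x10000)
    let fresh_base = heap.as_ptr();     // the real, post-grow base

    // The JIT-equivalent mistake is computing an object address from the
    // base captured BEFORE the growing allocation; stale_base + 0x18 has
    // the same shape as the faulting access (base + 0x18).
    assert_ne!(stale_base, fresh_base); // holds in practice once Vec allocates
}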

The fuzz config hits this path easily:
crates/fuzzing/src/generators/config.rs#L443-L452
forces gc_heap_reservation = 0 with a small gc_heap_reservation_for_growth
(1 MiB), giving a malloc-backed heap whose base only becomes non-null after
the first grow.

Possibly adjacent to recent DRC/layout fixes on main:
#13110,
#13115.

ASAN Output

Actual seed (WRITE fault)

==63564==ERROR: AddressSanitizer: SEGV on unknown address 0x7b633f5a0108 (pc 0x7f6346557428 bp 0x7ffe1c9310d0 sp 0x7ffe1c931050 T0)
==63564==The signal is caused by a WRITE memory access.
    #0 0x7f6346557428  (<unknown module>)
    #1 0x7f634655cb5f  (<unknown module>)
    ...
rax = 0x00000000000000f0  rbx = 0x00000000b0000000  rcx = 0x0000000000000001  rdx = 0x0000000000000040
rdi = 0x00000000000000f0  rsi = 0x000000000000beef  rbp = 0x00007ffe1c9310d0  rsp = 0x00007ffe1c931050
 r8 = 0x0000000000000001   r9 = 0x0000000000040000  r10 = 0x00000f6c68a88c5a  r11 = 0x0000000000000000
r12 = 0x0000000000000008  r13 = 0x0000000000000018  r14 = 0x00007b633f5a0000  r15 = 0x00007d13463e0598
SUMMARY: AddressSanitizer: SEGV (<unknown module>)

(fault = r14 + 0x108, r14 looks like a GC heap base)

Minimized seed (READ fault)

==63646==ERROR: AddressSanitizer: SEGV on unknown address 0x7b86184a0018 (pc 0x7f861f4c30d4 bp 0x7ffe022df030 sp 0x7ffe022df000 T0)
==63646==The signal is caused by a READ memory access.
    #0 0x7f861f4c30d4  (<unknown module>)
    #1 0x7f861f4c8860  (<unknown module>)
    ...
rax = 0x0000000000000010  rbx = 0x00007d361f2e0598  rcx = 0x0000000000000010  rdx = 0x0000000000000020
rdi = 0x00000f7143c64f00  rsi = 0x0000000000000000  rbp = 0x00007ffe022df030  rsp = 0x00007ffe022df000
 r8 = 0x0000000000000001   r9 = 0x0000000000400000  r10 = 0x00000f70c3c6cf9a  r11 = 0x0000000000000000
r12 = 0x00007ce61f2e4c20  r13 = 0xfffffffffffffc19  r14 = 0x0000000000000005  r15 = 0x00007b86184a0000
SUMMARY: AddressSanitizer: SEGV (<unknown module>)

(fault = r15 + 0x18)

Artifacts

+cc @fitzgen

Wasmtime GitHub notifications bot (Apr 17 2026 at 19:43):

khagankhan added the bug label to Issue #13134.

Wasmtime GitHub notifications bot (Apr 17 2026 at 19:50):

khagankhan commented on issue #13134:

I tried to reproduce without ASAN, to check whether this is a false positive, but was not able to. Here are the options, by the way. I'm still not sure whether it is a false positive, though.

[2026-04-17T19:47:33Z DEBUG wasmtime_fuzzing::generators::config] creating wasmtime config with CLI options:
    -Ccompiler=cranelift -Ccollector=drc -Ccranelift-debug-verifier=n -Cparallel-compilation=n -Cnative-unwind-info=n -Cinlining=n -Ccranelift-wasmtime_inlining_intra_module=no -Ccranelift-wasmtime_inlining_sum_size_threshold=1000 -Ccranelift-wasmtime_linkopt_padding_between_functions=22140 -Oopt-level=s -Oregalloc-algorithm=backtracking -Omemory-reservation=0 -Ogc-heap-may-move=y -Ogc-heap-reservation-for-growth=0 -Oguard-before-linear-memory=y -Otable-lazy-init=y -Omemory-init-cow=n -Omemory-guaranteed-dense-image-size=16777216 -Osignals-based-traps=n -Ogc-zeal-alloc-counter=1024 -Wnan-canonicalization=y -Wfuel=18446744073709551615 -Wepoch-interruption=n -Wasync-stack-zeroing=y -Wbulk-memory=n -Wmulti-memory=n -Wmulti-value=y -Wreference-types=y -Wsimd=y -Wtail-call=y -Wthreads=n -Wshared-memory=n -Wshared-everything-threads=n -Wmemory64=n -Wcomponent-model-async=n -Wcomponent-model-async-builtins=n -Wcomponent-model-async-stackful=n -Wcomponent-model-threading=n -Wcomponent-model-error-context=n -Wcomponent-model-gc=n -Wcomponent-model-map=n -Wfunction-references=y -Wstack-switching=n -Wgc=y -Wcustom-page-sizes=n -Wwide-arithmetic=y -Wextended-const=n -Wexceptions=n -Wcomponent-model-fixed-length-lists=n -Daddress-map=y

Wasmtime GitHub notifications bot (Apr 17 2026 at 21:00):

khagankhan edited issue #13134.

Wasmtime GitHub notifications bot (Apr 17 2026 at 21:10):

khagankhan edited a comment on issue #13134:

Segfault happens with this:

khan22@node0:~/wasmtime$ /users/khan22/wasmtime/target/release/wasmtime run   -O opt-level=s -O regalloc-algorithm=backtracking   -O memory-reservation=0 -O guard-before-linear-memory=y   -O memory-init-cow=n -O memory-guaranteed-dense-image-size=11018058647000789232   -O gc-heap-reservation-for-growth=0 -O gc-heap-may-move=y   -O gc-zeal-alloc-counter=3394916991 -O table-lazy-init=y   -O signals-based-traps=n -C compiler=cranelift -C collector=drc   -C inlining=n -C native-unwind-info=n -D debug-info=n -D address-map=y   -W nan-canonicalization=n -W fuel=1000 -W gc=y -W reference-types=y   -W function-references=y -W multi-value=y -W simd=y -W tail-call=y   -W wide-arithmetic=y -W bulk-memory=n -W unknown-imports-trap=y   --invoke run test.wat
Segmentation fault

Wasmtime GitHub notifications bot (Apr 17 2026 at 21:21):

khagankhan edited issue #13134.

Wasmtime GitHub notifications bot (Apr 17 2026 at 21:28):

khagankhan edited a comment on issue #13134:

Segfault happens with this on main, without using the exact PR, just built with cargo b --release:

khan22@node0:~/wasmtime$ /users/khan22/wasmtime/target/release/wasmtime run   -O opt-level=s -O regalloc-algorithm=backtracking   -O memory-reservation=0 -O guard-before-linear-memory=y   -O memory-init-cow=n -O memory-guaranteed-dense-image-size=11018058647000789232   -O gc-heap-reservation-for-growth=0 -O gc-heap-may-move=y   -O gc-zeal-alloc-counter=3394916991 -O table-lazy-init=y   -O signals-based-traps=n -C compiler=cranelift -C collector=drc   -C inlining=n -C native-unwind-info=n -D debug-info=n -D address-map=y   -W nan-canonicalization=n -W fuel=1000 -W gc=y -W reference-types=y   -W function-references=y -W multi-value=y -W simd=y -W tail-call=y   -W wide-arithmetic=y -W bulk-memory=n -W unknown-imports-trap=y   --invoke run test.wat
Segmentation fault

Dumped config from the fuzzer:

===== FUZZ CONFIG DUMP BEGIN =====
Config {
    wasmtime: WasmtimeConfig {
        opt_level: SpeedAndSize,
        regalloc_algorithm: Backtracking,
        debug_info: false,
        canonicalize_nans: false,
        interruptible: false,
        consume_fuel: true,
        epoch_interruption: false,
        memory_config: MemoryConfig {
            memory_reservation: Some(
                0,
            ),
            memory_guard_size: None,
            memory_reservation_for_growth: None,
            guard_before_linear_memory: true,
            cranelift_enable_heap_access_spectre_mitigations: None,
            memory_init_cow: false,
            gc_heap_reservation: None,
            gc_heap_guard_size: None,
            gc_heap_reservation_for_growth: Some(
                0,
            ),
            gc_heap_may_move: Some(
                true,
            ),
        },
        force_jump_veneers: false,
        memory_init_cow: true,
        memory_guaranteed_dense_image_size: 11018058647000789232,
        inlining: Some(
            false,
        ),
        inlining_intra_module: Some(
            No,
        ),
        inlining_small_callee_size: None,
        inlining_sum_size_threshold: Some(
            4194192291,
        ),
        use_precompiled_cwasm: false,
        async_stack_zeroing: true,
        strategy: OnDemand,
        codegen: Native,
        padding_between_functions: Some(
            22140,
        ),
        generate_address_map: true,
        native_unwind_info: false,
        compiler_strategy: CraneliftNative,
        collector: DeferredReferenceCounting,
        gc_zeal_alloc_counter: Some(
            3394916991,
        ),
        table_lazy_init: true,
        async_config: Disabled,
        signals_based_traps: false,
    },
    module_config: ModuleConfig {
        config: Config {
            available_imports: None,
            exports: None,
            module_shape: None,
            allow_start_export: true,
            allowed_instructions: InstructionKinds(
                FlagSet(NumericInt | Numeric | VectorInt | Vector | Parametric | Table | MemoryInt | Memory | Aggregate),
            ),
            allow_floats: false,
            bulk_memory_enabled: false,
            canonicalize_nans: false,
            disallow_traps: true,
            exceptions_enabled: false,
            export_everything: false,
            gc_enabled: true,
            custom_descriptors_enabled: false,
            custom_page_sizes_enabled: false,
            generate_custom_sections: false,
            max_aliases: 682,
            max_components: 0,
            max_data_segments: 896,
            max_element_segments: 304,
            max_elements: 855,
            max_exports: 687,
            max_funcs: 748,
            max_globals: 432,
            max_imports: 813,
            max_instances: 0,
            max_instructions: 335,
            max_memories: 1,
            max_memory32_bytes: 941825295,
            max_memory64_bytes: 3105875032306693890,
            max_modules: 0,
            max_nesting_depth: 9,
            max_table_elements: 994292,
            max_tables: 55,
            max_tags: 174,
            max_type_size: 1000,
            max_types: 402,
            max_values: 0,
            memory64_enabled: false,
            memory_max_size_required: false,
            memory_offset_choices: MemoryOffsetChoices(
                90,
                9,
                1,
            ),
            min_data_segments: 0,
            min_element_segments: 0,
            min_elements: 0,
            min_exports: 0,
            min_funcs: 0,
            min_globals: 0,
            min_imports: 0,
            min_memories: 0,
            min_tables: 0,
            min_tags: 0,
            min_types: 0,
            min_uleb_size: 4,
            multi_value_enabled: true,
            reference_types_enabled: true,
            relaxed_simd_enabled: false,
            saturating_float_to_int_enabled: true,
            sign_extension_ops_enabled: true,
            shared_everything_threads_enabled: false,
            simd_enabled: true,
            tail_call_enabled: true,
            table_max_size_required: true,
            threads_enabled: false,
            allow_invalid_funcs: false,
            wide_arithmetic_enabled: true,
            extended_const_enabled: false,
        },
        function_references_enabled: true,
        component_model_async: false,
        component_model_async_builtins: false,
        component_model_async_stackful: false,
        component_model_threading: false,
        component_model_error_context: false,
        component_model_gc: false,
        component_model_map: false,
        component_model_fixed_length_lists: false,
        legacy_exceptions: false,
        shared_memory: false,
        stack_switching: false,
    },
}
===== FUZZ CONFIG DUMP END =====

Wasmtime GitHub notifications bot (Apr 17 2026 at 21:50):

khagankhan edited a comment on issue #13134:

Segfault happens with this on main, without using the exact PR, just built with cargo b --release:

khan22@node0:~/main/wasmtime$ /users/khan22/main/wasmtime/target/release/wasmtime run   -O memory-reservation=0   -O gc-heap-may-move=y   -W gc=y   --invoke run /users/khan22/wasmtime/test.wat
Segmentation fault

khan22@node0:~/main/wasmtime$ /users/khan22/main/wasmtime/target/release/wasmtime run   -O memory-reservation=10000   -O gc-heap-may-move=y   -W gc=y   --invoke run /users/khan22/wasmtime/test.wat
Segmentation fault

khan22@node0:~/main/wasmtime$ /users/khan22/main/wasmtime/target/release/wasmtime run   -O memory-reservation=100000   -O gc-heap-may-move=y   -W gc=y   --invoke run /users/khan22/wasmtime/test.wat
// OK

test.wat:

(module
  (type (func))
  (rec
    (type (sub (struct)))
  )

  (table 14 externref)
  (table 14 structref)
  (table 14 (ref null 1))

  (global (mut structref) (ref.null struct))
  (global (mut (ref null 1)) (ref.null 1))

  (func $run (type 0)
    (local externref structref (ref null 1))
    loop
      struct.new 1
      global.set 1
      br 0
    end
  )

  (export "run" (func $run))
)

Wasmtime GitHub notifications bot (Apr 17 2026 at 22:01):

khagankhan edited a comment on issue #13134:

Segfault happens with this on main, without using the exact PR, just built with cargo b --release:

khan22@node0:~/main/wasmtime$ /users/khan22/main/wasmtime/target/release/wasmtime run   -O memory-reservation=0   -O gc-heap-may-move=y   -W gc=y   --invoke run /users/khan22/wasmtime/test.wat
Segmentation fault

khan22@node0:~/main/wasmtime$ /users/khan22/main/wasmtime/target/release/wasmtime run   -O memory-reservation=61440   -O gc-heap-may-move=y   -W gc=y   --invoke run /users/khan22/wasmtime/test.wat
Segmentation fault

khan22@node0:~/main/wasmtime$ /users/khan22/main/wasmtime/target/release/wasmtime run   -O memory-reservation=61441   -O gc-heap-may-move=y   -W gc=y   --invoke run /users/khan22/wasmtime/test.wat
// OK

test.wat: same as above.

Wasmtime GitHub notifications bot (Apr 20 2026 at 13:11):

fitzgen commented on issue #13134:

Thanks! Will look into this shortly.

Leading hypothesis: the JIT is computing obj_addr = gc_heap_base + gc_ref
using a stale gc_heap_base. The base load in
crates/cranelift/src/func_environ/gc/enabled.rs#L1470
is marked readonly/can_move whenever
!gc_heap_memory_type().memory_may_move(..), which allows CLIF to CSE/hoist
the load above the gc_alloc_raw libcall. But gc_alloc_raw can call
grow_gc_heap,
which updates VMStoreContext.gc_heap.base, so the cached pre-grow base
(0/null on the very first allocation) is used to compute the object address,
landing in unmapped memory.

This does sound sus. If !memory_may_move then we shouldn't ever update gc_heap_base.
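
A sketch of that invariant as a runtime check, with illustrative names (not actual Wasmtime code):

// If compiled code was told the GC heap base can never move (and may have
// cached/hoisted its load of the base), the grow path must never observe a
// different base -- including the lazy first-grow transition from null to
// a real mapping.
fn check_base_stability(memory_may_move: bool, old_base: *const u8, new_base: *const u8) {
    if !memory_may_move {
        debug_assert_eq!(
            old_base, new_base,
            "GC heap base changed despite memory_may_move == false"
        );
    }
}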

Wasmtime GitHub notifications bot (Apr 20 2026 at 14:41):

cfallin commented on issue #13134:

(agree that sounds like the issue and) I wonder if the lazy GC heap creation is going to conflict with our optimization in general here: the memory may not move, except the first time it's used, when it moves from NULL to its forever-home. I guess we can't use the flag as-is, but we can add this to the "flexible alias regions" list of opportunities, where we could represent the base as a value that's only possibly clobbered by allocations. (We could even generate the IR such that it's not clobbered by alloc sites dominated by another alloc site, if we wanted to enable maximal caching...)
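
At the conservative end of that spectrum, the Rust-level analogy of "the base is clobbered by any allocation" is simply to re-derive the base after every operation that can grow the heap instead of caching it across one (illustrative sketch, not Wasmtime code):

// Re-read the base after any growth, never across it.
fn write_byte(heap: &mut Vec<u8>, offset: usize, byte: u8) {
    if heap.len() <= offset {
        heap.resize(offset + 1, 0); // may (re)allocate and move the storage
    }
    // Fresh base derived *after* the potential move; a base cached before
    // the resize could point at freed or never-mapped memory.
    let base = heap.as_mut_ptr();
    unsafe { *base.add(offset) = byte };
}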

Wasmtime GitHub notifications bot (Apr 21 2026 at 20:07):

alexcrichton closed issue #13134.

Wasmtime GitHub notifications bot (Apr 22 2026 at 00:01):

alexcrichton added the wasm-proposal:gc label to Issue #13134.


Last updated: May 03 2026 at 22:13 UTC