I've been able to get my 11MB React server rendering benchmark to run with ComponentizeJS and wasmtime serve but get a memory error when trying to build with --aot. Is there some way to increase the memory available during a build?
Steps to reproduce:
Check out and npm install https://github.com/bytecodealliance/jco/tree/main/examples/components/http-server-fetch-handler as a base
Download renderer.mjs from https://raw.githubusercontent.com/lrowe/react-server-render-benchmark/f754693c9cb6ad4f492039897d3444dec3103691/renderer.mjs
Run ./node_modules/.bin/componentize-js -w wit -o component.wasm --enable-wizer-logging --use-debug-build renderer.mjs --aot
$ ./node_modules/.bin/componentize-js -w wit -o component.wasm --enable-wizer-logging --use-debug-build renderer.mjs --aot
(node:1979665) [DEP0190] DeprecationWarning: Passing args to a child process with shell option true can lead to security vulnerabilities, as the arguments are not escaped, only concatenated.
(Use `node --trace-deprecation ...` to show where the warning was created)
memory allocation of 23726160 bytes failed
wasm://wasm/splicer_component.wasm-01127772:1
RuntimeError: unreachable
at splicer_component.wasm.abort (wasm://wasm/splicer_component.wasm-01127772:wasm-function[6972]:0x31e426)
at splicer_component.wasm._ZN3std3sys3pal4wasi7helpers14abort_internal17he7a2be67736436b7E (wasm://wasm/splicer_component.wasm-01127772:wasm-function[6767]:0x30c245)
at splicer_component.wasm._ZN3std7process5abort17hb229e5783e2ded8dE (wasm://wasm/splicer_component.wasm-01127772:wasm-function[6836]:0x31234e)
at splicer_component.wasm._ZN3std5alloc8rust_oom17h293069cf3a87ceb7E (wasm://wasm/splicer_component.wasm-01127772:wasm-function[6880]:0x3143eb)
at splicer_component.wasm._RNvCs691rhTbG0Ee_7___rustc8___rg_oom (wasm://wasm/splicer_component.wasm-01127772:wasm-function[6881]:0x3143f9)
at splicer_component.wasm._RNvCs691rhTbG0Ee_7___rustc26___rust_alloc_error_handler (wasm://wasm/splicer_component.wasm-01127772:wasm-function[31]:0x4b74)
at splicer_component.wasm._ZN5alloc5alloc18handle_alloc_error17hcc35c2aed22157a6E (wasm://wasm/splicer_component.wasm-01127772:wasm-function[7002]:0x31f968)
at splicer_component.wasm._ZN5alloc7raw_vec12handle_error17hb73fb4043b5a38a7E (wasm://wasm/splicer_component.wasm-01127772:wasm-function[6999]:0x31f7ac)
at splicer_component.wasm._ZN67_$LT$alloc..vec..Vec$LT$T$C$A$GT$$u20$as$u20$core..clone..Clone$GT$5clone17hc3827fc32c562e56E (wasm://wasm/splicer_component.wasm-01127772:wasm-function[409]:0x4bba4)
at splicer_component.wasm._ZN9orca_wasm2ir6module6Module14parse_internal17h5e643ff744b734fdE (wasm://wasm/splicer_component.wasm-01127772:wasm-function[438]:0x57619)
Node.js v24.1.0
This is likely a bug in the component itself, for example a memory leak. Wasmtime allows guests to grow to the 4GB limit by default, and since wasmtime isn't printing that error message, the guest likely is.
I don't think wasmtime is involved here. Watching top -c, the error appears to originate from the node process running componentize-js several seconds after weval completes (albeit with 130g of VIRT).
Debugging with node --inspect ./node_modules/.bin/componentize-js ... the RuntimeError is being thrown during the splicer.stubWasi call here: https://github.com/bytecodealliance/ComponentizeJS/blob/0.18.4/src/componentize.js#L379
EDIT: the next (and last) JS stack frame before entering WASM is ./node_modules/@bytecodealliance/componentize-js/lib/spidermonkey-embedding-splicer.js:3573
const ret = exports1['local:spidermonkey-embedding-splicer/splicer#stub-wasi'](ptr0, len0, result2, len2, variant4_0, variant4_1, variant4_2, variant6_0, variant6_1, variant6_2, variant8_0, variant8_1, variant8_2);
Looks like the spidermonkey-embedding-splicer.core.wasm memory (aka exports1.memory, memory0) reaches 4GB and cannot grow further.
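For context, a minimal sketch (not the splicer itself) of why that's a hard ceiling: a 32-bit (non-memory64) WebAssembly memory is capped at 65536 pages of 64 KiB, i.e. 4GB, and growing past that throws:

```javascript
// A 32-bit WebAssembly.Memory tops out at 65536 pages of 64 KiB (4 GiB).
// Growing past that cap throws a RangeError, which the guest observes as an
// allocation failure like the "memory allocation of ... bytes failed" above.
const mem = new WebAssembly.Memory({ initial: 1 });
let hitCap = false;
try {
  mem.grow(65536); // 1 existing page + 65536 exceeds the 65536-page cap
} catch (e) {
  hitCap = e instanceof RangeError;
}
console.log(hitCap); // true
```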
Hey Lawrence, at first I thought you might be running into this similar issue on Jco:
https://github.com/bytecodealliance/jco/issues/568
(jco calls through to componentize-js)
That said, I think your problem might be different... renderer.mjs is 11MB -- the problem might simply be loading the script itself into memory. Still, as Alex pointed out, the memory should be able to grow...
Unless it turns out that the grow operations were removed... (the orca project is now called wirm)
I think we should be able to fix this by enabling multi-memory -- I'll try to put up a quick PR for that (might be a bit painful to try locally but sounds like you're already stuck in :)
This is probably a reasonable bug to file against componentize-js for tracking as well at this point.
Ah a bit weird, it looks like the error is actually deep inside orca/wirm at this point, while trying to parse.
Haven't been able to pin this one down just yet, but I'm now wondering if it has to do with Memory64 -- the orca/wirm code works just fine, going through processing sections until it runs out of memory.
I think the Rust component may be trying to ask for more memory from Node and failing.
The component that is used is actually built by jco (js-component-bindgen), so where I ended up searching around was checking whether this is due to lack of memory64 support.
I'm starting to think this might have to do with Memory64:
https://github.com/nodejs/node/issues/57469
https://github.com/nodejs/node/pull/57114
https://github.com/nodejs/node/pull/57753
https://github.com/nodejs/node/pull/58070
https://chromestatus.com/feature/5070065734516736
I still get the error with Node 24, but the bigger problem is that the component (splicer) built in Rust, transpiled by jco (via js-component-bindgen), and loaded by Node.js is not actually memory64 compliant
I think it's that https://github.com/bytecodealliance/ComponentizeJS/tree/main/crates/spidermonkey-embedding-splicer is not built with memory64 support (or has a memory leak that means it needs more than 4GB)
Yup, it seems to be a combination of that, and that even if it was built with memory64 support, Node couldn't run the transpiled component unless you were on a version greater than Node 24
That component is built and transpiled for use by componentize-js
I did see something similar to the initial stack trace in that JCO #568 issue which was caused by calls for random numbers / performance.now during startup. I solved that by patching Math.random and removing globalThis.performance
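(For anyone hitting that jco #568 failure mode, a rough sketch of the workaround described above -- the seeded generator is my own stand-in, not jco's API, and any deterministic sequence would do:)

```javascript
// Sketch of the workaround: replace nondeterministic globals before the
// pre-initialization snapshot is taken. The LCG below is a hypothetical
// stand-in for Math.random; deleting globalThis.performance removes the
// performance.now() calls that failed during startup.
let seed = 42;
Math.random = () => {
  // Deterministic linear congruential generator producing values in [0, 1).
  seed = (seed * 1103515245 + 12345) % 2147483648;
  return seed / 2147483648;
};
delete globalThis.performance;
console.log(typeof globalThis.performance); // "undefined"
```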
I'm on Node 24
Hmm. Maybe it is as simple as changing the rust toolchain to wasm64-unknown-unknown. https://github.com/bytecodealliance/ComponentizeJS/blob/main/rust-toolchain.toml
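(If someone does try that route: wasm64-unknown-unknown is a Tier 3 target, so swapping the target in rust-toolchain.toml alone likely isn't enough -- rustup ships no prebuilt std for it, and std has to be built from source. A hypothetical, untested sketch:)

```toml
# Hypothetical sketch, untested. wasm64-unknown-unknown is Tier 3, so rustup
# cannot fetch it via `targets`; nightly plus the rust-src component lets
# cargo build std from source instead, e.g.:
#   cargo +nightly build -Z build-std=std,panic_abort --target wasm64-unknown-unknown
[toolchain]
channel = "nightly"
components = ["rust-src"]
```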
It certainly could be! I saw that but haven't tried it yet
Filed as https://github.com/bytecodealliance/ComponentizeJS/issues/281
What I was looking into was whether, even if you could build that component, jco transpile would transpile it correctly. jco is meant to deal with components
Turns out it requires some tweaks but you can
Just leaving this here so you can try it locally:
(module
  (memory i64 1)
  (func (export "load1") (result i32)
    i64.const 0xffff_ffff_ffff_fff0
    i32.load offset=16)
  (func (export "load2") (result i32)
    i64.const 16
    i32.load offset=0xfffffffffffffff0)
)
Those functions will actually fail, but it's a nice minimal example of a memory64 module
You'd also need some WIT:
package examples:test;
world component {
}
Leaving it empty was good enough for me just to test
wt component embed wit/ test64.wasm -o test64.embedded.wasm
Then I ran
wt component new test64.embedded.wasm -o test64.component.wasm
Given a specific tweak to Jco, it will transpile:
src/jco.js transpile test64.component.wasm -o /tmp
Transpiled JS Component Files:
- /tmp/test64.component.d.ts 0.03 KiB
- /tmp/test64.component.js 2.39 KiB
I'm going to update Jco to be ready for this use case, but obviously it looks like the first problem is building the splicer as a memory64 component
Last updated: Dec 06 2025 at 07:03 UTC