My team is seeing errors calling into transpiled wasm components in the browser (suddenly after months of no problems).
Here's a minimal env that reproduces the problem (for some of us):
https://github.com/James-Mart/minimal-transpile
Further context:
1) when the wasm calls println!(), I see the following
> wasm.core.wasm:0x4da7 Uncaught RuntimeError: unreachable
at wasm.wasm.__rust_start_panic (wasm.core.wasm:0x4da7)
at wasm.wasm.rust_panic (wasm.core.wasm:0x4c1e)
at wasm.wasm._ZN3std9panicking20rust_panic_with_hook17h79071f5fb265d1d9E (wasm.core.wasm:0x4bf1)
at wasm.wasm._ZN3std9panicking19begin_panic_handler28_$u7b$$u7b$closure$u7d$$u7d$17h410c57f452410813E (wasm.core.wasm:0x3c9f)
at wasm.wasm._ZN3std3sys9backtrace26__rust_end_short_backtrace17h514500abf2a2d0caE (wasm.core.wasm:0x3c0b)
at wasm.wasm.rust_begin_unwind (wasm.core.wasm:0x4585)
at wasm.wasm._ZN4core9panicking9panic_fmt17he306018bf71f8e67E (wasm.core.wasm:0x9562)
at wasm.wasm._ZN3std2io5stdio6_print17h4ef99727a983bb66E (wasm.core.wasm:0x325e)
at wasm.wasm.hello (wasm.core.wasm:0x2c6)
at Module.Be (wasm.js:1:17452)
2) when the wasm calls into rand and returns the result (no call to println!()), I get the following in the browser console
> assertion failed at adapter line
Ie @ wasm.js:1
$indirect-wasi:io/streams@0.2.3-[method]output-stream.blocking-write-and-flush @ wit-component:shim-f1fd1512:0x16a
$_ZN22wasi_snapshot_preview18bindings4wasi2io7streams12OutputStream24blocking_write_and_flush17h4f9b86b3d19bf6d5E @ wasm.core2.wasm:0x1154
$_ZN22wasi_snapshot_preview16macros5print17h319e9b3e6f2e5d09E @ wasm.core2.wasm:0x9ee
$_ZN22wasi_snapshot_preview16macros11assert_fail17h67fe26dc6b70f78bE @ wasm.core2.wasm:0x585
$cabi_import_realloc @ wasm.core2.wasm:0x641
Ee @ wasm.js:1
$indirect-wasi:random/random@0.2.3-get-random-bytes @ wit-component:shim-f1fd1512:0x176
$random_get @ wasm.core2.wasm:0x181a
$adapt-wasi_snapshot_preview1-random_get @ wit-component:shim-f1fd1512:0xb8
$_ZN3std3sys12thread_local6statik20LazyStorage$LT$T$GT$10initialize17h9d56f353d90c9518E @ wasm.core.wasm:0x1b09
$_ZN4rand4rngs6thread3rng17h1309d710195068edE @ wasm.core.wasm:0x1d18
$hello @ wasm.core.wasm:0x1641
Ze @ wasm.js:1
handleClick @ App.tsx:22
> 3 [repeated callstacks skipped]
> 7
> 6
> Uncaught RuntimeError: unreachable
at wit-component:adapter:wasi_snapshot_preview1._ZN22wasi_snapshot_preview16macros11assert_fail17h67fe26dc6b70f78bE (wasm.core2.wasm:0x58b)
at wit-component:adapter:wasi_snapshot_preview1.cabi_import_realloc (wasm.core2.wasm:0x641)
at Ee (wasm.js:1:16790)
at wit-component:shim.indirect-wasi:random/random@0.2.3-get-random-bytes (wit-component:shim-f1fd1512:0x176)
at wit-component:adapter:wasi_snapshot_preview1.random_get (wasm.core2.wasm:0x181a)
at wit-component:shim.adapt-wasi_snapshot_preview1-random_get (wit-component:shim-f1fd1512:0xb8)
at wasm.wasm._ZN3std3sys12thread_local6statik20LazyStorage$LT$T$GT$10initialize17h9d56f353d90c9518E (wasm.core.wasm:0x1b09)
at wasm.wasm._ZN4rand4rngs6thread3rng17h1309d710195068edE (wasm.core.wasm:0x1d18)
at wasm.wasm.hello (wasm.core.wasm:0x1641)
at Module.Ze (wasm.js:1:17825)
Some additional important details:
Mike just sent me his transpiled wasm, and only with his build can I reproduce the issue. But if I transpile it myself, no error.
Since it's a containerized dev environment, versions of tooling should all basically be the same.
Hey @Mike M thanks for reporting this -- would you mind filing this at https://github.com/bytecodealliance/jco (as well as discussing it here)?
Also, I'm curious -- any idea of what changed here tooling wise that might have started introducing this issue? Was it a toolchain update? jco hasn't changed in a bit but other pieces of the toolchain certainly have.
Fascinating that it seems to only happen on aarch64 somehow (?)
If you go to the line in the source that the error points to,
[screenshot of the source location]
...you end up here
...The giant comment from @Alex Crichton above that function seems potentially relevant :sweat_smile:
But I'm just not sure why we only just started seeing this error
GH issue: https://github.com/bytecodealliance/jco/issues/634
Work-around?
It's only been a day, but given this is a blocker for me, I'm curious if there's anything you can offer immediately. We could really use a work-around / downgrade if you have one. For now, it's only me who's dead in the water, but I have a feeling we'll have 2 of us blocked very shortly.
Thanks
I think this error implies that anyone running a Mac M1+ is currently unable to transpile a wasm component and run it in the browser.
The component might need to call a wasi import or something to trigger the error. Tested with generating a random number.
Hey apologies for the delay here @Mike M, but can you confirm what version of componentize-js you're using? Jco dynamically evals componentize-js to build the component and I don't see it in your package.json but I'd like to be sure (seeing if we can go for a downgrade here).
[EDIT] Sorry, I misunderstood -- you're only transpiling the rust component, right? You won't need a version of componentize.
At the same time, I'd like to explore what I think could be a workaround -- would you also mind modifying your WIT world to look like the following:
package component:wasm;

// Not required, but generally interfaces are recommended over bare function exports
interface greet {
    hello: func() -> string;
}

world example {
    import wasi:random/random@0.2.3;
    import wasi:random/insecure@0.2.3;
    import wasi:random/insecure-seed@0.2.3;
    import wasi:cli/environment@0.2.3;
    import wasi:cli/exit@0.2.3;
    import wasi:cli/stdin@0.2.3;
    import wasi:cli/stdout@0.2.3;
    import wasi:cli/stderr@0.2.3;
    import wasi:cli/terminal-input@0.2.3;
    import wasi:cli/terminal-output@0.2.3;
    import wasi:cli/terminal-stdin@0.2.3;
    import wasi:cli/terminal-stdout@0.2.3;
    import wasi:cli/terminal-stderr@0.2.3;

    export greet;
}
The above is overkill, but I just want to increase the chance of success for the test you run.
Once you've updated your WIT, please run wkg wit fetch from the wasm folder (the one above wit) to pull in the noted WIT dependencies, which should fill out wasm/wit/deps.
Ahh please note also that if you do the interface change, you'll need to modify your code just a bit to export the right things.
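For reference, a minimal sketch of what that change might look like on the Rust side (the module paths and the export macro are assumptions based on cargo-component's generated bindings for the WIT above; the rand call mirrors the repro):

#[allow(warnings)]
mod bindings;

use bindings::exports::component::wasm::greet::Guest;
use rand::Rng;

struct Component;

impl Guest for Component {
    // hello is now exported through the greet interface instead of as a bare world-level func
    fn hello() -> String {
        // pulls from wasi:random under the hood -- the same call that trips the error in the repro
        let n: u32 = rand::rng().random();
        format!("hello {n}")
    }
}

bindings::export!(Component with_types_in bindings);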
OH! You know what, I think I might know what is wrong
I'm going to do a little bit more poking around to try and fix this and send a PR to your repo directly, hopefully before monday!
ok, sounds great, Victor!
In case it helps, here's my response to your first message (couldn't get it done before you were on to what sounds like the right direction :) )
NOTE: I did not make the interface change. I left the raw func in the export { ... } because you had already replied with the follow-up, and it was taking me time to rework that. I can add the interface to the defn if you think that would be informative. Just wanted to get you this response asap.
Ok, I'm in unfamiliar territory with the details of this container, so I'll be thorough in my response...
pulled the wkg util and ran it from the wasm folder, which seemed to work:
$ ./wkg-aarch64-unknown-linux-gnu wit fetch
2025-04-25T16:06:12.620227Z WARN wasm_pkg_client::oci::loader: Ignoring invalid version tag tag="sha256-bde08985744034d359aefdfa105be38b75f15c921efada1b6dc081a11090ef45.sig" error=Error("unexpected character 's' while parsing major version number")
2025-04-25T16:06:12.620327Z WARN wasm_pkg_client::oci::loader: Ignoring invalid version tag tag="sha256-3da46c1244c00aebe1c8784f900eb3cf55d19f3bd28ce5303858f888854e3ef4.sig" error=Error("unexpected character 's' while parsing major version number")
2025-04-25T16:06:12.758450Z WARN wasm_pkg_client::oci::loader: Ignoring invalid version tag tag="sha256-e1b6482c98d3d299ce75a80c4d15501e32fbd5b1b8434e23aafafe1677637faf.sig" error=Error("unexpected character 's' while parsing major version number")
2025-04-25T16:06:12.758490Z WARN wasm_pkg_client::oci::loader: Ignoring invalid version tag tag="sha256-4e1a777bd0dd370f680645afe3c7aa230e1ebc6bf0c9ae454546ca0dfe3f4bc9.sig" error=Error("unexpected character 's' while parsing major version number")
then recompiled:
$ cargo build
Compiling libc v0.2.172
Compiling getrandom v0.3.2
Compiling zerocopy v0.8.24
Compiling wit-bindgen-rt v0.41.0
Compiling rand_core v0.9.3
Compiling ppv-lite86 v0.2.21
Compiling rand_chacha v0.9.0
Compiling rand v0.9.1
Compiling wasm v0.1.0 (/workspaces/minimal-transpile/wasm)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.59s
and I see the same error
Yeah, so pulling your code down, one thing I'm wondering is how you're bundling in preview2-shim
This project:
https://www.npmjs.com/package/@bytecodealliance/preview2-shim
Is what adds the shims (like random) to make things work in the browser
I think the answer here is that vite is walking your dynamic import in App.tsx, and resolving that to bundle it
And I see the output in dist
Did your browser change by any chance?
I was thinking that jco was bundling the shims as part of transpilation (and transpile is part of the yarn build script).
But now looking at JCO docs I'm seeing
Components relying on WASI bindings will contain external WASI imports, which are automatically updated to the
@bytecodealliance/preview2-shim package.
In which case, I suppose yeah, it's vite that's resolving them?
No, it doesn't bundle the shims -- vite is bundling as normal
Sorry, didn't update -- but that's what's happening: you can use rollup/esbuild/vite/etc. That bundling step has to be done somewhere, otherwise it'd have nothing to call on the frontend. The code is fine and stuff shows up in dist as expected
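(A quick way to see what the bundler has to resolve is to grep the transpiled JS for the shim package; the wasm-transpiled directory name comes from the repro, and the exact specifiers will vary:)

$ grep -o "@bytecodealliance/preview2-shim[^\"']*" wasm-transpiled/wasm.js | sort -u
# should list subpath imports such as @bytecodealliance/preview2-shim/random
# and @bytecodealliance/preview2-shim/cli -- these are what vite resolves and bundles into dist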
What I'm thinking now is actually that this is a deficiency in the browser shim itself, because it seems like writing to streams (whether via stdout or when random is called) is failing
BUT, the really perplexing thing is how it works some places and not others, it's leaving me to think that the browser itself could be different
Yeah keep in mind the shim works fine in my browser
i.e. that a difference in browser settings/state is causing shim I/O code to fail in one place and not in the other
Actually I think the problem is something at build time, not runtime
Hmm, interesting. Considering y'all can take the exact same code, run it on one machine versus another, and have it run fine, I thought we could rule that out
(let me know if I misunderstood the situation!)
Mike sent me a zip of his wasm-transpiled directory, and I used it in my environment without building it myself, and then I also had the error
But if I transpile the component myself, no error
Ahh right OK you noted that up top
Would you mind providing zip files with both of your versions? built on his computer versus yours?
And we effectively did the reverse as well: Mike used his browser to run an app that was built in another environment (in this case the build env was CI/CD and wasn't using the minimal-transpile app), and he does not get the error
> Would you mind providing zip files with both of your versions? built on his computer versus yours?
Yeah sure
And the working build was on Ubuntu (amd64) w/ the failing build on Mac M2 (aarch64) right
> And the working build was on Ubuntu (amd64) w/ the failing build on Mac M2 (aarch64) right
Right. Builds from an Ubuntu runner in CI/CD work, as well as builds on my machine, which is amd64 (running Windows)
Failed on Mac M1 and M2 so far
IIRC we do not do much arch-specific conditional compilation/gating on the jco side -- but clearly something is different, and the big difference seems to be arch...
One thing I'm wondering is if it's actually a compiled wasm level issue
I'll be able to check with the zip files from both environments (and I have a MacBook here I can use to reproduce as well, so even if I don't get it, no biggie) -- but I wonder if taking only the wasm file over you can reproduce the failure
build-artifacts.working.zip
build-artifacts.broken.zip
i.e. taking only the wasm file over, then transpiling on whichever platform, then trying to repro the failure
Good question, I'll do some of that debugging to see if I can better isolate what causes the error
Yeah, we've either stumbled upon a bug from an arch difference in Rust, or in the generated (transpiled) JS.
I'm leaning toward the transpiled JS side, but of course checking is the only way to know for sure
My earlier post apparently included some extraneous content. Is a response to your original request still relevant, Victor? I'm happy to provide updated/corrected output if so
Oh no thanks Mike, the details here are certainly enough, and the zip files above should be everything I need
Thanks for the patience here -- this is certainly a head scratcher, but I should be able to get to the bottom of it with all this information
No problem at all. We really appreciate your time and effort! If we can provide anything else, we'll be here.
Okay it's wasm.core2.wasm. I believe transferring only that file over triggers the issue
So... some difference in how the core wasm is generated across architectures?
Ah sorry, could you try only transferring the output on the Rust side
Confirming James's finding. I swapped out my wasm.core2.wasm with his, and my code now works
You should see the same result, but doing it with the artifact from before transpile runs tells us a little bit more about where the issue is -- whether it's the Rust output or the "unbundling" routines
Oh, okay great. Will check
Without getting into the nitty gritty -- Rust will output a wasm component (which consists of one or more core modules, i.e. the only thing a browser can run), and jco actually unbundles that component into multiple core modules and supporting machinery
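Roughly, the pipeline looks like this (the jco invocation and file listing are illustrative; the core/core2 names match the stack traces above):

$ jco transpile wasm/target/wasm32-wasip1/release/wasm.wasm -o wasm-transpiled
$ ls wasm-transpiled
wasm.js           # JS glue that instantiates the modules and wires up imports/exports
wasm.core.wasm    # the Rust code as a core module
wasm.core2.wasm   # the wasi_snapshot_preview1 adapter as a second core module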
Is it enough to just swap the whole target directory?
the unbundling functionality could also be a point of failure
Or do you want higher resolution
Yep! swapping the target directory is fine, because you use wasm/target in transpile.sh
swapping wasm, wasm/target, etc down to ./wasm/target/wasm32-wasip1/release/wasm.wasm should be fine
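(i.e. something like the following, with the source path being wherever Mike's artifacts were unzipped:)

$ cp <mikes-artifacts>/wasm.wasm ./wasm/target/wasm32-wasip1/release/wasm.wasm
$ ./transpile.sh   # re-runs jco transpile against the swapped-in, Rust-built component
# then reload the frontend and retry the failing call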
Ok - confirmed. I swapped only the .wasm file with the version built on the aarch64 machine, reran transpilation, and was able to reproduce the error
Well, I definitely didn't want to hear that, but that certainly narrows it down!
Haha sorry, thank you very much for your support
Thanks for doing this exploratory work -- it looks like there's a bug in the toolchain somewhere, this is worth surfacing to wasmtime etc.
I'll try to do a little more digging (maybe try and bisect this back) and put up an issue to wasmtime. Clearly the transpilation machinery here isn't the deciding factor; possibly this is a failure of the code generated by wit-bindgen-rt.
confirmed:
I did the opposite of James: dropped his target/wasm32-wasip1/release/wasm.wasm in my release folder and transpiled, ran, did not see the error
If you want to downgrade wit-bindgen-rt to 0.40.0 (I assume things were working 2 months ago), you could try that... but the current version of wit-bindgen-rt has been in place for a while.
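(A minimal sketch of that downgrade, assuming wit-bindgen-rt is a direct dependency in wasm/Cargo.toml:)

# edit wasm/Cargo.toml to require the older crate (keeping any existing features):
#   wit-bindgen-rt = "0.40.0"
$ cargo update -p wit-bindgen-rt
$ cargo build
# then re-run transpile.sh and retest in the browser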
I hesitate to think it's something that snuck into the Rust toolchain underneath us, but that's basically what I'm going to try next
I think it's not wit-bindgen-rt, because in the other project where this surfaced, it's happening in an app where the wit-bindgen-rt version hasn't changed. And it's also 0.39
Ah thanks!
OK, what about versions of Rust?
Is the other project similarly behind on Rust version?
wouldn't be surprised if the other project has an up-to-date toolchain but just an older wit-bindgen-rt dep
Mike and I are both on the same version of rust:
$ rustc --version
rustc 1.85.1
But... I think it is true to say that that was recently updated. I can't say for certain at the moment if that corresponded with when the error started. But this is something we can check
> wouldn't be surprised if the other project has an up-to-date toolchain but just an older wit-bindgen-rt dep
Is this a problem? We have a monorepo with lots of apps, each with essentially their own versions of wit-bindgen-rt, and we update the rust toolchain independently (and rarely)
I just tested one of our apps and got the error in both cases
(both) rustc 1.85.1
first wit-bindgen-rt v0.34.0
second wit-bindgen-rt v0.41.0
So a wide range of versions there that don't seem relevant
@Victor Adossi
We use containerized development to ensure our environments are the same. Recently, our environment image changed, and it included two changes that are potentially relevant:
I was going to try to manually downgrade versions of cargo-component, but it seems that a default cargo-component --lib project, at any version prior to 0.21.0 (installed with --locked), cannot build when the Rust toolchain is at 1.85.1.
So, in the minimal-transpile project, we just dropped the rustc version explicitly to 1.84.1, because that allows us to test prior versions of cargo-component.
Sure enough, with the Rust toolchain at 1.84.1 and cargo-component at 0.15.0, Mike was unable to reproduce the error on his aarch64 machine.
Mike is now going to binary search through cargo-component versions until we isolate the one that breaks.
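A sketch of how that bisection can be run, assuming the component is built with cargo component build and each candidate release is installed from crates.io (the 0.20.0 below is just one probe point between the known-good 0.15.0 and the current release):

# pin the older toolchain noted above so pre-0.21.0 releases can build
$ rustup toolchain install 1.84.1
$ rustup override set 1.84.1
$ rustup target add wasm32-wasip1
# install a candidate release, rebuild, re-transpile, and test in the browser
$ cargo install cargo-component --version 0.20.0 --locked --force
$ cargo component build --release
$ ./transpile.sh
# repeat, halving the version range until the first failing release is found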
cargo-component v0.20.0 --> v0.21.0 trips the error
Thanks for getting to the bottom of this @Mike M and @James Mart , guess that's where the issue needs to be filed!
This is excellent, very glad it wasn't the Rust project itself but was instead something else... cargo-component was not the expected problem but definitely a welcome surprise
Would you like us to do that Victor? Or do you have a process you want to follow to close and reopen issues and whatnot?
Oh please go ahead if you'd like! Y'all did all the work and actually tracked it down
At this point you have everything to make a stellar reproduction case for the issue, and what I can do is close out the jco issue in favor of the cargo-component one after you have it, just to ensure it doesn't fall through the cracks
Sounds good. I'm wrapping up some work now while waiting for a delayed flight. should be able to get that done tonight.
No worries, it is friday before the weekend!
Ok, new issue is here: https://github.com/bytecodealliance/cargo-component/issues/398
I've referenced this discussion thread and the jco issue in it.
and closed the jco issue with a link to the new issue.
Finally, I'll start a zulip thread for cargo-component to raise the proper attention.
Thanks again! :bow: I'll maybe add some people