Hi bytecodealliance,
I've been looking at language options for implementing Wasm components. When considering JS as a source language, I was surprised to see the lack of a WasmGC compiler for JS. Seeing as the BA owns StarlingMonkey, I was hoping to get some context on why that approach is being taken over what Dart and the JVM languages seem to be doing (compiling to WasmGC).
Thanks!
This topic was moved here from #general > Starlingmonkey approach vs wasmgc by fitzgen (he/him).
The main considerations are timeline, compatibility, and engineering effort. JS is a large language, and a full engine that supports every corner of it is a multi-year project; so the simplest answer to your question is that an engine built on WasmGC would be a large refactor of an existing engine, and a much longer path than porting an existing engine that uses linear memory and its own GC.
Keep in mind too that JS's GC has a number of subtleties that would be difficult (impossible?) to map 1:1 to WasmGC as it exists today: weak maps/ephemerons, finalization registries, etc.
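To make that concrete, here's a tiny plain-JS sketch of the semantics in question (nothing StarlingMonkey-specific; the WasmGC comparison in the comments is just the point made above, not a spec citation):

```js
// WeakMap entries have ephemeron semantics: the value stays alive only while
// the key is reachable from somewhere else. WasmGC as it stands doesn't give
// an engine ephemeron or finalization primitives directly, so the engine's
// own GC has to implement this behavior itself.
const cache = new WeakMap();
let key = { id: 1 };
cache.set(key, new Uint8Array(1024)); // entry lives only as long as `key` does

// FinalizationRegistry runs a callback some time after an object is collected,
// which again has to be driven by the engine's own GC.
const registry = new FinalizationRegistry((label) => {
  console.log(`${label} was collected`);
});
registry.register(key, "key #1");

key = null; // once GC runs, the cache entry can be dropped and the callback may fire
```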
The other factor here is that JS is fundamentally dynamic; an engine can infer static object shapes at runtime, but that's not the same as the truly static, strong type systems Java and Dart have. So a mapping to WasmGC would require another level of indirection, where the Wasm-level GC structs describe various slot shapes rather than fixed source-level classes.
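A small illustration of what "fundamentally dynamic" means in practice (plain JS again; the struct comments are a sketch of the mapping problem, not how any particular engine lays objects out):

```js
// Two objects that start with the same inferred shape {x, y}...
const a = { x: 1, y: 2 };
const b = { x: 3, y: 4 };

// ...can diverge at any point at runtime:
b.z = 5;    // b transitions to shape {x, y, z}
delete a.x; // a transitions to shape {y}

// A Java or Dart class maps naturally onto one fixed WasmGC struct type, but
// here the "type" of a and b is only known dynamically, so the Wasm-level GC
// objects would have to be generic containers (a shape descriptor plus slot
// storage) rather than one struct per source-level type.
```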
All this points to the more incremental approach we took: we ported SpiderMonkey to run on Wasm/WASI using its existing GC in linear memory, which was relatively straightforward (this was done four years ago, before StarlingMonkey wrapped SpiderMonkey-on-WASI); and then we added AOT compilation as a variant of its existing bytecode, using the same object runtime.
TYSM for the context; that makes total sense. Is there any perf data that has been published (startup time, memory use, etc.)?
Startup time: there are various measurements around, including the Wasm instantiation benchmarks in the wasmtime source tree. When you're launching a wizened (Wizer-snapshotted) Wasm module produced by the StarlingMonkey build process, there is no runtime startup: you're launching a snapshot of an already-started runtime, and you very quickly hit the first JS opcode. It's on the order of 5 µs or so.
Execution speed: the PRs that added AOT compilation include some relative measurements (tl;dr: about a 2.5-3x speedup). Relative to a full JIT engine running natively it's still slower; how much depends on the workload.
Memory use: that depends on what the workload does, of course; a base heap image is typically a few megabytes.
As always my standard advice would be "measure it and see if it fits your use-case" :-)
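If it helps as a starting point, a rough template for taking your own numbers might look like the sketch below. Note that it uses Node's standard WebAssembly API rather than wasmtime (which is where the ~5 µs figure comes from), so the absolute numbers will differ; `./app.wasm` and the `run` export are placeholders, and a componentized StarlingMonkey build would instead need a component-aware host such as wasmtime or jco.

```js
// Rough timing harness (ESM, so run with Node as a .mjs file or with
// "type": "module"). Assumes a core Wasm module at ./app.wasm with an
// optional `run` export -- both are placeholders for your own build.
import { readFile } from "node:fs/promises";
import { performance } from "node:perf_hooks";

const bytes = await readFile("./app.wasm");

const t0 = performance.now();
const module = await WebAssembly.compile(bytes);
const t1 = performance.now();
const instance = await WebAssembly.instantiate(module, { /* imports, if any */ });
const t2 = performance.now();

console.log(`compile:     ${(t1 - t0).toFixed(3)} ms`);
console.log(`instantiate: ${(t2 - t1).toFixed(3)} ms`);

if (typeof instance.exports.run === "function") {
  const t3 = performance.now();
  instance.exports.run();
  console.log(`first call:  ${(performance.now() - t3).toFixed(3)} ms`);
}
```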
Excellent advice; the broad strokes here are super helpful as a starting point. And my goodness is 5 µs impressive!
Heya :) Might be worth mentioning that CanadaHonk is working on a from-scratch JS-to-Wasm compiler (Porffor). It's not the only effort in this space, ofc, but he's got 5 years of funding and already passes 50% of test262, so it's definitely something to watch.
Doesn't help you right now, though, I think.
@Jonas Kruckenberg always looking for a great path from JS to Wasm!
Hey @Jonas Kruckenberg, do you know anything about the Porffor project's plans to implement WASI? Right now there's not much other than a tracking issue from May (though of course I'm sure they're busy with lots of other things :)
Hey @Victor Adossi, sorry for the delay. I went and asked Oliver about WASI, and he said it's currently a custom set of imports for ease of development, but that supporting WASI will probably be a thing later on
just bc obviously that's sort of where everybody is heading