Hi all, I have read https://bytecodealliance.org/articles/making-javascript-run-fast-on-webassembly, and my understanding is that it lets us compile JS to wasm and use it on different platforms. Have I understood correctly? And if so, is there any example of this approach?
I only know that StackBlitz uses a similar approach, which lets us run Node.js apps on it.
It doesn't exactly compile JS to wasm. As I understand it, the goal is to embed just the JS interpreter part of SpiderMonkey, not the JIT compilation part. The JS is compiled down to SpiderMonkey bytecode, which is embedded in a wasm file; together with the SpiderMonkey wasm file, this makes it possible to quickly start a JS interpreter.
The JS doesn't get compiled down to wasm. JS compilation requires JITting, which isn't possible in wasm without support from the wasm embedder, and even then is likely relatively slow.
@hossein dindar as @bjorn3 said, the initial step is to get the interpreter working, and that's what we've done so far. However, there is some thinking about how to go further; there's actually a good amount one can do ahead of time, i.e. without type information, if one has reasonable primitives for handling dynamism at runtime (specifically, something like ICs). E.g. one could in theory translate JS bytecode to Wasm bytecode that is mostly IC-chain invocations. One could imagine further optimizations from there. Anyway, doing even better here is something I've spent a lot of time thinking about, and hopefully at some point we'll have something more :-)
@bjorn3 @Chris Fallin cool, understood. Thanks!
@Chris Fallin I wonder if it would be possible to make the TypeScript compiler compile type annotations down to runtime type checks in a well-known format, then use these in SpiderMonkey to "speculatively" optimize in a way that is almost guaranteed never to require falling back to a slow path for previously unseen types, and then maybe emit wasm that can be linked into the main wasm binary when running Wizer. These type annotations would be kind of like how asm.js added |0 and such everywhere to allow compiling what is still valid JS down to much faster code.
@Chris Fallin is there any case to be made, from a performance perspective, for adding new wasm bytecodes for primitives that support ICs (or whatever else is needed to translate JS efficiently) ? I mean, for operations which cannot efficiently be expressed with existing wasm bytecodes. If you see what I mean.
@Chris Fallin [taking care of course not to fall into the large hole marked "VAX instruction set" :-]
@Julian Seward yes, funnily enough, I actually drafted a (very rough) idea for such a feature a few months ago! We discussed it some internally, and it didn't go too far (it was suggested that, in theory, function refs + GC structs + tail calls + the right optimizations could cause the same thing to fall out), but maybe I'll see if I can socialize the ideas a bit more to see if there's any interest.
The basic idea was to define a particular kind of function that is a "stub", and to define a new kind of global that can carry a chain of stubs with attached data (as parameters). So basically the runtime is aware of IC chains; making it a first-class Wasm concept could allow other tools to do interesting things, e.g. inline some IC stubs after freezing (Wizer-style) execution. We would have opcodes to invoke an IC chain, prepend a new stub to a chain, and clear a chain. Every chain would always have a normal-function fallback. I was careful to define conditions so that the engine could (i) do all this without dynamic allocation, and (ii) codegen the stubs in a way that would allow for "naked" function bodies, with no frames or stack checks, etc., just as IC stubs work in native engines today.
All of this relies on the fact that the AOT-vs-JIT dimension is orthogonal to the IC-chain kind of dynamism; we can know all of our stub bodies ahead of time and include them in our .wasm, so none of this is a "JIT". That said, if we eventually freeze execution and inspect the chains, we could inline some stubs and further optimize, and then we might start to see this as a JIT -- it's kind of like Warp (SM's new JS tier) in its approach.
I like the idea of a first-class IC-chains feature in particular because it's more "Wasm-like" than special optimizations that rely on particular combinations of features used in a certain way -- it avoids performance cliffs and just expresses a concept directly so no reverse-engineering is necessary.
Anyway, if there's interest in that, I can see about dusting things off and sharing :-)
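(Not a Wasm-level sketch, but to make the stub/chain shape described above concrete, here's the same idea modeled in plain Rust. All names, types, and the shape-guard semantics are invented purely for illustration: a chain of guarded stubs with attached data, a guaranteed generic fallback, and the prepend/clear operations.)

```rust
// Hypothetical model of a first-class IC chain: specialized stubs are
// tried in order, each guarding on some condition (here: the concrete
// "shape" of the receiver object), with a fallback that always succeeds.

#[derive(Clone, Copy, PartialEq, Debug)]
struct Shape(u32);

struct Object {
    shape: Shape,
    slots: Vec<i64>,
}

// A stub: its attached data (the shape it guards on, the slot it loads).
struct Stub {
    guard_shape: Shape,
    slot: usize,
}

struct IcChain {
    stubs: Vec<Stub>,
    // The always-present generic fallback (in a real engine, a slow path
    // that would typically also attach a new stub to the chain).
    fallback: fn(&Object) -> i64,
}

impl IcChain {
    // Corresponds to the sketched "invoke an IC chain" opcode.
    fn invoke(&self, obj: &Object) -> i64 {
        for stub in &self.stubs {
            if obj.shape == stub.guard_shape {
                // Fast path: guard matched, load directly.
                return obj.slots[stub.slot];
            }
        }
        (self.fallback)(obj)
    }

    // Corresponds to the "prepend a new stub" opcode.
    fn prepend(&mut self, stub: Stub) {
        self.stubs.insert(0, stub);
    }

    // Corresponds to the "clear a chain" opcode.
    fn clear(&mut self) {
        self.stubs.clear();
    }
}

fn main() {
    // Pretend this is a full generic property lookup.
    let slow_path: fn(&Object) -> i64 = |obj| obj.slots[0];

    let mut chain = IcChain { stubs: vec![], fallback: slow_path };
    let obj = Object { shape: Shape(7), slots: vec![10, 20] };

    // Cold: takes the fallback.
    assert_eq!(chain.invoke(&obj), 10);
    // Runtime attaches a specialized stub; subsequent hits are fast.
    chain.prepend(Stub { guard_shape: Shape(7), slot: 1 });
    assert_eq!(chain.invoke(&obj), 20);
}
```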
So is there a public way to use this yet?
I imagine that there would need to be a CLI that takes a JS file and builds a WASM module with the embedded interpreter?
The SpiderMonkey patches are almost completely upstreamed now: https://bugzilla.mozilla.org/show_bug.cgi?id=1701197
though there's no documentation yet, and the CLI wrapper to actually build a .wasm is not public yet; cc @Till Schneidereit for plans on that :-)
Cool, I'll keep an eye out.
It would also be very useful to have a standalone prebuilt module with the interpreter that exports something like a run_js function.
I'm currently using QuickJS for this, but a supported SpiderMonkey build would be awesome.
very much agreed: that'd be great to have, and I do plan on having example projects along these lines before too long. I can follow up here once we have something in this direction—though I can't make good promises around timing for now, I'm afraid
Hi,
What is the best/recommended way to compile Javascript to Wasm?
Thanks so much
Tim
Hi,
Can you also please share how to compile SpiderMonkey to Wasm? Happy to follow some documentation on this; just not sure where to find it.
Thanks again
Tim
For example, is it possible to compile SpiderMonkey to a Wasm executable and then use that Wasm program to interpret Javascript?
The patches to compile SpiderMonkey to wasm are currently in the process of being reviewed for merging, AFAICT.
Thanks @bjorn3 I really appreciate the speedy reply.
Just following up on this. It's all pretty new to me but is there any more public info on compiling JS code to WASM? I'm looking to kick the tires on it to get a better understanding.
While we want to eventually have a generic example that works in the Wasmtime CLI (and probably other WASI runtimes), we unfortunately haven't gotten around to that. In the meantime, you can take a look at how Fastly's js-compute runtime works, which might help. (See the README.md file at the repo's top-level for instructions on building)
thanks @Till Schneidereit!
@bjorn3 @Till Schneidereit has there been any progress on this effort?
It's not yet a fully reusable runtime, but we at Fastly recently published a JS runtime based on this work: https://github.com/fastly/js-compute-runtime/tree/main/c-dependencies/js-compute-runtime
Many of its builtins are highly specific to Fastly's Compute@Edge environment, but the overall setup is not at all. We intend to extract a generic runtime out of this, with hooks for host-specific aspects like our builtins.
Ah, that's great. Especially https://github.com/tschneidereit/spidermonkey-wasi-embedding , which is basically all I need. Thanks!
Question: what exactly is lib/libjsrust.a (in contrast to libjs_static.a)?
it's the bits of SpiderMonkey that are written in Rust, IIUC
Ah, that makes sense.
I'll probably try to build a reusable .wasm file which just exports an eval function and imports a host_call function that takes an arbitrary string payload and also returns a string.
That can be reused in all kinds of ways, with guest -> host calls facilitated by (e.g.) just passing and returning JSON payloads.
that makes a lot of sense, yes. Note that in the not-too-distant future passing information in/out would ideally be an Interface Types job, but for the time being what you describe is a great starting point
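(As an aside on how that string-in/string-out interface typically crosses the Wasm boundary: strings can't be passed directly, so the host writes the argument into the guest's linear memory via a guest-exported allocator and gets back a pointer/length pair for the result. Here's a rough dependency-free Rust simulation of that protocol; the memory model, function names, and ABI are all invented for illustration, not taken from any real runtime.)

```rust
// Linear memory modeled as a plain byte vector; a real host would use
// the Wasm instance's exported memory instead.
struct GuestMemory {
    bytes: Vec<u8>,
}

impl GuestMemory {
    // Stand-in for a guest-exported `alloc(len) -> ptr` (bump allocator).
    fn alloc(&mut self, len: usize) -> usize {
        let ptr = self.bytes.len();
        self.bytes.resize(ptr + len, 0);
        ptr
    }

    fn write(&mut self, ptr: usize, data: &[u8]) {
        self.bytes[ptr..ptr + data.len()].copy_from_slice(data);
    }

    fn read(&self, ptr: usize, len: usize) -> &[u8] {
        &self.bytes[ptr..ptr + len]
    }
}

// Stand-in for the guest-exported `eval(ptr, len) -> (ptr, len)`.
fn guest_eval(mem: &mut GuestMemory, ptr: usize, len: usize) -> (usize, usize) {
    let source = String::from_utf8(mem.read(ptr, len).to_vec()).unwrap();
    let result = format!("result of: {source}"); // pretend this ran JS
    let out = mem.alloc(result.len());
    mem.write(out, result.as_bytes());
    (out, result.len())
}

// Host-side wrapper hiding the pointer arithmetic: copy the source in,
// call the guest, copy the result back out.
fn host_eval(mem: &mut GuestMemory, source: &str) -> String {
    let ptr = mem.alloc(source.len());
    mem.write(ptr, source.as_bytes());
    let (out_ptr, out_len) = guest_eval(mem, ptr, source.len());
    String::from_utf8(mem.read(out_ptr, out_len).to_vec()).unwrap()
}

fn main() {
    let mut mem = GuestMemory { bytes: Vec::new() };
    assert_eq!(host_eval(&mut mem, "1 + 1"), "result of: 1 + 1");
}
```

(Interface Types would eventually make this kind of hand-rolled glue unnecessary, as mentioned above.)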
Is the interface types proposal about to make progress?
That would be great news indeed, but considering the last two years with very little (public) progress, I'm not holding my breath.
there has been a whole lot of progress, but we haven't done too much active communication around it. Not yet for JS inside Wasm, but for some other language/host configurations you can use witx-bindgen to create bindings in ways that are very close to what Interface Types will look like. There's an online playground if you're interested
I've actually used witx and the wasmtime integration before.
I also try to read the wasm meeting notes from time to time, but it's a bit hard to really get much insight from those.
yeah, that's fair: they often require a whole lot of context to follow. This presentation might be of interest, though
btw. I'm trying to extract the fastly components of the library, but it's not so easy considering that js-compute-runtime.cpp and js-compute-builtins.cpp are basically files with 5xx/5xxx lines of code :-D
I mean, I think the simplest thing would be if it were possible to exclude the Fastly stuff with just an #ifdef/#ifndef and call a main function or something inside the JS
but huge props; even the code published so far is really helpful.
btw. I think this stuff can be used to make server-side rendering easier in MOST languages. At the moment it's a PITA!
Great to hear that it's helpful in its current state already!
I have plans to extract out a general-purpose runtime with hooks that our Fastly-specific builtins will use. And I very much agree about the large files being unwieldy, and plan on moving the builtins into their own files. It'll still be a bit until we're at that point, though
Let me ask another question about the "Making JavaScript run fast on WebAssembly" article. How did you build with wasi-sdk without C++ exceptions? As far as I browsed https://bugzilla.mozilla.org/show_bug.cgi?id=1701197 with its related tickets and patches, there was neither any change to stub out exceptions nor any passing of -fno-exceptions to clang. Did I miss something?
The SpiderMonkey JS engine (just like Google's V8 and IINM Apple's JSC) doesn't use C++ exceptions, so we didn't have to disable them
theduke said:
Cool, I'll keep an eye out.
It would also be very useful to have a standalone prebuilt module with the interpreter that exports something like a run_js function.
I'm currently using QuickJS for this, but a supported SpiderMonkey build would be awesome.
@theduke Curious whether you are using https://github.com/justjake/quickjs-emscripten. As far as I can tell, that project doesn't emit a wasm file. If not, would you or anyone have any pointers on how I could get QuickJS running with wasmtime?
I cobbled together a crate that can run JavaScript in wasmtime via SpiderMonkey.
It's rather limited right now: JS-to-host interop is only facilitated by a hostcall_str function, and host-to-JS basically only provides eval.
But it might be useful for others as is, or provide a starting point.
(It uses the new witx-bindgen code generator for both the spidermonkey and wasmtime bindings.)
(The spidermonkey wasm code is precompiled and embedded in the crate, so it can be used without worrying about wasi-sdk, compiling spidermonkey, etc.)
https://github.com/theduke/wasm-javascript/tree/main/wasmtime_js
Howdy, my apologies if this should be a new topic.
I've cobbled together a custom fastly compute@edge javascript runtime using QuickJS and some Web APIs such as Request and Response. I've followed along with the post https://bytecodealliance.org/articles/making-javascript-run-fast-on-webassembly and got Wizer to pre-initialize the engine's code. Wizer is really nice :-)
The article also mentions that the application can be sped up even more by making use of inline-cache stubs. I was wondering whether the code containing the inline-cache stubs and/or ahead-of-time compilation is publicly available to read at all?
Hi @Jake Champion -- the IC stubs experiment hasn't been released, as it wasn't quite fast enough to justify making use of the technique, but it's possible we'll use it in the future. AOT compilation doesn't yet exist so there's no code to release for that, unfortunately, though plans are in the works
Thanks for the response @Chris Fallin. I've read about ChowJS and their foray into AOT: they saw performance improve by 3.91x compared to running without AOT. Their engine is based on QuickJS. Do you know what the numbers were like for the IC experiment?
Here is a post from ChowJS engineers about their AOT implementation https://mp2.dk/techblog/chowjs/
In our case, an implementation of "portable ICs" (a separate function with a switch-in-a-loop interpreter structure, using indices of pre-built IC sequences) actually had no speedup over the simple interpreter; the best I was able to determine was that this was due to the dispatch and control-flow overhead. The experiment convinced me that we'd probably need a lower-cost form of control flow to make ICs work well; either a first-class Wasm feature for ICs (I had a draft writeup of something for this at one point but never shared it publicly) or use of e.g. tail calls plus funcrefs and some wasm engine opts to remove as much of the function call overhead as possible
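(To make the "switch-in-a-loop" structure concrete, here's roughly what that dispatch shape looks like, modeled in plain Rust. The opcodes, guard semantics, and data layout are invented for illustration; the point is that every IC step pays for an index load plus a branch through the match, which is the dispatch and control-flow overhead described above, whereas native IC stubs jump directly between specialized code.)

```rust
// Illustrative model of "portable ICs": the IC sequence is data -- a
// list of pre-built opcode entries -- executed by a small interpreter,
// rather than directly-executable stub code.

#[derive(Clone, Copy)]
enum IcOp {
    GuardShape(u32), // bail to the fallback if the object's shape differs
    LoadSlot(usize), // load a property from a known slot
    Fallback,        // generic slow path
}

struct Obj {
    shape: u32,
    slots: Vec<i64>,
}

// Pretend: a full generic property lookup.
fn slow_lookup(obj: &Obj) -> i64 {
    obj.slots[0]
}

fn run_ic(seq: &[IcOp], obj: &Obj) -> i64 {
    let mut pc = 0;
    loop {
        // Every step: load the opcode, then branch through the match --
        // per-operation dispatch cost a native IC stub wouldn't pay.
        match seq[pc] {
            IcOp::GuardShape(s) => {
                if obj.shape != s {
                    return slow_lookup(obj);
                }
                pc += 1;
            }
            IcOp::LoadSlot(i) => return obj.slots[i],
            IcOp::Fallback => return slow_lookup(obj),
        }
    }
}

fn main() {
    let seq = [IcOp::GuardShape(3), IcOp::LoadSlot(1), IcOp::Fallback];

    // Guard matches: fast path loads the cached slot.
    let obj = Obj { shape: 3, slots: vec![10, 42] };
    assert_eq!(run_ic(&seq, &obj), 42);

    // Guard fails: fall back to the generic lookup.
    let other = Obj { shape: 9, slots: vec![7] };
    assert_eq!(run_ic(&seq, &other), 7);
}
```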
@Chris Fallin how did you go about testing the performance of the runtime? I've been using the V8 Benchmark Suite (https://github.com/mozilla/arewefastyet/tree/master/benchmarks/v8-v7) but was wondering if there is a better way to test the performance.
The results I got for wasm versions of spidermonkey and quickjs are here -> https://gist.github.com/JakeChampion/454799e0bf649ac835cf8d1a8d2f159c
@Jake Champion for benchmarking I used Octane, just because it's what I had at hand; for this sort of thing any "CPU-intensive JS" that tests the core operations / interpreter loop is probably good enough for measuring relative improvements, I think
Last updated: Dec 23 2024 at 13:07 UTC