When I was naming the project I knew about JAWS the screen reader, but I didn't think it would lead to any confusion, since when you google JAWS you mostly get pages about Jaws, the movie. A screen reader user mentioned that it's already sometimes hard to find information about JAWS in the context of the browser (like JavaScript developer tools). @benmccann from SvelteKit suggested the name could be Jawsm (pronounced like "awesome") and I like that, so I changed the name to avoid any confusion.
Given the maturity of Joel's current WASI 0.3 prototype, I feel you could take a look at it for any newly starting projects, as async becomes so much easier with 0.3. Feel free to ask questions here; I adapted the code for native compilation and found it easy to understand, given Luke's excellent presentation in Barcelona.
Also, Joel will likely present more about it tomorrow at WasmCon.
Thanks for the info @Christof Petig, that's exciting! For now I'm using a few functions from WASIp2, mostly subscribe-duration, and maybe I'll add outgoing HTTP requests at some point to test things, but a broader WASI integration will come when I get to more builtin types and especially Node.js APIs. Which makes it even better, cause by the time I'm there, I'm guessing WASI 0.3 will be more widely used.
The only potential issue I can see is runtime support. The code I'm generating is already hard to run almost anywhere without host extensions. For now I added a few simple polyfill functions to mimic WASIp2 in Node.js/Chromium, but my aim is to be able to run on top of a bare runtime without any custom code. If Wasmtime gets WASI 0.3 support anytime soon, I will definitely give it a shot.
At the moment another blocker for using Wasmtime is exception handling support, but when I finish await/generators I might just have the tools to rewrite try...catch into WASM code that doesn't need the feature.
I have a small, but I think exciting update: I have a first working PoC of async/await handling in JAWSM. There are probably some edge cases it doesn't handle yet, but it handles await statements in conditionals and loops, so the rest is definitely doable. When I wrote the first post in this thread, on the 9th of November, I said I was working on async/await, but I dropped the work afterwards. The project was just not ready for it in terms of what was supported back then, and most importantly because of very limited tooling. Now that I generate most of the WASM code from Rust, adding new features got easier. On top of that, I also worked on a tool to traverse and modify WASM code, which made the implementation much easier.
I'm very happy with it, because this is one of the last two things that I still hadn't confirmed are doable (the other being generators support).
For those curious about the details, I use a WASM code transformer that splits the WASM code at the await point, saves the state and the stack, and later restores them. The effect is kind of similar to asyncify, but a bit more integrated into how I translate JS code. And rather than unwinding the entire stack, I do more of a CPS-style transform, i.e. a function using one await statement will become two functions: the first function keeps everything before the await keyword, and the second function is called once the awaited promise is resolved.
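To illustrate the idea in JavaScript terms (all names below are made up for illustration; the real transformation happens on the generated WASM, not on JS source):
// Made-up helpers, just so the sketch is runnable
const before = (x) => x + 1;
const after = (b) => b * 2;
const somePromise = (a) => Promise.resolve(a);

// Original shape:
async function f(x) {
  const a = before(x);
  const b = await somePromise(a);
  return after(b);
}

// Conceptually this becomes two functions plus a helper that wires up the callback:
function processPromise(promise, savedState, callback) {
  return promise.then((value) => callback(value, savedState));
}

function f_part1(x) {
  const a = before(x);
  // return to the caller right away; f_part2 runs once the promise resolves
  return processPromise(somePromise(a), { a, x }, f_part2);
}

function f_part2(b, savedState) {
  return after(b);
}

// f(1) and f_part1(1) both eventually produce 4
f_part1(1).then((result) => console.log(result));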
So I might have been a bit too fast to post the "victory post". When I was writing the above, I had the implementation mostly working with the ability to compile code like this:
function sleep(ms) {
  return new Promise((resolve) =>
    setTimeout(() => {
      resolve(ms);
    }, ms),
  );
}
async function foo() {
  let sum = 0;
  for (let i = 0; i < 4; i++) {
    sum += await sleep((i+1) * 100);
    console.log("awaited ", (i+1) * 100);
  }
  console.log("Total await time", sum);
}
foo();
The problem is, I still had some edge cases to handle, most notably support for try/catch. I thought it wouldn't be hard to implement, but it is, as one might have expected if not blinded by optimism, indeed hard. Long story short, I didn't have much time to work on the project, and the time I had was hardly enough for a successful attempt. I pushed what I have for now to GitHub, so if anyone is curious, you can take a look at the async-generators-wip branch (a small warning: the code of the entire project is not of a quality I'd normally produce, mostly cause it's still in the proof of concept stage). For example this test asserting on how nested blocks are split into a series of callback functions might be interesting: https://github.com/drogus/jawsm/blob/b3cc0fe818f3ad85bf171e2e39eb0d3742894052/src/await_keyword_transformer.rs#L947 (the blocks have to be split into two parts and then each of the br calls has to split the code again).
About a week ago I kinda gave up on the callbacks approach and was thinking that, with the limited amount of time I have, I should just use asyncify, as the implementation would have been much simpler. I started working on it, but today it turned out asyncify does not support exceptions either.
A good thing is that while working on the asyncify version I discovered that my implementation is slightly broken for regular "thenables" (for example a simple object with a then method). Take this code:
let foo = {
  then: function(onFulfilled) {
    console.log("calling then");
    onFulfilled("foo");
  }
};
async function fooFunction() {
  console.log("fooFunction, before await");
  let fooValue = await foo;
  console.log("fooValue", fooValue);
}
fooFunction();
console.log("done");
Should print:
fooFunction, before await
done
calling then
fooValue foo
But with my implementation it will print:
fooFunction, before await
calling then
fooValue foo
done
That's because in my current implementation I only pause the execution on calls that give control back to the runtime (like setTimeout or, in the future, I/O operations).
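Roughly speaking (this is an illustration of the spec behaviour, not what JAWSM generates), awaiting the thenable is supposed to behave as if the foo.then call was deferred to a later microtask:
// awaiting a thenable defers the foo.then call to a later microtask,
// so the synchronous console.log("done") runs before "calling then"
queueMicrotask(() => {
  foo.then(
    (value) => { /* resume the async function with `value` */ },
    (error) => { /* resume the async function by throwing `error` */ },
  );
});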
So where do I go from here? Given that I have the CPS version mostly working, I think I'll at least fix it to handle thenables properly, and when I have more time I'll give exceptions another try. That said, being quite close to making asyncify work makes me want to finish that too and compare the two approaches.
The main three reasons why I went with CPS-style transformations were:
Given that I now have both implementations almost ready (minus the exceptions part), I am very curious to benchmark both versions and see if my worries about excessive calls and/or de/serialization are warranted. Due to my work/personal obligations it likely won't happen soon, but I'll definitely post about it when I have something to show!
A small weekend win: I fixed the current CPS-style version to work properly with "thenables", i.e. both a regular object with a then method and Promises that return to the host work.
I also wanted to write about a plan to implement transformations inside try/catch blocks. It will introduce quite a bit of repetition, but I think it's the easiest thing to do for now, and I'm hoping that by the time the project is anywhere close to being usable I will be able to rewrite the whole thing to use the stack-switching proposal. I know stack-switching code is being upstreamed to Wasmtime, but Wasmtime doesn't support exception handling yet, so I'm relying mostly on Chromium at the moment.
Anyhow, it seems to me that the easiest way to transform code inside of try/catch is to basically copy the try/catch part with every transformation. So, for example, the following simplified function:
(func $foo
  (local $counter i32)
  i32.const 0
  local.set $counter
  try
    i32.const 1
    call $log
    loop $test-loop
      local.get $counter
      i32.const 1
      i32.add
      local.set $counter
      call $produce_a_promise
      call $await
      local.get $counter
      i32.const 5
      i32.lt_s
      br_if $test-loop
    end
  catch $AnException
    i32.const 2
    call $log
  end
  i32.const 3
  call $log
)
could be transformed into something like this:
;; the new $foo function needs to execute the code before the loop
;; and to enter the loop by calling the loop start function,
;; keeping the try/catch around the original parts of the code
(func $foo
  (local $counter i32)
  i32.const 0
  local.set $counter
  try
    i32.const 1
    call $log
    call $loop-test-loop-start
  catch $AnException
    call $log
  end
)
;; this is the loop entry point. It carries the try/catch code, too.
;; Normally it would continue with the code inside and after the loop,
;; but here we also had an await point, which we need to split at
(func $loop-test-loop-start
  try
    local.get $counter
    i32.const 1
    i32.add
    local.set $counter
    call $produce_a_promise
    ref.func $foo-promise-callback-1
    call $process_promise
  catch $AnException
    call $log
  end
)
;; And finally the promise callback that has the rest of the code.
;; The tricky part is what comes after the loop. We have to call the
;; code that comes after the loop only if we did not jump to the beginning,
;; but it still needs to happen outside of the try/catch block.
;; Without try/catch it's quite a bit simpler, cause I can handle both cases
;; with the same conditional and I don't have to extract whatever is used as
;; an if condition
(func $foo-promise-callback-1
  try
    local.get $counter
    i32.const 5
    i32.lt_s
    if
      call $loop-test-loop-start
    end
  catch $AnException
    call $log
  end
  local.get $counter
  i32.const 5
  i32.lt_s
  i32.eqz
  if
    i32.const 3
    call $log
  end
)
So as you can see there is quite a bit of code repetition here, but also some tricky parts around transforming code in the context of blocks or loops. I'll also have to consider nested try/catch blocks with nested loops/blocks. Once I have a good idea of how these edge cases should be handled I'll probably try implementing it again, hopefully some time in the next two weeks.
I had some time to work on the project this weekend and I think I'm mostly done with the transformations :tada: I'm sure there are some edge cases I haven't run into yet, but for example this code outputs exactly the same text as Node.js:
let foo = {
  then: function(onFulfilled) {
    console.log("calling then");
    onFulfilled("foo");
  }
};
async function fooFunction() {
  let i = 0;
  try {
    console.log("Before loop");
    while (i < 3) {
      console.log("fooFunction, before await inside loop, i:", i);
      let fooValue = await foo;
      console.log("fooValue", fooValue, "i value:", i);
      if (i == 2) {
        throw "an error, and fooValue is " + fooValue;
      }
      i++;
    }
    console.log("After loop");
  } catch(error) {
    console.log("Caught error:", error);
  }
  console.log("After try catch");
}
fooFunction();
console.log("done");
I'll have to clean up the code cause it's a mess now, and I'll try to come up with more examples for testing, including nested try/catch statements and various nestings of loops and try/catch blocks, but I'm hoping I won't need any massive changes now.
If everything goes well I'll try to tackle generators next, and if that goes well I'll have most of the known unknowns confirmed. Then I'll probably try to fix some of the places that misbehave (like implicit type coercion for certain operators), finish the two last bigger missing syntax pieces (regex literals and BigInt literals), and then I'll finally be ready to implement the builtins themselves, which should rapidly increase the percentage of compatibility with the JS spec :fingers_crossed:
One thing that made it easier to implement the transformations was realising that I was making my life harder by using call instructions instead of return_call instructions. At some point I was thinking about the stack size and realised that I have to use return_call and not call in order to not deepen the stack. I'm not sure why I didn't think of it in the first place, but returning early when calling the callbacks makes things much easier even if we ignore the stack size. For example, the last function in the example from my previous message looks like this:
(func $foo-promise-callback-1
  try
    local.get $counter
    i32.const 5
    i32.lt_s
    if
      call $loop-test-loop-start
    end
  catch $AnException
    call $log
  end
  local.get $counter
  i32.const 5
  i32.lt_s
  i32.eqz
  if
    i32.const 3
    call $log
  end
)
The problematic part is that the code after the try/catch has to be executed only if the loop does not "jump". But if we swap the call instruction for a return_call instruction, the problem pretty much disappears, cause the jump skips the rest of the code:
(func $foo-promise-callback-1
  try
    local.get $counter
    i32.const 5
    i32.lt_s
    if
      return_call $loop-test-loop-start
    end
  catch $AnException
    call $log
  end
  i32.const 3
  call $log
)
Hey, a bit late, but it just occurred to me that it's possible you hadn't seen this - the async work on P3:
In-progress wasmtime fork you can try out here:
https://github.com/bytecodealliance/wasip3-prototyping
Test programs here; you can build these and inspect how they work and what they import/export (callers, callees, etc.):
https://github.com/bytecodealliance/wasip3-prototyping/blob/main/crates/test-programs
Reusable environ stuff is everywhere but this list is nice:
https://github.com/bytecodealliance/wasip3-prototyping/blob/eef4b0037eca391234f5600881f17c71052b852d/crates/environ/src/component.rs#L143
Easy to read through impl:
https://github.com/bytecodealliance/wasip3-prototyping/blob/eef4b0037eca391234f5600881f17c71052b852d/crates/wasmtime/src/runtime/component/concurrent/futures_and_streams.rs
Specs and stuff:
https://github.com/WebAssembly/component-model/blob/main/design/mvp/Async.md
https://github.com/WebAssembly/component-model/blob/main/design/mvp/CanonicalABI.md
https://github.com/WebAssembly/component-model/blob/main/design/mvp/Explainer.md#asynchronous-value-types
https://github.com/WebAssembly/component-model/blob/main/design/mvp/Explainer.md#-async-built-ins
If I'm understanding your work here, you're taking a bit of an emscripten asyncify-style approach to async, which is definitely great/known to work, but I'd also love it if JAWSM was WASI P3-forward as well!
@Victor Adossi thanks for the link! No, I haven't seen that yet, but it looks very interesting!
Regarding your question - yeah, right now what I'm doing is a set of code transformations that convert async functions in a way that makes it possible to "pause" execution mid-function. Asyncify does this by implementing stack unwinding. I do it with CPS transforms, i.e. whenever I need a "split point" I return from the function and allow execution to get back to the same code by calling a callback and supplying the previous state to it.
That said, if I understand correctly, the WASIp3 proposal won't replace the stack-switching capabilities for me, and I still have to do that myself, but it could potentially allow me to run multiple JavaScript instances in an async manner, thus lowering the memory and CPU requirements per instance, especially if a lot of I/O code is involved. And what I'm doing is a prerequisite, considering this paragraph from the "Async Explainer":
This switching may require the use of fibers or a CPS transform, but may also be avoided entirely when a component's producer toolchain is engineered to always return to an event loop.
I'll definitely explore it more and thanks again for the links!
Whew glad I suggested it -- surprised you hadn't seen it yet, but maybe we're not doing enough evangelism of our work just yet!
Ah yes, so this is very much not stack-switching -- though that is planned post 0.3 -- it is definitely more for in-component, non-parallel async!
surprised you hadn't seen it yet, but maybe we're not doing enough evangelism of our work just yet!
Honestly, I have very little time for my side projects these days, so it may well be that I just haven't spent enough time staying up to date. I only read a bit about stack switching lately, cause I was hoping to use it soon-ish, but other than that I mostly work with what I have, to increase the percentage of JS compatibility as much as possible.
Another weekend, another small win! I implemented generator functions. I didn't have more time to test it thoroughly, but the basics definitely work, and it's based on the same transformations that I use for await, so I expect most of the language constructs to work correctly in conjunction with generators. As an example, the following script compiled with JAWSM outputs the same result as Node.js:
function *gen() {
  console.log('generator starts');
  const val = yield 99;
  console.log('yield val:', val);
  let i = 0;
  while (i < 3) {
    yield i;
    i += 1;
  }
  return val * 2;
}
let it = gen();
let next = it.next(50);
console.log("next", next.value, "done", next.done);
next = it.next(51);
console.log("next", next.value, "done", next.done);
next = it.next(52);
console.log("next", next.value, "done", next.done);
next = it.next(53);
console.log("next", next.value, "done", next.done);
next = it.next(54);
console.log("next", next.value, "done", next.done);
This validates loops and the return value. The next step is, unsurprisingly, to support async generators, which I think shouldn't be hard, but I also haven't had enough time to fully think it through, so I guess we'll see soon.
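To give an idea of what the transformation amounts to (this is just an illustrative hand-written JavaScript sketch, not what JAWSM actually generates), each yield becomes a split point and next() re-enters at the saved state, so the gen function above behaves roughly like this:
function gen() {
  let state = 0, i = 0, val;
  return {
    next(arg) {
      switch (state) {
        case 0: // everything up to the first `yield 99`
          console.log('generator starts');
          state = 1;
          return { value: 99, done: false };
        case 1: // resume at `const val = yield 99`
          val = arg;
          console.log('yield val:', val);
          i = 0;
          state = 2;
          return this.next(arg);
        case 2: // the while-loop condition check
          if (i < 3) {
            state = 3;
            return { value: i, done: false };
          }
          state = 4;
          return { value: val * 2, done: true };
        case 3: // resume after `yield i`
          i += 1;
          state = 2;
          return this.next(arg);
        default: // the generator already finished
          return { value: undefined, done: true };
      }
    },
  };
}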
I didn't have a lot of time to work on JAWSM in the recent month, but the last few days have brought quite a lot of progress in the fundamentals and a beginning of built-ins support. When I started working on the project I hand-waved my way through a big part of the fundamentals, like operators. I only had numbers working, and my goal was primarily to have a proof of concept, so it didn't make sense to dive too deep into the details there. Now that I have a proof of concept of the semantics and support for most basic types, I decided it's time to strengthen the fundamentals. So my efforts went into implementing various abstract operations properly, like "ToNumber", "ToPrimitive", and "IsLessThan", which are needed for a lot of higher-level stuff and for operators to work properly. And then I went for an Array.prototype.* implementation and basic BigInt support.
Long story short, most of the Array functions now work mostly properly, operators like ==, ===, < etc. work much closer to the spec, and today I ended up with ~24.5% of tests passing in the Test262 test suite, with proposals excluded (to be more exact, 20582 out of 84416 tests now pass).
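To give a flavour of why these abstract operations matter for operators (a simplified sketch of the spec's IsLessThan and ToPrimitive, ignoring Symbol.toPrimitive, BigInt mixing and a few other details; not my actual implementation), the < operator boils down to something like this:
function isLessThan(x, y) {
  // both operands go through ToPrimitive with a "number" hint first
  const px = toPrimitive(x);
  const py = toPrimitive(y);
  if (typeof px === 'string' && typeof py === 'string') {
    return px < py;                 // two strings compare lexicographically
  }
  return Number(px) < Number(py);   // otherwise ToNumber on both sides
}

function toPrimitive(value) {
  if (value === null || (typeof value !== 'object' && typeof value !== 'function')) {
    return value;                   // already a primitive
  }
  // OrdinaryToPrimitive with a "number" hint: try valueOf, then toString
  for (const name of ['valueOf', 'toString']) {
    const method = value[name];
    if (typeof method === 'function') {
      const result = method.call(value);
      if (result === null || (typeof result !== 'object' && typeof result !== 'function')) {
        return result;
      }
    }
  }
  throw new TypeError('Cannot convert object to primitive value');
}

// e.g. isLessThan({ valueOf: () => 3 }, 5) === true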
While I know about some things that still don't work in the context of async/await and generators (most notably generator delegation, but also edge cases and small inconsistencies), I think for the next month or two I'll continue down the route of "smaller" stuff, like builtins support, as I got kind of tired of the whole code transformation thing that I use for generators and async/await. Another reason is that I would like to get to a point where the project is usable at least for simpler code, and working builtins and operators are much more common than generator delegation. And yet another reason is that I wanted to work on something simpler and less mentally taxing. For example, I took a big portion of the Array methods implementation from the awesome https://github.com/zloirock/core-js project (which I attributed in my code, so it's clear where it comes from), which definitely made it much simpler to move forward. I had to implement some of the stuff myself, cause core-js delegates to native functions from time to time, but it's still on a much simpler level of complexity than figuring out the proper way to transform the WASM AST.
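For a flavour of what these builtins look like (a simplified sketch, not the actual code in the repo and not fully spec-compliant), something like Array.prototype.forEach can be written in plain JavaScript on top of the primitives that already work:
// Simplified: uses ToUint32 for the length instead of the spec's ToLength
Array.prototype.forEach = function (callback, thisArg) {
  if (typeof callback !== 'function') {
    throw new TypeError(callback + ' is not a function');
  }
  const O = Object(this);
  const len = O.length >>> 0;
  for (let k = 0; k < len; k++) {
    if (k in O) {                   // skip holes in sparse arrays
      callback.call(thisArg, O[k], k, O);
    }
  }
};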
Seeing big progress like that in the spec coverage (I jumped from ~18% to ~24.5% in less than a week), I'm pretty sure I'll be able to hit 50-60% coverage in the next month or two. Which, of course, doesn't mean linear growth all the way to 100%. I wouldn't be surprised if I can hit 50% of Test262 coverage within about half a year of starting the project (I'm now at ~5.5 months), and then have the other 50% take at least a year more, given I only work on the project in my free time.
There are a few reasons for that. The most obvious one is that implementing a single function like Array.prototype.slice can easily give a boost of a few hundred passing tests, but fixing the last 20 or 30 edge cases for that function can take way more time than implementing it in the first place. Then there are bugs and features that I know will be hard to implement, like generator delegation: I think it can easily take weeks to implement, but it won't change the number of passing tests meaningfully. And then there is a very long tail of small inconsistencies that are pretty much impossible to fix without going through each of the failed tests.
One more thing I would like to add is that at some point I will desperately need to invest in tooling. I hinted at the experience of using a 10k-line wasm! macro before, but there is more stuff like that - for example, right now all of the stuff implemented in JavaScript (like most of Array.prototype) is in one big file that is simply prepended to the script you want to compile.
That's it for now and I'm hoping to get back with more good news soon!