As I mentioned at last week's meeting, I'm planning to do an overview of the implementation work described in the Async RFC I posted a couple of months ago. It's currently scheduled for .
I've sent an invite to those who have already expressed interest. Please DM me your email address if you want me to add it to the invite.
Here's the recording: (YouTube link)
and document: https://hackmd.io/LQhcwqb9QiyH-YNMyikMHA?view
Thanks! I had a conflict with that meeting time, so I really appreciate the recording!
I just wanted to say thanks again! That was very informative!
The only things I haven't fully understood yet, I expect to understand by reading the RFCs and checking the PRs :praise:!
Regarding:
Promise: like Future, but requires a StoreContextMut<T> to make progress
func_wrap_concurrent
This reminds me of an old PR of mine where I ran into nearly the same problem; namely that WASI pollables need temporary mutable access to the ResourceTable, only for the duration of the poll method, but not in between.
The referenced PR bounced back and forth between different solutions, and ultimately didn't land anything. Anyhow, one of the solutions was to have a custom WasiFuture trait which has an additional &mut Store parameter on its poll method. Also, the WasiFuture trait was auto-implemented for all regular Futures. Of course this bifurcates the Futures ecosystem. But just throwing it out there.
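A minimal sketch of that shape, with a placeholder Store type (names are illustrative, not necessarily what the old PR used):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Placeholder for the engine's store type.
struct Store;

// A Future-like trait whose poll additionally receives mutable store access.
trait WasiFuture {
    type Output;
    fn poll(self: Pin<&mut Self>, store: &mut Store, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

// Blanket impl: every ordinary Future is also a WasiFuture that simply ignores the store.
impl<F: Future> WasiFuture for F {
    type Output = F::Output;
    fn poll(self: Pin<&mut Self>, _store: &mut Store, cx: &mut Context<'_>) -> Poll<Self::Output> {
        Future::poll(self, cx)
    }
}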
Yeah, I was thinking of something similar. What I couldn't figure out was how to interoperate ergonomically with regular Futures and async/await.
The blanket WasiFuture implementation sounds interesting; hadn't thought of that.
Here's an example of using Promise and PromisesUnordered (akin to futures::FuturesUnordered, used to multiplex multiple Promises concurrently) to juggle multiple concurrent streams, futures, and export calls in a somewhat realistic wasi-http scenario: https://github.com/bytecodealliance/wasmtime/pull/9582/files#diff-3c649a56157956a049a3450ccba943197b5b50dde10bcd1917a6c236bd7329ecR929-R1121
It's not super idiomatic, but it's not horrible either, IMHO.
And here's a simpler example where we just start three concurrent calls to the same exported function and wait for them all to complete: https://github.com/bytecodealliance/wasmtime/pull/9582/files#diff-3c649a56157956a049a3450ccba943197b5b50dde10bcd1917a6c236bd7329ecR481-R492
BTW, those links don't link to anything specific for me; they both load the entire diff, from the top.
Yeah, maybe the diff is too huge for GH to allow deep linking; just fixed the first one; will fix the second
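For anyone unfamiliar with the analogy mentioned above, this is the plain futures::stream::FuturesUnordered pattern that PromisesUnordered mirrors (ordinary futures-crate code, not the wasmtime API; fetch is a stand-in for real async work):

use futures::stream::{FuturesUnordered, StreamExt};

// Stand-in for some real async work.
async fn fetch(i: u32) -> u32 {
    i * 2
}

async fn run() {
    // Push several futures into one set and drive them concurrently,
    // receiving each result as soon as it completes, in any order.
    let mut in_flight = FuturesUnordered::new();
    for i in 0..3 {
        in_flight.push(fetch(i));
    }
    while let Some(result) = in_flight.next().await {
        println!("completed: {result}");
    }
}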
Regarding the async/await ergonomics:
If I summarize correctly, for the regular non-async case, passing the store as a mutable reference is fine:
linker.func_wrap(|store: &mut Store| {
    blabla1(store);
    blabla2(store);
});
(heavily pseudo coding here :P )
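(For a point of reference, the existing synchronous core-wasm API already works roughly like this; MyState and its counter field are made up for illustration:)

use anyhow::Result;
use wasmtime::{Caller, Engine, Linker};

struct MyState {
    counter: i32,
}

fn build_linker(engine: &Engine) -> Result<Linker<MyState>> {
    let mut linker = Linker::<MyState>::new(engine);
    // The host closure gets synchronous, exclusive access to the store data
    // via `Caller` for the duration of the call.
    linker.func_wrap("host", "bump", |mut caller: Caller<'_, MyState>, x: i32| -> i32 {
        caller.data_mut().counter += x;
        caller.data_mut().counter
    })?;
    Ok(linker)
}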
The problem in the async variant is that the async method should get access to the Store somehow, but isn't allowed to hold it across await boundaries. I.e., this is not OK:
linker.func_wrap_concurrent(|store: &mut Store| async {
    blabla1(store);
    something.await;
    blabla2(store); // `store` shouldn't be used across await-points!
});
Right; non-awaiting host functions are easy: they can have exclusive access to the store since they return immediately (unless one goes off and does a long computation, I guess; in that case it should spawn a task on Tokio's blocking thread pool and await that). And if a host function needs to await, we can't let it hold exclusive access to the store that whole time, since that would prevent anything else from happening for that component instance.
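(Something along these lines for the long-computation case; expensive_computation is a stand-in for the actual work:)

// Offload a long-running computation to Tokio's blocking pool and await the
// handle, rather than blocking the async host call directly.
async fn host_call(input: Vec<u8>) -> anyhow::Result<u64> {
    let result = tokio::task::spawn_blocking(move || expensive_computation(&input)).await?;
    Ok(result)
}

// Stand-in for whatever the expensive work actually is.
fn expensive_computation(input: &[u8]) -> u64 {
    input.iter().map(|&b| b as u64).sum()
}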
So, instead of passing the store directly, maybe we can pass some kind of "accessor" (which _can_ be used across awaits) which yields temporary accesses to the store (which can _not_ be used across await points):
linker.func_wrap_concurrent(|store_acc: &mut StoreAccessor| async {
    let mut guard: StoreGuard = store_acc.get();
    blabla1(&mut guard);
    something.await;
    blabla2(&mut guard); // `guard` can not be used across await points
});
similar-ish to Mutex & MutexGuard? With the exception that we don't actually need any locking here
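(The guard pattern being referenced, with the actual locking that a store accessor wouldn't need:)

use std::sync::Mutex;

fn main() {
    let shared = Mutex::new(0u32);

    {
        // Access flows through a short-lived guard rather than a long-lived
        // reference, and ends when the guard is dropped.
        let mut guard = shared.lock().unwrap();
        *guard += 1;
    } // guard dropped here, lock released

    println!("value = {}", *shared.lock().unwrap());
}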
How would StoreAccessor be implemented? (And keep in mind we have to include a <T> where T might not be 'static, so that makes things more interesting.)
I recall exploring something like that myself, but I think it was a dead-end; could be that I missed something, though.
I also thought about using thread- or task-locals, but that was also a dead-end due to lifetime-related issues.
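(A sketch of one place the task-local idea runs aground: the slot is a static, so whatever is stored in it has to be 'static; REQUEST_ID here is purely illustrative:)

tokio::task_local! {
    // Fine: u64 is 'static.
    static REQUEST_ID: u64;
    // Not expressible: a borrowed store, or a store over a non-'static T,
    // can't live in a task-local because the stored type must be 'static.
    // static STORE: &mut Store<T>;
}

async fn handler() -> u64 {
    REQUEST_ID.with(|id| *id)
}

#[tokio::main]
async fn main() {
    let answer = REQUEST_ID.scope(42, handler()).await;
    println!("{answer}");
}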
Under the hood, we can pass a *mut dyn VMStore around with impunity, but wrapping a safe, correct API around that seems impossible given the lifetime issues.
I even considered experimenting with this unstable feature, but again I ran into lifetime issues (can't use dyn Any for non-'static types).
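(Concretely, the 'static wall looks like this:)

use std::any::Any;

// A type that borrows, i.e. carries a lifetime.
struct Borrowing<'a> {
    data: &'a mut u32,
}

fn erase<T: Any>(value: T) -> Box<dyn Any> {
    Box::new(value)
}

fn demo(data: &mut u32) {
    let b = Borrowing { data };
    // Does not compile: `Any` has an implicit `'static` bound, so a type with
    // a non-'static lifetime can't be erased behind `dyn Any`.
    // let _erased = erase(b);
    let _ = b;
}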
If we were to start adding T: 'static bounds everywhere, we'd have a lot more options, but that's a tough sell.
And maybe it wouldn't help much anyway given that we're still talking about raw pointers and all the attendant dangers.
I was thinking in terms of some kind of task-local, and indeed hadn't thought about non-'static Ts :)