brendandburns commented on issue #6091:
Now that #5929 has merged, I think this is ready for review. We may want to merge #6161 first since this PR includes that one.
pchickey commented on issue #6091:
Hi, sorry, I haven't forgotten about this PR, I have just been focusing on getting some other stuff done this week and haven't had time to review. I have set aside some time for it tomorrow.
brendandburns commented on issue #6091:
@pchickey no worries, I'm not in a rush.
pchickey commented on issue #6091:
Can this be rebased on top of https://github.com/bytecodealliance/wasmtime/pull/6228 (or `main` once it lands) now?
brendandburns commented on issue #6091:
Yes, I plan to rebase this once it merges.
brendandburns edited a comment on issue #6091:
Yes, I plan to rebase this once #6228 merges.
While we're on the topic of rebasing, do you have a good strategy for rebasing in this repo since we're doing squash merges? My standard flow is `git fetch upstream; git rebase upstream/main`, but that seems to cause conflicts when rebasing on top of the squashed PR that has merged into main.
pchickey commented on issue #6091:
I end up using `git rebase -i` and delete all the lines that correspond to the squashed PRs that already landed.
brendandburns commented on issue #6091:
@pchickey this is rebased and ready for initial review.
brendandburns commented on issue #6091:
@pchickey thanks for the review and the idiomatic suggestions. I will update this PR to reflect the async request above and fix the idiomatic bits too (and add a test)
It will probably take me a few days to get enough free cycles to get it done, but should be by the end of the week.
brendandburns commented on issue #6091:
@pchickey as I dug into this it seems that a bunch of components (e.g. the `Store`) don't implement the `Send` trait, which means they can't be passed into an async function (at least as far as I can tell).
Do you have a perspective on if it is ok to modify them to implement `Send`, or if there is some other approach that is preferable?
pchickey commented on issue #6091:
Your "context" struct
WasiHttp
will need to impl Send in order for Store<WasiHttp> to be used by the async wasmtime apis.The good news is the design goal for the wasmtime async apis was to work perfectly inside a hyper service, https://github.com/fastly/Viceroy/blob/main/lib/src/execute.rs was the prototype where we actually fleshed all that design out a few years ago. So maybe you can crib some of the patterns there. The bad news is that async Rust has a steep learning curve, and the error messages from rustc aren't quite as brilliant as they are for the rest of the language. If you need to pair program on this for an hour sometime, my calendar next week is pretty open, but I am taking off Friday.
pchickey edited a comment on issue #6091:
Your "context" struct
WasiHttp
will need to impl Send in order for Store<WasiHttp> to be used by the async wasmtime apis. If the struct is composed out of Send elements this should be automatic - if rustc is not letting you have a Send impl, don't derive an unsafe one.The good news is the design goal for the wasmtime async apis was to work perfectly inside a hyper service, https://github.com/fastly/Viceroy/blob/main/lib/src/execute.rs was the prototype where we actually fleshed all that design out a few years ago. So maybe you can crib some of the patterns there. The bad news is that async Rust has a steep learning curve, and the error messages from rustc aren't quite as brilliant as they are for the rest of the language. If you need to pair program on this for an hour sometime, my calendar next week is pretty open, but I am taking off Friday.
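To make the `Send` requirement concrete, here is a minimal sketch (not code from this PR; `Ctx`, `run_request`, and the tiny wat module are made-up names for illustration, and it assumes wasmtime with the async feature enabled plus tokio and anyhow) of a context type whose fields are all `Send`, used with the `*_async` wasmtime APIs:

```rust
use anyhow::Result;
use wasmtime::{Config, Engine, Linker, Module, Store};

// Every field is Send, so `Ctx` is Send automatically; no unsafe impl is needed.
struct Ctx {
    request_id: u64,
}

async fn run_request(engine: &Engine, module: &Module, request_id: u64) -> Result<()> {
    // Build the store (and, for brevity, the linker) inside the execution.
    let mut store = Store::new(engine, Ctx { request_id });
    let linker: Linker<Ctx> = Linker::new(engine);
    // With async support enabled on the Engine, instantiation and calls must use
    // the *_async variants, and those require the store's data type to be Send.
    let instance = linker.instantiate_async(&mut store, module).await?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call_async(&mut store, ()).await?;
    println!("finished request {}", store.data().request_id);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    let mut config = Config::new();
    config.async_support(true);
    let engine = Engine::new(&config)?;
    let module = Module::new(&engine, r#"(module (func (export "run")))"#)?;
    run_request(&engine, &module, 1).await
}
```

Because every field of `Ctx` is `Send`, the auto trait applies and `Store<Ctx>` can be held across `.await` points, which is the situation described above.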
brendandburns commented on issue #6091:
The biggest problem that I see is that in order to make a call into the WASM module you need to pass the `Linker` and the `Store` into the async function, and neither of those implement `Send`.
The reason that I need to pass it is because the only time that you can get access to that is when the context is constructed (at least as far as I can tell). If there was a way to statically get at the `Linker` and the `Store` then it would just be a question of making my context `Send`, but those other wasmtime resources need to be part of the context also.
Looks like you worked around this by constructing the store and linking _inside_ each HTTP execution:
https://github.com/fastly/Viceroy/blob/main/lib/src/execute.rs#L301
Which might be possible here, but would require some pretty significant rewiring since that is created/configured via flags etc in a different place.
pchickey commented on issue #6091:
That's right - the pattern required is to construct a store inside each execution so we can give the host functions a mut ref to the Store's `T`; otherwise you would need to use `Arc<Mutex<>>` and lock every use to get something Send and Sync to communicate with outside of the execution. The context will have to be designed to take this into account. We pass in the LinkerPre structure to amortize linking, but that will be up to the embedder. `Store::new` and `LinkerPre.instantiate_async` are designed to be extremely fast when used with the memory pooling features in the Engine.
pchickey edited a comment on issue #6091:
That's right - the pattern required is to construct a store inside each execution so we can give the host functions a mut ref to the Store's `T`; otherwise you would need to use `Arc<Mutex<>>` and lock every use to get something Send and Sync to communicate with outside of the execution. The context will have to be designed to take into account that it is created from the embedder's per-HTTP-request execution, not the other way around.
Viceroy passes in the LinkerPre structure to amortize linking, but that will be up to the embedder. `Store::new` and `LinkerPre.instantiate_async` are designed to be extremely fast when used with the memory pooling features in the Engine.
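A rough sketch of that pattern follows, assuming the "LinkerPre" mentioned above corresponds to wasmtime's `InstancePre` (obtained from `Linker::instantiate_pre`, which in recent wasmtime versions no longer takes a store); `RequestCtx`, `handle_request`, and the host function are hypothetical names used only for illustration:

```rust
use anyhow::Result;
use wasmtime::{Caller, Config, Engine, InstancePre, Linker, Module, Store};

// Per-request context; every field is Send, so the async APIs accept Store<RequestCtx>.
struct RequestCtx {
    path: String,
    host_calls: u32,
}

async fn handle_request(engine: &Engine, pre: &InstancePre<RequestCtx>, path: String) -> Result<()> {
    // A brand-new Store per execution; with the pooling allocator configured on the
    // Engine this is intended to be cheap.
    let mut store = Store::new(engine, RequestCtx { path, host_calls: 0 });
    let instance = pre.instantiate_async(&mut store).await?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call_async(&mut store, ()).await?;
    println!("{}: {} host calls", store.data().path, store.data().host_calls);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    let mut config = Config::new();
    config.async_support(true);
    // (The pooling instance allocator could also be configured here for fast instantiation.)
    let engine = Engine::new(&config)?;
    // A toy module; it does not actually import the host function below, which is shown
    // only to illustrate how host code gets a mutable reference to the per-request context.
    let module = Module::new(&engine, r#"(module (func (export "run")))"#)?;

    let mut linker: Linker<RequestCtx> = Linker::new(&engine);
    linker.func_wrap("host", "tick", |mut caller: Caller<'_, RequestCtx>| {
        caller.data_mut().host_calls += 1;
    })?;

    // Do the linking work once; instantiate per request from the resulting InstancePre.
    let pre = linker.instantiate_pre(&module)?;
    handle_request(&engine, &pre, "/hello".to_string()).await
}
```

Each execution gets its own `Store`, so host functions receive a mutable reference to the per-request context directly and nothing needs `Arc<Mutex<_>>`; the linking work is done once and amortized across requests.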
brendandburns commented on issue #6091:
Just a quick update that this PR is proving to be more thorny/invasive than I thought b/c of the interplay of Store/Linker/Host.
I think that there needs to be some significant refactoring of how the `run` command is written to make parts of it accessible in other parts of the code-base.
I'm still working on it (slowly), but it might be a bit of time before this arrives.
brendandburns commented on issue #6091:
@pchickey ok, I got this working with the switch to `call_async` and `instantiate_async`.
Can you take a look and see if this is what you had in mind?
If it looks directionally correct, I'll work on the style and tests.
Thanks!
brendandburns commented on issue #6091:
@pchickey does your :+1: mean that this looks correct? Or did it just mean "acknowledged, I will take a look when I get a chance"?
Sorry, the meaning of response emojis is sometimes hard to parse :)
pchickey commented on issue #6091:
Hi, sorry, the +1 was an acknowledgement that I was going to review it, but I haven't had time to spend on it yet; this week I have been heads down trying to move all of the preview2-prototyping repo into this one.
Taking a brief look now, this didn't turn out exactly like I had envisioned, but I am not sure exactly what to suggest to change; I will have to play with refactoring it a bit myself to figure out the right direction. Sorry that isn't very actionable feedback. I will spend time on this next week.
brendandburns commented on issue #6091:
@pchickey sounds good. I actually spent a bunch of time on the refactoring also; I took three different paths at it, and this was the one that seemed most "right".
But I'm definitely open to other options.
The most radical one would be to make all wasmtime executions `async` instead of the current `sync` default, but I considered that to be too significant of a change to entertain.
brendandburns commented on issue #6091:
@pchickey this is updated with a rebase to reflect the recent wit changes.
brendandburns commented on issue #6091:
@pchickey friendly ping on this, I suspect that you are busy on other things, so no rush, but wondering what the status of this is.
Thanks!
pchickey commented on issue #6091:
Right now I am working on getting the preview 2 host streams and pollable implementation landed in #6556. That is the prerequisite for getting wasi-http integrated with the rest of the component model implementation in wasmtime.
Once #6556 lands we need to do pretty hefty renovations to the wasi-http implementation to rebase it on top of that work. @dicej has successfully made a new implementation based on a somewhat similar implementation of streams and pollable: https://github.com/fermyon/spin/blob/132a32daed6632768d52a9336125073e87fee1e8/crates/wasi-cloud/src/http.rs. The current consensus is that Joel will spearhead the effort to get wasmtime-wasi-http running on top of streams - we are expecting this to more or less amount to a rewrite as he ports his work from the spin repo into this crate. His upstream work also contains a (related, but again a little bit different) implementation of the server piece of this puzzle.
So, I think at this point we have to set this PR aside.
brendandburns commented on issue #6091:
Closing as obsolete.