alexcrichton opened PR #11646 from alexcrichton:p3-serve to bytecodealliance:main:
This commit adds support for WASIp3 to `wasmtime serve` by detecting whether the input component is using a WASIp3 exported interface instead of a WASIp2 exported interface (similar to how `wasmtime run` detects which is supported).
alexcrichton assigned alexcrichton to PR #11646.
alexcrichton submitted PR review.
alexcrichton created PR review comment:
I've opened this as a draft PR because of this, and here I have one question and one statement:
- @rvolosatovs the origin of this code looks like this and notably has the "I/O driver" task associated with it to await on. Here, however, it's not present. That's a problem, right? Would you take this to mean that https://github.com/bytecodealliance/wasmtime/issues/11600 is required to implement this correctly? Or should I insert something else here?
- @dicej we'll eventually want to re-benchmark this given all the big changes that have happened around the internals of streams and the internals of wasi-http. Notably this optimization is no longer easily possible.
dicej submitted PR review.
dicej created PR review comment:
Would you take this to mean that https://github.com/bytecodealliance/wasmtime/issues/11600 is required to implement this correctly? Or should I insert something else here?
We don't necessarily need #11600 for this. I was envisioning something like:
dicej deleted PR review comment.
dicej submitted PR review.
dicej created PR review comment:
Would you take this to mean that https://github.com/bytecodealliance/wasmtime/issues/11600 is required to implement this correctly? Or should I insert something else here?
Yes, 11600 is probably the best choice here; I can pick that up tomorrow.
A couple of alternatives that would also work, but are not necessarily as elegant:
- Update `Response::into_http` to add an `impl Future<Output = Result<(), ErrorCode>>` as a third part of the tuple it currently returns, which will resolve when the `BoxBody` has finished producing items (or hits an error). Then spawn a task which takes ownership of the `Store` and calls `Instance::run_concurrent` with that future.
- Spawn a task which calls `Instance::run_concurrent` with `std::future::pending()` and wait for it to return `Err(Trap::Deadlock)`, meaning there's nothing left to do (i.e. all tasks have completed). The problem there is you get the same error whether the tasks all completed or they got stuck before they completed.
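A rough sketch of that second alternative (pseudocode only, not a compilable example; the exact `run_concurrent` signature and the `is_deadlock` helper are assumptions, not real Wasmtime API):

```rust
// Pseudocode sketch. Keep the store inside `run_concurrent` forever by
// awaiting a future that never resolves; the call only returns once no
// guest task can make further progress.
let result = instance
    .run_concurrent(&mut store, async {
        std::future::pending::<()>().await
    })
    .await;

match result {
    // A "deadlock" here is ambiguous: either every guest task ran to
    // completion, or some task got stuck before completing.
    Err(trap) if is_deadlock(&trap) => { /* done... or wedged */ }
    other => { /* genuine trap or host error */ }
}
```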
rvolosatovs submitted PR review.
rvolosatovs created PR review comment:
This is exactly the problem I was referring to earlier today, which I hoped we could fix by `handle` returning the "I/O driver" future. My understanding of this so far was that as long as the returned `http_body::Body` is used within a `run_concurrent` (which I was, perhaps imprecisely, referring to as "part of the host's event loop") everything should "just work". It looks like the existing functionality is sufficient for the `wasi:http` use case, but I think there definitely should be a way for the embedder to "join all" spawned tasks.

Basically, my current understanding is that the HTTP implementation works as long as the guest is done doing all the work when the host is done reading the response body. If, for example, a guest were to spawn a task after streaming the response body (e.g. doing an async HTTP request after the fact), I am not sure I see a way for the host to drive completion of such tasks. Maybe @dicej could provide some guidance here?
dicej submitted PR review.
dicej created PR review comment:
- Notably this optimization is no longer easily possible.
This is where having a `Store::yield_` async function could be useful -- i.e. the host equivalent of the `yield` intrinsic, meaning "let any guest tasks make progress before resolving the returned `Future`". Then we'd do a `store.yield_().await;` before the `tx.send(res);`, which should have the same effect as the previous optimization.
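Usage of the proposed call might look like this (hypothetical sketch: `Store::yield_` does not exist yet, and `tx`/`res` are the names from the discussion above):

```rust
// Hypothetical API: resolve only after pending guest tasks have had a
// chance to make progress (host-side analog of the `yield` intrinsic).
store.yield_().await;
// Only then hand the response back to the server task, matching the
// effect of the previous optimization.
tx.send(res);
```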
dicej submitted PR review.
dicej created PR review comment:
I am not sure I see a way for the host to drive completion of such tasks. Maybe @dicej could provide some guidance here?
Either #11600 or passing `std::future::pending()` to `Instance::run_concurrent` would address that, I believe -- either one would allow the host to wait until the guest task exits completely. Regarding 11600: we could make the returned future mean "this will resolve when the task has exited _and_ any of its subtasks have also exited".
alexcrichton created PR review comment:
It looks like the existing functionality is sufficient for wasi:http use case, but I think there definitely should be a way for the embedder to "join all" spawned tasks.
I'm not actually sure how this test works, because AFAIK it shouldn't, but we don't have sufficient information today because once the guest returns a `Response` the host has no reason to stay inside of `run_concurrent`, but it needs to stay there so long as the original component model subtask hasn't returned. Avoiding returning is modeled by spawning in Rust currently.

@dicej let's brainstorm tomorrow about what shape #11600 might take? My best idea right now is `call_concurrent` returning a pair of futures, but I feel like we can probably think of something better.
alexcrichton submitted PR review.
dicej created PR review comment:
One other nuance to point out here: in the general case of a guest function that returns streams and/or futures, waiting for the task to exit isn't enough -- it could store the write end of each stream or future in memory and hand off responsibility for them to another task (i.e. one that's running concurrently or expected to run later). In other words, the streams and futures could outlive the task that created them. Probably not a concern for `wasmtime serve`, but I thought I'd mention it for completeness.
dicej submitted PR review.
dicej submitted PR review.
dicej created PR review comment:
Expanding on the above: what the caller really wants to wait for is three things joined together:
- the end of the body stream
- the resolution of the trailers future
- the completion of the `handle` task
Until all three are finished, we can't consider the request fully handled.
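As a self-contained illustration of that three-way join (plain std Rust, with channels standing in for the body stream, the trailers future, and the `handle` task; all names here are illustrative, not Wasmtime APIs):

```rust
use std::sync::mpsc;
use std::thread;

// Wait for all three completion signals before declaring the request
// fully handled. Channels stand in for the three futures being joined.
fn join_all_three() -> &'static str {
    let (body_tx, body_rx) = mpsc::channel();
    let (trailers_tx, trailers_rx) = mpsc::channel();
    let (handle_tx, handle_rx) = mpsc::channel();

    // Stand-ins for: the body stream ending, the trailers future
    // resolving, and the `handle` task completing.
    thread::spawn(move || body_tx.send("body done").unwrap());
    thread::spawn(move || trailers_tx.send("trailers done").unwrap());
    thread::spawn(move || handle_tx.send("handle exited").unwrap());

    // The "join": block until every signal has arrived, in any order.
    body_rx.recv().unwrap();
    trailers_rx.recv().unwrap();
    handle_rx.recv().unwrap();
    "request fully handled"
}

fn main() {
    println!("{}", join_all_three());
}
```

Completing any two of the three is not enough: the caller only proceeds once every receiver has yielded a value.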
dicej edited PR review comment.
dicej submitted PR review.
dicej created PR review comment:
Err, five things?
- the request body fully consumed
- the request trailers consumed
- the response body fully produced
- the response trailers delivered
- the completion of the `handle` task :dizzy:
dicej edited PR review comment.
alexcrichton updated PR #11646 (assigned to alexcrichton).
alexcrichton has marked PR #11646 as ready for review.
alexcrichton requested wasmtime-wasi-reviewers for a review on PR #11646.
alexcrichton requested dicej for a review on PR #11646.
alexcrichton requested wasmtime-core-reviewers for a review on PR #11646.
alexcrichton requested wasmtime-default-reviewers for a review on PR #11646.
alexcrichton submitted PR review.
alexcrichton created PR review comment:
Ok, with https://github.com/bytecodealliance/wasmtime/pull/11665 this should now be ready, as this hooks into the task run.
alexcrichton updated PR #11646 (assigned to alexcrichton).
dicej submitted PR review.
dicej created PR review comment:
I'm thinking we should check for an error here and log it before returning it to the caller. That's what we're doing in the p2 case.
alexcrichton updated PR #11646 (assigned to alexcrichton).
alexcrichton has enabled auto merge for PR #11646.
alexcrichton merged PR #11646.
Last updated: Dec 06 2025 at 07:03 UTC